Deep fakes and other AI applications pose a danger to democracy and human rights

Image by Gerd Altmann from Pixabay under Pixabay Licence

A new report from the ANU Cybercrime Observatory, Artificial Intelligence and Crime, addresses the pressing challenges to cyber safety and security that automated crimeware and malware pose to the increasingly common automated processes in many domains of modern life. The research paper also highlights the role artificial intelligence programs may play in preventing cybercrime, particularly in detecting and mitigating interference with, and manipulation of, the data relied upon for automated decision making or guidance.


Investment and interest in developing the machine learning (ML) technologies that underpin AI capabilities across industry, civil and military applications have grown exponentially in the past decade. This investment in AI research and development has boosted innovation and productivity, as well as intensifying competition between states in the race to gain technological advantage. The scale of data acquisition and aggregation essential for the 'training' of effective AI via ML pattern-recognition processes also poses broader risks to privacy and fairness as such technology shifts from a niche to a general-purpose technology. The potential weaponisation of AI applications such as computational marketing, 'deep' fake images or news, and enhanced surveillance, however, is a pressing challenge to democracies and human rights. Poorly implemented ethical, accountability and regulatory measures will also provide opportunities for criminal activity, as well as the potential for accidents, unreliability and other unintended consequences.

Read the entire Artificial Intelligence and Crime report via the link below.
