Image: Mike mackenzie via www.vpnsrus.com

Deep fakes and other AI applications a danger to democracy and human rights

29th July 2019

More information

Roderic Broadhurst is Professor of Criminology at RegNet. He is Director of the ANU Cybercrime Observatory which was established in 2012. The Observatory is a focal point for research on human factors and cybercrime.

His current research focuses on crime and development, the recidivism of homicide offenders, cybercrime and organized and transnational crime.

A new report from the ANU Cybercrime Observatory, Artificial Intelligence and Crime, addresses the pressing challenges to cyber safety and security that automated crimeware and malware pose to the automated processes now common in many domains of modern life. The research paper also highlights the role artificial intelligence programs may play in preventing cybercrime, particularly in detecting and mitigating interference with, and manipulation of, the data relied upon for automated decision making or guidance.

Summary

Investment and interest in developing the machine learning (ML) technologies that underpin AI capabilities across industry, civil and military applications have grown exponentially in the past decade. This investment in AI research and development has boosted innovation and productivity, while also intensifying competition between states in the race for technological advantage. The scale of data acquisition and aggregation essential for the ‘training’ of effective AI via ML pattern-recognition processes also poses broader risks to privacy and fairness as such technology shifts from a niche to a general-purpose technology. The potential weaponisation of AI applications such as computational marketing, ‘deep’ fake images or news, and enhanced surveillance, however, poses pressing challenges to democracies and human rights. Poorly implemented ethical, accountability and regulatory measures will also provide opportunities for criminal activity, as well as the potential for accidents, unreliability and other unintended consequences.

Read the entire Artificial Intelligence and Crime report via the link below.

Updated: 10 August 2017 / Responsible Officer: Director, RegNet / Page Contact: Director, RegNet