by Kate Henne
Following the March4Justice protests, New South Wales Police Commissioner Mick Fuller encouraged the use of digital applications or ‘apps’ as part of the response to sexual violence, suggesting they could record positive consent. He received widespread criticism for misunderstanding the problem; counteracting sexual violence requires addressing unequal power relations, not just making sure consent is clearly communicated.
While Fuller’s recommendation is a flawed response, it reflects a growing trend of using mobile apps to counter forms of gender-based violence, including sexual assault and harassment. Most of these technologies are designed to provide victim-survivors with emergency assistance and an easy means to document and report their experience to authorities. Many apps digitally capture and transmit data about individual incidents, such as descriptions of abuse, the location and time of its occurrence, perpetrator characteristics and images that could serve as evidence.
Although proponents have framed these technologies as tools that empower women, our analysis of a wide range of digital apps and artificial intelligence (AI) chatbots shows that the promise of data-enabled solutions is not without peril. There are significant limitations when using these apps, including device failure, challenges accessing recommended emergency services, and the fact that data collected does not always create clear or useful evidence. The data can be used in ways victim-survivors may not anticipate, including against their wishes or even against their claims. These apps can restrict reporting options by channelling data to authorities for the purpose of punishment rather than offering other possibilities.
These technologies do not overcome long-standing criticisms that legal responses to gender-based violence often fail to support victim-survivors, especially those who experience discrimination related to disability, ethnicity, race or sexuality. They can even create additional risks. They collect, track and analyse users’ Internet activity and personal data, which is often stored by a third party or the app’s parent company. The data is then processed and aggregated for key users — such as investigators, lawyers and police — and often for a variety of purposes beyond the reported incident. This data-driven model of service provision creates an unequal exchange that is skewed towards technology companies and their partners. Further, research indicates the data ecosystems that these apps rely on have been exploited, putting victim-survivors’ information at risk.
These technologies are not neutral communications devices. They reflect design choices shaped by societal perceptions, including problematic assumptions about the causes of and solutions to sexual violence. For example, their features often reify racist rape myths, such as ‘stranger danger’, even though most sexual assaults are committed by people whom victim-survivors already know. In addition, they put the onus on victim-survivors to initiate and manage the process of responding to sexual violence, which subjects them — not perpetrators — to scrutiny when their reports are assessed by authorities.
While mobile apps may make it easier for some victim-survivors to document and report instances of sexual assault or harassment, they are not a comprehensive response to sexual violence. More data does not necessarily mean social change or better institutional responses. Without more radical corrective strategies and stronger protections for victim-survivors, these apps can exacerbate harms by promoting technology as a viable fix for complex social issues.
Professor Kate Henne is Director of the ANU School of Regulation and Global Governance. This article is based on research by Henne and ANU PhD scholar Jenna Imad Harb, with Dr Renee Shelby from Northwestern University.