Imagine your partner could read every text and email you send, hear every phone call, or track your location 24/7. This is the reality for some victims of domestic abuse. In the UK, 1 in 4 women and 1 in 3 men have experienced domestic abuse. Around 48% of domestic abuse victims [1], responding to a survey funded by Comic Relief, reported that perpetrators were using technology to further stalk, harass and intimidate them. From threatening texts and phone calls to hacked online accounts, and monitored devices, victims often feel that they do not have any privacy at all.
What is more, cyber-aggression often escalates once a victim has left the abusive relationship and can go on for several years after. Through volunteering with domestic abuse support organisations, and ongoing research into the topic, I have come to realise that existing support structures are not prepared to advise victims on issues of cyber-security and privacy. In many cases, support workers do not have the time, nor the opportunity, to undergo the training necessary to understand novel cyber-threats and the measures that can be taken to mitigate them.
An initial phase of the research involved analysing data from online discussion forums for victims of domestic abuse [2][3]. A total of 745 individual posts were scraped from three forums and analysed following a thematic approach. The analysis revealed the multitude of ways in which victims are monitored and harassed, through digital consumer technologies, by perpetrators. In this brief article, we focus on two of the main findings, concerning forms of overt and covert surveillance, as described below.
Victims reported overt monitoring of their accounts and devices, along with the feeling that there was no way of hiding information from the perpetrator, nor of maintaining any sort of individual privacy. Perpetrators accomplished overt surveillance either by coercing victims into giving up their passwords, or through unauthorised remote attempts to gain access to their accounts. Victims would often become aware of unauthorised access by finding emails that the perpetrator had read and/or deleted, or through notifications sent by digital service providers asking them to verify access from a new location or device.
In such cases, it can be extremely difficult for victims to take back ownership and control of the compromised devices and/or accounts due to fear of the consequences of doing so, especially if the perpetrator still has physical access to the victim. Oftentimes, the threat of physical abuse is perceived to be more severe than the ongoing surveillance, control, and lack of privacy.
In addition to overt monitoring, covert and far more insidious forms of surveillance were also discussed on the forums. Victims would often suspect that keylogging software or spyware had been installed on their devices, either because perpetrators seemed to know their location at all times, or because their phones had been behaving in an odd manner: presenting a flickering screen, lighting up on their own, running slowly, using large amounts of data, or running out of battery far quicker than usual.
What is equally revealing is that the advice exchanged on the forums shows that victims do not have access to support services that can verify the existence of spyware and take steps to remove it. Instead, advice exchanged on the forums, based on information from online search queries, seemed to be the only source of support available to victims. Often the advice involved a factory reset of the phone but did not include details on how spyware might have been installed in the first place. Given the extensive technical knowledge needed to install malware remotely, we assume that the spyware reported by victims might have been installed when the perpetrator had physical access to their device.
On the other hand, the term “spyware” could have been used to describe legitimate applications such as Find my Friends or parental control apps that are being misused by perpetrators, without the victims’ knowledge, to track and monitor them. Nonetheless, in most cases, both covert and overt forms of surveillance require that perpetrators have access to victims’ login credentials and, oftentimes, that they have access to their devices at least once. One way of addressing this issue could be to improve authentication mechanisms that can distinguish between a primary user and other users, all of whom may have access to the same devices and accounts.
The design of user authentication mechanisms is an ongoing, and relatively novel, challenge. This article argues that applying extreme-user cases in the design of these mechanisms could prove beneficial in long-term, future-oriented thinking. Considering “extreme users” in innovation allows designs to be assessed from new perspectives, which can often provoke unusual and interesting ideas. Victims of ongoing surveillance and cyber-aggression, perpetrated by an (ex-)intimate partner, can be viewed as extreme users [1,2,3] who, in the specificity of their needs, offer broader implications for the design of digital privacy and security mechanisms.
Take, for example, face-recognition authentication, which could potentially detect who is trying to install software on a device and restrict that right to the primary user alone. It could also be used to ensure that the user interacting with a device is the same user who initially logged in. This form of authentication may be useful to victims of domestic abuse, but also in settings where multiple users have access to a single device. Smart home appliances are examples of consumer devices with more than one user — potentially a whole family — that could benefit from multiple accounts and ways of quickly and effectively identifying different users.
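As a rough illustration of the idea, the core check could compare a face embedding captured at the moment of a sensitive action (such as installing software) against the embedding enrolled by the primary user. The sketch below is hypothetical: real systems would obtain embeddings from a face-recognition model rather than the hard-coded vectors used here, and the similarity threshold is an assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_primary_user(enrolled, captured, threshold=0.9):
    """Permit a sensitive action only when the face captured at that
    moment matches the enrolled primary user closely enough."""
    return cosine_similarity(enrolled, captured) >= threshold

# Hypothetical embeddings for illustration only; a real system would
# compute these from camera input with a face-recognition model.
primary      = [0.12, 0.87, 0.45, 0.33]
same_person  = [0.11, 0.88, 0.44, 0.35]
other_person = [0.91, 0.05, 0.20, 0.70]

print(is_primary_user(primary, same_person))   # True
print(is_primary_user(primary, other_person))  # False
```

In practice the decision would also need liveness detection and a fallback path (e.g., a PIN) so that a legitimate user is never locked out by a false rejection.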
In cases where face recognition is not possible (e.g., due to lighting conditions) [4], then AI capable of learning and recognising usage patterns could be useful in distinguishing primary users from secondary, or unauthorised, users. The design of cyber-privacy and -security mechanisms, which is soon to be made exponentially more complex by the internet-of-things, could benefit from looking at extreme users across a broad spectrum of circumstances. This article argues not only for the inclusion of the requirements of victims of domestic abuse, but also, for the consideration of users beyond standardised contexts and usage scenarios.
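One minimal way to sketch the usage-pattern idea is simple statistical anomaly detection: learn the primary user's typical behaviour from past sessions, then flag sessions that deviate strongly. The feature names and threshold below are assumptions for illustration; a deployed system would use richer behavioural signals and a trained model.

```python
import statistics

def train_profile(sessions):
    """Build a per-feature (mean, stdev) profile of the primary user's
    behaviour from past sessions, each a dict of numeric features."""
    keys = sessions[0].keys()
    return {k: (statistics.mean([s[k] for s in sessions]),
                statistics.stdev([s[k] for s in sessions]))
            for k in keys}

def looks_like_primary_user(profile, session, max_z=3.0):
    """Treat a session as anomalous when any feature lies more than
    max_z standard deviations from the learned profile."""
    for k, (mean, sd) in profile.items():
        if sd > 0 and abs(session[k] - mean) / sd > max_z:
            return False
    return True

# Illustrative features: keystrokes per minute and mobile data per day.
history = [{"typing_speed": 200, "daily_mb": 300},
           {"typing_speed": 210, "daily_mb": 320},
           {"typing_speed": 195, "daily_mb": 290},
           {"typing_speed": 205, "daily_mb": 310}]
profile = train_profile(history)

print(looks_like_primary_user(profile, {"typing_speed": 202, "daily_mb": 305}))  # True
print(looks_like_primary_user(profile, {"typing_speed": 120, "daily_mb": 900}))  # False
```

A real implementation would need to handle gradual drift in the primary user's habits, and to fail safely, since falsely locking out a victim could itself cause harm.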
Notes:
[1] – All participants identified as female
[2] – Only open forums, that do not require user-registration, and do not prohibit research, were used in the analysis.
[3] – The forums have not been identified in order to preserve user-anonymity.
References:
1. Jed R. Brubaker and Janet Vertesi. 2010. Death and the social network. CHI Workshop on Death and the Digital.
2. Lars Erik Holmquist. 2004. User-driven Innovation in the Future Applications Lab. CHI ’04 Extended Abstracts on Human Factors in Computing Systems, ACM, 1091–1092.
3. A. F. Newell, P. Gregor, M. Morgan, G. Pullin, and C. Macaulay. 2011. User-Sensitive Inclusive Design. Universal Access in the Information Society 10, 3: 235–243.
4. Esteban Vazquez-Fernandez and Daniel Gonzalez-Jimenez. 2016. Face recognition for authentication on mobile devices. Image and Vision Computing 55: 31–33.