Most social analyses of the use of personal health data for dataveillance (watching and monitoring people using information gathered about them) have focused on people who voluntarily self-track to promote or manage their health and fitness. With the outbreak of COVID-19 (the novel coronavirus), a new form of health dataveillance has emerged, which I call ‘digitised quarantine’.
Traditional quarantine measures, involving the physical isolation of people deemed to be infected with a contagious illness, or of those who have had close contact with infected people, have been employed for centuries as a disease-control measure. Histories of medicine and public health show that quarantine (from the Italian for ’40 days’ – often the length of the isolation period) was practised as early as the 14th century as a way of protecting people living in European coastal cities from the plague brought by visiting ships.
With the advent of COVID-19, quarantine has been actively used in many of the locations that have experienced large numbers of cases, and millions of people have already been placed in isolation. Quarantine measures have ranged from self-isolation, in which people keep themselves at home for the required 14-day period, to imposed isolation, such as requiring people to stay in dedicated quarantine stations, as well as large-scale travel bans and lockdowns of entire cities. Quarantine began with the lockdown of Wuhan and nearby cities in the Chinese province of Hubei. At the time of writing, cases have been discovered in many other countries, often with identified hot-spots of contagion around particular places and regions, including a South Korean church, a northern Italian region and a cruise ship docked in Japan.
Side by side with these centuries-old measures, digital technologies and digital data analytics have been taken up in some locations as ways of monitoring people, identifying those who are infected and tracking their movements to ensure that they adhere to self-isolation restrictions for the length of the quarantine period. In China, people were prevented from leaving their homes if a digitised rating system on a phone app had identified them as infected with COVID-19 and coded them ‘red’. Chinese government agencies also released a ‘close contact detector’ app that alerted people if they had been in close proximity to someone infected with the virus. In some Chinese cities, local government authorities have brought in monitoring measures using facial recognition data and smartphone data tracking, combined with details about their health and travel history that people are requested to enter into online forms when visiting public places.
It is not only Chinese authorities who are experimenting with digitised forms of identifying infection risk and enforcing isolation. In the Australian city of Adelaide, two people identified as having COVID-19 were placed under voluntary home isolation, their movements monitored by the police using their smartphone metadata. It is notable that the police emphasised that this was the same dataveillance system used for tracking offenders in criminal investigations. As with traditional quarantine measures, the freedoms and autonomy of those deemed to be infected or at risk of infection are in tension with public health goals to control epidemics. This kind of digitised monitoring of people’s movements via their smartphones, or enforced notifications to complete online questionnaires, is redolent of measures used in the criminal justice system, where electronic monitoring technologies such as digital tracking bands have long been used to control offenders’ movements once they are released from a custodial sentence.
These resonances with law enforcement should perhaps not be surprising, given that public health acts in many countries allow for the enforced isolation, and even the fining or incarceration, of people deemed to pose a risk to others because they are infectious or identified as being in a high-risk category for transmitting disease. There is a recent history of countries such as Singapore using technologies such as surveillance cameras and electronic tags to control the spread of SARS in 2003. These practices have been called into question by scholars investigating their implications for human rights.
Since then, the opportunities to conduct close monitoring of people using their smartphones and online interactions have vastly expanded. The use of detailed data sets generated from diverse sources in these novel digitised quarantine measures raises a range of new human rights challenges. Such monitoring may be viewed as a ‘soft’ form of policing infection, in which physical isolation measures are combined with dataveillance. However, underlying the apparent convenience offered by digitised quarantine are significant flaws. One difficulty is the potential for the data sets and algorithmic processing used to calculate COVID-19 infection risk to be inaccurate, unfairly confining people to isolation with no opportunity to challenge the decision made by the app. Chinese citizens subjected to these measures have already reported such inaccuracies. As one man claimed: “I felt I was at the mercy of big data … I couldn’t go anywhere. There’s no one I could turn to for help, except answer bots.”
At a broader level, digitised quarantine measures portend an ever-expanding reach by health authorities and other government agencies into people’s private lives and movements. This function creep requires sustained examination of its implications for human rights. The data-utopian visions promoted by those seeking to impose digitised quarantine may well give way to data hubris when their inaccuracies, biases and injustices are exposed.
Acknowledgement: Thanks to Trent Yarby for alerting me to two of the news stories upon which I drew for this post.