Who owns your personal health and medical data?

Presenting my talk on day one of the two-day international Healthcare and Social Media Summit in Brisbane, Australia, on 1 September 2015. Mayo Clinic partnered with the Australian Private Hospitals Association (APHA), a Mayo Clinic Social Media Health Network member, to bring this first-of-its-kind summit to the Brisbane Convention & Exhibition Centre in Queensland. (Photo by Jason Pratt / Mayo Clinic)

Tomorrow I am speaking on a panel at the Mayo Clinic Healthcare and Social Media Summit on the topic of ‘Who owns your big data?’. I am the only academic on the panel, whose other members are a former president of the Australian Medical Association, the CEO of the Consumers Health Forum, the Executive Director of a private hospital organisation and the Chief Executive of the Medical Technology Association of Australia. The Summit itself is directed at healthcare providers, seeking to demonstrate how they may use social media to publicise their organisations and promote health among their clients.

As a sociologist, I inevitably direct my perspective on the use of social media in healthcare at troubling the taken-for-granted assumptions that underpin the jargon of ‘disruption’, ‘catalysing’, ‘leveraging’ and ‘acceleration’ that tends to recur in digital health discourses and practices. When I discuss the big data phenomenon, I invoke the ‘13 Ps of big data’, which recognise the social and cultural assumptions underpinning big data and the uses to which they are put.

When I speak at the Summit, I will note that the first issue to consider is for whom and by whom personal health and medical data are collected. Who decides whether personal digital data should be generated and collected? Who has control over these decisions? What are the power relations and differentials involved? This often very intimate information is generated in many different ways: via routine online transactions (e.g. Googling medical symptoms or purchasing products on websites), more deliberately as part of people’s contributions to social media platforms (such as PatientsLikeMe or Facebook patient support pages), or as part of self-tracking, patient self-care endeavours or workplace wellness programs. The extent to which the generation of such information is voluntary, pushed, coerced or exploited, or indeed covert, conducted without the individual’s knowledge or consent, varies in each case. Many self-trackers collect biometric data on themselves for their own private purposes. In contrast, patients who are sent home with self-care regimes may take them up reluctantly. In some situations people are offered very little choice: school students who are told to wear self-tracking devices during physical education lessons, for example, or employees who work in a culture in which monitoring their health and fitness is expected of them, or who may be confronted with financial penalties if they refuse.

Then we need to think about what happens to personal digital data once they are generated. Details of one’s health jotted down in a paper journal, or shared with a doctor and kept in a folder in a filing cabinet in the doctor’s surgery, could be kept private and secure. In this era of using digital tools to generate and archive such information, that privacy and security can no longer be guaranteed. Once any kind of personal data are collected and transmitted to the computing cloud, the person who generated them loses control over them. These details become big data, part of the digital data economy and available to any number of second or third parties for repurposing: data mining companies, marketers, health insurance, healthcare and medical device companies, hackers, researchers, the internet empires themselves and even national security agencies, as Edward Snowden’s revelations demonstrated.

Even the large institutions that patients trust for reliable and credible health and medical information online (such as the Mayo Clinic itself, which ranks among the most popular health websites, with an estimated 30 million unique monthly visitors) may inadvertently supply the personal details of those who use their websites to third parties. One recent study found that nine out of ten visits to health or medical websites result in data being leaked to third parties, including companies such as Google and Facebook, online advertisers and data brokers, because the websites use third-party analytics tools that automatically send information to their developers about which pages people are visiting. This information can then be used to construct risk profiles on users that may shut them out of insurance, credit or job opportunities. Data security breaches are common in healthcare organisations, and cyber criminals are very interested in stealing personal medical details from such organisations’ archives. This information is valuable because it can be sold for profit or used to create fake IDs for purchasing medical equipment or drugs or for making fraudulent health insurance claims.
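
To make the leakage mechanism concrete, here is a minimal sketch in Python of the kind of beacon request an embedded analytics or advertising script fires when a page loads. The endpoint, parameter names and identifiers are all hypothetical, but the pattern it shows, the visited URL and a persistent visitor ID travelling to a third-party host, is the one the study describes.

```python
# Minimal sketch of how an embedded third-party analytics script leaks a
# page visit. All specifics are hypothetical: 'tracker.example.com' stands
# in for any analytics, advertising or data-broker domain.
from urllib.parse import urlencode

def build_beacon_url(page_url: str, visitor_id: str, referrer: str) -> str:
    """Construct the request a tracking script typically fires on page load."""
    params = {
        "page": page_url,    # the health page being read travels to the tracker
        "uid": visitor_id,   # cookie-based ID, often stable across many sites
        "ref": referrer,     # the referrer can also leak the search query
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

# A visit to a page about a specific condition reveals that condition,
# tied to a persistent identifier, in the third party's logs.
print(build_beacon_url(
    "https://hospital.example/conditions/hiv/symptoms",
    "visitor-8c2f",
    "https://www.google.com/search?q=hiv+symptoms",
))
```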

In short, the answer to the question ‘Who owns your personal health and medical data?’ is: generally, no longer the individuals themselves.

My research, and that of others investigating people’s responses to big data and to the scandals that have erupted around data security and privacy, is finding that concepts of privacy and notions of data ownership are beginning to change in response. People are becoming aware of how their personal data may be accessed, legally or illegally, by a plethora of actors and agencies and exploited for commercial profit. Major digital entrepreneurs, such as Apple CEO Tim Cook, are in turn responding to the public’s concerns about the privacy and security of their personal information. Healthcare organisations and medical providers need to recognise these concerns and manage their data collection initiatives ethically, openly and responsibly.

Personal digital data as a companion species

Update: I have now published a journal article that brings this post together with the following post on ‘eating’ digital data – the article can be found here.

While an intense interest in digital data is now evident in popular and research cultures, we still know little about how humans interact with, make sense of and use the digital data that they generate. Everyday data practices remain under-researched and under-theorised. In attempting to identify and think through some of the ways in which critical digital data scholars may contribute to understandings of data practices, I am developing an argument that rests largely on the work of two scholars in the field of science and technology studies: Donna Haraway and Annemarie Mol. In this post I begin with Haraway; my next post will discuss Mol.

Haraway’s work has often attempted ‘to find descriptive language that names emergent ontologies’, and I use her ideas here in the spirit of developing new terms and concepts to describe humans’ encounters with digital data. Haraway emphasises that humans cannot be separated from nonhumans conceptually, as we are constantly interacting with other animals and material objects as we go about our daily lives. Her writings on the cyborg have been influential in theorising human encounters with computer technologies (Haraway, 1991). In this work, Haraway drew attention to the idea that human ontology must be understood as multiple and dynamic rather than fixed and essential, blurring the boundaries between nature and culture, human and nonhuman, Self and Other. She contends that actors, whether human or nonhuman, are never pre-established; rather, they emerge through relational encounters (Bhavnani and Haraway, 1994). The cyborg metaphor encapsulates this idea, not solely in relation to human-technology assemblages but in any interaction of humans with nonhumans.

This perspective already provides a basis for thinking through the emergent ontologies that are the digital data assemblages configured by humans’ interactions with the software and hardware that generate digital data about them. Haraway’s musings on human and nonhuman animal interactions (Haraway, 2003, 2008, 2015) also have resonance for how we might understand digital data-human assemblages. Haraway uses the term ‘companion species’ to describe the relationships that the human species has not only with other animal species but also with technologies. Humans are companion species with the nonhumans alongside which they live and with which they engage, each species learning from and influencing the other, co-evolving. Haraway refers to companion species as ‘post-cyborg’ entities, acknowledging the development of her thinking since her original cyborg exegesis.

This trope of companion species may be taken up to think about the ways in which humans generate, materialise and engage with digital data. Thrift has described the new ‘hybrid beings’ that are composed of digital data and human flesh. Adopting Haraway’s companion species trope allows for the extension of this idea by acknowledging the liveliness of digital data and the relational nature of our interactions with these data. Haraway has commented in a lecture that she has learnt

through my own inhabiting of the figure of the cyborg about the non-anthropomorphic agency and the liveliness of artifacts. The kind of sociality that joins humans and machines is a sociality that constitutes both, so if there is some kind of liveliness going on here it is both human and non-human. Who humans are ontologically is constituted out of that relationality.

This observation goes to the heart of how we might begin to theorise the liveliness of digital data in the context of our own aliveness/liveliness, highlighting the relationality and sociality that connect them.

Like companion species and their humans, digital data are lively combinations of nature/culture. Digital data are lively in several ways. They are about life itself (details of humans and other living species); they are constantly generated and regenerated, as well as purposed and repurposed, as they enter the digital knowledge economy; they have potential impacts on humans’ and other species’ lives via the assumptions and inferences that they are used to develop; and they have consequences for livelihoods in terms of their commercial and other value and effects.

Rather than thinking of the contemporary digitised human body/self as posthuman (cf. Haraway’s comments on posthumanism in her interview with Gane, 2006), the companion species perspective develops the idea of ‘co-human’ entities. Just as digital data assemblages are composed of specific information points about people’s lives, and thus learn from people as algorithmic processes manipulate this personal information, people in turn learn from the digital data assemblages of which they are a part. The book choices that Amazon offers them, the ads that are delivered to them on Facebook or Twitter, the results returned from their search engine queries or browsing histories, and the information that a fitness tracker provides about their heart rate or calories burnt each day are all customised to their digitised behaviours. Perusing these data can provide people with insights about themselves and may structure their future behaviour.

These aspects of digital data assemblages are perhaps becoming even more pronounced as the Internet of Things develops and humans become just one node in a network of smart objects that configure and exchange digital data with each other. Humans move around in data-saturated environments and can wear personalised data-generating devices on their bodies, including not only their smartphones but objects such as sensor-embedded wristbands, clothing or watches. The devices that we carry with us are literally our companions: smartphones, for instance, are regularly touched, fiddled with and looked at throughout the day. But unlike previous technological prostheses, these mobile and wearable devices are also invested with, and send out, continuous flows of personal information. They have become repositories of communication with others, geolocation information, personal images, biometric information and more. They also leak these data outwards as they are transmitted to computing cloud servers. All this happens continuously and in real time, raising important questions about the security and privacy of the very intimate information that these devices generate, transmit and archive (Tene and Polonetsky, 2013).

The companion species trope recognises the inevitability of our relationship with our digital data assemblages and the importance of learning to live together and to learn from each other. It suggests both the vitality of these assemblages and also the possibility of developing a productive relationship, recognising our mutual dependency. We may begin to think about our digital data assemblages as members of a companion species that have lives of their own that are beyond our complete control. These proliferating digital data companion species, as they are ceaselessly configured and reconfigured, emerge beyond our bodies/selves and into the wild of digital data economies and circulations. They are purposed and repurposed by second and third parties and even more actors beyond our reckoning as they are assembled and reassembled. Yet even as our digital data companion species engage in their own lives, they are still part of us and we remain part of them. We may interact with them or not; we may be allowed access to them or not; we may be totally unaware of them or we may engage in purposeful collection and use of them. They have implications for our lives in a rapidly growing array of contexts, from the international travel we are allowed to undertake to the insurance premiums, job offers or credit we are offered.

If we adopt Haraway’s companion species trope, we might ask the following: What are our affective responses to our digital data companion species? Do we love or hate them, or simply feel indifferent to them? What are the contexts for these responses? How do we live with our digital data companion species? How do they live with us? How do our lives intersect with them? What do they learn from us, and what do we learn from them? What is the nature of their own lives as they move around the digital data economy? How are we influenced by them? How much can we domesticate or discipline them? How do they domesticate or discipline us? How does each species co-evolve?

References

Bhavnani, K.-K. & Haraway, D. (1994) Shifting the subject: a conversation between Kum-Kum Bhavnani and Donna Haraway, 12 April 1993, Santa Cruz, California. Feminism & Psychology, 4, 19-39.

Gane, N. (2006) When we have never been human, what is to be done?: Interview with Donna Haraway. Theory, Culture & Society, 23, 135-58.

Haraway, D. (1991) Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association.

Haraway, D. (2003) The Companion Species Manifesto: Dogs, People, and Significant Otherness. Chicago: Prickly Paradigm.

Haraway, D. (2008) When Species Meet. Minneapolis: The University of Minnesota Press.

Tene, O. & Polonetsky, J. (2013) Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology & Intellectual Property, 11, 239-73.

The thirteen Ps of big data

Big data are often described as being characterised by the ‘3 Vs’: volume (the large scale of the data); variety (the different forms of data sets that can now be gathered by digital devices and software); and velocity (the constant generation of these data). An online search of the ‘Vs’ of big data soon reveals that some commentators have augmented these Vs with the following: value (the opportunities offered by big data to generate insights); veracity/validity (the accuracy/truthfulness of big data); virality (the speed at which big data can circulate online); and viscosity (the resistances and frictions in the flow of big data) (see Uprichard, 2013 for a list of even more ‘Vs’).

These characterisations principally come from the worlds of data science and data analytics. From the perspective of critical data researchers, there are different ways in which big data can be described and conceptualised (see the further reading list below for some key works in this literature). Anthropologists Tom Boellstorff and Bill Maurer (2015a) refer to the ‘3 Rs’: relation, recognition and rot. As they explain, big data are always formed and given meaning via relationships with human and nonhuman actors that extend beyond the data themselves; how data are recognised qua data is a sociocultural and political process; and data are susceptible to ‘rot’: deterioration or transformation, sometimes in unintended ways, as they are purposed and repurposed.

Based on my research and reading of the critical data studies literature, I have generated my own list, organised around what I am choosing to call the ‘Thirteen Ps’ of big data. As with any such schema, this ‘Thirteen Ps’ list is reductive, acting as a discursive framework for organising and presenting ideas. But it is one way to draw attention to the sociocultural dimensions of big data that the ‘Vs’ lists have thus far failed to acknowledge, and to challenge the taken-for-granted attributes of the big data phenomenon.

  1. Portentous: The popular discourse on big data tends to represent the phenomenon as having momentous significance for commercial, managerial, governmental and research purposes.
  2. Perverse: Representations of big data are also ambivalent, demonstrating not only breathless excitement about the opportunities they offer but also fear and anxiety about not being able to exert control over their sheer volume and unceasing generation and the ways in which they are deployed (as evidenced in metaphors of big data that refer to ‘deluges’ and ‘tsunamis’ that threaten to overwhelm us).
  3. Personal: Big data incorporate, aggregate and reveal detailed information about people’s personal behaviours, preferences, relationships, bodily functions and emotions.
  4. Productive: The big data phenomenon is generative in many ways, configuring new or different ways of conceptualising, representing and managing selfhood, the body, social groups, environments, government, the economy and so on.
  5. Partial: Big data can only ever tell a certain narrative, and as such they offer a limited perspective. There are many other ways of telling stories using different forms of knowledge. Big data are also partial because they are relational: only some phenomena are singled out and labelled as ‘data’, while others are ignored. Furthermore, more big data are collected on some groups than on others: people who do not use or have access to the internet, for example, will be underrepresented in big digital data sets.
  6. Practices: The generation and use of big data sets involve a range of data practices on the part of individuals and organisations, including collecting information about oneself using self-tracking devices, contributing content to social media sites, harvesting online transactions (as the internet empires and the data mining industry do) and developing tools and software to produce, analyse, represent and store big data sets.
  7. Predictive: Predictive analytics using big data are used to make inferences about people’s behaviour. These inferences are becoming influential in optimising or limiting people’s opportunities and life chances, including their access to healthcare, insurance, employment and credit.
  8. Political: The big data phenomenon involves power relations, including struggles over ownership of or access to data sets, the meanings and interpretations that should be attributed to big data, the ways in which digital surveillance is conducted and the exacerbation of socioeconomic disadvantage.
  9. Provocative: The big data phenomenon is controversial. It has provoked much recent debate in response to various scandals and controversies related to the digital surveillance of citizens by national security agencies, the use and misuse of personal data, the commercialisation of data and whether or not big data pose a challenge to the expertise of the academic social sciences.
  10. Privacy: There are growing concerns about the privacy and security of big data sets as people become aware of how their personal data are used for surveillance and marketing purposes, often without their consent or knowledge, and of the vulnerability of digital data to hackers.
  11. Polyvalent: The social, cultural, geographical and temporal contexts in which big data are generated, purposed and repurposed by a multitude of actors and agencies, and the proliferating data profiles of individuals and social groups that big data sets generate, give these data many meanings for the different entities involved.
  12. Polymorphous: Big data can take many forms as data sets are generated, combined, manipulated and materialised in different ways, from 2D graphics to 3D-printed objects.
  13. Playful: Generating and materialising big data sets can have a ludic quality: for self-trackers who enjoy collecting and sharing information on themselves or competing with other self-trackers, for example, or for data visualisation experts or data artists who enjoy manipulating big data to produce beautiful graphics.

Critical Data Studies – Further Reading List

Andrejevic, M. (2014) The big data divide, International Journal of Communication, 8, 1673-89.

Boellstorff, T. (2013) Making big data, in theory, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4869/3750>, accessed 8 October 2013.

Boellstorff, T. & Maurer, B. (2015a) Introduction, in T. Boellstorff & B. Maurer (eds.), Data, Now Bigger and Better! Chicago, IL: Prickly Paradigm Press, 1-6.

Boellstorff, T. & Maurer, B. (eds.) (2015b) Data, Now Bigger and Better! Chicago, IL: Prickly Paradigm Press.

boyd, d. & Crawford, K. (2012) Critical questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon, Information, Communication & Society, 15 (5), 662-79.

Burrows, R. & Savage, M. (2014) After the crisis? Big Data and the methodological challenges of empirical sociology, Big Data & Society, 1 (1).

Cheney-Lippold, J. (2011) A new algorithmic identity: soft biopolitics and the modulation of control, Theory, Culture & Society, 28 (6), 164-81.

Crawford, K. & Schultz, J. (2014) Big data and due process: toward a framework to redress predictive privacy harms, Boston College Law Review, 55 (1), 93-128.

Gitelman, L. & Jackson, V. (2013) Introduction, in L. Gitelman (ed.), Raw Data is an Oxymoron. Cambridge, MA: MIT Press, 1-14.

Helles, R. & Jensen, K.B. (2013) Making data – big data and beyond: introduction to the special issue, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4860/3748>, accessed 8 October 2013.

Kitchin, R. (2014) The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Kitchin, R. & Lauriault, T. (2014) Towards critical data studies: charting and unpacking data assemblages and their work, Social Science Research Network. <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474112>, accessed 27 August 2014.

Lupton, D. (2015) A critical sociology of big data (Chapter 5), in Digital Sociology. London: Routledge.

Lyon, D. (2014) Surveillance, Snowden, and Big Data: capacities, consequences, critique, Big Data & Society, 1 (2). <http://bds.sagepub.com/content/1/2/2053951714541861>, accessed 13 December 2014.

Madden, M. (2014) Public Perceptions of Privacy and Security in the Post-Snowden Era. Pew Research Internet Project, Pew Research Center.

McCosker, A. & Wilken, R. (2014) Rethinking ‘big data’ as visual knowledge: the sublime and the diagrammatic in data visualisation, Visual Studies, 29 (2), 155-64.

Robinson, D., Yu, H. & Rieke, A. (2014) Civil Rights, Big Data, and Our Algorithmic Future. Robinson + Yu.

Ruppert, E. (2013) Rethinking empirical social sciences, Dialogues in Human Geography, 3 (3), 268-73.

Tene, O. & Polonetsky, J. (2013) A theory of creepy: technology, privacy and shifting social norms, Yale Journal of Law & Technology, 16, 59-134.

Thrift, N. (2014) The ‘sentient’ city and what it may portend, Big Data & Society, 1 (1). <http://bds.sagepub.com/content/1/1/2053951714532241.full.pdf+html>, accessed 1 April 2014.

Tinati, R., Halford, S., Carr, L. & Pope, C. (2014) Big data: methodological challenges and approaches for sociological analysis, Sociology, 48 (4), 663-81.

Uprichard, E. (2013) Big data, little questions?, Discover Society, 1. <http://www.discoversociety.org/focus-big-data-little-questions/>, accessed 28 October 2013.

van Dijck, J. (2014) Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology, Surveillance & Society, 12 (2), 197-208.

Vis, F. (2013) A critical reflection on Big Data: considering APIs, researchers and tools as data makers, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4878/3755>, accessed 27 October 2013.

The politics of privacy in the digital age

The latest excerpt from my forthcoming book Digital Sociology (due to be released by Routledge on 12 November 2014). This one is from Chapter 7: Digital Politics and Citizen Digital Public Engagement.

The distinction between public and private has been challenged and transformed via digital media practices. Indeed, it has been contended that the concept of privacy itself has changed via the use of online confessional practices, as well as the accumulating masses of data that are generated about digital technology users’ everyday habits, activities and preferences. Increasingly, as data from many other users are aggregated and interpreted using algorithms, one’s own data have an impact on others by predicting their tastes and preferences (boyd, 2012). The concept of ‘networked privacy’ developed by danah boyd (2012) acknowledges this complexity. As she points out, it is difficult to make a case for privacy as an individual issue in the age of social media networks and sousveillance. Many people who upload images or comments to social media sites include other people in the material, either deliberately or inadvertently. As boyd (2012: 348) observes, ‘I can’t even count the number of photos that were taken by strangers with me in the background at the Taj Mahal’.

Many users have come to realise that the information about themselves, their friends and their family members that they choose to share on social media platforms may be accessible to others, depending on the privacy policy of each platform and the ways in which users have configured their privacy settings. Information shared on Facebook, for example, can be restricted to Facebook friends via privacy settings far more readily than data uploaded to platforms such as Twitter, YouTube or Instagram, which have few, if any, settings that can limit access to personal content. Even within Facebook, however, users must accept that their data may be accessed by those they have chosen as friends. They may be included in photos uploaded by their friends, for example, even if they do not wish others to view the photos.

Data-harvesting tools are now available that allow people to search their friends’ data. Using a tool such as Facebook Graph Search, people who have joined that platform can mine the data uploaded by their friends and search for patterns, identifying such things as ‘photos of my friends in New York’ or ‘restaurants my friends like’. In certain professions, such as academia, others can use search engines to find out many details about one’s employment and accomplishments (just one example is Google Scholar, which lists academics’ publications as well as how often and where they have been cited by others). Personal data such as online photographs or videos of people, their social media profiles and their online comments can easily be accessed by others using search engines.

Furthermore, not only are individuals’ personal data shared in social networks, they may now be used to make predictions about others’ actions, interests, preferences or even health states (Andrejevic, 2013; boyd, 2012). When people’s small data are aggregated with others’ to produce big data, the resultant data sets are used for predictive analytics (Chapter 5). As part of algorithmic veillance and the production of algorithmic identities, people come to be represented as configurations of the others in the social media networks with which they engage and of the websites visited by people characterised as ‘like them’. There is little, if any, opportunity to opt out of participation in these data assemblages that are configured about oneself.
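
As a rough illustration of this kind of networked inference, the following sketch, which is not any vendor’s actual system, predicts an attribute one user never disclosed from the disclosed attributes of the users whose recorded behaviour most resembles hers. All the names, behavioural ‘signals’ and labels are invented.

```python
# Illustrative sketch of 'networked' inference (not any vendor's actual
# system): a user's undisclosed attribute is predicted from users whose
# recorded behaviour most resembles theirs. All data below are invented.
from collections import Counter

profiles = {
    # name: (recorded behavioural 'signals', disclosed attribute or None)
    "ann": ({"gym_app", "running_forum", "salad_recipes"}, "low_risk"),
    "ben": ({"gym_app", "running_forum"}, "low_risk"),
    "cal": ({"fast_food_deals", "late_night_tv"}, "high_risk"),
    "dee": ({"gym_app", "salad_recipes"}, None),  # never disclosed anything
}

def predict(target: str, k: int = 2) -> str:
    """Majority label among the k users most similar to the target."""
    signals, _ = profiles[target]
    neighbours = sorted(
        (name for name, (_, label) in profiles.items()
         if name != target and label is not None),
        key=lambda name: len(signals & profiles[name][0]),
        reverse=True,
    )[:k]
    votes = Counter(profiles[name][1] for name in neighbours)
    return votes.most_common(1)[0][0]

# dee is classified via ann's and ben's disclosures, without her consent.
print(predict("dee"))  # -> low_risk
```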

A significant tension exists in discourses about online privacy. Research suggests that people hold ambivalent and sometimes paradoxical ideas about privacy in digital society. Many people value the use of dataveillance for security purposes and for improving economic and social wellbeing. It is common for digital media users to state that they are not concerned about being monitored by others online because they have nothing to hide (Best, 2010). On the other hand, there is evidence of unease about the continuous, ubiquitous and pervasive nature of digital surveillance. It has become recognised that there are limits to the extent to which privacy can be protected, at least in terms of individuals being able to exert control over access to digital data about themselves or to enjoy the right to be forgotten (Rosen, 2012; Rosenzweig, 2012). Some commentators have contended that notions of privacy need to be rethought in the digital era. Rosenzweig (2012) has described previous concepts as ‘antique privacy’, requiring challenge and reassessment in the contemporary world of ubiquitous dataveillance. He asserts that in weighing up rights and freedoms, the means, ends and consequences of any dataveillance program should be individually assessed.

Recent surveys of Americans by the Pew Research Center (Rainie and Madden, 2013) have found that the majority still value the notion of personal privacy but also value the protection against criminals or terrorists that breaches of their own privacy may offer. Digital technology users for the most part are aware of the trade-off between protecting their personal data from others’ scrutiny or commercial use, and gaining benefits from using digital media platforms that collect these data as a condition of use. This research demonstrates that the context in which personal data are collected is important to people’s assessments of whether their privacy should be intruded upon. The Americans surveyed were more concerned about others knowing the content of their emails than their internet searches, and were more likely to experience or witness breaches of privacy in their own social media networks than to be aware of government surveillance of their personal data.

Another study, using qualitative interviews with Britons (The Wellcome Trust, 2013), investigated public attitudes to personal data and the linking of these data. The research found that many interviewees took a positive view of the use of big data for national security, the prevention and detection of crime, improving government services, the allocation of resources and planning, identifying social and population trends, convenience and time-saving in shopping and other online transactions, identifying dishonest practices and making vital medical information available in an emergency. However, the interviewees also expressed a number of concerns about the use of their data, including the potential for data to be lost, stolen, hacked, leaked or shared without consent, the invasion of privacy when data are used for surveillance, unsolicited marketing and advertising, the difficulty of correcting inaccurate data about oneself and the use of data to discriminate against people. Interviewees of low socioeconomic status were more likely to feel powerless about dealing with potential personal data breaches, identity theft or the use of their data to discriminate against them.

References

Andrejevic, M. (2013) Infoglut: How Too Much Information is Changing the Way We Think and Know. New York: Routledge.

Best, K. (2010) Living in the control society: surveillance, users and digital screen technologies. International Journal of Cultural Studies, 13, 5-24.

boyd, d. (2012) Networked privacy. Surveillance & Society, 10, 348-50.

Rainie, L. & Madden, M. (2013) 5 findings about privacy. http://networked.pewinternet.org/2013/12/23/5-findings-about-privacy, accessed 24 December 2013.

Rosen, J. (2012) The right to be forgotten. Stanford Law Review Online, 64 (88). http://www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten/, accessed 21 November 2013.

Rosenzweig, P. (2012) Whither privacy? Surveillance & Society, 10, 344-47.

The Wellcome Trust (2013) Summary Report of Qualitative Research into Public Attitudes to Personal Data and Linking Personal Data. The Wellcome Trust. <http://www.wellcome.ac.uk/stellent/groups/corporatesite/@msh_grants/documents/web_document/wtp053205.pdf>


The digital tracking of school students in physical education classes: a critique

I have had a new article published in the journal Sport, Education and Society on the topic of how school health and physical education (HPE) is becoming digitised and how technologies of self-tracking are being introduced into classes. As its title suggests – ‘Data assemblages, sentient schools and digitised HPE (response to Gard)’ – the article outlines some thoughts in response to a piece published in the same journal by another Australian sociologist, Michael Gard. Gard contends that a new era of HPE seems to be emerging in the wake of the digitising of society in general and the commercialising of education, one that incorporates the use of digital technologies.

Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance (‘dataveillance’) and analytics are now encroaching on many social institutions and settings, and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. In my article I give some examples of the types of surveillance technologies that are being introduced into school HPE. Apps such as Coach’s Eye and Ubersense are beginning to be advocated in HPE circles, as are other health and fitness apps. Some self-tracking apps have been designed specifically for HPE teachers to use with their students. For example, the Polar GoFit app, with its set of heart rate sensors, is expressly designed for HPE teachers as a tool for monitoring students’ physical activities during lessons. It allows teachers to distribute the heart rate sensors to students, set a target zone for heart rate levels and then monitor these online while the lesson takes place, either for individuals or for the class as a group.
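
To give a sense of what such monitoring involves computationally, here is a schematic sketch of a target-zone check over per-student heart-rate readings, loosely modelled on the workflow described above. The student identifiers, readings and the 120-160 bpm zone are invented rather than taken from the Polar GoFit product.

```python
# Schematic sketch of the kind of real-time target-zone check a classroom
# heart-rate monitoring tool performs. The student IDs, readings and the
# 120-160 bpm zone are invented, not taken from any actual product.
TARGET_ZONE = (120, 160)  # bpm band set by the teacher for the lesson

latest_readings = {  # most recent sensor value per student
    "student_01": 148,
    "student_02": 102,  # below the zone: flagged as not exerting enough
    "student_03": 171,  # above the zone
}

def flag_out_of_zone(readings, zone):
    """Return each out-of-zone student and the direction of the deviation."""
    low, high = zone
    flags = {}
    for student, bpm in readings.items():
        if bpm < low:
            flags[student] = "below zone"
        elif bpm > high:
            flags[student] = "above zone"
    return flags

# Every student's bodily state becomes instantly visible and comparable,
# which is what makes normalising comparison and public shaming possible.
print(flag_out_of_zone(latest_readings, TARGET_ZONE))
# {'student_02': 'below zone', 'student_03': 'above zone'}
```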

I argue that there are significant political and ethical implications of the move towards mobilising digital devices to collect personal data on school students. I have elsewhere identified a typology of five modes of self-tracking that involve different levels of voluntary engagement and ways in which personal data are employed. ‘Private’ self-tracking is undertaken voluntarily and initiated by the participant for personal reasons, ‘communal’ self-tracking involves the voluntary sharing of one’s personal data with others, ‘pushed’ self-tracking involves ‘nudging’ or persuasion, ‘imposed’ self-tracking is forced upon people and ‘exploited’ self-tracking involves the use of personal data for the express purposes of others.

Digitised HPE potentially involves all five of these modes. In the context of the institution of the school and the more specific site of HPE, the longstanding tendencies of HPE to exert paternalistic disciplinary control over the unruly bodies of children and young people, and to exercise authority over what the concepts of ‘health’, ‘the ideal body’ and ‘fitness’ should mean, can only be exacerbated. More enthusiastic students who enjoy sport and fitness activities may willingly and voluntarily adopt or consent to dataveillance of their bodies as part of achieving personal fitness or sporting performance goals. However, when students are forced to wear heart rate monitors to demonstrate that they are conforming to the exertions demanded of them by the HPE teacher, there is little room for resistance. When very specific targets are set for the appropriate number of steps, heart-rate levels, body fat or BMI measurements and the like, and students’ digitised data are compared against them, the capacity of the apparatus of HPE to constitute a normalising, surveilling and disciplinary gaze on children and young people, and the capacity for these data to be used for public shaming, are enhanced.

The abstract of the article is below. If you would like a copy, please email me on deborah.lupton@canberra.edu.au.

Michael Gard (2014) raises some important issues in his opinion piece on digitised health and physical education (HPE) in the school setting. His piece represents the beginning of a more critical approach to the instrumental and solutionist perspectives that are currently offered on digitised HPE. Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance and analytics are now encroaching into many social institutions and settings and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. Identifying what is happening and the implications for concepts of selfhood, the body and social relations, not to mention the more specific issues of privacy and the commercialisation and exploitation of personal data, requires much greater attention than these issues have previously received in the critical social literature. While Gard has begun to do this in his article, there is much more to discuss. In this response, I present some discussion that seeks to provide a complementary commentary on the broader context in which digitised HPE is developing and manifesting. Whether or not one takes a position that is techno-utopian, dystopian or somewhere in between, I would argue that to fully understand the social, cultural and political resonances of digitised HPE, such contextualising is vital.