Towards a sociology of 3D printing

As a digital sociologist, I have become fascinated by the social and cultural implications of 3D printing technologies. Few sociologists or other critical academic commentators have begun to investigate how 3D printing is beginning to affect society. Yet as 3D printing technologies move into an expanding realm of contexts, there is much opportunity to analyse their effects. Not only are these technologies having an impact on industrial manufacturing and the distribution of goods, but makers, artists and designers are also taking them up in intriguing ways. 3D printing is being used in medicine and dentistry, public relations and marketing and in fan cultures. These technologies are being introduced into schools and incorporated into the curriculum. As the price of 3D printers falls, they will appear in more households. There are significant environmental and legal issues in relation to how they are used, including questions about intellectual property.

As part of my initial explorations into the sociology of 3D printing, last week I published two pieces on these technologies. One was an article for The Conversation, in which I discussed the phenomenon of the 3D self replica. This is a figurine that can be made of a person using the digital data derived from 3D scanning software. The technologies to generate these artefacts are rapidly moving into a range of leisure domains, including sporting events, shopping centres, airports, concerts and amusement parks as well as fan cultures and marketing programs. 3D printed self replicas can even be made at home using a software package developed for the Xbox Kinect motion-sensing device and a home 3D printer. Some commentators have referred to these replicas as ‘3D selfies’ because they involve the production of a personal likeness. In the article I speculated about the ways in which people may start to use these figures as markers or mementos of their bodies and social relationships.

The second piece was an academic article that discusses the 3D printing of what I term ‘digital body objects’ for medical and health-related purposes. The article explores the use of non-organic materialisations of people’s body parts for medical purposes as well as the fabrication of self-tracked bodily data into objects. The abstract is below; the full paper can be accessed here:

The advent of 3D printing technologies has generated new ways of representing and conceptualising health and illness, medical practice and the body. There are many social, cultural and political implications of 3D printing, but a critical sociology of 3D printing is only beginning to emerge. In this article I seek to contribute to this nascent literature by addressing some of the ways in which 3D printing technologies are being used to convert digital data collected on human bodies into tangible forms that can be touched and held. I focus in particular on the use of 3D printing to manufacture non-organic replicas of individuals’ bodies, body parts or bodily functions and activities. The article is also a reflection on a specific set of digital data practices and the meaning of such data to individuals. In analysing these new forms of human bodies, I draw on sociomaterialist perspectives as well as the recent work of scholars who have sought to reflect on selfhood, embodiment, place and space in digital society and the nature of people’s interactions with digital data. I argue that these objects incite intriguing ways of thinking about the ways in which digital data on embodiment, health and illness are interpreted and used across a range of contexts. The article ends with some speculations about where these technologies may be headed and outlines future research directions.

These initial forays into a sociology of 3D printing represent merely a small component of possible avenues for theorising and research into the social impact of this technology. What I am particularly interested in at the moment is the implications for people’s data practices, or how the material objects that are generated from 3D printing technologies act as ‘solidified’ personal data. Future writings will investigate these issues in greater depth.

Digital Sociology now out

Digital Sociology has now been published (click here for the Amazon link and here for the publisher’s link).

 

The publisher’s blurb is below:

Digital Sociology

We now live in a digital society. New digital technologies have had a profound influence on everyday life, social relations, government, commerce, the economy and the production and dissemination of knowledge. People’s movements in space, their purchasing habits and their online communication with others are now monitored in detail by digital technologies. We are increasingly becoming digital data subjects, whether we like it or not, and whether we choose this or not.

The sub-discipline of digital sociology provides a means by which the impact, development and use of these technologies and their incorporation into social worlds, social institutions and concepts of selfhood and embodiment may be investigated, analysed and understood. This book introduces a range of interesting social, cultural and political dimensions of digital society and discusses some of the important debates occurring in research and scholarship on these aspects. It covers the new knowledge economy and big data, reconceptualising research in the digital era, the digitisation of higher education, the diversity of digital use, digital politics and citizen digital engagement, the politics of surveillance, privacy issues, the contribution of digital devices to embodiment and concepts of selfhood and many other topics.

Digital Sociology is essential reading not only for students and academics in sociology, anthropology, media and communication, digital cultures, digital humanities, internet studies, science and technology studies, cultural geography and social computing, but for other readers interested in the social impact of digital technologies.

The politics of privacy in the digital age

The latest excerpt from my forthcoming book Digital Sociology (due to be released by Routledge on 12 November 2014). This one is from Chapter 7: Digital Politics and Citizen Digital Public Engagement.

The distinction between public and private has become challenged and transformed via digital media practices. Indeed it has been contended that via the use of online confessional practices, as well as the accumulating masses of data that are generated about digital technology users’ everyday habits, activities and preferences, the concept of privacy has changed. Increasingly, as data from many other users are aggregated and interpreted using algorithms, one’s own data has an impact on others by predicting their tastes and preferences (boyd, 2012). The concept of ‘networked privacy’ developed by danah boyd (2012) acknowledges this complexity. As she points out, it is difficult to make a case for privacy as an individual issue in the age of social media networks and sousveillance. Many people who upload images or comments to social media sites include other people in the material, either deliberately or inadvertently. As boyd (2012: 348) observes, ‘I can’t even count the number of photos that were taken by strangers with me in the background at the Taj Mahal’.

Many users have come to realise that the information about themselves and their friends and family members that they choose to share on social media platforms may be accessible to others, depending on the privacy policy of the platform and the ways in which users have operated privacy settings. Information that is shared on Facebook, for example, is far easier to limit to Facebook friends, if privacy settings restrict access, than data that users upload to platforms such as Twitter, YouTube or Instagram, which have few, if any, settings that can be used to limit access to personal content. Even within Facebook, however, users must accept that their data may be accessed by those whom they have chosen as friends. They may be included in photos that are uploaded by their friends even if they do not wish others to view the photo, for example.

Open source data harvesting tools are now available that allow people to search their friends’ data. Using a tool such as Facebook Graph Search, people who have joined that social media platform can mine the data uploaded by their friends and search for patterns. Such elements as ‘photos of my friends in New York’ or ‘restaurants my friends like’ can be identified using this tool. In certain professions, such as academia, others can use search engines to find out many details about one’s employment and accomplishments (just one example is Google Scholar, which lists academics’ publications as well as how often and where they have been cited by others). Such personal data as online photographs or videos of people, their social media profiles and online comments can easily be accessed by others by using search engines.

Furthermore, not only are individuals’ personal data shared in social networks, they may now be used to make predictions about others’ actions, interests, preferences or even health states (Andrejevic, 2013; boyd, 2012). When people’s small data are aggregated with others to produce big data, the resultant datasets are used for predictive analytics (Chapter 5). As part of algorithmic veillance and the production of algorithmic identities, people become represented as configurations of the others in their social media networks and of the websites visited by people characterised as ‘like them’. There is little, if any, opportunity to opt out of participation in these data assemblages that are configured about oneself.

A significant tension exists in discourses about online privacy. Research suggests that people hold ambivalent and sometimes paradoxical ideas about privacy in digital society. Many people value the use of dataveillance for security purposes and for improving economic and social wellbeing. It is common for digital media users to state that they are not concerned about being monitored by others online because they have nothing to hide (Best, 2010). On the other hand, however, there is evidence of unease about the continuous, ubiquitous and pervasive nature of digital surveillance. It has become recognised that there are limits to the extent to which privacy can be protected, at least in terms of individuals being able to exert control over access to digital data about themselves or enjoy the right to be forgotten (Rosen, 2012; Rosenzweig, 2012). Some commentators have contended that notions of privacy, indeed, need to be rethought in the digital era. Rosenzweig (2012) has described previous concepts as ‘antique privacy’, which require challenging and reassessment in the contemporary world of ubiquitous dataveillance. He asserts that in weighing up rights and freedoms, the means, ends and consequences of any dataveillance program should be individually assessed.

Recent surveys of Americans by the Pew Research Center (Rainie and Madden, 2013) have found that the majority still value the notion of personal privacy but also value the protection against criminals or terrorists that breaches of their own privacy may offer. Digital technology users for the most part are aware of the trade-off between protecting their personal data from others’ scrutiny or commercial use, and gaining benefits from using digital media platforms that collect these data as a condition of use. This research demonstrates that the context in which personal data are collected is important to people’s assessments of whether their privacy should be intruded upon. The Americans surveyed were more concerned about others knowing the content of their emails than their internet searches, and were more likely to experience or witness breaches of privacy in their own social media networks than to be aware of government surveillance of their personal data.

Another study using qualitative interviews with Britons (The Wellcome Trust, 2013) investigated public attitudes to personal data and the linking of these data. The research found that many interviewees demonstrated a positive perspective on the use of big data for national security, the prevention and detection of crime, improving government services, the allocation of resources and planning, identifying social and population trends, convenience and time-saving when doing shopping and other online transactions, identifying dishonest practices and making vital medical information available in an emergency. However, the interviewees also expressed a number of concerns about the use of their data, including the potential for the data to be lost, stolen, hacked or leaked and shared without consent, the invasion of privacy when used for surveillance, unsolicited marketing and advertising, the difficulty of correcting inaccurate data on oneself and the use of the data to discriminate against people. Those interviewees of low socioeconomic status were more likely to feel powerless about dealing with potential personal data breaches, identity theft or the use of their data to discriminate against them.

References

Andrejevic, M. (2013) Infoglut: How Too Much Information is Changing the Way We Think and Know. New York: Routledge.

Best, K. (2010) Living in the control society: surveillance, users and digital screen technologies. International Journal of Cultural Studies, 13, 5-24.

boyd, d. (2012) Networked privacy. Surveillance & Society, 10, 348-50.

Rainie, L. & Madden, M. (2013) 5 findings about privacy. http://networked.pewinternet.org/2013/12/23/5-findings-about-privacy, accessed 24 December 2013.

Rosen, J. (2012) The right to be forgotten. Stanford Law Review Online, 64 (88). http://www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten/, accessed 21 November 2013.

Rosenzweig, P. (2012) Whither privacy? Surveillance & Society, 10, 344-47.

The Wellcome Trust (2013) Summary Report of Qualitative Research into Public Attitudes to Personal Data and Linking Personal Data [online text], The Wellcome Trust http://www.wellcome.ac.uk/stellent/groups/corporatesite/@msh_grants/documents/web_document/wtp053205.pdf

 

Seams in the cyborg

Another excerpt from my forthcoming book Digital Sociology (due to be released on 12 November 2014). From chapter 8: ‘The Digitised Body/Self’.

Such is the extent of our intimate relations with digital technologies that we often respond emotionally to the devices themselves and to the content contained within or created by these devices. The design of digital devices and software interfaces is highly important to users’ responses to them. Devices such as iPhones are often described in highly affective and aestheticised terms: as beautiful playthings, glossy and shiny objects of desire, even as edible or delicious. Advertising for the iPhone and other Apple devices often focuses on inspiring child-like wonder at their beauty and magical capabilities (Cannon and Barker 2012).

Affective responses to material objects are integral to their biographical meaning to their owners and their participation in intimate relationships. Writers on material culture and affect have noted the entangling of bodies/selves with physical objects and how artefacts act as extensions or prostheses of the body/self, becoming markers of personhood. Objects become invested with sentimental value by virtue of their association with specific people and places, and thus move from anonymous, mass-produced items to biographically-inscribed artefacts that bear with them personal meanings. Through use and over time, such initially anonymised objects become personalised prosthetics of the self, their purely functional status and monetary value replaced by more personal and sentimental value (Miller 2008, Turkle 2007).

… Bell and Dourish (2011) refer to the mythologies and the mess of ubiquitous computing technologies. By myths they mean the cultural stories, values and meanings that are drawn upon to make sense of and represent these technologies. The types of myths surrounding new digital technologies tend to focus on their very novelty, their apparent divergence from what has come before them and their ability to provide solutions to problems. The ‘mess’ of digital technologies inheres in the ‘practical reality’ of their everyday use (Bell & Dourish, 2011, p. 4), which challenges myths suggesting that they are infallible or offer an ideal solution to a problem. When digital technologies operate as we expect them to, they feel as if they are inextricably part of our bodies and selves. Inevitably, however, there are moments when we become aware of our dependence on technologies, or find them annoying or difficult to use, or lose interest in them. Technologies break down, fail to work as expected; infrastructure and government regulations may not support them adequately; users may become bored with using them or their bodies may rebel and develop over-use symptoms. There may be resistances, personal or organised, to their use, and contestations over their meanings and value (Lupton, 1995; Miller & Horst, 2012).

Freund (2004, p. 273) uses the term ‘technological habitus’ to describe the ‘internalised control’ and kinds of consciousness required of individuals to function in technological environments such as those currently offered in contemporary western societies. The human/machine entity, he argues, is not seamless: rather there are disjunctions – or, as he puts it, ‘seams in the cyborg’ – where fleshly body and machine do not intermesh smoothly, and discomfort, stress or disempowerment may result. Sleep patterns, increasing work and commuting time and a decrease in leisure time, for example, can be disrupted by the use of technologies, causing illness, stress and fatigue. Our bodies may begin to alert us that these objects are material in the ways that they affect our embodiment: through eye-strain, hand, neck or back pain or headaches from using the devices too much (Lupton, 1995).

People may feel overwhelmed by the sheer mass of data conveyed by their digital devices and the need to keep up with social network updates. Analyses of social media platforms such as Facebook are beginning to appear that suggest that users may simultaneously recognise their dependence upon social media to maintain their social network but may also resent this dependence and the time that is taken up in engaging with them, even fearing that they may be ‘addicted’ to their use (Davis, 2012). Users may also feel ‘invaded’ by the sheer overload of data that may be generated by membership of social networking sites and the difficulty of switching off mobile devices and taking time out from using them (boyd, 2008).

Technology developers are constantly working on ways to incorporate digital devices into embodiment and everyday life, to render them ever less obtrusive and ever more part of our bodies and selves. As the technical lead and manager of the Google Glass (a computing device worn on the face like spectacles) project contends, ‘bringing technology and computing closer to the body can actually improve communication and attention – allowing technology to get further out of the way’ (Starner, 2013, no page number, emphasis in the original). He asserts that by rendering these devices smaller and more easily worn on the body, they recede further into the background rather than dominating users’ attention (as is so overtly the case with the current popular smartphone and tablet computers). Despite these efforts, Glass wearers have been subjected to constant attention from others that is often negative and based on the presumption that the device is too obvious, unstylish and unattractive, or that the people who wear them are wealthy computer nerds who do not respect the privacy of others. They have reported many incidences of angry responses from others when wearing Glass in public, even to the point of people ripping the device off their faces or asking them to leave a venue (Gross, 2014). The design of digital devices, therefore, may incite emotional responses not only in the users themselves but also in onlookers.

Some people find wearable self-tracking devices not fashionable enough, or not waterproof enough, or too clunky or heavy, or not comfortable enough to wear, or find that they get destroyed in the washing machine when the user forgets to remove them from their clothing. One designer (Darmour, 2013) has argued that if these technologies remain too obvious, ‘bolting’ these devices to our bodies will ‘distract, disrupt, and ultimately disengage us from others, ultimately degrading our human experience’. She asserts that instead these objects need to be designed more carefully so that they may be integrated into the ‘fabric of our lives’. Her suggested ways of doing this include making them look more beautiful, like jewellery (brooches, necklaces, bracelets, rings), incorporating them into fashionable garments, making them peripheral and making them meaningful: using colours or vibrations rather than numbers to display data readings from these devices.

References

Bell, G., & Dourish, P. (2011). Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Cambridge, Mass: MIT Press.

boyd, d. (2008). Facebook’s privacy trainwreck: exposure, invasion, and social convergence. Convergence, 14(1), 13-20.

Cannon, K., & Barker, J. (2012). Hard candy. In P. Snickars & P. Vonderau (Eds.), Moving Data: The iPhone and the Future of Media (pp. 73-88). New York: Columbia University Press.

Darmour, J. (2013). 3 ways to make wearable tech actually wearable. Co.Design. Retrieved from http://www.fastcodesign.com/1672107/3-ways-to-make-wearable-tech-actually-wearable

Davis, J. (2012). Social media and experiential ambivalence. Future Internet, 4(4), 955-970.

Freund, P. (2004). Civilised bodies redux: seams in the cyborg. Social Theory & Health, 2(3), 273-289.

Gross, A. (2014). What’s the problem with Google Glass? Retrieved from http://www.newyorker.com/online/blogs/currency/2014/03/whats-the-problem-with-google-glass.html

Lupton, D. (1995). The embodied computer/user. Body & Society, 1(3-4), 97-112.

Miller, D. (2008). The Comfort of Things. Cambridge: Polity Press.

Miller, D., & Horst, H. (2012). The digital and the human: a prospectus for digital anthropology. In H. Horst & D. Miller (Eds.), Digital Anthropology (pp. 3-35). London: Berg.

Starner, T. (2013). Google glass lead: how wearing tech on our bodies actually helps it get out of our way. Wired. Retrieved from http://www.wired.com/opinion/2013/12/the-paradox-of-wearables-close-to-your-body-but-keeping-tech-far-away/

Turkle, S. (2007). Evocative Objects: Things We Think With. Cambridge, Mass: MIT Press.

The digital tracking of school students in physical education classes: a critique

I have had a new article published in the journal Sport, Education and Society on the topic of how school health and physical education (HPE) is becoming digitised and technologies of self-tracking are being introduced into classes. As its title suggests – ‘Data assemblages, sentient schools and digitised HPE (response to Gard)’ – the article outlines some thoughts in response to a piece published in the same journal by another Australian sociologist, Michael Gard. Gard contends that a new era of HPE seems to be emerging in the wake of the digitising of society in general and the commercialising of education, which is incorporating the use of digital technologies.

Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance (‘dataveillance’) and analytics are now encroaching into many social institutions and settings and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. In my article I give some examples of the types of surveillance technologies that are being introduced into school HPE. Apps such as Coach’s Eye and Ubersense are beginning to be advocated in HPE circles, as are other health and fitness apps. Some self-tracking apps have been designed specifically for HPE teachers for use with their students. For example the Polar GoFit app with a set of heart rate sensors is expressly designed for HPE teachers as a monitoring tool for students’ physical activities during lessons. It allows teachers to distribute the heart rate sensors to students, set a target zone for heart rate levels and then monitor these online while the lesson takes place, either for individuals or the class as a group.

I argue that there are significant political and ethical implications of the move towards mobilising digital devices to collect personal data on school students. I have elsewhere identified a typology of five modes of self-tracking that involve different levels of voluntary engagement and ways in which personal data are employed. ‘Private’ self-tracking is undertaken voluntarily and initiated by the participant for personal reasons, ‘communal’ self-tracking involves the voluntary sharing of one’s personal data with others, ‘pushed’ self-tracking involves ‘nudging’ or persuasion, ‘imposed’ self-tracking is forced upon people and ‘exploited’ self-tracking involves the use of personal data for the express purposes of others.

Digitised HPE potentially involves all five of these modes. In the context of the institution of the school and the more specific site of HPE, digitised monitoring can only exacerbate the previous tendencies of HPE to represent paternalistic disciplinary control over the unruly bodies of children and young people and to exercise authority over what the concepts of ‘health’, ‘the ideal body’ and ‘fitness’ should mean. More enthusiastic students who enjoy sport and fitness activities may willingly and voluntarily adopt or consent to dataveillance of their bodies as part of achieving personal fitness or sporting performance goals. However, when students are forced to wear heart rate monitors to demonstrate that they are conforming to the exertions demanded of them by the HPE teacher, there is little room for resistance. When certain very specific targets of appropriate numbers of steps, heart-rate levels, body fat or BMI measurements and the like are set and students’ digitised data compared against them, the capacity for the apparatus of HPE to constitute a normalising, surveilling and disciplinary gaze on children and young people and the capacity for using these data for public shaming are enhanced.

The abstract of the article is below. If you would like a copy, please email me on deborah.lupton@canberra.edu.au.

Michael Gard (2014) raises some important issues in his opinion piece on digitised health and physical education (HPE) in the school setting. His piece represents the beginning of a more critical approach to the instrumental and solutionist perspectives that are currently offered on digitised HPE. Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance and analytics are now encroaching into many social institutions and settings and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. Identifying what is happening and the implications for concepts of selfhood, the body and social relations, not to mention the more specific issues of privacy and the commercialisation and exploitation of personal data, requires much greater attention than these issues have previously received in the critical social literature. While Gard has begun to do this in his article, there is much more to discuss. In this response, I present some discussion that seeks to provide a complementary commentary on the broader context in which digitised HPE is developing and manifesting. Whether or not one takes a position that is techno-utopian, dystopian or somewhere in between, I would argue that to fully understand the social, cultural and political resonances of digitised HPE, such contextualising is vital.

Self-tracking cultures: towards a sociology of personal informatics

I have had a full refereed paper accepted for the OzCHI conference, to be held in Sydney in December. The abstract is below. For a PDF of the full paper, click here: Self-tracking Cultures – OzCHI Conference Paper

A body of literature on self-tracking has been established in human-computer interaction studies. Contributors to this literature tend to take a cognitive or behavioural psychology approach to theorising and explaining self-tracking. Such an approach is limited to understanding individual behaviour. Yet self-tracking is a profoundly social practice, both in terms of the enculturated meanings with which it is invested and the social encounters and social institutions that are part of the self-tracking phenomenon. In this paper I contend that sociological perspectives can contribute some intriguing possibilities for human-computer interaction research, particularly in developing an understanding of the wider social, cultural and political dimensions of what I refer to as ‘self-tracking cultures’. The discussion focuses on the following topics: self-optimisation and governing the self; entanglements of bodies and technologies; the valorisation of data; data doubles; and social inequalities and self-tracking. The paper ends with outlining some directions for future research on self-tracking cultures that goes beyond the individual to the social.

 

Big Data Cultures symposium abstracts

Tomorrow the Big Data Cultures symposium that I have convened at the University of Canberra is taking place. There is a very interesting program from a range of Australian academics working on the social, cultural and political dimensions of the big data phenomenon. Here are the abstracts:

Keynote: ‘Visual dimensions’

Greg More, RMIT University

It’s a small problem for data to scale, but a wicked problem for us to make sense of big data that scales to infinity. The aim of this article is to explore the translation of data into geometrical relationships: the art and design of creative forms of data visualisation to give data a meaningful visual dimension. Data has dimensionality, but not in a geometrical sense. Topology – the mathematical study of shape – will be used as a lens to examine projects where designers utilise metaphors and abstraction to construct visual languages for data. Consider this cartography of data that makes sense of a scaleless territory. What is important in this examination is how the designers of data visualisations understand the character of the data itself – the texture, nuance and signal contained within the information – and use this to make data tangible and at a scale we can interact with.

‘To hold a social form in your hand: how far are interactive holograms of social data?’

Alexia Maddox, Deakin University and Curtin University

Starting with the question, ‘can we reanimate social data into three-dimensional forms?’, this paper explores the possibility of presenting research findings in three-dimensional formats. These formats could include information that we can print through 3D printers or animate through interactive holograms. This paper will interrogate this approach to data presentation and discuss from a sociological point of view the ways it could engage with Big Data. The combination of visual presentation derived from digital trace data provides us with a lens through which to investigate social patterns and trends. Building data into three-dimensional formats has the capacity to enhance the cognitive literacy of information and its presentation to diverse stakeholders. A social surface, that which is defined by form, needs a conceptual framework upon which to gain dynamic presence and dimension in space. Through my research into the herpetological community, I explored the interior structures of community and patterns of socio-technical engagement. The resulting conceptual approach from this work seeks to situate mediated sociability within social ecologies and build social data into social form. This environmental approach aligns with current trends in geodemographic analysis and incorporates the socio-technical actor that moves beyond physical space and into virtual terrains. The challenge of this conceptual approach is to explore how Big Data can be incorporated as environmental information or digital trace data.

‘Stranded deviations: Big Data and the contextually marginalised’

Andrew McNicol, University of New South Wales

As social and practical interactions have moved to the digital realm, facilitated by technological breakthroughs and social pressures, many have become understandably concerned about user privacy. With the increased scale and complexity of stored information, giving rise to the term ‘Big Data’, the potential for another person to scrutinise our personal information in a way that makes us uncomfortable increases. However, as attention is a finite resource, in the majority of cases user information never comes under scrutiny by unwanted human eyes – it is lost in the noise and is only treated as data available for computational analysis. In a big data society, privacy breaches increasingly occur as a result of algorithms allowing targets to emerge from data sets. This means that in any context certain individuals become disproportionately targeted for unwanted privacy breaches, and those who are regularly contextually marginalised have the most to lose from participating in a culture of Big Data, raising issues of equal access. In this paper I bring these ideas together to argue that the privacy discourse should focus not only on the potential for scrutiny of personal data, but also on the systems in place, both social and technological, that facilitate an environment where some users are safer than others.

‘Health, big data and the culture of irresponsibility’

Bruce Baer Arnold and Wendy Bonython, University of Canberra

The analysis of whole-of-population clinical, hospital and genomic data offers potential major benefits regarding improved public health administration, pharmaceutical research and wellness through identification of susceptibilities to health conditions. Achievement of those benefits will be fundamentally inhibited by a ‘health big data culture of irresponsibility’ in the public and private sectors. This paper critiques health big data cultures through reference to problematical initiatives such as 23andMe (a global direct-to-consumer DNA service) and the mismanaged release of weakly de-identified health data covering millions of people in the UK. It notes whole-of-population health data mining projects such as those involving DeCODE (Iceland) and Maccabi-Merck (Israel) that are more problematical than the so-called ‘vampire project’ involving Indigenous peoples. It draws on the authors’ work regarding privacy, bioethics, consumer protection and the OECD Health Information Infrastructure initiative. It highlights the need for coherent national and global health data management frameworks that address issues such as the genomic commons, intergenerational implications of genetic data and insurance redlining. It also highlights questions about media representations of big data governance.

‘Public problems for the digital humanities: debating Big Data methodologies, legitimating institutional knowledges’

Grant Bollmer, University of Sydney

While Big Data have clear implications for the knowledges produced by the social sciences, the various practices of the Digital Humanities have taken the methods associated with Big Data and applied them to objects rarely thought to be ‘Big’ or even ‘Data’. Scholars have used computation to examine literary history, visualising massive literary data sets in ways that make claims that, methodologically at least, are often perceived as threats to the humanities at a moment when traditional methods of teaching and performing humanistic scholarship are likewise under attack from a corporatized managerial university system. This paper uses the debates surrounding the Digital Humanities to investigate the political and institutional arguments that have emerged around Big Data methodologies in the humanities, along with the contrasting knowledge claims that ground these debates. I argue that, in their emphasis on methodology, these discussions overlook how academic publics have been transformed over the past decades. I suggest that normative claims about Big Data in the humanities must investigate its ‘public problems’ – moments in which a specific culture defined around the technologically mediated circulation of discourse produces internal norms that are concealed for the sake of external legitimation and funding.

‘Big data’s golems: bots as a technique of tactical media’

Chris Rodley, University of Sydney

Big data has enabled the creation of a diverse range of bots which collect, analyse and process digital information programmatically. While corporations and political parties were early adopters of bots, a growing number of activists, artists and programmers have recently begun to create their own data-driven bots on social platforms such as Twitter as a way of critiquing or disrupting dominant discourses. This paper considers a selection of bots created to comment on issues including NSA surveillance and gun control, arguing that they represent a radical departure from the Situationist strategy of détournement or the tactical disruptions envisaged by Michel de Certeau. It considers the ethics of adopting the techniques of the sensor society – or what Mark Andrejevic has termed “drone logic” – and the implications of bots entering the public sphere as semi-autonomous political actors. Like the Golem of Prague in Jewish folklore, these personifications of big data may simultaneously represent both a powerful defensive strategy as well as a potentially destructive, uncontrollable force.

‘“Paranoid nominalism” as cultural technique of the quantified self’

Christopher O’Neill, University of Melbourne

The Quantified Self movement constitutes a growing community of those committed to practices of self-tracking through mobile sensors and apps. This paper will offer a critique of contemporary Quantified Self discourse, arguing that it is characterised by a certain ‘paranoid nominalism’: that is, an inability to ‘reconcile’ the intimacy of sensors with the abstraction of statistical technologies. This critique shall be pursued through a genealogical investigation of the precursors of some of the key technologies of the Quantified Self movement, especially Étienne-Jules Marey’s work on developing a ‘second positivism’ through sensor technologies, and Adolphe Quetelet’s production of statistical technologies of governance. Drawing on the ‘cultural techniques’ approach of media theory, this paper will investigate these technological prehistories of the Quantified Self movement in order to probe its ideological aporias.

‘There’s an app for that: digital culture and the rise of technologism’

Doug Lorman, Deakin University

Humans have always used technology to overcome bodily and mental boundaries and limitations in the pursuit of personal transcendence. The development of digital technologies such as ‘apps’ and wearable technology have helped to further this pursuit. Digital technologies allow us to collect, store and analyse data on ourselves and take appropriate action. The growth of self-quantification means that technology is no longer disconnected from us, but is part of being human. Technology and its user are mutually constitutive; one influences the other.

The benefits of self-quantification have been touted elsewhere. My concern is that with our inherent desire to conquer nature and override the natural way of doing things we are placing an inordinate amount of faith in the ability of technology to resolve our issues. My talk will argue that the development of a blind faith in digital technologies is creating a phenomenon I call technologism; the belief that technological outputs or results (big data) are the absolute and only justifiable solutions to personal issues. The result of this is that we pay less attention to our surroundings, our lived events and put our faith in technology, relying on it to guide us, help us, heal us, and so on.

‘Database activism’

Mathieu O’Neil, University of Canberra

When data was rare, the focus lay in finding it and collecting it. Now that there is an overabundance of data, databases have assumed a central role in the sorting, organising, querying and representation of data. In the realm of science, databases operate as both scientific instruments and as a means of communicating results (Hine 2006). Similarly, in the news media field, journalists are increasingly using databases to render the flow of data meaningful and, through visualisation, to make important and pertinent information memorable. Like scientists, data journalists have to be concerned with the integrity of data, and present their methods and findings; database literacy is increasingly framed as a mandatory journalistic skill. At the same time the reliance on databases has led to the emergence of new forms of collective emotions and indignations (Parasie 2013). Unlike journalists, “civic hackers” (such as maplight.org, which tracks the influence of money on US politics) do not aim to reveal victims and guilty parties hidden in the data, or to organise collective indignations. Data itself is held captive by governing authorities and must be freed: civic hackers reveal, without denouncing.

Hine, C. (2006) “Databases as scientific instruments and their role in the ordering of scientific work”, Social Studies of Science 36(2), pp. 269-298.

Parasie, S. (2013) “Des machines à scandale. Éléments pour une sociologie morale des bases de données”, Reseaux 178-179, pp. 127-161.

‘Disability data cultures’

Gerard Goggin, University of Sydney

A fascinating, cross-cutting case study in big data cultures lies in the dynamic, evolving and contested space of contemporary disability and digital technology. Disability is now recognized as a significant part of social life, identity and the life course. Over the past twenty years, digital technology – especially computers, the Internet, mobile media, social media, apps, geolocation technologies and now wearable computers, and even technologies such as driverless cars – has emerged as a significant part of the mediascape, cultural infrastructure, social support system, and personal identity and repertoire of many people with disabilities. New social relations of disability are premised on – and increasingly ‘congealed’ in – forms of digital technology. In the Australian context, we might think, for instance, of the present conjuncture and its coincidence of two big national projects where disability and digital technology are both entangled – the National Disability Insurance Scheme (NDIS) and the National Broadband Network (NBN).

There is an emerging research, policy, design and activist engagement with disability and digital technology, but as yet questions of disability and big data have not been well canvassed. This is significant, given that, historically, the emergence of forms of data concerning disability has been bound up with classification, exclusion, government and discrimination, as well as the new forms of knowledge and governmentality associated with new socially oriented models and paradigms of disability.

Accordingly, this paper provides a preliminary exploration of the forms, affordances, characteristics, issues, challenges, ethics and possibilities of what might be termed ‘disability data cultures’. Firstly, I identify and discuss particular kinds of digital technologies, infrastructures and softwares, and their distinctive affordances and design trajectories relating to disability. As well as explicitly nominated and dedicated disability data technologies, I also discuss the emergence of health, self-tracking and quantified self apps by which normalcy and ability are exnominated (or naturalized). Secondly, I look at the kinds of applications, harvesting, computational logics and the will to power emerging in order to provide more comprehensive and targeted data on disability – for citizens and users; service, political and cultural intermediaries; and disability service providers, agencies and governments. Thirdly, I look at the nascent disability-inflected contribution to, and participation in, open data and citizen data initiatives and experiments.

‘Theoretical perspectives on privacy, selfhood and big data’

Janice Richardson,  Monash University

Big data practices produce specific anxieties about privacy, based upon the fact that information about us, of which we were previously unaware, may be revealed to our detriment. The concerns of the “masters of suspicion” (Nietzsche, Marx, Freud) provide a cultural background view that some important aspects of our lives are hidden or inaccessible to us. This framework has given way to the Foucauldian position that big data could be characterised as having the potential to create new ways in which we are categorised, rather than revealing our hidden essence or truth. However, this shift from revelation to construction does nothing to undermine our need to control such potentially harmful practices by both companies and government. As a result, it is necessary to consider how to conceptualise an ethical basis for such privacy claims, which arise as a result of unpredictable knowledge that is produced, rather than as a breach of confidence of pre-existing knowledge. I consider the potential for Spinoza – and his distinction between adequate and inadequate knowledge – to provide such a framework.

‘Big data/surveillant assemblages, interfaces, and user experiences: the cultivation of the docile data subject’

Ashlin Lee, University of Tasmania

The phenomenon of big data represents a socio-technical assemblage of services and devices that are involved in data collection and analysis. One example of this is personal ‘sensor’ devices (Andrejevic and Burdon 2014), like smartphones. Here users are interfaced into big data, simultaneously using big data for their own needs while fuelling it with their personal information and being the target of data collection and dataveillance/surveillance. With the popularity of these devices it is important to consider what implications this interfacing has, and the relationship between users and big data/surveillance. This paper describes the results of empirical research into users and their interfaces – conceptualised under Lee’s (2013) idea of convergent mobile technologies (CMTs) – and the implications of user interfacing with big data and surveillance. It highlights how these interfaces valorise user experiences that are ‘immediate’ over all others. In the context of their relationship with big data this is problematic, as users dismiss or disengage from issues of security and surveillance as long as rapidity is maintained. These CMT interfaces can thus be understood as contributing to the creation of ‘docile data subjects’, who happily bleed personal information into the big data (and surveillant) assemblage(s) in exchange for an experiential state deemed valuable.

‘Altmetrics in policy communication: investigating informal policy actors using social media data’

Fiona Martin and Jonathon Hutchinson, University of Sydney

For nearly a decade citizens have taken to social media to launch public conversations and connective action around issues of civic concern – conversations which have various impacts on the shaping of policy, regulation and governance. Now Facebook, LinkedIn and Twitter are increasingly being used to build, inform and influence informal expert networks, particularly around emerging technologies and practices, and their associated policy problems. Such networks link actors from data cultures such as computing science and medical research to those in hybrid industrial ecologies, like that of mobile health software development. Their conversations are often transnational. They promote and market as much as debate and mobilise. Thus they complicate Gov 2.0 assumptions about democratic participation and engagement, as well as data security. In this paper we argue that it is vital to have new analytic frameworks to measure and evaluate the identity, reach, and relative agency of actors in those networks, in order to understand their potential impact on policy development.

We model one such framework – a mixed-method social media network analysis (SMNA) and digital ethnography used to analyse agency and influence in Twitter conversations about mhealth. Using hashtagged conversations captured in the wake of the U.S. Food & Drug Administration’s September 2013 release of guidelines on Mobile Medical Applications, we visualize the network communications, then locate and profile the key influencers, exploring their motivations for engagement. Drawing on this data and altmetrics research, we discuss registers of impact in expert social media networks and propose a research agenda for exploring the political, cultural and economic value of Twitter conversations in policy formation.

‘Capturing capacity: quantified self technologies and the commodification of affect’

Miranda Bruce, Australian National University

The Quantified Self (QS) movement is part of a growing technological trend that exploits and modulates the potential of human life. QS finds ways to quantify the active and passive dimensions of the daily processes of human existence, in order to extract meaning from them and modify the ways that we move in and through the world. This paper will explore, firstly, the idea that QS represents a commodification of human capacity, an extraction of power, or a form of immaterial labour consistent with the logic of neoliberal capitalism. I will then turn to Deleuzian affect theory to open up a quite different ontological and ultimately practical approach to the problem of QS, which stresses the excessive, and thus un-capturable, nature of lived potential. Finally, I will offer some reflections on the relationship of this technology to broader trends concerning the technological modulation of human capacity.

‘Live data/sociology: what digital sociologists can learn from artists’ responses to big data’

Deborah Lupton, University of Canberra

The big data phenomenon has attracted much publicity in public forums, both in terms of its potential for offering insights into manifold aspects of social and economic life and for its negative associations with mass surveillance and the reduction of the complexity of behaviour into quantifiable data. In this paper I will discuss some of the ways in which artists have responded to big data. I contend that their conceptualisations and critiques of big data offer intriguing insights into the tacit assumptions and emotions (fears and anxieties as well as pleasures and satisfactions) that these digitised methods of knowledge production engender. Digital data are lively in a number of ways: they have become forms of ‘lively capital’ (that is, drawing commercial value from human embodiment, or life itself); they generate embodied and affective responses; they contribute recursively to life itself; and they have a social life of their own, constantly circulating and transforming as they are appropriated and re-purposed. Artists’ responses can contribute to what might be described as a ‘live data/sociology’ (drawing on Les Back’s concept of a ‘live sociology’ that departs from ‘zombie sociology’) which identifies and theorises the forms of liveliness that big digital data may encompass.

New project on fitness self-tracking apps and websites

My colleague Glen Fuller and I have started a new project on people’s use of fitness self-tracking apps and platforms (such as Strava and RunKeeper). We are interviewing people who are active users of these devices, seeking to identify why they have chosen to take up these practices, what apps and platforms they use, how they use them and what they do with the personal data that are generated from these technologies. We are interested in exploring issues around identity and self-representation, concepts of health, fitness and the body, privacy, surveillance and data practices and cultures.

The city in which we live and work, Canberra, is an ideal place to conduct this project, as there are many ardent cyclists and runners living here.

See here for our project’s website and further details of the study.

Digital technologies and data as sociomaterial objects

An excerpt from Chapter 2: ‘Theorising digital society’ from my book Digital Sociology (forthcoming, Routledge).

As sociologists and other social theorists have begun to argue, digital data are neither immaterial nor only minuscule components of a larger material entity. This perspective adopts a sociomaterial approach drawn from science and technology studies, an interdisciplinary field which has provided a critical stance on media technologies in general, and computerised technologies more specifically … In this literature, the digital data objects that are brought together through digital technologies, including ‘like’ or ‘share’ buttons, individuals’ browser histories, personalised recommendations and comments on social media posts as well as the hardware and software that structure the choices available to users, are assemblages of complex interactions of economic, technological, social and cultural logics (Mackenzie, 2005; Mackenzie and Vurdubakis, 2011; Caplan, 2013; Langlois and Elmer, 2013). Representing digital phenomena as objects serves the purpose of acknowledging their existence, effects and power (Marres, 2012; Caplan, 2013; Hands, 2013; Langlois and Elmer, 2013).

The cultural and political analysis of computer software is sometimes referred to as software studies. Writers in software studies place an emphasis not on the transmission or reception of messages, as in the old model of communication, but rather have developed a sociomaterial interest in the ways in which acts of computation produce and shape knowledges. Computer code is positioned as an agent in configurations and assemblages (Fuller, 2008), producing what Kitchin and Dodge (2011) refer to as ‘coded assemblages’. Indeed, the pervasive nature of software in everyday life is such that Manovich (2013: no page number given) argues that it has become ‘a universal language, the interface to our imagination and the world’. He contends, therefore, that social researchers should conceptualise people’s interactions with digital technologies as ‘software performances’ which are constructed and reconstructed in real time, with the software constantly reacting to the user’s actions.

… Digital data are also positioned as sociomaterial objects in this literature. Whereas many commentators in the popular media, government and business world view digital data as the ultimate forms of truth and accurate knowledge, sociologists and other social theorists have emphasised that these forms of information, like any other type, are socially created and have a social life, a vitality, of their own. Digital data objects structure our concepts of identity, embodiment, relationships, our choices and preferences and even our access to services or spaces.

There are many material aspects to digital data. They are the product of complex decisions, creative ideas, the solving and management of technical problems and marketing efforts on the part of those workers who are involved in producing the materials that create, manage and store these data. They are also the product of the labour of the prosumers who create the data. These are the ‘invisible’ material aspects of digital data (Aslinger and Huntemann, 2013).

Algorithms play an important role in configuring digital data objects. Algorithms measure and sort the users of digital technologies, deciding what choices they may be offered. Digital data objects aggregated together, often from a variety of sources, configure ‘metric assemblages’ (Burrows, 2012) or ‘surveillant assemblages’ (Haggerty and Ericson, 2000) that produce a virtual doppelganger of the user. Algorithms and other elements of software, therefore, are generative, a productive form of power (Mackenzie, 2005; Beer, 2009; Cheney-Lippold, 2011; Mackenzie and Vurdubakis, 2011; boyd and Crawford, 2012; Beer, 2013; Ruppert et al., 2013).

Scholars who have adopted a sociomaterial perspective have also highlighted the tangible physicality of aspects of digital technology manufacture and use. Despite the rhetoric of seamless, proficient operation that is so commonly employed to discuss the internet and ubiquitous computing, the maintenance that supports this operation is messy and contingent, often involving pragmatic compromises, negotiations and just-in-time interventions to keep the system working. Geographical, economic, social, political and cultural factors – including such basic requirements as a stable electricity supply and access to a computer network – combine to promote or undermine the workings of digital technologies (Bell, 2006; Bell and Dourish, 2007; Dourish and Bell, 2007; Bell and Dourish, 2011). The materiality of digital hardware becomes very apparent when devices that are no longer required must be disposed of, creating the problem of digital waste (or ‘e-waste’) that often contains toxic materials (Gabrys, 2011; Miller and Horst, 2012).

Given the high turnover of digital devices, their tendency towards fast obsolescence and the fact that they are often replaced every few years in wealthy countries by people seeking the newest technologies and upgrades, vast quantities of digital waste are constantly generated. The vast majority of discarded digital devices end up in landfill. Only a small minority are recycled or reused, and those that are tend to be sent from wealthy to poor countries for scrap and the salvaging of components. When they are outmoded and discarded, the once highly desirable, shiny digital devices that were so full of promise when they were purchased simply become another form of rubbish: dirty, unsightly and potentially contaminating pollutants (Gabrys, 2011). The electricity supplies that power digital technologies and digital data storage units themselves have environmental effects on humans and other living things, such as the release of smoke and particles from coal-fired electricity generating plants. ‘The digital is a regime of energies: human energy and the energy needed for technological machines’ (Parikka, 2013: no page given).

The materiality of digital objects is also apparent in debates over how and where digital data should be stored, as they require ever-larger physical structures (servers) for archiving purposes. Despite the metaphor of the computing ‘cloud’, digital data do not hover in the ether but must be contained within hardware. Furthermore, digital data are very difficult to erase or remove, and thus can be very stubbornly material. At the same time, however, if stored too long and not used, they may quickly become obsolete and therefore useless, if contemporary technologies can no longer access and make use of them. Digital data, therefore, may be said to ‘decay’ if left too long, becoming lost and forgotten if they are not migrated to new technological formats. Digital memory is volatile because the technologies used to store and access data change so quickly. Analogue materials that are rendered into digital form for archival purposes and then destroyed may therefore be lost if their digital forms can no longer be used (Gabrys, 2011).

References

Aslinger B and Huntemann N. (2013) Digital media studies futures. Media, Culture & Society 35(1): 9-12.

Beer D. (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society 11(6): 985-1002.

Beer D. (2013) Popular Culture and New Media: the Politics of Circulation, Houndmills: Palgrave Macmillan.

Bell G. (2006) ‘Satu keluarga, satu komputer’ (one home, one computer): cultural accounts of ICTs in South and Southeast Asia. Design Issues 22(2): 35-55.

Bell G and Dourish P. (2007) Yesterday’s tomorrows: notes on ubiquitous computing’s dominant vision. Personal and Ubiquitous Computing 11(2): 133-143.

Bell G and Dourish P. (2011) Divining a Digital Future: Mess and Mythology in Ubiquitous Computing, Cambridge, Mass: MIT Press.

boyd d and Crawford K. (2012) Critical questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15(5): 662-679.

Burrows R. (2012) Living with the h-index? Metric assemblages in the contemporary academy. The Sociological Review 60(2): 355-372.

Caplan P. (2013) Software tunnels through the rags ‘n refuse: object-oriented software studies and platform politics. Culture Machine, 14. (accessed 8 August 2013).

Cheney-Lippold J. (2011) A new algorithmic identity: soft biopolitics and the modulation of control. Theory, Culture & Society 28(6): 164-181.

Dourish P and Bell G. (2007) The infrastructure of experience and the experience of infrastructure: meaning and structure in everyday encounters with space. Environment and Planning B: Planning & Design 34(3): 414-430.

Fuller M. (2008) Introduction, the stuff of software. In: Fuller M (ed) Software Studies: A Lexicon. Cambridge, MA: The MIT Press, 1-13.

Gabrys J. (2011) Digital Rubbish: A Natural History of Electronics, Ann Arbor, MI: University of Michigan Press.

Haggerty K and Ericson R. (2000) The surveillant assemblage. British Journal of Sociology 51(4): 605-622.

Hands J. (2013) Introduction: politics, power and ‘platformativity’. Culture Machine, 14. (accessed 5 February 2014).

Kitchin R and Dodge M. (2011) Code/Space: Software and Everyday Life, Cambridge, Mass: MIT Press.

Langlois G and Elmer G. (2013) The research politics of social media platforms. Culture Machine, 14. (accessed 8 August 2013).

Mackenzie A. (2005) The performativity of code: software and cultures of circulation. Theory, Culture & Society 22(1): 71-92.

Mackenzie A and Vurdubakis T. (2011) Codes and codings in crisis: signification, performativity and excess. Theory, Culture & Society 28(6): 3-23.

Manovich L. (2013) The algorithms of our lives. The Chronicle of Higher Education. (accessed 17 December 2013).

Marres N. (2012) Material Participation: Technology, the Environment and Everyday Publics, New York: Palgrave Macmillan.

Miller D and Horst H. (2012) The digital and the human: a prospectus for digital anthropology. In: Horst H and Miller D (eds) Digital Anthropology. London: Berg, 3-35.

Parikka J. (2013) Dust and exhaustion: the labor of media materialism. CTheory. (accessed 2 November 2013).

Ruppert E, Law J and Savage M. (2013) Reassembling social science methods: the challenge of digital devices. Theory, Culture & Society 30(4): 22-46.


The five modes of self-tracking

Recently I have been working on a conference paper that seeks to outline the five different modes of self-tracking that I have identified as currently in existence. (Update – the full paper can now be downloaded here).

I argue that there is evidence that the personal data that are derived from individuals engaging in reflexive self-monitoring are now beginning to be used by agencies and organisations beyond the personal and privatised realm. Self-tracking rationales and sites are proliferating as part of a ‘function creep’ of the technology and ethos of self-tracking. The detail offered by these data on individuals and the growing commodification and commercial value of digital data have led government, managerial and commercial enterprises to explore ways of appropriating self-tracking for their own purposes. In some contexts people are encouraged, ‘nudged’, obliged or coerced into using digital devices to produce personal data which are then used by others.

The paper examines these issues, outlining five modes of self-tracking that have emerged: private, pushed, communal, imposed and exploited. There are intersections and recursive relationships between each of these self-tracking modes. However there are also observable differences related to the extent to which the self-tracking is taken up voluntarily and the purposes to which the data thus created are put.

Here are definitions of the typology of self-tracking that I have developed:

  • Private self-tracking relates to self-tracking practices that are taken up voluntarily as part of the quest for self-knowledge and self-optimisation and as an often pleasurable and playful mode of selfhood. Private self-tracking, as espoused in the Quantified Self’s goal of ‘self-knowledge through numbers’, is undertaken for purely personal reasons and the data are kept private or shared only with limited and selected others. This is perhaps the most public and well-known face of self-tracking.
  • Pushed self-tracking represents a mode that departs from the private self-tracking mode in that the initial incentive for engaging in self-tracking comes from another actor or agency. Self-monitoring may be taken up voluntarily, but in response to external encouragement or advocacy rather than as a personal and wholly private initiative. Examples include the move in preventive medicine, health promotion and patient self-care to encourage people to monitor their biometrics to achieve targeted health goals. The workplace has become a key site of pushed self-tracking, particularly in relation to corporate wellness programs in which workers are encouraged to take up self-tracking and share their data with their employer.
  • Communal self-tracking involves the voluntary sharing of a tracker’s personal data with other people. Trackers may use social media, platforms designed for comparing and sharing personal data and sites such as the Quantified Self website to engage with and learn from other self-trackers. Some attend meetups or conferences to meet face-to-face with other self-trackers and share their data and evaluations of the value of different techniques and devices for self-tracking. This mode is also evident in citizen science, citizen sensing and community development initiatives using data collected by individuals on their local environs, such as air quality, traffic conditions and crime rates, which are then aggregated with those of other participants for use in improving local conditions and services or for political action.
  • Imposed self-tracking involves the imposition of self-tracking practices upon individuals by others primarily for these others’ benefit. These include the use of tracking devices as part of worker productivity monitoring and efficiency programs. There is a fine line between pushed self-tracking and imposed self-tracking. While some elements of self-interest may still operate, people may not always have full choice over whether or not they engage in self-tracking. In the case of self-tracking in corporate wellness programs, employees must give their consent to wearing the devices and allowing employers to view their activity data. However, failure to comply may lead to higher health insurance premiums enforced by an employer, as is happening in some workplaces in the United States. At its most coercive, imposed self-tracking is used in programs involving monitoring of location and drug use for probation and parole surveillance, drug addiction programs and family law and child custody monitoring.
  • Exploited self-tracking refers to the ways in which individuals’ personal data are repurposed for the (often commercial) benefit of others. Exploited self-tracking is often marketed to consumers as a way for them to benefit personally, whether by sharing their information with others as a form of communal self-tracking or by earning points or rewards. However their data are then used by the collecting organisations for their own purposes and in some cases are sold to or used by third parties. Customer loyalty programs, in which consumers voluntarily sign up to have their individual purchasing habits logged by retailers in return for points or rewards, are one example. Some retailers (for example a large pharmacy chain in the US) are beginning to use wearable devices as part of their customer rewards schemes, encouraging customers to upload their personal fitness data to their platforms. The data can then be used by the retailers for their marketing, advertising and product offers as well as onsold to third parties.

In the rest of the paper I draw upon theoretical perspectives on concepts of selfhood, citizenship, biopolitics and data practices and assemblages in discussing the wider sociocultural implications of the emergence and development of these modes of self-tracking. I argue that there are many important issues that require further exploration in relation to the appropriation of self-tracking. As humans increasingly become nodes in the Internet of Things, generating and exchanging digital data with other smart, sensor-equipped objects, self-tracking practices will most probably become unavoidable for many people, whether they are taken up voluntarily or pushed or imposed upon them. The evidence outlined in this paper suggests a gradually widening scope for the use of self-tracking that is likely to expand as a growing number of agencies and organisations realise the potential of the data that are produced from these practices.