The thirteen Ps of big data

Big data are often described as being characterised by the ‘3 Vs’: volume (the large scale of the data); variety (the different forms of data sets that can now be gathered by digital devices and software); and velocity (the constant generation of these data). An online search of the ‘Vs’ of big data soon reveals that some commentators have augmented these Vs with the following: value (the opportunities offered by big data to generate insights); veracity/validity (the accuracy/truthfulness of big data); virality (the speed at which big data can circulate online); and viscosity (the resistances and frictions in the flow of big data) (see Uprichard, 2013 for a list of even more ‘Vs’).

These characterisations principally come from the worlds of data science and data analytics. From the perspective of critical data researchers, there are different ways in which big data can be described and conceptualised (see the further reading list below for some key works in this literature). Anthropologists Tom Boellstorff and Bill Maurer (2015a) refer to the ‘3 Rs’: relation, recognition and rot. As they explain, big data are always formed and given meaning via relationships with human and nonhuman actors that extend beyond data themselves; how data are recognised qua data is a sociocultural and political process; and data are susceptible to ‘rot’: deterioration or transformation, sometimes in unintended ways, as they are purposed and repurposed.

Based on my research and reading of the critical data studies literature, I have generated my own list that can be organised around what I am choosing to call the ‘Thirteen Ps’ of big data. As in any such schema, this ‘Thirteen Ps’ list is reductive, acting as a discursive framework to organise and present ideas. But it is one way to draw attention to the sociocultural dimensions of big data that the ‘Vs’ lists have thus far failed to acknowledge, and to challenge the taken-for-granted attributes of the big data phenomenon.

  1. Portentous: The popular discourse on big data tends to represent the phenomenon as having momentous significance for commercial, managerial, governmental and research purposes.
  2. Perverse: Representations of big data are also ambivalent, demonstrating not only breathless excitement about the opportunities they offer but also fear and anxiety about not being able to exert control over their sheer volume and unceasing generation and the ways in which they are deployed (as evidenced in metaphors of big data that refer to ‘deluges’ and ‘tsunamis’ that threaten to overwhelm us).
  3. Personal: Big data incorporate, aggregate and reveal detailed information about people’s personal behaviours, preferences, relationships, bodily functions and emotions.
  4. Productive: The big data phenomenon is generative in many ways, configuring new or different ways of conceptualising, representing and managing selfhood, the body, social groups, environments, government, the economy and so on.
  5. Partial: Big data can only ever tell a certain narrative, and as such they offer a limited perspective. There are many other ways of telling stories using different forms of knowledges. Big data are also partial because they are relational: only some phenomena are singled out and labelled as ‘data’, while others are ignored. Furthermore, more big data are collected on some groups than on others: people who do not use or have access to the internet, for example, will be underrepresented in big digital data sets.
  6. Practices: The generation and use of big data sets involve a range of data practices on the part of individuals and organisations, including collecting information about oneself using self-tracking devices, contributing content to social media sites, the harvesting of online transactions by the internet empires and the data mining industry, and the development of tools and software to produce, analyse, represent and store big data sets.
  7. Predictive: Predictive analytics using big data are used to make inferences about people’s behaviour. These inferences are becoming influential in optimising or limiting people’s opportunities and life chances, including their access to healthcare, insurance, employment and credit.
  8. Political: The big data phenomenon involves power relations, including struggles over the ownership of or access to data sets, the meanings and interpretations that should be attributed to big data, the ways in which digital surveillance is conducted and the exacerbation of socioeconomic disadvantage.
  9. Provocative: The big data phenomenon is controversial. It has provoked much recent debate in response to various scandals and controversies related to the digital surveillance of citizens by national security agencies, the use and misuse of personal data, the commercialisation of data and whether or not big data pose a challenge to the expertise of the academic social sciences.
  10. Privacy: Concerns about the privacy and security of big data sets are growing as people become aware of how their personal data are used for surveillance and marketing purposes, often without their consent or knowledge, and of the vulnerability of digital data to hackers.
  11. Polyvalent: Big data are generated, purposed and repurposed by a multitude of actors and agencies across different social, cultural, geographical and temporal contexts, and big data sets produce proliferating data profiles on individuals and social groups. These data therefore hold many meanings for the different entities involved.
  12. Polymorphous: Big data can take many forms as data sets are generated, combined, manipulated and materialised in different ways, from 2D graphics to 3D-printed objects.
  13. Playful: Generating and materialising big data sets can have a ludic quality: for self-trackers who enjoy collecting and sharing information on themselves or competing with other self-trackers, for example, or for data visualisation experts or data artists who enjoy manipulating big data to produce beautiful graphics.

Critical Data Studies – Further Reading List

Andrejevic, M. (2014) The big data divide, International Journal of Communication, 8, 1673-89.

Boellstorff, T. (2013) Making big data, in theory, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4869/3750>, accessed 8 October 2013.

Boellstorff, T. & Maurer, B. (2015a) Introduction, in T. Boellstorff & B. Maurer (eds.), Data, Now Bigger and Better! (Chicago, IL: Prickly Paradigm Press), 1-6.

Boellstorff, T. & Maurer, B. (eds.) (2015b) Data, Now Bigger and Better! Chicago, IL: Prickly Paradigm Press.

boyd, d. & Crawford, K. (2012) Critical questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon, Information, Communication & Society, 15 (5), 662-79.

Burrows, R. & Savage, M. (2014) After the crisis? Big Data and the methodological challenges of empirical sociology, Big Data & Society, 1 (1).

Cheney-Lippold, J. (2011) A new algorithmic identity: soft biopolitics and the modulation of control, Theory, Culture & Society, 28 (6), 164-81.

Crawford, K. & Schultz, J. (2014) Big data and due process: toward a framework to redress predictive privacy harms, Boston College Law Review, 55 (1), 93-128.

Gitelman, L. & Jackson, V. (2013) Introduction, in L. Gitelman (ed.), Raw Data is an Oxymoron (Cambridge, MA: MIT Press), 1-14.

Helles, R. & Jensen, K.B. (2013) Making data – big data and beyond: introduction to the special issue, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4860/3748>, accessed 8 October 2013.

Kitchin, R. (2014) The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Kitchin, R. & Lauriault, T. (2014) Towards critical data studies: charting and unpacking data assemblages and their work, Social Science Research Network. <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474112>, accessed 27 August 2014.

Lupton, D. (2015) A critical sociology of big data (chapter 5), in Digital Sociology (London: Routledge).

Lyon, D. (2014) Surveillance, Snowden, and Big Data: capacities, consequences, critique, Big Data & Society, 1 (2). <http://bds.sagepub.com/content/1/2/2053951714541861>, accessed 13 December 2014.

Madden, M. (2014) Public Perceptions of Privacy and Security in the Post-Snowden Era. Pew Research Internet Project: Pew Research Center.

McCosker, A. & Wilken, R. (2014) Rethinking ‘big data’ as visual knowledge: the sublime and the diagrammatic in data visualisation, Visual Studies, 29 (2), 155-64.

Robinson, D., Yu, H. & Rieke, A. (2014) Civil Rights, Big Data, and Our Algorithmic Future. No place of publication provided: Robinson + Yu.

Ruppert, E. (2013) Rethinking empirical social sciences, Dialogues in Human Geography, 3 (3), 268-73.

Tene, O. & Polonetsky, J. (2013) A theory of creepy: technology, privacy and shifting social norms, Yale Journal of Law & Technology, 16, 59-134.

Thrift, N. (2014) The ‘sentient’ city and what it may portend, Big Data & Society, 1 (1). <http://bds.sagepub.com/content/1/1/2053951714532241.full.pdf+html>, accessed 1 April 2014.

Tinati, R., Halford, S., Carr, L. & Pope, C. (2014) Big data: methodological challenges and approaches for sociological analysis, Sociology, 48 (4), 663-81.

Uprichard, E. (2013) Big data, little questions?, Discover Society, (1). <http://www.discoversociety.org/focus-big-data-little-questions/>, accessed 28 October 2013.

van Dijck, J. (2014) Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology, Surveillance & Society, 12 (2), 197-208.

Vis, F. (2013) A critical reflection on Big Data: considering APIs, researchers and tools as data makers, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4878/3755>, accessed 27 October 2013.

Medical diagnosis apps – study findings

Over 100,000 medical and health apps for mobile digital devices have now been listed in the Apple App Store and Google Play. They represent diverse opportunities for lay people to access medical information and track their body functions and medical conditions. As yet, however, few critical social researchers have sought to analyse these apps.

In a study I undertook with Annemarie Jutel, we conducted a sociological analysis of medical diagnosis apps, and two articles have now been published from the study. Annemarie is an expert in the sociology of diagnosis, and we were interested in investigating how these apps represent the process of diagnosis. We drew on the perspective that apps are sociocultural artefacts that draw on and reproduce tacit norms and assumptions. We argue that, from a sociological perspective, digital devices such as health and medical apps have significant implications for the ways in which the human body is understood, visualised and treated by medical practitioners and lay people alike, for the doctor-patient relationship and for the practice of medicine.

In one article, published in Social Science & Medicine, we focused on self-diagnosis apps directed at lay people. We undertook a search using the terms ‘medical diagnosis’ and ‘symptom checker’ for apps that were available for download to smartphones in mid-April 2014 in the Apple App Store and Google Play. We found 35 self-diagnosis apps that claimed to diagnose across a range of conditions (we didn’t include apps directed at diagnosis of single conditions). Some have been downloaded by tens or hundreds of thousands of smartphone owners, and in the case of WebMD and iTriage Health, by millions.

Our analysis suggests that these apps inhabit a contested and ambiguous site of meaning and practice. The very existence of self-diagnosis apps speaks to several important dimensions of contemporary patienthood and healthcare in the context of a rapidly developing ecosystem of digital health technologies. They also participate in the quest for patient ‘engagement’ and ‘empowerment’ that is a hallmark of digital health rhetoric (or what I call ‘digital patient engagement’).

Self-diagnosis apps, like other technologies designed to give lay people the opportunity to monitor their bodies and their health states, engage with the discourses of healthism and control that pervade contemporary medicine. We found that app developers combined claims to medical expertise with appeals to algorithmic authority to promote their apps to potential users. While the developers also used appeals to patient engagement as part of their promotional efforts, these were undermined by routine disclaimers that users should seek medical advice to effect a diagnosis. While the cautions offered on the apps that they are for ‘entertainment purposes only’ and not designed to ‘replace a diagnosis from a medical professional’ may be added for legal reasons, they detract from the authority that the apps may offer and indeed call into question why anyone should use them.

In our other article, published in the new journal Diagnosis, we directed attention to diagnosis apps designed for the use of medical practitioners as well as lay people. We analysed 176 such apps that we found in Google Play and the Apple App Store in December 2013. While 36 of these were directed at lay people, the remainder were for medical practitioners. The Diagnosis article concentrates mainly on the latter, given that our other article dealt with the self-diagnosis apps for lay people.

Our research suggests that these apps should be used with great caution by both lay people and practitioners. The lack of verifiable information about the evidence or expertise used to develop these apps is of major concern. The apps are of highly variable quality, ranging from those that appear to have the support and input of distinguished medical experts, specialty groups or medical societies to those that offer little or nothing to support their knowledge claims. While at one end of the spectrum we can see apps as a delivery system for information that has been subject to conventional forms of academic review, at the other extreme we see apps developed by entrepreneurs with interests in many topics outside medicine, with little input from medical sources, or with inadequate information to ascertain what the sources might be. The lack of information provided by many app developers also raises questions about how users can identify the conflicts of interest and commercial interests that might shape content.

Managing and materialising data as part of self-tracking

Like many other forms of digital data, self-tracking data have a vitality and social life of their own, circulating across and between a multitude of sites. In a context in which digital data are culturally represented as liquid entities that require management and containment, part of the project of managing the contemporary body is the containment of the data that one’s body produces. As discursive representations of self-tracking and the quantified self frequently contend, personal data are profligate: it is only right that one should seek, first, to collect these data and, second, to manage and discipline them by aggregating them, representing them visually and making sense of them.

Shifting forms of selfhood are configured via these digital data assemblages, depending on the context in and purpose for which they are assembled. As the digital data produced by self-tracking are constantly generated and the combinations of data sets that may be brought together on individuals are numerous, personal data assemblages are never stable or contained. They represent a ‘snap-shot’ of a particular moment in time and a particular rationale of data practice. The data assemblages are always mutable, dynamic, responsive to new inputs and interpretations. They thus represent a type of selfhood that is distributed between different and constantly changing data sets. Self-tracking assemblages are constantly created and recreated when information about individuals is derived via digital technologies and then reassembled for various purposes.

Bodies and selves are always multiple, in whatever context they find themselves. However, for self-trackers, this multiplicity is foregrounded in ways that may not have occurred in previous eras. If they are reviewing their personal data regularly, they cannot fail to be confronted with the shifting data assemblages that serve to represent their bodies and their selves. Part of the data practices in which they are invited to engage as part of self-tracking culture, therefore, is the negotiation and sense-making around the hybridity and vitality of their data assemblages.

To gain meaning from these data sets, self-trackers or the third parties who seek to use their data must engage in sense-making to interpret these data and gain some purchase on their mutating forms. An important element of self-tracking practices for many people is the visualisation or presentation of their personal data. The notion that data can be beautiful and aesthetically pleasing when presented in appropriate formats pervades data science in general: Gregg (2015) refers to this phenomenon as the ‘data spectacle’. The generation of digital data visualisations can be, variously, acts of work, creative expression and the presentation or performance of selfhood, with the latter an element in particular of self-tracking practices.

In the ‘show-and-tell’ ethos of the Quantified Self movement, finding compelling visual modes to demonstrate the patterns in one’s data is a central feature. The Quantified Self website is full of demonstrations by members of their data, including videos of their ‘show-and-tell’ presentations and still images of their visualisations. Collecting and aggregating personal data, therefore, are part of a range of practices involving self-knowledge and self-expression. By showing one’s data to others in a visually interesting and explanatory graphic, a self-tracker is achieving both self-knowledge and self-expression. Self-tracking becomes performative, both for the insights that a self-tracker may achieve about her or his life but also in terms of the aesthetics of the data that she or he may be able to curate.

The aesthetic elements of data visualisations involve affective responses that may include both pleasure and anxiety (McCosker and Wilken 2014). Indeed, McCosker and Wilken (2014) refer to the tendency in data visualisation circles to fetishise the sublimity of ‘beautiful data’ as part of exerting mastery over the seemingly unlimited, and thus overwhelming, volume of big digital data sets. Extending this logic, the physical materialising of digital data in the form of a 2D or 3D data materialisation may offer a solution to the anxieties of big data. When it is one’s personal data, drawn from one’s own flesh, that are manifested in a material data object, this may provoke a sense of mastery over what may be experienced as a continually data-emitting subjectivity. The liquidity, flows and force of personal digital data become frozen in time and space, offering an opportunity to make sense of one’s data.

References

Gregg, M. (2015) Inside the data spectacle. Television & New Media, 16 (1), 37-51.

McCosker, A. and Wilken, R. (2014) Rethinking ‘big data’ as visual knowledge: the sublime and the diagrammatic in data visualisation. Visual Studies, 29 (2), 155-164.

Changing representations of self-tracking

I recently completed a chapter for a book on lifelogging that discussed the concepts and uses of data as they are expressed in representations of self-tracking (see here for the full paper, available open access). In part of the chapter I looked at the ways in which people writing about the quantified self and other interpretations of self-tracking represent data and data practices, including in articles published in Wired magazine and other media outlets and blogs.

From the beginning of discussions of the quantified self, the representation of data in quantified self-tracking discourses (at least as it was expressed by its progenitors) included several elements: quantified data are powerful entities; it is important not only to collect quantified data on oneself, but to analyse these data for the patterns and insights they reveal; data (and particularly quantified or quantifiable data) are an avenue to self-knowledge; the emergence of new digital and mobile devices for gathering information about oneself has facilitated self-tracking and the generation of quantified personal data; quantifiable data are more neutral, reliable, intellectual and objective than qualitative data, which are intuitive, emotional and subjective; self-tracked data can provide greater insights than the information that a person receives from their senses, revealing previously hidden patterns or correlations; self-tracked data can be motivational phenomena, inspiring action by entering into a feedback loop; everything can be rendered as data; and data about individuals are emblematic of their true selves.

In more recent times, however, it is evident that a further set of concepts about self-tracked data have emerged since the original euphoria of the early accounts of quantified self-tracking. They include: the meaning of self-tracked data can be difficult to interpret; personal data can be disempowering as well as empowering; the conditions in which data are gathered can influence their validity; the contexts in which data are generated are vital to understanding their meaning; individuals’ personal data are not necessarily secure or private; quantified personal data can be reductive; and personal data can be used to discriminate against individuals.

We as yet know very little about how people conceptualise and engage with digital data about themselves. Given the recent scandals about how people’s personal data may be hacked, used or manipulated without their knowledge (the Snowden revelations about security agencies’ use of metadata, the Facebook emotional manipulation experiment, the celebrity nude photo and Sony Pictures hackings, for example), as well as growing coverage of the potentially negative implications of self-tracking as described above, these are pressing issues.

The cultural specificity of digital health technologies

Digital health technologies configure a certain way of practising medicine and public health, a certain type of patient or lay person and a specific perspective on the human body. The techno-utopian approach to using digital health technologies tends to assume that these tacit norms and assumptions are shared and accepted by all the actors involved, and that the technologies act on a universal human body. Yet a cursory examination of surveys of digital health technology use demonstrates that social structural factors such as age, gender, education level, occupation and race/ethnicity, as well as people’s state of health and their geographical location, play a major role in influencing how such technologies are taken up among lay people and the extent to which people are able to access them.

An American study of the use of some digital health technologies, using representative data collected by the National Cancer Institute in 2012, found no evidence of differences by race or ethnicity, but significant differences by gender, age and socioeconomic status (Kontos et al. 2014). Female respondents were more likely to use online technologies for health-related information, as were younger people (under 65) and those of higher socioeconomic status. People of low socioeconomic status were less likely to go online to look for a healthcare provider, use email or the internet to connect with a doctor, track their personal health information online, use a website to help track diet, weight or physical activity or download health information to a mobile device. However, they were more likely to use social media sites to access or share health information. Women were more likely than men to engage in all of these activities.

While there is little academic research on how different social groups use apps, market research reports have generated some insights. One report showed that women install 40 per cent more apps than men and buy 17 per cent more paid apps, while men use health and fitness apps slightly more (by 10 per cent) than women (Koetsier 2013). A Nielsen market report on the use of wearable devices found that while men and women used fitness activity bands in equal numbers, women were more likely to use diet and calorie counter apps (Nielsen 2014).

As these findings suggest, gender is one important characteristic that structures the use of digital health technologies. The digital technology culture is generally male-dominated: most technology designers, developers and entrepreneurs are male. As a result, a certain blindness to the needs of women can be evident. For example, when the Apple Health app was announced in 2014, destined to be included as part of a suite of apps on the Apple Watch, it did not include a function for the tracking of menstrual cycles (Eveleth 2014). Gender stereotypes are routinely reproduced in devices such as health and medical apps. As I noted in my study of sexuality and reproduction self-tracking apps, the sexuality apps tend to focus on documenting and celebrating male sexual performance, with little acknowledgement of women’s sexuality, while reproduction apps emphasise women’s over men’s fertility.

App designers and those who develop many other digital technologies for medical and health-related purposes often fail to recognise the social and cultural differences that may influence how people interact with them. Just as cultural beliefs about health and illness vary from culture to culture, so too do responses to the cultural artefacts that are digital health technologies. Aboriginal people living in a remote region of Australia, for example, have very different notions of embodiment, health and disease from those that tend to feature in the health literacy apps that have been developed for mainstream white Australian culture (Christie and Verran 2014). It is therefore not surprising that a review of the efficacy of a number of social media and apps developed for health promotion interventions targeted at Aboriginal Australians found no evidence of their effectiveness or benefit to this population (Brusse et al. 2014).

Few other analyses have sought to highlight the cultural differences in how people respond to and use digital health technologies. This kind of research is surely imperative to challenge existing assumptions about ‘the user’ of these technologies and provide greater insights into their benefits and limitations.

My publications for 2014

This is the list of my publications that came out in 2014. If you would like a copy of any of the articles, please contact me on deborah.lupton@canberra.edu.au.

Books

Lupton, D. (2015) Digital Sociology. London: Routledge (this has a 2015 publication date, but was actually published in November 2014).

Special Journal Issue

Editor of special issue on ‘Beyond techno-utopia: critical approaches to digital health technologies’, Societies (volume 4, number 2), 2014.

Book Chapters

Lupton, D. (2014) The reproductive citizen: motherhood and health education. In Fitzpatrick, K. and Tinning, R. (eds), Health Education: Critical Perspectives. London: Routledge, pp. 48-60.

Lupton, D. (2014) Unborn assemblages: shifting configurations of embryonic and foetal embodiment. In Nash, M. (ed), Reframing Reproduction: Conceiving Gendered Experiences. Houndmills: Palgrave Macmillan.

Peer-reviewed Journal Articles

Lupton, D. (2014) ‘How do you measure up?’ Assumptions about ‘obesity’ and health-related behaviors in ‘obesity’ prevention campaigns. Fat Studies, 3(1), 32-44.

Lupton, D. (2014) The commodification of patient opinion: the digital patient experience economy in the age of big data. Sociology of Health & Illness, 36(6), 856-69.

Lupton, D. (2014) Precious, pure, uncivilised, vulnerable: infant embodiment in the Australian popular media. Children & Society, 28(5), 341-51.

Lupton, D. (2014) Quantified sex: a critical analysis of sexual and reproductive self-tracking apps. Culture, Health & Sexuality, online first, doi: 10.1080/13691058.2014.920528.

Lupton, D. (2014) Data assemblages, sentient schools and digitised HPE (response to Gard). Sport, Education and Society, online first, doi: 10.1080/13573322.2014.962496.

Lupton, D. (2014) Health promotion in the digital era: a critical commentary. Health Promotion International, online first, doi: 10.1093/heapro/dau091.

Lupton, D. (2014) Apps as artefacts: towards a critical sociological perspective on health and medical apps. Societies, 4, 606-22.

Lupton, D. (2014) Critical perspectives on digital health technologies. Sociology Compass, 8(12), 1344-59.

Editorials

Lupton, D. (2014) Beyond techno-utopia: critical approaches to digital health technologies. Societies, 4(4), 706-11.

Other Academic Publications

Lupton, D. (2014) Risk. In Cockerham, W., Dingwall, R. and Quah, S. (eds), The Wiley-Blackwell Encyclopedia of Health, Illness, Behavior and Society. New York: Blackwell, pp. 2067-71.

Lupton, D. (2014) ‘Feeling Better Connected’: Academics’ Use of Social Media. Canberra: News & Media Research Centre.

Itinerary for my trip to England in January 2015

Next month I will be visiting England to give talks and meet colleagues. It’s a whirlwind visit, with eight talks at seven universities in five days. The itinerary and further details are provided below for those who might be interested in coming along to any of the talks.

Monday 12 January

  • 10.30 am-3.00 pm: NSMNSS Knowledge Exchange Event, London. Speaking on ‘Using social media for academic work – possibilities, benefits and risks’. Further details here.
  • 5.00 pm-6.30 pm: Seminar at UCL, London. Speaking on ‘Fabricated data bodies: reflections on 3D printed digital body objects in medical and health domains’. Venue: Daryll Forde room, Department of Anthropology, UCL.

Tuesday 13 January

  • 2.00 pm-4.00 pm: Sociological Perspectives on Digital Health event, Warwick University. Speaking on ‘Critical digital health studies: a research agenda’. Further details here.
  • 5.00 pm-7.00 pm: What is Digital Sociology? event, Warwick University. Speaking on ‘What is digital sociology?’. Further details here.

Wednesday 14 January

  • 9.30 am-12.00 pm: Workshop at the Department of Primary Care Health Sciences, Green Templeton College, Oxford University. Workshop topic: ‘Theorising and researching medical and health apps and wearable self-tracking devices’.
  • 5.00 pm-7.00 pm: Digital Sociology event, Goldsmiths, University of London. Speaking on a panel on ‘Digital sociology, digital cultures, web science, data science … what’s the difference?’. Further details here.

Thursday 15 January

  • 10.00 am-4.00 pm: ‘Biosensors in Everyday Life’ workshop at Lancaster University. Speaking on ‘Self-tracking cultures: thinking sociologically about the quantified self’. Further details here.

Friday 16 January

  • 12.00 pm-4.00 pm: Yorkshire BSA Medsoc group event, University of York. Speaking on ‘Digital data, big and small: some critical sociological reflections’. Further details here.


Towards a sociology of 3D printing

As a digital sociologist, I have become fascinated by the social and cultural implications of 3D printing technologies. Few sociologists or other critical academic commentators have yet begun to investigate how 3D printing is beginning to affect society. Yet as 3D printing technologies move into an expanding range of contexts, there is much opportunity to analyse their effects. Not only are these technologies having an impact on industrial manufacturing and the distribution of goods, but makers, artists and designers are also taking them up in intriguing ways. 3D printing is being used in medicine and dentistry, in public relations and marketing and in fan cultures. These technologies are being introduced into schools and incorporated into the curriculum. As the price of 3D printers falls, they will find their way into more households. There are also significant environmental and legal issues in relation to how they are used, including questions about intellectual property.

As part of my initial explorations into the sociology of 3D printing, last week I published two pieces on these technologies. One was an article for The Conversation, in which I discussed the phenomenon of the 3D self replica. This is a figurine of a person that can be made using the digital data derived from 3D scanning software. The technologies to generate these artefacts are rapidly moving into a range of leisure domains, including sporting events, shopping centres, airports, concerts and amusement parks, as well as fan cultures and marketing programs. 3D printed self replicas can even be made at home using a software package developed for the Xbox Kinect motion-sensing device and a home 3D printer. Some commentators have referred to these replicas as ‘3D selfies’ because they involve the production of a personal likeness. In the article I speculated about the ways in which people may start to use these figurines as markers or mementos of their bodies and social relationships.

The second piece was an academic article that discusses the 3D printing of what I term ‘digital body objects’ for medical and health-related purposes. The article explores the use of non-organic materialisations of people’s body parts for medical purposes, as well as the fabrication of self-tracked bodily data into objects. Here is the abstract (the full paper can be accessed here):

The advent of 3D printing technologies has generated new ways of representing and conceptualising health and illness, medical practice and the body. There are many social, cultural and political implications of 3D printing, but a critical sociology of 3D printing is only beginning to emerge. In this article I seek to contribute to this nascent literature by addressing some of the ways in which 3D printing technologies are being used to convert digital data collected on human bodies into tangible forms that can be touched and held. I focus in particular on the use of 3D printing to manufacture non-organic replicas of individuals’ bodies, body parts or bodily functions and activities. The article is also a reflection on a specific set of digital data practices and the meaning of such data to individuals. In analysing these new forms of human bodies, I draw on sociomaterialist perspectives as well as the recent work of scholars who have sought to reflect on selfhood, embodiment, place and space in digital society and the nature of people’s interactions with digital data. I argue that these objects incite intriguing ways of thinking about the ways in which digital data on embodiment, health and illness are interpreted and used across a range of contexts. The article ends with some speculations about where these technologies may be headed and outlines future research directions.

These initial forays into a sociology of 3D printing represent merely a small component of possible avenues for theorising and research into the social impact of this technology. What I am particularly interested in at the moment is the implications for people’s data practices, or how the material objects that are generated from 3D printing technologies act as ‘solidified’ personal data. Future writings will investigate these issues in greater depth.

Digital Sociology now out

Digital Sociology has now been published (click here for the Amazon link and here for the publisher’s link).

 

The publisher’s blurb is below:

Digital Sociology

We now live in a digital society. New digital technologies have had a profound influence on everyday life, social relations, government, commerce, the economy and the production and dissemination of knowledge. People’s movements in space, their purchasing habits and their online communication with others are now monitored in detail by digital technologies. We are increasingly becoming digital data subjects, whether we like it or not, and whether we choose this or not.

The sub-discipline of digital sociology provides a means by which the impact, development and use of these technologies and their incorporation into social worlds, social institutions and concepts of selfhood and embodiment may be investigated, analysed and understood. This book introduces a range of interesting social, cultural and political dimensions of digital society and discusses some of the important debates occurring in research and scholarship on these aspects. It covers the new knowledge economy and big data, reconceptualising research in the digital era, the digitisation of higher education, the diversity of digital use, digital politics and citizen digital engagement, the politics of surveillance, privacy issues, the contribution of digital devices to embodiment and concepts of selfhood and many other topics.

Digital Sociology is essential reading not only for students and academics in sociology, anthropology, media and communication, digital cultures, digital humanities, internet studies, science and technology studies, cultural geography and social computing, but for other readers interested in the social impact of digital technologies.

The politics of privacy in the digital age

The latest excerpt from my forthcoming book Digital Sociology (due to be released by Routledge on 12 November 2014). This one is from Chapter 7: Digital Politics and Citizen Digital Public Engagement.

The distinction between public and private has become challenged and transformed via digital media practices. Indeed, it has been contended that via the use of online confessional practices, as well as the accumulating masses of data that are generated about digital technology users’ everyday habits, activities and preferences, the concept of privacy has changed. Increasingly, as data from many other users are aggregated and interpreted using algorithms, one’s own data have an impact on others by predicting their tastes and preferences (boyd, 2012). The concept of ‘networked privacy’ developed by danah boyd (2012) acknowledges this complexity. As she points out, it is difficult to make a case for privacy as an individual issue in the age of social media networks and sousveillance. Many people who upload images or comments to social media sites include other people in the material, either deliberately or inadvertently. As boyd (2012: 348) observes, ‘I can’t even count the number of photos that were taken by strangers with me in the background at the Taj Mahal’.

Many users have come to realise that the information about themselves and their friends and family members that they choose to share on social media platforms may be accessible to others, depending on the privacy policy of the platform and the ways in which users have operated privacy settings. Information shared on Facebook, for example, can be limited to Facebook friends via privacy settings far more easily than data that users upload to platforms such as Twitter, YouTube or Instagram, which offer few, if any, settings that can be used to limit access to personal content. Even within Facebook, however, users must accept that their data may be accessed by those whom they have chosen as friends. They may be included in photos that are uploaded by their friends even if they do not wish others to view the photo, for example.

Open source data harvesting tools are now available that allow people to search their friends’ data. Using a tool such as Facebook Graph Search, people who have joined that social media platform can mine the data uploaded by their friends and search for patterns. Such elements as ‘photos of my friends in New York’ or ‘restaurants my friends like’ can be identified using this tool. In certain professions, such as academia, others can use search engines to find out many details about one’s employment and accomplishments (just one example is Google Scholar, which lists academics’ publications as well as how often and where they have been cited by others). Such personal data as online photographs or videos of people, their social media profiles and online comments can easily be accessed by others by using search engines.

Furthermore, not only are individuals’ personal data shared in social networks, they may now be used to make predictions about others’ actions, interests, preferences or even health states (Andrejevic, 2013; boyd, 2012). When people’s small data are aggregated with others to produce big data, the resultant datasets are used for predictive analytics (Chapter 5). As part of algorithmic veillance and the production of algorithmic identities, people become represented as configurations of others in the social media networks with which they engage, and of the websites visited by people characterised as ‘like them’. There is little, if any, opportunity to opt out of participation in these data assemblages that are configured about oneself.

A significant tension exists in discourses about online privacy. Research suggests that people hold ambivalent and sometimes paradoxical ideas about privacy in digital society. Many people value the use of dataveillance for security purposes and for improving economic and social wellbeing. It is common for digital media users to state that they are not concerned about being monitored by others online because they have nothing to hide (Best, 2010). On the other hand, however, there is evidence of unease about the continuous, ubiquitous and pervasive nature of digital surveillance. It has become recognised that there are limits to the extent to which privacy can be protected, at least in terms of individuals being able to exert control over access to digital data about themselves or enjoy the right to be forgotten (Rosen, 2012; Rosenzweig, 2012). Some commentators have contended that notions of privacy, indeed, need to be rethought in the digital era. Rosenzweig (2012) has described previous concepts as ‘antique privacy’, which require challenging and reassessment in the contemporary world of ubiquitous dataveillance. He asserts that in weighing up rights and freedoms, the means, ends and consequences of any dataveillance program should be individually assessed.

Recent surveys of Americans by the Pew Research Center (Rainie and Madden, 2013) have found that the majority still value the notion of personal privacy but also value the protection against criminals or terrorists that breaches of their own privacy may offer. Digital technology users for the most part are aware of the trade-off between protecting their personal data from others’ scrutiny or commercial use, and gaining benefits from using digital media platforms that collect these data as a condition of use. This research demonstrates that the context in which personal data are collected is important to people’s assessments of whether their privacy should be intruded upon. The Americans surveyed were more concerned about others knowing the content of their emails than their internet searches, and were more likely to experience or witness breaches of privacy in their own social media networks than to be aware of government surveillance of their personal data.

Another study using qualitative interviews with Britons (The Wellcome Trust, 2013) investigated public attitudes to personal data and the linking of these data. The research found that many interviewees demonstrated a positive perspective on the use of big data for national security and the prevention and detection of crime, improving government services, the allocation of resources and planning, identifying social and population trends, convenience and time-saving when doing shopping and other online transactions, identifying dishonest practices and making vital medical information available in an emergency. However, the interviewees also expressed a number of concerns about the use of their data, including the potential for the data to be lost, stolen, hacked or leaked and shared without consent, the invasion of privacy when used for surveillance, unsolicited marketing and advertising, the difficulty of correcting inaccurate data on oneself and the use of the data to discriminate against people. Those interviewees of low socioeconomic status were more likely to feel powerless about dealing with potential personal data breaches, identity theft or the use of their data to discriminate against them.

References

Andrejevic, M. (2013) Infoglut: How Too Much Information is Changing the Way We Think and Know. New York: Routledge.

Best, K. (2010) Living in the control society: surveillance, users and digital screen technologies. International Journal of Cultural Studies, 13, 5-24.

boyd, d. (2012) Networked privacy. Surveillance & Society, 10, 348-50.

Rainie, L. & Madden, M. (2013) 5 findings about privacy. http://networked.pewinternet.org/2013/12/23/5-findings-about-privacy, accessed 24 December 2013.

Rosen, J. (2012) The right to be forgotten. Stanford Law Review Online, 64 (88). http://www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten/, accessed 21 November 2013.

Rosenzweig, P. (2012) Whither privacy? Surveillance & Society, 10, 344-47.

The Wellcome Trust (2013) Summary Report of Qualitative Research into Public Attitudes to Personal Data and Linking Personal Data. The Wellcome Trust. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@msh_grants/documents/web_document/wtp053205.pdf