When data do not make sense

One of my current research interests focuses on how to conceptualise digital data and the ways in which humans make sense of their personal data. Next week I am attending a workshop in Copenhagen run as part of a series convened by RMIT’s Data Ethnographies Lab. We are addressing the topic of ‘broken data’, or digital data that for some reason do not work, are considered useless or fail to make sense to the people reviewing them.

Drawing on some of my own concepts of digital data, I have produced the following metaphorical alternatives to that of ‘broken data’.

Metaphor 1: if data are liquid, then …

  • there can be blockages in data flows
  • moving data can become stuck
  • contained data can become out of control (like tsunamis or floods)
  • liquid data can become frozen

Metaphor 2: if data are lively, or companion species, then …

  • alive data can die
  • domesticated data can become wild
  • fresh data can decompose
  • healthy data can become sick

Metaphor 3: if data can be eaten/consumed, then …

  • data can become self or considered not-self
  • data can be incorporated or not incorporated
  • data can be digestible or indigestible
  • data can be edible or inedible

Lively devices, lively data and lively leisure studies

This is a foreword I wrote for a Leisure Studies special issue on digital leisure cultures (the link to the journal version is here).

In the countries of the Global North, each person, to a greater or lesser degree, has become configured as a data subject. When we use search engines, smartphones and other digital devices, apps and social media platforms, and when we move around in spaces carrying devices that record our geolocation or where embedded sensors or cameras record our movements, we are datafied: rendered into assemblages of digital data. These personal digital data assemblages are only ever partial portraits of us and are constantly changing, but they are beginning to have significant impacts on the ways in which people understand themselves and others and on their life opportunities and chances. Leisure cultures and practices are imbricated within digital and data practices and assemblages. Indeed, digital technologies are beginning to transform many areas of life into leisure pursuits in unprecedented ways, expanding the purview of leisure studies.

These processes of datafication can begin even before birth and continue after death. Proud expectant parents commonly announce pregnancies on social media, uploading ultrasound images of their foetuses and sometimes even creating accounts in the name of the unborn so that they can ostensibly communicate from within the womb. Images from the birth of the child may also become publicly disseminated, as in the genre of the childbirth video on YouTube. This is followed by the opportunity for parents to record and broadcast many images of their babies’ and children’s lives. At the other end of life, many images of dying and dead bodies can now be found on the internet. People with terminal illnesses write blogs, use Facebook status updates or tweet about their experiences and post images of themselves as their bodies deteriorate. Memorial websites or dedicated pages on social media sites are used after people’s death to commemorate them. Beyond these types of datafication, people’s other online interactions and the digital sensors embedded in devices and physical environments constantly generate streams of digital data about them. In some cases, people may choose to generate these data; in most other cases, they are collected and used by others, often without people’s knowledge or consent. These data have become highly valuable as elements of the global knowledge economy, whether aggregated and used as big data sets or used to reveal insights into individuals’ habits, behaviours and preferences.

One of my current research interests is exploring the ways in which digital technologies work to generate personal information about people and how individuals themselves and a range of other actors and agencies use these data. I have developed the concept of ‘lively data’, which is an attempt to incorporate the various elements of how we are living with and by our data. Lively data are generated by lively devices: those smartphones, tablet computers, wearable devices and embedded sensors that we live with and alongside, our companions throughout our waking days. Lively data about humans are vital in four main respects: 1) they are about human life itself; 2) they have their own social lives as they circulate and combine and recombine in the digital data economy; 3) they are beginning to affect people’s lives, limiting or promoting life chances and opportunities (for example, whether people are offered employment or credit); and 4) they contribute to livelihoods (as part of their economic and managerial value).

These elements of datafication and lively data have major implications for leisure cultures. Research into people’s use of digital technologies for recreation, including the articles collected here and others previously published in this journal, draws attention to the pleasures, excitements and playful dimensions of digital encounters. These are important aspects to consider, particularly when much research into digital society focuses on the limitations or dangers of digital technology use, such as the possibility of various types of ‘addiction’ or the potential for oppressive surveillance or exploitation of users that these technologies present. What is often lost in such discussions is an acknowledgement of the value that digital technologies can offer ordinary users (and not just the internet empires that profit from them). Perspectives that can balance awareness of both the benefits and possible drawbacks of digital technologies provide a richer analysis of their affordances and social impact. When people are using digital technologies for leisure purposes, they are largely doing so voluntarily: because they have identified a personal use for the technologies that will provide enjoyment, relaxation or some other form of escape from the workaday world. What is particularly intriguing, at least from the perspective of my interest in lively data, is how the data streams from digitised leisure pursuits are becoming increasingly entangled with other areas of life and concepts of selfhood. Gamification and ludification strategies, in which elements of play are introduced into domains such as the workplace, healthcare, intimate relationships and educational institutions, are central to this expansion.

Thus, for example, we now see concepts of the ‘healthy, productive worker’, in which employers seek to encourage their workers to engage in fitness pursuits to develop high-achieving and healthy employees who can avoid taking time out because of illness and operate at maximum efficiency in the workplace. Fitness tracker companies offer employers discounted wearable devices for their employees so that corporate ‘wellness’ programs can be put in place in which fitness data sharing and competition are encouraged among employees. Dating apps like Tinder encourage users to think of the search for partners as a game and the attractive presentation of the self as a key element in ‘winning’ the interest of many potential dates. The #fitspo and #fitspiration hashtags used on Instagram and other social media platforms draw attention to female and male bodies that are slim, physically fit and well-groomed, performing dominant notions of sexual attractiveness. Pregnancy has become ludified with a range of digital technologies. Using their smartphones and dedicated apps, pregnant women can take ‘belfies’, or belly selfies, and generate time-lapse videos for their own and others’ entertainment (including uploading the videos on social media sites). 3D-printing companies offer parents the opportunity to generate replicas of their foetuses from 3D ultrasounds, for use as display objects on mantelpieces or work desks. Little girls are offered apps which encourage them to perform makeovers on pregnant women or help them deliver their babies via caesarean section. In the education sector, digitised gamification blurs leisure, learning and physical fitness. Schools are beginning to distribute heart rate monitors, coaching apps and other self-tracking devices to children during sporting activities and physical education classes, promoting a culture of self-surveillance via digital data at the same time as teachers’ monitoring of their students’ bodies is intensified. Online education platforms for children like Mathletics encourage users to complete tasks to win medals and work their way up the leaderboard, competing against other users around the world.

In these domains and many others, the intersections of work, play, health, fitness, education, parenthood, intimacy, productivity, achievement and concepts of embodiment, selfhood and social relations are blurred, complicated and far-reaching. These practices raise many questions for researchers interested in digitised leisure cultures across the age span. What are the affordances of the devices, software and platforms that people use for leisure? How do these technologies promote and limit leisure activities? How are people’s data used by other actors and agencies and in what ways do these third parties profit from them? What do people know about how their personal details are generated, stored and used by other actors and agencies? How do they engage with their own data or those about others in their lives? What benefits, pleasures and opportunities do such activities offer, and what are their drawbacks, risks and harms? How are the carers and teachers of children and young people encouraging or enjoining them to use these technologies, and to what extent are they aware of the possible harms as well as benefits? How are data privacy and security issues recognised and managed, on the part of both those who take up these pursuits voluntarily and those who encourage or impose them on others? When does digitised leisure begin to feel more like work and vice versa, and what are the implications of this?

These questions return to the issue of lively data: how these data are generated and managed, and the impact they have on people’s lives and concepts of selfhood and embodiment. As I noted earlier, digital technologies contribute to new ways of reconceptualising areas of life as games or as leisure pursuits that previously were not thought of or treated in those terms. In the context of this move towards rendering practices and phenomena as recreational and the rapidly changing sociomaterial environment, all social researchers interested in digital society need to be lively in response to lively devices and lively data. As the editors of this special issue contend, researching digital leisure cultures demands a multidisciplinary and interdisciplinary perspective. Several exciting new interdisciplinary areas have emerged in response to the increasingly digitised world: among them internet studies, platform studies, software studies, critical algorithm studies and critical data studies. The ways in which leisure studies can engage with these, as well as with the work carried out in sub-disciplines such as digital sociology, digital humanities and digital anthropology, have yet to be fully realised. In return, the key focus areas of leisure studies, both conceptually and empirically – aspects of pleasure, performance, politics and power relations, embodiment, selfhood, social relations and the intersections between leisure and work – offer much to these other areas of enquiry.

The articles published in this special issue go some way to addressing these issues, particularly in relation to young people. The contributors demonstrate how people may accept and take up the dominant assumptions and concepts about idealised selves and bodies expressed in digital technologies but also how users may resist these assumptions or seek to re-invent them. As such, this special issue represents a major step forward in promoting a focus on the digital in leisure studies, working towards generating a lively leisure studies that can make sense of the constantly changing worlds of lively devices and lively data.

Who owns your personal health and medical data?


Presenting my talk at the Mayo Clinic Social Media and Healthcare Summit (Photo by Jason Pratt / Mayo Clinic)

Tomorrow I am speaking on a panel at the Mayo Clinic Healthcare and Social Media Summit on the topic of ‘Who owns your big data?’. I am the only academic among the panel members, the others being a former president of the Australian Medical Association, the CEO of the Consumers Health Forum, the Executive Director of a private hospital organisation and the Chief Executive of the Medical Technology Association of Australia. The Summit itself is directed at healthcare providers, seeking to demonstrate how they may use social media to publicise their organisations and promote health among their clients.

As a sociologist, my perspective on the use of social media in healthcare is inevitably directed at troubling the taken-for-granted assumptions that underpin the jargon of ‘disruption’, ‘catalysing’, ‘leveraging’ and ‘acceleration’ that tends to recur in digital health discourses and practices. When I discuss the big data phenomenon, I evoke the ‘13 Ps of big data’, which recognise the social and cultural assumptions and uses of big data.

When I speak at the Summit, I will note that the first issue to consider is for whom and by whom personal health and medical data are collected. Who decides whether personal digital data should be generated and collected? Who has control over these decisions? What are the power relations and differentials that are involved? This often very intimate information is generated in many different ways – via routine online transactions (e.g. Googling medical symptoms, purchasing products on websites) or more deliberately as part of people’s contributions to social media platforms (such as PatientsLikeMe or Facebook patient support pages) or as part of self-tracking or patient self-care endeavours or workplace wellness programs. The extent to which the generation of such information is voluntary, pushed, coerced or exploited, or indeed even covert, conducted without the individual’s knowledge or consent, varies in each case. Many self-trackers collect biometric data on themselves for their private purposes. In contrast, patients who are sent home with self-care regimes may take up self-monitoring reluctantly. In some situations, very little choice is offered to people: such as school students who are told to wear self-tracking devices during physical education lessons, or employees who work in a culture in which monitoring their health and fitness is expected of them or who may be confronted with financial penalties if they refuse.

Then we need to think about what happens to personal digital data once they are generated. Jotting down details of one’s health in a paper journal, or sharing information with a doctor that is kept in a folder in a filing cabinet in the doctor’s surgery, can remain private and secure. In this era of using digital tools to generate and archive such information, this privacy and security can no longer be guaranteed. Once any kind of personal data are collected and transmitted to the computing cloud, the person who generated them loses control of them. These details become big data, part of the digital data economy and available to any number of second or third parties for repurposing: data mining companies, marketers, health insurance, healthcare and medical device companies, hackers, researchers, the internet empires themselves and even national security agencies, as Edward Snowden’s revelations demonstrated.

Even the large institutions that are trusted by patients for offering reliable and credible health and medical information online (such as the Mayo Clinic itself, which ranks among the most popular health websites with an estimated 30 million unique monthly visitors) may inadvertently supply personal details of those who use their websites to third parties. One recent study found that nine out of ten visits to health or medical websites result in data being leaked to third parties, including companies such as Google and Facebook, online advertisers and data brokers, because the websites use third-party analytics tools that automatically send information to the developers about what pages people are visiting. This information can then be used to construct risk profiles on users that may shut them out of insurance, credit or job opportunities. Data security breaches are common in healthcare organisations, and cyber criminals are very interested in stealing personal medical details from such organisations’ archives. This information is valuable as it can be sold for profit or used to create fake IDs to purchase medical equipment or drugs or to make fraudulent health insurance claims.
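To make the leakage mechanism a little more concrete, the sketch below (in TypeScript, for a browser context) shows the kind of third-party analytics ‘beacon’ such studies describe. It is purely illustrative: the tracker endpoint, parameter names and cookie name are my own assumptions for the sake of the example, not the code of any actual analytics vendor.

```typescript
// Hypothetical sketch: how an embedded third-party analytics script can leak
// which health page a visitor is reading. The endpoint, parameter names and
// cookie name below are illustrative assumptions, not any real vendor's API.

function sendAnalyticsBeacon(trackerEndpoint: string): void {
  const payload = new URLSearchParams({
    // Full URL of the page being viewed, e.g.
    // https://health-site.example.org/conditions/type-2-diabetes
    // The path alone can reveal the condition the visitor is researching.
    page: document.location.href,
    // The page the visitor arrived from, which can be equally revealing.
    referrer: document.referrer,
    // A persistent identifier stored in a cookie lets the third party link
    // this visit to the visitor's wider browsing history.
    visitorId: document.cookie.match(/visitor_id=([^;]+)/)?.[1] ?? "new",
  });
  // The request goes to the analytics company's servers, not the health
  // site's, so the condition-specific URL leaves the site operator's control.
  navigator.sendBeacon(`${trackerEndpoint}/collect`, payload);
}

// A site embedding the tracker would typically fire the beacon on page load:
window.addEventListener("load", () =>
  sendAnalyticsBeacon("https://analytics.example.com")
);
```

The point of the sketch is simply that nothing unusual needs to happen on the health site itself: the routine page-view reporting that analytics tools perform is enough to transmit condition-specific URLs, referrers and persistent identifiers to third parties.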

In short, the answer to the question ‘Who owns your personal health and medical data?’ is generally no longer individuals themselves.

My research and that of others who are investigating people’s responses to big data and the scandals that have erupted around data security and privacy are finding that concepts of privacy and notions of data ownership are beginning to change in response. People are becoming aware of how their personal data may be accessed, legally or illegally, by a plethora of actors and agencies and exploited for commercial profit. Major digital entrepreneurs, such as Apple CEO Tim Cook, are in turn responding to the public’s concern about the privacy and security of their personal information. Healthcare organisations and medical providers need to recognise these concerns and manage their data collection initiatives ethically, openly and responsibly.

‘Eating’ digital data

Update: I have now published a journal article that brings this post together with the previous one and expands the argument – it can be found here.

My previous post drew on Donna Haraway’s concept of companion species to theorise the ways in which we engage with our personal digital data assemblages. The work of Annemarie Mol offers an additional conceptual framework within which to understand digital data practices at a more detailed level, while still retaining the companion species perspective.

Mol has developed a framework that incorporates elements of enquiry that can be mapped onto the topic of digital data practices. These include the following: understanding language/discourse and its context and effects; tracing the development and use of objects of knowledge as they become objects-in-practice; acknowledging the dynamic nature of processes and the ‘endless tinkering’ they involve; incorporating awareness of the topologies, or sites and spaces, in which phenomena are generated and used; and finally, directing attention at the lived experiences or engagements in which practices and objects are understood and employed (see also Mol, 2002, 2008; Mol and Law, 2004).

If this approach is applied to digital data practices and their configurations, then several things become important to developing an understanding of the ontology of digital data and our relationship with them: focusing attention on the language that is employed to describe digital data; viewing digital data as objects that have both discursive and material effects and that are constantly changing; recognising the processes of tinkering (experimenting, adapting) that occur in relation to digital data; and attending to the spaces in which these processes take place.

Mol’s (2002) concept of ‘the body multiple’ in medicine has resonances with Haraway’s cyborg ontology. This concept recognises that the human body is comprised of many different practices, sites and knowledges. While the body itself is not fragmented or multiple, the phenomena that make sense of it and represent it do so in many different ways, so that the body is lived and experienced in different modes. So too the digital data assemblages that are configured by human users’ interactions with digital technologies are different versions of people’s identities and bodies that have material effects on their ways of living and conceptualising themselves. Part of the work of people’s data practices is negotiating the multiple bodies and selves that these digital data assemblages represent and configure.

Mol’s writings on human subjectivity also have implications for understanding data practices and interpretations. In her essay entitled ‘I eat an apple’, Mol points out that once a foodstuff has been swallowed, the human subject loses control over what happens to the content of the food in her body as the processes of digestion take place. As she notes, the body is busily responding to the food, but the individual herself has no control over this: ‘Her actorship is distributed and her boundaries are neither firm nor fixed’ (Mol, 2008: 40). The eating subject is able to choose what food she decides to eat, but after this point, her body decides how to deal with the components of the food, selecting certain elements and discarding others.

This raises questions about human agency and subjectivity. In the statement ‘I eat an apple’, is the agency in the ‘I’ or in the apple? Humans may grow, harvest and eat apples, but without foodstuffs such as apples, humans would not exist. Furthermore, once the apple is chewed and swallowed, it becomes part of and absorbed into the eater’s body. It is impossible to determine what is human and what is apple (Mol, 2008: 30). The eating subject, therefore, is semi-permeable, neither completely closed off nor completely open to the world.

Mol then goes on to query at what stage the apple becomes part of her, and whether the category of the human subject might recognise the apple as ‘yet another me, a subject in its own right’ (Mol, 2008: 40). Apples themselves have been shaped by years of cultivation by humans into the forms in which they now exist. In fact they may be viewed as a form of Haraway’s companion species. How then do we draw boundaries around the body/self and the apple? How is the human subject to be defined?

To extend Mol’s analogy, the human subject may be conceptualised as both data-ingesting and data-emitting, in an endless cycle of generating data, bringing the data into the self and generating yet more data. Data are absorbed into the body/self and then become new data that flow out of the body/self into the digital data economy. The data-eating/emitting subject, therefore, is not closed off but is open to taking in and letting out digital data. These data become part of the human subject but, as data assemblages, they also represent the individual in multiple ways that have different meanings based on their contexts and uses. Just as eating an apple has many meanings, depending on the social, cultural, political, historical and geographical contexts in which this act takes place, generating and responding to digital data about oneself are highly contingent acts. If digital data are never ‘raw’ but rather are always ‘cooked’ (that is, always understood and experienced via social and cultural processes), and may indeed be ‘rotted’ or spoilt in some way (Boellstorff, 2013), can we also understand them as ‘eaten’ and ‘digested’?

Haraway and Mol both emphasise the politics of technocultures. Haraway’s cyborg theorising was developed to express her socialist feminist principles. In all of her work she emphasises the importance of paying attention, as critical scholars, to the exacerbation of socioeconomic disadvantage and inequalities that may be outcomes of these relationships. Mol similarly notes the political nature of technologies. In her ‘I eat an apple’ essay, for example, she comments on her distaste for Granny Smith apples, once imported from Chile and therefore associated in her mind with repressive political regimes. As she notes, while she may eat this type of apple and while it may nourish her body as other apples do, she is unable to gain sensory pleasure from it.

Data science writings on big data often fail to acknowledge the political dimensions of digital data. They do not see how data are always already ‘cooked’, or how their flavour or digestibility is influenced by their context. Just as ‘eating apples is variously situated’ (Mol, 2008: 29) in history, geography, culture, social relations and politics, resulting in different flavours and pleasures, so too eating data is contextual. Like Haraway’s cyborg figuration (see her interview with Gane, 2006), the digital data assemblage may be viewed both as a product of global enterprise and capitalism and as opening up radical creative and political possibilities.

Using Mol’s concepts of the eating subject, we might wonder: What happens when we ingest/absorb digital data about ourselves? Do we recognise some data as ‘food’ (appropriate for such ingestion) and others as ‘non-food’ (not appropriate in some way for our use)? Are some data simply indigestible (our bodies/selves do not recognise them as us and cannot incorporate them)? How are the flavours and tastes of digital data experienced, and what differentiates these flavours and tastes?

References

Boellstorff, T. (2013) Making big data, in theory. First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4869/3750>, accessed 8 October 2013.

Gane, N. (2006) When we have never been human, what is to be done?: Interview with Donna Haraway. Theory, Culture & Society, 23, 135-58.

Mol, A. (2002) The Body Multiple: Ontology in Medical Practice. Durham, NC: Duke University Press.

Mol, A. (2008) I eat an apple. On theorizing subjectivities. Subjectivity, 22, 28-37.

Mol, A. & Law, J. (2004) Embodied action, enacted bodies: the example of hypoglycaemia. Body & Society, 10, 43-62.

Personal digital data as a companion species

Update: I have now published a journal article that brings this post together with the following post on ‘eating’ digital data – the article can be found here.

While an intense interest in digital data in popular and research cultures is now evident, we still know little about how humans interact with, make sense of and use the digital data that they generate. Everyday data practices remain under-researched and under-theorised. In attempting to identify and think through some of the ways in which critical digital data scholars may seek to contribute to understandings of data practices, I am developing an argument that rests largely on the work of two scholars in the field of science and technology studies: Donna Haraway and Annemarie Mol. In this post I begin with Haraway, while my next post will discuss Mol.

Haraway’s work has often attempted ‘to find descriptive language that names emergent ontologies’, and I use her ideas here in the spirit of developing new terms and concepts to describe humans’ encounters with digital data. Haraway emphasises that humans cannot be separated from nonhumans conceptually, as we are constantly interacting with other animals and material objects as we go about our daily lives. Her writings on the cyborg have been influential in conceptualising encounters between humans and computer technologies (Haraway, 1991). In this work, Haraway drew attention to the idea that human ontology must be understood as multiple and dynamic rather than fixed and essential, blurring the boundaries between nature and culture, human and nonhuman, Self and Other. She contends that actors, whether human or nonhuman, are never pre-established; rather, they emerge through relational encounters (Bhavnani and Haraway, 1994). The cyborg metaphor encapsulates this idea, not solely in relation to human-technology assemblages but in relation to any interaction of humans with nonhumans.

This perspective already provides a basis for thinking through the emergent ontologies that are the digital data assemblages configured by humans’ interactions with the software and hardware that generate digital data about them. Haraway’s musings on human and nonhuman animal interactions (Haraway, 2003, 2008, 2015) also have resonance for how we might understand digital data-human assemblages. Haraway uses the term ‘companion species’ to describe the relationships that the human species has not only with other animal species but also with technologies. Humans are companion species with the nonhumans alongside which they live and with which they engage, each species learning from and influencing the other, co-evolving. Haraway refers to companion species as ‘post-cyborg entities’, acknowledging the development of her thinking since her original cyborg exegesis.

This trope of companion species may be taken up to think about the ways in which humans generate, materialise and engage with digital data. Thrift has described the new ‘hybrid beings’ that are comprised of digital data and human flesh. Adopting Haraway’s companion species trope allows for the extension of this idea by acknowledging the liveliness of digital data and the relational nature of our interactions with these data. Haraway has commented in a lecture that she has learnt

through my own inhabiting of the figure of the cyborg about the non-anthropomorphic agency and the liveliness of artifacts. The kind of sociality that joins humans and machines is a sociality that constitutes both, so if there is some kind of liveliness going on here it is both human and non-human. Who humans are ontologically is constituted out of that relationality.

This observation goes to the heart of how we might begin to theorise the liveliness of digital data in the context of our own aliveness/liveliness, highlighting the relationality and sociality that connect them.

Like companion species and their humans, digital data are lively combinations of nature/culture. Digital data are lively in several ways. They are about life itself (details about humans and other living species); they are constantly generated and regenerated as well as purposed and repurposed as they enter into the digital knowledge economy; they have potential impacts on humans’ and other species’ lives via the assumptions and inferences that they are used to develop; and they have consequences for livelihoods in terms of their commercial and other value and effects.

Rather than think of the contemporary digitised human body/self as posthuman (cf. Haraway’s comments on posthumanism in her interview with Gane, 2006), the companion species perspective develops the idea of ‘co-human’ entities. Just as digital data assemblages are comprised of specific information points about people’s lives, and thus learn from people as algorithmic processes manipulate this personal information, people in turn learn from the digital data assemblages of which they are a part. The book choices that Amazon offers them, the ads that are delivered to them on Facebook or Twitter, the returns that are listed from search engine queries or browsing histories, the information that a fitness tracker provides about their heart rate or calories burnt each day are all customised to their digitised behaviours. Perusing these data can provide people with insights about themselves and may structure their future behaviour.

These aspects of digital data assemblages are perhaps becoming even more pronounced as the Internet of Things develops and humans become just one node in a network of smart objects that configure and exchange digital data with each other. Humans move around in data-saturated environments and are able to carry or wear personalised data-generating devices on their bodies, including not only their smartphones but also objects such as sensor-embedded wristbands, clothing or watches. The devices that we carry with us are literally our companions: in the case of smartphones, regularly touched, fiddled with and looked at throughout the day. But in distinction from previous technological prostheses, these mobile and wearable devices are also invested with and send out continuous flows of personal information. They have become the repositories of communication with others, geolocation information, personal images, biometric information and more. They also leak these data outwards as they are transmitted to computing cloud servers. All this is happening continuously and in real time, raising important questions about the security and privacy of the very intimate information that these devices generate, transmit and archive (Tene and Polonetsky, 2013).

The companion species trope recognises the inevitability of our relationship with our digital data assemblages and the importance of learning to live together and to learn from each other. It suggests both the vitality of these assemblages and also the possibility of developing a productive relationship, recognising our mutual dependency. We may begin to think about our digital data assemblages as members of a companion species that have lives of their own that are beyond our complete control. These proliferating digital data companion species, as they are ceaselessly configured and reconfigured, emerge beyond our bodies/selves and into the wild of digital data economies and circulations. They are purposed and repurposed by second and third parties and even more actors beyond our reckoning as they are assembled and reassembled. Yet even as our digital data companion species engage in their own lives, they are still part of us and we remain part of them. We may interact with them or not; we may be allowed access to them or not; we may be totally unaware of them or we may engage in purposeful collection and use of them. They have implications for our lives in a rapidly growing array of contexts, from the international travel we are allowed to undertake to the insurance premiums, job offers or credit we are offered.

If we adopt Haraway’s companion species trope, we might ask the following: What are our affective responses to our digital data companion species? Do we love or hate them, or simply feel indifferent to them? What are the contexts for these responses? How do we live with our digital data companion species? How do they live with us? How do our lives intersect with them? What do they learn from us, and what do we learn from them? What is the nature of their own lives as they move around the digital data economy? How are we influenced by them? How much can we domesticate or discipline them? How do they domesticate or discipline us? How does each species co-evolve?

References

Bhavnani, K.-K. & Haraway, D. (1994) Shifting the subject: a conversation between Kum-Kum Bhavnani and Donna Haraway, 12 April 1993, Santa Cruz, California. Feminism & Psychology, 4, 19-39.

Gane, N. (2006) When we have never been human, what is to be done?: Interview with Donna Haraway. Theory, Culture & Society, 23, 135-58.

Haraway, D. (1991) Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association.

Haraway, D. (2003) The Companion Species Manifesto: Dogs, People, and Significant Otherness. Chicago: Prickly Paradigm.

Haraway, D. (2008) When Species Meet. Minneapolis: The University of Minnesota Press.

Tene, O. & Polonetsky, J. (2013) Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology & Intellectual Property, 11, 239-73.

The thirteen Ps of big data

Big data are often described as being characterised by the ‘3 Vs’: volume (the large scale of the data); variety (the different forms of data sets that can now be gathered by digital devices and software); and velocity (the constant generation of these data). An online search of the ‘Vs’ of big data soon reveals that some commentators have augmented these Vs with the following: value (the opportunities offered by big data to generate insights); veracity/validity (the accuracy/truthfulness of big data); virality (the speed at which big data can circulate online); and viscosity (the resistances and frictions in the flow of big data) (see Uprichard, 2013 for a list of even more ‘Vs’).

These characterisations principally come from the worlds of data science and data analytics. From the perspective of critical data researchers, there are different ways in which big data can be described and conceptualised (see the further reading list below for some key works in this literature). Anthropologists Tom Boellstorff and Bill Maurer (2015a) refer to the ‘3 Rs’: relation, recognition and rot. As they explain, big data are always formed and given meaning via relationships with human and nonhuman actors that extend beyond the data themselves; how data are recognised qua data is a sociocultural and political process; and data are susceptible to ‘rot’, or deterioration and transformation, sometimes in unintended ways, as they are purposed and repurposed.

Based on my research and reading of the critical data studies literature, I have generated my own list that can be organised around what I am choosing to call the ‘Thirteen Ps’ of big data. As in any such schema, this ‘Thirteen Ps’ list is reductive, acting as a discursive framework to organise and present ideas. But it is one way to draw attention to the sociocultural dimensions of big data that the ‘Vs’ lists have thus far failed to acknowledge, and to challenge the taken-for-granted attributes of the big data phenomenon.

  1. Portentous: The popular discourse on big data tends to represent the phenomenon as having momentous significance for commercial, managerial, governmental and research purposes.
  2. Perverse: Representations of big data are also ambivalent, demonstrating not only breathless excitement about the opportunities they offer but also fear and anxiety about not being able to exert control over their sheer volume and unceasing generation and the ways in which they are deployed (as evidenced in metaphors of big data that refer to ‘deluges’ and ‘tsunamis’ that threaten to overwhelm us).
  3. Personal: Big data incorporate, aggregate and reveal detailed information about people’s personal behaviours, preferences, relationships, bodily functions and emotions.
  4. Productive: The big data phenomenon is generative in many ways, configuring new or different ways of conceptualising, representing and managing selfhood, the body, social groups, environments, government, the economy and so on.
  5. Partial: Big data can only ever tell a certain narrative, and as such they offer a limited perspective. There are many other ways of telling stories using different forms of knowledges. Big data are also partial in the same way as they are relational: only some phenomena are singled out and labelled as ‘data’, while others are ignored. Furthermore, more big data are collected on some groups than others: those people who do not use or have access to the internet, for example, will be underrepresented in big digital data sets.
  6. Practices: The generation and use of big data sets involve a range of data practices on the part of individuals and organisations, including collecting information about oneself using self-tracking devices, contributing content on social media sites, the harvesting of online transactions by the internet empires and the data mining industry and the development of tools and software to produce, analyse, represent and store big data sets.
  7. Predictive: Predictive analytics using big data are used to make inferences about people’s behaviour. These inferences are becoming influential in optimising or limiting people’s opportunities and life chances, including their access to healthcare, insurance, employment and credit.
  8. Political: Big data is a phenomenon that involves power relations, including struggles over ownership of or access to data sets, the meanings and interpretations that should be attributed to big data, the ways in which digital surveillance is conducted and the exacerbation of socioeconomic disadvantage.
  9. Provocative: The big data phenomenon is controversial. It has provoked much recent debate in response to various scandals and controversies related to the digital surveillance of citizens by national security agencies, the use and misuse of personal data, the commercialisation of data and whether or not big data poses a challenge to the expertise of the academic social sciences.
  10. Privacy: There are growing concerns in relation to the privacy and security of big data sets as people become aware of how their personal data are used for surveillance and marketing purposes, often without their consent or knowledge, and of the vulnerability of digital data to hackers.
  11. Polyvalent: The social, cultural, geographical and temporal contexts in which big data are generated, purposed and repurposed by a multitude of actors and agencies, and the proliferating data profiles on individuals and social groups that big data sets generate give these data many meanings for the different entities involved.
  12. Polymorphous: Big data can take many forms as data sets are generated, combined, manipulated and materialised in different ways, from 2D graphics to 3D-printed objects.
  13. Playful: Generating and materialising big data sets can have a ludic quality: for self-trackers who enjoy collecting and sharing information on themselves or competing with other self-trackers, for example, or for data visualisation experts or data artists who enjoy manipulating big data to produce beautiful graphics.

Critical Data Studies – Further Reading List

Andrejevic, M. (2014) The big data divide, International Journal of Communication, 8,  1673-89.

Boellstorff, T. (2013) Making big data, in theory, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4869/3750>, accessed 8 October 2013.

Boellstorff, T. & Maurer, B. (2015a) Introduction, in T. Boellstorff & B. Maurer (eds.), Data, Now Bigger and Better! (Chicago, IL: Prickly Paradigm Press), 1-6.

Boellstorff, T. & Maurer, B. (eds.) (2015b) Data, Now Bigger and Better! Chicago, IL: Prickly Paradigm Press.

boyd, d. & Crawford, K. (2012) Critical questions for Big Data: provocations for a cultural, technological, and scholarly phenomenon, Information, Communication & Society, 15 (5),  662-79.

Burrows, R. & Savage, M. (2014) After the crisis? Big Data and the methodological challenges of empirical sociology, Big Data & Society, 1 (1).

Cheney-Lippold, J. (2011) A new algorithmic identity: soft biopolitics and the modulation of control, Theory, Culture & Society, 28 (6),  164-81.

Crawford, K. & Schultz, J. (2014) Big data and due process: toward a framework to redress predictive privacy harms, Boston College Law Review, 55 (1),  93-128.

Gitelman, L. & Jackson, V. (2013) Introduction, in L. Gitelman (ed.), Raw Data is an Oxymoron. Cambridge, MA: MIT Press, pp. 1-14.

Helles, R. & Jensen, K.B. (2013) Making data – big data and beyond: Introduction to the special issue, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4860/3748>, accessed 8 October 2013.

Kitchin, R. (2014) The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

Kitchin, R. & Lauriault, T. (2014) Towards critical data studies: charting and unpacking data assemblages and their work, Social Science Research Network. <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474112>, accessed 27 August 2014.

Lupton, D. (2015) ‘Chapter 5: A Critical Sociology of Big Data’ in Digital Sociology. London: Routledge.

Lyon, D. (2014) Surveillance, Snowden, and Big Data: Capacities, consequences, critique, Big Data & Society, 1 (2). <http://bds.sagepub.com/content/1/2/2053951714541861>, accessed 13 December 2014.

Madden, M. (2014) Public Perceptions of Privacy and Security in the post-Snowden Era, Pew Research Internet Project: Pew Research Center.

McCosker, A. & Wilken, R. (2014) Rethinking ‘big data’ as visual knowledge: the sublime and the diagrammatic in data visualisation, Visual Studies, 29 (2),  155-64.

Robinson, D., Yu, H., and Rieke, A. (2014) Civil Rights, Big Data, and Our Algorithmic Future. No place of publication provided: Robinson + Yu.

Ruppert, E. (2013) Rethinking empirical social sciences, Dialogues in Human Geography, 3 (3),  268-73.

Tene, O. & Polonetsky, J. (2013) A theory of creepy: technology, privacy and shifting social norms, Yale Journal of Law & Technology, 16,  59-134.

Thrift, N. (2014) The ‘sentient’ city and what it may portend, Big Data & Society, 1 (1). <http://bds.sagepub.com/content/1/1/2053951714532241.full.pdf+html>, accessed 1 April 2014.

Tinati, R., Halford, S., Carr, L., and Pope, C. (2014) Big data: methodological challenges and approaches for sociological analysis, Sociology, 48 (4),  663-81.

Uprichard, E. (2013) Big data, little questions?, Discover Society, (1). <http://www.discoversociety.org/focus-big-data-little-questions/>, accessed 28 October 2013.

van Dijck, J. (2014) Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology, Surveillance & Society, 12 (2),  197-208.

Vis, F. (2013) A critical reflection on Big Data: considering APIs, researchers and tools as data makers, First Monday, 18 (10). <http://firstmonday.org/ojs/index.php/fm/article/view/4878/3755>, accessed 27 October 2013.

Changing representations of self-tracking

I recently completed a chapter for a book on lifelogging that discussed the concepts and uses of data as they are expressed in representations of self-tracking (see here for the full paper, available open access). In part of the chapter I looked at the ways in which people writing about the quantified self and other interpretations of self-tracking represent data and data practices, including in articles published in Wired magazine and other media outlets and blogs.

From the beginning of discussions of the quantified self, the representation of data in quantified self-tracking discourses (at least as it was expressed by its progenitors) included several factors. These include the following: quantified data are powerful entities; it is important not only to collect quantified data on oneself, but to analyse these data for the patterns and insights they reveal; data (and particularly quantified or quantifiable data) are an avenue to self-knowledge; the emergence of new digital and mobile devices for gathering information about oneself has facilitated self-tracking and the generation of quantified personal data; quantifiable data are more neutral, reliable, intellectual and objective than qualitative data, which are intuitive, emotional and subjective; self-tracked data can provide greater insights than the information that a person receives from their senses, revealing previously hidden patterns or correlations; self-tracked data can be motivational phenomena, inspiring action, by entering into a feedback loop; everything can be rendered as data; and data about individuals are emblematic of their true selves.

In more recent times, however, it is evident that a further set of concepts about self-tracked data have emerged since the original euphoria of the early accounts of quantified self-tracking. They include: the meaning of self-tracked data can be difficult to interpret; personal data can be disempowering as well as empowering; the conditions in which data are gathered can influence their validity; the contexts in which data are generated are vital to understanding their meaning; individuals’ personal data are not necessarily secure or private; quantified personal data can be reductive; and personal data can be used to discriminate against individuals.

We as yet know very little about how people are conceptualising and engaging with digital data about themselves. Given the recent scandals about how people’s personal data may be hacked or used or manipulated without their knowledge (the Snowden revelations about security agencies’ use of metadata, the Facebook emotional manipulation experiment, the celebrity nude photo and Sony Pictures hackings, for example), as well as growing coverage of the potentially negative implications of self-tracking as described above, these are pressing issues.

Edit (12 December 2015): More on this topic can be found in my book The Quantified Self: A Sociology of Self-Tracking Cultures.


Big Data Cultures symposium abstracts

Tomorrow the Big Data Cultures symposium that I have convened at the University of Canberra is taking place. There is a very interesting program from a range of Australian academics working on the social, cultural and political dimensions of the big data phenomenon. Here are the abstracts:

Keynote: ‘Visual dimensions’

Greg More, RMIT University

It’s a small problem for data to scale, but a wicked problem for us to make sense of big data that scales to infinity. The aim of this article is to explore the translation of data into geometrical relationships: the art and design of creative forms of data visualisation to give data a meaningful visual dimension. Data has dimensionality, but not in a geometrical sense. Topology – the mathematical study of shape – will be used as a lens to examine projects where designers utilise metaphors and abstraction to construct visual languages for data. Consider this a cartography of data that makes sense of a scaleless territory. What is important in this examination is how the designers of data visualisations understand the character of the data itself – the texture, nuance and signal contained within the information – and use this to make data tangible and at a scale we can interact with.

‘To hold a social form in your hand: how far are interactive holograms of social data?’

Alexia Maddox, Deakin University and Curtin University

Starting with the question, ‘can we reanimate social data into three-dimensional forms?’, this paper explores the possibility of presenting research findings in three-dimensional formats. These formats could include information that we can print through 3D printers or animate through interactive holograms. This paper will interrogate this approach to data presentation and discuss from a sociological point of view the ways it could engage with Big Data. The combination of visual presentation derived from digital trace data provides us with a lens through which to investigate social patterns and trends. Building data into three-dimensional formats has the capacity to enhance the cognitive literacy of information and its presentation to diverse stakeholders. A social surface, that which is defined by form, needs a conceptual framework upon which to gain dynamic presence and dimension in space. Through my research into the herpetological community, I explored the interior structures of community and patterns of socio-technical engagement. The resulting conceptual approach from this work seeks to situate mediated sociability within social ecologies and build social data into social form. This environmental approach aligns with current trends in geodemographic analysis and incorporates the socio-technical actor that moves beyond physical space and into virtual terrains. The challenge of this conceptual approach is to explore how Big Data can be incorporated as environmental information or digital trace data.

‘Stranded deviations: Big Data and the contextually marginalised’

Andrew McNicol, University of New South Wales

As social and practical interactions have moved to the digital realm, facilitated by technological breakthroughs and social pressures, many have become understandably concerned about user privacy. With the increased scale and complexity of stored information, giving rise to the term ‘Big Data’, the potential for another person to scrutinise our personal information in a way that makes us uncomfortable increases. However, as attention is a finite resource, in the majority of cases user information never comes under scrutiny by unwanted human eyes – it is lost in the noise and is only treated as data available for computational analysis. In a big data society, privacy breaches increasingly occur as a result of algorithms allowing targets to emerge from data sets. This means that in any context certain individuals become disproportionately targeted for unwanted privacy breaches, and those who are regularly contextually marginalised have the most to lose from participating in a culture of Big Data, raising issues of equal access. In this paper I bring these ideas together to argue that the privacy discourse should not only focus on the potential for scrutiny of personal data, but also on the systems in place, both social and technological, that facilitate an environment where some users are safer than others.

‘Health, big data and the culture of irresponsibility’

Bruce Baer Arnold and Wendy Bonython, University of Canberra

The analysis of whole-of-population clinical, hospital and genomic data offers potential major benefits regarding improved public health administration, pharmaceutical research and wellness through identification of susceptibilities to health conditions. Achievement of those benefits will be fundamentally inhibited by a ‘health big data culture of irresponsibility’ in the public and private sectors. This paper critiques health big data cultures through reference to problematical initiatives such as 23andMe (a global direct-to-consumer DNA service) and the mismanaged release of weakly de-identified health data covering millions of people in the UK. It notes whole-of-population health data mining projects such as those involving deCODE (Iceland) and Maccabi-Merck (Israel) that are more problematical than the so-called ‘vampire project’ involving Indigenous peoples. It draws on the authors’ work regarding privacy, bioethics, consumer protection and the OECD Health Information Infrastructure initiative. It highlights the need for coherent national and global health data management frameworks that address issues such as the genomic commons, intergenerational implications of genetic data and insurance redlining. It also highlights questions about media representations of big data governance.

‘Public problems for the digital humanities: debating Big Data methodologies, legitimating institutional knowledges’

Grant Bollmer, University of Sydney

While Big Data have clear implications for the knowledges produced by the social sciences, the various practices of the Digital Humanities have taken the methods associated with Big Data and applied them to objects rarely thought to be ‘Big’ or even ‘Data’. Scholars have used computation to examine literary history, visualising massive literary data sets in ways that make claims that, methodologically at least, are often perceived as threats to the humanities at a moment when traditional methods of teaching and performing humanistic scholarship are likewise under attack from a corporatized managerial university system. This paper uses the debates surrounding the Digital Humanities to investigate the political and institutional arguments that have emerged around Big Data methodologies in the humanities, along with the contrasting knowledge claims that ground these debates. I argue that, in their emphasis on methodology, these discussions overlook how academic publics have been transformed over the past decades. I suggest that normative claims about Big Data in the humanities must investigate its ‘public problems’ – moments in which a specific culture defined around the technologically mediated circulation of discourse produces internal norms that are concealed for the sake of external legitimation and funding.

‘Big data’s golems: bots as a technique of tactical media’

Chris Rodley, University of Sydney

Big data has enabled the creation of a diverse range of bots which collect, analyse and process digital information programmatically. While corporations and political parties were early adopters of bots, a growing number of activists, artists and programmers have recently begun to create their own data-driven bots on social platforms such as Twitter as a way of critiquing or disrupting dominant discourses. This paper considers a selection of bots created to comment on issues including NSA surveillance and gun control, arguing that they represent a radical departure from the Situationist strategy of détournement or the tactical disruptions envisaged by Michel de Certeau. It considers the ethics of adopting the techniques of the sensor society – or what Mark Andrejevic has termed ‘drone logic’ – and the implications of bots entering the public sphere as semi-autonomous political actors. Like the Golem of Prague in Jewish folklore, these personifications of big data may simultaneously represent both a powerful defensive strategy and a potentially destructive, uncontrollable force.

‘“Paranoid nominalism” as cultural technique of the quantified self’

Christopher O’Neill, University of Melbourne

The Quantified Self movement constitutes a growing community of those committed to practices of self-tracking through mobile sensors and apps. This paper will offer a critique of contemporary Quantified Self discourse, arguing that it is characterised by a certain ‘paranoid nominalism’: that is, an inability to ‘reconcile’ the intimacy of sensors with the abstraction of statistical technologies. This critique shall be pursued through a genealogical investigation of the precursors of some of the key technologies of the Quantified Self movement, especially Étienne-Jules Marey’s work on developing a ‘second positivism’ through sensor technologies, and Adolphe Quetelet’s production of statistical technologies of governance. Drawing on the ‘cultural techniques’ approach of media theory, this paper will investigate these technological prehistories of the Quantified Self movement in order to probe its ideological aporias.

‘There’s an app for that: digital culture and the rise of technologism’

Doug Lorman, Deakin University

Humans have always used technology to overcome bodily and mental boundaries and limitations in the pursuit of personal transcendence. The development of digital technologies such as ‘apps’ and wearable technology has helped to further this pursuit. Digital technologies allow us to collect, store and analyse data on ourselves and take appropriate action. The growth of self-quantification means that technology is no longer disconnected from us, but is part of being human. Technology and its user are mutually constitutive; one influences the other.

The benefits of self-quantification have been touted elsewhere. My concern is that, with our inherent desire to conquer nature and override the natural way of doing things, we are placing an inordinate amount of faith in the ability of technology to resolve our issues. My talk will argue that the development of a blind faith in digital technologies is creating a phenomenon I call technologism: the belief that technological outputs or results (big data) are the absolute and only justifiable solutions to personal issues. The result is that we pay less attention to our surroundings and our lived events, and instead put our faith in technology, relying on it to guide us, help us, heal us, and so on.

‘Database activism’

Mathieu O’Neil, University of Canberra

When data were rare, the focus lay in finding and collecting them. Now that there is an overabundance of data, databases have assumed a central role in the sorting, organising, querying and representation of data. In the realm of science, databases operate both as scientific instruments and as a means of communicating results (Hine 2006). Similarly, in the news media field, journalists are increasingly using databases to render the flow of data meaningful and, through visualisation, to make important and pertinent information memorable. Like scientists, data journalists have to be concerned with the integrity of data and present their methods and findings; database literacy is increasingly framed as a mandatory journalistic skill. At the same time, the reliance on databases has led to the emergence of new forms of collective emotions and indignations (Parasie 2013). Unlike journalists, “civic hackers” (such as maplight.org, which tracks the influence of money on US politics) do not aim to reveal victims and guilty parties hidden in the data, or to organise collective indignations. Data themselves are seen as held captive by governing authorities and as needing to be freed: civic hackers reveal, without denouncing.

Hine, C. (2006) “Databases as scientific instruments and their role in the ordering of scientific work”, Social Studies of Science 36(2), pp. 269-298.

Parasie, S. (2013) “Des machines à scandale. Éléments pour une sociologie morale des bases de données” [Scandal machines: elements for a moral sociology of databases], Réseaux 178-179, pp. 127-161.
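As an editorial illustration of the abstract above (the table and figures below are invented, not taken from maplight.org or any real dataset), the database work described there often comes down to a simple aggregation query: load donation records into a database, ask which donors gave the most to whom, and then visualise or publish the result.

```python
# Hypothetical sketch of a data-journalism style query: aggregate invented
# political donation records with an in-memory SQLite database.
import sqlite3

rows = [
    ("Acme Corp", "Candidate A", 50000),
    ("Acme Corp", "Candidate B", 20000),
    ("Citizens Group", "Candidate A", 1500),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE donations (donor TEXT, recipient TEXT, amount INTEGER)")
conn.executemany("INSERT INTO donations VALUES (?, ?, ?)", rows)

# Which donors gave the most overall? Queries of this sort underpin both
# data journalism and civic-hacking sites that aim to 'free' such data.
query = """
    SELECT donor, SUM(amount) AS total
    FROM donations
    GROUP BY donor
    ORDER BY total DESC
"""
for donor, total in conn.execute(query):
    print(donor, total)
```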

‘Disability data cultures’

Gerard Goggin, University of Sydney

A fascinating, cross-cutting case study in big data cultures lies in the dynamic, evolving, and contested space of contemporary disability and digital technology. Disability is now recognized as a significant part of social life, identity, and the life course. Over the past twenty years, digital technologies – especially computers, the Internet, mobile media, social media, apps, geolocation technologies, and now wearable computers and even technologies such as driverless cars – have emerged as a significant part of the mediascape, cultural infrastructure, social support system, and personal identity and repertoire of many people with disabilities. New social relations of disability are premised on – and increasingly ‘congealed’ in – forms of digital technology. In the Australian context, we might think, for instance, of the present conjuncture and its coincidence of two big national projects in which disability and digital technology are entangled – the National Disability Insurance Scheme (NDIS) and the National Broadband Network (NBN).

There is an emerging research, policy, design, and activist engagement with disability and digital technology, but questions of disability and big data have not yet been well canvassed. This is significant given that, historically, the emergence of forms of data concerning disability has been bound up with classification, exclusion, government, and discrimination, as well as with the new forms of knowledge and governmentality associated with new socially oriented models and paradigms of disability.

Accordingly, this paper provides a preliminary exploration of the forms, affordances, characteristics, issues, challenges, ethics, and possibilities of what might be termed ‘disability data cultures’. Firstly, I identify and discuss particular kinds of digital technologies, infrastructures, and software, and their distinctive affordances and design trajectories relating to disability. As well as explicitly nominated and dedicated disability data technologies, I also discuss the emergence of health, self-tracking, and quantified-self apps by which normalcy and ability are exnominated (or naturalized). Secondly, I look at the kinds of applications, harvesting, computational logics, and the will to power emerging in order to provide more comprehensive and targeted data on disability – for citizens and users; service, political, and cultural intermediaries; and disability service providers, agencies, and governments. Thirdly, I look at the nascent disability-inflected contribution to, and participation in, open data and citizen data initiatives and experiments.

‘Theoretical perspectives on privacy, selfhood and big data’

Janice Richardson, Monash University

Big data practices produce specific anxieties about privacy, based upon the fact that information about us, of which we were previously unaware, may be revealed to our detriment. The concerns of the “masters of suspicion” (Nietzsche, Marx, Freud) provide a cultural background view that some important aspects of our lives are hidden or inaccessible to us. This framework has given way to the Foucauldian position that big data could be characterised as having the potential to create new ways in which we are categorised, rather than as revealing our hidden essence or truth. However, this shift from revelation to construction does nothing to undermine our need to control such potentially harmful practices by both companies and government. As a result, it is necessary to consider how to conceptualise an ethical basis for such privacy claims, which arise from the unpredictable knowledge that is produced rather than from a breach of confidence concerning pre-existing knowledge. I consider the potential for Spinoza – and his distinction between adequate and inadequate knowledge – to provide such a framework.

‘Big data/surveillant assemblages, interfaces, and user experiences: the cultivation of the docile data subject’

Ashlin Lee, University of Tasmania

The phenomenon of big data represents a socio-technical assemblage of services and devices that are involved in data collection and analysis. One example of this is personal ‘sensor’ devices (Andrejevic and Burdon 2014), like smartphones. Here users are interfaced into big data: they simultaneously use big data for their own needs while fuelling it with their personal information and being the target of data collection and dataveillance/surveillance. Given the popularity of these devices, it is important to consider what implications this interfacing has, and what relationship exists between users and big data/surveillance. This paper describes the results of empirical research into users and their interfaces – conceptualised under Lee’s (2013) idea of convergent mobile technologies (CMTs) – and the implications of user interfacing with big data and surveillance. Highlighted is how these interfaces valorise user experiences that are ‘immediate’ over all others. In the context of users’ relationship with big data this is problematic, as users dismiss or disengage from issues of security and surveillance as long as rapidity is maintained. These CMT interfaces can thus be understood as contributing to the creation of ‘docile data subjects’, who happily bleed personal information into the big data (and surveillant) assemblage(s) in exchange for an experiential state deemed valuable.

‘Altmetrics in policy communication: investigating informal policy actors using social media data’

Fiona Martin and Jonathon Hutchinson, University of Sydney

For nearly a decade citizens have taken to social media to launch public conversations and connective action around issues of civic concern – conversations which have various impacts on the shaping of policy, regulation and governance. Now Facebook, LinkedIn and Twitter are increasingly being used to build, inform and influence informal expert networks, particularly around emerging technologies and practices, and their associated policy problems. Such networks link actors from data cultures such as computing science and medical research to those in hybrid industrial ecologies, like that of mobile health software development. Their conversations are often transnational. They promote and market as much as debate and mobilise. Thus they complicate Gov 2.0 assumptions about democratic participation and engagement, as well as data security. In this paper we argue that it is vital to have new analytic frameworks to measure and evaluate the identity, reach, and relative agency of actors in those networks, in order to understand their potential impact on policy development.

We model one such framework – a mixed-method social media network analysis (SNMA) and digital ethnography used to analyse agency and influence in Twitter conversations about mhealth. Using hashtagged conversations captured in the wake of the U.S. Food & Drug Administration’s September 2013 release of guidelines on Mobile Medical Applications, we visualize the network communications, then locate and profile the key influencers, exploring their motivations for engagement. Drawing on these data and on altmetrics research, we discuss registers of impact in expert social media networks and propose a research agenda for exploring the political, cultural and economic value of Twitter conversations in policy formation.
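By way of illustration only (this is not the authors’ pipeline, and the tweets and account names below are invented), one common way to operationalise the location of key influencers in a hashtagged conversation is to build a directed mention network and rank accounts with a centrality measure, here using the networkx library.

```python
# Illustrative sketch: rank accounts in a hashtag conversation by building a
# directed @-mention network and computing PageRank. All data are invented.
import re

import networkx as nx

tweets = [
    {"author": "health_dev", "text": "New @FDA guidance on #mhealth apps via @mobinews"},
    {"author": "policy_watch", "text": "@health_dev great summary of the #mhealth rules"},
    {"author": "clinic_io", "text": "Our take on the @FDA #mhealth guidelines"},
]

G = nx.DiGraph()
for tweet in tweets:
    # Add an edge from the author to each account they mention.
    for mention in re.findall(r"@(\w+)", tweet["text"]):
        G.add_edge(tweet["author"], mention)

# PageRank as a rough proxy for influence within the conversation network.
for account, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```

A ranking like this would only ever be one input alongside the digital ethnography the authors describe, since centrality scores say nothing about an account’s motivations for engagement.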

‘Capturing capacity: quantified self technologies and the commodification of affect’

Miranda Bruce, Australian National University

The Quantified Self (QS) movement is part of a growing technological trend that exploits and modulates the potential of human life. QS finds ways to quantify the active and passive dimensions of the daily processes of human existence, in order to extract meaning from them and modify the ways that we move in and through the world. This paper will explore, firstly, the idea that QS represents a commodification of human capacity, an extraction of power, or a form of immaterial labour consistent with the logic of neoliberal capitalism. I will then turn to Deleuzian affect theory to open up a quite different ontological and ultimately practical approach to the problem of QS, which stresses the excessive, and thus un-capturable, nature of lived potential. Finally, I will offer some reflections on the relationship of this technology to broader trends concerning the technological modulation of human capacity.

 ‘Live data/sociology: what digital sociologists can learn from artists’ responses to big data’

Deborah Lupton, University of Canberra

The big data phenomenon has attracted much publicity in public forums, both for its potential to offer insights into manifold aspects of social and economic life and for its negative associations with mass surveillance and the reduction of the complexity of behaviour into quantifiable data. In this paper I will discuss some of the ways in which artists have responded to big data. I contend that their conceptualisations and critiques of big data offer intriguing insights into the tacit assumptions and emotions (fears and anxieties as well as pleasures and satisfactions) that these digitised methods of knowledge production engender. Digital data are lively in a number of ways: they have become forms of ‘lively capital’ (that is, drawing commercial value from human embodiment, or life itself); they generate embodied and affective responses; they contribute recursively to life itself; and they have a social life of their own, constantly circulating and transforming as they are appropriated and re-purposed. Artists’ responses can contribute to what might be described as a ‘live data/sociology’ (drawing on Les Back’s concept of a ‘live sociology’ that departs from ‘zombie sociology’) which identifies and theorises the forms of liveliness that big digital data may encompass.

Call for papers: Big Data Cultures symposium

I am convening a one-day symposium, to be held on Monday 15 September 2014, that addresses the social, cultural, political and ethical issues and implications of the big data phenomenon. It will be hosted by the News & Media Research Centre, University of Canberra, Australia.

A keynote speaker will open proceedings (details to be confirmed), but paper abstracts from any interested contributors are invited for consideration. Appropriate topics may include, but are not limited to, the following areas:

– privacy, security and legal issues
– how big data are changing forms of governance and commercial operations
– big data ecosystems
– the open data/citizen data movement
– data hacktivism and queering big data
– public understandings of big data
– surveillance and big data
– creative forms of data visualisation
– self-tracking and the quantified self
– data doubles and data selves
– the materiality of digital data
– the social lives of digital data-objects
– algorithmic identities and publics
– code acts
– responses to big data from artists and designers

Abstracts of 150-200 words should be submitted to me (deborah.lupton@canberra.edu.au) by 1 July 2014 for consideration for inclusion in the symposium. Please contact me if you require any further information.