Tomorrow the Big Data Cultures symposium that I have convened at the University of Canberra is taking place. There is a very interesting program from a range of Australian academics working on the social, cultural and political dimensions of the big data phenomenon. Here are the abstracts:
Keynote: ‘Visual dimensions’
Greg More, RMIT University
It’s a small problem for data to scale, but a wicked problem for us to make sense of big data that scales to infinity. The aim of this article is to explore the translation of data into geometrical relationships: the art and design of creative forms of data visualisation to give data a meaningful visual dimension. Data has dimensionality, but not in a geometrical sense. Topology – the mathematical study of shape – will be used as a lens to examine projects where designers utilise metaphors and abstraction to construct visual languages for data. Consider this a cartography of data that makes sense of a scaleless territory. What is important in this examination is how the designers of data visualisations understand the character of the data itself – the texture, nuance and signal contained within the information – and use this to make data tangible and at a scale we can interact with.
‘To hold a social form in your hand: how far are interactive holograms of social data?’
Alexia Maddox, Deakin University and Curtin University
Starting with the question ‘can we reanimate social data into three-dimensional forms?’, this paper explores the possibility of presenting research findings in three-dimensional formats. These formats could include information that we can print through 3D printers or animate through interactive holograms. This paper will interrogate this approach to data presentation and discuss, from a sociological point of view, the ways it could engage with Big Data. Visual presentation derived from digital trace data provides us with a lens through which to investigate social patterns and trends. Building data into three-dimensional formats has the capacity to enhance the cognitive literacy of information and its presentation to diverse stakeholders. A social surface – that which is defined by form – needs a conceptual framework upon which to gain dynamic presence and dimension in space. Through my research into the Herpetological community, I explored the interior structures of community and patterns of socio-technical engagement. The resulting conceptual approach seeks to situate mediated sociability within social ecologies and build social data into social form. This environmental approach aligns with current trends in geodemographic analysis and incorporates the socio-technical actor who moves beyond physical space and into virtual terrains. The challenge of this conceptual approach is to explore how Big Data can be incorporated as environmental information or digital trace data.
‘Stranded deviations: Big Data and the contextually marginalised’
Andrew McNicol, University of New South Wales
As social and practical interactions have moved to the digital realm, facilitated by technological breakthroughs and social pressures, many have become understandably concerned about user privacy. With the increased scale and complexity of stored information, giving rise to the term ‘Big Data’, the potential for another person to scrutinise our personal information in a way that makes us uncomfortable increases. However, as attention is a finite resource, in the majority of cases user information never comes under scrutiny by unwanted human eyes – it is lost in the noise and is only treated as data available for computational analysis. In a big data society, privacy breaches increasingly occur as a result of algorithms allowing targets to emerge from data sets. This means that in any context certain individuals become disproportionately targeted for unwanted privacy breaches, and those who are regularly contextually marginalised have the most to lose from participating in a culture of Big Data, raising issues of equal access. In this paper I bring these ideas together to argue that privacy discourse should focus not only on the potential for scrutiny of personal data, but also on the systems in place, both social and technological, that facilitate an environment where some users are safer than others.
‘Health, big data and the culture of irresponsibility’
Bruce Baer Arnold and Wendy Bonython, University of Canberra
The analysis of whole-of-population clinical, hospital and genomic data offers potential major benefits for improved public health administration, pharmaceutical research and wellness through the identification of susceptibilities to health conditions. Achievement of those benefits will be fundamentally inhibited by a ‘health big data culture of irresponsibility’ in the public and private sectors. This paper critiques health big data cultures through reference to problematical initiatives such as 23andMe (a global direct-to-consumer DNA service) and the mismanaged release of weakly de-identified health data covering millions of people in the UK. It notes whole-of-population health data mining projects, such as those involving DeCODE (Iceland) and Maccabi-Merck (Israel), that are more problematical than the so-called ‘vampire project’ involving Indigenous peoples. It draws on the authors’ work regarding privacy, bioethics, consumer protection and the OECD Health Information Infrastructure initiative. It highlights the need for coherent national and global health data management frameworks that address issues such as the genomic commons, the intergenerational implications of genetic data and insurance redlining. It also highlights questions about media representations of big data governance.
‘Public problems for the digital humanities: debating Big Data methodologies, legitimating institutional knowledges’
Grant Bollmer, University of Sydney
While Big Data have clear implications for the knowledges produced by the social sciences, the various practices of the Digital Humanities have taken the methods associated with Big Data and applied them to objects rarely thought to be ‘Big’ or even ‘Data’. Scholars have used computation to examine literary history, visualising massive literary data sets to make claims that, methodologically at least, are often perceived as threats to the humanities at a moment when traditional methods of teaching and performing humanistic scholarship are likewise under attack from a corporatized managerial university system. This paper uses the debates surrounding the Digital Humanities to investigate the political and institutional arguments that have emerged around Big Data methodologies in the humanities, along with the contrasting knowledge claims that ground these debates. I argue that, in their emphasis on methodology, these discussions overlook how academic publics have been transformed over the past decades. I suggest that normative claims about Big Data in the humanities must investigate its ‘public problems’ – moments in which a specific culture defined around the technologically mediated circulation of discourse produces internal norms that are concealed for the sake of external legitimation and funding.
‘Big data’s golems: bots as a technique of tactical media’
Chris Rodley, University of Sydney
Big data has enabled the creation of a diverse range of bots which collect, analyse and process digital information programmatically. While corporations and political parties were early adopters of bots, a growing number of activists, artists and programmers have recently begun to create their own data-driven bots on social platforms such as Twitter as a way of critiquing or disrupting dominant discourses. This paper considers a selection of bots created to comment on issues including NSA surveillance and gun control, arguing that they represent a radical departure from the Situationist strategy of détournement or the tactical disruptions envisaged by Michel de Certeau. It considers the ethics of adopting the techniques of the sensor society – or what Mark Andrejevic has termed “drone logic” – and the implications of bots entering the public sphere as semi-autonomous political actors. Like the Golem of Prague in Jewish folklore, these personifications of big data may represent both a powerful defensive strategy and a potentially destructive, uncontrollable force.
‘“Paranoid nominalism” as cultural technique of the quantified self’
Christopher O’Neill, University of Melbourne
The Quantified Self movement constitutes a growing community of those committed to practices of self-tracking through mobile sensors and apps. This paper will offer a critique of contemporary Quantified Self discourse, arguing that it is characterised by a certain ‘paranoid nominalism’ – that is, an inability to ‘reconcile’ the intimacy of sensors with the abstraction of statistical technologies. This critique shall be pursued through a genealogical investigation of the precursors of some of the key technologies of the Quantified Self movement, especially Étienne-Jules Marey’s work on developing a ‘second positivism’ through sensor technologies, and Adolphe Quetelet’s production of statistical technologies of governance. Drawing on the ‘cultural techniques’ approach of media theory, this paper will investigate these technological prehistories of the Quantified Self movement in order to probe its ideological aporias.
‘There’s an app for that: digital culture and the rise of technologism’
Doug Lorman, Deakin University
Humans have always used technology to overcome bodily and mental boundaries and limitations in the pursuit of personal transcendence. The development of digital technologies such as ‘apps’ and wearable technology has helped to further this pursuit. Digital technologies allow us to collect, store and analyse data on ourselves and take appropriate action. The growth of self-quantification means that technology is no longer disconnected from us, but is part of being human. Technology and its user are mutually constitutive; one influences the other.
The benefits of self-quantification have been touted elsewhere. My concern is that, with our inherent desire to conquer nature and override the natural way of doing things, we are placing an inordinate amount of faith in the ability of technology to resolve our issues. My talk will argue that the development of a blind faith in digital technologies is creating a phenomenon I call technologism: the belief that technological outputs or results (big data) are the absolute and only justifiable solutions to personal issues. The result is that we pay less attention to our surroundings and our lived events, and instead put our faith in technology, relying on it to guide us, help us, heal us, and so on.
Mathieu O’Neil, University of Canberra
When data was rare, the focus lay in finding it and collecting it. Now that there is an overabundance of data, databases have assumed a central role in the sorting, organising, querying and representation of data. In the realm of science, databases operate both as scientific instruments and as a means of communicating results (Hine 2006). Similarly, in the news media field, journalists are increasingly using databases to render the flow of data meaningful and, through visualisation, to make important and pertinent information memorable. Like scientists, data journalists have to be concerned with the integrity of data and present their methods and findings; database literacy is increasingly framed as a mandatory journalistic skill. At the same time, the reliance on databases has led to the emergence of new forms of collective emotions and indignations (Parasie 2013). Unlike journalists, “civic hackers” (such as maplight.org, which tracks the influence of money on US politics) do not aim to reveal victims and guilty parties hidden in the data, or to organise collective indignations. Data itself is held to be captive to governing authorities and must be freed: civic hackers reveal, without denouncing.
Hine, C. (2006) “Databases as scientific instruments and their role in the ordering of scientific work”, Social Studies of Science 36(2), pp. 269-298.
Parasie, S. (2013) “Des machines à scandale. Éléments pour une sociologie morale des bases de données”, Réseaux 178-179, pp. 127-161.
‘Disability data cultures’
Gerard Goggin, University of Sydney
A fascinating, cross-cutting case study in big data cultures lies in the dynamic, evolving, and contested space of contemporary disability and digital technology. Disability is now recognized as a significant part of social life, identity, and the life course. Over the past twenty years, digital technology – especially computers, the Internet, mobile media, social media, apps, geolocation technologies, and now, wearable computers, and even technologies such as driverless cars – has emerged as a significant part of the mediascape, cultural infrastructure, social support system, and personal identity and repertoire of many people with disabilities. New social relations of disability are premised on – and increasingly ‘congealed’ in – forms of digital technology. In the Australian context, we might think, for instance, of the present conjuncture and its coincidence of two big national projects where disability and digital technology are both entangled – the National Disability Insurance Scheme (NDIS) and the National Broadband Network (NBN).
There is an emerging research, policy, design, and activist engagement with disability and digital technology, but as yet questions of disability and big data have not been so well canvassed. This is significant given that, historically, the emergence of forms of data concerning disability has been bound up with classification, exclusion, government, and discrimination, as well as the new forms of knowledge and governmentality associated with new socially oriented models and paradigms of disability.
Accordingly, this paper provides a preliminary exploration of the forms, affordances, characteristics, issues, challenges, ethics, and possibilities of what might be termed ‘disability data cultures’. Firstly, I identify and discuss particular kinds of digital technologies, infrastructures, and software, and their distinctive affordances and design trajectories relating to disability. As well as explicitly nominated and dedicated disability data technologies, I also discuss the emergence of health, self-tracking, and quantified self apps by which normalcy and ability are exnominated (or naturalized). Secondly, I look at the kinds of applications, harvesting, computational logics, and the will to power, emerging in order to provide more comprehensive and targeted data on disability – for citizens and users, and service, political, and cultural intermediaries, as well as disability service providers, agencies, and governments. Thirdly, I look at the nascent disability-inflected contribution to, and participation in, open data and citizen data initiatives and experiments.
‘Theoretical perspectives on privacy, selfhood and big data’
Janice Richardson, Monash University
Big data practices produce specific anxieties about privacy, based upon the fact that information about us, of which we were previously unaware, may be revealed to our detriment. The concerns of the “masters of suspicion” (Nietzsche, Marx, Freud) provide a cultural background view that some important aspects of our lives are hidden or inaccessible to us. This framework has given way to the Foucauldian position that big data could be characterised as having the potential to create new ways in which we are categorised, rather than revealing our hidden essence or truth. However, this shift from revelation to construction does nothing to undermine our need to control such potentially harmful practices by both companies and government. As a result, it is necessary to consider how to conceptualise an ethical basis for such privacy claims, which arise from the unpredictable knowledge that is produced rather than from a breach of confidence in pre-existing knowledge. I consider the potential for Spinoza – and his distinction between adequate and inadequate knowledge – to provide such a framework.
‘Big data/surveillant assemblages, interfaces, and user experiences: the cultivation of the docile data subject’
Ashlin Lee, University of Tasmania
The phenomenon of big data represents a socio-technical assemblage of services and devices involved in data collection and analysis. One example is personal ‘sensor’ devices (Andrejevic and Burdon 2014), like smartphones. Here users are interfaced into big data, simultaneously using big data for their own needs while fuelling it with their personal information, becoming the target of data collection and dataveillance/surveillance. Given the popularity of these devices, it is important to consider the implications of this interfacing and the relationship between users and big data/surveillance. This paper describes the results of empirical research into users and their interfaces – conceptualised under Lee’s (2013) idea of convergent mobile technologies (CMTs) – and the implications of user interfacing with big data and surveillance. It highlights how these interfaces valorise user experiences that are ‘immediate’ over all others. In the context of their relationship with big data this is problematic, as users dismiss or disengage from issues of security and surveillance as long as rapidity is maintained. These CMT interfaces can thus be understood as contributing to the creation of ‘docile data subjects’, who happily bleed personal information into the big data (and surveillant) assemblage(s) in exchange for an experiential state deemed valuable.
‘Altmetrics in policy communication: investigating informal policy actors using social media data’
Fiona Martin and Jonathon Hutchinson, University of Sydney
For nearly a decade citizens have taken to social media to launch public conversations and connective action around issues of civic concern – conversations which have various impacts on the shaping of policy, regulation and governance. Now Facebook, LinkedIn and Twitter are increasingly being used to build, inform and influence informal expert networks, particularly around emerging technologies and practices, and their associated policy problems. Such networks link actors from data cultures such as computing science and medical research to those in hybrid industrial ecologies, like that of mobile health software development. Their conversations are often transnational. They promote and market as much as debate and mobilise. Thus they complicate Gov 2.0 assumptions about democratic participation and engagement, as well as data security. In this paper we argue that it is vital to have new analytic frameworks to measure and evaluate the identity, reach, and relative agency of actors in those networks, in order to understand their potential impact on policy development.
We model one such framework – a mixed-method social media network analysis (SMNA) and digital ethnography used to analyse agency and influence in Twitter conversations about mhealth. Using hashtagged conversations captured in the wake of the U.S. Food & Drug Administration’s September 2013 release of guidelines on Mobile Medical Applications, we visualize the network communications, then locate and profile the key influencers, exploring their motivations for engagement. Drawing on this data and altmetrics research, we discuss registers of impact in expert social media networks and propose a research agenda for exploring the political, cultural and economic value of Twitter conversations in policy formation.
‘Capturing capacity: quantified self technologies and the commodification of affect’
Miranda Bruce, Australian National University
The Quantified Self (QS) movement is part of a growing technological trend that exploits and modulates the potential of human life. QS finds ways to quantify the active and passive dimensions of the daily processes of human existence, in order to extract meaning from them and modify the ways that we move in and through the world. This paper will explore, firstly, the idea that QS represents a commodification of human capacity, an extraction of power, or form of immaterial labour consistent with the logic of neoliberal capitalism. I will then turn to Deleuzian affect theory to open up a quite different ontological and ultimately practical approach to the problem of QS, which stresses the excessive, and thus un-capturable, nature of lived potential. Finally, I will offer some reflections on the relationship of this technology to broader trends concerning the technological modulation of human capacity.
‘Live data/sociology: what digital sociologists can learn from artists’ responses to big data’
Deborah Lupton, University of Canberra
The big data phenomenon has attracted much publicity in public forums, both in terms of its potential for offering insights into manifold aspects of social and economic life and for its negative associations with mass surveillance and the reduction of the complexity of behaviour into quantifiable data. In this paper I will discuss some of the ways in which artists have responded to big data. I contend that their conceptualisations and critiques of big data offer intriguing insights into the tacit assumptions and emotions (fears and anxieties as well as pleasures and satisfactions) that these digitised methods of knowledge production engender. Digital data are lively in a number of ways: they have become forms of ‘lively capital’ (that is, drawing commercial value from human embodiment, or life itself); they generate embodied and affective responses; they contribute recursively to life itself; and they have a social life of their own, constantly circulating and transforming as they are appropriated and re-purposed. Artists’ responses can contribute to what might be described as a ‘live data/sociology’ (drawing on Les Back’s concept of a ‘live sociology’ that departs from ‘zombie sociology’) which identifies and theorises the forms of liveliness that big digital data may encompass.