Welcome to the first issue of Computational Culture! This is a journal that aims to provide a space for emerging kinds of thinking and practice, aligned with, but not limited to, the growing field of software studies. Software, and computation more broadly, has become fundamental to almost every aspect of daily life. As computation enfolds itself into a myriad of processes, and invents others, it also changes, demanding in turn new ways of thinking about culture, politics, power and experiment, ranging in scale from the intimate to the infrastructural, and new ways of understanding and making computing and software. Between the fields that are broadly categorisable as the humanities, arts and social sciences, and those associated with computing, ranging from mathematics through engineering and HCI, alongside more informal forms of knowledge such as hacking and digital art, to name but a few, we can begin to trace developing lines of affinity and understanding, new figurations of both the cultural and the computational. It is in this tangled matrix that we aim to open up a space for discussion and research.
This first issue of Computational Culture is loosely based on the proceedings of a workshop held in Central London in October 2010. Entitled ‘A Billion Gadget Minds: Thinking Widgets, Data and Workflow’, the aim of the workshop was: “To evaluate the ways in which contemporary hardware and software augment and distribute intelligence, as well as the ensemble of social relations which form around thinking practices as they synchronise, mesh, de-couple, breakdown and collapse with variable effects”. This evaluation was thought necessary as a consequence of our broader perception of the dead-end quality of much thinking about intelligence when it locates this latter, much contested concept, uniquely within – under the skin of – the human. As we put it in our original call for papers, “A growing body of research, including literature on cognitive anthropology, software studies and cognitive capital suggests that whatever is called ‘thinking’ occurs amidst mechanisms, habits, code-like systems, devices and other formally structured means. If intelligence, far from being a property of ‘the human’, is an informal and provisional function of the ensemble of mechanisms and relations that comprise a social field, then we need to explore the co-relation of cultural and experiential practices, thought and intelligent devices.”
Exploring – and evaluating – intelligence in terms of much broader sets of mechanisms and relations than those often allowed by computationalism, traditional cognitive science, psychology and so on, opens up an interesting new set of problems, which makes intelligence a much trickier thing to define and to localize as the property of this or that system, entity or process. More importantly for us, by trying to shift the way in which intelligence might be considered away from its enduring association with psychology (and so as an attribute of an isolated system), we wanted to problematize some of the more persistent habits that thinking about intelligence involves. In this respect, the workshop, and the papers presented at it, were indicative of what the journal hopes to do for computational issues more generally: to generate a space of thought, research, analysis and evaluation that contests the fruitlessly dichotomous frameworks that so frequently shape the ways in which software and its congenerates are understood.
The workshop itself only covered part of the necessarily rather open scope of the initial call for papers. But it did highlight a further set of questions arising from the problem of how one might introduce some sort of consistency into the evidently highly heterogeneous set of approaches to, and difficulties with, the issue of intelligence. Can one make an appeal to thinking about computational intelligence in broadly relational terms without threatening the coherence of the tenets of computationalism, indeed of the proposed object of enquiry more generally? A number of the papers from the workshop, substantially revised in some cases, are presented in this issue of Computational Culture and revisit the debates generated at the workshop. In so doing they provide evidence of the unsettled – and unsettling – questions that intelligence raises, not least of all about the most appropriate way of approaching the phenomenon in question.
There is, of course, a long history of thinking about and, more notoriously, attempts to measure intelligence. There have been plenty of criticisms of the latter and much intellectual and political energy spent in the skirmishes of a battle that is in many respects a stand-in for a much broader war about biological determinism and about how to extend scientific practice appropriately.1 Computational Culture is not particularly focused on this drawn-out episode in the science wars, which opens up a complex debate that would be well beyond the remit of the journal to tackle. What we are interested in here, though, are ways of thinking about and problematizing intelligence as it is presumed to be manifest in computational culture.2
The enduring importance of assumptions about formalization and formalisability in the field of artificial intelligence, and the influence – direct and indirect – that the latter has had in shaping, both materially and conceptually, the terms on which computation gets entangled in culture, often gets cashed out today in the form of constantly aborted attempts to remake and resituate activities in a “mathematico-industrial” form of computational space, the value of which derives from the asserted, but not quite proven, superiority of computationally optimized forms of calculation. Socio-economic optimization, such as that sought by process modellers when they design workflow for corporations, and socio-technical optimization, such as that sought through various e-Government initiatives – to say nothing of the myriad forms of automated management that pervade everyday life – gain traction socially by virtue of the tacit assumption that computational artefacts can do things smarter – faster, cheaper, with less complaining – than humans.
Computational intelligence itself is often predicated on a willing ignorance of the ‘articulation work’ necessary to get computational processes operating in the first place, and arguably requires such minor irritants as the personal assistant who curates the eminent professor’s diary to be idealized out of the picture. Despite this, the production of computational automatisms of the kind that infuse the infrastructures and practices of everyday life, whatever loosely understood representations of intelligence accompany them (marketing puff, hardware-led techno-escalation, job-saving overtures to a Board of Directors, etc.), is not a foregone conclusion. The apparently smart nature of apparently smart devices is only such if one agrees to operate on the terms that the device prescribes – a machine or application that diligently executes tasks that you don’t want it to execute is likely to be thought stupid, or at best an obstruction. But even if ‘folk’ attributions of intelligence or otherwise are not entirely reliable, the intelligence that may or may not be woven into the operations of software rarely lies quite where you might think it does – we are all familiar with the workaround or fix that is required to make a painstakingly programmed application do what we want it to. Indeed, one of the most significant issues for our understanding of computational culture – and hence, for Computational Culture more generally – lies in the interstices between formal-material and socio-cultural processes: in the ‘gaps’ that open up between algorithms and their implementation, between the implementations of algorithms in software and its deployment in specific contexts, and between the practices that develop around those deployments and other kinds of activity. In short, the complex web or ecology of knowledges, devices, techniques, practices and so on that are operative across and in society and culture.
This first edition of Computational Culture, then, presents a series of articles that in one way or another take issue with the prevailing body of ideas, assumptions, prejudices and problems associated with notions of intelligence – not so much in order to offer a coherent or consistent alternative as to look for different ways to problematize what we might think intelligence is, how it might be made manifest to us as social and cultural beings, how it operates, and even where it operates. It does this as part of a broader project of developing ways to explore and understand computational culture that do not subscribe to an academic division of labour that separates the two terms the journal joins.
The need to develop a more ecologically attuned sense of the spaces and operations of computation is not uniquely the concern of students of the arts, culture and society. In recent years, a body of work in and around cognitive science (whose historical links to computing science are well known) has emerged that has started to question some of the assumptions – of a functionalist inspiration – that have guided thinking about intelligence and cognition more generally. Whilst it may not necessarily confirm our intuition of an ecology of computational intelligences, the growing body of research on extended cognition certainly puts its finger on a rather difficult point for arguments that seek to locate intelligence in a neatly delimited system or entity. And it may just be that locating cognition or intelligence in entities is one, well-entrenched, way of perpetuating the division of labour that keeps computation and culture apart, sustaining the fallacy of misplaced concreteness on which such a division builds. In his essay, Michael Wheeler offers a remarkably clear and cogent account of the primary claims of what he calls the extended cognition hypothesis – the hypothesis that “there are actual (in this world) cases of intelligent action in which thinking and thoughts (more precisely, the material vehicles that realize thinking and thoughts) are spatially distributed over brain, body and world, in such a way that the external (beyond-the-skin) factors concerned are rightly accorded cognitive status.” For Wheeler, this hypothesis, which has been the subject of considerable debate, acquires a measure of cognitive-scientific plausibility provided it is qualified with various conceptual desiderata – some version of functionalist explanation being prime amongst these.
There are significant and pressing questions that arise from the theory of extended cognition, and some of these emerge in the most banal or everyday of contexts. Who or what is doing the thinking if I use a calculator in a maths exam, for example? Who or what are educationalists testing in these circumstances? Our understanding of these rather complex assemblages appears to be at a rather early stage. For a philosopher of cognitive science who has written about Martin Heidegger,3 it is rather appropriate that Wheeler should choose in particular to explore some of the implications of the extended cognition hypothesis in relationship to the question of intelligent architecture. The Heideggerian triad of ‘building, dwelling, thinking’ is somewhat perversely and confusedly instantiated in this issue. Discussing Usman Haque’s work in interactive architecture, Wheeler points up the non-unitary nature of some systems of human-machine interaction – lacking unity, such systems arguably lack the quality of a cognitive extension to human capacities, pointing us to an intriguing area of analysis and way of thinking about the point at which broad assemblages of human-nonhuman elements become cognitive in their own right.
The question of architecture comes up again, albeit in rather different terms in the essay by Luciana Parisi and Stamatia Portanova on what they call ‘soft thought’. Drawing on the later philosophy of Alfred North Whitehead, and in particular his arguments for a pan-experientialist metaphysics in which everything is experience, everything is feeling, Parisi and Portanova discuss numerous aesthetic projects in the fields of architecture and choreography, to propose an account of what they say is “nothing less than that numerical and logical mode of thinking which is proper to software itself.” Their notion of ‘soft thought’ evokes a mode of thinking that is immanent to programme architecture, to the billions of interactions and operations of data structures and algorithms, rather than being something humans might do with machines. Soft thought seeks definitively to cast aside the presumption of the human as a model, root term or other kind of reference point, for thinking a) about computation and b) about the sensibility or aesthetic that one might seek to associate with computational code. For Parisi and Portanova, cognitive approaches to the extended mind, insofar as they “assume in fact that a physical body adds feeling to an otherwise mathematical, non-physical, and consequently non-aesthetic, thought”, are not able – or would not be able, were they to want to – to understand a mode of thinking that is immanent to the computational processes of code per se. Such a conception may evoke some sort of Platonic heaven of idealities, a heaven that is never very far away for some computer scientists, but that is not the intent here – Parisi and Portanova equally draw on Gregory Chaitin’s work on randomness to evoke a somewhat more chaotic space of thought. 
‘Soft Thought’ places severe speculative demands on the reader and is evocative of the challenges that have been made in philosophical circles to the ‘correlationist’ assumption, which requires that every object be an object of cognition. In a sense, breaking with this assumption is precisely where the ‘autonomy of code’ argument, with which Parisi and Portanova begin, takes us, and it offers a fascinating way of endeavouring to conceptualise the way that software is productive of intelligent architectures, new spatio-temporal configurations, in its own right, without any reference to a model.
If ‘Soft Thought’ takes us into the realms of computational intelligence operating in the absence of any reference to the human, Ingmar Lippert’s essay on carbon accounting shows us just how leaky, faulty, incomplete and problematic human-machine hybrid prehensions of the universe of carbon emissions can be. The by now well-institutionalised field of science and technology studies offers one set of approaches for exploring computational culture, and research on informational infrastructures in particular provides a suggestive set of studies worthy of further exploration. Such work shows that what we might think of as the autonomously intelligent operations of humans are situated in a prior set of mundane systems, processes and practices that are routinely consigned to the geek oblivion of systems administration, database management and the like. Drawing on fieldwork he conducted in a blue-chip corporation into data gathering and processing practices vis-à-vis carbon dioxide emissions, Lippert explores the “distributed and heterogeneous intelligence assembled by humans and non-humans” in that corporation so as to make sense of its carbon ‘footprint’. Examining the use of and negotiations over processes of data collection for a carbon emissions database, Lippert’s discussion of what he calls ‘extended carbon cognition’ gives us a detailed analysis of the work of aligning and co-ordinating actors so as to make some form of extended cognition possible in the first place. Without the smooth and routinized co-operation of all the components of this machine, the intelligence in place is non-existent: to borrow the distinction Wheeler makes towards the end of his article, there is human-machine interaction, but no cognition.
Like those of several other contributors to this issue, Lippert’s account of extended carbon cognition draws on some of the ideas developed by the French writer Felix Guattari. This is no surprise to us – Guattari was a reference in the original call for papers that was sent out when the ‘Billion Gadget Minds’ workshop was initiated. There are intriguing crossovers and connections between work in the field of extended cognition and Guattari’s relational, pragmatic thinking. In this regard, one area of research that is likely to prove fruitful is that proposed by semiotic theory. In recent years, semiotics has proved valuable for some computing scientists and, as anyone working in the human and social sciences will know, it was, for a time, practically unavoidable in many areas of research into culture. Perhaps it is time for semiotics to make a more nuanced contribution to the understanding of computational culture. Anna Munster’s discussion of fMRI imaging and the ‘evidence’ that this provides of the cranial location of cognitive processes is suggestive of the mediatico-semiotic dimensions of digital technology and the knowledges they provide. Munster’s article offers an interesting response to the question of how contemporary culture makes intelligence visible to us. Developing a discussion of what she calls the neurological turn in thinking about and research on intelligence, the recourse that many have to neuro-scientific arguments and ‘evidence’, she underlines the links between pseudo-scientific neurological discourse and contemporary digital technologies, the ‘imbrication’ of the technics of contemporary media in a “’neural’ continuum”. The global dominance of Google and its pushing of technologies shaped by artificial intelligence form a case in point for Munster.
Discussing its Prediction API, she suggests that its push into the ‘pre-cognitive’ zone, the grey area of the “’just before’ of consciousness and intentionality”, is formally homologous to the way in which fMRI scans visualize anticipation. Critically, though, for Munster, fMRI scans (like the Google API) are not unproblematic. Processes of intelligence are not susceptible of any kind of easy visualization and what is needed, she suggests, is a nuanced application of semiotics to the reading of neurological images. Drawing on Peirce and on Guattari, she develops a reading of the former’s conception of the ‘diagram’ to bring the highly interpretative nature of neurological data in fMRI scanning back into relationship with the uncertain, the potential and the different. It remains to be seen how such an idea might work itself out in relationship to the project of networked media technologies to anticipate your every waking move.
Lev Manovich’s article sits outside of the Billion Gadget Minds cluster and also sets out some of the key terms of reference for Computational Culture. As a journal of software studies, we are determined to bring critical attention to the objects and processes that remain by and large invisible to contemporary understandings of society and culture. Software shapes the world, and, as one of the keenest thinkers and re-inventors of such shaping, Manovich turns his attention to one of the most ubiquitous, and long-lived, professional software applications, Adobe Photoshop. In this article he provides the first extended critical evaluation of this programme, developing an account of its structure of filters and layers, as well as of its actual or putative relation to previous media. That Photoshop, along with its Free Software counterpart, GIMP, provides such a fundamental aspect of contemporary visual culture, yet goes unevaluated, is just the kind of problem we hope to address in the coming issues of this journal.
The launch of this journal comes at a time of fundamental attacks on the idea of the university in the country in which we are currently based. Beyond the specific context of the, admittedly rather expansive, field to which we hope to contribute, we aim to make this initiative an opportunity out of this time of turbulence, a chance to assert the necessity of the kinds of free associational thought and experiment that have given rise to much of what is most brilliant in computational culture itself. We would like to close this editorial, therefore, with an invitation to participate: in producing articles, projects and reviews, and by assisting in the review of such work. Computational cultures are made in many places, in many kinds of ways, by multitudes of things, ideas, peoples and devices, and we look forward to their manifestation here.
The first issue of any journal necessarily relies on numerous kinds and various amounts of ‘background’ work and support upon which any such project is dependent. Brigitte Kaltenbacher designed the site in WordPress and has paid meticulous attention to its construction and ongoing development, as well as being an inspiring protagonist in the discussions that led to the launch of this project. Derek Shaw at TechSys provides expert system administration for the site via YoHa. Mark Bishop, of the Department of Computing at Goldsmiths, acted as respondent to Michael Wheeler during the Billion Gadget Minds workshop. Andrew Murphie of Fibreculture Journal gave us excellent advice on starting up this project and serves as inspiration for the rather more momentous task of sustaining it. Our thanks to all the contributors and to everyone else who took part in the Billion Gadget Minds workshop, funding for which was generously provided by Middlesex University.
- Stephen Jay Gould’s book The Mismeasure of Man offers an interesting discussion of the issues raised in this ‘debate’, earning a rebuke for being politically motivated from psychologists keen to use the rhetoric of pure scientificity to protect their power of judgement. See Stephen Jay Gould, The Mismeasure of Man, 2nd edition (New York: W. W. Norton and Co., 1996)
- We say ‘presumed’ deliberately here. As we don’t really know what intelligence is, references to its manifestation can be argued to be rhetorical, performative, a performance even – it is in something of the latter sense that it appears in Alan Turing’s famous early essay ‘Computing Machinery and Intelligence’.
- Michael Wheeler, Reconstructing the Cognitive World: The Next Step (Cambridge, Massachusetts: MIT Press, 2005)