Critical Codes – from forkbomb to brainfuck

Article Information

  • Author(s): Alan F. Blackwell
  • Affiliation(s): University of Cambridge Computer Laboratory
  • Publication Date: 16th November 2013
  • Issue: 3
  • Citation: Alan F. Blackwell. “Critical Codes – from forkbomb to brainfuck.” Computational Culture 3 (16th November 2013).


Review of Speaking Code: Coding as Aesthetic and Political Expression by Geoff Cox, code by Alex McLean, and foreword by Franco “Bifo” Berardi, MIT Press, 2013.

Speaking Code is a book about language – and in particular, the languages of digital technology. The notion of the programming “language” has always acted as a provocation to critical commentary, since the earliest speculations on giant electronic brains and their apparent potential to participate in human society. But increasingly pervasive digital media mean that all language is now subject to digital manipulation, storage and analysis. As a consequence, the distinctions between the languages of machines and their users are becoming blurred.

Of course, these languages were never primarily for communication among machines in the sense of human languages. Rather, they are technical languages – engineering notations – used by an engineer to specify the behaviour of the machine. Debates drawing on analogies to human language (via tropes of literacy, dialects, translation and others) have thus been unhelpful, as I and others have argued in drawing attention to the Cognitive Dimensions of Notations as a formulation of more relevant usability concerns for the designers of such languages.1

Nevertheless, the early analogies between programming “languages” and human language have proven extremely persistent, and have perhaps also posed persistent obstacles to understanding of code among critical commentators. It was often assumed that commentary on a programming language, as with commentary on a human language, demanded that one should first learn the language – not only its basic grammar, but also the expertise of praxis – before offering insights that should be derived from and informed by such fluency. Yet the comparative study of programming languages requires fluency in many of them – an achievement of only the most senior engineers, or of those computer scientists who are particularly inclined to theory and history. If commentary were restricted to such experts, then critical distance seems unlikely. By the time that expertise is acquired, the political and social commitments of the enterprise may have become so familiar as to be invisible.

Analogies to human language have also framed many assumptions about technical education. If viewed only as a parallel to natural language, it seems ridiculous for a person to set out not simply to learn a single foreign language, but to become multilingual, or even omnilingual – a speaker of all languages. Yet in order to understand computation as a phenomenon, this may be precisely what is needed. The new educational ideal of “computational thinking”2 expresses the ambition that general principles of computing should be learned for their applicability to many aspects of contemporary life and science. Such principles transcend any particular programming language, and might even be taught without computers at all, as in Tim Bell’s Computer Science Unplugged.3

Speaking Code represents a timely critical intervention within this new understanding of computational thinking. It is about code, but not any particular code. It is trans-lingual, studying the special nature of these languages as a whole, rather than being distracted by their diversity. The impressive variety of code examples contributed to the book by Alex McLean adopts the same trans-lingual perspective. Technically-inclined readers will be pleased by the extent to which McLean illustrates artistic and political interventions made not only in the scripting languages of the internet – perl, python and unix shell scripts – but also in a generous sample of esoteric languages, including some of his own invention.

As the title “Speaking Code” implies, this book deals with the liminal ground between code as a technical notation, and as a communication medium. Unlike (for example) engineering drawings, it is indeed possible to “speak code” – to pronounce the symbolic keywords and the narrative of the program logic. Although seldom attempted so literally, there is still a tradition of such performances, as in the reference to radioqualia’s Free Radio Linux4 – reading the source code of the Linux kernel over the Internet via a speech synthesiser. Perhaps more significantly, in performances of live coding it is expected that the code itself will be made visible to the audience. This is not simply narrating an algorithm, but rather eavesdropping on the world of communication between programmer and compiler. As an originator and international leader in live coding practice, Alex McLean is again an ideal collaborator.

But it is Geoff Cox’s own combination of technical understanding and philosophical breadth that provides the real power of this book. Where many commentators simply describe the “application layer” of digital technology, Cox genuinely looks inside the software. Moreover, the liminal status of code as both mechanism and language means that this is much more than simply a technical rendering, or layman’s exposition of engineering principles. This informed perspective is, of course, to be expected in the field of software studies – and the Software Studies series at MIT Press has been a welcome champion of these intellectual ambitions.

The key concern in Speaking Code, as identified by Cox, is that a statement in a programming language is a performative utterance – both a political and aesthetic accomplishment. Most established debates around the nature of code are rooted in an “engineering philosophy of technology”.5 But when we make the code public, the programmer becomes more like a performing artist (or perhaps performance poet) than a software engineer. Nevertheless, it is the machine that “performs” (executes) a program. The key issue is therefore the performative status of the machine as opposed to the human reader: the question of which part of the code determines the behaviour of the von Neumann machine (the structure relating memory locations of data and the order of operations on them), as opposed to the part that is intended for human readers (the identifier-labels chosen to describe groups of data or sequences of operations, and the comments that are ignored by the machine altogether). These labels and comments are described as secondary notation, a convention also followed in the Cognitive Dimensions of Notations,6 although Cox notes that this previously uncontroversial term might be read as pejorative, in its implicit reference to a primary, and thus more privileged, notation of the machine.
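The double status of secondary notation is easy to demonstrate with a small sketch (a hypothetical illustration, not code drawn from the book): two Python fragments that the machine treats as the same program, but that speak in entirely different voices to the human reader.

```python
# Version one: stripped of secondary notation. The machine executes
# this happily; the human reader is left to guess what it means.
def f(a, b):
    return a * (1 + b)


# Version two: identical structure as far as the machine is concerned,
# but the identifiers and comments are addressed only to the human
# audience - the machine ignores them altogether.
def price_with_tax(net_price, tax_rate):
    # Compute the gross price from the net price and the tax rate.
    return net_price * (1 + tax_rate)


# The "primary" notation - the behaviour - is the same in both.
print(f(100, 0.2) == price_with_tax(100, 0.2))
```

The “double voice” is literal here: one channel of the text addresses the compiler, the other addresses the reader, and only the second can be emptied or enriched without the machine noticing.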

In these analyses, coding is always considered to represent a “double voice” – not only for the machine, but for human readers of the code. As a result, the practice of coding is “a deliberate action across cultural and technological fields”.7 In many respects, the same could be said of Cox’s writing, since it offers far greater critical traction on the technical world than does much commentary on digital media. As a result, I would hope that its presence in the MIT Press catalogue might find an audience among software engineers, or perhaps even programming language designers, for whom it would provide an articulate (if somewhat dense) introduction to critical readings that might inform their work.

The ideological commitments of the book are relatively straightforward, if perhaps unfamiliar to the engineering reader. These draw extensively on Hannah Arendt’s political theory, the Marxist theory of Paolo Virno, and Franco ‘Bifo’ Berardi’s analysis of linguistic pragmatics in the digital realm (Berardi provides the foreword to the book). These sources are used as the basis for rigorous critique of the claims made both in the Free Software movement and in the right-wing libertarian philosophy of Silicon Valley venture capitalists (Cox observes that these two uses of the word “free” are more closely entangled than many cyber-utopians might hope). In a comment on the unequal consequences of network standards, even in the “social” networks of products such as Facebook, he notes that “freedom is extracted by a service to serve the free market, not free expression”.8

Software artists engage regularly in these debates, as readers of this journal will be aware. However, Cox does not assume an audience of expert readers, offering both commentary and illustrations documenting many classic interventions. These include the app iCapitalism (from Crotch Zombie productions),9 which was banned from the Apple App Store simply because of its transparency in revealing the underlying logic of the enterprise (the winner of the game is the player who spends the most). After revealing the hypocrisy of the “free” market, it is only a short step to mock the failures of democracy, through an online market for the purchase of votes (UBERMORGEN’s [V]ote Auction10 – the subject of extensive legal action during the 2000 U.S. presidential election).

Much of this material comes seasoned with dark humour – of the same kind observed in software art itself, such as the Web2.0 suicide machine11 of moddr_ (which attempts to delete all trace of the user from Facebook, MySpace, Twitter and LinkedIn), or McLean’s forkbomb.pl,12 which inevitably crashed the machine that it ran on. Many programmers’ first reaction to an esoteric language like Befunge13 or brainfuck14 (example code for both is included in the book, of course) is simply to laugh. But the power of these jokes should not be underestimated. In a reflection on joking, Cox reports Paolo Virno’s claim that the political dimension of human life resides in the ability to act in unexpected ways, challenging both institutional norms and normative logic. Where those norms and logics are embedded in code, treating code as a joke is an essential freedom.
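Part of brainfuck’s joke is how little machinery the language actually needs: eight single-character commands operating on a tape of byte cells. The whole language can be interpreted in a few lines – the following is a minimal sketch in Python (my own illustration, not code from the book), assuming the common conventions of a 30,000-cell tape and wrapping byte arithmetic.

```python
def brainfuck(program: str, input_bytes: bytes = b"") -> bytes:
    """Interpret brainfuck's eight commands: > < + - . , [ ]."""
    tape = [0] * 30000              # the conventional 30,000-cell tape
    ptr = pc = in_pos = 0
    out = bytearray()

    # Pre-match brackets so [ and ] can jump directly to their partner.
    jump, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1                # move the data pointer right
        elif c == '<':
            ptr -= 1                # move the data pointer left
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256   # increment, wrapping at 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256   # decrement, wrapping at 0
        elif c == '.':
            out.append(tape[ptr])   # output the current cell as a byte
        elif c == ',':
            tape[ptr] = input_bytes[in_pos] if in_pos < len(input_bytes) else 0
            in_pos += 1
        elif c == '[' and tape[ptr] == 0:
            pc = jump[pc]           # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jump[pc]           # repeat the loop body
        pc += 1
    return bytes(out)
```

For example, `brainfuck("++[>+++<-]>.")` multiplies two by three in the second cell and emits it as a single byte. The joke, of course, is that this Turing-complete language is maximally hostile to the human half of the “double voice”: there is no secondary notation left to speak with.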

However, the meat of the book is not simply the entertainment of humorous interventions – a coder’s analogue to LulzSec. In responding to provocative interventions whose targets range from Facebook’s appropriation of individual identity to the trivialisation of online democracy, the resulting running battles between artists and operators clarify the extent to which “the human subject is no longer defined as an active citizen, but as a consumer”. 15

The problem here is, as Cox says, that “it does not really seem to be the case that certain voices are suppressed or silenced, but that they are emptied of significant meanings.”16 The structures of social network companies, of economic free markets, or of mediated democracy can only be questioned through engagement with their infrastructure – the voice of code carries unequal power with respect to the spoken voice. The appropriation of individual voices, rather than providing true opportunity for dissent, simply strengthens the force of the market – a statement that is increasingly true in the pseudo-democratic logic of PageRank.17

The question is whether speakers of code can achieve this kind of agency. Cox argues, following Berardi, that intellectual voices must be recognised as political in order to have a positive force for humanity – and that this applies even more to code than to commentary. Where we separate language from humanity, a kind of autism of the political sphere results, in which the mechanical logics of economics and the market replace human voices.

So rather than humour, or simple incitement to hacktivism, Cox seems to imply that the book itself aspires to the condition of live coding, where code is created not to realise any predetermined specification or engineering concept, but rather as a craft-like conversation with materials.18 In this tradition, Cox modestly suggests that his book has no theory to present, but simply reflects the pursuit of critical writing as a form of practice.

Live coding has become situated with respect to musical performance, where it challenges the categories of “computer generated” music by emphasising human control and contingency. However, live coders are generally highly articulate – it is not clear that they require (or request) an interlocutor to interpret their work. In this context, the inclusion of work by live coder Alex McLean as a code “voice” within the book is particularly welcome evidence that we have a genuinely shared critical and performative enterprise.

In order to progress with that enterprise, we must understand not only the philosophical and artistic traditions within which code might be “spoken”, but also the code language itself. Geoff Cox should be greatly valued as a humanistic and humane speaker of these codes.

Author Bio
Alan Blackwell is Reader in Interdisciplinary Design at the Cambridge Computer Laboratory, having previously studied engineering, comparative religion, computing and experimental psychology. He has 12 years’ experience of designing industrial systems and electronic and software products. He has taught design courses and supervised postgraduate design research students in Computing, Architecture, Psychology, Languages, Music and Engineering. He is co-Director of the Crucible Network for Research in Interdisciplinary Design, a Fellow of Darwin College, a Director of Cambridge Enterprise, and Vice-President of the Cambridge Philosophical Society.


  1. The Cognitive Dimensions of Notations framework, originally developed by Thomas Green and subsequently extended by many collaborators, is a perspective on computer usability that emphasizes the manipulation of information structures. It has proven particularly valuable in providing a user-centered approach to the design of novel diagrams, programming languages and environments. (See Green, T.R.G. & Petre, M. (1996). “Usability analysis of visual programming environments: a ‘cognitive dimensions’ approach”. Journal of Visual Languages and Computing, 7, 131-174; Blackwell, A.F. and Green, T.R.G. (2003). “Notational systems – the Cognitive Dimensions of Notations framework”. In J.M. Carroll (Ed.) HCI Models, Theories and Frameworks: Toward a multidisciplinary science. San Francisco: Morgan Kaufmann, 103-134.)
  2. Wing, J. (2006). “Computational Thinking”. Communications of the ACM 49(3), 33-35.
  5. Speaking Code, p. 79 (quoting David M. Berry)
  6. Unfortunately, Cox does not acknowledge the original introduction of the term by Marian Petre, who first used it to describe expert conventions in the layout of electronic schematics. See, Petre, M. (1995). “Why looking isn’t always seeing: readership skills and graphical programming”. Communications of the ACM 38(6), 33-44.
  7. Speaking Code, p. 15
  8. ibid., p. 84
  15. Speaking Code, p. 91
  16. ibid., p. 92
  17. Rieder, B. (2012). “What is in PageRank? A historical and conceptual investigation of a recursive status index.” Computational Culture, issue 2.
  18. Blackwell, A.F. (2013). “The craft of design conversation”. In A. Van Der Hoek and M. Petre, (Eds), Software Designers in Action: A Human-Centric Look at Design Work. Abingdon: Chapman and Hall/CRC, pp. 313-318.