Code, Capital and Culture: A Review of Brian Lennon, Programming Language Cultures: Automating Automation (Stanford University Press, 2024)

Warren Sack <wsack@ucsc.edu>

When someone says “I want a programming language in which I need only say what I wish done,” give him a lollipop.
Alan Perlis, “Epigrams on Programming”1

Almost twenty years ago, philosopher and literary theorist N. Katherine Hayles described the proliferation of programming languages.

Unnoticed by most, new languages are springing into existence, proliferating across the globe, mutating into new forms, and fading into obsolescence. Invented by humans, these languages are intended for the intelligent machines called computers. Programming languages and the code in which they are written complicate the linguistic situation as it has been theorized for “natural” language, for code and language operate in significantly different ways. Among the differences are the multiple addresses of code (which include intelligent machines as well as humans), the development of code by relatively small groups of technical specialists, and the tight integration of code into commercial product cycles and, consequently, into capitalist economics.2

The names of some of the most popular languages in use today include JavaScript, Python, Java, and C++. As was already the case sixty years ago, there are thousands of these languages, and more are created every day. Although he does not cite her, in his new book Programming Language Cultures: Automating Automation, Brian Lennon proposes to study what Hayles observed to be “unnoticed by most”: to consider how programming languages shape capitalism and abet its fantasy of (re)production without labor, what Lennon terms the “automation of automation.”
According to Lennon, “Automation of the jobs worth having: that’s what a programming language culture does” (p. 5). He argues that the cultural logic of programming languages is the logic of automation and its apotheosis: the means to automate each successive wave of automation, the automation of automation. In Programming Language Cultures, Lennon reads textbooks, technical manuals, and Silicon Valley novels to describe how programming languages shape the labor conditions precipitated by implemented and imagined forms of automation.
Lennon characterizes his approach as “philological.”3 He writes,

My approach is broadly philological in three distinct senses of that term, … First, it emphasizes the histories of both natural and formal languages, including programming languages, in their individual specificities, over their abstract formal or structural characteristics… Second, it regards individual natural and formal languages as carriers and sometimes shapers of (specific) cultural histories. Third, it aims to integrate knowledge from different disciplines without rejecting the difference of disciplines. (p. 14)

Today, Lennon’s declared approach, philology, is likely to be unfamiliar even to humanities scholars, at least in the Anglophone world. In a contemporary article, philologist Sheldon Pollock articulates the specificity of philology, the paradigmatic academic discipline of a century and more ago, for most of the humanities and many of the sciences (e.g., evolutionary biology) like this: “Philology is, or should be, the discipline of making sense of texts. It is not the theory of language — that’s linguistics — or the theory of meaning or truth — that’s philosophy — but the theory of textuality as well as the history of textualized meaning. If philosophy is thought critically reflecting upon itself, as Kant put it, then philology may be seen as the critical self-reflection of language. Or to put this in a Vichean idiom: if mathematics is the language of the book of nature, as Galileo taught, philology is the language of the book of humanity.”4
If philology is, and has been, “the critical self-reflection of language,” it is curious that Lennon declares his approach to be “broadly philological” because he insists that programming languages are not languages. He writes, later in the introduction, “In its focus on language, philology describes the way that power is wielded through and in language. While programming languages are codes, not languages (my emphasis), perhaps we wish to misconstrue them as languages because they are sites and applications of power in similar ways. Software is created using programming languages, full stop” (p. 17).
Resolving the question of whether Lennon’s approach is philological seems a very small-stakes game, both for readers and for Lennon himself, since he spends almost no time positioning his work in the field of philology. Put another way, by my count his bibliography contains over 375 texts, of which only a handful seem to have anything to do with philology. The plurality of the texts in the bibliography (over one-third) are technical references to computer science textbooks, conference proceedings, manuals, blog posts, laboratory technical reports, and other such documents (e.g., the first chapter repeatedly cites the materials gathered for a summer workshop on digital computers held at MIT in 1953). In addition, the bibliography contains many references from the history of science and technology and its subfield, the history of computing.
Although it may not be philological, Lennon’s accomplishment in this book is to navigate us through the various literatures that accrue to specific programming languages. By chapter five, “JavaScript Affogato: Negotiations of Expertise,” he has clarified his object of study by reiterating his position that programming languages are codes, not languages, but also explaining that specific programming languages, in his view, are also not just codes. Here is what he says about the programming language JavaScript:

What we call “JavaScript” is not just a programming language, and not just a collection of environments and tooling supporting a programming language, including specifications and other documentation, implementations, and primary and secondary program artifacts (from development tools and frameworks to specific interpreters or “engines,” compilers and transpilers, and other software components embedded in a browser or server applications). JavaScript can, at least at the moment and for the near term, be understood also as an assembly of broader technical and technical-historical dynamics, labor and management practices and arrangements, and discourses about education, job training, and production that privilege technical expertise, but also seek to generalize it in and for a demarcated historical interval (p. 137)

In short, a programming language is not just code; it is a culture with potentially large implications for both labor and capital. Lennon seeks to make these implications plain, past and present, by reading the many kinds of texts that accrue around and in the code of specific programming languages.
Lennon compares and contrasts his approach with critical code studies (pp. 10-14) with primary reference to Mark Marino’s book of 2020, Critical Code Studies.5 He writes, in its favor, “I fully share and heartily endorse Marino’s conviction that ‘if code governs so much of our lives, then to understand its operations is to get some sense of the systems that operate on us’ (Marino 2020, 4)” (pp. 10-11). His opposition to critical code studies turns on a teleological concern: “Whereas for Marino, ‘code since its inception has been a means of communication’ (Marino 2020, 17), for me programming languages always were and are first of all systems of automation” (p. 13).6
However, there is another significant difference apparent in comparing Lennon’s approach with critical code studies. While each chapter is grounded in a close reading of one or more texts, almost no attention is paid to code per se. The book includes only twelve code examples, none of them longer than ten lines.
In chapter one, “A Third Language: Computing, Translation, Automation,” Lennon proposes to examine the translation metaphors employed by the first programmers (of the 1950s) as they both borrowed and invented terms for describing the writing and execution of programs. He suggests that programmers’ use of the verb “to translate” to describe, for instance, the automatic rewriting of a higher-level language (e.g., C) into a lower-level language (e.g., assembly), differs from the translations of humanities scholars due to the former’s role in the history of “recursive automation,” a phrase that the author defines as “the addition of successive layers of control through which higher- and lower-level codes are ‘translated’ to each other.” He points out that “translation” in computing is most frequently a euphemism for automation. The first jobs automated were, of course, the jobs of calculation performed by human computers for centuries (including those for ballistics and astronomy).
Each successive layer of translation automated specific kinds of (usually tedious) labor. For example, the “translation” of assembly (that allows operators and variables to be written in alphabetic characters) into machine language (in which everything is written in numbers) allows the programmer to write, for instance, “ADD” rather than some numerical code for the addition operator (e.g., 11001111). To examine early uses of the term “translation” in computing, Lennon relies on a number of technical publications from the 1950s (including the notes from a summer 1953 program held at MIT, as mentioned above) and several papers published for the Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, DC, 13-14 May 1954, including an address by Grace Hopper (arguably the co-inventor of modern programming languages).
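The layering Lennon calls “recursive automation” can be sketched as a toy “translator” from mnemonics to machine code. The mnemonics, opcodes, and instruction format below are invented for illustration (only the numeric code for ADD echoes the example in the text); real instruction sets are machine-specific.

```python
# A toy "translator" from assembly-style mnemonics to numeric machine code.
# The mnemonics, opcodes, and (opcode, operand) format are invented for
# illustration; real instruction sets are machine-specific.
OPCODES = {"LOAD": 0b00000001, "ADD": 0b11001111, "STORE": 0b00000010}

def assemble(lines):
    """Translate each 'MNEMONIC operand' line into an (opcode, operand) pair."""
    program = []
    for line in lines:
        mnemonic, operand = line.split()
        program.append((OPCODES[mnemonic], int(operand)))
    return program

# The programmer writes the alphabetic "ADD 7"; the machine receives the
# purely numeric pair (0b11001111, 7). The tedious lookup is automated.
machine_code = assemble(["LOAD 5", "ADD 7", "STORE 9"])
```

Each such layer spares the programmer a clerical task, which is precisely the sense in which “translation” here euphemizes automation.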
In Chapter Two, “Really Reading the Code, Really Reading the Comments,” we learn the surprising fact that the earliest programming languages had no means to allow the programmer to write comments in the code of a program. The first programming language to allow this was FORTRAN I (1956). Lennon cites Donald Knuth (see footnote 10 on page 189) who wrote that no one had thought to include a comment feature before because code in a high-level language was widely considered to be “self-explanatory.”
The chapter examines the development of two programming approaches: one holds that well-written code needs no comments; the other holds that comments are essential. The programming manuals produced by these two approaches clearly put different constraints on programmers, constraints dictating not only whether comments were allowed but also how any comments needed to be written and coordinated with the code. While most of the primary sources for this chapter were written between the 1950s and the 1980s, Lennon points out the contemporary relevance of this difference by citing recent sources that urge, for instance, faith in code and doubt in comments, since changes in the code are not always matched by updates to the comments, potentially misleading future maintainers whose expectations are set by the comments. Lennon shows how these differing approaches to comments change the working conditions of programmers, but then also points out that comments can be a liminal space that programmers use not only to document the code but also to describe their own working conditions; e.g., “this isn’t the right way to deal with this,” referring to the code, “but today is my last day,” or “uncomment the following line if the program manager changes her mind again this week” (p. 73).
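The maintenance worry behind “faith in code, doubt in comments” fits in a two-line sketch. The function and figures here are hypothetical, invented for illustration:

```python
# Hypothetical example of comment drift: the comment described an earlier
# version of the code and was not updated when the fee changed.

# Apply the 5% service fee.           <- the comment still says 5% ...
def with_fee(amount):
    return amount * 1.08              # ... but the code now charges 8%
```

A maintainer who trusts the comment expects a 5% surcharge; the running code disagrees, and only the code is executed.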
Chapter Three, “Etymologies of Foo,” takes its name from IETF RFC 3092 (by Donald Eastlake, Carl-Uno Manros, and Eric Raymond),7 which documents the use of “foo,” “bar,” and “foobar” as “metasyntactic variables”: names used in example code to stand for a variable of no particular type, value, or domain. As Lennon points out, these metasyntactic variables are akin to stand-in terms in other disciplines: “widget,” used in economics to refer to some product, but no particular product, that is manufactured and sold; “John Doe,” used to refer to an unspecified perpetrator or victim in a legal context; or “thingy,” used in everyday conversation. The chapter starts with a thorough history and etymology of “foobar,” a term I had always thought came from the American armed forces of WWII8 but that Lennon shows was already in circulation in MIT student publications of the 1930s (p. 79).
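A typical use of these placeholders, of the kind found in countless manuals (this snippet is a generic illustration, not drawn from Lennon’s sources):

```python
# "foo" and "bar" as metasyntactic variables: the names promise nothing
# about type, value, or domain; they simply mark "some variable here."
foo = 42
bar = "anything at all"

def swap(foo, bar):
    """Return the two arguments in reverse order, whatever they are."""
    return bar, foo
```

The names do no semantic work at all, which is exactly what qualifies them for example code.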
After consideration of metasyntactic variables (like “foo” and “bar,” but additionally “baz,” “qux,” etc.), the chapter discusses other common naming conventions — such as the use of “i,” “j,” and “k” as loop variables — and then focuses on the discipline exerted on programmers to name and format variables in a manner that makes code easier to understand. Thus, for example, a programming manual might dictate that one write a multiword variable in mixed case, like this: netIncomeAfterTax, or with underscores, like this: net_income_after_tax. Any given tech company will specify how to format variables in a style manual for its programmers but, more importantly, will also provide naming conventions, since, formally, one can use practically any sequence of characters for a variable name. If a variable is named xyzqrs78w instead of netIncomeAfterTax, the code may still work but will be much more difficult for programmers to understand.
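The point about names is easy to demonstrate. The two functions below (both invented here, reusing the review’s hypothetical identifiers) behave identically; the names are all that separates legible code from a puzzle:

```python
# Two behaviorally identical functions; only the identifiers differ.

def xyzqrs78w(a, b):
    return a - a * b

def net_income_after_tax(gross_income, tax_rate):
    return gross_income - gross_income * tax_rate

# Both compute the same thing, but only one tells the reader what.
```

The machine is indifferent to the choice; the style guide, and the next programmer to read the code, are not.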
Many of the same texts on comments cited in the previous chapter reappear here, since the naming of variables is just as crucial to making code readable. Analogously, the liberty to name variables opens a liminal space in which programmers describe their own working conditions. In Lennon’s words, “It may seem dramatic to suggest that in a domain of such radical constraint, this freedom to choose names becomes a site of struggle, even a kind of suffering, as the programmer is faced with elaborating ad hoc a second, entirely separate, exclusively human-readable task-and-program-specific semantics interlayered with that of strictly determined machine-readable syntax, in those portions of the program that run. But if this freedom were not also a curse, it would not be the object of so much legislation in programming language style guides and manuals” (p. 80).
Chapter Four, “Snobol: A Rememory of Programming Language History,” recounts a history of the now-obsolete programming language Snobol (StriNg Oriented and symBOlic Language). Repeatedly referenced are the writings on Snobol by the language’s chief designer, Ralph Griswold, originally at Bell Labs and later the founding chair of the Computer Science Department at the University of Arizona.
The primary data type of Snobol was the “string,” a data type now common in many languages but unusual at the time. Originally, Snobol was intended to support the development of programs for computer science and mathematics that require the string data type (e.g., algebraic formula manipulation, term rewriting systems, compilers and interpreters for programming languages). However, over the course of its existence, its main areas of application became linguistics, humanities computing, and text processing for social science. This is notable because programming languages are usually developed by a given community for a given community. Yet, despite the fact that “there is no reason to believe … any humanities scholar influenced the design and development of Snobol, at any time in its history,” (p. 122) Snobol became a “humanities programming language” (p. 124).
At the end of the chapter, Lennon coins the term “rememory” to describe the relationship of Snobol to another programming language, C, developed at about the same time, also at Bell Labs. In comparison, strings in C were, and remain, a marginal entity, difficult to handle in a language that instead strongly supports the memory “pointer” data type. Consequently, C is ideal for the development of tools for computer memory management. Lennon writes, “it is in their incommensurability that Snobol and C might be understood as mutually reflected images or ‘rememories’ of each other, emerging as they did from the same institution at virtually the same moment” (p. 130).
C began as a tool for computer scientists and remains one today. Snobol began as a tool for computer scientists but found a completely different user community in the humanities. Lennon points out that the development of JavaScript “rather neatly inverts Snobol’s evolution from a project of and for computing experts and specialists into a utility for humanities scholars” (p. 131) and proposes JavaScript as the second part of a dual case study “whose contrasts illuminate the complexity of the links among software, automation, and expertise” (p. 131).
Chapter Five, “JavaScript Affogato: Negotiations of Expertise,” as cited above in this review (“What we call ‘JavaScript’ is not just a programming language…” p. 137) is a keystone for the book, solidifying the connections between the technical details of the programming languages considered and their role in the struggle over automation between labor and capital. In this chapter, Lennon clearly articulates his position that programming languages are only codes, not languages, and — at the same time — they are not just codes but cultures, work cultures materialized in programming tools and management practices.
The history of JavaScript is sketched out, beginning with its initial release in 1995 by Sun Microsystems and Netscape Communications as a “complement” to the systems language Java (p. 134). JavaScript was envisioned to give web designers a simple scripting language for manipulating the look of webpages and for interfacing with server-side tools built in Java. Over time, JavaScript came to be seen as Java’s equal, useful not just for interfacing with bigger systems and prettying up webpages but for building the larger systems too. JavaScript thus started life advertised as a simple scripting language for non-programmers and became the dominant language of expert computer programmers.9 Lennon sees this as the reverse of Snobol’s development.
However, JavaScript is also what Lennon calls a “translational programming language.” It bridges categories of programming languages: it is both a scripting and a system programming language, it supports multiple programming paradigms (functional, applicative, object-oriented), it can be used for full-stack development, from client-side to server-side development, and tools exist for “translating,” compiling, or interpreting other languages (including C/C++, Java, Perl, Python, Ruby, etc.) into JavaScript so that codebases can be easily merged.
This trajectory of JavaScript, from small and simple scripting language to Swiss-Army-knife-and-bulldozer for any and all programming tasks, has fostered the development of many complicated tools, frameworks (e.g., React, Ember.js, etc.), and varieties of the language (e.g., CoffeeScript, TypeScript, etc.). Consequently, the job of a JavaScript programmer has changed considerably: to know JavaScript as a professional entails many new forms of expertise and a constant effort to update one’s skills with the newest developments.
Chapter Six, “DevOps Fiction: Workforce Relations in Technology Industry Novels,” examines the sociotechnical imaginary of software project management theory: part attempted inspirational fiction meant to change the jobs of the software industry, part non-fictional manuals and how-tos. The best aspects of this chapter remind me of historian of science Mary Poovey’s book, Genres of the Credit Economy: Mediating Value in Eighteenth- and Nineteenth-Century Britain,10 where she examines the history of financial instruments to learn how they became part of our everyday world, reading together forms of writing that are not usually viewed together, from bills of exchange and bank checks, to realist novels and Romantic poems, to economic theory and financial journalism.
In the chapter, Lennon identifies the origins of the term “DevOps” in the literature of software engineering and management (the Operations Engineering group of flickr.com who merged development and operations to enable versions of a software application to be released multiple times per day, pp. 159-160) and then explores what he calls “DevOps fiction” that “imagines the management of routine maintenance” (p. 155) and its coordination with the development of new software. The recurrent drama is the opposition between developers who create new software and the operations and testing departments that need to maintain the existing code. “DevOps” is the practical and fictional resolution of this dramatic conflict.
Lennon writes,

In its focus on the maintenance of the production of software and the internal ‘ops’ through which cultures of software development are managed, … DevOps fiction is best understood as a literary branch of the line of software project management commentary that begins with Gerald M. Weinberg’s The Psychology of Computer Programming (1971). (p. 156)

In many ways, this chapter is foundational for the whole book. We learn (p. 163), for instance, that the phrase used as the subtitle of Lennon’s book, “automation of automation,” comes from a software management book, Robert Britcher’s 1999 The Limits of Software: People, Projects, and Perspectives.11 Most importantly, this final chapter provides a resolution to three arguments Lennon posits as foundational for the book: “1. Technical ignorance of computing is a practical, intellectual, and sociopolitical problem” (p. 18); “2. Automation is not a myth” (p. 20); and “3. Automation is a moving target.”
The emphasis on these questions is clarified via a key reference for Lennon, Luke Munn’s 2022 book, Automation Is a Myth.12 Munn is mentioned in the introduction but discussed and contextualized in detail only in the last chapter. Lennon cites Munn: “While the automation optimists and the technopessimists may differ, they share the same underlying assumption of technical takeover. Whether embraced as a bright future or rejected as a dark dystopia, the prognosis is remarkably consistent: full automation will fully replace the human (Munn 2022, 15)” (p. 166). Lennon argues that by focusing on “full automation,” a state that is unlikely to ever be reached, even in the fantasies of capitalism, media theorists13 overlook the very real conditions of partial automation — and their implications for capital and labor — brought about by software.
This, according to Lennon, is just as theoretically disabling as the obverse, minimizing rather than mythologizing the effects of software on economics and politics: “radical economists and political thinkers,” such as David Harvey and Vijay Prashad, “have intentionally or unintentionally minimized the role of” information and communications technologies (p. 166).
Lennon’s purpose is neither to minimize nor to mythologize but to follow closely the literatures that accrue around programming languages, and software more generally, to make plain how specific forms of automation, imagined or implemented, have very real implications for contemporary conditions of labor and capital.

  1. Alan Perlis, “Epigrams on Programming,” ACM SIGPLAN Notices 17(9) (1982), https://www.cs.yale.edu/homes/perlis-alan/quotes.html
  2. N. Katherine Hayles, My Mother Was a Computer: Digital Subjects and Literary Texts (University of Chicago Press, 2005): 15. Almost forty years before Hayles’s observation, in 1966, computer scientist Peter Landin wrote about the estimated 1700 programming languages then in circulation. Peter Landin, “The next 700 programming languages,” Communications of the ACM 9(3) (1966): 157–166, http://doi.acm.org/10.1145/365230.365257. Contemporary reviews of Landin’s predictions of 1966 include Robert Chatley, Alastair Donaldson, and Alan Mycroft, “The next 7000 programming languages,” Computing and Software Science: State of the Art and Perspectives (Springer, 2019): 250-282.
  3. For a more extended discussion of the distinctive concerns of philological scholarship see Michel Foucault, The Order of Things: “There are four theoretical segments that provide us with indications of its (philology’s) constitution early in the nineteenth century. … 1. The manner in which a language can be characterized from within and distinguished from other languages. … 2. The study of … internal variations constitutes the second important theoretical segment. … 3. This definition of a law for consonantal or vocalic modifications makes it possible to establish a new theory of the root. … 4. The analysis of roots made possible a new definition of the systems of kinship between languages. And this is the fourth broad theoretical segment that characterizes the appearance of philology.” Michel Foucault, The Order of Things: An Archaeology of the Human Sciences (Vintage, 1966). For a book-length introduction to philology, see James Turner, Philology: The Forgotten Origin of the Modern Humanities (Princeton University Press, 2014).
  4. Sheldon Pollock, “Future Philology? The Fate of a Soft Science in a Hard World,” Critical Inquiry 35 (2009)
  5. Mark Marino, Critical Code Studies (MIT Press, 2020)
  6. We can understand that the machines and automata of computing were at one point not considered to be linguistic entities — until a discourse of linguistics and translation was imposed on them. Early work illustrates how computational capabilities and limitations could be analyzed without reference to the tropes of linguistics; e.g., Marvin Minsky, Computation: Finite and Infinite Machines (Prentice Hall, 1967). From this earlier perspective, code is the implementation of an automaton, a device, or a machine. Lennon’s stated aim to examine code (the code of programming languages) exclusively as a means of automation might have been better served by troping codes as machines and not as languages. However, the contrast he draws with critical code studies (communication versus automation) seems to be too stark for the larger goals of the book since Lennon does address issues of communication (especially in his chapters on comments and variable names) in addition to concerns of automation.
  7. RFC 3092 (first published 1 April 2001)
  8. In the film Saving Private Ryan, Captain Miller (Tom Hanks) keeps using the phrase “fubar,” recurrently perplexing the soldier-translator Upham (Jeremy Davies), until Mellish (Adam Goldberg), while collecting ammunition, indirectly tells him that fubar means “fucked up beyond all recognition” (https://savingprivateryan.fandom.com/wiki/Fubar).
  9. According to the annual survey of software developers conducted by Stack Overflow, in 2024 “JavaScript (62%), HTML/CSS (53%), and Python (51%) top the list of most used languages for the second year in a row,” https://stackoverflow.blog/2024/07/24/developers-want-more-more-more-the-2024-results-from-stack-overflow-s-annual-developer-survey/
  10. Mary Poovey, Genres of the Credit Economy: Mediating Value in Eighteenth- and Nineteenth-Century Britain (University of Chicago Press, 2008)
  11. Robert Britcher, The Limits of Software: People, Projects, and Perspectives (Addison-Wesley, 1999)
  12. Luke Munn, Automation Is a Myth (Stanford University Press, 2022)
  13. And others including, for instance, Aaron Bastani, Fully Automated Luxury Communism (Verso, 2019)