A Matter of Interpretation: A review of ‘Structure and Interpretation of Computer Programs (JavaScript Edition)’

Article Information

  • Author(s): Warren Sack
  • Affiliation(s): University of California, Santa Cruz
  • Publication Date: July 2023
  • Issue: 9
  • Citation: Warren Sack. “A Matter of Interpretation: A review of ‘Structure and Interpretation of Computer Programs (JavaScript Edition)’.” Computational Culture 9 (July 2023). http://computationalculture.net/a-matter-of-interpretation-a-review-of-structure-and-interpretation-of-computer-programs-javascript-edition/.


Abstract

Review of: Abelson H, Sussman GJ, Sussman J, et al. (2022) Structure and Interpretation of Computer Programs. JavaScript edition. Cambridge, Massachusetts: The MIT Press.


The history of science and technology has, until relatively recently, neglected the study of textbooks. However, more recent scholarship has shown how important they have been, not only pedagogically, but also to the creation and direction of various fields like chemistry and physics.1 Arguably computer science textbooks are and have been pivotal for the field and are therefore an appropriate object of review for software studies. One of these pivotal computer science textbooks is Harold Abelson and Gerald Sussman’s Structure and Interpretation of Computer Programs (MIT Press, 1985). Last year, MIT Press issued a third edition, the JavaScript edition.

By 1985, when Abelson and Sussman published the first edition of their introductory textbook, the materials it incorporated had already been in circulation at MIT (since 1980) and beyond in their intellectual community, a community that included Turing Award winner and co-founder of the field of computer science Alan Perlis, who wrote the following in his foreword to the book:

The source of exhilaration associated with computer programming is the continual unfolding within the mind and on the computer of mechanisms expressed as programs and the explosion of perception they generate. If art interprets our dreams, the computer executes them in the guise of programs! (xi)

For Perlis, computer programming was not so much a technical skill as it was a new form of literacy, one that he had been advocating for all students since 1962 when he wrote, “I personally feel that the ability to analyze and construct processes is a very important ability, one which the student has to acquire sooner or later. I believe that he does acquire it in a rather diluted way during four years of an engineering or science program. I consider it also important to a liberal arts program.”2

Perlis saw programming as a source of exhilaration, a basic form of literacy, and fun: “I think it’s extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun” (v). This notion that programming is fun is perplexing if not paradoxical for those who understand software as strictly a means to automate (post)industrial processes. In one of a series of witticisms about programming, Alan Perlis compares this fun to word play: “Like punning, programming is a play on words.”3 More recently, Olga Goriunova and the contributors to her edited collection, Fun and Software, have examined this notion in depth.4

Perlis and the authors of the textbook are renowned computer scientists who, paradoxically, have a complicated relationship to computer science as a field of study. In July of 1986, the year after the publication of the textbook, Abelson and Sussman were asked to teach Hewlett-Packard engineers the MIT course (6.001) for which they had developed the textbook. The videos of that course can be found on YouTube, uploaded by the MIT OpenCourseWare project.5 In the opening minutes of the first lecture, Abelson starts by explaining why computer science is not a science and has little to do with computers. In the preface to the textbook, Abelson and Sussman make the same point like this:

Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to” (xvi).

In the video Abelson continues: “It might be engineering or it might be art, but we’ll actually see that computer so-called science actually has a lot in common with magic.” For this reason, subsequent editions of the textbook featured a wizard on the cover, as does the current edition: Harold Abelson and Gerald Jay Sussman, adapted to JavaScript by Martin Henz and Tobias Wrigstad with Julie Sussman, Structure and Interpretation of Computer Programs, JavaScript Edition (MIT Press, 2022). Computer science educators refer to the book by the acronym SICP, or simply as the “Wizard Book.”

Readers of this journal might venture that the title refers to critical code studies avant la lettre.6 Instead, it is important to know that “interpretation” in computer science is an algorithmic process for executing source code on the computer, and that “interpreters” (or “evaluators”) are the computer programs that perform this algorithmic process.
It is not until chapter 4, “Metalinguistic Abstraction” (with an epigraph on magic), that the authors fully explain the title and their approach. I cite the text from the JavaScript Edition:

Metalinguistic abstraction — establishing new languages — plays an important role in all branches of engineering design. It is particularly important to computer programming, because in programming not only can we formulate new languages but we can also implement these languages by constructing evaluators. An evaluator (or interpreter) for a programming language is a function that, when applied to a statement or expression of the language, performs the actions required to evaluate that statement or expression. It is no exaggeration to regard this as the most fundamental idea in programming:
The evaluator, which determines the meaning of statements and expressions in a programming language, is just another program.
To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others (p. 318).

While this may sound like an idiosyncratic approach to teaching and learning programming, it is not. Many prominent computer science educators have also been programming language designers and, together, can be understood as a school of thought. Alan Perlis was a contributor to the design of the ALGOL and APL languages. Gerald Sussman was the co-designer (with Guy Steele) of the Scheme programming language (a dialect of Lisp used for the first and second editions of SICP). Alan Kay was a co-designer of Smalltalk and Squeak; Seymour Papert was Sussman’s dissertation advisor and co-designer, with Wally Feurzeig and Cynthia Solomon, of Logo, a language for children to which Abelson also contributed; Mitchel Resnick was also Papert’s student and is the co-designer of the Scratch programming language for kids; Daniel Friedman was a prominent contributor to Scheme and is the author of many programming textbooks; Matthias Felleisen was Friedman’s student and co-designer of a version of Scheme, called Racket, intended to be used to implement programming languages; Robert Bruce Findler was Felleisen’s student, a co-developer of Racket, and co-author with him of How to Design Programs: An Introduction to Programming and Computing (HtDP),7 which was intended to address many of the problems they found while teaching the SICP text; Matthew Flatt and Shriram Krishnamurthi, also students of Felleisen, were co-developers of Racket and co-authors of HtDP. Prominent educators of art and computing have also been language designers: e.g., Casey Reas and Ben Fry were the architects of Processing; Lauren McCarthy is the designer of P5.js. This tradition of computing educators who are also programming language designers goes back at least to John Kemeny and Thomas Kurtz, who invented the BASIC language at Dartmouth College in 1963.

The two main professional organizations of computer science — the IEEE and ACM — each give an award for the outstanding computing educator of the year: respectively, the IEEE Computer Science & Engineering Undergraduate Teaching Award8 and the ACM Karl V. Karlstrom Outstanding Educator Award.9 Perusing the lists of past awardees, one can spot several educators who are also language designers. The citation for Abelson’s 2011 ACM award reads, in part,

His textbook Structure and Interpretation of Computer Programs, co-authored with Gerry Sussman, changed the way many thought about computing. It was widely emulated and adopted by colleges around the world. … This was not the only time Abelson revolutionized education. He was influential in introducing a curriculum based on the Logo programming language.10

So the approach to computing taken in SICP may be understood to be not idiosyncratic to Abelson and Sussman but representative of a widely supported approach to computing, a language designers’ approach. The size of this school of thought could explain the previous wide adoption of SICP. The SICP website archives a list of over one hundred colleges (in Asia, Australia, Europe, and North America) using SICP as of 1999.11 Necessary additions for the current, JavaScript, edition would be the National University of Singapore and Uppsala University (Sweden), where the adapters/co-authors of the current text, Martin Henz (who, in addition to SICP, also teaches the upper-division course CS4215 Programming Language Implementation) and Tobias Wrigstad (whose research focuses on programming languages), are, respectively, faculty. I call this approach and those who advocate for it the language designers’ school of computing education or, following Abelson and Sussman, the metalinguistic approach.

In the textbook, the prominent role of interpreters and interpretation is not fully disclosed until the fourth chapter. That disclosure is followed by a paragraph revealing that the entire text is structured around the implementation and description of a series of interpreters:

[W]e can regard almost any program as the evaluator for some language. For instance, the polynomial manipulation system of section 2.5.3 embodies the rules of polynomial arithmetic and implements them in terms of operations on list-structured data. If we augment this system with functions to read and print polynomial expressions, we have the core of a special-purpose language for dealing with problems in symbolic mathematics. The digital-logic simulator of section 3.3.4 and the constraint propagator of section 3.3.5 are legitimate languages in their own right, each with its own primitives, means of combination, and means of abstraction. Seen from this perspective, the technology for coping with large-scale computer systems merges with the technology for building new computer languages, and computer science itself becomes no more (and no less) than the discipline of constructing appropriate descriptive languages [my emphasis] (318).

In a chapter titled “Domain-Specific Languages” in a book also published in 2022, Sussman and his co-author, Chris Hanson, reiterate this point almost forty years after the original edition of SICP:

One powerful strategy for building flexibility into a programming project is to create a domain-specific language that captures the conceptual structure of the subject matter of the programs to be developed. A domain-specific language is an abstraction in which the nouns and verbs of the language are directly related to the problem domain. Such a language allows an application program to be written directly in terms of the domain (21).12

Abelson, Sussman, Henz, Wrigstad, and Sussman assert computing to be the discipline of designing computer programming languages. They continue:

We now embark on a tour of the technology by which languages are established in terms of other languages. In this chapter we shall use JavaScript as a base, implementing evaluators as JavaScript functions. We will take the first step in understanding how languages are implemented by building an evaluator for JavaScript itself (318).

That implementing JavaScript in JavaScript is “the first step in understanding how languages are implemented” is unexpected but reminiscent of contemporary approaches to language learning via “immersion” in which students learn without recourse to translation. Sussman’s dissertation advisor, Seymour Papert (co-designer of Logo) insisted “it is possible to design computers so that learning to communicate with them can be a natural process, more like learning French by living in France than like trying to learn it through the unnatural process of American foreign-language instruction in classrooms.”13

Depending partly on how complicated the syntax of a language is and on the built-in facilities for handling that syntax, the code for defining a meta-circular evaluator (the definition of a programming language in that same language) can be concise or voluminous. A meta-circular evaluator for Scheme is relatively concise because the syntax of Scheme programs is simple and includes only parentheses-delineated lists, a few punctuation marks, and a very constrained set of keywords; for example, here is a list in Scheme with three members: '(1 2 3). In contrast, a meta-circular evaluator for a language like JavaScript is significantly longer because its syntax includes a lot of punctuation marks, some of which are used in more than one way; e.g., {} could be an empty block of code or an empty object; the + operator might be used to add two numbers or to concatenate two strings.
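To make the ambiguity concrete, consider these small fragments of JavaScript (my examples, not the textbook's):

// An opening brace begins either a statement block or an object literal,
// depending on context:
const emptyObject = {};   // {} in expression position: an empty object
{ }                       // {} in statement position: an empty block
// The + operator is similarly overloaded:
1 + 2;                    // 3: addition of numbers
"1" + "2";                // "12": concatenation of strings

A meta-circular evaluator for JavaScript must disambiguate every such case, which is part of why it runs so much longer than one for Scheme.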

Other languages, like Prolog, have both a simple syntax and built-in facilities for parsing that syntax. In Prolog — a logic programming language — “facts” can be asserted into the database in prefix format. For example,
man(socrates).
is the assertion that Socrates is a man. “If-then rules” are written “backwards” as “then-if” rules. So, to state if someone is a man, then they are mortal, one writes
mortal(X) :- man(X).
The body of the predicate (that which follows the :-) is a conjunct of conditions (delineated by commas) that must be proven true (by consulting the database) for the head of the predicate (that which precedes the :-) to be true. So the query to the Prolog interpreter
?- mortal(socrates).
returns
true.
because when X = socrates, man(X) can be immediately found in the database. Now imagine the pedagogical impasse one would face if one then told new students that with this one example predicate and a meta-circular evaluator they had all they need to start programming in Prolog. Here is a meta-circular evaluator for Prolog:
mc_eval(true).
mc_eval((Goal1,Goal2)) :- mc_eval(Goal1),mc_eval(Goal2).
mc_eval(Goal) :- clause(Goal,Body),mc_eval(Body).

The first line/clause asserts that the lexeme “true” indicates the query is true and thus the evaluator (mc_eval) returns successfully. The second line indicates that a conjunct of two queries, aka “goals,” is to be evaluated by evaluating the first and then invoking the evaluator on the second. The third line seeks a then-if rule with the head containing the goal; if the body evaluates successfully, then the head, the goal, is said to be proven and the evaluator returns successfully; e.g., to evaluate mortal(socrates) the then-if rule (the clause) mortal(X) :- man(X) is found, and the body, man(X), is evaluated with X bound to socrates.

There are other, better ways — beyond simple syntax — to identify whether or not a meta-circular evaluator can be written concisely for a given language. First, such an interpreter is easier to write if data and programs can be written using the same syntax. In Scheme, both data and programs are written as lists or nested lists, lists of lists; e.g., here is a program for adding two numbers as expressed in Scheme:
(define (add x y) (+ x y))

Second, meta-circular interpreters are easier to write for programming languages that have a simple semantics. The main mechanism of evaluation for Scheme (and essential to all dialects of Lisp) is called “beta reduction,” which means, essentially, that the variables or parameters of a function are bound to values and then those values replace the variables in the body of the function. So, given the definition above and then this expression
(add 1 2)
x is associated with 1 (alternatively phrased as x is bound to 1) and y with 2. To evaluate the body of the definition, x is replaced with 1 and y replaced with 2
(+ 1 2)
the addition operator is invoked and the result is
3.
Prolog also has a simple semantics based on unification and resolution and so a meta-circular evaluator for Prolog can be written in three lines of code (as above).
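For comparison with the Scheme definition above, the same function written in the notation of the JavaScript edition would be:

function add(x, y) { return x + y; }
// Evaluating add(1, 2) binds x to 1 and y to 2, reduces the body to 1 + 2,
// and returns 3, just as in the Scheme example.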

JavaScript’s semantics are not so simple. Consequently, SICP’s implementation of a meta-circular evaluator for JavaScript occupies 36 pages of the text (pp. 319-355). For comparison, in the first edition of SICP, the meta-circular evaluator for Scheme took only 20 pages of the textbook to define.14 Nevertheless, there are deep connections between Scheme and JavaScript which make the JavaScript edition of SICP feasible. Douglas Crockford, in a widely read book of 2008, JavaScript: The Good Parts,15 showed that at the center of JavaScript is a set of language design decisions indebted to Scheme. Crockford advocated avoiding the parts of JavaScript outside this core (the “bad parts”), resulting in better code. Brendan Eich, who invented the JavaScript language in 1995, used Scheme as a model for the language. And Guy Steele, co-designer (with Sussman) of the Scheme language and author of the foreword of the JavaScript edition of SICP, was also the editor of the first edition of the standard for JavaScript, called ECMAScript. In his foreword to SICP, Steele describes the commonalities like this:

What do Lisp [including the Lisp dialect Scheme] and JavaScript have in common? The ability to abstract a computation … for later execution as a function; the ability to embed references to such functions within data structures; the ability to invoke functions on arguments; the ability to draw a distinction (conditional execution); a convenient universal data structure; completely automatic storage management for that data …; a large set of useful functions for operating on that universal data structure; and standard strategies for using the universal data structure to represent more specialized data structures (xv).

These attributes of Scheme and JavaScript are important for the language designers’ approach, the metalinguistic approach, to computing education because they are essential for a programming language intended to be used for implementing other programming languages. If, in Sussman and Hanson’s words, Scheme and JavaScript are not domain-specific languages, it is because they are languages that allow one to implement other (possibly domain-specific) programming languages. And, with respect to the metalinguistic approach, “the first step in understanding how languages are implemented” is the implementation of a meta-circular evaluator because, with the source code of the evaluator in hand, one can then make incremental changes to the language and thus demonstrate a family of related languages by rewriting key parts of the meta-circular evaluator.
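To give a flavor of what such an evaluator looks like, here is a drastically reduced sketch of its evaluate/apply core in JavaScript. This is my illustration, not the book’s section 4.1 code: the representation of expressions as tagged objects is my own simplification, and the real evaluator handles declarations, assignment, sequencing, conditionals, and much more.

function evaluate(exp, env) {
    if (typeof exp === "number") {           // literal values evaluate to themselves
        return exp;
    } else if (typeof exp === "string") {    // names are looked up in the environment
        return lookup(exp, env);
    } else if (exp.tag === "lambda") {       // function expressions capture the environment
        return { tag: "compound", parameters: exp.parameters,
                 body: exp.body, env: env };
    } else {                                 // applications: evaluate the operator and
        const fun = evaluate(exp.operator, env);          // the operands, then apply
        const args = exp.operands.map(operand => evaluate(operand, env));
        return apply(fun, args);
    }
}

function apply(fun, args) {
    if (typeof fun === "function") {         // primitive functions run directly
        return fun(...args);
    } else {                                 // compound functions: evaluate the body in an
        const frame = {};                    // environment extended with argument bindings
        fun.parameters.forEach((p, i) => { frame[p] = args[i]; });
        return evaluate(fun.body, { frame: frame, enclosing: fun.env });
    }
}

function lookup(name, env) {                 // walk outward through enclosing frames
    return name in env.frame
           ? env.frame[name]
           : lookup(name, env.enclosing);
}

// Example: ((x, y) => x + y)(1, 2), with + supplied as a primitive
const theGlobalEnv = { frame: { "+": (a, b) => a + b }, enclosing: null };
const program =
    { tag: "application",
      operator: { tag: "lambda", parameters: ["x", "y"],
                  body: { tag: "application", operator: "+",
                          operands: ["x", "y"] } },
      operands: [1, 2] };
console.log(evaluate(program, theGlobalEnv)); // prints 3

Even in this toy form, the pedagogical point is visible: the evaluator is just another program, and changing a few of its lines changes the language it implements.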

After the implementation of the meta-circular evaluator in section 4.1, SICP continues like this: “Now that we have an evaluator expressed as a JavaScript program, we can experiment with alternative choices in language design simply by modifying the evaluator. Indeed, new languages are often invented by first writing an evaluator that embeds the new language within an existing high-level language” (360). Section 4.2 implements an interpreter with lazy evaluation (a technique in which the value of an expression is not computed until it is needed in another computation). Section 4.3 implements a JavaScript with nondeterministic evaluation that supports the automatic search for values that satisfy a set of constraints. Section 4.4 demonstrates the implementation of a logic programming language (like Prolog, mentioned above) that belongs to an entirely different programming paradigm from JavaScript. These extensions to the evaluation of JavaScript are critical to what the authors see as the most important new theme — time — introduced in the second edition of the text (and maintained in the third, JavaScript, edition): “different approaches to dealing with time in computational models: objects with state, concurrent programming, functional programming, lazy evaluation, and nondeterministic programming” (xxiii).
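The device underlying section 4.2’s lazy evaluator can be sketched in ordinary JavaScript, outside the evaluator. The names delay and force below are borrowed from Scheme usage; the sketch itself is mine, not the book’s code:

function delay(thunk) {               // wrap a computation in a "thunk"
    let evaluated = false;
    let value;
    return () => {                    // force it at most once, memoizing the result
        if (!evaluated) {
            value = thunk();
            evaluated = true;
        }
        return value;
    };
}
function force(delayed) { return delayed(); }

const lazy = delay(() => { console.log("computing..."); return 6 * 7; });
force(lazy);   // prints "computing..." and returns 42
force(lazy);   // returns 42 again, without recomputing

The lazy evaluator builds this behavior into the language itself, so that arguments to functions are delayed automatically.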

Different models of time are employed in different programming languages, where “time” determines which instruction of a program is executed when. For example, conventional languages execute instructions one at a time; parallel languages execute two or more instructions at the same time. The first edition of SICP introduced more than one approach to time (e.g., the use of lazy evaluation for the implementation of infinite streams); the second and subsequent editions provide an introduction to multiple approaches to time, thus providing the reader with the means to understand and conceptualize radically different kinds of programming languages.
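The infinite streams just mentioned give a feel for why this matters: a stream can be represented as a value paired with a thunk that produces the rest of the stream only on demand, so an “infinite” sequence occupies finite memory. A minimal sketch (my example, not the book’s stream library):

function integersFrom(n) {            // an infinite stream of integers starting at n
    return { head: n, rest: () => integersFrom(n + 1) };
}
function take(stream, count) {        // realize only the first count elements
    const result = [];
    let s = stream;
    for (let i = 0; i < count; i = i + 1) {
        result.push(s.head);
        s = s.rest();
    }
    return result;
}
console.log(take(integersFrom(1), 5));   // prints [ 1, 2, 3, 4, 5 ]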

As an undergraduate in computer science, I took a version of the course supported by SICP (MIT 6.001) three years before the first edition of the book was published. At Yale College, in Alan Perlis’s fall 1982 offering of CS 221a, Introduction to Computer Science, we, the students, were asked to learn two languages (APL and a version of Scheme called T) and implement a series of compilers and interpreters for languages that were neither APL nor Scheme. Perlis’s syllabus and the three editions of SICP can be understood to have the same structure: principles of programming introduced through the implementation of a series of programming language interpreters. For example, this was problem set 6 assigned on November 11, 1982: “Write a compiler that will translate programs in the small algebraic language into assembly language. Use your interpreter to control compilation.” [A compiler is like an interpreter but translates source code all at once, rather than one statement at a time.]

The metalinguistic approach to computing education has a nuanced relationship to the notion of literacy explored by Annette Vee in her book Coding Literacy: How Computer Programming is Changing Writing, where she writes, “Seeing programming in light of the historical, social, and conceptual contexts of literacy helps us to understand computer programming as an important phenomenon of communication, not simply as another new skill or technology. … It also offers fresh approaches to teaching programming—programming not as a problem-solving tool, but as a species of writing. And thinking of programming as literacy may help us to prepare for a possible future where the ability to compose symbolic and communicative texts with computer code is widespread.”16 Vee’s approach is to see programming as a form of literacy, and thus as a form of communication between people, and to track the different modes of literacy, and hence of politics, that have developed over time.
The metalinguistic approach is also about communication, but communication between the programmer and the computer, with programming languages understood as the means of that communication. Alternatively, the metalinguistic approach can be conceptualized as a form of thinking (cf. Jeannette Wing’s notion of “computational thinking”)17 in which code is used to model processes and shape the very metaphors of an object of study. We see this commitment to code as epistemological, even ontological, in sciences like genetics, climatology, and astronomy, where simulations and the computational analysis of big data are critical. Analogously, code correctness is crucial to various forms of engineering where simulation is key; e.g., in the design of airplanes and other large and complicated structures.

Interestingly, there is disagreement now about whether one needs to actually understand code even in those areas of science and engineering where code is crucial. Google software engineer Sean McDirmid, posting to the popular programming language design blog “Lambda the Ultimate” (a reference to the title of a series of famous papers Gerald Sussman and Guy Steele wrote about Scheme18), reported, perhaps apocryphally, the following:

In this talk at the NYC Lisp meetup, Gerry Sussman was asked why MIT stopped teaching the legendary 6.001 course, which was based on Sussman and Abelson’s classic text The Structure and Interpretation of Computer Programs (SICP). Sussman’s answer was that … they felt that the SICP curriculum no longer prepared engineers for what engineering is like today. … He said that programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?’” The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.19

It seems unlikely Sussman actually made this statement for a couple of reasons. First, even though it is true that the SICP course (MIT 6.001) is no longer taught as it was, for years after its demise Sussman taught a course that essentially builds on SICP (MIT 6.945 Adventures in Advanced Symbolic Programming20). Second, the textbook for 6.945, the 2022 book cited above, by Sussman and Hanson, is essentially SICP taken to the next level. Nevertheless, even if Sussman himself did not make the statement, the statement cited by McDirmid is representative of a broad consensus, one that is only growing because of the new capabilities of artificial intelligence. For example, one can query ChatGPT to implement a meta-circular evaluator in JavaScript and it will return some code that works for a small subset of JavaScript. Why should we learn how to design and implement new languages if ChatGPT can do it for us?

In fact, there is over half a century of analogous argumentation that coding will soon be obsolete because it will be automated.21 Why it is not obsolete is perhaps easiest to demonstrate with another query to ChatGPT:

query: Just as the programming languages Processing and P5.js have been designed for visual artists, Max was designed for musicians and sound artists. Design a special purpose programming language for filmmakers and implement the interpreter in JavaScript.

The response, from ChatGPT, was not wildly off target. In fact, it listed (in English) a series of design criteria for the language, saying that it should include a timeline, layers, scripting, events and triggers, and a means for people to collaborate. But the generated code was a disappointment, composed of a short list of “stubs” with names but almost no contents. For instance, here was ChatGPT’s definition, in JavaScript, of a timeline:
class Timeline { constructor() { this.clips = []; } }
Not much there!

If we follow the definitions of programming ventured by popular advocates, like code.org, in which learning to code is tantamount to mastering the widely used parts of a specific programming language, like JavaScript, for the purposes of automation, then the abilities of ChatGPT and other large language models (GPT-4, etc.) seem to make programming — aka coding — obsolete. In contrast, the authors of SICP argue that learning any given programming language is not very important and that automation is not the primary purpose of programming. Rather, it is the ability to design and implement new languages, as a form of thinking, that is important and advocated in all three editions of SICP.

Notes

  1. See David Kaiser (editor), Pedagogy and the Practice of Science (MIT Press, 2005)
  2. Alan Perlis, “The Computer in the University,” in M. Greenberger (editor), Management and the Computer of the Future (Cambridge, MA: MIT Press, 1962), 210.
  3. Alan Perlis, #46 of “Epigrams in Programming,” http://www.cs.yale.edu/homes/perlis-alan/quotes.html
  4. Olga Goriunova (editor), Fun and Software: Exploring Pleasure, Paradox and Pain in Computing (Bloomsbury Academic, 2014)
  5. MIT OpenCourseWare, MIT 6.001 Structure and Interpretation, 1986, https://www.youtube.com/playlist?list=PLE18841CABEA24090
  6. Mark Marino, Critical Code Studies (MIT Press, 2020)
  7. Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi, How to Design Programs, Second Edition: An Introduction to Programming and Computing (MIT Press, 2018)
  8. See https://www.computer.org/volunteering/awards/cse-undergrad-teaching
  9. See https://awards.acm.org/karlstrom/award-recipients
  10. https://awards.acm.org/award-recipients/abelson_4239273
  11. https://mitp-content-server.mit.edu/books/content/sectbyfn/books_pres_0/6515/sicp.zip/adopt-list.html
  12. Chris Hanson and Gerald Jay Sussman, Software Design for Flexibility: How to Avoid Programming Yourself into a Corner (MIT Press, 2022)
  13. Seymour Papert, Mindstorms: Children, Computers, And Powerful Ideas (Basic Books, 1980), 6.
  14. The history of the project to create a JavaScript edition, code and programming resources, an interactive version of the text, and a side-by-side comparison of the second edition with the JavaScript edition can all be found at this site: https://sicp.sourceacademy.org/chapters/making-of.html
  15. Douglas Crockford, JavaScript: The Good Parts (O’Reilly, 2008)
  16. Annette Vee, Coding Literacy: How Computer Programming is Changing Writing, Kindle Edition (MIT Press, 2017), Kindle Location 216-239.
  17. Jeannette Wing, “Computational Thinking,” Communications of the ACM 49, no. 3 (March 2006): 33-35.
  18. https://research.scheme.org/lambda-papers/
  19. http://lambda-the-ultimate.org/node/5335
  20. https://groups.csail.mit.edu/mac/users/gjs/6.945/
  21. See Charles Rich and Richard C. Waters, editors. Readings in Artificial Intelligence and Software Engineering (Morgan Kaufmann, 1998)