The Plane of Obscurity — Simulation and Philosophy

Article Information

  • Author(s): Yuk Hui
  • Affiliation(s): Centre for Cultural Studies, Goldsmiths
  • Publication Date: November 2011
  • Issue: 1
  • Citation: Yuk Hui. “The Plane of Obscurity — Simulation and Philosophy.” Computational Culture 1 (November 2011).


Review of Manuel DeLanda, Philosophy and Simulation: The Emergence of Synthetic Reason, London: Continuum, 2011, ISBN 1441170286, 240 pages

Manuel DeLanda is best known as a significant figure in the introduction of the work of Gilles Deleuze to the English-speaking world, illustrated with numerous examples of scientific phenomena. Such an approach presents Deleuze as a scientifically informed philosopher who used science and technology as a pivot to carry out a revolution within the field of French philosophy, and it extends Deleuze’s philosophy to a broader range of areas than Deleuze’s own endeavours. Four years after the publication of A New Philosophy of Society, DeLanda has returned with a new title, Philosophy and Simulation: The Emergence of Synthetic Reason, in which the name Gilles Deleuze appears only in the bibliography of the appendix. Shall we expect a new philosophy on the way, one no longer shadowed by the name Deleuze? A name nevertheless adds power to the history of philosophy: in Deleuze’s own words, the name of the father.

In search of synthetic reason

In this new title, DeLanda proposes a philosophy of emergence by defending what he calls synthetic reason: one that cannot be reduced to deduction and its principles, one that exceeds both linear, simple mechanisms and the logical operation of the human brain, and one that can better be thought through computation and mathematical models. In DeLanda’s own modest words, the book studies the various mechanisms of emergence and their ontological status through computer simulations. Yet the ‘and’ in the title also gives ‘simulation’ an ambiguous position in a philosophical text such as this. The book consists of eleven chapters on different phenomena of emergence, including chemistry, genetic algorithms, neural networks, economies, language and society, accompanied by examples of computer simulation. It finishes with an appendix that associates this ambitious work with what he has developed elsewhere: the theory of assemblages. The philosophical goal is to unfold an “emergent materialist world view that finally does justice to the creative powers of matter and energy”, but, interestingly, through computer simulations.

The first question that must be addressed is: what is the relation between emergence and simulation? It is immediately followed by another: what is the position of philosophy in such an assemblage? DeLanda recognizes that simulation cannot be fully responsible for ensuring the legitimacy of the concept of emergence; in fact, simulation is always a reproduction of an epistemological understanding. Simulation for DeLanda is a tool that opens up a philosophy that can be visualized and imagined through topological figures and the constant reconfiguration of data within the simulated environment. Simulation demonstrates and confirms DeLanda’s early writings on the creative force of matter, which formed what one might call a scientifically informed transcendental empiricism. In this respect, DeLanda remains loyal to his Deleuzian method of philosophy, as stated in Qu’est-ce que la Philosophie? by Deleuze and Guattari: ‘Philosophy is a constructivism, and the construction has two complementary aspects that are different in nature: to create concepts and to trace a plane’[1]. Philosophers establish a plane according to their contemporary situation and project the image of thought onto the diagrammatic infinity. In this book, DeLanda creates the concept of synthetic reason and further develops the abstract structure of the assemblage. It also seems to me that he is trying to extend a plane that goes beyond his previous conception of transcendental empiricism, but this remains obscure in this ambitious work.

To elaborate this ambiguity, I propose here to distinguish two conceptions of transcendental empiricism. The first conception is the one DeLanda emphasizes many times in the book: the distinction between the subjective and the objective, the sensual and the actual, the function and its limitations. To illustrate with an example proposed by DeLanda himself: in an early explanation, the contraction of the stomach was considered part of the mechanism by which hunger motivates food-seeking behaviour. In an experiment to test this hypothesis, food-seeking behaviour did not cease even after a rat’s stomach was removed. DeLanda hence proposes that alongside this ‘subjective gradient that can be dissipated by habituation’ there is an ‘objective gradient, a concentration of excitation substances in the neural substrate’[2]. In the case of the rat, the objective gradient is, for example, the concentration of glucose in its blood stream. These objective gradients, which possess different limits and thresholds, also present singularities that hold ‘by default’. Such singularities are not limited to the biological but also occur in chemical reactions, neural networks, and so on: for example, the discrete energy levels of an electron, the energy for activating the attractors within neural networks, or the formation of rocks through different stratifications. The field of transcendence presented by the singularities that govern the thresholds in these cases composes ‘the structure of the space of possibilities’ for the phenomenon of emergence. Such ‘universal singularities’ create a field of transcendence that governs the ontological structure and the general expression of matter; for example, specific combinations of molecules (H2O) and ions (CaCl2) give rise to particular properties.
Yet the transcendental field doesn’t account for the freedom of the empirical field. The empirical and the transcendental are always already situated on the same plane, which allows the emergence of expressions that exceed the transcendental structure; hence every crystal has its own expression, that is to say each of them is singular, as is every cat.

Computer simulations of emergence operate exactly under this principle. In order to run a simulation of cellular automata, one must specify the conditions, the functions, the number of parameters, etc. (This also poses secondary conditions on emergence, that is to say every simulation is always already a limitation of perception under the condition that it can be expressed by computers.) Emergences, seen as the repetition of patterns determined by such singularities, are in fact recursive functions[3]. To give a quick definition, recursion occurs when a function calls itself until certain thresholds are reached or an external force modifies the current conditions. Computer software, in this sense, is the best demonstration of a transcendental empiricism in the Deleuzian sense. But here we must take one step further: if emergence follows a computational and mathematical model, then how can one still account for the discrepancy, and moreover for the artefacts of simulation themselves? It seems to me that DeLanda has literally succeeded in bringing different emergent phenomena together through the lens of simulations, but that the potential of the project doesn’t yet seem to be well developed, as if the plane is not yet extended and remains to be discovered. However, it is also the obscurity of the plane itself that demands the attention of the next generation of philosophers to address computational culture, or more precisely, computational objects themselves.
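The point about recursion and pre-specified conditions can be made concrete with a minimal sketch (my illustration, not from the book): a one-dimensional cellular automaton in the style of Wolfram’s Rule 110, where a purely local rule, applied repeatedly to its own output, produces patterns that no single cell ‘contains’.

```python
# Minimal sketch (not from the book): Wolfram's Rule 110, a one-dimensional
# cellular automaton. The 'emergent' global pattern is produced by nothing
# more than the recursive application of a local three-cell rule.

RULE = 110  # the rule number encodes the next state for each neighbourhood


def step(cells):
    """One application of the local rule to the whole row (wrap-around edges)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


def run(cells, generations):
    """Iterate the rule: the pattern is simply repeated recursion on the output."""
    history = [cells]
    for _ in range(generations):
        cells = step(cells)
        history.append(cells)
    return history


# A single live cell already grows a non-trivial triangular structure.
row = [0] * 31
row[15] = 1
for gen in run(row, 10):
    print("".join("#" if c else "." for c in gen))
```

Note how every ‘condition of emergence’ here (the rule number, the row length, the boundary behaviour) has to be fixed in advance, which is exactly the secondary limitation the paragraph above describes.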

We have to approach this from the second sense of transcendental empiricism. It is imposed not only by transcendental singularities but also by artificial singularities, that is, the simulations themselves. What are these apparatuses that allow us to peep into the process of emergence (another structure of the space of possibilities)? What are their effects on the perception of emergence itself? In fact, computer simulations must be further generalized within the broader set of computational objects and repositioned here. In other words, one must recognize that computer simulations are not only used in laboratories to visualize and aid control over experiments, but are also increasingly present in video games and in data visualizations: for example, simulations of the growth of social networks, of the increasing complexity of Twitter status updates, and so on. Such simulations present themselves not only as mechanisms of control, or spectacles, but also become functions integrated into everyday life, existing in the form of a visualized knowledge.

The word ‘simulation’ that follows ‘philosophy and’ seems to entail DeLanda’s strategy as well as his ignorance: on one hand, simulation becomes a synonym of such a philosophy; on the other hand, simulation hasn’t really been addressed. I was struck that in this book DeLanda becomes a scientist rather than a philosopher, or if necessary, a philosopher who seems to ignore the effects of these objects of simulation, so that synthetic reason becomes a force that appears to be purely technical. By saying this, one doesn’t need to go back to the thesis of Jean Baudrillard and consider society as pure simulation; rather, one can trace the plane by integrating these technical objects into it, in a manner related to what has been done by Gilbert Simondon and the Simondon-inspired Deleuze. If we can justify what has been called synthetic reason here, it stands in an obscure position that seems under-addressed in the discipline of philosophy. Synthetic reason is not purely theoretical, that is to say it is not simply a hypothesis that can easily be accused of being a mysterious entity such as ‘life force’ or ‘élan vital’, as with the previous generation of philosophers of emergence[4]; on the other hand, it is not actual, since what is simulated is only an isomorphism and not reality itself. What accounts for the reality brought forth by the simulations, and what are the relations between the principles of simulations and the transformations they impose? That is to consider a plane on which the simulations themselves become attractors and create a new topography that is not reflected on the screen. What remains at stake, unquestioned in DeLanda’s work, is not the ontological status of emergence but rather that of the computer simulations.

Reality and Simulation

DeLanda, in my view, does a brilliant job of opening up the philosophical investigation of simulation by illustrating it with examples from different scientific disciplines. He rightly points out the significance of computer simulation and its importance in contemporary culture, and it seems to me he is trying to integrate these images on the screens into his image of thought. But, at least in this book, simulation is rendered solely as the justification of the hypothesis of synthetic processes. When DeLanda addresses the computational process of simulation, he still speaks within the field of computing, without reinserting simulation back into reality, that is to say without extending the plane of consistency.

Computational simulations have often been seen as tools or imitations since their birth, while the transformative power they impose on reality is often overlooked. The discrepancy between the real and the hypothetical also poses a question of truth and untruth. In AI, the simulation of human language and of robotic movement has been criticized as ‘untrue’, as we can see in the work of Hubert Dreyfus, in What Computers Can’t Do and later What Computers Still Can’t Do, in which he considers that simulations based on formal rules follow a mistaken method for understanding humans and nature. For example, Dreyfus mocked Herbert Simon’s 1957 prediction that AI would win the world chess championship within ten years[5]. The simulation corresponding to Simon’s prediction was not realized until 1997, when the IBM chess computer Deep Blue beat the world champion Garry Kasparov. The thirty-year delay confirms neither the truth nor the untruth of the statement, but already constitutes a world that demands a new plane. A genuine philosophical thought is one that integrates the oppositions into a plane of consistency.

On one hand, the reinsertion of computational objects and their synthetic concepts into the plane of consistency is apparently lacking in this book. On the other, philosophy, through simulation, is constructing a new reality that eliminates the gap between simulation and emergence. This is exemplified by the chapter on multi-agent systems and language; in fact, this discussion seems to be one of the strangest of the book. DeLanda proposes to understand the formalization of language through a simulated process of grammatification. Grammatification, according to DeLanda, is a process “that can break down a set of monolithic sentences into recursively, recombinable components[6]”: for example, breaking a sentence like ‘FullMoonCauseLowTide’ into smaller components like Cause(FullMoon, LowTide), and finally into a context-free logical form, Cause(x, y). Here DeLanda seems to concur with what has been discussed in computational linguistics and the formal logic of language. On top of that, DeLanda proposes his own model for understanding this process in terms of what he calls biological evolution and cultural evolution. The former is the passing down of a genetic capacity to detect the patterns of word occurrence; the latter transforms ‘customary patterns into obligatory constraints on word-choice behaviour[7]’. Within such a simulation of the emergence of a ‘primitive language’, one encounters again an approximation of the last century’s anthropological research, which privileges symbolic logic over other forms of logic, although this time it appears on the computer screen.
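The decomposition DeLanda describes can be sketched in a few lines (my toy illustration, not DeLanda’s multi-agent simulation): a monolithic token is split around a predicate, and the arguments are then abstracted away to leave a context-free logical form.

```python
# Toy sketch of 'grammatification' (my illustration, not DeLanda's simulation):
# a holistic sentence-token is broken into recombinable components, and finally
# abstracted into a context-free logical form.


def decompose(sentence, predicate):
    """Split a monolithic token like 'FullMoonCauseLowTide' around a predicate."""
    left, right = sentence.split(predicate)
    return predicate, (left, right)  # e.g. ('Cause', ('FullMoon', 'LowTide'))


def schematize(predicate):
    """Abstract the arguments away, leaving the context-free form."""
    return f"{predicate}(x, y)"


pred, args = decompose("FullMoonCauseLowTide", "Cause")
print(f"{pred}{args}")   # Cause('FullMoon', 'LowTide')
print(schematize(pred))  # Cause(x, y)
```

The sketch also makes the review’s objection visible: everything here presupposes that the predicate and its symbolic-logical format are already given, which is precisely the unexamined assumption at issue.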

What do these simulations tell us, if they are not already within the unexamined limits of the computational machines? This way of looking at the emergence of primitive languages seems to fundamentally ignore all the other technical apparatuses involved in the simulation and to re-establish logical form as the ultimate pursuit of language. Or is this, in fact, a limitation of simulation itself? Does there exist an emergent phenomenon that cannot be simulated by a Turing machine, for example when the emergent process involves non-terminating looping or recursion? The equivalence between philosophy and simulation seems quite clear here, especially in the invisible attempt to axiomatize both nature and culture. But is such a philosophy simply one that imposes an ‘ontological structure’ through visual impressions and logical operations? In other words, isn’t such work merely reinforcing an epistemological understanding as well as transforming it into an ontological one?

By the end of the book, DeLanda goes back to the theory of assemblage and sketches the relations between the discourse on emergence and simulation. In this part, DeLanda puts simulation aside and fits the image projected through the lens of simulation into the theory of assemblage as it concerns singularities, codings and identities. Every phenomenon of emergence can be explained, or at least thought through, by an assemblage accompanied by internal dynamics of reterritorialization and deterritorialization. The metaphysical understanding of materiality as a topological graph composed of both a transcendental and an empirical field is plausible; it restores a vitalism to matter, including both a dead cat and an electron. But it nevertheless remains an older project, one that DeLanda already outlined in previous works such as Intensive Science and Virtual Philosophy.

Where are the computational objects of simulations? In the last two pages, DeLanda attempts to philosophize the Turing machine, that is, ‘the assemblage made out of operators and data’[8], but this remains too general, if not already a form of common sense. After all, the numerous examples seem unable to express the philosophical ambitions well. DeLanda may already be pointing to the plane brought about by simulations and visualizations, which finally leads to the reconfiguration of the topological figure of transcendental empiricism. But in this book it remains obscure, and in a certain sense it discloses the crisis of a metaphysics that always overlooks its apparatus of seeing, like a short-sighted man who forgets that he is looking at the world through his glasses.
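DeLanda’s gloss of the Turing machine as ‘the assemblage made out of operators and data’ can itself be rendered concrete in a few lines (my sketch, not DeLanda’s): a transition table of operators acting on a tape of data. This one simply appends a 1 to a unary number, and its very triviality illustrates the review’s point that the formulation, while true, says little.

```python
# Minimal sketch (my gloss, not DeLanda's): a Turing machine as a table of
# operators (state transitions) acting on a tape of data. This machine
# increments a unary number by appending one more '1'.
from collections import defaultdict


def turing_run(program, tape, state="start", accept="halt", max_steps=1000):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # '_' is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        write, move, state = program[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")


# Operators: scan right over the 1s, write a 1 on the first blank, halt.
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(turing_run(succ, "111"))  # prints "1111"
```

That the ‘assemblage of operators and data’ is exhausted by such a table is exactly why the formulation remains, as argued above, too general to carry philosophical weight on its own.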



De Landa, Manuel, A New Philosophy of Society: Assemblage Theory and Social Complexity, London: Continuum, 2006

De Landa, Manuel, Intensive Science and Virtual Philosophy, London: Continuum, 2004

Deleuze, Gilles & Guattari, Félix, Qu’est-ce que la Philosophie?, Paris: Les Éditions de Minuit, 2005

Dreyfus, Hubert, What Computers Still Can’t Do: A Critique of Artificial Reason, Cambridge, Mass.: MIT Press, 1992

Wolfram, Stephen, Cellular Automata as Simple Self-Organizing Systems, 1982


Author Bio

Yuk Hui recently obtained his PhD from the Centre for Cultural Studies, Goldsmiths, University of London, with a thesis titled “On the Existence of Digital Objects”. He is trained in computer science, philosophy and cultural theory, and was a researcher on the Metadata Project funded by the Leverhulme Trust.


[1] Gilles Deleuze and Félix Guattari, Qu’est-ce que la Philosophie?, 38

[2] DeLanda, Philosophy and Simulation, 100

[3] Stephen Wolfram, Cellular Automata as Simple Self-Organizing Systems, 1982

[4] Philosophy and Simulation, 2

[5] Hubert Dreyfus, What Computers Can’t Do: A Critique of Artificial Reason (New York: Harper & Row, 1979), 69

[6] Philosophy and Simulation, 154

[7] Philosophy and Simulation, 162

[8] Philosophy and Simulation, 201