Introduction
The European Union’s General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) came into effect on 25th May 2018. The GDPR revises and extends the previous Data Protection Directive (Directive 95/46/EC) of 1995 but also introduces new areas of regulation relating to algorithmic processing and what it calls “Automated individual decision making.”1 Since the spread of the Internet and the growth of software as a commercial industry in the 1990s, legal debates around computing have been heavily dominated by the treatment of computer code and data as forms of property and questions of ownership surrounding the ease of transmission and replication of such media.2 The recent developments and applications in Artificial Intelligence (AI) and Machine Learning (ML) are now broadening that debate once again in order to think more widely across the range of ways in which computing technology is situated within society and the various roles and forms of agency and liability that this entails.3 Recent legal cases against Uber and Deliveroo relating to platform-based ‘gig economy’ labour represent one branch of this.4 Another is that of the development of autonomous and semi-autonomous robotic systems, such as drones used in warfare or driver-less cars.5 Discussions of ‘the algorithm’ have become a key characteristic of these debates.6 Yet these discussions sometimes lead towards a somewhat reified notion of ‘the algorithm’ as a kind of self-contained agency that acts upon but remains separate from human society and the material world. The algorithm has not appeared from nowhere; there are various histories, contexts and relations at stake. Many of the algorithms running on current-day ML systems originated in the early years of AI development, stimulating critical debates as to their application and consequences that remain relevant today.7 Similarly, algorithms do not operate in isolation but rather across and between different subjects, actors and objects.
Indeed, Kyle McGee’s comment in relation to analysing legal rules could equally apply to the study of algorithms and software code:
By starting from (what is understood as) a refined, stable legal rule or set of rules and proceeding to an examination of its empirical interpretation or its appearance in patterns of legal activity, by making it pass, unaltered, through diverse social channels, does the analyst not render the rule more opaque, even as she sheds light on its failures and achievements? In doing so, socio-legal studies makes itself parasitic on doctrinal analysis: quite differently, to be sure, but committed all the same to the integrity of the law’s black boxes and performatively complicit in the macro-structuring of a particular normative world — often strikingly similar to the one purportedly critiqued.8
New technology changes the legal landscape, often creating cases that are difficult to interpret in terms of existing norms and practice.9 Each new advance in technology often finds an application within legal practice.10 As a discipline that is heavily constituted within the practice of reading, writing and processing of documents, law has a special relation to technologies that facilitate this. Indeed, as Pierre Legendre and Peter Goodrich have explored, the practice of law and the technology of writing itself are often closely intertwined.11 From the Code of Hammurabi to the Hollerith Tabulating Machine, the US National Crime Information Center database and current uses of facial recognition technologies by the police, different technologies translate law into governance. Murphy describes these as juridical technologies, material objects “through which law becomes visible, even real, in a society.”12 The technologies in which the practice of law is facilitated and implemented can, either in subtle or sometimes dramatic ways, change the nature of law itself and of how it relates to practices of governance within a given society. Just as, in the past, the imposition of a new language or writing system was often synonymous with the imposition of a new political and legal regime, so too does the application of new technologies such as the database and predictive AI relate to changes in governance and law. Examples of this are traced in Jon Agar’s study of the impact of computing on The Government Machine in post-war Britain and in Benjamin Bratton’s recent study of what he calls The Stack as a new era of technologically constructed global governance.13
Different juridical paradigms may lend themselves to different technological paradigms or patterns of application. The precedent-based common law framework of US law may favour a different technological facilitation from that of more institutional and statutory systems such as that of French law.14 The development of case-based reasoning in expert systems such as HYPO CBR, for example, was tailored specifically to the Anglo-American model which previous rule-based systems had handled with greater difficulty.15 The political and economic governance of legal practice in itself can also have a significant impact on such developments. Richard and Daniel Susskind point out that the deregulation of the legal profession, such as under the Legal Services Act 2007 (England and Wales), may have as much impact on the recent growth in AI-based legal tools as the increased availability of such technologies.16 Virginia Eubanks’ study is particularly revealing in how, at a local state level, narrow economic considerations may influence the selection of a technological approach that embeds a specific political agenda into the daily operations of state and social institutions, having direct and immediate impact on people’s lives.17 The efficacy of law becomes determined not by jurisprudential oversight but through its ability to become mass produced and economically productive, constructing new “frameworks for decision-making” which operate in ways that are not simple to predict.18
The field of Jurimetrics had been established in the early 1960s in order to explore the more general application of quantitative and mathematical approaches to legal work.19 This would include mathematically-driven modelling and decision-making methods often derived from Operations Research. One method from Operations Research that has come to acquire widespread use in current ML systems has been that of the Decision Tree. In a form that is typical of the Operations Research approach, Layman Allen and Mary Ellen Caldwell proposed the application of Decision Trees to model the legal process itself, mapping how a case passes through the US court system.20 Current day software implementations of Decision Trees are closer in application to earlier database systems, being used to sort and retrieve textual information and to compare cases but in a more ‘intelligent’ way that models aspects of legal knowledge domains.21 Whereas the Decision Tree in Allen and Caldwell’s proposal could be summarised in a small diagram that fits within a single printed page, these computer-run Decision Trees are considerably more complex and baroque, structured in forms such as Random Forests in which a huge number of trees are run in parallel, aggregating their outcomes into a final result.22
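The aggregation at work in a Random Forest can be indicated in miniature. The following sketch is purely illustrative, using hypothetical data and reducing each ‘tree’ to a single-split stump rather than a full decision tree: many stumps are trained on bootstrap resamples and their outcomes are aggregated by majority vote.

```python
import random
from collections import Counter

# Hypothetical toy data: each case is (features, label); the features might
# encode attributes of a legal document and the label a binary category.
random.seed(0)
cases = [([random.random() for _ in range(4)], i % 2) for i in range(40)]

def train_stump(sample):
    """Fit a one-split 'tree': choose the feature/threshold pair that best
    separates the two labels within this bootstrap sample."""
    best = None
    for f in range(4):
        for x, _ in sample:
            t = x[f]
            left = [y for x2, y in sample if x2[f] <= t]
            right = [y for x2, y in sample if x2[f] > t]
            if not left or not right:
                continue
            # Majority label on each side of the split; score by agreement.
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            score = left.count(lmaj) + right.count(rmaj)
            if best is None or score > best[0]:
                best = (score, f, t, lmaj, rmaj)
    _, f, t, lmaj, rmaj = best
    return lambda x: lmaj if x[f] <= t else rmaj

# A 'random forest' in miniature: many stumps, each trained on a bootstrap
# resample of the cases, with their votes aggregated by simple majority.
forest = [train_stump(random.choices(cases, k=len(cases))) for _ in range(25)]

def predict(forest, x):
    votes = Counter(tree(x) for tree in forest)
    return votes.most_common(1)[0][0]

print(predict(forest, cases[0][0]))  # the aggregated vote: 0 or 1
```

The point of the sketch is the contrast with Allen and Caldwell’s single-page diagram: even this toy forest distributes its decision across dozens of parallel structures whose individual logic is no longer surveyable at a glance.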
Expert systems are amongst the most long-standing applications of AI to legal practice.23 The first expert systems, such as DENDRAL (begun in the mid-1960s to support organic chemistry research) and its successor MYCIN, were developed at Stanford. In 1970 Bruce Buchanan, one of the development team from Stanford, along with Thomas Headrick, wrote the first paper considering whether AI could be applied to problems of legal reasoning.24 Whilst the paper speculates on the possibilities of computer-implemented legal decision-making systems, their own proposals were primarily orientated towards improving document retrieval through the use of expert system methods. This has been the dominant form in which expert systems have been applied, alongside their use as educational tools.25
Buchanan and Headrick highlight the limitations of the text-processing algorithms they were familiar with at the time of writing, yet in the USSR an approach using an entirely different computational paradigm had been developed, one that is now widely applied in ML systems for textual analysis: Support Vector Machines (SVM).26 As with many other current-day ML systems, the algorithms on which SVMs run are quite old in origin, the first proposals dating from 1963,27 but they rely upon a scale of computational power that was simply not available at the time of their conception. Such systems have only become possible recently due to the availability of more powerful processors and large-scale hardware infrastructures such as those hosted on the Cloud. This might be seen simply as an extension of Moore’s Law on the ever-increasing power of computer processors but there are particular social, economic and political constraints and affordances that play into this — such as the deregulation of legal practice and the perceived economic advantages of automating certain processes and not others, of supporting certain research fields and not others. Bratton argues that this enfolds such systems into the operational logic of contemporary neoliberal governance and the economic forms that selectively adopt and sustain such infrastructures.28 The translation of SVM systems from the end of the Khrushchev era into the heart of Big Data monetization ventures — Vapnik was hired by Facebook as part of their AI team in 201429 — indicates something of the convoluted genealogies in which code and politics co-evolve.30
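The core of the linear SVM, a separating hyperplane trained against a hinge loss, can be sketched briefly. The following is an illustrative sketch using a later sub-gradient training scheme in the style of Pegasos, not Vapnik and Chervonenkis’s 1963 formulation, and runs on hypothetical two-dimensional data:

```python
import random

# Hypothetical toy data: points in [-1, 1]^2 labelled by which side of the
# line x0 + x1 = 0 they fall on, so the two classes are linearly separable.
random.seed(1)
points = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(100)]
data = [(x, 1 if x[0] + x[1] > 0 else -1) for x in points]

def train_linear_svm(data, lam=0.01, epochs=100):
    """Sub-gradient descent (Pegasos-style) on the linear SVM objective:
    lam/2 * ||w||^2 + average hinge loss. No bias term, for brevity."""
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            margin = y * (w[0] * x[0] + w[1] * x[1])
            w = [wi * (1 - eta * lam) for wi in w]   # shrink: regulariser step
            if margin < 1:                           # hinge step for violators
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

w = train_linear_svm(data)
accuracy = sum(
    (1 if w[0] * x[0] + w[1] * x[1] > 0 else -1) == y for x, y in data
) / len(data)
print(accuracy)
```

SVM systems for textual analysis work in the same way but over feature spaces with many thousands of dimensions, which is precisely where the demand for computational power that the 1960s could not meet arises.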
Another approach to come through Operations Research was that of Game Theory, first developed by von Neumann and Morgenstern in the 1940s.31 Game Theory was seen as particularly suited to modelling legal problems due to its ability to deal with situations in which complete knowledge of all factors might not be available, such as a court case where only some relevant evidence could be made use of or where different witnesses may have differing interpretations of that to which they were testifying.32 Game Theory was applied, therefore, not so much in determining the outcome of a case as in providing supporting verification for the reliability of evidence. This echoes a much earlier proposal by Jeremy Bentham for the creation of a calculus to measure probative force, the extent to which a given piece of evidence may be considered valid and relevant to a case.33 Indeed, attempts to measure and verify the probative force of evidence have been a significant area of investigation throughout the long history in which law and computation have engaged with one another.34
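One way in which probative force is commonly formalised today is as a Bayesian likelihood ratio, with independent items of evidence combining multiplicatively on the prior odds. A minimal sketch with entirely hypothetical numbers:

```python
# Hypothetical sketch: the 'probative force' of each piece of evidence is
# expressed as a likelihood ratio P(evidence | claim) / P(evidence | not claim).
# Independent items combine by multiplying their ratios onto the prior odds.
prior_odds = 1.0                      # even odds before any evidence is heard
likelihood_ratios = [3.0, 0.5, 4.0]   # hypothetical items of evidence

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

# Convert odds back to a probability for the claim.
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # odds of 6.0 -> probability 0.857
```

A ratio above one strengthens the claim, below one weakens it; here the combined odds of six to one correspond to a probability of roughly 0.86.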
More recently, agent-based simulation systems have been explored as an alternative to Game Theory which has been criticised for its basis within a model of human behaviour that is inherently competitive.35 In contrast, agent-based simulations, it is claimed, enable the study of the emergence of norms “relatively free of epistemological assumptions.”36 These sometimes follow from research on cooperative action, such as that of Robert Axelrod,37 and on the management of resources that entail collective responsibilities.38 As such they point towards a politics of modelling legal processes in ways that differ from those of Cold War brinkmanship. Whilst such approaches are still often placed within the conceptual framework of rational actor theory, others explore pluralist strategies that seek to reconcile international legal frameworks and local customary law.39
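The flavour of such simulations can be conveyed by a toy model, not drawn from any specific study in the literature: agents repeatedly adopt whichever convention a majority of sampled peers hold, and a shared ‘norm’ tends to emerge without any central rule-maker.

```python
import random

# Purely illustrative sketch of norm emergence: 200 agents each start with
# one of two rival conventions (0 or 1).
random.seed(42)
agents = [random.randint(0, 1) for _ in range(200)]

for step in range(20000):
    i = random.randrange(len(agents))             # pick an agent to update
    peers = random.sample(range(len(agents)), 7)  # it observes seven peers
    # The agent adopts the majority convention among its sampled peers.
    agents[i] = 1 if sum(agents[j] for j in peers) > 3 else 0

share = sum(agents) / len(agents)
print(share)  # tends towards 0.0 or 1.0 as one convention locks in
```

On most runs the population locks into one convention or the other, with the outcome contingent on early fluctuations; it is this kind of emergent, path-dependent ordering, rather than equilibrium among competing rational players, that such models are used to study.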
Approaches such as these are mainly applied within legal practice, research and training but there are other kinds of systems that are becoming more prevalent in mundane activities such as shopping or parking your car. Online Dispute Resolution (ODR) has become a significant area in the application of software tools to consumer-related issues. These seek to resolve complaints outside of court and are a development from earlier non-computer-based Alternative Dispute Resolution approaches (see the article by Maria Karagianni in this section). eBay, PayPal, Amazon and YouTube all provide ODR systems that automate the handling of disputes ranging from problem purchases to potential copyright infringements.40 Having originated in online retail, ODR has spread into other sectors. Lex Machina provides a service for patent disputes41 whilst Modria is a form of ODR platform that provides services for civil courts and public sector disputes.42 The Civil Justice Council has proposed a development of ODR to provide an online court system for civil cases in the UK.43
It is within this context that we can understand Do Not Pay, an online system that enables people to appeal parking ticket fines and claim money back on air flights.44 Initially designed more in the spirit of consumer activism than in that of consumer mediation, Do Not Pay might be described as a form of Online Dispute Facilitation. Its inventor, Joshua Browder, describes Do Not Pay as “the world’s first robot lawyer”45 and claims that it is intended to “make legal help accessible to the most vulnerable in society.”46 Whilst media attention has accorded the tool a maverick ‘disruptor’ status it can, nevertheless, be seen as fulfilling the broader process of deregulation outlined by the Susskinds in which, as they describe it, the “closed craft” of established law practice is transformed into what they envisage as a market-driven “commons.”47
Another branch of this development is that of services that automate the drafting and management of contracts and other legal documents, such as ContractExpress and Exari.48 In a certain respect these are beefed-up versions of the downloadable templates for wills and rental agreements that can be found online or the standard-form “boilerplate” contracting practices of traditional legal firms. These platforms perform more of the background work that a legal office might do and can also provide secured online services in which deals can be worked out between commercial parties in isolation from external constraints.49 One new development of particular significance in this regard is that of the blockchain and smart contract systems such as Ethereum.50 There are those who argue that significant progressive gains could be achieved through such technologies,51 yet we also see these technologies becoming absorbed into established legal and financial systems and being utilised to support less progressive aims.52 Stephen Sachs reads new developments in digital contracting and online trading spaces as an attempt to revive a form of lex mercatoria outside of the control and accountability of the state, linked less to romantic pirate autonomous zones and more to elite financial power and right-wing anti-statist politics.53
Ensconced in seemingly mundane utilities, these are all examples of the delegation of legally defined practices and procedures into facilities that are removed from direct legal oversight. Such facilities ossify the legal framing of numerous contingent events into patterned structures through which particular legislative and economic norms shape daily activity in a way that is almost unnoticed.54 It is not so much that, following Lessig’s formulation, code becomes law or law becomes code (although both translations may take place in ways that are more subtle than we realise) but rather that numerous different standardisations and formulations are quietly laid down as the background architecture of decision-making structures that shape seemingly autonomous, personal choices.55 The Privacy By Design principle promoted by the GDPR can also be seen as an articulation of such processes embedding legal requirements into our social and working lives.56 The point is not that such processes are inherently benign or malign but rather that the process of delegation becomes a form of unquestioned resignation to particular norms and structures, creating what Fleur Johns describes as new forms of “non-legality.”57
In the same issue of the Stanford Law Review, immediately following from Buchanan and Headrick’s paper, a study on “Racial Discrimination in Public Housing Site Selection” by Peel, Pickett and Buehl was published.58 Whilst the two papers do not refer to one another in any way, their coincidence points towards a terrain of application that would become increasingly problematic as the spread of AI and ML systems grew. The problem of predetermined bias entering into quantified data was already recognised at the time of Buchanan and Headrick’s paper and would be raised again as different phases of computerisation entered into legal and governmental practice.59
In regard to current ML systems, Cathy O’Neil has shown how, due to the incompleteness of data available, many algorithmic systems incorporate what she calls “proxy data” whereby biases related to income and race can enter into the processing by way of indirect references such as postal codes.60 When operated in the context of housing policies that create racially segregated areas, such as those analysed by Peel, Pickett and Buehl, the combination over time of different policies and practices leads to the formation of self-perpetuating “racializing assemblages” through which such biases enter into the fabric of daily governance (see the paper by Ezekiel Dixon-Román, Ama Nyame-Mensah, and Allison R. Russell in this section). Here law merges with architecture, both in terms of architecture as urban planning and as the architecture of software systems, as their co-structuring comes to predicate the positioning of those subject to the law.
This re-opens the question of what counts as a ‘person’ subject to the law. A living ‘natural’ person (to use the legal distinction) has to bear the consequences of outcomes that are adjudicated in regard to data that may only be linked to them by way of aggregation and proxy. In this regard, and as the cases studied by Dixon-Román and others expose, the rights of data privacy promoted by regulations such as the GDPR only serve to support the rights of those who have privileged access to data and for whom their data itself confers privilege.61 Here the doctrine of data privacy appears merely as an extension of the principle of possessive individualism.62 A more critical debate might be framed in terms of data privation, how we might be deprived by the data that exists on or in relation to us.63
This is very different from, although not unrelated to, the debates around legal persons that have previously preoccupied discussions of law and AI and which are now returning as their subject, intelligent agents in the form of robots, drones and software entities, becomes more of a reality.64 Lawrence Solum relates the debate to historical and current examples of non-human entities that have status as legal persons, such as animals in Medieval law, but also ships (deriving from admiralty law) and corporations which acquired legal person status in the 19th century.65 As Solum argues, ‘personhood’ in Western legal traditions is not derived from the ontological or moral status of the entity but rather from how different capacities, rights and responsibilities relating to it may be recognised and processed through existing legal frameworks. That such possibilities may arise can be related to what Antoinette Rouvroy describes as the “irremediably self-referential” nature of the person as legal subject as defined in terms of modern law’s inherently positivist basis.66 In Rouvroy’s account, this tends towards an instability in the recognition of legal subjects, something which H.L.A. Hart arguably saw as its greatest potential, in that what law recognizes as its subject is not pre-given as in theological traditions but determined by rules which can themselves be interrogated and reformulated.67 The law constructs its own model of the world upon which its rules are applied and then projected back into lived experience.
Derrida traces this back to Leibniz, a key figure in the development of modern, positivist law, for whom “the lawyer [must] reconstruct the world, as exactly the same but on another scene and related to new principles capable of being justified.”68 Therein lies the tension, for this model-making, rule-based algorithmic law (as Sherwin calls it) craves a totality that it can never achieve.69 Hence, perhaps, the attraction to systems such as Game Theory that appear to offer some solution to the incompleteness and contingency of law.70
The rights-based approach to such questions that Solum explores is rejected by Lucy Suchman, Dan McQuillan and Susan Schuppli. They each argue that the complexity of the combined human and algorithmic systems within which such autonomous and semi-autonomous entities operate cannot be adequately addressed in terms of a discrete person (of whatever sort) bearing individual rights and responsibilities.71 Schuppli proposes instead a concept of “algorithmic accountability” which she sees as part of “an overall strategy aimed at expanding the field of causality and thus broadening the reach of legal responsibility.”72 This echoes aspects of the rejection of the individuated subject in branches of Feminist legal theory which favour a relational model based on the effects of actions rather than the inference of rational intention.73
Another approach that explores the expansion of law away from the reinforcement of the liberal models of a ‘legal subject’ has been proposed in different ways through Gunther Teubner’s concept of autopoietic law, drawing on the systems theory of Niklas Luhmann, and those drawing on Deleuzian theory.74 Whilst each takes a very different form, they both address how law may develop not through revising existing models but through a form of responsive creativity to new conditions. In a certain respect, however, these approaches are extensions of rather than alternatives to Hart’s reflections on the tensions between the positivity of law and its “open texture,” each developing a different side of that relation along a particular trajectory.
It is the positivity of law, its rule-based nature, that enables the kinds of delegation and translation of legalistic processes and decision-making to other contexts and entities — such as in the examples of non-legality analysed by Johns.75 This process of rule-making and delegation also relates to another key tension within the practice of law. Law seeks to be ‘more than human’ in so far as it, ideally, seeks to confer justice in decisions that are not subject to human bias and partialities. Yet it also seeks to be ‘truly human’ in so far as the ability to have laws and to be just is what sets humanity apart from other entities — as for example in the relation of justice to the rational soul in Descartes. The tension between these two strands has been a recurrent feature in the historical development of law. They mark the transition from Medieval to modern law and can also be found within current debates on autonomous systems, as in different ways in Rouvroy’s work or Tribe’s earlier writings.76 Indeed, there are certain analogies between the development of computer-based law today and a practice that was characteristic of pre-modern law, that of the use of lotteries. Each delegates aspects of decision-making and verification to a non-human mechanism. Whilst random selection is used in some countries to allocate juries, the idea that a juridical decision could be made through chance is anathema to a modern law tradition founded on the core principle of reasoned, deliberative judgement.
Yet, in Medieval and Ancient Greek society, the use of chance procedures, such as lotteries, was considered superior to the fallibilities of human judgement being “a means of getting God to speak … the way lots fall represents not randomness but divine choice.”77 Even while the development of law as a positive system sought to replace God with science it did not do away with chance.78 Rather, the integration of chance into formal reasoning became one of the key inspirations to the formalisation of law itself. It was this that drove Bernoulli and Leibniz to develop a mathematics of probability much of which lies at the heart of the predictive systems used in current Data Science and ML.79 Both drew examples from legal cases to formulate their ideas — as George Boole would also do in formulating his system of logic80 — and Leibniz in his work as a lawyer had a special interest in aleatory contracts.81
Law is the domain of enquiry from which the modern mathematics of probability emerged, yet the relation between law and computation goes far deeper. Just as the procedures of the science lab have been derived from those of the law courts so too has the language of logic and computation derived from law.82 Robert Schmidt traces the emergence of various terms and practices within mathematical logic and computer science to those of law. Words so common today, such as data and symbol, were once part of the specialist vocabulary of the lawyer, and the use of algebraic notation to express logical arguments has been attributed to François Viète in his Introduction to the Analytic Art (1591) following the practice of lawyers to reduce litigants’ names to their initials when documenting cases.83
Law and computation have always, at least within the Western tradition, been mutually dependent. The culture of law is also a culture of computation. Bratton recalls Carl Schmitt’s definition of nomos as “the first measure for all subsequent measures,” the need to first mark out a common territory upon which legal relations can be formed.84 As Goodrich and Rotman argue, in regard to law and mathematics respectively, it is only when marking and counting come into play as complementary forms that both law and computation can emerge as distinct and extensive practices capable of transferring from one context to another.85 This enables, as Bratton puts it, “the physicalization of abstraction and the abstraction of physicalization.”86 The marking must be both a marking of boundaries (the people within this piece of land, not that, the herd of cattle within this field, not that) and a storage of values (a brand that denotes ownership, a number that denotes quantity). The counting must be both quantitative and comparative (the number of votes in favour and against, the number of cattle versus bags of grain).87 These two factors alone, however, are not sufficient to distinguish these practices, for they are also shared by a practice such as economics which is so often intrinsically linked to both law and computation. There is a third factor distinctive to law and computation which is that of logic through which marking and counting enable the possibility of making decisions, often of a discrete binary nature: true or false, guilty or innocent, computable or non-computable.88
Law is not separate from computation, each is intrinsic to how the other comes into being. To analyse law as computation is to analyse how logic comes to be in the world and how it comes to re-order that world. The history of legal logic and the evolution of the principles and practices of legal reasoning are central to the long history of both the theoretical structuring and the lived experiences of computationality. As Katja de Vries demonstrates, the current formations of AI and ML, as framed within Data Science and predictive analytics, are the prodigal children of legal logic’s liaison with probability.89 Yet their return to the domain in which they were born is less an occasion for celebration than one in which a fissure in the relation of the probable and probative is revealed, a fissure that raises questions about both the sufficiency of computational thought and the limits of legal reason. The current encounters of law and software are, therefore, a crucial context within which to think computation critically.
The two papers in this Special Section on Critical Approaches to Computational Law situate specific themes from this wider enquiry within the methods and theoretical approaches of Software Studies.
In “Software As Dispute Resolution System: Design, Effect and Cultural Monetization,” Maria Karagianni uses artistic practice drawn from glitch culture to reverse engineer and intervene in the algorithms that facilitate YouTube’s Online Dispute Resolution service for copyright claims on uploaded video content. This builds upon the interest in copyright issues among artists working with digital media in which practitioners, such as Cornelia Sollfrank, have often probed its legal limits.90 Karagianni’s project is significant in that it intervenes in and interrogates the mechanisms of governance within this domain and questions the transparency and accountability of the quasi-legal operations through which YouTube seek to regulate it. Karagianni situates this within the evolution of Dispute Systems Design (DSD), a key approach in the development of Alternative Dispute Resolution systems that has often served as a means of circumventing collective action by workers, and within Cornelia Vismann’s study of the role of paperwork and form-filling, such as the boilerplate material studied by Johns, as an integral element within the materiality of the governance of the modern state.91 Through such instrumentation Google (the owners of YouTube) aspire towards a state-like condition in which their ability to arbitrate disputes is related to a monetization process that Karagianni compares to early bourgeois tax systems, such as that imposed by the city of Paris in the 18th century.
Ezekiel Dixon-Román, Ama Nyame-Mensah and Allison R. Russell present a detailed analysis of the racial politics of algorithmic policing in their article “Algorithmic Legal Reasoning as Racializing Assemblages.” The paper applies Dixon-Román’s concept of algo-ritmo to the use of predictive analytics in the City of Philadelphia criminal justice system.92 Drawing upon a speculative reading of the Spanish term for algorithm as ‘the repetition (ritmo) of something (algo)’ and Luciana Parisi’s proposal that algorithms operate as a distinct mode of thought, algo-ritmo highlights how alterity and difference can be re-iterated and regenerated across different assemblages of human and more-than-human cognitive and ontic process.93 This has particular relevance to the processes through which the categorisation and performance of race and racialization are constituted via algorithmic systems such as the predictive analytics used in policing and sentencing. In addressing this, the paper extends Alexander Weheliye’s concept of racializing assemblages to include the digital and computational.94 Importantly, this work raises the crucial question of the flesh that is prior to the constitution of the body in law or data yet which is forced to bear the, often violent, marks and consequences of decisions made in terms of certain juridical and algorithmic modes of recognition.
These papers provide strong examples of the different approaches and theoretical scope that a critical approach to the combined study of law and computation both offers and requires. They also provide a good indication of the range of issues that such enquiry can and must address. At a time when we are witnessing increasing automation in many areas, not least those of governance and state administration, and challenges to existing forms of related practice and oversight in ways that may be difficult to determine and evaluate in terms of the opening or closing of future outcomes, the need for such study is all the more pressing. It is hoped that these papers will help stimulate further enquiry and debate within this field.
References
Agar, Jon (2003). The Government Machine: A Revolutionary History of the Computer. Cambridge, MA and London: MIT Press.
Allen, Layman E. and Mary Ellen Caldwell (1963). Modern Logic and Judicial Decision Making: A Sketch of One View. In Hans W. Baade (Ed.), Jurimetrics, pp. 213–270. New York and London: Basic Books.
Andrighetto, Giulia, Rosaria Conte, Eunate Mayor Villalba, and Giovanni Sartor (2012). Introduction to the special issue: simulation, norms and laws. Artificial Intelligence and Law 20(4), 335–337.
Ashley, Kevin D. (1991). Reasoning with cases and hypotheticals in HYPO. International Journal of Man-Machine Studies 34(6), 753–796.
Ashley, Kevin D. (2017). Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge: Cambridge University Press.
Ashley, Kevin D. and E.L. Rissland (1988). A case-based approach to modeling legal expertise. IEEE Expert 3(6), 70–77.
Axelrod, Robert (1997). The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, NJ: Princeton University Press.
Baase, Sara (2009). A Gift of Fire: Social, Legal and Ethical Issues for Computing and the Internet (Second ed.). Upper Saddle River, NJ: Pearson.
Baird, Douglas G., Robert H. Gertner, and Randal C. Picker (1994). Game Theory and the Law. Cambridge, Massachusetts and London, England: Harvard University Press.
Bratton, Benjamin H. (2015). The Stack: On Software and Sovereignty. Cambridge, MA and London: MIT Press.
Brownsword, Roger (2008). So What Does the World Need Now? Reflections on Regulating Technologies. In Roger Brownsword and Karen Yeung (Eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, pp. 23–48. Oxford and Portland, Oregon: Hart Publishing.
Buchanan, Bruce G. and Thomas E. Headrick (1970). Some Speculation about Artificial Intelligence and Legal Reasoning. Stanford Law Review 23, 40–62.
Calo, Ryan, Michael Froomkin, and Ian Kerr (Eds.) (2016). Robot Law. Cheltenham: Edward Elgar Publishing.
Castañeda, Claudia and Lucy Suchman (2014). Robot Visions. Social Studies of Science 44(3), 314–341.
Catlow, Ruth, Marc Garrett, Sam Skinner, and Nathan Jones (Eds.) (2017). Artists Re:Thinking The Blockchain. London and Liverpool: Torque, Furtherfield and the Foundation for Art and Creative Technology.
Caudill, David S. (2015). Laboratory Life and the Economics of Science in Law. In Kyle McGee (Ed.), Latour and the Passage of Law, pp. 273–303. Edinburgh: Edinburgh University Press.
Chopra, Samir and Laurence F. White (2011). A Legal Theory for Autonomous Artificial Agents. Ann Arbor: University of Michigan Press.
Cockshott, Paul, Lewis M. Mackenzie, and Greg Michaelson (2012). Computation and its Limits. Oxford: Oxford University Press.
Cohen, L. Jonathan (1977). The Probable and the Provable. Oxford: Clarendon Press.
Cortes, Corinna and Vladimir N. Vapnik (1995). Support-vector networks. Machine Learning 20(3), 273–297.
Davis, Martin (1958). Computability and Unsolvability. New York: Dover Publications.
de Vries, Katja (2010). On Probable Grounds: Probabilistic Thought and the Principle of Reason in Law and Data Science. Ph. D. thesis, Department of Philosophy, Leiden University.
Dixon-Román, Ezekiel (2016). Algo-ritmo: More-than-human performative acts and the racializing assemblages of algorithmic architectures. Cultural Studies↔Critical Methodologies 16(5), 482–490.
Duxbury, Neil (Ed.) (1999). Random Justice: On Lotteries and Legal Decision-Making. Oxford: Oxford University Press.
Eubanks, Virginia (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
Gillespie, Tarleton (2014). The Relevance of Algorithms. In Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society, pp. 167–194. Cambridge, MA and London: MIT Press.
Golumbia, David (2016). The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis and London: University of Minnesota Press.
Goodman, Bryce and Seth Flaxman (2016). EU regulations on algorithmic decision-making and a “right to explanation”. Presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY, https://arxiv.org/abs/1606.08813v3.
Goodrich, Peter (1987). Legal Discourse: Studies in Linguistics, Rhetoric and Legal Analysis. Basingstoke: Macmillan.
Goodrich, Peter (2006). A Theory of the Nomogram. In Peter Goodrich, Lior Barshack, and Anton Schütz (Eds.), Law, Text, Terror: Essays for Pierre Legendre, pp. 13–33. Abingdon: Glass House Press.
Goodrich, Peter, Lior Barshack, and Anton Schütz (2006). Introduction. In Peter Goodrich, Lior Barshack, and Anton Schütz (Eds.), Law, Text, Terror: Essays for Pierre Legendre, pp. 1–10. Abingdon: Glass House Press.
Goodrich, Peter, Costas Douzinas, and Yifat Hachamovitch (1994). Introduction: politics, ethics and the legality of the contingent. In Peter Goodrich, Costas Douzinas, and Yifat Hachamovitch (Eds.), Politics, Postmodernity and Critical Legal Studies: The legality of the contingent, pp. 1–31. London and New York: Routledge.
Hacking, Ian (1975). The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge: Cambridge University Press.
Hacking, Ian (1990). The Taming of Chance. Cambridge: Cambridge University Press.
Hand, Patrick (1982). Probable Cause Based on Inaccurate Computer Information: Taking Judicial Notice of NCIC Operating Policies and Procedures. Fordham Urban Law Journal 10(3), 497–510. Available online: https://ir.lawnet.fordham.edu/ulj/vol10/iss3/5/.
Hart, H.L.A. (1961). The Concept of Law. Oxford: Oxford University Press.
Ho, Tin Kam (1995, 14–16 August). Random Decision Forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, pp. 278–282.
Hunter, Rosemary, Clare McGlynn, and Erika Rackley (2010). Feminist Judgements: An Introduction. In Rosemary Hunter, Clare McGlynn, and Erika Rackley (Eds.), Feminist Judgements: From Theory to Practice, pp. 3–29. London: Hart.
Johns, Fleur (2013). Non-Legality in International Law: Unruly Law. Cambridge: Cambridge University Press.
Kerimov, Djangir A. (1963). Cybernetics and Soviet Jurisprudence. In Hans W. Baade (Ed.), Jurimetrics, pp. 71–77. New York and London: Basic Books.
Latour, Bruno (2004). Scientific objects and legal objectivity. In Alain Pottage and Martha Mundy (Eds.), Law, Anthropology, and the Constitution of the Social: Making Persons and Things, pp. 73–114. Cambridge: Cambridge University Press.
Latour, Bruno (2010). The Making of Law: An Ethnography of the Conseil d’État. Cambridge: Polity.
Lefebvre, Alexandre (2008). The Image of Law: Deleuze, Bergson, Spinoza. Stanford, California: Stanford University Press.
Leibniz, Gottfried Wilhelm (1966). Sämtliche Schriften und Briefe. Reihe 6: Philosophische Schriften. Band 2: 1663-1672. Berlin: Akademie Verlag.
Lem, Stanisław (2013). Summa Technologiae. Minneapolis and London: University of Minnesota Press. Translated and introduced by Joanna Zylinska.
Lessig, Lawrence (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Loevinger, Lee (1963). Jurimetrics: The Methodology of Legal Inquiry. In Hans W. Baade (Ed.), Jurimetrics, pp. 5–35. New York and London: Basic Books.
Mania, Karolina (2015). Online dispute resolution: The future of justice. International Comparative Jurisprudence 1(1), 76–86.
Mayor, Eunate (2011). Legal Analysis: Its Cognitive Weaknesses. In J. Stelmach and W. Załuski (Eds.), Game Theory and the Law, Volume 7 of Studies in the Philosophy of Law, Krakow, pp. 229–247. Copernicus Center Press.
Mazzega, Pierre, Olivier Therond, Thomas Debril, Hug March, Christophe Sibertin-Blanc, Romain Lardy, and Daniel Sant’ana (2014). Critical multi-level governance issues of integrated modelling: An example of low-water management in the Adour-Garonne basin (France). Journal of Hydrology 519, 2515–2526.
McGee, Kyle (2015). On Devices and Logics of Legal Sense: Toward Socio-Technical Legal Analysis. In Kyle McGee (Ed.), Latour and the Passage of Law, pp. 61–92. Edinburgh: Edinburgh University Press.
McQuillan, Dan (2015). Algorithmic states of exception. European Journal of Cultural Studies 18(4-5), 564–576.
Mirowski, Philip (1994). The Realms of the Natural. In Philip Mirowski (Ed.), Natural Images in Economic Thought: Markets Read in Tooth and Claw, pp. 451–483. Cambridge: Cambridge University Press.
Murphy, Peter (Ed.) (2003). Evidence, Proof, and Facts: A Book of Sources. Oxford: Oxford University Press.
Murphy, Tim (2004). Legal Fabrications and the case of ‘Cultural Property’. In Alain Pottage and Martha Mundy (Eds.), Law, Anthropology, and the Constitution of the Social: Making Persons and Things, pp. 115–141. Cambridge: Cambridge University Press.
Mussawir, Edward (2011). Jurisdiction in Deleuze: The Expression and Representation of Law. London and New York: Routledge.
Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Novet, Jordan (2014, November). Facebook’s AI team hires Vladimir Vapnik, father of the popular support vector machine algorithm. VentureBeat. https://venturebeat.com/2014/11/25/facebooks-ai-team-hires-vladimir-vapnik-father-of-the-popular-support-vector-machine-algorithm/. Accessed 28th June 2018.
O’Neil, Cathy (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane.
Ostrom, Elinor (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press.
Parisi, Luciana (2013). Contagious Architecture: Computation, aesthetics and space. Cambridge MA and London: MIT Press.
Pasquale, Frank (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, Massachusetts and London, England: Harvard University Press.
Peel, Norman D., Garth E. Pickett, and Stephen T. Buehl (1970). Racial Discrimination in Public Housing Site Selection. Stanford Law Review 23, 63–147.
Quinn, Michael J. (2006). Ethics for the Information Age (Second ed.). Boston: Pearson.
Rieu-Clarke, Alistair, Andrew Allan, and Sarah Hendry (Eds.) (2017). Routledge Handbook of Water Law and Policy. London and New York: Routledge.
Riles, Annelise (2011). Collateral Knowledge: Legal Reasoning In The Global Financial Markets. Chicago and London: University of Chicago Press.
Rosenblat, Alex (2018). Uberland: How Algorithms Are Rewriting the Rules of Work. Berkeley and Los Angeles: University of California Press.
Rotman, Brian (2008). Becoming Beside Ourselves: The Alphabet, Ghosts, and Distributed Human Being. Durham and London: Duke University Press.
Rouvroy, Antoinette (2011). Technology, virtuality and utopia: Governmentality in an age of autonomic computing. In Mireille Hildebrandt and Antoinette Rouvroy (Eds.), Law, Human Agency and Autonomic Computing, pp. 119–140. London and New York: Routledge.
Sachs, Lawrence B. (2006). From St. Ives to Cyberspace: The Modern Distortion of the Medieval ‘Law Merchant’. American University International Law Review 21(5), 685–812.
Schmidt, Robert H. (2003). The Influence of the Legal Paradigm. In Peter Murphy (Ed.), Evidence, Proof, and Facts: A Book of Sources, pp. 103–120. Oxford: Oxford University Press. Originally published in The South Texas Law Review, Vol.40 No.2, 1999.
Schuppli, Susan (2014, September-October). Deadly algorithms: Can legal codes hold software accountable for code that kills? Radical Philosophy 187, 2–8.
Scott, Brett (2013). Heretic’s Guide to Global Finance: Hacking the Future of Money. London: Pluto Press.
Sergot, M.J., F. Sadri, R.A. Kowalski, F. Kriwaczek, P. Hammond, and H.T. Cory (1986). The British Nationality Act as a Logic Program. Communications of the ACM 29(5), 370–386.
Sherwin, Richard K. (2011). Visualizing Law in the Age of the Digital Baroque: Arabesques and Entanglements. London and New York: Routledge.
Skeggs, Bev and Simon Yuill (2018). Subjects of Value and Digital Personas: Reshaping the Bourgeois Subject, Unhinging Property from Personhood. Subjectivity. Forthcoming.
Solum, Lawrence B. (1992). Legal Personhood for Artificial Intelligences. North Carolina Law Review 70, 1231–1287.
Spengler, Joseph J. (1963). Machine-Made Justice: Some Implications. In Hans W. Baade (Ed.), Jurimetrics, pp. 36–52. New York and London: Basic Books.
Suchman, Lucy (2016, April). Situational awareness and adherence to the principle of distinction as a necessary condition for lawful autonomy. In CCW Informal Meeting of Experts on Lethal Autonomous Weapons, Geneva.
Susskind, Richard (2017). Tomorrow’s Lawyers: An Introduction To Your Future (Second Edition ed.). Oxford: Oxford University Press.
Susskind, Richard and Daniel Susskind (2015). The Future of the Professions: How Technology will Transform the Work of Human Experts. Oxford: Oxford University Press.
Susskind, Richard E. (1986). Expert Systems In Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning. The Modern Law Review 49(2), 168–194.
Szabo, Nick (1997). Formalizing and Securing Relationships on Public Networks. First Monday 2(9).
Tapscott, Don and Alex Tapscott (2016). Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business and the World. London: Penguin.
Tedre, Matti and Ron Eglash (2006). Ethnocomputing. In Matthew Fuller (Ed.), Software Studies: A Lexicon, pp. 92–101. Cambridge MA and London: MIT Press.
Teubner, Gunther (1993). Law as an Autopoietic System. Oxford and Cambridge: Wiley-Blackwell.
Tribe, Laurence H. (1971). Trial by Mathematics: Precision and Ritual in the Legal Process. Harvard Law Review 84, 1329–1393.
Verran, Helen (2001). Science and an African Logic. Chicago and London: University of Chicago Press.
Vismann, Cornelia (2008). Files: Law and Media Technology. Stanford, California: Stanford University Press. Translated by Geoffrey Winthrop-Young.
von Neumann, John and Oskar Morgenstern (1944). Theory of Games and Economic Behavior (Second ed.). Princeton, NJ: Princeton University Press.
Weheliye, Alexander G. (2014). Habeas Viscus: Racializing Assemblages, Biopolitics, and Black Feminist Theories of the Human. Durham and London: Duke University Press.
Winner, Langdon (1992). Autonomous Technology: Technics-out-of-control as a Theme in Political Thought. Cambridge, MA and London: MIT Press. First published 1977.
Notes
- Goodman and Flaxman 2016, O’Neil 2016. ↩
- Lessig 1999, Baase 2009, Quinn 2006. ↩
- Pasquale 2015, Noble 2018. The terms Artificial Intelligence and Machine Learning are sometimes used interchangeably; however, the latter is best understood as a subset of the broader field of Artificial Intelligence. Machine Learning algorithms are typically characterised by operating on large statistical data sets and by their ability to adapt to data, through either supervised or unsupervised means, whereas Artificial Intelligence includes other approaches such as purely rule-based systems, agent-based systems and first-order logic systems. ↩
- Rosenblat 2018. ↩
- Schuppli 2014, Calo et al. 2016, Suchman 2016. ↩
- Gillespie 2014. ↩
- Notable early examples of critical responses to such technology in relation to law and governance include MacBride 1967, Tribe 1971, and Winner 1992. In his Summa Technologiae of 1964, Stanisław Lem spoke of “The Black Box” as “The Regulator of the Highest Kind,” Lem 2013, Chapter 4, cited in Bratton 2015, p. 341. ↩
- McGee 2015, p. 61. ↩
- “A regulatory environment that is dense with these new technologies is a very different place to an environment that relies on compliance with norms that are either legally or morally expressed or simply implicit in custom and practice.” — Brownsword 2008, p. 25. ↩
- For an analysis of the increasingly visual form that evidence has taken through the use of computer simulations, however, see Sherwin 2011. ↩
- Goodrich 1987, Goodrich 2006, Goodrich et al. 2006. ↩
- Murphy 2004, p. 121. ↩
- Agar 2003, Bratton 2015. ↩
- It is tempting to suggest that this distinction is echoed in that between the preference for functional programming languages in the US, such as Lisp, and declarative languages, such as Prolog, in France. One example of an attempt to model UK statutory law in Prolog was that of researchers at Imperial College, University of London, working with the British Nationality Act, see Sergot et al. 1986. ↩
- Ashley and Rissland 1988, Ashley 1991. ↩
- Susskind and Susskind 2015, p. 68 and Susskind 2017. ↩
- Eubanks 2017. ↩
- Murphy 2004, pp. 121, 126. Bratton makes a similar point in regard to the wider relation between politics and programming which he describes as “transgenic,” each may alter the other “in ways that are often unpremeditated and misunderstood.” — Bratton 2015, p. 327. ↩
- Loevinger 1963. ↩
- Allen and Caldwell 1963. ↩
- Ashley 2017. ↩
- The Random Forest approach was first devised by Tin Kam Ho; see Ho 1995. ↩
- An historical survey of their development is provided in Susskind 1986. ↩
- Buchanan and Headrick 1970. ↩
- See Susskind 1986 and Ashley 1991. ↩
- Examples of the application of SVM systems to legal material are given in Ashley 2017. For an outline of computer use within the legal system and training of the USSR during the 1960s see Kerimov 1963. ↩
- Defined by Vladimir Vapnik and Alexey Chervonenkis, and later revised by Corinna Cortes and Vapnik in the 1990s, see Cortes and Vapnik 1995. ↩
- “… the historical emergence of planetary-scale computation and neoliberalism are intertwined.” — Bratton 2015, p. 21. ↩
- Novet 2014. ↩
- See, for example, Parisi 2013 who provides a different reading of the relation between code and capital that suggests other areas of critical analysis. ↩
- von Neumann and Morgenstern 1944. ↩
- Baird et al. 1994. ↩
- Cohen 1977, p. 54. ↩
- Murphy 2003 provides a compendium of various historical sources on this specific topic. ↩
- “Game theory posits that, regardless of how people go about making decisions, the actions they take are consistent with a few basic principles. These include the idea of strict dominance.” — Tribe 1971 quoted in Baird et al. 1994, p. 271. ↩
- Andrighetto et al. 2012, p. 336. ↩
- Axelrod 1997. ↩
- Mayor 2011 and Mazzega et al. 2014. The key work on collective resources in this context is Ostrom 1990 which presents a modified form of Game Theory. ↩
- See Mayor 2011 for example. On the relation of international environmental law with local customary law see the articles in Rieu-Clarke et al. 2017. Such an approach would also open up interesting questions in regard to the relation of different legal traditions and the cultural basis and biases of computational paradigms as explored in research such as Verran 2001 and Tedre and Eglash 2006. ↩
- For an overview of the history and potential development of ODR see Mania 2015. ↩
- https://lexmachina.com ↩
- https://www.tylertech.com/solutions-products/modria ↩
- The proposed service was to be known as Her Majesty’s Online Court (HMOC); the report is available at https://www.judiciary.gov.uk/wp-content/uploads/2015/02/Online-Dispute-Resolution-Final-Web-Version1.pdf, see also Susskind and Susskind 2015, p. 70. ↩
- https://www.donotpay.com/ ↩
- http://www.bbc.co.uk/news/technology-36650317 ↩
- https://www.donotpay.com/terms/ ↩
- Susskind and Susskind 2015, pp. 210–211. ↩
- http://www.contractexpress.com, https://www.exari.com, see Susskind and Susskind 2015, p. 69. ↩
- “One might also think of the deal as an abode, a workplace, or a polity divided in and against itself. … questions of when and where those parties may be governed by non-state norms (particularly merchant law or usage, or lex mercatoria), in preference to norms prescribed by state institutions (courts and legislatures).” — Johns 2013, p. 112. ↩
- https://www.ethereum.org/. Smart contracts pre-date blockchain technologies and were first proposed by computational cryptographer and legal researcher Nick Szabo in 1994, http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html, see also Szabo 1997. ↩
- Scott 2013. For an excellent collection of essays that are both exploring and questioning the potential of blockchain technologies see Catlow et al. 2017. ↩
- Tapscott and Tapscott 2016. ↩
- Sachs 2006, see also Golumbia 2016. ↩
- “… corporate parties’ enactments of ‘non-legal’ autonomous preference in market transactions tend to be structured in particular, patterned ways, including through deal architectures and financial modelling.” — Johns 2013, p. 114. ↩
- “The limits of the traditional conception of contracting as a spontaneous meeting of minds have long been highlighted in studies of standard-form, or boilerplate, contracting.” — Johns 2013, pp. 110–111. Johns draws on the work of Robert Hale in making this analysis. See also Riles 2011. Lessig’s arguments on the relation between code and law are given in Lessig 1999. ↩
- See also Susskind and Susskind 2015, p. 71. ↩
- Johns 2013 outlines different forms that non-legality might take: illegality, extra-legality, pre- and post-legality, supra-legality, and infra-legality. ↩
- Peel et al. 1970. ↩
- See, for example, Spengler 1963, Tribe 1971 and Hand 1982, discussed in Susskind and Susskind 2015, p. 68. ↩
- O’Neil 2016. ↩
- This may start with simply being suitably informed about how to configure the tools one uses, although the issue goes far beyond this. ↩
- See Skeggs and Yuill 2018. ↩
- The guidance on “automated individual decision making” within the GDPR may be a step towards this but, as Goodman and Flaxman 2016 argue, is very much limited in its scope to address the actual forms of current automated processing systems. ↩
- Chopra and White 2011 provide the most detailed survey and analysis of this, taking into account recent developments in computing and cognitive theory. ↩
- Solum 1992. ↩
- “Legal subjectivity is not a firmer concept, obviously: both assumed and constituted by law; it appears irremediably self-referential, or enclosed in a positivity that can never completely be relied upon.” — Rouvroy 2011, p. 120. ↩
- Hart 1961. It was from this perspective that Hart pushed for the decriminalization of homosexuality. ↩
- Derrida 1988, quoted in Goodrich et al. 1994, p. 26. ↩
- “Law as code, as a legislative or algorithmic construction, craves totality. It seeks to command and control the events that fall within its province. Justice seeks neither.” — Sherwin 2011, p. 161. ↩
- On the contingency of law see Goodrich et al. 1994, p. 1. ↩
- Castañeda and Suchman 2014, Suchman 2016, McQuillan 2015 and Schuppli 2014. McQuillan also addresses issues raised by predictive analytics to which the GDPR is a response, a response he considers similarly inadequate. ↩
- Schuppli 2014, p. 7. ↩
- “One consequence of the view of legal subjects as atomised, self-interested, self-determining beings is that legal responsibility tends to be ascribed only to subjects who have acted intentionally, whereas a relational, interdependent view of humanity would focus more on the effects of actions.” — Hunter et al. 2010, p. 22. ↩
- Teubner 1993, Lefebvre 2008, Mussawir 2011. ↩
- Johns 2013. ↩
- Rouvroy 2011, Tribe 1971. ↩
- Duxbury 1999, p. 18. My thanks to Burkhard Schafer for bringing this to my attention. ↩
- As Justice Holmes put it: “An ideal system of law should draw its postulates and its legislative justification from science.” — Oliver Wendell Holmes, Learning and Science, speech delivered at a dinner of the Harvard Law School Association, June 25, 1895, reprinted in Collected Legal Papers 138 (1920). Quoted in Loevinger 1963, p. 6. ↩
- For the historical development of mathematical probability see Hacking 1975 and Hacking 1990. For the specific relation of this to law see Cohen 1977 and de Vries 2010. ↩
- Cohen 1977, p. 52. ↩
- Aleatory contracts are those in which the outcome cannot be determined, such as to cover gambling agreements but also insurance and derivatives. See de Vries 2010 who draws from Leibniz 1966, Book 4, Chapter xiv, ‘Judgment’, Chapter xv, ‘Probability’, and Chapter xvi, ‘The degrees of assent’. ↩
- Latour 2004, Mirowski 1994. ↩
- Schmidt 2003. See also Latour 2010, p. 207 and Caudill 2015. ↩
- See Bratton 2015, pp. 25–29. ↩
- Goodrich 2006, Rotman 2008. ↩
- Bratton 2015, p. 29. ↩
- “… computation is central to all aspects of our existence; that is, the application of rules to information to produce new information, usually to make a decision about what to do next.” — Cockshott et al. 2012, p. 1. ↩
- In the wake of Hilbert it is almost as though mathematics became a legislative practice, as the opening words of Martin Davis’ Computability and Unsolvability make clear: “The primary task of present-day mathematicians is that of determining whether various propositions concerning mathematical objects (e.g., integers, real numbers, continuous functions, etc.) are true or false.” — Davis 1958, p. xv. ↩
- de Vries 2010. ↩
- http://artwarez.org ↩
- Vismann 2008. ↩
- Dixon-Román 2016. ↩
- Parisi 2013. ↩
- Weheliye 2014. ↩