Artificial Rhetorical Agents and the Computing of Phronesis

Article Information

  • Author(s): Jennifer Maher
  • Affiliation(s): University of Maryland, Baltimore County
  • Publication Date: 15th January 2016
  • Issue: 5
  • Citation: Jennifer Maher. “Artificial Rhetorical Agents and the Computing of Phronesis.” Computational Culture 5 (15th January 2016). http://computationalculture.net/artificial-rhetorical-agents-and-the-computing-of-phronesis/.


Abstract

Ongoing work by artificial intelligence researchers aims to create moral machines, ethical robots, and artificial moral agents (AMAs), wired with a codable sense of what is good, or what Aristotle called phronesis, which he defined as the ability of some people to ‘see what is good for themselves and what is good for humans in general.’ Of paramount importance to computational phronesis is the creation of artificial rhetorical agents (ARAs) that will ultimately possess the as-yet still uniquely human ability to function persuasively. With such agents, chatbots, persuasive robotics, and argumentative machines stand to be transformed from limited, rule-bound bots into ARAs capable not only of computing nuanced solutions to complex moral problems but also of offering persuasive explanations of the merit of those solutions. Yet, this transformation is possible only if the complexities of a world constituted just as much by the ambiguity of rhetoric as by the certainty of computation can be hacked, an endeavor proving more difficult than perhaps even Descartes imagined in his dreams of a world constituted by incontrovertibility.


Introduction

On a cold night in Ulm, Germany, on November 10, 1619, René Descartes received a series of dreams in which a ‘mirabilis scientiae fundamenta’ was revealed to him. As he recounts in his autobiographical A Discourse on the Method, this foundation for a wonderful science was to be built upon mathematics and promised to unite all forms of knowledge, even those that were increasingly treated as antithetical. Though knowledge (scientia) and wisdom (sapientia) had in the classical Western world once enjoyed a complementary partnership, philosophical wisdom would become increasingly separated from scientific knowledge. This ‘universal math,’ of which Descartes dreamed, promised to formally unite the two, as his tree metaphor in The Principles of Philosophy illustrates:

The whole of philosophy is like a tree. The roots are metaphysics, the trunk is physics, and the branches emerging from the trunk are all the other sciences, which may be reduced to three principal ones, namely medicine, mechanics and morals. By ‘morals’ I understand the highest and most perfect moral system, which presupposes a complete knowledge of the other sciences and is the ultimate level of wisdom. 1

But until this Tree of Knowledge took root and with it a science of morality, Descartes would make do with what he called his ‘provisional moral code.’ 2 The moral maxims comprising his code included following the instructions of God taught by his religion, obeying his country’s laws and customs, possessing moderate opinions formed by those he knew to be wisest, acting with resolve rather than by fortune, and resisting desire which tainted reason. 3 From these maxims, which arose from his own reasoning about what constitutes good, Descartes rhetorically inscribed and justified a method for moral action, one that he reasoned came closest to minimizing the shortcomings of probability and ensuring the certainty of incontrovertibility. ‘In short,’ Descartes explained, ‘the method that teaches one to follow the correct order and to enumerate all the factors of the object under examination contains everything that confers certainty on arithmetical rules.’ 4 With this method Descartes aimed to resist the lure of those principles that were formed simply and problematically from ‘majority view’ 5 and to justify his moral method through appeals to the logic of mathematics, which was the only branch of knowledge that ensured the ‘certainty and the incontrovertibility of its proofs’ 6 and, therefore, truth with a capital ‘T’. For Descartes, only universal math would finally offer solutions to the ‘present corrupt state of our morals’ 7 and achieve the certainty of true and false in matters of good and ill. But until then he believed that his provisional moral code offered the best method for ‘perfecting judgment.’ 8 In Descartes’ dreams, reasoned deliberation and its expression in the words of his provisional code created both a moral method and a moral language meant to bridge a gap until a time when mathematical calculation would confer to the realm of ought and ought not the certainty of is and is not.

With ongoing work in the field of artificial intelligence (AI), the object of Descartes’ dreams seems more possible than ever, especially through the development of an artificial moral agent (AMA). Machine morality is often elucidated according to a hierarchy, 9 from low-level moral capabilities ascribed to software whose execution carries any kind of moral consequence (e.g., a plane’s autopilot, an electrocardiogram’s reading, a cellphone’s map) to an autonomous artificial being capable of complex moral deliberation. The AMA, also referred to as a moral machine or ethical robot, falls into the latter category and holds out the potential for digital beings to compute what has long been considered to be at the heart, or perhaps more fittingly, the soul of what it means to be human. But, as we see in Descartes’ elucidation of his provisional code, the moral often takes shape in and through language. In this way, conceptions of what is moral often materialize through explanation and justification. As a result, the dream of the AMA is very much dependent upon the realization of artificial rhetorical capabilities. Yet, the ability to encode and decode the rhetorical significance of symbols—in sum, to make meaning—is a process so complex that it might well always serve to differentiate human from machine. 10 As Descartes acknowledged, even the best imitations of the machine qua human would ultimately be revealed because first ‘they would never be able to use words or other signs by composing them as we do to declare our thoughts to others’ and second ‘they did not act consciously.’ 11 But presumably even rhetoric would ultimately fall under the purview of universal math to become computational.

This essay explores the on-going challenge to create what I call the artificial rhetorical agent (ARA), capable not only of making moral judgments but also of explaining, through the persuasive use of natural language—that is, rhetoric—the reasoning behind those judgments. Through the analysis of natural language, computer scientists and linguists working in computational linguistics have sought to instill in the digital machine the still uniquely human ability to communicate, at first, with validity and, ultimately, with persuasion. 12 The ‘duty of rhetoric,’ Aristotle explains, is ‘to deal with such matters as we deliberate upon without arts or systems to guide us.’ 13 As a result, matters of morality are also matters of rhetoric. Because the modern landscape is fraught with moral contestation, certainly more so than Descartes ever experienced, we must expect that an AMA have a sense of what is good, what Aristotle called phronesis, which he described in this way: ‘[W]e credit men with practical wisdom [phronesis] in some particular respect when they have calculated well with a view to some good end which is one of those that are not the object of any art . . . It remains, then, that it is a true and reasoned state of capacity to act with regard to the things that are good or bad for man. For while making has an end other than itself, action cannot; for good action itself is its end.’ 14 But we must also expect that an AMA have the ability to participate in rhetorical exchange about why this sense and its manifestation in a position or a choice of action is good and not ill. In short, only with computational rhetoric does computational phronesis stand to transform the digital machine from a limited-capacity, rule-bound bot 15 to an ARA capable of offering nuanced explanations of moral judgments.

Transferring Human Morality

In ‘Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter’, published in January 2015, the Future of Life Institute noted that AI research has been focused for the last 20 years on the creation of ‘systems that perceive and act in some environment.’ 16 Celebrating the potential of AI to change the world for the better, the letter also served as a warning, one made even more newsworthy given that signatories included Stephen Hawking and Elon Musk:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. 17

In a hyperlinked document provided via the letter, economic maximization in industry implementation, legal implications of self-driving vehicles, the production of autonomous weapons, and the role of machine ethics were just some of the research priorities that were noted as needing much consideration in order to ensure that ‘our AI systems must do what we want them to do.’ 18

In the realm of machine morality, getting AMAs to do what we want them to do often translates to transferring to the machine human-oriented philosophical conceptions of morality such as Kantianism and utilitarianism, the two dominant philosophies in the West 19 and, not surprisingly, in AI research. Immanuel Kant’s moral philosophy functions deontologically through the existence of a universal moral law that exists a priori, independent of human experience. Through a categorical imperative that demands that any individual ‘[a]ct only on that maxim through which you can at the same time will that it should become universal law,’ 20 morality is expressed as a duty. If given the freedom to act according to her reason, Kant argues, an individual ought to act morally and ethically so that the interest to act is motivated solely by this universal law as a categorical imperative, meaning ‘“I ought to do so and so, even though I should not wish for anything else”.’ 21 Thus, the will guided by an unfettered reason bound only by the categorical imperative is independent of immediate material or practical effects and instead is the result of a rationality that dictates that one ought to act so that ‘the highest good [is] a good common to all.’ 22

In contrast, the philosophy of utilitarianism, first posited by Jeremy Bentham and developed by John Stuart Mill, contends that the common good is best served by calculating 23 the consequences of any action for the ‘aggregate of all persons.’ 24 As Bentham crystallized in the ‘greatest happiness principle,’ any action to be considered moral must aim to maximize pleasure and minimize pain for the greatest number of people. Because the attainment of pleasure and avoidance of pain ‘govern us in all we do, in all we say, in all we think,’ moral action becomes an on-going exercise in promoting utility. Although critiqued for amounting to little more than what Charles Taylor calls a ‘weak evaluation’ that fails to address ‘deeper’ issues and preferences ‘about the quality of life, the kind of beings we are or want to be,’ 25 utilitarianism nonetheless promotes a moral approach to the world, even if quite limited.
When James Moor asked ‘Is ethics computable?’ in his seminal 1995 article of that title, he turned to Kantianism and utilitarianism as possible models by which to program machine morality. Seeking to answer the questions ‘Are computers capable of making ethical decisions? Someday, could a computer be properly programmed, or perhaps, effectively taught, to make sophisticated ethical judgments? Could a robot be ethical?’ 26 Moor argued that calculation was of paramount importance to machine morality, but not simply because numerical calculation is the machine’s raison d’être. Like Aristotle, Moor conceptualized calculation as ‘making judgments about ethical virtues or what ought to be done.’ 27 But unlike Aristotle, for whom virtue existed in many ways rhetorically as a mean between two extremes, 28 Moor argued that these calculations were best conducted according to the computation of ‘how much good and evil relevant actions will produce.’ 29 In spite of its limitations, which Moor acknowledged, this utilitarian conception of morality affirmed that ‘consideration of the good and evil consequences of actions does and should play an important role in the ethical evaluation process.’ 30 But with Kantianism, Moor argued, the ethical machine could be programmed with a deeper sense of good so as to compute fairness, for instance. Because the categorical imperative dictates that the universal law ought to be followed for itself alone, Moor believed this would encourage a machine to ‘impartially universalize’ 31 and thereby ensure moral and political equality.

Since Moor surveyed possible philosophical avenues by which to program an artificial being with morality, the merits of Kantian and utilitarian AMAs have been revisited by researchers in AI. 32 While the power to compute which actions would lead to the greatest utility by crunching potentially limitless amounts of data makes the appeal of the utilitarian AMA obvious, the rule-based formulation of morality offered by Kant’s categorical imperative offers the possibility of an AMA not only with greater logical consistency but also with increased moral depth. But for Wendell Wallach and Colin Allen, both philosophies are problematic. Utilitarianism poses the problem of ‘an endless stream of calculations’ 33 that threatens a ‘computational black hole.’ 34 At what point, in both space and time, do artificial agents stop computing the possible consequences of any action, especially when processing power might ultimately allow for close analysis of even the butterfly effect? As Wallach and Allen explain, ‘[T]he utilitarian principle specifies that negative calculations should be halted at precisely the point where continuing to calculate rather than act has a negative effect on aggregate utility.’ 35 But the question then becomes, ‘How does one know whether a computation is worth doing without doing the computation itself?’ 36 This is the paradox of calculating morality through computation.
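
To make Wallach and Allen’s paradox concrete, consider a minimal sketch, written in Python with invented utility and outcome functions rather than any published model. The agent estimates aggregate utility by recursive lookahead and must therefore impose an arbitrary cutoff on its own calculation; deciding where that cutoff belongs is precisely the judgment the calculation itself cannot supply.

```python
# An illustrative sketch (not any published model) of the utilitarian
# 'black hole': aggregate utility is estimated by recursive lookahead, but
# the recursion must be cut off at an arbitrary depth, and choosing that
# depth is itself a judgment the calculation cannot supply.

def immediate_utility(action, state):
    # Deterministic toy stand-in for 'how much good and evil' an action produces.
    return (hash((action, state)) % 2001 - 1000) / 1000.0

def possible_outcomes(action, state):
    # Each action branches into two equally likely successor states.
    return [((state, action, i), 0.5) for i in range(2)]

ACTIONS = ("help", "wait", "withdraw")

def expected_utility(action, state, depth, max_depth):
    utility = immediate_utility(action, state)
    if depth >= max_depth:                      # the arbitrary stopping point
        return utility
    for next_state, prob in possible_outcomes(action, state):
        utility += prob * max(
            expected_utility(a, next_state, depth + 1, max_depth) for a in ACTIONS
        )
    return utility

# Each extra level of lookahead multiplies the work (2 outcomes x 3 actions),
# so the calculation balloons long before anything like the butterfly effect.
for horizon in (2, 4, 6):
    print(horizon, round(expected_utility("help", "start", 0, horizon), 3))
```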

While deontological philosophy might appear to offer the very kind of productive constraints missing in utilitarian AMAs, this ‘über-rule computing’ is not any less problematic. While acting according to categorical imperatives like the Golden Rule might initially seem ideal because artificial agents would simply need to execute according to a fixed set of rules, the output from this kind of computation hardly ensures moral good. An AMA programmed to follow the Golden Rule, for example, would have to ‘(1) notice the effect of others’ actions on itself, assess the effect . . . and chose its preferences; (2) assess the consequences of its own actions on the affective state of others, and decide whether they match its own preferences; and (3) take into account differences in individual psychology while working on (1) and (2) . . .’ 37 The degree of nuanced reflection necessary for the application of the Golden Rule in a specific situation would therefore necessitate some consideration of the consequences of that action, according to Wallach and Allen. While this process is ‘exceedingly difficult’ for humans, who, whether consciously or unconsciously, are limited in the amount of information they can bring to decision-making, the problem for AMAs is the potential return to the black hole of computing utilitarian consequences. And perhaps just as problematic, I might note, is the potential for deontological moral philosophies always to be reduced, ultimately, to utilitarian concerns.
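
Wallach and Allen’s three requirements can be sketched in the same hedged spirit; every name and data structure below is an invented placeholder rather than a proposal from the literature, and step (2) makes the slide back into consequence estimation explicit.

```python
# Hypothetical sketch of the three steps a Golden Rule agent would need.
# All profiles and numbers are invented; the point is that step (2) already
# requires estimating consequences for others.

def golden_rule_permits(action, self_profile, other_profile):
    # (1) Notice how the same action, done to the agent, would affect it,
    #     given its own preferences.
    effect_on_self = self_profile["sensitivity"].get(action, 0.0)
    acceptable_to_self = effect_on_self >= self_profile["tolerance"]

    # (2) Assess the consequences of the agent's own action on the affective
    #     state of the other -- i.e., slip back into consequence calculation.
    effect_on_other = other_profile["sensitivity"].get(action, 0.0)

    # (3) Take differences in individual psychology into account: judge the
    #     effect by the other's tolerance, not the agent's own.
    acceptable_to_other = effect_on_other >= other_profile["tolerance"]

    return acceptable_to_self and acceptable_to_other

robot = {"sensitivity": {"interrupt": -0.2, "assist": 0.5}, "tolerance": -0.5}
patient = {"sensitivity": {"interrupt": -0.8, "assist": 0.9}, "tolerance": -0.3}

print(golden_rule_permits("interrupt", robot, patient))  # False: tolerable for the robot, not the patient
print(golden_rule_permits("assist", robot, patient))     # True
```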

In spite of the predilection toward Kantianism and utilitarianism, AI researchers have also explored the potential of less popular moral philosophies, such as Aristotelian virtue ethics 38 and case-based ethics. 39 For instance, Marcello Guarini advocates an inductive approach to machine morality, one that uses a series of cases to train artificial ‘neural network models’ so that abstract moral rules arise organically from the situation rather than deontologically. Through this process the machine comes to have a moral education, which Aristotle believed to be so important to the phronimos, the virtuous person who knows and does good. And should functional magnetic resonance imaging (fMRI), used to map the mechanics of the brain, be brought to bear on computational phronesis, 40 it might allow for a nuanced, situational approach that builds toward models of moral cognition.

Of course, simply transmitting or uploading human conceptions of morality to machines is appealing for reasons that may include familiarity and control. With autonomous cars hitting the road, questions as to what these cars will do in situations that call for not just technical but moral decisions have captured the public imagination. But as Google’s May 2015 report on its self-driving cars illustrates, there is a tendency to construct autonomous machines in such a way that they make up for an inherent lack in human beings, especially their inability to follow rules without exception:

Given the time we’re spending on busy streets, we’ll inevitably be involved in accidents; sometimes it’s impossible to overcome the realities of speed and distance. Thousands of minor accidents happen every day on typical American streets, 94% of them involving human errors, and as many as 55% of them go unreported. (And we think this number is low . . .)

In the six years of our project, we’ve been involved in 12 minor accidents during more than 1.8 million miles of autonomous and manual driving combined. Not once was the self-driving car the cause of the accident. 41

But what Google ignores is the way in which driving is not simply about the flawless implementation of the rules of the road. In her letter to The New York Times about its September 1, 2015 article ‘Google’s Driverless Cars Run into Problem: Cars with Drivers,’ Nancy Lederman eloquently communicates this fact: ‘Driving is more than a set of rules for cars, to be “smoothed out” by developers of self-driving cars. It’s a social skill, in which drivers collaborate in a complex choreography that requires them to understand, anticipate and respond to the actions of other drivers on the road. It’s a wholly human dance, beyond a few technological fixes.’ 42 Yet, if cast similarly to Google’s conception of autonomous cars, computational phronesis stands to eliminate the limitations and errors effected through the implementation of moral beliefs in human actions (e.g., inconsistency, bias, emotion) and ensure the very kind of incontrovertibility of which Descartes dreamed in his science of morality. This kind of certainty might be especially attractive in the case of healthcare, where patient care can involve life-and-death decisions. With the growth of the population over the age of 65, the European Commission, as well as governments in Korea and Japan, is working toward creating regulations for the moral comportment of robotic caregivers. 43 In these cases, the appeal of moral formulas, either as maxims or aggregates, can appear to allay fears about how artificial agents will act in the face of moral dilemmas. 44 Yet, belief in the surety of either deontological or utilitarian moral philosophies has dwindled as of late, 45 so whether they are worth artificially imitating is questionable. In fact, it is quite possible that machine imitation of human morality stands to leave artificial morality in the unsatisfying and confused circumstances in which we find human morality today.

Computing Phronesis

Where once a shared perception of what is good was provided by the likes of Aristotelian virtue ethics or Christianity, philosophers 46 often bemoan the quagmire that is modern morality, as does the general public, which feels that ‘the good life today is being tarnished by moral decay,’ according to a Pew Research Center report at the close of the millennium. 47 Alasdair MacIntyre argues that we live in a ‘new dark age,’ where moral virtues have been overtaken by a bureaucratic rationality that ‘match[es] means to ends economically and efficiently’. 48 Because of the resulting absence of normativity regarding what constitutes virtue and vice, we are left with nothing but ‘simulacra of morality’ in which personal passion and disposition masquerade as virtue. 49 But these simulacra are seldom realized, according to MacIntyre, because ‘our capacity to use moral language, to be guided by moral reasoning, to define our transactions with others morally continues to be central to our view of ourselves.’ 50 Consequently, this ‘rhetoric of morality,’ a phrase he uses pejoratively, serves to give the appearance of moral living when, in fact, such language ‘betrays’ the human condition.

But rather than decry the modern moral state, Marshall McLuhan wrote in 1967 that the ‘new morality’ rejects ‘the endlessly repetitive moralizings’ that occur through ‘value judgments’ rooted in the ‘illusion of the contrast between the past and present’ and effects a shift from the outwardness of consciousness to the inwardness of the subconscious. 51 Rather than ‘classifying’ according to already existing moral philosophies and histories, we are to ‘probe’ and treat ‘each experience as unique,’ so that ‘the good’ is essentially recognized as an ultra-individualistic experience. 52 We can see in McLuhan what MacIntyre derides as the imitation of morality. Where once the calculative power of the phronimos reified what amounted to a singular moral belief system, there now instead exist the ‘languages of morality,’ the effect of which is nothing less than a moral Tower of Babel. 53 Although transferring human morality into machines might be a useful first step in creating AMAs, deontological or utilitarian AMAs that act in ignorance of their own capabilities for the sake of adherence to human-generated, commandment-like rules or the greatest happiness principle seem likely to waste the potential of moral computing power.

Perhaps the most promising kind of approach to computing morality, at least from the perspective of phronetic decision-making, is found in the Beliefs, Desires, and Intentions (BDI) model. Initially proposed by Michael Bratman as a way of understanding human rationality, 54 BDI has proven useful in AI research 55 as a framework for designing architectures that facilitate ‘practical reasoning,’ the typical translation of phronesis in AI research, in contrast to more classical translations such as prudence or practical wisdom. According to Michael Bratman, David Israel, and Martha Pollack, in one of the earliest accounts of BDI, ‘rational agents must both perform means-end reasoning and weigh alternative courses of action.’ 56 In other words, agents deliberate about what is possible and then act with intention to achieve a desired goal. To this end, BDI facilitates the creation of an architecture that attends to the fact that action most often occurs in a space of ‘competing alternatives’ that must be weighed before deciding on one. 57 To illustrate such a space, Bratman, Israel, and Pollack offer the following dilemma:

[O]nce an agent has formed a plan to attend a particular meeting at 1:00, she need not continually weigh the situation at hand in a wholly unfocused manner. Instead, she should reason about how to get there by 1:00; she need not consider options incompatible with her getting there by 1:00; and she can typically ground her further reasoning on the assumption that she will indeed attend the meeting at 1:00.

While hardly dripping with moral inflection, this kind of example offers a just-complex-enough situation that allows an agent to deliberate among a limited number of possibilities. But because ‘an agent is seen as selecting a course of action on the basis of her subjective expected utility, which is a function of the agent’s beliefs and desires,’ this particular kind of means-end analysis is a necessary first step, even if quite limited in terms of illustrating an agent’s ability ‘to see what is good for themselves and what is good for humans in general,’ to borrow from Aristotle.
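
A toy rendering of the meeting example may help to fix ideas; this is a hedged illustration of the general belief-desire-intention cycle, not the architecture Bratman, Israel, and Pollack specify, and all of the beliefs and options are invented.

```python
# Toy belief-desire-intention (BDI) cycle, keyed to the 1:00 meeting example.
# An illustrative sketch only; the options, beliefs, and numbers are invented.

beliefs = {"now": "12:15", "location": "office", "meeting_at": "1:00"}
desires = ["attend the 1:00 meeting"]

# Once adopted, the intention filters deliberation: options incompatible
# with getting there by 1:00 are never weighed at all.
options = [
    {"plan": "walk to the meeting room", "duration_min": 10, "compatible": True},
    {"plan": "take the bus across town", "duration_min": 90, "compatible": False},
    {"plan": "finish the report first", "duration_min": 60, "compatible": False},
]

intention = desires[0]
candidates = [o for o in options if o["compatible"]]       # screen of admissibility
chosen = min(candidates, key=lambda o: o["duration_min"])  # means-end reasoning over what remains

print(f"Intention: {intention}")
print(f"Plan adopted: {chosen['plan']} ({chosen['duration_min']} minutes)")
```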

But as Louise Dennis et al.’s more recent use of BDI illustrates, artificial desire need not be limited to such one-dimensional situations, nor can it be. Because deliberation so often occurs in the midst of complex situations involving multiple agents with multiple goals and motivations, an agent must be able to act according to its own action plans, which result from ‘beliefs and goals and form the basis for practical reasoning.’ 58 Among the scenarios that Dennis et al. examine is the common urban search-and-rescue scenario. In this case, a natural disaster (e.g., an earthquake, a tsunami, or a tornado) has occurred and caused buildings to collapse, and it is the mission of autonomous robots to search for survivors in the rubble. Because Dennis et al. are particularly interested in verification that ‘a robot’s reasoning is correct,’ 59 this situation is limited to one artificial agent moving across a grid and looking for (sensing) a body. But from this we can extrapolate to much more complex situations in which an artificial agent will have to make even more complicated choices: How is a robot to choose whom to help first in the case of multiple survivors? What is the robot to do in the case of a person who has suffered fatal injuries and is in excruciating pain but is not rescuable? Such may be the choices that someday confront Atlas, a ‘humanoid’ robot currently being developed through the Defense Advanced Research Projects Agency’s (DARPA) Robotics Challenge, for example. Yet in light of moral dilemmas with such steep consequences, an artificial agent would also need to explain its choices, a condition often missing in discussion of AMAs. As Dennis et al. note,

Rather than just having a system which makes its own decisions in an opaque way, it is increasingly important for the agent to have explicit reasons (that it could explain, if necessary) for making one choice over another. In the setting of autonomous systems, explicit reasoning assists acceptance. When queried about its choices an autonomous system should be able to explain them, thus allowing users to convince themselves over time that the system is reasoning competently. More importantly, where an autonomous system is likely to be the subject of certification, it may need to fulfill obligations to do with reporting, particularly in instances where an accident or ‘near miss’ has occurred. It is desirable for such reporting to be presented at a high level which explains its choices in terms of what information it had, and what it was trying to achieve. 60

While delving into this aspect of phronesis is not part of Dennis et al.’s particular study, they make clear the rhetorical dimension of complex decision-making. Rather than just computing phronesis and acting accordingly, an AMA must also be able to function as a rhetorical being capable of using persuasion to make the moral worth of its decisions clear to human beings.
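
What such an explaining agent might minimally look like can be sketched as follows; the grid, the survivors, and the triage weights are all invented, and the ‘explanation’ is nothing more than a record of the reasons attached to each exclusion and selection.

```python
# Hypothetical sketch of a search-and-rescue agent that keeps explicit,
# reportable reasons for its choices, in the spirit of Dennis et al.'s call
# for explainable decisions. The grid, survivors, and scoring are invented.

survivors = [
    {"id": "A", "position": (2, 3), "injury_severity": 0.9, "rescuable": True},
    {"id": "B", "position": (8, 1), "injury_severity": 0.4, "rescuable": True},
    {"id": "C", "position": (5, 5), "injury_severity": 1.0, "rescuable": False},
]
robot_position = (0, 0)

def distance(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])       # grid (Manhattan) distance

def priority(s):
    # Invented triage rule: weigh severity against travel time.
    return s["injury_severity"] - 0.05 * distance(robot_position, s["position"])

reasons = []
for s in survivors:
    if not s["rescuable"]:
        reasons.append(f"Excluded {s['id']}: assessed as not rescuable.")

candidates = [s for s in survivors if s["rescuable"]]
chosen = max(candidates, key=priority)
reasons.append(
    f"Chose {chosen['id']}: severity {chosen['injury_severity']} at distance "
    f"{distance(robot_position, chosen['position'])} gave the highest priority score."
)

print("Decision:", chosen["id"])
print("Explanation:")
for reason in reasons:
    print(" -", reason)
```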

Rhetorical Calculation

Unlike in matters that ‘cannot now or in the future be, other than they are,’ Aristotle argues that rhetoric addresses questions regarding what is probable among alternative possibilities. In describing rhetoric as an ‘offshoot of . . . ethical studies,’ 61 Aristotle makes clear that what is considered good or ill is often a matter of persuasion. Such is the nature of rhetoric that it is open to attack for its ability to move others through unethical means toward immoral ends. 62 With the Enlightenment-induced shift from philosophy as thoughtful wisdom to philosophy as pure reason, rhetoric’s dealings in what is probable rather than true threatened to lead people to judgments that were not only immoral but also unreasonable. Kant, for example, expressed his own discomfort with the ‘knack’ of rhetoric as ‘a deceitful art, which understands how to move people, like machines, to a judgment in important matters which must lose all weight for them in calm reflection.’ 63 But rhetoric is no more immune to unethical practice than any other art, although for those like Descartes, this is the essential problem that must be overcome.

What determines whether or not rhetorical action is ethical are the choices made both in the construction of the argument and in the virtue of its end. As Aristotle explains, ‘The origin of action . . . is choice, and that of choice is desire and reasoning with a view to an end. This is why choice cannot exist either without thought and intellect or without a moral state; for good action and its opposite cannot exist without a combination of intellect and character.’ 64 In the case of technical excellence (techne), the artist or craftsperson must choose, according to a ‘true course of reasoning,’ the best methods and means to use in the making of ‘external ends,’ such as a house, a ship, or even an algorithm. 65 Deliberation, as a precursor to judgment, consequently involves the choice of how best to craft the end that in ‘the medical art is health, that of shipbuilding a vessel, that of strategy victory, that of economics wealth.’ 66 But in the case of moral excellence (arête), ends are conceptualized as internal, meaning, at least at the time, of the soul, rather than external. Unlike the good that comes from a house’s use as shelter, internal ends are ‘good in themselves.’ 67 In the context of rhetoric, generally, and the persuasive species of deliberation, specifically, the phronimos is able to construct an argument that ‘aims at establishing the expediency or the harmfulness of a proposed course of action; if he urges acceptance, he does so on the ground that it will do good; if he urges its rejection, he does so on the ground that it will do harm.’ 68 While technical excellence manifests in the specific techniques chosen to craft the argument effectively, moral excellence is more often ambiguous because of taken-for-granted assumptions about what constitutes good.

For Aristotle, these assumptions took shape in the rhetorical proof known as the enthymeme, which he identifies as the most powerful element in persuasion. 69 Unlike a syllogism, in which all premises of an argument are articulated in the course of coming to a conclusion, an enthymeme has at least one unstated premise. For example, a variation of the famous Aristotelian syllogism reads, ‘All humans are mortal. Socrates is human. Socrates is mortal.’ But in an enthymeme, the premise that ‘Socrates is human’ is absent, that is, unstated. As a result, the audience has to supply the missing premise prior to arriving at the conclusion that ‘Socrates is mortal.’ And while the missing premise in this example leaves little room for significant conflict, debates about same-sex marriage, abortion, and, as illustrated earlier in this article, autonomous cars are often full of such holes. In the context of deliberation over the moral ought, the enthymeme serves to communicate to an audience the ‘speaker’s personal character [ethos] . . . so as to make us think him credible.’ 70 To have a virtuous character, then, does not mean that a person is virtuous by nature and that her arguments are therefore virtuous as an extension of her virtue, but rather that she is virtuous because of the values, sometimes explicit, more often implicit, that she uses to make a persuasive argument toward a moral internal end. 71
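
The difference between the two forms can be made concrete in a few lines of code. The toy checker below reaches the conclusion only when every premise is explicit; given the enthymeme, it stalls exactly where an audience would be expected to supply the missing premise. The representation is an illustration, not a formal logic.

```python
# Toy contrast between a syllogism and an enthymeme. Premises are encoded as
# simple (subject, predicate) inclusions; an illustration, not a formal logic.

def follows(premises, conclusion):
    """Chain 'X is/are Y' premises to see whether the conclusion is derivable."""
    subject, predicate = conclusion
    reachable = {subject}
    changed = True
    while changed:
        changed = False
        for s, p in premises:
            if s in reachable and p not in reachable:
                reachable.add(p)
                changed = True
    return predicate in reachable

syllogism = [("Socrates", "human"), ("human", "mortal")]
enthymeme = [("human", "mortal")]          # 'Socrates is human' left unstated
conclusion = ("Socrates", "mortal")

print(follows(syllogism, conclusion))      # True: every premise is explicit
print(follows(enthymeme, conclusion))      # False: the audience must supply the missing premise
```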

Within the limited variations in moral excellence that existed within the Greek polis, the strategic use of the enthymeme could easily be built around certain taken-for-granted value assumptions generally shared among citizens, limited as that group was. Given the complexity often involved in moral decision-making today, determining what is good proves a heady challenge. To address this challenge, the Office of Naval Research recently awarded a grant of $7.5 million over five years to university researchers at Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to develop AMAs. Because of current restrictions on the use of lethal robots, these AMAs would be prohibited from engaging in actual combat, but would participate in the ethical evaluation and implementation of not only military strategies and tactics but also the everyday moral dilemmas faced by soldiers in the context of battle, like search and rescue. 72 In sum, the purpose is to create an AMA with good character. Principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of its Human-Robot Interaction Laboratory, described the aim of this endeavor, which began in 2014, in this way: ‘Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree . . . The question is whether machines—or any other artificial system, for that matter—can emulate and exercise these abilities.’ 73 For only in doing so can the AMA be fully realized as an ARA.

AMAs’ Rhetorical Challenge

The emphasis on computing a good character in artificial agents often overlooks the perhaps even greater challenge to the realization of AMAs: the ability to explain their decision-making processes and judgments in situations where moral ambiguity exists. In an adaptation of the Turing Test, Colin Allen, Iva Smit, and Wendell Wallach propose to test various ethical paradigms as well as the ability of machines to conceptualize moral agency itself. Instead of a human trying to decode whether or not the agent she is interacting with is human or a machine based upon a random conversation, the Moral Turing Test would limit conversations to matters of morality so that ‘[i]f human “interrogators” cannot identify the machine at above chance accuracy, then the machine is, on this criterion, a moral agent.’ 74 But the greatest challenge to such a test, according to its creators, is the ‘emphasis it places on the machine’s ability to articulate moral judgments.’ 75 Just as chatbot competitors in the Turing Test for the Loebner Prize are limited by their inability to read and respond to prompts aimed at proving they are sufficiently human, so too are competitors in a Moral Turing Test. Likewise, Guarini acknowledges that ‘the most obvious limitation of the neural network models . . . is their inability to generate arguments as outputs.’ 76 And Wallach and Allen have gone so far as to state that rhetoric may be the ultimate obstacle for AMAs: ‘An agent will still need to find a way to steer between the many moral considerations that impinge on each new challenge and come up with a course of action that balances those considerations as well as possible, while being able to explain why some considerations could not be fully accommodated.’ 77 But to compute these explanations is ultimately part of the challenge. Because the ‘correspondence between what is said, what is done, and what is conveyed by nonverbal means’ serves as fruitful data for computing ethical decision-making, AMAs will also need to ‘systematically test how words, deeds, and gestures interact to shape ethical judgment.’ 78
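
One hedged way to operationalize ‘above chance accuracy’ is a simple binomial calculation over a series of interrogations; the trial counts below are invented for illustration, and this is only one possible reading of Allen, Smit, and Wallach’s criterion.

```python
# One possible reading of the Moral Turing Test criterion: did interrogators
# identify the machine more often than guessing would predict? The counts
# below are invented for illustration.

from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of k or more correct identifications out of n by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 40        # hypothetical interrogations
correct = 27         # hypothetical correct identifications of the machine

p_value = prob_at_least(correct, n_trials)
print(f"Chance of {correct}/{n_trials} or better by guessing: {p_value:.3f}")
if p_value < 0.05:
    print("Interrogators beat chance: on this criterion, the machine fails.")
else:
    print("Identification is indistinguishable from chance: the machine passes.")
```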

Rhetorical calculation has long been the singular purview of human beings. Even Descartes, in spite of his dreams, argued that the best imitations of the machine qua human would ultimately reveal themselves because a machine cannot use signs and act consciously, ‘for it is practically impossible for there to be enough different organs in a machine to cause it to act in all of life’s occurrences in the same way that our reason causes us to act.’ 79 But Descartes could not have foreseen the ways in which the revolution made possible through software ‘organs’ encased in hardware would construct these distinctions as programming challenges rather than essential differences. For Gottfried Leibniz, the kind of reflexive consciousness manifest through the human declaration of our thoughts to others is inherently limited by ‘intuitive thinking,’ a process whereby ‘we often mistakenly believe we have ideas of things in mind when we mistakenly suppose that we have already explained some of the terms we use.’ 80 Such blindness does not necessarily result from deliberate ignorance; Leibniz contends that humans are naturally ‘blind’ to any notion that is ‘very complex’ because ‘we cannot consider all of its component notions at the same time.’ 81 But if language, or any other symbol system, adheres to ‘the rules of common logic,’ 82 the blindness that results from intuitive thinking can be minimized through formalization:

Any correct calculation can also be considered an example of such an argument conceived in proper form. And so, one should not omit any necessary premise, and all premises should have been either previously demonstrated or at least assumed as hypotheses, in which case the conclusion is also hypothetical. Those who carefully observe these rules will easily protect themselves against deceptive ideas. 83

What Leibniz describes here is logic in terms of the syllogism, where every premise of an argument can be articulated to form what Aristotle called ‘primary deduction,’ 84 which executes propositions essentially as facts that are either true or false. In contrast, the enthymeme is defined by the absence of articulation. Because the enthymeme is ‘the substance of persuasion,’ 85 Leibniz’s symbol system stands to exist outside the realm of rhetoric, beyond the ambiguities that make rhetoric necessary in the first place. Although Leibniz admits, ‘Not that we always need syllogisms,’ he nevertheless insists that the syllogism’s form must be the basis of any argument so that ‘at very least the argument must reach its conclusion by its form,’ one that demands the defining of ‘all terms which are even a bit obscure and prove all truths which are even a bit dubious.’ 86 But with the invocation of obscurity and dubiousness, Leibniz, perhaps in spite of himself, brings us back to the rhetorical realm, where what is considered obscure, what is considered dubious is not a matter of demonstrable fact but rather a matter of rhetorical persuasion.

Computing Artificial Moral Rhetorics

For some, the realization of an ARA is a fool’s dream. John Searle posits that an ARA is wholly unfeasible because ‘the computer . . . has a syntax but no semantics’. In the example of the Chinese Room, Searle illustrates how a native English speaker with no knowledge of the Chinese language could simulate written proficiency in this language well enough that even native speakers of Chinese would mistake her for a native speaker. 87 Locked in a room with batches of Chinese writing (‘the script’) that appear only as ‘meaningless squiggles’, she is given a set of instructions (‘rules’) written in English that allow her to correlate these writings with a second batch of Chinese writings (‘the story’). Then she is given another set of instructions that allows her to correlate a third batch of Chinese symbols (‘the questions’) with the first two batches so that she can produce Chinese script herself. The Chinese symbols that she gives back in response to the third batch, or her output, are considered her ‘answers’ to the questions or input. So efficient do the sets of instructions (‘the program’) become and so proficient does the English-only speaker become at ‘manipulating’ the symbols that her answers to questions could pass as the written words of a native Chinese speaker, even to a native Chinese speaker. But what Searle intends to demonstrate with the Chinese Room example is that, in spite of being able to manipulate symbols, a computer (like the English-only speaker locked in the room) is not able to understand the meaning of those symbols and to use those symbols with intention. 88 Simply put, ‘Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.’ 89 Although certainly persuasive to many, Searle’s argument has not kept those in the field of AI from working to develop what might eventually evolve into ARAs.
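
Searle’s point compresses into very few lines. In the toy ‘room’ below, an invented rulebook correlates incoming strings with outgoing strings, and nothing in the correlation involves understanding what either string means.

```python
# A miniature 'Chinese Room': the program correlates incoming symbols with
# outgoing symbols by rule alone. The rulebook is an invented toy; nothing
# here understands what the symbols mean.

rulebook = {
    "你好吗？": "我很好，谢谢。",        # matched purely as character strings
    "你叫什么名字？": "我没有名字。",
}

def room(question: str) -> str:
    # Syntax without semantics: look up the squiggles, return the squiggles.
    return rulebook.get(question, "请再说一遍。")

print(room("你好吗？"))                  # fluent-looking output, zero understanding
```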

Through the creation of ‘persuasive robotics’, or ‘argumentative machines’, computer scientists and linguists have sought to instill in the machine the uniquely human ability to use rhetorical symbols with intention. 90 But within the context of computation, argument is often cast in instrumental terms, as revealed in a discussion of situated artificial dialogue by Geert-Jan Kruijff et al.:

Language presents a powerful system to express meaning. Also, perception provides a cognitive system with rich experiences of the world. The fundamental problem for (situated) dialogue processing is how to relate linguistically expressed meanings to these experiences. If we look at this problem from the viewpoint of dialogue as a means of communication, then we can pose first of all the following two requirements. Any solution needs to be efficient, to allow a cognitive system to respond in a timely fashion, and effective, so that the system arrives at those meanings which are indeed likely to be correct in a given context. 91

When essentialized as seemingly a-rhetorical, language reverts to a syllogistic enterprise, modeled in such a way that the only true values are the efficiency and efficacy derived from executable input/output. While such a phenomenon certainly stands to correct the chaos of modern morality, it does so through the decimation of any kind of traditional notion of what constitutes moral good. 92

In an overview of argumentation from the end of the twentieth century, T. J. M. Bench-Capon and Paul Dunne note two trends in research on artificial argumentation. The first is oriented toward the development of argumentative models based on principles shared with formal logic and mathematical proofs. This mathematical reasoning approach has tended to dominate argumentation research in AI through an emphasis on rules, protocols, legitimacy, and correctness. 93 The second, more recent trend focuses on the role of persuasion in everyday, contextualized argumentation where ‘the premises upon which debates may build are often presupposed as forming part of the background assumptions common to all parties involved; the information and knowledge brought to bear in the course of discussion will often be incomplete, vague, and uncertain.’ 94 This second trend is occasionally punctuated with a concern for phronesis. According to Bench-Capon and Dunne, practical reasoning (i.e., phronesis) is needed when ‘arguments are . . . not about whether some belief is true, but about whether some action should or should not be performed’ and include ‘dialogue types such as persuasion, deliberation and negotiation.’ 95 But in spite of the still tacit connection between practical reasoning and everyday discourse in AI, a formal approach to practical reasoning still appears quite prevalent. 96

An example of how phronesis is considered in formal argumentation in AI is Katie Atkinson and Bench-Capon’s ‘Practical Reasoning as Presumptive Argumentation Using Action Based Alternating Transition Systems.’ Responding to what they contend is the neglect of practical reasoning because of an emphasis on belief, the authors construct a ‘well-defined structure enabling the precise specifications under which an argument scheme and associated critical questions for the practical reasoning can be instantiated.’ 97 Using a classic AI problem, Atkinson and Bench-Capon formulate the range of reasonable actions that a farmer might take in solving the following problem: ‘[A] farmer is returning from market with a chicken (C), a bag of seeds (S) and his faithful dog (D). He needs to cross a river, and there is a boat (B) but it can only carry the farmer and one of his possessions. He cannot leave the chicken and the seeds together because the chicken will eat the seeds. Similarly he cannot leave the dog and the chicken unattended together. His problem is how to organize his crossing.’ 98 In response to this situation, Atkinson and Bench-Capon pose critical questions that are formulated from the possible set of actions and the possible values that might motivate the farmer. Rather than a singular solution to the problem, this scheme allows for the generation of a number of possible solutions, including ‘a less efficient solution’ that the farmer might choose ‘if it would serve his interests to do so.’ 99 With the subsequent application of their model to the ways in which social laws ‘co-ordinate multi-agent systems’, they investigate the effect of these laws in constraining agents’ (e.g., trains’) behavior ‘so as to achieve certain objectives’ (e.g., avoiding collision). 100 But whether or not such a formal approach to the assessment and application of practical reasoning can remain open to the richness—or muddiness—of rhetorical exigency in the quest to create a functioning ARA is hardly certain. In the seminal article ‘Towards Computational Rhetoric,’ Floriana Grasso makes clear the need for understanding arguments that ‘are both heavily based on the audience’s perception of the world, and concerned more with evaluative judgments than with establishing the truth or falsity of a proposition.’ 101 Yet, the complexity of actually analyzing such arguments and then breaking them down to function computationally often results in the values of good and bad being extremely limited by the need to create formalized models.
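
The farmer’s puzzle itself is standard enough that a brute-force state-space search, sketched below, finds a legal sequence of crossings. What the sketch cannot do is what Atkinson and Bench-Capon’s critical questions are for: it says nothing about which legal plan best serves the farmer’s values.

```python
# Brute-force state-space search over the farmer / chicken / seeds / dog
# puzzle. The sketch enumerates a legal crossing plan; it is deliberately
# silent about the values that might lead the farmer to prefer one plan.

from collections import deque

ITEMS = ("chicken", "seeds", "dog")
FORBIDDEN = [{"chicken", "seeds"}, {"dog", "chicken"}]   # pairs that cannot be left alone

def unattended_ok(bank):
    # A bank without the farmer must not contain a forbidden pair.
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    start = (frozenset(ITEMS), True)     # (items on the near bank, farmer on near bank?)
    goal = (frozenset(), False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (near, farmer_near), plan = queue.popleft()
        if (near, farmer_near) == goal:
            return plan
        here = near if farmer_near else frozenset(ITEMS) - near
        for cargo in list(here) + [None]:                # carry one item, or cross alone
            new_near = set(near)
            if cargo is not None:
                if farmer_near:
                    new_near.discard(cargo)              # taken to the far bank
                else:
                    new_near.add(cargo)                  # brought back to the near bank
            state = (frozenset(new_near), not farmer_near)
            left_behind = state[0] if not state[1] else frozenset(ITEMS) - state[0]
            if unattended_ok(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, plan + [f"cross with {cargo or 'nothing'}"]))
    return None

print(solve())   # one legal seven-crossing plan, starting with the chicken
```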

The difficulties in creating an ARA are undoubtedly great. But this ability to articulate the reasons for moral judgment is crucial to a fully functioning AMA. Of course, for some the dream of the AMA is its ability to function beyond the current limitations of human morality. In this case, the AMA would be beyond rhetoric, as in Thomas Powers’s explanation of a deontologically programmed machine, which could implement a more logical and, possibly, more moral form of Kantianism than is currently possible for humans or first-generation AMAs:

[The moral machine] would have no need for expressions of regret, moral conflictedness or any act of conscience, since everything it did would fall neatly under the categories of moral maxims that we’ve programmed into it or which have been logically derived from those we’ve programmed. It would not suffer from weakness of the will, because it would be programmed to always act according to moral categories. 102

In evolving from rhetorical calculation, the AMA would only need to make meaning through computation. As rhetoric operates in situations that could be otherwise, the certainty and incontrovertibility of computation would be fully realized in the AMA. As Allen confesses, ‘All of us know people (and some of us perhaps even are people) who would prefer to rely on the superior computational power of artificial moral agents to help us see the logical and empirical consequences of our actions rather than relying on more intuitive, less systematic approaches to moral decision making.’ 103 But herein lies the problem of AMAs when imagined as freed from the ambiguities of morality and rhetoric. Conflict, dilemma, and contestation cease to exist because good is the artificially natural output of any computation. Rhetoric then is no longer necessary but possibly indulged in by a few enamored with the pastime. For computation as the moral source of all things fulfills the promise that Christianity, Kant, and utilitarianism could not: an Eden where we all live together in harmony, secure in the knowledge that any moral values that we hold are always the True values. So utopian is this dream that I need hardly mention it is highly unlikely to ever come to fruition, even if we were to wish it so, which is (at least for now) still debatable.

The Potential of ARAs

In a 1967 letter to the editor of Electronics & Power, H.R. Bristow wrote of ‘the necessity of putting economics into the language of the computer—mathematics.’ He continued, ‘But more than this, I would go further and say we should do the same with all the social sciences. Most important of all is morality—the science of improving the happiness of the people. It should not be too difficult to derive a symbolic notation to give a quantitative relationship to the main moral concepts, such as “pleasures and ecstasies less miseries and agonies integrated over a lifetime”, and the aim of “improving the lifetime happiness of an increasing proportion of the people”.’ 104 If we understand that philosophy-as-calculating and numeracy-as-computing are part of the same thread of reasoning that weaves itself throughout Western philosophy, artificial beings composed of 1s and 0s pose a continuation of, rather than a revolutionary change to, questions of morality. In fact, I imagine that the ARA fosters the possibility that conversations regarding such questions as ‘What is goodness?’ and ‘What does it mean to live a good life?’ might be reinvigorated. Although I am mindful of the black hole of which Wallach and Allen warn in their discussion of moral computation, the possibility for previously unimagined ways of addressing the world’s moral dilemmas, both big and small, offers new ways of mapping how morality actually works and how it might work to better ends. Similarly, analyses of how choices in the midst of such dilemmas are rhetorically justified would provide incredible details about how persuasion succeeds and fails and to what effects.
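
Bristow’s ‘quantitative relationship’ practically writes itself as a formula. One hedged rendering, with p(t) standing for pleasures and ecstasies, m(t) for miseries and agonies, and T for the length of a lifetime, would be

$$H = \int_{0}^{T} \bigl( p(t) - m(t) \bigr)\, dt,$$

with the stated aim then amounting to increasing the proportion of people for whom H rises.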

The worst-case scenario for computational phronesis and computational rhetoric is encapsulated in the tool Phronesis currently in use with the Large Hadron Collider beauty (LHCb), one of seven particle physics detector experiments collecting data at CERN, the European Organization for Nuclear Research. The LHCb utilizes a heterogeneous computing cluster of 2,000 servers. To provide physicists with ‘steadily improving diagnosis and recovery solutions in case of misbehavior of a service, without having to modify the original applications,’ 105 Phronesis was created to help the installation and administration team maximize availability: ‘Despite the fact that there were only a small number of unexpected and unprovoked situations, Phronesis could make several correct diagnoses, and offered appropriate solutions,’ including solutions to human errors, such as a user’s failure to define elements manually. For many, Phronesis undoubtedly reaffirms the on-going displacement of a deeper notion of good by the instrumental values of science and technology, what Jacques Ellul decried as the new ‘supreme objects.’ 106 From this perspective, which is shared by others such as Langdon Winner, Albert Borgmann, and Martin Heidegger, technology colonizes to the point of extinction the very soul of what it means to be human. To borrow from Slavoj Žižek’s work on plague fantasies, ‘This is the ultimate horror: not the proverbial ghost in the machine, but the machine in the ghost: there is no plotting agent behind it, the machine just runs by itself, as a blind contingent device.’ 107 Even worse, in the context of AMAs and ARAs, there is a plotting agent, but that too is the machine. In short, there is a ghost in the machine, and in that ghost is also a machine. But like the utopian computational Eden, this dystopian nightmare seems equally unlikely.

While Norbert Wiener’s argument that ‘the great weakness of the machine—the weakness that saves us so far from being dominated by it—is that it cannot yet take into account the vast range of probability that characterizes the human situation’ 108 is still true, we need not consider that to be our saving grace, as Hubert Dreyfus did in 1972 in his explanation of a fundamental difference between human and computer:

Whatever it is that enables human beings to zero in on the relevant facts without definitely excluding others which might become relevant is so hard to describe that it has only recently become a clearly focused problem for philosophers. It has to do with the way man is at home in his world, has it comfortably wrapped around him so to speak. Human beings are somehow already situated in such a way that what they need in order to cope with things is distributed around them where they need it, not packed away like a trunk full of objects, or even carefully indexed in a filing cabinet. This system of relations which make it possible to discover objects when they are needed is our home or our world . . . The human world, then, is prestructured in terms of human purposes and concerns in such a way that what counts as an object or is significant about an object already is a function of, or embodies, that concern. This cannot be matched by a computer, which can deal only with universally defined, i.e., context-free, objects. In trying to simulate this field of concerns, the programmer can only assign to the already determinate facts further determinate facts called values, which only complicates the retrieval problem for the machine. 109

From this perspective, the process by which phronetic and rhetorical activity occurs is an effect of being in and of a world constituted by human beings and thus not reducible to algorithmic if/then’s. As a result, the AMA and the ARA simply cannot exist. But there is room for doubt in this regard, especially since the a-rhetorical machine presented in Dreyfus’ argument might simply result from limitations in artificial processing power rather than from uniquely human abilities. Undoubtedly the power to clarify how it is that human beings come to make moral decisions and rhetorically explain those decisions would be a coup for computation, in essence, breaking the obfuscated code of what makes humans human. But just as importantly, ARAs offer the possibility of computing and explaining moral matters in such a way that we might reimagine the role of morality in a digitized world. With ARAs, we might even enjoy the benefits that come from a deep understanding of what constitutes good but without the moralizing condemnation that too often appears as a negative side effect of moral orthodoxy. So whether ultimately limited by processing power or by the fatal flaw of computing’s black hole, we should see ARAs, should they come to pass, as offering an exciting development in the moral landscape.

License
CC BY-NC-ND 3.0

Acknowledgments
This article benefited enormously from the productive critiques and insights offered by Annette Vee and Jim Brown, as well as the anonymous reviewers.

Bibliography

Allen, Colin, Iva Smit, and Wendell Wallach. ‘Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches.’ Ethics and Information Technology 7.3 (2005): 149-55.

Andrist, Sean, Erin Spannan, and Bilge Mutlu. ‘Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise.’ Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction. 2013.

Aristotle. ‘Nicomachean Ethics.’ In The Complete Works of Aristotle, Volume 2, edited by Jonathan Barnes, 1729-867. Princeton: Princeton University Press, 1984.

Aristotle. ‘Rhetoric.’ In The Complete Works of Aristotle, Volume 2, edited by Jonathan Barnes, 2152-269. Princeton: Princeton University Press, 1984.

Atkinson, Katie, and Trevor Bench-Capon. ‘Practical Reasoning as Presumptive Argumentation Using Action Based Alternating Transition Systems.’ Artificial Intelligence 171 (2007): 855-74.

Bench-Capon, T.J.M. and Paul E. Dunne. ‘Argumentation in Artificial Intelligence.’ Artificial Intelligence 171 (2007): 619-41.

Besnard, Philippe, Anthony Hunter, and Stefan Woltran. ‘Encoding Deductive Argumentation in Quantified Boolean Formulae.’ Artificial Intelligence 173 (2009): 1406-423.

Borgmann, Albert. ‘The Moral Assessment of Technology.’ In Democracy in a Technological Society, edited by Langdon Winner, 207-13. Norwell: Kluwer Academic Publishers, 1992.

Borgmann, Albert. ‘The Moral Significance of the Material Culture.’ In Technology & the Politics of Knowledge edited by Andrew Feenberg and Alastair Hannay, 85-93. Bloomington: Indiana University Press, 1995.

Bostock, David. Aristotle’s Ethics. Oxford: Oxford University Press, 2000.

Bratman, Michael E. Intention, Plans, and Practical Reason. Cambridge: Harvard University Press, 1987.

Bratman, Michael E., David J. Israel, and Martha E. Pollack. ‘Plans and Resource-Bounded Practical Reasoning.’ Computational Intelligence 4.4 (1988): 349-355.

Bringsjord, Selmer, Konstantine Arkoudas, and Paul Bello. ‘Toward a General Logicist Methodology for Engineering Ethically Correct Robots.’ IEEE Intelligent Systems 21.4 (2006): 38-44.

Bristow, H.R. ‘Comment on Stone’s Article.’ Electronics & Power. 13.6 (1967): 231.

Allen, Colin, Gary Varner, and Jason Zinser. ‘Prolegomena to Any Future Artificial Moral Agent.’ Journal of Experimental & Theoretical Artificial Intelligence 12 (2000): 252-61.

Dennis, Louise A., Michael Fisher, Nicholas K. Lincoln, Alexei Lisitsa, and Sandor Veres. ‘Practical Decision-Making in Agent-Based Autonomous Systems.’ Automated Software Engineering 14 (2014).

Descartes, René. A Discourse on the Method. Translated by Ian Maclean. Oxford: Oxford University Press, 2006.

Descartes, René. ‘Principles of Philosophy.’ In The Philosophical Writings of Descartes, Volume 1. Translated by John Cottingham, Robert Stoothoff, and Dugald Murdoch, 177-292. Cambridge: Cambridge University Press, 1985.

Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge: MIT Press, 1992.

Dung, Phan Minh. ‘On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games.’ Artificial Intelligence 77 (1995): 321-57.

Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.

Erdogmus, Hakan. ‘What’s Good Software Anyway?’ IEEE Software 24.2 (2007): 5-7.

Floridi, Luciano and J. W. Sanders. ‘On the Morality of Artificial Agents.’ Minds & Machines 14.3 (2004): 349-79.

Future of Life Institute. ‘Research Priorities for Robust and Beneficial Artificial Intelligence.’ January 23, 2015. http://futureoflife.org/static/data/documents/research_priorities.pdf

Future of Life Institute. ‘Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter.’ January 2015. http://futureoflife.org/misc/open_letter

Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge: MIT Press, 2004.

Garver, Eugene. Aristotle’s Rhetoric: An Art of Character. Chicago: University of Chicago Press, 1994.

Gips, James. ‘Towards the Ethical Robot.’ In Android Epistemology. Cambridge: MIT Press, 1995.

Girle, Rod, David Hitchcock, Peter McBurney, and Bart Verheij. ‘Decision Support for Practical Reasoning: A Theoretical and Computational Perspective.’ In Argumentation Machines: New Frontiers in Argument and Computation, edited by Chris Reed and Timothy Norman. Dordrecht: Springer, 2004.

Golumbia, David. The Cultural Logic of Computation. Cambridge: Harvard University Press, 2009.

Goodman, Joshua T. ‘A Bit of Progress in Language Modeling.’ Computer Speech & Language 15.4 (2001): 403-434.

Google. ‘Google Self-Driving Car Project: Monthly Report.’ Accessed on July 2, 2015, http://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0515.pdf.

Grasso, Floriana. ‘Towards Computational Rhetoric.’ Informal Logic 22.3 (2002): 195-229.

Grau, Christopher. ‘There Is No “I” in “Robot”: Robots and Utilitarianism.’ IEEE Intelligent Systems 21.4 (2006): 52-55.

Guarini, Marcello. ‘Particularism and the Classification and Reclassification of Moral Cases.’ IEEE Intelligent Systems 21.4 (2006): 22-28.

Haen, C., V. Barra, E. Bonaccorsi, and N. Neufeld. ‘Phronesis, a Diagnosis and Recovery Tool for System Administrators.’ International Conference on Computing in High Energy and Nuclear Physics 513.6 (2014): 1-5.

Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press, 2005.

Kant, Immanuel. Critique of the Power of Judgment. Edited by Paul Guyer. Cambridge: Cambridge University Press, 2000.

Kant, Immanuel. Groundwork of the Metaphysics of Morals. Edited by Mary Gregor. New York: Cambridge University Press, 1997.

Kant, Immanuel. The Metaphysics of Morals. Cambridge: Cambridge University Press, 1996.

Kim, Eun Jung, Sebastian Ordyniak, and Stefan Szeider. ‘Algorithms and Complexity Results for Persuasive Argumentation.’ Artificial Intelligence 175 (2011): 1722-736.

Kruijff, Geert-Jan M., et al. ‘Situated Dialogue Processing for Human-Robot Interaction.’ In Cognitive Systems, edited by Geert-Jan M. Kruijff et al., 311-64. Berlin: Springer-Verlag, 2010.

Lederman, Nancy. ‘Letter to the Editor.’ The New York Times. September 2, 2015, accessed on September 4, 2015, http://www.nytimes.com/2015/09/05/opinion/driving-as-a-human-dance.html?_r=0.

Leibniz, Gottfried W. Philosophical Essays. Translated by Roger Ariew and Daniel Garber. Indianapolis: Hackett Publishing, 1989.

Looije, Rosemarijn, Mark A. Neerincx, and Fokie Cnossen. ‘Persuasive Robotic Assistant for Health Self-Management of Older Adults: Design and Evaluation of Social Behaviors.’ International Journal of Human-Computer Studies 68.6 (2010): 386-97.

MacIntyre, Alasdair. After Virtue. Notre Dame, IN: University of Notre Dame Press, 2007.

McLuhan, Marshall. ‘The Future of Morality: The Inner Versus the Outer Quest.’ In The New Morality: Continuity and Discontinuity, edited by William Dunphy, 175-89. New York: Herder and Herder, 1967.

McMullin, Barry. ‘30 Years of Computational Autopoiesis: A Review.’ Artificial Life 10.3 (2004): 277-95.

Mill, John Stuart. ‘Utilitarianism.’ In Utilitarianism and Other Essays, edited by Alan Ryan, 272-338. New York: Penguin Books, 1987.

Moll, Jorge, et al. ‘The Moral Affiliations of Disgust: A Functional MRI Study.’ Cognitive and Behavioral Neurology 18.1 (2005): 68-78.

Moll, Jorge, Paul J. Eslinger, and Ricardo de Oliveira-Souza. ‘Frontopolar and Anterior Temporal Cortex Activation in a Moral Judgment Task.’ Arquivos de Neuro-Psiquiatria 58.3 (2001), accessed on March 2, 2014, http://dx.doi.org/10.1590/S0004-282X2001000500001

Moor, James H. ‘Is Ethics Computable?’ Metaphilosophy 26 (1995): 1-21.

Moor, James H. ‘Four Kinds of Ethical Robots.’ Philosophy Now 72 (2009): 12-14.

Mumford, Lewis. ‘Authoritarian and Democratic Technics.’ Technology and Culture 5.1 (1964): 1-8.

Nussbaum, Martha C. The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. New York: Cambridge University Press, 2001.

Nussbaum, Martha C. ‘Mill between Aristotle & Bentham.’ Daedalus 133.2 (2004): 60-68.

Pew Research Center. Global Views on Morality. http://www.pewglobal.org/2014/04/15/global-morality/

Pew Research Center. Technology Triumphs, Morality Falters. July 3, 1999. http://www.people-press.org/1999/07/03/technology-triumphs-morality-falters/

Plato. Gorgias. In Plato: Complete Works, edited by John M. Cooper, 791-879. Indianapolis: Hackett Publishing, 1997.

Powers, Thomas M. ‘Prospects for a Kantian Machine.’ IEEE Intelligent Systems 21.4 (2006): 46-51.

Rahwan, Iyad and Guillermo R. Simari (eds). Argumentation in Artificial Intelligence. New York: Springer, 2009.

Rao, Anand S. and Michael P. Georgeff. ‘BDI Agents: From Theory to Practice.’ Technical Note 56, 1995. http://www.agent.ai/doc/upload/200302/rao95.pdf

Rao, Anand S., and Michael P. Georgeff. ‘Modeling Agents within a BDI-Architecture.’ In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR&R), 473-84. San Mateo: Morgan Kaufmann, 1991.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Reed, Christopher, and Douglas Walton. ‘Towards a Formal and Implemented Model of Argumentation Schemes in Agent Communication.’ In Argumentation in Multi-Agent Systems, edited by Iyad Rahwan, P. Moraitis, and Christopher Reed, 19-30. Dordrecht: Springer.

Reid, Alexander. ‘Teaching Robots Right from Wrong.’ TuftsNow. May 9, 2014. http://now.tufts.edu/news-releases/teaching-robots-right-wrong

Rorty, Richard. ‘Kant vs. Dewey: The Current Situation of Moral Philosophy.’ In Philosophy as Cultural Politics. Cambridge: Cambridge University Press, 2007.

Saptawijaya, Ari and Luís Moniz Pereira. ‘Towards Modeling Morality Computationally with Logic Programming.’ In Practical Aspects of Declarative Languages, edited by Matthew Flatt and Hai-Feng Guo, 104-19. Berlin: Springer, 2014.

Searle, John. ‘Minds, Brains, and Programs.’ Behavioral and Brain Sciences 3 (1980): 417-24.

Stout, Jeffrey. Ethics after Babel: The Languages of Morals and Their Discontents. Boston: Beacon Press, 1998.

Taylor, Charles. A Secular Age. Cambridge: Belknap Press, 2007.

Tonkens, Ryan. ‘A Challenge for Machine Ethics.’ Minds & Machines 19 (2009): 421-38.

Tucker, Patrick. ‘Now the Military Is Going to Build Robots That Have Morals.’ Defense One. May 13, 2014. http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/

Turing, Alan M. ‘Computing Machinery and Intelligence.’ Mind 59 (1950): 433-60.

Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009.

Walton, Douglas, and David M. Godden. ‘The Impact of Argumentation on Artificial Intelligence.’ In Considering Pragma-Dialectics, edited by Peter Houtlosser and Agnes van Rees, 287-299. Mahwah: Lawrence Erlbaum, 2006.

Wiener, Norbert. The Human Use of Human Beings. Boston: Da Capo Press, 1954.

Winograd, Terry and Fernando Flores. Understanding Computers and Cognition. Norwood: Ablex, 1986.

Winner, Langdon. The Whale and the Reactor. Chicago: University of Chicago Press, 1988.

Yang, Juan, Lili Guan, and Mingming Qi. ‘Gender Differences in Neural Mechanisms Underlying Moral Judgment of Disgust: A Functional MRI Study.’ Journal of Behavior and Brain Science 4 (2014): 214-22.

Zizek, Slavoj. The Plague of Fantasies. New York: Verso, 1997.

 

Notes

  1. René Descartes, ‘Principles of Philosophy,’ in The Philosophical Writings of Descartes, Volume 1, trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch (Cambridge: Cambridge University Press, 1985), §14.
  2. René Descartes, A Discourse on the Method, trans. Ian Maclean (Oxford: Oxford University Press, 2006), § 22.
  3. Ibid., § 23-28.
  4. Ibid., § 21.
  5. Ibid., § 16.
  6. Ibid., § 7.
  7. Ibid., § 23.
  8. Ibid., § 24.
  9. Luciano Floridi and J. W. Sanders, ‘On the Morality of Artificial Agents,’ Minds & Machines 14.3 (2004); James H. Moor, ‘Four Kinds of Ethical Robots,’ Philosophy Now 72 (2009); Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press, 2009).
  10. Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge: MIT Press, 1992); Graham Gordon Ramsay, ‘Noam Chomsky on Where Artificial Intelligence Went Wrong,’ The Atlantic, November 1, 2012, accessed on December 3, 2013, http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/?single_page=true.
  11. Descartes, A Discourse on the Method, 47.
  12. Sean Andrist, Erin Spannan, and Bilge Mutlu, ‘Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise,’ Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (2013): 341-48; Joshua T. Goodman, ‘A Bit of Progress in Language Modeling,’ Computer Speech & Language 15.4 (2001): 403-434; Rosemarijn Looije, Mark A. Neerincx, and Fokie Cnossen, ‘Persuasive Robotic Assistant for Health Self-Management of Older Adults: Design and Evaluation of Social Behaviors,’ International Journal of Human-Computer Studies 68.6 (2010): 386-97.
  13. Aristotle, Rhetoric, in The Complete Works of Aristotle, Volume 2, ed. Jonathan Barnes, 2152-269 (Princeton: Princeton University Press, 1984), 1357a 2-3.
  14. Aristotle, Nicomachean Ethics, in The Complete Works of Aristotle, Volume 2, ed. Jonathan Barnes, 1729-867 (Princeton: Princeton University Press, 1984), 1140a 28-1140b 8.
  15. For Alan Turing, in contrast, it was only a matter of time until a universal computing machine could perform, for all intents and purposes, as human, even in the realm of language. Programmed chatbots have made what Turing would call ‘good showings’ in proving themselves intelligent to varying degrees by conversing with a human interrogator via text-based, synchronous conversation. However, while the most ‘human-like’ winner of the annual Turing Test competition, known as the Loebner Prize, the scripted chatbot Elbot, fooled three of twelve judges in 2008 into thinking it was human, transcripts of these competitions reveal that chatbots are more likely to imitate conversation according to a simplistic rulebook reading of contextual cues than to engage in the sophisticated rhetorical deliberation an AMA would need in order to explain its moral judgments.
  16. Future of Life Institute, ‘Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter,’ January 2015, accessed March 20, 2015, http://futureoflife.org/misc/open_letter.
  17. Ibid.
  18. Future of Life Institute, ‘Research Priorities for Robust and Beneficial Artificial Intelligence,’ January 23, 2015, accessed March 20, 2015, http://futureoflife.org/static/data/documents/research_priorities.pdf.
  19. Martha C. Nussbaum, The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy (New York: Cambridge University Press, 2001); Richard Rorty, ‘Kant vs. Dewey: The Current Situation of Moral Philosophy,’ in Philosophy as Cultural Politics (Cambridge: Cambridge University Press, 2007); John Rawls, Political Liberalism (New York: Columbia University Press, 1993).
  20. Immanuel Kant, Groundwork of the Metaphysics of Morals, ed. Mary Gregor (New York: Cambridge University Press, 1997), 4:402.
  21. Immanuel Kant, The Metaphysics of Morals (Cambridge: Cambridge University Press, 1996), 5:31.
  22. Ibid., 6:97.
  23. This utilitarian conception of calculation should not be confused with the kind of calculation invoked by Aristotle in his discussion of phronesis. As David Bostock explains, ‘The utilitarian recommends the maximization of happiness, and takes the measure of happiness to be the total sum of pleasures less pains. It has long been recognized that we do not in practice know how to calculate this sum, for it requires us to rank all pleasures on an additive scale, and no such scale seems to be available’ (Aristotle’s Ethics (Oxford: Oxford University Press, 2000), 148). This utilitarian conception of calculation is not the aim of Aristotelian phronesis. For more on the difference between Aristotle and utilitarians, see Martha C. Nussbaum, ‘Mill between Aristotle & Bentham,’ Daedalus 133.2 (2004): 60-68.
  24. John Stuart Mill, ‘Utilitarianism,’ in Utilitarianism and Other Essays, ed. Alan Ryan, (New York: Penguin Books, 1987), 81.
  25. Charles Taylor, A Secular Age (Cambridge: Belknap Press, 2007), 26.
  26. James H. Moor, ‘Is Ethics Computable?’ Metaphilosophy 26 (1995): 1.
  27. Ibid., 2.
  28. In Nicomachean Ethics, Aristotle describes his virtue ethics in this way: ‘moral excellence is a mean . . . between two vices’ (II 1109a 24-25). With wealth, for instance, liberality is the mean that falls between the pleasure and pain of miserliness and self-indulgence (NE IV 1119b 22-1120a 23). So to act liberally and thus excellently according to this mean translates as ‘giving for the sake of the noble, and rightly; for he will give to the right people, the right amounts, and at the right time.’
  29. Moor, ‘Is Ethics Computable?’ 2.
  30. Ibid., 5.
  31. Ibid., 12.
  32. Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello, ‘Toward a General Logicist Methodology for Engineering Ethically Correct Robots,’ IEEE Intelligent Systems 21.4 (2006); Christopher Grau, ‘There Is No “I” in “Robot”: Robots and Utilitarianism,’ IEEE Intelligent Systems 21.4 (2006); Thomas M. Powers, ‘Prospects for a Kantian Machine,’ IEEE Intelligent Systems 21.4 (2006); Ryan Tonkens, ‘A Challenge for Machine Ethics,’ Minds & Machines 19 (2009).
  33. Wallach and Allen, Moral Machines, 89.
  34. Ibid., 88.
  35. Ibid., 89.
  36. Ibid.
  37. Ibid., 96.
  38. James Gips, ‘Towards the Ethical Robot,’ in Android Epistemology (Cambridge: MIT Press, 1995).
  39. Marcello Guarini, ‘Particularism and the Classification and Reclassification of Moral Cases,’ IEEE Intelligent Systems 21.4 (2006).
  40. Jorge Moll et al., ‘The Moral Affiliations of Disgust: A Functional MRI Study,’ Cognitive and Behavioral Neurology 18.1 (2005); Jorge Moll, Paul J. Eslinger, and Ricardo de Oliveira-Souza, ‘Frontopolar and Anterior Temporal Cortex Activation in a Moral Judgment Task,’ Arquivos de Neuro-Psiquiatria 58.3 (2001); Juan Yang, Lili Guan, and Mingming Qi, ‘Gender Differences in Neural Mechanisms Underlying Moral Judgment of Disgust: A Functional MRI Study,’ Journal of Behavior and Brain Science 4 (2014).
  41. Google, ‘Google Self-Driving Car Project Monthly Report,’ May 2015, accessed July 2, 2015, http://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0515.pdf
  42. Nancy Lederman, ‘Letter to the Editor,’ The New York Times, September 2, 2015, accessed on September 4, 2015, http://www.nytimes.com/2015/09/05/opinion/driving-as-a-human-dance.html?_r=0.
  43. European Commission, ‘Robot Caregivers Help the Elderly,’ Digital Agenda for Europe, May 5, 2014, accessed April 1, 2015, http://ec.europa.eu/digital-agenda/en/news/robot-caregivers-help-elderly.
  44. The appeal of something like Asimov’s Three Laws of Robotics is clear in such situations, as articulations of what must not be done and what must be done prevent potential negative consequences and enhance the potential for positive consequences: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’
  45. Nussbaum, Fragility of Goodness, xxiv.
  46. Alasdair MacIntyre, After Virtue (Notre Dame: University of Notre Dame Press, 2007); Jeffrey Stout, Ethics after Babel: The Languages of Morals and Their Discontents (Boston: Beacon Press, 1998); Taylor, A Secular Age.
  47. Pew Research Center, Technology Triumphs, Morality Falters, July 3, 1999, accessed on February 23, 2004, http://www.people-press.org/1999/07/03/technology-triumphs-morality-falters/.
  48. MacIntyre, After Virtue, 25.
  49. Ibid., 2, 228.
  50. Ibid., 2.
  51. Marshall McLuhan, ‘The Future of Morality: The Inner Versus the Outer Quest,’ in The New Morality: Continuity and Discontinuity, ed. William Dunphy (New York: Herder and Herder, 1967), 176-77.
  52. Ibid., 188-89.
  53. Jeffrey Stout, Ethics after Babel: The Languages of Morals and Their Discontents (Boston: Beacon Press, 1998), 97.
  54. Michael E. Bratman, Intention, Plans, and Practical Reason (Cambridge: Harvard University Press, 1987).
  55. Anand S. Rao and Michael P. Georgeff, ‘BDI Agents: From Theory to Practice,’ Technical Note 56 (1995), accessed May 10, 2014, http://www.agent.ai/doc/upload/200302/rao95.pdf; Anand S. Rao and Michael P. Georgeff, ‘Modeling Agents within a BDI-Architecture,’ Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (1991).
  56. Michael E. Bratman, David J. Israel, and Martha E. Pollack, ‘Plans and Resource-Bounded Practical Reasoning,’ Computational Intelligence 4.4 (1988), 4.
  57. Ibid., 3.
  58. Louise A. Dennis, Michael Fisher, Nicholas K. Lincoln, Alexei Lisitsa, and Sandor Veres, ‘Practical Decision-Making in Agent-Based Autonomous Systems,’ Automated Software Engineering 14 (2014), 2.2.
  59. Ibid., 3.2.
  60. Ibid., 2.2.
  61. Aristotle, Rhetoric, 1356a 25.
  62. Plato, Gorgias, in Plato: Complete Works, ed. John M. Cooper (Indianapolis: Hackett Publishing, 1997).
  63. Immanuel Kant, Critique of the Power of Judgment, ed. Paul Guyer, (Cambridge: Cambridge University Press, 2000), §53 5:327-28.
  64. Aristotle, Nicomachean Ethics, 1139a 32-35.
  65. Ibid., 1140a 22.
  66. Ibid., 1094a 7-8.
  67. Ibid., 1096b 14.
  68. Aristotle, Rhetoric, 1358b 22-24.
  69. Ibid., 1354a15.
  70. Ibid., 1356a.
  71. In his analysis of Aristotle’s conception of enthymeme, Eugene Garver explains, ‘Character may be the most persuasive kind of appeal, but the artful and methodical approach to character must be via enthymeme . . . Rhetorical audiences should judge the argument qua argument, and that means that they should judge the speaker’s ethos as it is embodied in the enthymeme.’ Eugene Garver, Aristotle’s Rhetoric: An Art of Character (Chicago: University of Chicago Press, 1994), 194, 196.
  72. Patrick Tucker, ‘Now the Military Is Going to Build Robots That Have Morals,’ Defense One, May 13, 2014, accessed March 15, 2015, http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/.
  73. As quoted in Alexander Reid, ‘Teaching Robots Right from Wrong,’ TuftsNow, May 9, 2014, accessed on March 15, 2015, http://now.tufts.edu/news-releases/teaching-robots-right-wrong.
  74. Colin Allen, Iva Smit, and Wendell Wallach, ‘Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches,’ Ethics and Information Technology 7.3 (2005), 254.
  75. Ibid.
  76. Marcello Guarini, ‘Particularism and the Classification and Reclassification of Moral Cases,’ IEEE Intelligent Systems 21.4 (2006), 28.
  77. Wallach and Allen, Moral Machines, 216.
  78. Ibid., 122.
  79. Descartes, A Discourse on the Method, 46-47.
  80. Gottfried W. Leibniz, Philosophical Essays, trans. Roger Ariew and Daniel Garber (Indianapolis: Hackett Publishing, 1989), 25.
  81. Ibid., 25.
  82. Ibid., 27.
  83. Ibid.
  84. Aristotle, Rhetoric, 1357a 17.
  85. Ibid., 1354a15.
  86. Leibniz, Philosophical Essays, 27.
  87. John Searle, ‘Minds, Brains, and Programs,’ Behavioral and Brain Sciences 3 (1980): 417.
  88. Ibid., 423.
  89. Ibid., 424.
  90. Philippe Besnard, Anthony Hunter, and Stefan Woltran, ‘Encoding Deductive Argumentation in Quantified Boolean Formulae,’ Artificial Intelligence 173 (2009); Phan Minh Dung, ‘On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games,’ Artificial Intelligence 77 (1995); Iyad Rahwan and Guillermo R. Simari, eds., Argumentation in Artificial Intelligence (New York: Springer, 2009); Eun Jung Kim, Sebastian Ordyniak, and Stefan Szeider, ‘Algorithms and Complexity Results for Persuasive Argumentation,’ Artificial Intelligence 175 (2011); Douglas Walton and David M. Godden, ‘The Impact of Argumentation on Artificial Intelligence,’ in Considering Pragma-Dialectics, ed. Peter Houtlosser and Agnes van Rees (Mahwah: Lawrence Erlbaum, 2006).
  91. Geert-Jan M. Kruijff et al., ‘Situated Dialogue Processing for Human-Robot Interaction,’ in Cognitive Systems, ed. Geert-Jan M. Kruijff et al. (Berlin: Springer-Verlag, 2010), 315.
  92. Albert Borgmann, ‘The Moral Assessment of Technology,’ in Democracy in a Technological Society, ed. Langdon Winner (Norwell: Kluwer Academic Publishers, 1992); Albert Borgmann, ‘The Moral Significance of Material Culture,’ in Technology & the Politics of Knowledge, ed. Andrew Feenberg and Alastair Hannay, (Bloomington: Indiana University Press, 1995); Jacques Ellul, The Technological Society (New York: Vintage Books, 1964); Lewis Mumford, ‘Authoritarian and Democratic Technics,’ Technology and Culture 5.1 (1964); Langdon Winner, The Whale and the Reactor (Chicago: University of Chicago Press, 1988).
  93. T. J. M. Bench-Capon and Paul E. Dunne, ‘Argumentation in Artificial Intelligence,’ Artificial Intelligence 171 (2007), 620.
  94. Ibid.
  95. Ibid., 630-31.
  96. Katie Atkinson and Trevor Bench-Capon, ‘Practical Reasoning as Presumptive Argumentation Using Action Based Alternating Transition Systems,’ Artificial Intelligence 171 (2007); Rod Girle, David Hitchcock, Peter McBurney, and Bart Verheij, ‘Decision Support for Practical Reasoning: A Theoretical and Computational Perspective,’ in Argumentation Machines: New Frontiers in Argument and Computation, eds. Chris Reed and Timothy Norman (Dordrecht: Springer, 2004).
  97. Atkinson and Bench-Capon, ‘Practical Reasoning,’ 873.
  98. Ibid., 863.
  99. Ibid., 869.
  100. Ibid., 870.
  101. Floriana Grasso, ‘Towards Computational Rhetoric,’ Informal Logic 22.3 (2002), 196.
  102. Thomas M. Powers, ‘Prospects for a Kantian Machine,’ IEEE Intelligent Systems 21.4 (2006), 16.
  103. Colin Allen, ‘Calculated Morality: Ethical Computing in the Limit,’ in Cognitive, Emotive and Ethical Aspects of Decision Making and Human Action, vol. 1, eds. Iva Smit and George E. Lasker (Baden Baden: IIAS, 2002), 23.
  104. H.R. Bristow, ‘Comment on Stone’s Article,’ Electronics & Power 13.6 (1967), 231.
  105. C. Haen, V. Barra, E. Bonaccorsi, and N. Neufeld, ‘Phronesis, a Diagnosis and Recovery Tool for System Administrators,’ International Conference on Computing in High Energy and Nuclear Physics 513.6 (2014), 1.
  106. Ellul, The Technological Society, 148.
  107. Slavoj Zizek, The Plague of Fantasies (New York: Verso, 1997), 40.
  108. Norbert Wiener, The Human Use of Human Beings (Boston: Da Capo Press, 1954), 181.
  109. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason, 261.