How to Survive a Robot Uprising

By DAVID J. GUNKEL

Review of We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, by Simon Chesterman

Cambridge: Cambridge University Press, 2021


 

Near the end of the final chapter of We, the Robots?, Simon Chesterman offers a short, almost off-hand remark about lawyers: “As a profession, lawyers are also notoriously conservative” (229). The statement might have the appearance of being little more than a sidebar reflection on the state of the discipline. But it is actually the key to understanding the entire trajectory of the book’s argument. 

“Conservative” in this context is not meant in a political sense. The word needs to be read and understood in a strict and almost literal manner, that is, “conservative” as being interested in conserving the existing structures and ways of doing things. We, the Robots? is conservative precisely in this fashion. What Chesterman argues and seeks to accomplish is stated quite directly and plainly: “existing state institutions and norms are capable of regulating most applications of AI” (217). In other words, all the big opportunities or challenges of AI, automation, algorithms, robots, big data, digital platforms, etc. can be accommodated within existing laws, legal institutions, and regulatory structures. Though this might sound reassuring—a kind of “Don’t worry, everything is going to turn out just fine”—the devil is in the details. And, as Chesterman skillfully demonstrates, all of this is going to take work. “Conservative” does not mean complacent. 

The book is organized into three parts: Challenges, Tools, and Possibilities, with each part consisting of three chapters. The first part investigates the main challenges that AI presents to law and regulation, namely speed, autonomy, and opacity. The first chapter in this part critically investigates “the regulatory challenges posed by speed” (17), that is, the seemingly incontrovertible fact that technology advances at light speed, while law and policy move at the pace of pen and paper. The rapid pace of technological development does pose significant difficulties for lawmakers and regulators across the globe, and the first chapter provides a clear assessment of the challenges of developing laws and regulations for an industry that is often celebrated for its exuberant “move fast and break things” ethos. 

Chapter two performs a similar kind of analysis for autonomy, “exposing gaps in regulatory regimes that assume the centrality of human actors” (7). In pursuing this, the chapter targets what have become the proverbial examples and test cases: self-driving automobiles, lethal autonomous weapon systems (LAWS), and algorithmic decision making. In the case of each one of these autonomous (or at least semi-autonomous) technologies, Chesterman carefully exhibits and examines the regulatory difficulties and dilemmas—e.g., trolley problems and the way autonomous vehicles challenge both criminal and civil liability schemes; killer robots and the potential moral difficulties with outsourcing life and death decisions to a machine; and the crisis of legitimacy and responsibility with algorithmic decision making applied to already vulnerable human individuals and communities. 

Chapter three, which concludes the Challenges part of the book, investigates AI opacity or what is also called the “black box problem.” The challenge, as Chesterman correctly points out, derives from two different sources: IP protections covering proprietary commercial algorithms and the complexity of system architectures, especially with deep learning and other kinds of artificial neural networks that generate their operating instructions by exploiting statistical patterns discoverable in large data sets. 

In his analysis of this challenge, Chesterman focuses on three problems regarding algorithmic decision-making and its impact on human individuals and communities. First, it can permit bad or inferior decisions. Second, it can mask impermissible decisions like bias or discrimination. And finally, it can undermine legitimacy, especially in the area of public policy and democratic governance. In all three cases, the problem is that the lack of transparency can serve to obscure knowledge about the decision making process and may even be deliberately used to provide cover for malfeasance. 

After getting a handle on these three challenges (which Chesterman is careful to acknowledge is not intended to be an exhaustive account but merely a representative list of the most common and urgent issues confronting lawmakers and regulators), the second part investigates whether and to what extent existing legal frameworks and instruments are able to accommodate and respond to these difficulties. Here again, there are three chapters. Chapter four examines how existing laws can and should be applied to emerging technologies in order to resolve questions concerning the attribution of responsibility. The focus is on two areas. First, risk management and the opportunities and challenges of various methods for resolving difficulties with responsibility gaps, including product liability, strict liability, and insurance. And second, the exigencies of organizational decision making and the practical and theoretical limits of delegating or otherwise distributing responsibility to others. 

Chapter five grapples with the question of legal personality for AI—a question that has attracted a lot of attention, especially in the wake of the highly publicized proposal from the European Parliament suggesting that robots be recognized as “electronic persons.” As Chesterman notes, the most urgent question here is “whether some form of juridical personality would fill a responsibility gap or otherwise be advantageous to the legal system” (116). In this case, the investigation recognizes an important distinction between “can” and “should.” On the one hand, it seems entirely possible for most legal systems to be able to extend to AI some form of legal personhood, following the precedent of previous decisions regarding the legal status of corporations, organizations, and ships. So yes, AI can be persons. 

But, and on the other hand, “the more interesting questions are whether they should and what content that personhood would have” (116). For his part, Chesterman is not convinced by the arguments that have been developed in favor of extending legal personality to AI, because they are ultimately both too simple and too complex. Too simple, because they are reductionist, lumping a wide range of different technologies into one legal category; too complex, in that they embrace the anthropomorphic fallacy and the specter of machine consciousness. But he is also not entirely comfortable with the arguments on the other side, especially those that promote the reification of socially situated and interactive robots or, worse, impose a kind of Slavery 2.0—a repurposed and updated form of Roman slave law under euphemisms like the “digital peculium.” 

The final chapter of the second part examines the concepts of transparency and explainability, and provides a cost/benefit analysis of these solutions. Though Chesterman finds these concepts to be more promising than the proposals for AI personality, both have their own set of difficulties. “The limits of transparency,” Chesterman notes, “are already beginning to show as AI systems demonstrate abilities that even their programmers struggle to understand” (9). 

Explainability, which was recently codified in EU law with “the right to an explanation,” seems to fare much better. But the way it works in practice tends to be “backwards-looking” or post hoc, relying on individuals knowing that they have been harmed. In an effort to address this limitation—one that could further disenfranchise vulnerable members of society—several forward-looking mechanisms are proposed, specifically impact assessments, audits, and an AI ombudsperson. These proposals are not necessarily new. They have been developed and deployed in other areas. What is new is their application to the regulation of AI and robots. 

Throughout the book Chesterman makes a strong and persuasive case that “existing norms, suitably interpreted, are able to deal with many of the challenges presented by AI” (9). But he also realizes and documents the fact that there are limitations. In response to these potential problems and lacunae, Chesterman rounds out the analysis by proposing new rules and some new institutions that, as he argues, will be necessary to remediate the few inadequacies that have been discovered with existing legal tools and regulatory models in the course of the investigation. 

The first of the three chapters that make up this final part (chapter seven) investigates and proposes new rules. The chapter begins with a survey of the competing guides, frameworks, and principles of AI Ethics that have been developed and promoted by states, industry, and intergovernmental organizations. Chesterman calls this “norm proliferation,” which is, for my money, one of the best discursive formulations in the book. And the chapter, rather than adding to this problem, provides a more pragmatic guideline for deciding what new laws and regulations are needed. In answer, the chapter proposes and makes a case for two specific reforms, one for dealing with lethal autonomous weapons and the other designed to address the new problem of robot abuse and perceived victimization. 

The eighth chapter does something similar at the institutional level. We need new rules, but we may also need new institutions since the challenges of AI and robots are global and do not recognize or necessarily respect the boundaries of existing legal jurisdictions. As Chesterman argues, “different national approaches to regulation will pose barriers to effective regulation exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action, or at least coordination, is needed” (10). 

Taking a cue from international initiatives in cybersecurity and other efforts at protecting the “global commons,” Chesterman advances a hypothetical “International Artificial Intelligence Agency” (IAIA), which he models on existing international organizations like the International Atomic Energy Agency (IAEA). The idea is reasonable, and it is pitched at the level of a proposal. For that reason, it would be impetuous to expect that the practical details of establishing this new institution would be fully developed within the course of this one chapter. That would require another book, and maybe more than one. For now, it is sufficient, Chesterman believes, that the raison d’être and legal framework for such an institutional innovation be put forward and advanced. 

The ninth and final chapter flips the script and turns attention to “the possibility that the AI systems challenging the legal order may also offer at least part of the solution” (10). The operative word in this sentence is “part,” and it is not insignificant that the title of this final chapter is punctuated with a question mark: “Regulation by AI?” Chesterman’s point here is that AI should not be viewed only as a problem for law; it can also be part of the solution. But—and this is an important “but”—efforts to automate the law and employ AI-driven data analytics in legal proceedings and decision-making need to proceed with caution. The opportunities or challenges in this area are not just an empirical matter; they involve fundamental philosophical questions concerning the nature of law and its moral and political foundations, for example, “whether law is reducible to code that can optimize the human condition or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public” (10). Chesterman does not so much answer these questions as provide the framework for their debate—a debate that no doubt will become increasingly important and urgent. 

Anyone familiar with the literature in the field knows that Chesterman’s We, the Robots? is not the only monograph attempting to close the gap between existing legal structures and the seemingly unprecedented challenges imposed on us by AI, robots, and algorithms. The field is, for better or worse, already quite crowded. But there are three important features of Chesterman’s contribution to this literature that make it stand out from the competition. 

First, he takes a global perspective. The technologies of AI and robots are not limited by geographic, linguistic, or political boundaries. They are being developed and deployed by multinational corporations, they are being rolled out and utilized by individuals and communities across the globe, and different jurisdictions are already responding in ways that vary and are not necessarily compatible. Consequently, the challenges of speed, autonomy, and opacity are global issues, and responding to them often requires international cooperation and coordination. 

Chesterman’s analysis is not only sensitive to the diversity of law—i.e., the fact that different nations, like the US and China, and different transnational political bodies, like the EU, operate with different legal systems, methodologies, and philosophies—but leverages the opportunities that these differences make available. Comparing EU law to what is being developed in Singapore or China, for instance, provides revealing insight into the wide variety of approaches to resolving the same or a similar set of problems. For Chesterman, these differences are not a difficulty to be overcome; they are an opportunity for investigative diversity, and he makes the most of what they have to offer. 

Second, the book is incredibly well organized, persuasively argued, and supported by sufficient evidence. But, and perhaps most importantly, it is well composed and very readable. This last bit—readable—is not something one often expects of publications in legal theory, which frequently rely on complicated legal jargon and get lost in the important but often overwhelming details of statutes and cases. Chesterman is able to formulate a remarkably accessible legal argument—one that can be read and understood by both experts and non-experts—without sacrificing the necessary attention to detail or investigative rigor. This not only speaks to the author’s skill as a writer, but it derives from his understanding that deciding these important legal questions is not something that can or should be outsourced to a small number of experts. We all have a stake in how these matters are resolved, and we all need to be able to access and make sense of the opportunities and the challenges that now confront us and our communities.

Finally, and to bring this all back to where we began, Chesterman’s argument is conservative in the most affirmative and optimistic sense of that word. The moral and legal challenges of AI and robots may seem daunting, and at times it may appear we are on the verge of that robot uprising rehearsed for us in science fiction. But the robots and AI are already here, and the challenges they present to human individuals and communities are very real and very urgent. Chesterman is convinced (and is convincing in his argument) that we are largely ready to meet these challenges. According to his assessment, we already have the tools to make a response, and where there remain seemingly unresolvable difficulties, we have a good idea of what new rules and new institutions will be needed. It is undoubtedly an optimistic message. But even those who would want to disagree with either Chesterman’s analysis or his conclusions surely need to take his insights and innovations seriously. 

 

 

Posted on 17 March 2021


DAVID J. GUNKEL is Professor of Media Studies, Department of Communication at Northern Illinois University.