By RAPHAELE CHAPPE
Review of New Laws of Robotics: Defending Human Expertise in the Age of AI, by Frank Pasquale
Cambridge, MA: Harvard University Press, 2020
Long-term macroeconomic trends such as the decline of the labor share in total national income, the stagnation of wages for average workers, and the widening wage distribution suggest that the U.S. economy has been failing to deliver inclusive economic growth for workers for some time now. Prior to the COVID-19 pandemic, the unemployment rate in the U.S. had dropped to a 50-year low of 3.5 percent, leading The Economist to celebrate an unprecedented jobs boom; in reality, labor markets were a lot more precarious than this figure suggested. In past decades jobless recoveries have become more prevalent, and most new jobs created in the aftermath of the 2008 financial crisis were in low-wage occupations rather than “quality jobs.”
The share of new jobs characterized by little to no job security, low wages, contingent work, and no benefits had been increasing—not just in the so-called “on-demand” or “gig economy” of digital platforms, but across a variety of alternative work arrangements more generally. This change points to a gradual and profound transformation in the nature of employment relationships and labor markets, one that has shifted more economic risk onto average workers—what Jacob Hacker has referred to in his book The Great Risk Shift as the “changing American work contract.” As this type of insecure work stands to become the new normal, the economy-wide pattern is one of mass under-employment rather than unemployment. We have jobs, just precarious ones.
While these trends seem to point to a persistently low demand for labor, there are very different perspectives on what may be happening. Some research suggests that we are on the verge of a general productivity surge on account of the accelerating pace of innovation, with new digital technologies threatening to disrupt entire industries and labor markets as workers stand to be displaced by machines. This is, notably, the perspective of Erik Brynjolfsson and Andrew McAfee, who warn in The Second Machine Age that as the economy becomes less reliant on labor, most workers will be left behind and income inequality is set to increase. Companies like Apple, Google, Microsoft, and Facebook are indeed emblematic of firms that attain large market shares and valuations with a relatively small workforce. In his recent analysis of leading firms by market capitalization, Thomas Philippon has highlighted that one defining characteristic of superstar firms today is that they tend to employ fewer workers than they used to—their labor footprint has dramatically decreased over time relative to both valuations and profit margins, resulting in high labor productivity for those firms.
Other research points to quite the opposite characterization, namely that the pace of fundamental technological innovation may actually have begun to slow down and that what we have been experiencing is in fact deepening economic stagnation and slowing productivity growth. This hypothesis has been put forward by economists on the right and on the left, such as Tyler Cowen in The Great Stagnation, Robert Gordon in The Rise and Fall of American Growth, and Aaron Benanav in Automation and the Future of Work. Benanav argues that the slowdown in economic growth and correspondingly slower rates of job creation are the outcomes of a global process of deindustrialization, that is, a decline in the share of industrial production and manufacturing in total employment. Structural factors such as global industrial overcapacity and underinvestment are more to blame than technology per se.
To the extent that no other sector has emerged to replace industry as a major engine of economic growth, workers have been reallocated to low-productivity jobs in the service sector. New digital platforms and crowdsourcing business models such as Amazon Mechanical Turk, Uber, Airbnb, and TaskRabbit have only facilitated this shift. When the major cost component is labor, an easy way, and perhaps the only way, to grow the economy is to increase demand for services by lowering prices and paying workers less. This could become (if it hasn’t already) the model not just for the so-called gig economy, but for the majority of firms and workers in the economy.
Both scenarios spell a stark future for labor. Frank Pasquale’s thought-provoking and deeply humanist New Laws of Robotics: Defending Human Expertise in the Age of AI insists that another story is possible: we can achieve inclusive economic prosperity and avoid both the trap of mass technological unemployment and that of low labor productivity. His central premise is that technology need not dictate our values, but instead can help bring to life the kind of future we want. But to get there we must carefully plan ahead while we still have time; we cannot afford a “wait-and-see” approach. In the spirit of Asimov’s classic but ultimately limited laws of robotics—originally designed to ensure that robots don’t mistreat humans, and updated in the book to better address twenty-first-century challenges—Pasquale outlines a much-needed framework for a human-centered regime of law and political economy to guide and frame the development, implementation, and regulation of new technologies. Unlike Asimov’s original laws, the new laws of robotics target the people building robots rather than the robots themselves.
If to date productivity numbers have not reflected the full automation narrative—Robert Solow famously quipped in 1987 that “you can see the computer age everywhere but in the productivity statistics”—recent advances in artificial intelligence, machine learning, and robotics are undeniably challenging the traditional division of labor between humans and computers. The stakes of a changing balance between humans and machines couldn’t be higher. What is gained and lost when robots take over tasks performed by humans? How should technology be deployed to ensure that work becomes more productive and fulfilling for workers? How do we identify and promote the right mix of humans and automation in light of our individual and collective goals and values? And how do we avoid destructive arms races and zero-sum dynamics?
Through a careful analysis of the problems that arise when robotics and automation are deployed in a variety of environments (as diverse as hospitals, schools, media, advertising, financial services, prisons, and the military), Pasquale builds a compelling case that these questions are too important to be left to the business world alone. The interactions between humans and computers are shaped by a broad range of actors and affect society at large. As such, the terms and conditions under which technological advances are deployed should not be unilaterally dictated by corporations and engineers in Silicon Valley, but need to be subject to rigorous public debate and continuous democratic oversight. The book’s vision for the proposed new laws of robotics recognizes and underscores the key role to be played by policymakers and regulators in this regard.
Pasquale makes the convincing case that while we should be taking full advantage of what robots can do better than humans, we also need to acknowledge and reaffirm the centrality of human labor, judgment, and expertise in the economy and in society. Indeed, the use of AI can greatly reduce mistakes and common errors, and overcome human cognitive limitations. Myriad examples are explored in the book. The “narrow” form of AI—typically focused on one particular task, for example a pattern-recognition algorithm that leverages massive amounts of imaging data—can help detect a cancerous tumor. Automated systems such as clinical decision support software (CDSS) can help physicians reduce mistakes and bad outcomes, and can also make work more meaningful by freeing up time and eliminating repetitive drudgery.
But a proper goal for policy should be for automation to complement rather than replace workers, capitalizing on human strengths to increase labor productivity while avoiding mass displacement of workers. This is Pasquale’s first law of robotics. Yes, workers should focus on tasks where they have a comparative advantage over computers, and technologies of automation should be leveraged to make work more productive and meaningful. But it is also a matter of defining and upholding quality standards and a larger vision of progress in various fields. Once we have identified what standards of care we want for the elderly, sick patients, and students, we can recognize when and why robots alone can’t do the job (for instance why personal interaction is needed for medical care or teaching), and strive for a level of cooperation between humans and machines that ultimately will bring us better elderly care, health care, education, and more. As Pasquale puts it, “[W]e are trying to preserve certain human values in health, education, journalism, policing, and many other fields” (171).
An important insight is that while higher labor productivity is desirable from the point of view of corporate profitability, robots don’t demand fair wages, vacation, or health insurance. As such, businesses will be incentivized to innovate to increase labor productivity while also relying on technology to displace workers to the extent possible. Incidentally, this is why Marx characterized technological change as a tendency toward a gradual increase in machines relative to labor, and ultimately a source of contradiction for capitalism—since value is extracted only from labor, the profit rate would eventually decline to zero, eliminating private incentives for production.
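This argument can be stated compactly in the standard Marxian notation (a conventional textbook formulation, not one used in the book): writing the rate of profit as the ratio of surplus value $s$ to constant capital $c$ (machines and materials) plus variable capital $v$ (wages),

$$ r = \frac{s}{c+v} = \frac{s/v}{c/v + 1}, $$

a rising “organic composition of capital” $c/v$ (ever more machinery per worker) drives $r$ toward zero for any given rate of exploitation $s/v$.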
Yet one need not subscribe to Marx’s labor theory of value (the idea that all value is generated through human labor) to take issue with an era of automation where robots do most of the work. In a world where the weak bargaining power of labor keeps wages low for the bulk of workers (while presumably the owners of the robots come to capture a greater share of the economy), the economy will come to rely heavily on the behavior of the rich, for example through the consumption of luxury goods, as was highlighted in the somewhat infamous Citigroup “plutonomy” memo. Yet differences in savings rates between rich and poor households, with the rich spending less of each incremental dollar that they earn, mean that in a context of ever-increasing income and wealth inequality, aggregate demand will ultimately be insufficient to sustain long-term economic growth. This is a classic Keynesian framework. These considerations are at the root of current policy debates around Universal Basic Income (UBI) and a job guarantee. Past a certain tipping point of automation, conceivably society could no longer function in its capitalist form; some authors, for instance Nick Srnicek and Alex Williams in Inventing the Future, advocate for a socialist post-work, post-scarcity society as the only viable option.
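The Keynesian arithmetic can be illustrated with purely hypothetical numbers (mine, not the book’s): suppose poor households spend 95 cents of each additional dollar of income and rich households only 60 cents. Shifting \$100 of income from the former to the latter then lowers aggregate consumption by

$$ \Delta C = (0.60 - 0.95) \times \$100 = -\$35, $$

so the more income concentrates at the top, the weaker the demand that sustains growth.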
Chapter 7 in New Laws of Robotics does a wonderful job of outlining this macroeconomic perspective and providing a balanced and nuanced analysis of related policy debates. Two narratives get debunked. First is the utopian “radical dream of full automation”—the idea that jobs will become obsolete and everything will be done by machines. Second is the cluster of assumptions that cast investments in labor (such as health and education) as a “cost disease” draining resources from the rest of society, that posit a trade-off between regulation and innovation, and that treat automation and AI as driven purely by efficiency gains. This way of thinking leads to “fast and cheap automation” and the complete and unrestrained takeover of entire professions and markets by leading technology firms in the name of short-term economic efficiency. The result is often automation implemented to replace rather than complement labor, jeopardizing the long-term sustainability of economic growth and generating negative externalities such as environmental degradation. The book’s vision for the future of AI and robotics is one where the right policies in a variety of areas such as tax, competition, labor, and education can help target inclusive prosperity and achieve a rebalancing of the economic playing field to protect workers against substitution by machines.
To be done right, this will require substantial government funding and investment in labor. While sympathetic to UBI, which holds the promise of reaffirming basic subsistence rights and a sense of meaning and purpose for workers while stimulating the economy, Pasquale highlights that it “runs into practical difficulties quickly” and has a number of shortcomings. Poorly planned, it could simply lead to inflation and undermine the state by substituting for publicly-provided services. A federal job guarantee of the sort proposed by economist Pavlina Tcherneva might avoid these perils.
Chapter 7 does address the issue of how such ambitious measures could be funded, and is generally supportive of the views of modern monetary theory (MMT) in this regard. MMT challenges the conventional thinking that governments can only spend by collecting taxes or issuing debt, and argues instead that to the extent they are sovereign over their currency, governments face an inflation constraint rather than a public deficit constraint. To be sure, MMT is controversial and does have its critics. In the aftermath of the COVID-19 pandemic, which required massive stimulus spending and expansions of the Federal Reserve’s balance sheet, investors, analysts, and economists have voiced growing fears about U.S. inflation. Inflation could create difficult dilemmas for policymakers. High levels of existing corporate and sovereign debt mean there are concerns that rising yields could precipitate a solvency crisis, while conventional policy tools such as raising interest rates may only increase the cost of debt servicing.
In short, there may be legitimate concerns around macro-financial vulnerabilities that could limit reliance on public finance on the scale needed to implement these ideas. The book’s discussion has the merit of clearly laying out that if MMT is taken at face value, inflation becomes a central topic and there may be difficult decisions to make about how to properly measure it; it also highlights how strategic interventions (e.g., taxation, sector-specific measures designed to deter asset bubbles, or bank regulation) could potentially limit problems. Far from being conclusive on these issues or offering definite answers, which would be premature, Pasquale argues against giving economists a monopoly over questions about the role of money, and favors pluralist foundations for the political economy of automation. Wider expertise and public input on these issues are also critical.
The second law of robotics posits that robotic systems and AI should not counterfeit humanity. A world where we do not know whether we are dealing with a fellow human or a machine is disturbing. Robots that mimic human emotions and fake feelings, and humanoid robots that are indistinguishable from humans (no longer the realm of science fiction), cheat us in a way that can end up diminishing our valuation of empathy and genuine human relationships, in the same way that counterfeiting money can damage trust in a currency. There is also an ethical argument. The voice or face of another human being demands respect, says Pasquale, while machines have no such claims on our conscience. This is very compelling.
The third law of robotics says that robotic systems and AI should not intensify zero-sum arms races. There is, for example, the “arms race for data” among technology firms; the “arms race for ratings and rankings” among individuals who try to access certain services by agreeing to release more and more personal data; the “arms race for attention” between users and advertisers on platforms such as Google and Facebook. Chapter 4 of New Laws of Robotics describes how these media platforms have effectively taken on the role of global communication regulators and need to accept responsibility for this new role. Instead, they have cultivated irresponsibility, hiding behind algorithms that take over editorial functions and maximize engagement and ad revenue rather than uphold social values of truth and information. Recognizing the centrality of human expertise in the careers of journalists, editors, and creatives, particularly in the wake of recent debates about “fake news,” is much needed in order to achieve the right balance between commercial interests and the public interest, and to “help prevent the worst excesses of the automated public sphere.”
In The Great Transformation, his 1944 study of the birth of the modern market society, Karl Polanyi made the case that markets are not pre-existing abstract entities but rather a construction of the legal system. Markets for new technology are no different. As such it is critically important to get legal frameworks right to properly define incentives for innovation, achieve desired market outcomes, and uphold rights and values. For example, regulations like “bot-disclosure” laws in Europe are one way to preserve our ability to distinguish between humans and machines. Who should take responsibility when robots fail? Vendors and developers have long resisted liability, relying on what Pasquale has labeled the four “horsemen of irresponsibility”: sweeping preemption, radical deregulation, broad exculpatory clauses, and free expression defenses.
The distinction between automation that complements workers and automation that replaces them is critical to properly thinking through the issues. When robotics or AI substitute for human labor, Pasquale argues, strict liability standards (where developers take responsibility even if no human negligence is involved) are more appropriate. Anything less demanding could incentivize premature automation to reduce dependence on human expertise even when it is still needed, with potentially devastating consequences in fields such as health care. This is also why we need transparency to be able to trace robots or online bots back to their owners or developers. This is Pasquale’s fourth law of robotics, namely that robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).
Michael Polanyi famously observed that “we can know more than we can tell.” Human judgment will never be fully automated. But the logic and dictates of neoliberal capitalism create the risk of ending up with a kind of automation that places machines rather than humans at the center of society and the economy. New Laws of Robotics issues a dire warning against the perils we may collectively be facing as a result of such advances. But it is also a deeply humanistic manifesto with a hopeful vision of the future, one that dares to dream of a world where robots work for us rather than the other way around. Pasquale’s vision for the political economy of automation invites us to reimagine our economy so that technology may be deployed to enhance professional expertise and promote human flourishing. One can only hope that these proposed new laws of robotics will guide the implementation of new technologies in the coming decades.
Posted on 14 May 2021
RAPHAELE CHAPPE is an Assistant Professor of Economics at Drew University. She is also an economic advisor for The Predistribution Initiative, a multi-stakeholder project to develop new investment structures that share more economics with workers and communities.