Affording Choice

By RYAN D. DOERFLER

Review of Choosing Not to Choose: Understanding the Value of Choice, by Cass R. Sunstein

New York: Oxford University Press, 2015


Imagine a world in which Amazon selects your toilet paper, Netflix picks your movies, and OkCupid chooses your dates, unless you override their choices. The horror. Or maybe not. In Choosing Not to Choose, Cass Sunstein tries to warm us to the idea of personalized default rules. With the growing availability of large data sets, it is increasingly feasible to tailor default rules to the situations of individuals (for example, an algorithm that predicts which cereal I will like). It is thus increasingly sensible, Sunstein suggests, for us to outsource decisions to choice architects since, in so doing, we free up time (and cognitive energy) to do more important things.

Sunstein is correct but far too cautious in his embrace of Big Data. Sunstein goes out of his way to limit the conditions under which personalized default rules make sense. If the area is one in which we benefit from exposure to diversity, for instance, Sunstein grants that active choice is probably best. The reason is that a default rule based on past choices would, to our detriment, default us to more of the same. In making such concessions, Sunstein overlooks obvious patches (for example, a sophisticated default rule with a diversity quotient). More interestingly, he exhibits unreasonable optimism about our capacity for choice, contrasting reliance upon personalized default rules with reasoned decision-making while overlooking how haphazard actual choosing often is.

1.

Ours is a time of cognitive overload. We have too many choices to make and too many options from which to choose. With everything from running shoes to retirement plans customizable, the number of possible choices exceeds our individual capacity for reasoned choice. It is thus inevitable that we rely upon outside cognitive assistance just to get ourselves through the day.

Default rules are one source of such assistance. Take GPS navigation. If one enters a destination into a GPS application, the application proposes a route from here to there. One is free to rely upon or reject the proposed route; in that sense, the proposed route is a mere default. Yet, if one accepts the application’s proposal, one will have successfully outsourced the cognitive task of determining how to get to the destination.

For default rules to provide cognitive assistance, one must accept defaults somewhat unthinkingly. If, for example, one assessed a GPS route by calculating possible routes independently, one would derive no cognitive benefit from a GPS application. That is not how most of us respond to defaults. Instead, various psychological dispositions make acceptance of defaults to some degree automatic.

First, we are disposed to accept defaults because rejecting them requires additional thinking. To reject a GPS route, for example, requires us to calculate an alternative route for ourselves. And because cognitive resources are scarce, we would rather not. Sunstein labels this disposition “inertia,” or, alternatively, an “effort tax” on default rejection (p. 34). The basic idea is that we prefer to think less rather than more.

Second, in at least some cases, we attribute to a default what Sunstein calls an “informational signal.” Part of the reason we accept GPS defaults is our belief that GPS devices can calculate routes better than we can. If we believe that a default has been “chosen by someone [or something] who is wise, decent, or smart and for a good reason,” we are, other things equal, disposed to accept it (p. 41). For that reason, our disposition to accept a default is contingent on our trust in the architect of the corresponding rule.

Third, we sometimes favor defaults because of “loss aversion” or, in my view, its moral psychological analogue, the responsibility felt for activity, as opposed to passivity. Loss aversion is the tendency to dislike losses (for example, a five-cent “tax” for disposable bags) more than one likes commensurate gains (for example, a five-cent “subsidy” for using reusable bags). What counts as a loss or a gain depends upon the status quo. That status quo, however, comes not from the heavens but from the chosen default. Golfers, for example, are better at avoiding “bogeys” than at making “birdies.” What counts as a “bogey” or “birdie,” however, turns on the course designer’s decision as to what constitutes “par.” In this way, choice architects can increase attachment to, say, vacation time by setting it as the default rather than as an option that must be actively chosen (pp. 32-33).

Analogously, “active” choices trigger feelings of responsibility more than “passive” choices, even when the consequences are the same (p. 48). A representative contrast is the one between cheating on one’s taxes and refraining from correcting an Internal Revenue Service error such as a mistakenly issued refund. Because of this disposition, we feel more responsible for consequences that result from default rejection than from default acceptance (for example, one feels worse about being stuck in a traffic jam if one rejects, rather than accepts, a GPS route). We are thus inclined to accept defaults in part because doing so passes the responsibility buck.

Because we accept defaults somewhat unthinkingly, default rules also have the potential for harm. Suppose, for example, that a car dealership defaults customers to the purchase of expensive, unnecessary features. In that case, the dealership’s customers will purchase those features more often than they should. Sunstein acknowledges such risks, but offers a few consolations. First, Sunstein insists, competitive markets, both commercial and political, impose limits on harmful defaults. To continue the previous example, other dealerships can gain a competitive advantage by offering more reasonable defaults. Second, because acceptance of defaults is automatic only to some degree, one will reject a default if it is bad enough. These consolations notwithstanding, Sunstein concedes that the potential for harm is real and must be taken into account.

Sunstein also considers whether default rules are paternalistic and so inconsistent with personal autonomy. Here, Sunstein offers various persuasive responses. Among other things, Sunstein notes, to require persons to engage in active choice is itself a form of paternalism. Sometimes we prefer to outsource decision-making (think GPS). In such cases, it is straightforwardly paternalistic to require us to choose actively nonetheless. Further, Sunstein observes, to require active choice across the board is simply infeasible. Again, the cognitive load is too great as it is. If the goal is attentive decision-making in at least certain areas (see below), we do best to use default rules and other tools to lighten our load in those areas that matter less.

2.

A default rule is beneficial only if it defaults one to good things. Whether a default rule is attractive in some area is thus contingent upon whether (reasonable/informed/actual) preferences in that area can be predicted. As Sunstein recognizes, preferences are easier to predict under certain conditions than under others. Again, take GPS. In the area of navigation routes, the vast majority of us prefer to get from here to there as quickly as possible. For that reason, a GPS system that minimizes anticipated travel time serves us quite well as a group. Consider, by contrast, fashion or music. In those areas, preferences are heterogeneous. Default rules insensitive to individual idiosyncrasies would be unsatisfactory to almost all. Areas such as health insurance or retirement savings fall somewhere in between navigation routes and fashion or music. The more heterogeneous the preferences in some area, the more difficult it is to design a default rule that will produce good defaults across cases.

Similarly, if preferences in some area are largely static, choice architects have an easier time. Default rules tend not to update over time. This means that a rule that works well at t1 might not work well at t2 if preferences change. Here, again, the contrast between navigation routes and fashion or music is illustrative.

As a historical matter, heterogeneity and change over time have imposed real limits on the utility of default rules. In many (most?) areas, preferences are diverse and shifting. For that reason, one could argue that default rules are, or at least have been, presumptively inappropriate much of the time. As Sunstein observes, however, such limitations may soon become things of the past. With the availability of large data sets, choice architects are increasingly capable of crafting default rules that are sensitive to individual circumstances. Incorporating information about the past behavior of the person and those similarly situated to her, such personalized default rules calculate defaults based upon the likely preferences of the specific user. In so doing, such rules promise to combine the cognitive assistance of “mass” default rules with the accuracy of active choice.
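To make the mechanism concrete, here is a minimal sketch, in Python, of how such a rule might compute a default. The function, its inputs, and the simple nearest-neighbor logic are my illustrative assumptions, not Sunstein’s proposal or any actual vendor’s method:

    from collections import Counter

    # Hypothetical personalized default: infer a user's likely choice from
    # her own history and the histories of the k most similar other users.
    def personalized_default(user_history, peer_histories, k=5):
        # Rank peers by how many past choices they share with the user.
        def overlap(peer):
            return len(set(peer) & set(user_history))
        nearest = sorted(peer_histories, key=overlap, reverse=True)[:k]
        # Default to the option most common among the user and her peers.
        votes = Counter(user_history)
        for peer in nearest:
            votes.update(peer)
        return votes.most_common(1)[0][0] if votes else None

However crude, the sketch captures the structure: infer the likely preferences of the specific user from her record and the records of those similarly situated to her.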

Big Data analytics is still in its infancy, as anyone subject to Amazon recommendations can attest. Analytical technologies are, however, developing rapidly. As they do, one should anticipate a correspondingly rapid increase in the number of areas in which it is feasible to rely upon sophisticated decision-making algorithms. Health insurance defaults based upon medical history. Retirement-savings defaults based upon actuarial calculations constructed from fine-grained demographic data. Fashion defaults based upon Instagram “likes.” The sky is the limit.

3.

Sunstein is quick to insist that, even assuming technological feasibility, personalized default rules are appropriate only in limited circumstances. While such rules overcome concerns resulting from heterogeneity and change over time, other sources of concern remain. Sunstein concedes, for instance, that even personalized default rules are presumptively unattractive in areas in which we have an interest in learning. With music or television, for example, there is undoubted benefit to trying new things. So too food, fashion, and sources of news. In such areas, Sunstein fears that adherence to personalized default rules would limit, rather than expand, our horizons. The reason is that personalized defaults are a function of past choices, and so personalized default rules would continuously default one to the familiar, as opposed to the novel (for example, an algorithm that defaults fans of science fiction to more science fiction).

Moreover, Sunstein allows that personalized default rules should be disfavored if substantial autonomy interests are at stake. Take voting. Imagine a system in which one is defaulted into voting for political candidates of the same party for which one previously voted. Such a system would reduce the costs of decision without much increase in the costs of errors. Nonetheless, Sunstein reasons, such a system would be objectionable because the “internal morality” of voting requires that voting be an active choice, in which voters are “engaged, thinking, participating, and selecting among particular candidates” (p. 164). One might think similarly about, say, romantic partners or careers. In such areas, it matters, for reasons unrelated to accuracy, that one deliberates (in some sense) and chooses for oneself.

But Sunstein makes these concessions too quickly. Start with learning. Sunstein is right that a personalized default rule would stifle learning if that rule defaulted one to the familiar time and again. But why should that be the case? One can easily imagine, for example, a film-selection algorithm with a (perhaps customizable) diversity quotient, such that the films selected are mixed and, to some degree, unfamiliar (for example, two percent contemporary Scandinavian).
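A toy version of such an algorithm makes the point; everything here (the Python names, the default two-percent figure) is illustrative rather than anything Sunstein or Netflix has proposed:

    import random

    # Hypothetical film-selection default with a "diversity quotient":
    # most picks track past taste, but a fixed share is reserved for
    # unfamiliar categories.
    def pick_films(familiar, unfamiliar, n=50, diversity=0.02, seed=None):
        rng = random.Random(seed)
        n_new = max(1, round(n * diversity))  # e.g., two percent of the queue
        picks = rng.sample(unfamiliar, min(n_new, len(unfamiliar)))
        picks += rng.sample(familiar, min(n - len(picks), len(familiar)))
        rng.shuffle(picks)  # interleave the novel with the familiar
        return picks

Nothing about a personalized default, in other words, requires that it serve up only more of the same.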

Sunstein also overestimates the degree to which active choice results in learning under contemporary conditions. Again, ours is a time of cognitive overload. Because one has access to virtually all films, choosing which film to watch on a given occasion can be overwhelming. Absent some source of curation (for example, a cinema director, a film critic, a Netflix algorithm), one must engage in a sort of cognitive triage, applying various filters (for example, science fiction, Metacritic score greater than 60) to limit the set of candidate films to some manageable number. Because, however, even the set of possible filters can be unmanageably large (for example, Netflix currently divides its content into 20 categories, each of which is divided into numerous subcategories), one must engage in even further triage just to determine which filters to apply. Staring down this avalanche of options (and options within options), it is far too easy to “default” oneself, so to speak, to the familiar (for example, “Maybe one of the Bond movies?,” “How about Tarantino?,” etc.).
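The triage itself is mechanically simple, as a toy filter makes plain; the field names and the score threshold below are assumptions drawn from the example in the text:

    # Hypothetical catalogue entries and the sort of cheap filter one
    # applies under cognitive overload: genre plus a critic-score cutoff.
    catalogue = [
        {"title": "Film A", "genre": "science fiction", "metacritic": 74},
        {"title": "Film B", "genre": "romance", "metacritic": 55},
        {"title": "Film C", "genre": "science fiction", "metacritic": 48},
    ]

    def triage(films, genre, min_score):
        # Keep only films in the genre scoring above the cutoff.
        return [f for f in films
                if f["genre"] == genre and f["metacritic"] > min_score]

    shortlist = triage(catalogue, "science fiction", 60)  # just "Film A"

The filter shrinks the candidate set, but it does nothing to surface the unfamiliar; that work falls back on the chooser.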

Active choice need not and, under conditions of cognitive overload, often will not involve serious consideration of novel options. For that reason, it is entirely plausible that, under contemporary conditions, reliance on a sophisticated personalized default would lead to more learning than would reliance on one’s own (already overtaxed) cognitive capacities. I could choose to watch more contemporary Scandinavian films. But I do not. I might thus do better to rely upon Netflix.

Turn next to autonomy. Sunstein is right that, in certain areas, the “internal morality” of decision-making appears to call for attentive active choice. The problem is that, in these areas, actual decision-making is often inattentive and so internally immoral. Go back to voting. The ideal voter is, for most, the earnest citizen who thinks seriously about candidates and issues. Not only does she choose for herself (active choice); she does so on the basis of reasons (attention). Needless to say, many (most?) actual voters fall short of this standard. Some vote for candidates of one party without anything that could be fairly described as “engage[ment],” “participat[ion],” or “thinking.” Others vote more eclectically, but do so in a haphazard way (for example, on the basis of likability). And this is to say nothing of eligible voters who remain merely eligible.

Or consider romantic partners. In the ideal, choosing a romantic partner probably involves less in the way of reasoning than does choosing among political candidates. All the same, most accept that choice of partner should be appropriately attentive in the sense that there is a range of considerations (for example, shared values, physical attractiveness, common interests) to which one should attend (however consciously) when making that choice. This ideal notwithstanding, partner selection is increasingly inattentive under contemporary conditions. Dating websites and apps offer thousands of potential partners. To manage this (perceived) surplus of options, one applies filters that involve minimal cognitive cost, physical attractiveness in particular. Worse still, once one chooses, one pays less attention to one’s for-the-time-being partner because that surplus of options remains.

Actual decision-making notwithstanding, Sunstein insists, the “aspiration” of autonomous choice in such areas is important (p. 164). Maybe so. Just as important, though, is that that aspiration not become an excuse for inaction. Eligible voters, for example, are inattentive not because of poor character but because they are overtaxed as it is. Absent, say, a weeklong paid national voting holiday, why not use a preference-predicting default rule to add citizen input? Similarly, reliance on dating websites and apps is on the increase in part because the pressures of the contemporary economy make it increasingly difficult to form social relationships in traditional ways. Thus, so long as, say, nine to five remains a thing of the past, why not rely upon an OkCupid algorithm to choose a partner one promises to date exclusively for one month?

Sunstein does not give his argument enough credit. We know that when people “actively choose,” they almost always fall back on mental defaults that control them far more than legal or commercial defaults could. A system of personalized defaults can be designed not only to improve their everyday choices, but to push them out of ruts, and expose them to experiences that they have never imagined.

4.

As Sunstein observes, poverty increases cognitive load. One advantage of living in a rich country is that one can take for granted access to potable water. Similarly, if one is well off in the United States, one need not think about whether to pay the electric bill before or after the first of the month. Because the cognitive burden of poverty is substantial—one study estimates poverty’s adverse effect on IQ-test performance as roughly equivalent to that of not having slept the night before the test—Sunstein reasons persuasively that low-income persons stand to benefit disproportionately from default rules if appropriately tailored. Such rules would both lighten the load—fewer choices to make—and (plausibly) produce better outcomes—choice architects can devote greater cognitive resources to those choices than can already over-burdened individuals.

But while Sunstein attends carefully to poverty when considering the effects of default rules, he gives insufficient consideration to poor people’s receptivity to such rules. In his discussion of predictive shopping, for example, Sunstein asks whether persons would be receptive to a monitoring system that automatically purchases various household goods when such goods have run out. When surveyed, Harvard University undergraduates overwhelmingly said, “yes.” Respondents recruited on Amazon Mechanical Turk were far less receptive. So too a nationally representative sample. Sunstein conjectures that the discrepancy is attributable to age—perhaps young people are especially comfortable with technologies that “shop” for them. Or maybe it is just that Harvard undergraduates are “unusually unenthusiastic about spending their time looking for household goods” (pp. 183-84). One hypothesis Sunstein ignores, however, is that Harvard undergraduates are much wealthier than the other populations and so much less afraid of automatic charges.

For most readers of this Review, it is likely hard to imagine that the purchase of toilet paper is something one might need to put off for reasons to do with cash flow. That is, however, the financial reality for a depressingly large segment of the population. According to a recent study by the Federal Reserve, 47% of Americans either could not cover a $400 emergency expense or could cover it only by selling something or borrowing money. Given this degree of insecurity, one starts to see how a $20 purchase—or, more specifically, the timing of a $20 purchase—might be something about which one has to think. Such reasoning extends beyond predictive shopping to, for example, automatic bill pay. Sunstein rightly praises automatic payment as an easy way to lighten one’s cognitive load. Such programs are, however, unattractive if one must consider each month whether to pay on time.

So what to do? If economic insecurity makes one unreceptive to default rules, additional availability of such rules might actually increase, rather than decrease, inequality of cognitive load. In this respect, personalized default rules would have the same effect as any other convenience technology more accessible to high-income persons. As with those other technologies, this is not necessarily an argument against their adoption. It is, however, a reason to temper our expectations concerning the relative benefits of such technologies to rich and poor.


RYAN DOERFLER is Harry A. Bigelow Teaching Fellow and Lecturer in Law at The University of Chicago Law School.