Plus ça change: Privacy, Corporate Interests, and the Design of Data-Extractive Tech 

By IGNACIO COFONE and ADELISE LALANDE

Review of Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power by Ari Ezra Waldman

Cambridge: Cambridge University Press, 2021


 

Ari Ezra Waldman’s Industry Unbound is a detailed look into the inner workings of tech corporations and how they normalize anti-privacy practices and thinking both within and outside their organizations. Waldman’s fieldwork investigates the tension between corporate rhetoric and the myriad ways in which privacy is undermined in corporate practice, which the book calls “the performance of privacy.”  It is a book about corporate power and how the tech industry continues to harm consumer interests and evade accountability despite increasing public scrutiny and legal reform. 

In its first chapter, titled “A Day in the Office,” the book recounts a visit to a tech company, based on Waldman’s observations from meetings, interviews, and documents (15-44). The book describes meetings with software engineers, privacy lawyers, a Chief Privacy Officer, a Director of Privacy, a privacy analyst, general counsel, and a Chief Risk Officer, among others. What stands out from this narrative is the lack of internal agreement among these actors about what “privacy” even means. This disagreement reflects ambiguities that also exist outside of the industry. Privacy professionals are often eager to highlight that the public holds different views about what privacy means (for example, “a lot of people talk about privacy” or “privacy means different things to different people”) (21, 23, 35). This emphasis is a red herring that minimizes privacy by making it sound as if privacy is devoid of substance and its meaning impossible to agree upon.

But the problem runs even deeper than this. Waldman shows how employees conflate privacy with cybersecurity, encryption, and transparency. At best, they find it indistinguishable from user choice and control. For example, when asked how he would define privacy, one product manager answered: “privacy is important around here. But I really have no idea what it means other than making sure everything we do is secure up and down the line. That’s just what privacy is” (21). This conflation, often encouraged by upper management, shapes internal conversations and thinking about privacy and, consequently, how tech products are designed.

Another notable issue is that, when it comes to designing for privacy, software engineers consistently say they need objectives they can code for. But important aspects of privacy, including understanding user expectations, third-party uses of data, and the integration of diverse privacy concerns, are not reducible to specific code. Hence, these issues are systematically left out of privacy training (195).

Additionally, privacy professionals and in-house counsel tend to be excluded from much of the product development process, sometimes because of a perceived lack of the technical skills required to help integrate privacy into design and other times because they may be perceived as inhibitors of innovation (153-156). Team members are typically siloed from each other and from other departments, further enabling the design of data-extractive technology. Narrow assignments (i.e., discrete tasks) make it easier for privacy to be seen as “someone else’s job” (186-193). The venture capitalists who fund tech companies consistently champion a growth-at-all-costs mentality and prioritize user engagement over privacy (195-197). The book thus illustrates the everyday anti-privacy mechanisms that corporations use to constrain how privacy professionals work and think, from siloing employees in the design process to coercive bureaucratic tactics such as “deskilling.”

Being critical of tech companies doesn’t mean being critical of everyone who works in them. The book carefully points out that many big tech employees come in with good intentions and a genuine interest in privacy. However, because of various bureaucratic and financial pressures, as well as a desire to succeed, many end up complicit in undermining privacy. Without necessarily realizing it, they become “team players,” a term often used as a euphemism for those who do not want to impede innovation by bringing up privacy considerations (149-153, 209, 224-225). This presumption that privacy and innovation conflict has broader implications at the policy-making level, where the assertion is used as a deregulatory tactic despite a lack of evidence that privacy truly stifles innovation, as the tech industry often claims (174).

A central theme of Industry Unbound is the inadequacy of privacy laws across the world. The flaws of the traditional notice-and-consent model of privacy continue to be replicated in recent legislation, such as the California Consumer Privacy Act (CCPA) and the EU General Data Protection Regulation (GDPR), which “perpetuate a self-governing privacy regime” (107), a regime focused on users making “decisions” about what to do (or not to do) with their personal information (52). The aim is to “empower” individuals to make privacy choices (97).

Framing privacy as a matter of control or choice allows corporations to shift the burden of protecting privacy onto consumers, who are asked to read and understand countless privacy policies, find alternative websites or apps (if they exist) when they object to a platform’s data collection practices, and navigate platforms’ opt-out processes, all within an environment designed to extract as much data from them as possible (54). This regime’s efficacy and legitimacy rest on the erroneous assumptions that we can both properly process privacy notices and make rational decisions, and that consent is the same thing as choice (52).

In an earlier book, Privacy as Trust, Waldman wrote about the traditional and legally influential view that, when we share information with others, this information is no longer private (equating privacy with secrecy). As an alternative, Waldman’s privacy-as-trust approach proposes that, when we share, we do so within relationships of trust, expecting that certain norms of confidentiality and discretion will be upheld. This approach recognizes that, rather than being a matter of choice (or consent), sharing is inevitable and frequently involuntary. Companies should be held accountable when they breach a user’s trust in them and act against the user’s best interests.

Industry Unbound builds on this notion and highlights the many ways in which tech companies remain unworthy of our trust. Companies continue to tap into and exploit “an innate human desire to share” (150). Many of the privacy professionals and engineers who work at tech companies (but also courts, legislatures, and policymakers) erroneously think of privacy in terms of disclosure, user autonomy, choice, and “empowerment.” This continued reliance on notions of user consent is problematic because informed consent to data collection is never truly possible: consumers cannot fully appreciate all the ways in which privacy harm can arise, especially given the extent of data aggregation and of data sharing with third parties.

Moreover, even when privacy law places some emphasis on corporate responsibilities (rather than on individuals), companies find ways to undermine the law in practice. The industry gets to interpret and implement privacy law on the ground by way of checklists, records, and documentation (like privacy impact assessments). “Check-the-box” compliance with privacy rules, however, is merely a symbolic structure that allows companies to evade accountability (132-135, 212-215). Privacy is integrated into product design, but often in merely performative ways. Thus, despite reforms in privacy law around the world, what the law continues to protect are corporate financial interests and data-extractive practices.

Lack of trust relates to another theme in Industry Unbound: the continued lack of diversity in tech companies, in particular the representation of women. Of the ninety engineers, software engineers, web designers, coders, programmers, and other technologists interviewed or observed, only seventeen were women. Many of the meetings, the author reports, were entirely composed of young, male engineers—which the book calls “broteams.” Women were also absent from most design meetings. The book notes that this gender imbalance is consistent with statistics for the industry (181-182). 

This lack of diversity matters both inside and outside of companies. Inside, it leads to discrimination in the workplace: “Misogynistic speech among male designers was common during my visits,” Waldman writes, “with seven coders chatting with each other about their female coworkers in graphic sexual terms. When women were involved in design meetings, their ideas were either ignored and repeated by men (seven times) or told their design ideas had no merit (thirteen times)” (182). 

And the problem goes far beyond gender. “Women, members of the LGBTQ community, persons of color, poor women, those living with HIV, those living with disabilities, and others with stigmatized and marginalized identities have different lived experiences with data, privacy breaches, and online harassment that heighten their awareness of privacy issues” (180). Outside of companies, diversity also matters because members of these groups (and particularly women) tend to be more attuned to problems such as risks of online stalking, how flagging social media accounts may be used as a tactic to silence members of marginalized groups, and how design (for example in video games) can perpetuate harmful gender stereotypes. This awareness (or lack thereof) impacts product design. Gender bias in product management—an area where women are notably underrepresented—affects everyone’s privacy (179-186). 

Company executives and privacy professionals say they want to “gain user trust,” that they “care about privacy,” and that they’re “doing their best.” But Waldman’s fieldwork shows that tech companies’ actions continue to fall short of their words. In the fall of 2021, the congressional testimony of Frances Haugen, a Facebook (now Meta) employee turned whistle-blower, described how Meta is willing to sacrifice user safety and well-being in furtherance of its own financial interests. While aware that its platforms exacerbate body-image issues in teenage girls, enable human trafficking and armed violence, and facilitate the spread of misinformation and hate speech, Meta has done little to fix these problems.

In her opening testimony, Haugen remarked: “I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy. The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.” Relatedly, Meta was recently accused of reacting with animosity and resistance to Apple’s increased privacy protections and efforts to curb data harvesting. All the while, also in the past year, Meta published a shiny new human rights policy. The policy reads: “Our principles are: give people a voice; serve everyone; promote economic opportunity; build connection and community; keep people safe and protect privacy.” The company’s rhetoric puts human rights at the forefront: it vows to respect and implement international human rights principles, conduct human rights due diligence and disclosure, remedy human rights impacts, protect human rights defenders, and “promote a climate of respect and awareness for human rights.” Whether its practice will match that rhetoric remains to be seen.

Borrowing language from Industry Unbound, Meta seems to be “performing accountability” (93) while undermining privacy (and human rights more broadly) in practice. This incongruity between words and actions may be doubly troubling given the leading role Meta plays in developing the metaverse. According to Haugen, the metaverse will collect even greater amounts of personal information from users than before while potentially being more addictive, and the company may gain another online monopoly through virtual reality. Meta has said it intends to involve human rights and civil rights groups “from the start” in building these technologies, and that it is looking into how to minimize “the amount of data that’s used, build technology to enable privacy-protective data uses, and give people transparency and control over their data.” But the company’s track record, and the realities on the ground that Industry Unbound presents, raise doubts about the strength and seriousness of these commitments.

Where do we go from here? In its final chapter, titled “Fighting Back” (232-248), the book calls for measures to rein in corporate power. Fighting back means building privacy law as a counterweight to corporate power (238). Countering corporate power, in turn, requires de-emphasizing the role of the individual user and the notions of consent, choice, freedom, control, and autonomy. Instead, the focus must be on enhancing corporate responsibility and accountability. Strengthened public regulation is key; it should be used to promote the values of democracy and equality and to rebalance power. The book affirms the value of banning technology that unduly infringes on individuals’ fundamental rights or disproportionately impacts marginalized populations, in line with literature arguing for bans on specific technologies, such as facial recognition. The voices of members of marginalized communities, who are disproportionately affected by corporate practices, should inform regulation. Internally, privacy professionals must be better integrated into design teams. Independent privacy-related discursive, legal, professional, and educational institutions are needed to act as “counterweights to corporate power” (235). In universities, for example, teaching social sciences, humanities, and ethics to future engineers can equip them with the critical thinking skills needed to change the status quo.

The book productively builds on the accounts of informational capitalism presented by Julie Cohen in Between Truth and Power and Shoshana Zuboff in The Age of Surveillance Capitalism. But the book’s conclusions may go further than the book itself claims. Because it does not trace the boundaries of the information industry, readers are left to wonder about the exact scope of its normative lessons. While Waldman’s empirical work is rooted squarely in the tech industry, informational capitalism is broader: all of capitalism today is increasingly informational capitalism. Waldman’s study of the tech industry may thus be an informative case study of a much larger problem, one that covers the whole economy.

 

 

Posted on 3 February 2022


IGNACIO COFONE is an Assistant Professor and Canada Research Chair in A.I. Law & Data Governance at McGill University Faculty of Law. ADELISE LALANDE obtained her J.D./B.C.L. from McGill University Faculty of Law.