SELECTED PAPERS
Below are links to some of my academic papers. Note that in computer and information science, papers published in major conference proceedings undergo double-anonymous peer review and are recognized as research contributions equivalent to journal articles.
You can also find my research on SSRN, PhilPapers, and other major indexes.
⇢ PRIVACY
Core to ethics, politics, and policy in computing are questions about privacy and the flow of information. I’m interested in philosophical and policy questions about privacy, as well as questions about the political economy of information. I have also developed a number of collaborative projects on engineering approaches to privacy, exploring how they both track and diverge from social, ethical, and legal approaches.
Critical Provocations for Synthetic Data
w/Jeremy Seeman
Surveillance & Society 22(4), 2024
Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting the ethical, political, and governance issues the use of synthetic data can create. We question the idea that synthetic data, by its nature, is exempt from privacy and related ethical concerns. We caution that framing synthetic data in binary opposition to “real” measurement data could subtly shift the normative standards to which data collectors and processors are held. And we argue that by promising to divorce data from its constituents—the people it represents and impacts—synthetic data could create new obstacles to democratic data governance.
Synthetic Health Data: Real Ethical Promise and Peril
w/Daniel S. Schiff, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, Megan Doerr, Jordan Harrod, Kristin Kostick-Quenet, Jasmine McNealy, Michelle N. Meyer, W. Nicholson Price II, and Jennifer K. Wagner
Hastings Center Report 54(5), 2024
Researchers and practitioners are increasingly using machine-generated synthetic data as a tool for advancing health science and practice, by expanding access to health data while—potentially—mitigating privacy and related ethical concerns around data sharing. While using synthetic data in this way holds promise, we argue that it also raises significant ethical, legal, and policy concerns, including persistent privacy and security problems, accuracy and reliability issues, worries about fairness and bias, and new regulatory challenges. The virtue of synthetic data is often understood to be its detachment from the data subjects whose measurement data is used to generate it. However, we argue that addressing the ethical issues synthetic data raises might require bringing data subjects back into the picture, finding ways that researchers and data subjects can be more meaningfully engaged in the construction and evaluation of datasets and in the creation of institutional safeguards that promote responsible use.
Between Privacy and Utility: On Differential Privacy in Theory and Practice
w/Jeremy Seeman
ACM Journal on Responsible Computing 1(1), 2024
Differential privacy (DP) aims to endow data-processing systems with inherent privacy guarantees, offering strong protections for personal data. But DP’s approach to privacy carries with it certain assumptions about how mathematical abstractions will be translated into real-world systems, which—if left unexamined, and unrealized in practice—could function to shield data collectors from liability and criticism rather than substantively protect data subjects from privacy harms. This paper investigates these assumptions and discusses their implications for using DP to govern data-driven systems. In Parts 1 and 2, we introduce DP as, on one hand, a mathematical framework and, on the other, a kind of real-world sociotechnical system, using a hypothetical case study to illustrate how the two can diverge. In Parts 3 and 4, we discuss the way DP frames privacy loss, data processing interventions, and data subject participation, arguing that it could exacerbate existing problems in privacy regulation. In Part 5, we conclude with a discussion of DP’s potential interactions with the endogeneity of privacy law, and we propose principles for governing DP systems. By making these assumptions and their consequences explicit, we hope to help DP succeed at realizing its promise of better substantive privacy protections.
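For orientation, the mathematical framework at issue is the standard, textbook guarantee of ε-differential privacy (this is the generic definition, not notation drawn from the paper itself): a randomized mechanism $M$ satisfies $\varepsilon$-differential privacy if, for every pair of datasets $D$ and $D'$ differing in one individual’s record and every set of possible outputs $S$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
\]
Intuitively, no single person’s data can shift the distribution of the mechanism’s outputs by more than a factor of $e^{\varepsilon}$; the smaller $\varepsilon$ is, the stronger the guarantee.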
Brain Data in Context: Are New Rights the Way to Mental and Brain Privacy?
w/Laura Cabrera
AJOB Neuroscience 15(2), 2024
The potential to collect brain data more directly, with higher resolution, and in greater amounts has heightened worries about mental and brain privacy. In order to manage the risks to individuals posed by these privacy challenges, some have suggested codifying new privacy rights, including a right to “mental privacy.” In this paper, we consider these arguments and conclude that while neurotechnologies do raise significant privacy concerns, such concerns are—at least for now—no different from those raised by other well-understood data collection technologies, such as gene sequencing tools and online surveillance. To better understand the privacy stakes of brain data, we suggest the use of a conceptual framework from information ethics, Helen Nissenbaum’s “contextual integrity” theory. To illustrate the importance of context, we examine neurotechnologies and the information flows they produce in three familiar contexts—healthcare and medical research, criminal justice, and consumer marketing. We argue that emphasizing what is distinct about brain privacy issues, rather than what they share with other data privacy concerns, risks weakening broader efforts to enact more robust privacy law and policy.
From Procedural Rights to Political Economy: New Horizons for Regulating Online Privacy
In Sabine Trepte and Philipp Masur (eds.), The Routledge Handbook of Privacy and Social Media. Routledge, 2023
The 2010s were a golden age of information privacy research, but its policy accomplishments tell a mixed story. Despite significant progress on the development of privacy theory and compelling demonstrations of the need for privacy in practice, real achievements in privacy law and policy have been, at best, uneven. In this chapter, I outline three broad shifts in the way scholars (and, to some degree, advocates and policy makers) are approaching privacy and social media. First, a change in emphasis from individual to structural approaches. Second, increasing attention to the political economy of privacy—especially the business models of information companies, such as social media platforms. And third, a deeper understanding of privacy’s role in a healthy digital public sphere.
Data and the Good?
Surveillance & Society 20(3), 2022
Surveillance studies scholars and privacy scholars have each developed sophisticated, important critiques of the existing data-driven order. But too few scholars in either tradition have put forward alternative substantive conceptions of a good digital society. This, I argue, is a crucial omission. Unless we construct new “sociotechnical imaginaries,” new understandings of the goals and aspirations digital technologies should aim to achieve, the most surveillance studies and privacy scholars can hope to accomplish is a less unjust version of the technology industry’s own vision for the future.
Privacy, Autonomy, and the Dissolution of Markets
w/Kiel Brennan-Marquez
Knight First Amendment Institute, 2022
Throughout the 20th century, market capitalism was defended on parallel grounds. First, it promotes freedom by enabling individuals to exploit their own property and labor-power; second, it facilitates an efficient allocation and use of resources. Recently, however, both defenses have begun to unravel—as capitalism has moved into its “platform” phase. Today, the pursuit of allocative efficiency, bolstered by pervasive data surveillance, often undermines individual freedom rather than promoting it. And more fundamentally, the very idea that markets are necessary to achieve allocative efficiency has come under strain. Even supposing, for argument’s sake, that the claim was true in the early 20th century when von Mises and Hayek pioneered it, advances in computing have rekindled the old “socialist calculation” debate. And this time around, markets—as information technology—are unlikely to have the upper hand.
All of this, we argue, raises an important set of governance questions regarding the political economy of the future. We focus on two: How much should our economic system prioritize freedom, and to what extent should it rely on markets? The arc of platform capitalism bends, increasingly, toward a system that neither prioritizes freedom nor relies on markets. And the dominant critical response, exemplified by Shoshana Zuboff’s work, has been to call for a restoration of market capitalism. Longer term, however, we believe it would be more productive to think about how “postmarket” economic arrangements might promote freedom—or better yet, autonomy—even more effectively than markets, and to determine the practical steps necessary to realize that possibility.
Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t
Journal of Information Policy 9, 2019
The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions about privacy choices—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into question the “consent” part of notice-and-consent, and that they say little about the value of notice. We ought to decouple notice from consent, and imagine notice serving other normative ends besides readying people to make informed consent decisions.
Information Privacy and Social Self-Authorship
Techné: Research in Philosophy and Technology 20(3), 2016
The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process.
⇢ PLATFORMS, AUTOMATED DECISION-MAKING, AND AI
In addition to worries about privacy, technology ethics and policy scholarship increasingly focuses on artificial intelligence and automated decision-making, and on the handful of dominant platforms that control these systems. I've written about the ethics of predictive policing tools, the temporality of automated decision-making, exploitation in the platform economy, and related aspects of governing AI.
Exploitation in the Platform Age
In Beate Roessler and Valerie Steeves (eds.), Being Human in the Digital World. Cambridge University Press, forthcoming
In this chapter I consider a common refrain among critics of digital platforms: big tech "exploits" us. It gives voice to a shared sense that technology firms are somehow mistreating people—taking advantage of us, extracting from us—in a way that other data-driven harms, such as surveillance and algorithmic bias, fail to capture. Exploring several targets of this charge—gig work, algorithmic pricing, and surveillance advertising—I ask: What does exploitation entail, exactly, and how do platforms perpetrate it? Is exploitation in the platform economy a new kind of exploitation, or are these old problems dressed up as new ones? What would a theory of digital exploitation add to our understanding of the platform age? First, I define exploitation and argue that critics are justified in describing many platform practices as wrongfully exploitative. Next, I focus on platforms themselves—both as businesses and technologies—in order to understand what is and isn’t new about the kinds of exploitation we are witnessing. In some cases, digital platforms perpetuate familiar forms of exploitation by extending the ability of exploiters to reach and control exploitees. In other cases, they enable new exploitative arrangements by creating or exposing vulnerabilities that powerful actors couldn't previously leverage. On the whole, I argue, the language of exploitation helps express forms of injustice overlooked or only partially captured by dominant concerns about, e.g., surveillance, discrimination, and related platform abuses, and it provides valuable conceptual and normative resources for challenging efforts by platforms to obscure or legitimate them.
Decision Time: Normative Dimensions of Algorithmic Speed
Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Existing discussions about automated decision-making focus primarily on its inputs and outputs, raising questions about data collection and privacy on one hand and accuracy and fairness on the other. Less attention has been devoted to critically examining the temporality of decision-making processes—the speed at which automated decisions are reached. In this paper, I identify four dimensions of algorithmic speed that merit closer analysis. Duration (how much time it takes to reach a judgment), timing (when automated systems intervene in the activity being evaluated), frequency (how often evaluations are performed), and lived time (the human experience of algorithmic speed) are interrelated, but distinct, features of automated decision-making. Choices about the temporal structure of automated decision-making systems have normative implications, which I describe in terms of "disruption," "displacement," "re-calibration," and "temporal fairness," with values such as accuracy, fairness, accountability, and legitimacy hanging in the balance. As computational tools are increasingly tasked with making judgments about human activities and practices, the designers of decision-making systems will have to reckon, I argue, with when—and how fast—judgments ought to be rendered. Though computers are capable of reaching decisions at incredible speeds, failing to account for the temporality of automated decision-making risks misapprehending the costs and benefits automation promises.
Predictive Policing and the Ethics of Preemption
In Ben Jones and Eduardo Mendieta (eds.), The Ethics of Policing: An Interdisciplinary Perspective. NYU Press, 2021
The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented. Which is to say, the set of ethical issues I describe arises not only from the fact that these tools are predictive, but also from the fact that they are situated in the hands of police. To mitigate these problems, I suggest we place predictive policing tools in the hands of those whose ultimate responsibility is to individuals (such as counselors and social workers), rather than in the hands of those, like the police, whose ultimate duty is to protect the public at large.
Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making
w/Kiel Brennan-Marquez and Karen Levy
Berkeley Technology Law Journal 34(3), 2019
The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?
Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.
⇢ AUTOMATED INFLUENCE AND MANIPULATION
Central to debates about digital surveillance and diminishing privacy is the concern that personal information can be used to influence—even manipulate—people. A significant strand of my research looks at the difficult conceptual, ethical, and policy challenges raised by the automation of influence through data-driven targeted advertising and content recommender systems, and emerging worries about the persuasive power of generative AI.
A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI
w/Seliem El-Sayed, Canfer Akbulut, Amanda McCroskery, Geoff Keeling, Zachary Kenton, Zaria Jalan, Nahema Marchal, Arianna Manzini, Toby Shevlane, Shannon Vallor, Matija Franklin, Sophie Bridgers, Harry Law, Matthew Rahtz, Murray Shanahan, Michael Henry Tessler, Arthur Douillard, Tom Everitt, Sasha Brown
Google DeepMind, 2024
Recent generative AI systems have demonstrated more advanced persuasive capabilities and are increasingly permeating areas of life where they can influence decision-making. Generative AI presents a new risk profile of persuasion due to the opportunity for reciprocal exchange and prolonged interactions. This has led to growing concerns about harms from AI persuasion and how they can be mitigated, highlighting the need for a systematic study of AI persuasion. Current definitions of AI persuasion are unclear, and the related harms are insufficiently studied. Existing harm mitigation approaches prioritise harms from the outcome of persuasion over harms from the process of persuasion. In this paper, we lay the groundwork for the systematic study of AI persuasion. We first put forward definitions of persuasive generative AI. We distinguish between rationally persuasive generative AI, which relies on providing relevant facts, sound reasoning, or other forms of trustworthy evidence, and manipulative generative AI, which relies on taking advantage of cognitive biases and heuristics or misrepresenting information. We also put forward a map of harms from AI persuasion, including definitions and examples of economic, physical, environmental, psychological, sociocultural, political, privacy, and autonomy harms. We then introduce a map of mechanisms that contribute to harmful persuasion. Lastly, we provide an overview of approaches that can be used to mitigate the process harms of persuasion, including prompt engineering for manipulation classification and red teaming. Future work will operationalise these mitigations and study the interaction between different types of mechanisms of persuasion.
Measuring Automated Influence: Between Empirical Evidence and Ethical Values
w/Vincent Grimaldi
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2021
Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from both empirical researchers, on one hand, and critical scholars and policymakers, on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness of these technologies (or lack thereof) neglects other morally relevant effects—which can be felt regardless of whether or not the technologies "work" in the sense of fulfilling the promises of their designers. Drawing from the ethics and policy literature, we enumerate a range of questions calling for empirical analysis—the outline of a research agenda bridging these fields—and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.
Ethical Considerations for Digitally Targeted Public Health Interventions
American Journal of Public Health, 2020
Public health scholars and public health officials increasingly worry about health-related misinformation online, and they are searching for ways to mitigate it. Some have suggested that the tools of digital influence are themselves a possible answer: we can use targeted, automated digital messaging to counter health-related misinformation and promote accurate information. In this commentary, I raise a number of ethical questions prompted by such proposals—and familiar from the ethics of influence and the ethics of AI—highlighting hidden costs of targeted digital messaging that ought to be weighed against the health benefits it promises.
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2019
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit us, individuals will have little reason to trust them.
Technology, Autonomy, and Manipulation
w/Beate Roessler and Helen Nissenbaum
Internet Policy Review 8(2), 2019
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of "online manipulation". While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers—first, by defining "online manipulation", thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
Online Manipulation: Hidden Influences in a Digital World
w/Beate Roessler and Helen Nissenbaum
Georgetown Law Technology Review 4(1), 2019
Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.
We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.