DANIEL SUSSER


SELECTED PAPERS

Below are links to some of my academic papers. Note that in computer and information science, papers published in major conference proceedings undergo double-anonymous peer review and are recognized as research contributions equivalent to journal articles.

View abstracts by clicking "Abstract" under the relevant article. You can also find my research on SSRN, PhilPapers, and other major indexes.


Critical Provocations for Synthetic Data
w/Jeremy Seeman
Surveillance & Society 22(4), 2024

Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge—provocations highlighting ethical, political, and governance issues the use of synthetic data can create. We question the idea that synthetic data, by its nature, is exempt from privacy and related ethical concerns. We caution that framing synthetic data in binary opposition to “real” measurement data could subtly shift the normative standards to which data collectors and processors are held. And we argue that by promising to divorce data from its constituents—the people it represents and impacts—synthetic data could create new obstacles to democratic data governance.

Synthetic Health Data: Real Ethical Promise and Peril
w/Daniel S. Schiff, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, Megan Doerr, Jordan Harrod, Kristin Kostick-Quenet, Jasmine McNealy, Michelle N. Meyer, W. Nicholson Price II, and Jennifer K. Wagner
Hastings Center Report 54(5), 2024

Researchers and practitioners are increasingly using machine-generated synthetic data as a tool for advancing health science and practice, by expanding access to health data while—potentially—mitigating privacy and related ethical concerns around data sharing. While using synthetic data in this way holds promise, we argue that it also raises significant ethical, legal, and policy concerns, including persistent privacy and security problems, accuracy and reliability issues, worries about fairness and bias, and new regulatory challenges. The virtue of synthetic data is often understood to be its detachment from the data subjects whose measurement data is used to generate it. However, we argue that addressing the ethical issues synthetic data raises might require bringing data subjects back into the picture, finding ways that researchers and data subjects can be more meaningfully engaged in the construction and evaluation of datasets and in the creation of institutional safeguards that promote responsible use.

Between Privacy and Utility: On Differential Privacy in Theory and Practice
w/Jeremy Seeman
ACM Journal on Responsible Computing 1(1), 2024

Differential privacy (DP) aims to endow data processing systems with inherent privacy guarantees, offering strong protections for personal data. But DP’s approach to privacy carries with it certain assumptions about how mathematical abstractions will be translated into real-world systems, which—if left unexamined, and unrealized in practice—could function to shield data collectors from liability and criticism, rather than substantively protect data subjects from privacy harms. This paper investigates these assumptions and discusses their implications for using DP to govern data-driven systems. In Parts 1 and 2, we introduce DP as, on one hand, a mathematical framework and, on the other hand, a kind of real-world sociotechnical system, using a hypothetical case study to illustrate how the two can diverge. In Parts 3 and 4, we discuss the way DP frames privacy loss, data processing interventions, and data subject participation, arguing that it could exacerbate existing problems in privacy regulation. In Part 5, we conclude with a discussion of DP’s potential interactions with the endogeneity of privacy law, and we propose principles for best governing DP systems. By making these assumptions and their consequences explicit, we hope to help DP succeed at realizing its promise of better substantive privacy protections.
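
For readers unfamiliar with the formalism, the standard definition of ε-differential privacy referenced above is worth stating (this is the textbook definition, not an excerpt from the paper): a randomized mechanism M satisfies ε-differential privacy if, for any two datasets D and D′ differing in a single individual's record, and for any set of possible outputs S,

\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
\]

Smaller values of ε correspond to stronger formal privacy guarantees, typically at some cost to the utility of the released data.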

Brain Data in Context: Are New Rights the Way to Mental and Brain Privacy?
w/Laura Cabrera
AJOB Neuroscience 15(2), 2024

The potential to collect brain data more directly, with higher resolution, and in greater amounts has heightened worries about mental and brain privacy. In order to manage the risks to individuals posed by these privacy challenges, some have suggested codifying new privacy rights, including a right to “mental privacy.” In this paper, we consider these arguments and conclude that while neurotechnologies do raise significant privacy concerns, such concerns are—at least for now—no different from those raised by other well-understood data collection technologies, such as gene sequencing tools and online surveillance. To better understand the privacy stakes of brain data, we suggest the use of a conceptual framework from information ethics, Helen Nissenbaum’s “contextual integrity” theory. To illustrate the importance of context, we examine neurotechnologies and the information flows they produce in three familiar contexts—healthcare and medical research, criminal justice, and consumer marketing. We argue that emphasizing what is distinct about brain privacy issues, rather than what they share with other data privacy concerns, risks weakening broader efforts to enact more robust privacy law and policy.

From Procedural Rights to Political Economy: New Horizons for Regulating Online Privacy
In Sabine Trepte and Philipp Masur (eds.), The Routledge Handbook of Privacy and Social Media (New York: Routledge), 2023

The 2010s were a golden age of information privacy research, but its policy accomplishments tell a mixed story. Despite significant progress on the development of privacy theory and compelling demonstrations of the need for privacy in practice, real achievements in privacy law and policy have been, at best, uneven. In this chapter, I outline three broad shifts in the way scholars (and, to some degree, advocates and policy makers) are approaching privacy and social media. First, a change in emphasis from individual to structural approaches. Second, increasing attention to the political economy of privacy—especially the business models of information companies, such as social media platforms. And third, a deeper understanding of privacy’s role in a healthy digital public sphere.

Vaccine Passports and Political Legitimacy: A Public Reason Framework for Policymakers
w/Anne Barnhill and Matteo Bonotti
Ethical Theory and Moral Practice 26, 2023

As the COVID-19 pandemic continues to evolve, taking its toll on people’s lives around the world, vaccine passports remain a contentious topic of debate in most liberal democracies. While a small literature on vaccine passports has sprung up over the past few years that considers their ethical pros and cons, in this paper we focus on the question of when vaccine passports are politically legitimate. Specifically, we put forward a ‘public reason ethics framework’ for resolving ethical disputes and use the case of vaccine passports to demonstrate how it works. The framework walks users through a structured analysis of a vaccine passport proposal to determine whether the proposal can be publicly justified and is therefore legitimate. Use of this framework may also help policymakers to design more effective vaccine passports, by incorporating structured input from the public, and thereby better taking the public’s interests and values into account. In short, a public reason ethics framework is meant to encourage better, more legitimate decision-making, resulting in policies that are ethically justifiable, legitimate, and effective.

Data and the Good?
Surveillance & Society 20(3), 2022

Surveillance studies scholars and privacy scholars have each developed sophisticated, important critiques of the existing data-driven order. But too few scholars in either tradition have put forward alternative substantive conceptions of a good digital society. This, I argue, is a crucial omission. Unless we construct new “sociotechnical imaginaries,” new understandings of the goals and aspirations digital technologies should aim to achieve, the most surveillance studies and privacy scholars can hope to accomplish is a less unjust version of the technology industry’s own vision for the future.

Privacy, Autonomy, and the Dissolution of Markets
w/Kiel Brennan-Marquez
Knight First Amendment Institute, 2022

Throughout the 20th century, market capitalism was defended on parallel grounds. First, it promotes freedom by enabling individuals to exploit their own property and labor-power; second, it facilitates an efficient allocation and use of resources. Recently, however, both defenses have begun to unravel—as capitalism has moved into its “platform” phase. Today, the pursuit of allocative efficiency, bolstered by pervasive data surveillance, often undermines individual freedom rather than promoting it. And more fundamentally, the very idea that markets are necessary to achieve allocative efficiency has come under strain. Even supposing, for argument’s sake, that the claim was true in the early 20th century when von Mises and Hayek pioneered it, advances in computing have rekindled the old “socialist calculation” debate. And this time around, markets—as information technology—are unlikely to have the upper hand.

All of this, we argue, raises an important set of governance questions regarding the political economy of the future. We focus on two: How much should our economic system prioritize freedom, and to what extent should it rely on markets? The arc of platform capitalism bends, increasingly, toward a system that neither prioritizes freedom nor relies on markets. And the dominant critical response, exemplified by Shoshana Zuboff’s work, has been to call for a restoration of market capitalism. Longer term, however, we believe it would be more productive to think about how “postmarket” economic arrangements might promote freedom—or better yet, autonomy—even more effectively than markets, and to determine the practical steps necessary to realize that possibility.

Decision Time: Normative Dimensions of Algorithmic Speed
Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022

Existing discussions about automated decision-making focus primarily on its inputs and outputs, raising questions about data collection and privacy on one hand and accuracy and fairness on the other. Less attention has been devoted to critically examining the temporality of decision-making processes—the speed at which automated decisions are reached. In this paper, I identify four dimensions of algorithmic speed that merit closer analysis. Duration (how much time it takes to reach a judgment), timing (when automated systems intervene in the activity being evaluated), frequency (how often evaluations are performed), and lived time (the human experience of algorithmic speed) are interrelated, but distinct, features of automated decision-making. Choices about the temporal structure of automated decision-making systems have normative implications, which I describe in terms of "disruption," "displacement," "re-calibration," and "temporal fairness," with values such as accuracy, fairness, accountability, and legitimacy hanging in the balance. As computational tools are increasingly tasked with making judgments about human activities and practices, the designers of decision-making systems will have to reckon, I argue, with when—and how fast—judgments ought to be rendered. Though computers are capable of reaching decisions at incredible speeds, failing to account for the temporality of automated decision-making risks misapprehending the costs and benefits automation promises.

Predictive Policing and the Ethics of Preemption
In Ben Jones and Eduardo Mendieta (eds.), The Ethics of Policing: An Interdisciplinary Perspective (New York: NYU Press), 2021

The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented. Which is to say, the set of ethical issues I describe arises not only from the fact that these tools are predictive, but also from the fact that they are situated in the hands of police. To mitigate these problems, I suggest we place predictive policing tools in the hands of those whose ultimate responsibility is to individuals (such as counselors and social workers), rather than in the hands of those, like the police, whose ultimate duty is to protect the public at large.

Measuring Automated Influence: Between Empirical Evidence and Ethical Values
w/Vincent Grimaldi
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2021

Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from both empirical researchers, on one hand, and critical scholars and policymakers on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness of these technologies (or lack thereof) neglects other morally relevant effects—which can be felt regardless of whether or not the technologies "work" in the sense of fulfilling the promises of their designers. Drawing from the ethics and policy literature, we enumerate a range of questions begging for empirical analysis—the outline of a research agenda bridging these fields—and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.

Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making
w/Kiel Brennan-Marquez and Karen Levy
Berkeley Technology Law Journal 34(3), 2019

The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?

Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.

Online Manipulation: Hidden Influences in a Digital World
w/Beate Roessler and Helen Nissenbaum
Georgetown Law Technology Review 4(1), 2019

Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.

Technology, Autonomy, and Manipulation
w/Beate Roessler and Helen Nissenbaum
Internet Policy Review 8(2), 2019

Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of "online manipulation". While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers—first, by defining "online manipulation", thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.

Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t
Journal of Information Policy 9, 2019

The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions about privacy choices—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into question the “consent” part of notice-and-consent, and that they say little about the value of notice. We ought to decouple notice from consent, and imagine notice serving other normative ends besides readying people to make informed consent decisions.

Transparent Media and the Development of Digital Habits
In Yoni Van Den Eede, Stacey O’Neal Irwin, and Galit Wellner (eds.), Postphenomenology and Media: Essays on Human-Media-World Relations (New York: Lexington Books), 2017

Our lives are guided by habits. Most of the activities we engage in throughout the day are initiated and carried out not by rational thought and deliberation, but through an ingrained set of dispositions or patterns of action—what Aristotle calls a hexis. We develop these dispositions over time, by acting and gauging how the world responds. I tilt the steering wheel too far and the car’s lurch teaches me how much force is needed to steady it. I come too close to a hot stove and the burn I get inclines me not to get too close again. This feedback and the habits it produces are bodily. They are possible because the medium through which these actions take place is a physical, sensible one. The world around us is, in the language of postphenomenology, an opaque one. We notice its texture and contours as we move through it, and crucially, we bump up against it from time to time. The digital world, by contrast, is largely transparent. Digital media are designed to recede from view. As a result, we experience little friction as we carry out activities online; the consequences of our actions are often not apparent to us. This distinction between the opacity of the natural world and the transparency of the digital one raises important questions. In this chapter, I ask: how does the transparency of digital media affect our ability to develop healthy habits online? If the digital world is constructed precisely not to push back against us, how are we supposed to gauge whether our actions are good or bad, for us and for others? The answer to this question has important ramifications for a number of ethical, political, and policy debates around issues in online life. For in order to advance cherished norms like privacy, civility, and fairness online, we need more than good laws and good policies—we need good habits, which dispose us to act in ways conducive to our and others’ flourishing.

Information Privacy and Social Self-Authorship
Techné: Research in Philosophy and Technology 20(3), 2016

The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are that obviate the need for our involvement in the process.