PAPERS

"Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making"
w/ Kiel Brennan-Marquez and Karen Levy
Berkeley Technology Law Journal, forthcoming

The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independently of whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?

Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.

"Online Manipulation: Hidden Influences in a Digital World"
w/ Beate Roessler and Helen Nissenbaum
Georgetown Law Technology Review, forthcoming

Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—have not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.

"Technology, autonomy, and manipulation"
w/ Beate Roessler and Helen Nissenbaum
Internet Policy Review 8(2), 2019

Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of "online manipulation". While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers—first, by defining "online manipulation", thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.

"Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t"
Journal of Information Policy 9, 2019

The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into question the “consent” part of notice-and-consent, and that they say little about the value of notice. We ought to decouple notice from consent, and imagine notice serving other normative ends besides readying people to make informed consent decisions.

"Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures"
Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2019

For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.

"Transparent Media and the Development of Digital Habits"
Yoni Van Den Eede, Stacey O’Neal Irwin, and Galit Wellner (eds.), Postphenomenology and Media: Essays on Human-Media-World Relations (New York: Lexington Books), 2017

Our lives are guided by habits. Most of the activities we engage in throughout the day are initiated and carried out not by rational thought and deliberation, but through an ingrained set of dispositions or patterns of action—what Aristotle calls a hexis. We develop these dispositions over time, by acting and gauging how the world responds. I tilt the steering wheel too far and the car’s lurch teaches me how much force is needed to steady it. I come too close to a hot stove and the burn I get inclines me not to get too close again. This feedback and the habits it produces are bodily. They are possible because the medium through which these actions take place is a physical, sensible one. The world around us is, in the language of postphenomenology, an opaque one. We notice its texture and contours as we move through it, and crucially, we bump up against it from time to time. The digital world, by contrast, is largely transparent. Digital media are designed to recede from view. As a result, we experience little friction as we carry out activities online; the consequences of our actions are often not apparent to us. This distinction between the opacity of the natural world and the transparency of the digital one raises important questions. In this chapter, I ask: how does the transparency of digital media affect our ability to develop healthy habits online? If the digital world is constructed precisely not to push back against us, how are we supposed to gauge whether our actions are good or bad, for us and for others? The answer to this question has important ramifications for a number of ethical, political, and policy debates around issues in online life. For in order to advance cherished norms like privacy, civility, and fairness online, we need more than good laws and good policies—we need good habits, which dispose us to act in ways conducive to our and others’ flourishing.

"Information Privacy and Social Self-Authorship"
Techné: Research in Philosophy and Technology 20(3), 2016

The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process.

"Ihde's Missing Sciences: Postphenomenology, Big Data, and the Human Sciences"
Techné: Research in Philosophy and Technology 20(2), 2016

In Husserl's Missing Technologies Don Ihde urges us to think deeply and critically about the ways in which the technologies utilized in contemporary science structure the way we perceive and understand the natural world. In this paper, I argue that we ought to extend Ihde's analysis to consider how such technologies are changing the way we perceive and understand ourselves too. For it is not only the natural or "hard" sciences which are turning to advanced technologies for help in carrying out their work, but also the social and "human" sciences. One set of tools in particular is rapidly being adopted—the family of information technologies that fall under the umbrella of “big data.” As in the natural sciences, big data is giving researchers in the human sciences access to phenomena which they would otherwise be unable to experience and investigate. And like the former, the latter thereby shape the ways those scientists perceive and understand who and what we are. Looking at two case studies of big data-driven research in the human sciences, I begin in this paper to suggest how we might understand these phenomenological and hermeneutic changes.

"Obstacles to Transparency in Privacy Engineering"
w/ Kiel Brennan-Marquez
Proceedings of the IEEE International Workshop on Privacy Engineering, 2016

Transparency is widely recognized as indispensable to privacy protection. However, producing transparency for end-users is often antithetical to a variety of other technical, business, and regulatory interests. These conflicts create obstacles that stand in the way of developing tools that provide meaningful privacy protections, and of having such tools adopted in widespread fashion. In this paper, we develop a "map" of these common obstacles to transparency, in order to assist privacy engineers in successfully navigating them. Furthermore, we argue that some of these obstacles can be successfully avoided by distinguishing between two different conceptions of transparency and considering which is at stake in a given case—transparency as providing users with insight into what information about them is collected and how it is processed (what we call transparency as a "view under-the-hood") and transparency as providing users with facility in navigating the risks and benefits of using particular technologies.

"Artificial Intelligence and the Body: Dreyfus, Bickhard, and the Future of AI"
Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence (SAPERE; Berlin: Springer), 2013

For those who find Dreyfus's critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus's work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus's work into conversation with the work of Mark Bickhard offers a way of answering this question, and I try to suggest what doing so means for AI research.