The Legal Regulation of A.I. Hate

Abstract

This project examines the legal regulation of race-based hate in the information age. To comply with current antidiscrimination law, which prohibits unequal racial treatment, almost all predictive algorithms are programmed not to “see” race, meaning race is not a variable included in the algorithm’s underlying statistical model. Yet many agree that formally race-blind algorithms can nevertheless encode racial information by relying on so-called proxies for race, such as ZIP code, to formulate their predictions. This project draws on legal analysis, computer science literature, and critical theory to demonstrate that the precise definition of the racial proxy is far from obvious and that the definitions proffered by courts and computer scientists vary widely. One strand of the project traces case law to show the contradictory and contested legal terrain in which the impermissible racial proxy is identified. In another strand, undertaken with my community partner, the project uses public records requests to understand how state agencies define the racial proxy. The project draws attention to the permanently contingent nature of the racial proxy variable and the various processes that ultimately decide its meaning.
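The proxy mechanism the abstract describes can be made concrete. The following is a minimal sketch with synthetic data, not the project's method or any agency's actual model: the variable names (zip_group, income), the 90% race-to-ZIP correlation, and the outcome formula are all illustrative assumptions. It shows how a model trained without any race variable can still produce racially patterned predictions when a correlated feature stands in for race.

```python
# Minimal sketch (synthetic, illustrative only): a "race-blind" linear model
# never receives the race variable, yet its predictions differ by race
# because a correlated feature (here, a hypothetical ZIP-code group) acts
# as a proxy. All parameters below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: race (0/1) strongly predicts neighborhood,
# loosely mirroring residential segregation. The 0.9 correlation is assumed.
race = rng.integers(0, 2, n)
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)
income = rng.normal(50 + 10 * (1 - race), 5, n)  # race-correlated feature

# Synthetic outcome the model predicts (e.g., a risk score); in this toy
# world it depends partly on race-linked neighborhood disadvantage.
y = 0.3 * zip_group - 0.01 * income + rng.normal(0, 0.1, n)

# Race-blind training: the design matrix contains only an intercept,
# zip_group, and income. `race` is never an input.
X = np.column_stack([np.ones(n), zip_group, income])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
preds = X @ beta

print(f"mean predicted score, race=0: {preds[race == 0].mean():.3f}")
print(f"mean predicted score, race=1: {preds[race == 1].mean():.3f}")
# The gap between the two groups persists even though race was excluded:
# zip_group carried the racial information into the model.
```

Running the sketch prints a clear gap in mean predicted scores between the two groups, which is the formal pattern the project interrogates: whether, and by what definition, a variable like zip_group counts as an impermissible racial proxy.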

Field

Law

Team

Dr. Fanna Gamal