Community Grounded Hate Speech Detection
Abstract
Social media platforms rife with online hate speech expose users to toxic environments and pose safety risks when web data is scraped for model training. Many platforms employ automatic hate speech detection. However, current hate speech detection models rely on unrealistic labeled datasets and are not robust in real-world deployments. This can lead to harms such as the erasure of minority online communities when detectors erroneously filter posts by non-White speakers. We address the need for more diverse training data through community-driven data generation. We will ask members of affected minority communities, in particular Black communities, to annotate and write new training examples that are representative of their real-world data. In this way, we can develop a dataset that reflects the lived experiences of actual users. Our goal is to use this data to substantially improve the equity of hate speech detection and to expose dangerous failure points in current models.
Field
Computer Science
Team
Dr. Saadia Gabriel, Dr. Thomas Hartvigsen (UVA), Dr. Hamid Palangi (MSR), Ashima Suvarna (UCLA), Christina Chance (UCLA), Dr. Sophie Hao (NYU)
Dr. Saadia Gabriel
Dr. Saadia Gabriel is an Assistant Professor in UCLA Computer Science and is affiliated with the Bunche Center for African American Studies. Her research revolves around natural language processing and machine learning, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e., how people typically behave in social scenarios), as well as mitigating the spread of false or harmful text (e.g., COVID-19 misinformation). Her work has explored the robustness and fairness of hate speech detection and has been covered by a wide range of media outlets, including Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination, a 2019 IROS RoboCup best paper nomination, a best paper award at the 2020 WeCNLP summit, and a 2023 MIT Generative AI Impact award. She was named to Forbes’ 2024 30 Under 30 list. She was previously an NYU Data Science Faculty Fellow and an MIT CSAIL Postdoctoral Fellow. She received her PhD from the University of Washington.
Ashima Suvarna
Ashima Suvarna is a PhD student in Computer Science at UCLA. Her research focuses on building equitable and socially responsible AI. While her research interests are broad, she is currently exploring the robust detection of harmful behaviors exhibited by LLMs and multi-agent systems. Previously, Suvarna was a DeepMind Scholar at UCLA while pursuing her MS in Computer Science. She was also awarded the Mitacs Globalink Research Scholarship in 2019. Before UCLA, she completed her Bachelor of Technology at Delhi Technological University in India.
Christina Chance
Christina Chance is a third-year doctoral student in the Computer Science department at UCLA studying fairness in natural language processing. Her research primarily focuses on the application of NLP in the social sciences for bias quantification and mitigation, as well as the development of methods that center community impact and marginalized voices in content moderation and hate speech classification. Prior to attending UCLA, she obtained her Bachelor of Science in Mathematics and Computer Science from Emory University in Atlanta, Georgia.