Abstract
Technology has transformed how people access information, and how they experience hate. Online, group-based hatred is often embedded in emotionally resonant stories that offer identity, belonging, and moral certainty. These narratives do not just misinform; they divide, polarize, and escalate. Conventional interventions such as fact-checking or takedowns often miss this relational dimension and can even deepen mistrust or entrench opposition. This project introduces Usap Tayo, an AI chatbot designed to reduce the appeal of polarizing, hate-based content through identity-aware, compassionate dialogue. Rather than correcting users, the chatbot uses large language models (LLMs) to prompt reflection, encourage perspective-taking, and support relational processing. Its core innovation lies in adapting tone and framing based on social network cues, without collecting personal data. The chatbot engages not just what people believe, but why it resonates. Focusing on the Philippines, a high-stakes setting shaped by historical trauma, disinformation, and intergroup tension, we will co-design the intervention with civic and government partners and evaluate emotional, cognitive, and behavioral outcomes. Funding will also support open-access toolkits and prototypes. Rooted in the principles of the Initiative for the Study of Hate and the Bedari Kindness Institute, this project offers a real-world intervention that rethinks how people experience misinformation and strengthens the social fabric that keeps hate from taking hold.

