Abstract
Call center workers in the Global South, particularly in countries like India and the Philippines, perform intensive affective labor—managing technical issues while navigating Western cultural expectations and customer hostility. To meet these expectations, workers often undergo accent neutralization training and adopt American names and linguistic patterns. These practices aim to mitigate customer dissatisfaction and retaliation rooted in accent bias, that is, negative perceptions of non-Western English. Sanas.ai, a real-time accent conversion software, presents itself as a solution to this problem by altering workers’ voices to sound white and American. Marketed as a tool for “fostering more compassionate communication,” Sanas’s technology raises pressing concerns about replacing empathy with automated voice assimilation as the default response to bias. This project conducts a sociotechnical analysis of Sanas.ai, examining its public messaging, technical infrastructure, and broader social implications through discourse analysis, technical inquiry, and interviews with developers and workers. By examining accent modification as a response to group-based hate beyond its purported profitability, this research seeks to inform ethical technology design and contribute to ongoing conversations about anti-discrimination and justice in the global information economy.


