Hate and Social Media
On February 13th, Professor David Myers, Professor Saadia Gabriel, and CRESST researcher Dr. Christine Ong convened to discuss “Hate and the Internet/Social Media” for the third installment of the online series “The State of Hate,” organized by the Friends of the Semel Institute for Neuroscience and Human Behavior at UCLA along with the Initiative to Study Hate.
As Professor Myers, director of the Initiative to Study Hate, noted in his introduction to the conversation, social media use, for better or worse, is all-pervasive in society. This trend may well be irreversible. Although in the early years of the internet many championed its democratizing potential, we have also come to understand the harm the internet and social media can pose to democracy and to the fabric of society as a conduit for hateful rhetoric and imagery as well as for mis- and disinformation. As Christine Ong, senior researcher at CRESST and ISH fellow with the Social Media and the Spread of Hate (SMASH) team, noted, children and teenagers in particular are consuming an “unhealthy diet” of hate-inspired narratives and media targeting a variety of groups on a near-constant basis.
Delving deeper into the underlying issues, all three speakers emphasized the lack of oversight and regulation of speech in digital space. Content moderation is often left to tech companies themselves, which have few incentives to monitor hate speech rigorously and in fact may maintain a certain level of disturbing or graphic material on their sites as a means of driving engagement. Fortunately, researchers such as Saadia Gabriel, Assistant Professor of Computer Science at UCLA, ISH fellow, and affiliate of the Bunche Center for African American Studies, are addressing this problem by developing increasingly sophisticated tools for the identification and prevention of online hate speech.
As Professor Gabriel’s research demonstrates, current protocols for human-powered content moderation risk reproducing the very inequalities they were intended to combat, tending to flag the colloquial speech of marginalized communities as hateful. Further, her work shows that machine-learning models consistently struggle with context-specific instances of hateful rhetoric. For example, Gabriel cited the term MGTOW, an acronym from the “men’s rights” movement that stands for “Men Going Their Own Way.” The term’s long-standing association with extremist misogyny may only come across in the broader context of the comment or post at hand. As a result, a model trained to recognize offensive language may fail to flag the term, or may erroneously flag a benign use of it. Moreover, Gabriel noted, AI moderators are not yet capable of moving beyond text. “It’s not all about speech, it’s not all about text,” she explained. “[M]ore and more it’s becoming an issue with these hateful ideologies being spread through visual mediums—through videos, through images—and that’s going to be an increasing concern.” Gabriel also discussed the potential of user-driven “community notes” as a model of moderation, emphasizing the benefits of countering hate speech with factual information in real time.
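Gabriel’s point about context is easy to see in a toy example. The sketch below is a hypothetical keyword-based flagger, not anything drawn from her research, and it exhibits both failure modes she describes: it flags a benign, reporting-style mention of MGTOW while missing a paraphrased, coded expression of the same ideology.

```python
# A minimal, hypothetical sketch of a lexicon-based flagger, illustrating
# why context-blind moderation fails in both directions.

FLAGGED_TERMS = {"mgtow"}  # toy lexicon; real systems learn from data

def naive_flag(post: str) -> bool:
    """Flag a post if any token matches the static lexicon, ignoring context."""
    tokens = {t.strip(".,!?\"'").lower() for t in post.split()}
    return bool(tokens & FLAGGED_TERMS)

# False positive: a benign, scholarly mention is still flagged.
print(naive_flag("Our study traces how the MGTOW acronym spreads online."))  # True

# False negative: coded misogyny with no lexicon match slips through.
print(naive_flag("Men should go their own way; women will regret this."))  # False
```

Context-aware models of the kind Gabriel studies aim to close this gap by scoring the whole post rather than matching individual tokens.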
Sharing insights from her SMASH research and her work in schools, Dr. Ong pointed to the need for more longitudinal studies of social media’s effects on the development and wellbeing of children and teenagers, particularly how intensive use can impair their ability to learn and to develop empathy. Ong also weighed in on why social media is so addictive and how it acts as a “force multiplier” for hateful messages: “Social media is a very dynamic experience, and it’s very personalized… a lot of information and a lot of stimulation. These apps are designed to keep people’s attention, and algorithms are also designed to highlight or amplify those aggressive posts.” She further highlighted the benefits of increased attention to digital literacy in both K-12 and college-level curricula. While it is currently impossible to staunch the flow of hateful material and misinformation on social media, she argued, we can still “inoculate” young people through education.
As the conversation came to a close, Professors Myers and Gabriel and Dr. Ong turned to the weighty topic of accountability. In the current political climate, it seems unlikely that technology companies will face any serious consequences for failing to moderate their platforms effectively. Given these realities, the speakers concluded, it is all the more important that we approach online hate communally and strive to be models of thoughtful, cautious, and responsible internet use.
We hope you will join us for more compelling conversations featuring researchers from the Initiative to Study Hate at an upcoming webinar on April 30th, hosted by Women & Philanthropy at UCLA. This all-women panel will discuss their research findings and how those results can inform real-world mitigation strategies.