When a U.S. senator asked Facebook CEO Mark Zuckerberg, “Can you define hate speech?” it was arguably the most important question that social networks face: how to identify extremism inside their communities.
Hate crimes in the 21st century follow a familiar pattern in which an online tirade escalates into violent action. Before opening fire in the Tree of Life synagogue in Pittsburgh, the accused gunman had vented on the far-right social network Gab about Honduran migrants traveling toward the U.S. border, and the alleged Jewish conspiracy behind it all. Then he declared, “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” The pattern of extremists unloading their intolerance online has been a disturbing feature of some recent hate crimes. But most online hate isn’t as flagrant, or as easy to spot.
As I found in my 2017 study on extremism in social networks and political blogs, rather than overt bigotry, most online hate looks a lot like fear. It’s not expressed in racial slurs or calls for confrontation, but rather in unfounded allegations of Hispanic invaders pouring into the country, black-on-white crime or Sharia law infiltrating American cities. Hysterical narratives such as these have become the preferred vehicle for today’s extremists – and may be more effective at provoking real-world violence than stereotypical hate speech.
THE EASE OF SPREADING FEAR
On Twitter, a meme circulating recently depicts the “Islamic Terrorist Network” spread across a map of the United States, while a Facebook account called “America Under Attack” shares an article with its 17,000 followers about the “Angry Young Men and Gangbangers” marching toward the border. And on Gab, countless profiles talk of Jewish plans to sabotage American culture, sovereignty and the president.
While not overtly antagonistic, these posts play well to an audience that has found in social media a place to express its intolerance openly, as long as it colors within the lines. These users can avoid the exposure that traditional hate speech attracts. Whereas the white nationalist gathering in Charlottesville was high-profile and revealing, social networks can be anonymous and discreet, and therefore liberating for the undeclared racist. That presents a stark challenge to platforms like Facebook, Twitter and YouTube.
Of course this is not just a challenge for social media companies. The public at large is facing the complex question of how to respond to inflammatory and prejudiced narratives that are stoking racial fears and subsequent hostility. However, social networks have the unique capacity to turn down the volume on intolerance if they determine that a user has in fact breached their terms of service. For instance, in April 2018, Facebook removed two pages associated with white nationalist Richard Spencer. A few months later, Twitter suspended several accounts associated with the far-right group The Proud Boys for violating its policy “prohibiting violent extremist groups.”
Still, some critics argue that the networks are not moving fast enough. There is mounting pressure for these websites to police the extremism that has flourished in their spaces, or else become policed themselves. A recent HuffPost/YouGov survey found that two-thirds of Americans wanted social networks to prevent users from posting “hate speech or racist content.”
SOURCE: Urban Faith, Adam G. Klein, Pace University