A new study investigates how online groups might foster extremism. Dennis Aglaster/EyeEm/Getty Images
  • Online communities help individuals find others with whom they share an interest or worldview.
  • A new study finds that the shared moral outrage that can occur within such communities often leads to radicalization.
  • The more a person feels as though everyone in a community is in agreement, the more invested they become in the group, and this investment can make them feel freer to express extreme views.

Disagreement in a democracy is nothing new. It may even be the foundation of democracy, as people of differing views come together to find common ground on which to move forward together.

Lately, however, for people in the United States, such collaboration feels increasingly unlikely due to growing anger and a worrying willingness among some to embrace violence as an acceptable tactic.

A new study from researchers at the University of Southern California (USC) explores the role of online communities in the rise of such radical positions.

The study finds that the greater the level of moral agreement in an online community, the more likely it is that its members will feel free to engage in hate speech.

Lead author of the study, USC’s Dr. Mohammad Atari, told Medical News Today:

“There is good data suggesting that polarization has been on the rise, meaning, in part, that Americans are increasingly seeing the ‘other’ party members as morally corrupt.”

“I certainly see a part of the problem being rooted in social media platforms. People tend to embed themselves into ideologically like-minded environments and news sources and bots and so on,” he continued. “Therefore, they fail to see the bigger picture — that there are good people out there, with different opinions, who are not evil.”

“People who find themselves in a ‘bubble,’ so to speak — wherein their ideas, beliefs, and values are strongly reinforced — could go on to form a visceral bond with their in-group,” said Dr. Atari, speaking to the Society for Personality and Social Psychology (SPSP).

“In these situations, people might engage in radical acts to defend their in-group, ranging in intensity from an outrage-filled tweet to attacking a federal building.”

Of course, not every chat platform or online group leads to such radicalization. As Dr. Atari pointed out to MNT, “If a bunch of Twitter users are interested in green tea or classic cars, it is hard to imagine anything extraordinarily pernicious coming out of this bubble.”

The new study appears in Social Psychological and Personality Science.

The researchers began their exploration with the chat platform Gab, as, the study authors write, “it claims to celebrate free speech and has attracted a large number of users who identify with far right ideologies.”

Based on a manual analysis of 7,692 messages from 800 randomly selected Gab users, the researchers developed a neural network model, which they then used to analyze 24,978,951 posts from 236,823 English-language Gab posters.
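The paper does not include the classification code, but the general workflow described here — hand-label a sample of posts, train a model on those labels, then score the full corpus automatically — can be sketched. The Python snippet below is a minimal illustration using scikit-learn; the toy messages, labels, and model settings are assumptions for demonstration only, not the authors’ actual data or architecture.

```python
# Illustrative sketch only (not the study's pipeline): train a text classifier
# on a small hand-labeled sample, then apply it to a much larger corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical hand-coded sample (the study manually coded 7,692 messages).
labeled_texts = [
    "They should all be banned from this country",  # coded as moralized/outraged
    "Everyone who disagrees with us is corrupt",     # coded as moralized/outraged
    "Here is a recipe for iced green tea",           # coded as neutral
    "Selling my classic car this weekend",           # coded as neutral
]
labels = [1, 1, 0, 0]  # 1 = moralized content, 0 = not

# TF-IDF features feeding a small feed-forward neural network.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(labeled_texts, labels)

# The trained model can then score the full corpus
# (in the study, roughly 25 million Gab posts).
new_posts = [
    "These people are morally bankrupt and must be stopped",
    "Does anyone know a good tea shop downtown?",
]
print(model.predict(new_posts))
```

In practice, a model like this lets researchers estimate how prevalent moralized language is across millions of posts after validating it against only a few thousand manually coded examples.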

The researchers found, said Dr. Atari, that “when people come together in moralized bubbles around one topic — e.g., immigration — they become more likely to develop a gut-level sense of moral duty to act upon their moral convictions.”

The researchers then studied a different network: the “misogynist subreddit” called Incels, which was founded as a gathering place for “involuntary celibates.” After removing users with fewer than five posts, the researchers analyzed the messages of 11,454 frequent posters.

In the Incels subreddit, 10,240 (89.8%) of these users had made at least one hate speech post targeting women.

Finally, the researchers conducted three follow-up studies. These confirmed that the more a poster believes they belong to a morally like-minded community, the more likely they are to express radical intentions, including, for some, a willingness to fight or die for the group.

Dr. Atari noted, “Social media platforms do not particularly care about this; their algorithms are designed to maximize profit, not well-being.”

With more people spending more time online — and in bubbles of their own — the American conversation may well become even more fractured and hostile.

“It would be a Herculean task to reverse the trend in the short term, but it’s not impossible,” Dr. Atari told MNT. “There is a ton of work that can be done on the policy side to make better social media platforms that protect people’s privacy, liberty, mental health, and access to reliable news, rather than exposing users to misinformation, hate, and ‘othering.’”

The study authors write:

“Our results highlight the importance of moral diversity in online social networks to avoid affective polarization and creation of moral echo chambers that could contribute to radicalization through formation of cult-like identities to which individuals get vehemently attached.”

The authors caution that their research encompasses only English-speaking Western users, so its conclusions may not apply to other cultures and locations.

Speaking to SPSP, Dr. Atari concludes:

“What I am more convinced of is that putting yourself in an extremely homogeneous environment wherein nobody disagrees with your values or [they cheer] ‘hell yeah!’ is not a great environment to be in. And it might even radicalize you.”