Are we really more divided than ever? How social media polarizes us
This is the question that motivated a large group of social scientists, including me, to launch a first-of-its-kind study. We asked 10,207 people in 26 countries to estimate how much people with opposing views disliked them. For example, we asked people who identify with the Labour Party in the United Kingdom to estimate how Conservatives would rate them on a scale from 0 to 100, where 0 indicates extremely negative views.
The results showed that people in most countries drastically overestimated how much their political opponents disliked them – often by 10 to 25 points on the 100-point scale. In follow-up experiments, we showed that correcting these misperceptions can, in turn, reduce negative attitudes toward the other side. These findings suggest beliefs about polarization can become a self-fulfilling prophecy, trapping societies in a vicious cycle of misunderstanding and malice.
In one sense, this question is almost as old as social science itself. Social psychologists, sociologists, and political scientists have long warned of “false polarization” – the tendency of people to exaggerate the ideological extremity of their political opponents while minimizing that of their own side.
Though the sources of false polarization are no doubt complex, there is growing consensus among scholars that social media pours petrol on the fire. In 2019, the Pew Research Center conducted a fascinating study that showed 6 percent of American Twitter users generate 73 percent of all tweets about national politics. When I dug deeper into these data, I learned that most of this small group of people identify as “extremely conservative” or “extremely liberal.”
Why do a handful of extreme voices so dominate online political debate?
One answer may lie in the design of social media itself. Though the exact workings of the algorithms that guide user experiences on social media remain closely kept corporate secrets, most experts agree they are heavily influenced by user engagement. If all of your friends angrily denounce a radical post, most platforms will show you that same message earlier than others – regardless of the valence of your friends’ reactions, because the ranking rewards engagement of any kind.
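The engagement logic described above can be illustrated with a toy ranking function. Everything here – the `Post` fields, the example posts, and the numbers – is hypothetical; real platform ranking systems are far more complex and not public. The point is simply that a score which sums all reactions surfaces a post that drew mostly angry comments just as readily as one that drew genuine approval.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_comments: int

def engagement_score(post: Post) -> int:
    """Rank purely by total engagement: every reaction counts,
    whether it expresses approval or outrage."""
    return post.likes + post.shares + post.angry_comments

feed = [
    Post("Radical hot take", likes=5, shares=40, angry_comments=300),
    Post("Measured policy analysis", likes=120, shares=30, angry_comments=2),
]

# Sorting by raw engagement puts the outrage-heavy post first,
# even though most of its engagement is angry denunciation.
feed.sort(key=engagement_score, reverse=True)
```

Under this scoring, the radical post (score 345) outranks the measured one (score 152) – a simple illustration of why denouncing a post can amplify it.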
Breaking the cycle of false polarization – or what I have called the “social media prism” – requires interventions at multiple levels. First, social media users must learn that they are “voting with their thumbs” each time they view, like, or comment on posts that make them angry. When we voice our anger about such messages, we risk calling further attention to them.
But learning not to “feed the trolls” is much easier said than done. Our instincts to censure those with whom we disagree are often subconscious – or triggered by rapidly unfolding events and crises that keep us from calmly considering the consequences of our behavior toward anonymous strangers on the internet.
The challenge of defeating false polarization becomes even more daunting when one realizes that malicious actors create influence campaigns that are often designed to stoke outrage. The Russian Internet Research Agency, for example, is widely blamed for creating thousands of fake social media accounts that impersonated people with extreme views during the 2016 U.S. Presidential election. Even though such campaigns may not often directly influence people’s opinions, they probably contribute to the perception that polarization is out of control.
Perhaps most worryingly, large language models can now engage in dynamic conversations with humans. This creates the potential for a new and more powerful kind of influence campaign, in which people find themselves in toxic arguments with swarms of bots powered by large language models.
These new threats – and the constant challenge of suppressing our subconscious instincts to defend our side – suggest mitigating false polarization on social media will require far more ambitious interventions. One answer may lie deep within the design of social media platforms themselves.
Though social media platforms are often described as democracy’s new town square, they were certainly not designed for this purpose. Facebook evolved from a site that helped college students rate each other’s physical attractiveness. Instagram was originally called “Burbn” and used to help friends arrange alcohol-centric gatherings. Neither Twitter, YouTube, nor TikTok was designed to promote democratic dialogue either.
Instead, most of today’s dominant platforms evolved from esoteric communities on the internet into prominent forums for public debate. Along the way, the design of social media platforms – which too often focuses on how to keep users engaged no matter what the cost – went largely unquestioned. Yet the fever pitch of political debate on so many platforms today suggests it might be time to revisit some of the most basic assumptions of how social media ought to function.
We are a team of social scientists, statisticians, and computer scientists who not only conduct experimental research on the key drivers of user behavior on social media but also prototype new types of technology designed to push back against polarization.
One of the first tools our lab created was a “Bipartisanship Leaderboard.” This tool tracked a very large group of Twitter users from different political backgrounds and identified the elected officials, journalists, and opinion leaders whose posts inspired positive reactions from those with different viewpoints. Our goal was to reward users who expressed productive, moderate positions and incentivize them to continue “reaching across the aisle.”
But our overarching goal was much deeper – to change the incentive structure of social media itself. In my 2021 book, I proposed a “bridging algorithm” that would boost messages that received positive feedback from different types of people. A post from a Republican that resonated with Democrats, for example, would appear much earlier in our newsfeeds than a post from an extreme liberal preaching to the choir.
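The bridging idea can be sketched as a toy scoring rule – all names and numbers below are illustrative assumptions, not the actual algorithm from the book or any platform: rank each post by the positive feedback it receives from the group *least* inclined to give it, so only posts that resonate across the divide score highly.

```python
def bridging_score(reactions: dict[str, int]) -> int:
    """Score a post by the least enthusiastic group's positive reactions.
    A post loved by one side but ignored by the other scores near zero."""
    return min(reactions.get("Democrats", 0), reactions.get("Republicans", 0))

# Hypothetical positive-reaction counts, by the reacting group's affiliation.
posts = {
    "Republican post resonating with Democrats": {"Republicans": 80, "Democrats": 60},
    "Extreme liberal preaching to the choir": {"Democrats": 500, "Republicans": 1},
}

# Ranking by the bridging score surfaces the cross-partisan post first,
# even though the partisan post drew far more total engagement.
ranked = sorted(posts, key=lambda p: bridging_score(posts[p]), reverse=True)
```

Taking the minimum across groups is one simple way to operationalize “positive feedback from different types of people”; the cross-partisan post scores 60, while the choir-preaching post, despite 501 total reactions, scores only 1.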
The bridging algorithm was designed not only to encourage people to create productive posts that impress diverse audiences. It can also make social media a much less fun place for trolls. Most social media platforms reward trolls for antagonizing others by giving their posts prime real estate at the top of our newsfeeds. When Twitter tested our bridging algorithm in a 2022 study, users shared misinformation less than a quarter as often.
No single algorithm can solve the persistent polarization that vexes so many countries around the world today. Nevertheless, business leaders and academics must partner to develop many more creative solutions that help people see the gap between social media and reality. Without such efforts, too many people will never realize that the gulfs that seem to separate rival groups around the world are not as vast as they appear.
About Chris Bail