Facebook Likes Extreme Content

Facebook execs rejected changes to reduce polarization.

(Image: an angry emoji over dozens of Facebook Like buttons)

Facebook’s leadership has thwarted changes to its algorithms aimed at making the site less polarizing, according to The Wall Street Journal.

What’s new: The social network’s own researchers determined that its AI software promotes divisive content. But the company’s management rejected or weakened proposed reforms, concerned that such changes might cut into profits or give the appearance of muzzling conservatives.

Fizzled reforms: Facebook’s recommender system promotes posts from its most active users: those who do the most commenting, sharing, and liking. Internal investigations conducted between 2016 and 2018 showed that these so-called superusers disproportionately spread misinformation, much of it politically divisive. Internal committees proposed ways to address the issue, but the company ultimately made changes that blunted their potential impact.

  • One proposal called for lowering recommendation scores for content posted by superusers on the far right or far left of the political spectrum. Content from moderates would receive higher scores.
  • The company accepted the approach but cut the penalties applied to extremist posts by 80 percent, a weakening illustrated in the sketch after this list.
  • Facebook also nixed plans to build a classification system for polarizing content and quashed a proposal to suppress political clickbait.
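
The 80 percent figure matters because it determines how much the proposal could actually shift rankings. Here is a minimal sketch of how a score penalty of this kind might work and how scaling it back blunts the effect; the function name, weights, and scores below are purely hypothetical assumptions for illustration, not details of Facebook’s actual recommender.

```python
# Hypothetical sketch of down-ranking posts from politically extreme superusers.
# All numbers and names are illustrative assumptions, not Facebook's system.

def adjusted_score(base_score: float,
                   extremity: float,
                   penalty_strength: float = 1.0) -> float:
    """Reduce a post's recommendation score in proportion to how
    politically extreme its author is (extremity in [0, 1])."""
    max_penalty = 0.5  # assumed cap: at most a 50% score reduction
    penalty = max_penalty * extremity * penalty_strength
    return base_score * (1.0 - penalty)

post_score = 100.0   # hypothetical base recommendation score
extremity = 0.9      # post from a far-left or far-right superuser

full = adjusted_score(post_score, extremity, penalty_strength=1.0)
weakened = adjusted_score(post_score, extremity, penalty_strength=0.2)  # penalty cut by 80%

print(f"Original proposal: {full:.1f}")    # 55.0 -- substantial down-ranking
print(f"Weakened version:  {weakened:.1f}")  # 91.0 -- most of the effect gone
```

Under these assumed numbers, the full penalty nearly halves the post’s score, while the weakened version leaves its ranking largely intact.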

Behind the news: Conservatives in the U.S. have long accused social media platforms of left-wing bias, a charge to which Facebook has been particularly sensitive.

  • In 2018, lawmakers grilled Facebook CEO Mark Zuckerberg over accusations that the platform marginalized conservatives.
  • Last week, Twitter put warning labels on tweets by Donald Trump that it deemed misleading or that it said glorified violence. The president responded with an executive order that would strip social media companies of legal protections from liability for content posted by users.
  • Facebook publishes similarly inflammatory posts by the president without challenge. Some Facebook employees protested that stance with a virtual walkout on Monday.

Facebook’s response: “We’ve built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve,” the company said in a statement.

Why it matters: The algorithms that govern popular social media platforms have an outsized influence on political discourse worldwide, contributing to polarization, unrest, and hate crimes. Divisive rhetoric distributed by Facebook has been linked to violence in Sri Lanka, Myanmar, and India.

We’re thinking: Social media is a double-edged sword. It has been helpful for quickly disseminating (mostly accurate) information about concerns like Covid-19. But what brings people together can also drive them apart. The AI community has a responsibility to craft algorithms that support a just society even as they promote business.
