Restructuring Social Media Platforms for the Greater Good
How Recommendation Engines Encourage Extremism
Here’s an idea: What if we restructured these spaces by rebuilding their algorithms from the ground up?
We’ve written about the fairness of algorithms and how organizations can best avoid biased analysis in the early stages of data collection. Now we’re going to focus on one algorithm in particular: the recommendation engine.
According to many media experts, recommendation engines are a major culprit in the polarization of society. They are supposed to save us time and spare us the anxiety of finding the “right” book or article to read, but they often end up narrowing our choices and our worldview. Facebook’s and Google’s personalization engines aren’t truly personalized to each individual: the “recommendations for you” come from algorithms that suggest content based on users whose behavior resembles yours. They are developed not with the goal of true personalization, but of ad optimization and driving revenue. It’s an important distinction.
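The “users with similar behavior to you” approach described above is collaborative filtering. Here is a minimal sketch in plain Python; the users, articles, and interaction values are made up for illustration:

```python
from math import sqrt

# Toy user-item interaction matrix: 1 = user engaged with the item.
# Hypothetical users and articles, purely for illustration.
interactions = {
    "alice": {"article_a": 1, "article_b": 1, "article_c": 0},
    "bob":   {"article_a": 1, "article_b": 1, "article_c": 1},
    "carol": {"article_a": 0, "article_b": 0, "article_c": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' interaction vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, interactions):
    """Suggest items the most similar *other* user engaged with but the
    target user has not seen -- i.e. 'users like you also liked'."""
    others = [name for name in interactions if name != user]
    nearest = max(others, key=lambda name: cosine(interactions[user], interactions[name]))
    return [item for item, seen in interactions[nearest].items()
            if seen and not interactions[user][item]]

print(recommend("alice", interactions))  # bob is most similar; alice hasn't seen article_c
```

Note that nothing here models what alice actually wants; the engine only mirrors what people who look like her already did, which is exactly the limitation the examples below illustrate.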
That’s why personalization engines are limited. A user who follows a white supremacist on Facebook out of sheer curiosity will have their feed filled with extremist content for a month afterward. A user who genuinely wants to read something outside their usual habits is offered only titles based on past preferences. A user who bought a baby gift for his nephew is inundated with daily sales on baby items that have no relevance to his day-to-day life.
Leveraging High-Quality Web Data for New Algorithms
One solution to making better personalization engines is to collect high-quality web data and develop new algorithms.
Here are a few examples:
- A team at Yale University designed a personalization engine with the goal of minimizing the effects of polarization. Its results included an equal number of articles from each side of the political spectrum. The dataset used to develop the algorithm consisted of news articles collected over the previous 30 days from Webhose’s News API.
- Hai is a recommendation engine that, unlike most recommendation engines, goes beyond collecting data from a single domain. Instead, it integrates datasets from a range of apps such as Netflix, Hulu, and Spotify. It also uses AI to make recommendations based on your individual tastes and preferences, rather than “what users similar to you also liked.”
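The Yale team’s balancing idea, equal representation from both sides, can be sketched as simple interleaving. The article gives no implementation details, so the function name and the pools below are illustrative assumptions:

```python
from itertools import chain

def balanced_feed(left_articles, right_articles, k):
    """Return up to k articles, alternating between the two pools so the
    feed always contains an equal number from each side of the spectrum."""
    pairs = zip(left_articles, right_articles)  # zip truncates to the shorter pool
    return list(chain.from_iterable(pairs))[:k]

# Placeholder article IDs, not real data.
left = ["L1", "L2", "L3"]
right = ["R1", "R2"]
print(balanced_feed(left, right, 4))  # ['L1', 'R1', 'L2', 'R2']
```

The design choice worth noting is that balance is enforced structurally, by construction of the feed, rather than left to a relevance score that can drift toward one side.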
Another group of researchers, at Simon Fraser University, took it one step further. They decided to eliminate polarizing content as much as possible – before it is even suggested by the recommendation engine. They built their own algorithms with the help of data that compared real news articles with fake ones. The fake news items came from the Russian Internet Research Agency, while the real news articles – 2,500 items from a total of 172 news sources – came from Webhose. The result of the research? A fake news detector that identifies disinformation before it is posted.
Working to Bring People Together
Social media platforms like Facebook and Twitter are profit-driven at the end of the day, and their personalization engines are not in the best interest of the public. But healthy democracies need healthy spaces – and healthier social media platforms – for public debate. For Webhose, that means delivering web data to organizations so they can build fairer recommendation engines that bring people together rather than tear them apart.
Want to learn more about gaining access to high-quality data for your news or web monitoring service? Schedule a call with our data experts today!