Social media algorithm pops filter bubbles by presenting ideas you disagree with
Algorithms that analyze our metadata can personalize everything from our searches on Google to the products we’re recommended on Amazon to the news stories Facebook thinks we’ll be interested in. But personalization isn’t always good. The so-called “filter bubble” effect can mean we get stuck in online echo chambers, never presented with news that conflicts with our own world view. That’s a problem a new algorithm aims to help with.
Developed as a collaboration between researchers from Aalto University in Helsinki, Finland, and the University of Rome Tor Vergata in Italy, the algorithm is designed to make sure that social media users are presented with views that don’t necessarily conform with their own, as a way of helping to “solve” the increasingly polarized nature of discussions around controversial topics.
The algorithm works by first representing people and their connections on a network, in which two users are connected if they are friends or if they have interacted with one another in the past — such as liking one another’s posts or sharing similar information. The algorithm then works out who belongs on which side of a debate, clustering the network to find two opposing viewpoints. Next, it uses a so-called “greedy algorithm” that analyzes a list of possible users and singles out influential ones who are both more balanced in their views and sufficiently connected to a broad number of people. These “tastemaker” users are the ones who could be most valuable in sharing different takes on controversial subjects.
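The process described above — build an interaction graph, assign each user to one of two sides, then greedily pick well-connected, balanced users — can be sketched in a few lines of Python. This is only an illustrative toy, not the researchers’ actual method: the graph, the side labels (which in the real system would come from clustering the network), and the scoring function that combines balance with fresh reach are all simplified assumptions.

```python
from collections import defaultdict

# Toy interaction graph: an edge means two users are friends or have
# interacted (liked or shared one another's posts).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"),
         ("d", "f"), ("e", "f"), ("c", "d"), ("b", "e")]
graph = defaultdict(set)
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

# Assumed side labels for the two camps of the debate; in the real
# algorithm these would be produced by clustering the graph.
side = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}

def balance(user):
    """Fraction of a user's neighbors belonging to the minority side
    of that user's own neighborhood — higher means more balanced."""
    counts = [0, 0]
    for nbr in graph[user]:
        counts[side[nbr]] += 1
    return min(counts) / max(sum(counts), 1)

def greedy_tastemakers(k):
    """Greedily select k 'tastemaker' users, each round picking the one
    whose (balance-weighted) set of not-yet-covered neighbors is largest."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (u for u in graph if u not in chosen),
            key=lambda u: (1 + balance(u)) * len(graph[u] - covered),
        )
        chosen.append(best)
        covered |= graph[best]
    return chosen
```

Running `greedy_tastemakers(2)` on this toy graph selects users who sit on the boundary between the two camps and reach many others — the kind of “tastemaker” the article describes. Greedy selection is a natural fit here because coverage-style objectives like this one reward picking users whose audiences overlap as little as possible.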
“In this research, we develop a proof of concept to show that polarization in the society could be reduced by using an approach to balance information exposure,” Kiran Garimella, a final-year doctoral student at Aalto University who worked on the project, told Digital Trends. “We also show that our algorithm works efficiently, and can select good influential users for large social networks. In terms of demonstrating it in practice, we are limited by the availability and access to data. Companies like Facebook or Twitter only provide limited access to information about who reads and shares what information. So the real practical implementation could be done only by these organizations.”
It’s definitely a neat idea, not least because it would maintain the social aspect of social media — rather than, for example, simply flashing up random Republican ads on a committed Democrat’s Facebook page. While it still raises issues (who would approach these users to get them to share the information? And would they be willing to share news stories on command?), it’s a unique approach to solving a growing problem in society.