The web is becoming increasingly personalized, and this risks creating mental walled gardens, or so-called “filter bubbles”. These bubbles limit what opinions and content we get exposed to and can potentially threaten democracy.
Before we get into the specifics of filter bubbles, let’s look back and see how we got here and why they are a problem in the first place.
If you were to ask someone in the ’50s what we’d call this age or epoch, they’d say the “atomic age” or the “post-industrial age”. They would have been influenced by the boom of wealth in the western world following the horrors of WW2, the sense of a new chance to make a better world, and the limitless potential of nuclear power. When atomic optimism was at its height, many predicted that cars and home stoves would be powered by small nuclear reactors.
If you asked someone in the ’80s or early ’90s, they’d say “the age of the computer”. Nuclear accidents such as Chernobyl in 1986, and the fear of “flip a switch” global destruction in the aftermath of the Cuban Missile Crisis, had cooled the love affair with nuclear power. Beginning in the ’70s, environmental issues also became increasingly relevant to people.
Automation and automated data processing were all well and good, but it wasn’t until army researchers and universities in the US began connecting the wires that the first generation of the Internet was born. Most people were largely unaffected by it until Sir Tim Berners-Lee invented the World Wide Web. It was the technology that made the Internet into more than just a research tool. It caught on fast, as we all know. And if you were to ask a random person back then, they’d most likely say they were living in the “Internet age”.
Skip ahead ten years and the web had evolved into something that was hard to predict in the mid ’90s. There were now countless ways to publish, share and express opinions. The rudimentary tools that once required a fair amount of programming knowledge had evolved, and just about anyone could start a blog and begin typing away. It was like having a million news outlets. Interaction happened mostly in comment threads. Forums were also popular and became a way for people with niche interests to interact with peers, without having to sign up to mailing lists or access Usenet, which was still kind of nerdy.
People were spending more and more time online, mostly from desktop and laptop computers. The mobile web was in its infancy: phones were too slow, and their screens too limited, to display web pages in a genuinely useful way. But they were steadily getting better, and the iPhone was just around the corner. There was just one component missing: the social link.
Enter Facebook and the almighty personalized news feed
Introducing Facebook. I created my account in the summer of 2007, after friends introduced me to it. I was immediately hooked. The picture tagging feature was one of many things that made it infinitely more useful and accessible than what I’d used before. It was easy to just throw ideas or links out there. No need to fill in five fields, click “no, I don’t want to create a poll with this post” and use bulky BBCode to embed a photo, which I had to host somewhere else. Friends “liking” gave immediate gratification. It triggered the right neuro-receptors. Facebook was pleasure. Still, you didn’t feel guilty about wasting your time there. After all, you were socializing, “networking” or building your “personal brand”. And with smartphones, you could suddenly share so much more. Perhaps too much, even.
Ask a stranger on the street today what age they live in, and you won’t be surprised to hear them say the “social media” or “social network” (as I prefer) age.
All of this has brought tremendous benefits. It’s never been easier for grandparents to stay in touch with their grandkids. Or for us to maintain relationships with people half a world away.
But it has gone further. Much further. And we’re not thinking of the true cost of our social networks moving to the digital sphere.
The social networks are for many people today the predominant way to stay up to date with what their social circle talks about. I’d say it’s less common now to go out to lunch with a colleague or grab a beer after work just to catch up. What’s the need if it’s all in the feed? (Pun intended.)
The feed of updates from people we know is becoming a strong influencer in our social lives. It’s replacing other modes of interaction. Yet we don’t spend much time thinking about what that feed actually is. To most people it’s just a list of updates they see when they log in to Facebook. It seems obvious that it’s just an innocuous list of things your friends say, comment, like or share.
Problem is, it’s not.
The update feed could be Facebook’s most powerful business development tool. It’s the holy grail in the battle to manipulate end users to do things that further Facebook’s own business goals.
At this point you’re probably saying this sounds like any other conspiracy theory out there.
But before you click that location bar and start typing “face…” and wait for your browser to autocomplete your wishes, consider this.
What is the greatest influencer in your life above anything else?
News and mass media? Not likely.
It’s your family and friends. A friend recommending a new restaurant is a much stronger predictor of your behavior than seeing an ad on TV. Facebook knows this. They also know that you care a great deal about what your friends think, say and do.
See where this is leading? Yep, that list of updates is beautifully tailored and modified to manipulate your behavior to serve other goals than your own.
Much of the information we refer to and base our opinions on comes from our friends. And even if we refer to media when making a case to someone else, friends and family have a massive influence over how we value and rank that information. In the past, such information was often shared as we sat down at dinner, hung out at a restaurant or chit-chatted at the office. Today, it’s the Facebook update feed that has taken over the role of toastmaster of our social circle.
Facebook has made huge investments in making sure you see things you’ll like, things that will lead to actions you’ll like. All in order to get you to spend more time on the site and view more ads, since that’s how Facebook makes its money.
Now, this is usually harmless when it comes to movie reviews, music and food. It will likely improve your quality of life. But once you go beyond superficial topics, the implications are a bit more dire.
We discuss more than music, movies and TV shows with friends. Many of us have friends whose political opinions differ greatly from ours. And many of us are aware of it. We even cherish it, because we know the danger of a stagnant mind stuck in the conviction of always being right. We enjoy those mental battles, and even when what we hear makes no sense to us, we still respect the sharp mind that produced those thoughts. It’s a seed for healthy doubt and personal growth.
But Facebook doesn’t see it that way. In the minds of the machine learning algorithms Facebook employs, similitude begets affinity. The more Facebook learns about you, the less likely you are to see views in your feed that contrast with your own.
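To make that mechanism concrete, here’s a deliberately naive sketch, which is in no way Facebook’s actual system. It assumes each post and the user’s interest profile can be described as vectors over invented topic axes, and keeps only the posts most similar to the profile. All names and numbers are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_feed(user_profile, posts, keep=2):
    """Keep only the `keep` posts whose topics best match the user's profile."""
    scored = sorted(posts, key=lambda p: cosine(user_profile, p["topics"]),
                    reverse=True)
    return [p["id"] for p in scored[:keep]]

# Hypothetical topic axes: [politics_left, politics_right, cats]
profile = [0.9, 0.1, 0.5]
posts = [
    {"id": "left_oped",  "topics": [1.0, 0.0, 0.0]},
    {"id": "right_oped", "topics": [0.0, 1.0, 0.0]},
    {"id": "cat_video",  "topics": [0.0, 0.0, 1.0]},
]
# The dissenting "right_oped" scores lowest and is silently pruned.
print(rank_feed(profile, posts))
```

The point of the toy: nothing here is malicious on any single step, yet optimizing purely for similarity guarantees the user never sees the post that would have challenged them.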
Polarizing political discussion and news
So what? No big deal… right? Well, normally it wouldn’t be. But in this day and age the consequences are probably worse than most people think, at least in the part of the world where I reside. I see the political climate gradually shifting from one based on cohesion and values like compassion and collective problem solving toward individualism, at the cost of our ability to relate to other people.
In Sweden, xenophobic views that were once considered extreme seem to be entering the normal political agenda. It’s what we’ve seen in Denmark for the past twenty years. Thing is, I don’t think this is just the result of fear and unresolved social issues making people less capable of empathy. I believe Facebook plays a role in normalizing logically unsupported views that would otherwise have been challenged. In a landscape of homogeneity, extreme versions of views take root. It happens in all corners of the political spectrum and promotes further groupthink.
There’s reason to believe that Facebook’s selective feed algorithms help create niches of political homogeneity where views feed one another and ideas that challenge the status quo are automatically pruned. The most poorly substantiated sentiments, ideas that should have been challenged and exposed for the idiocy they are, remain to reproduce and spread under the umbrella of Facebook’s machine learning. Cliques of unsubstantiated ideas and views live their own lives, never needing to face challenging views.
I also believe this isn’t exclusive to Sweden and Europe. It’s very likely this can also explain the enormous dichotomy we’re seeing in the US presidential election. Supporters of Sanders and those of Trump might as well live on different planets. And they probably do, thanks to Facebook’s culling and pruning.
We’ve got a word for this
There’s a word for this: filter bubble.
Image from Eli Pariser’s TED talk (link below).
Eli Pariser’s TED talk from 2011 describes this exact scenario. Pariser describes how updates from friends that Facebook’s algorithm assumed he wasn’t interested in were omitted from his news feed. It’s a great talk, and it starts with a memorable quote from Mark Zuckerberg (founder and CEO of Facebook):
"A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa."
We need to ask ourselves, in the age of social media: is it worth it?
I think it’s time to stop letting ourselves be manipulated and to find more sources for our daily newsfeed. We need to understand that personalization doesn’t just mean convenience; it also makes us partially blind. And perhaps, and I know this is a radical idea, we should actually try to meet people face to face regardless of whom they’re voting for.
Here’s a link to Eli Pariser’s 2011 TED talk "Beware of online filter bubbles" (10 min):
Facebook challenged these ideas with a study of their own in 2015. Here’s Pariser’s response: https://backchannel.com/facebook-published-a-big-new-study-on-the-filter...
This is something that's been on my mind quite a lot lately with the US presidential cycle and Trump.
I actually see a bigger problem on the non-algorithmic side of the equation. Facebook can and does tune its feeds to try to maximize the pleasantness of the experience, but I don't know of anyone who would say their feed didn't contain contrary views.
But it's easy to mute or block people you find annoying. The self-selecting capabilities — which are a requirement in order for any social network to deal with bad actors, trolls, griefers, etc — make it very easy for all of us to unwittingly construct our own echo chambers.
Likewise, it's easy for extreme view-holders to congregate in their own areas of the internet (e.g. fascist message boards) and consume their own version of newsmedia. Couple that with the tendency for like-minded people to naturally connect, and you don't need Facebook trying to give people what they like in order to have organically occurring filter bubbles. It might accelerate the process somewhat, but I think the underlying human social dynamic would have largely the same effect.
All in all, to me it makes the internet much more viscerally problematic. Still a great thing for humanity overall, but the downsides and risks are clearer than ever before.
Submitted by Josh Koenig on Sun, 2016-04-17 18:49.
Thanks for the comment, Josh!
Your conclusions seem to echo those of that research paper that Facebook funded. Eli Pariser comments on it here: https://backchannel.com/facebook-published-a-big-new-study-on-the-filter...
I think the difference with FB is that it happens without people being aware of it. Many FB users sort of assume that what they see people talking about is actual reality. People looking for specific communities have consciously decided to filter. Everyone has a right to do so. I just wish it required more intentional action.
Submitted by jakob on Sun, 2016-04-17 20:36.