The Cycle of Confirmation
Zoe Vedova
You want your opinions to be right, but why stop there? What if there were a place where you could always be right, where your beliefs were continuously validated and there was an endless supply of proof supporting your claims? Welcome to Facebook. Facebook is a powerful social media platform that collects data about its users' online habits, then places misleading or false news articles onto people's newsfeeds. With few users able to distinguish fact from sponsored fiction, this confirmation bias (Norman, Lecture week 6) has far-reaching consequences for people's lives and for democracy as a whole. This paper will examine how confirmation bias works through Facebook's algorithms, and the risk it poses to individual people as well as to society at large. Forty-five percent of Americans get their news from Facebook (Shearer & Gottfried, 2017), giving the website unprecedented power to curate what news people see, shaping worldviews by trapping users in heavily filtered bubbles.
Scrolling through Facebook is not as innocuous as it seems; every image you hover over and every article you share is being recorded. Even your location is prime data for Facebook to sell to ad companies, helping those companies sell back to you. This process is the first step in setting confirmation bias in motion. It works in a myriad of ways, detailed by Barbara Ortutay in the Inc. online magazine article "Tips on how to avoid Facebook's new ad tracking" (2014). Ortutay describes the process, starting with how advertisers pick from a series of "attributes such as age, gender … and language" to best target the user. From there, Facebook works in affiliation with "outside analytical firms" to track which websites and apps a user visits, in a process known as "interest-based targeting." For instance, if a user were to click on a Facebook news story that linked out to Breitbart, a far-right American news site which endlessly propped up Trump with misleading articles, Facebook would collect that information and use it to show ads related to the user's 'interests.' Perhaps a product from the Breitbart website would appear on the news feed, something such as a "Safe spaces are for snowflakes" bumper sticker, or, even more sinister, a false news story fed to the user simply because it matches their previous interests. The user is now trapped in a cyber world of their own creation, constantly re-affirmed by Facebook's algorithms.
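The targeting loop described above can be sketched in a few lines of code. This is a hypothetical illustration only, not Facebook's actual implementation: the site names, item titles, and scoring rule are all invented to show how matching sponsored content against a click history keeps surfacing more of what a user has already clicked.

```python
# Hypothetical sketch of interest-based targeting: the platform logs which
# outlets a user clicks, then ranks sponsored items by how strongly their
# tags overlap with that click history. All data here is illustrative.

click_history = ["breitbart.com", "breitbart.com", "foxnews.com"]

sponsored_items = [
    {"title": "Safe spaces bumper sticker", "tags": {"breitbart.com"}},
    {"title": "Gardening newsletter", "tags": {"gardenweekly.com"}},
]

def rank_feed(history, items):
    """Score each sponsored item by how often its tags appear in the
    user's click history -- the confirmation loop in miniature."""
    counts = {}
    for site in history:
        counts[site] = counts.get(site, 0) + 1
    scored = []
    for item in items:
        score = sum(counts.get(tag, 0) for tag in item["tags"])
        scored.append((score, item["title"]))
    # Highest-scoring items first; items with no overlap never appear.
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(rank_feed(click_history, sponsored_items))
```

The bumper sticker outranks the unrelated newsletter because it matches prior clicks, so the feed keeps reinforcing the interests it has already recorded.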
The cycle of confirmation bias doesn't end with far-right neo-Nazi sympathizers imprisoned in a hate bubble. Of the 245.3 million adults living in America, the Pew Research Center found that 45% get their news from Facebook (Shearer & Gottfried, 2017); that is 110.4 million Americans in danger of being led toward false articles. Although sixty-four percent of Americans believe that false news leads to real-life confusion, thirty-nine percent claim they can recognize fake articles online and so escape its negative effects (Barthel, Mitchell & Holcomb, 2016). However, a 2016 study conducted by the Stanford Graduate School of Education found that eighty percent of participants were unable to separate sponsored content from real news stories (Donald, 2016). With nearly half of Americans using Facebook as a primary source of news, and many unable to differentiate between real and false, the cycle of confirmation bias only grows stronger and more dangerous as it leaves people's personal lives and affects public decisions, such as whom to vote for.
The 2016 American election steals much of the limelight as the poster child for the hazards of Facebook's fake news and confirmation bias. However, the "electoral battle ground" Facebook provides utilizes the same "media filter bubbles and algorithms" that affect every election, as well as our collective sense of democracy (Hern, 2017). If a Facebook user cannot distinguish between a real and a sponsored story about veganism, the misinformation that could spread within their personal life is fairly benign. More malignant is false news centered around governments and political candidates. The Journal of Economic Perspectives published an article from Stanford University on fake news affecting elections, detailing how fake news makes it increasingly difficult to "infer the true state of the world" (Allcott & Gentzkow, 2017, p. 2). In fact, the article goes on to suggest that "Donald Trump would not have been elected president were it not for the influence of fake news" (p. 2). Fake news, perpetuated by algorithms that trap users in a cycle of confirmation bias, poses a massive threat to life off the internet.
Facebook has a multitude of tactics that continuously confirm biases in its role as a news source. Whether it is algorithms tracking a user's interests or filters showing only high-paying sponsored content, there is real danger in people not being able to distinguish between real and false news, a danger that could even threaten our sense of democracy. Facebook must either accept responsibility for the effects of over-curating content, or users must learn how to spot fake news for themselves. Society needs to be able to trust our published news for a democracy to thrive.
Allcott, Hunt, & Gentzkow, Matthew. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
Barthel, Michael, Mitchell, Amy, & Holcomb, Jesse. (2016, December 15). Many Americans believe fake news is sowing confusion.
Donald, Brooke. (2016, November 22). Stanford researchers find students have trouble judging the credibility of information online.
Hern, Alex. (2017, May 22). How social media filter bubbles and algorithms affect the election.
Ortutay, Barbara. (2014, June 20). Tips on how to avoid Facebook's new ad tracking.
Shearer, Elisa, & Gottfried, Jeffrey. (2017, September 7). News use across social media platforms 2017.