Surely, you’ve heard the story. A man walks into a pizza parlour, armed with a rifle, to expose the child sex ring Hillary Clinton has supposedly been operating beneath the business. After firing a few shots, he realizes there was no truth to the conspiracy the Internet prompted him to believe. Thankfully, no one was injured, but the #pizzagate scandal showcases the damaging repercussions of disseminating fake news. Society has grown concerned about how new media affects journalism and influences public opinion. As a social media giant that thrives on the circulation of spreadable spectacles, Facebook is expected to assess the activity of its users and respond to the fake news crisis. Many argue that Mark Zuckerberg must take action to completely eradicate fake news from the platform, but the complicated definition of the term and the importance of free speech online make this difficult to do. Instead, Facebook must encourage users to exercise their digital literacy by adjusting the algorithms that keep them comfortable.
The public demands that Facebook fight against fake news, as Facebook’s popularity has allowed the platform to become extremely influential politically. Facebook reaches 67% of American adults, and 62% of these users retrieve news from the platform (Gottfried & Shearer, 2016). As Mihailidis & Viotty (2017) explain, “the spread of information associated with #pizzagate was propagated by a set of factors—technical, economic, structural, and content-based—that collectively support an environment where sharing and spreadable content are paramount” (p. 444). Facebook allows users to post their own ideas, and to support or criticize the ideas of others through likes, comments and shares. The immediacy of actions on Facebook also allows information to spread extremely quickly (Tandoc, Lim & Ling, 2017). Facebook serves as an active public sphere, but this has also allowed problematic fake news stories to flourish.
Hoaxes, propaganda, politically motivated half-truths, advertising clickbait, poorly composed stories and satire have all been referred to as fake news (Borel, 2017). What does “fake news” really mean? The term has grown incredibly complex. “‘Fake’ doesn’t begin to describe the complexity of the different types of misinformation (the inadvertent sharing of false information) and disinformation (the deliberate creation and sharing of information known to be false)” (Wardle, 2017). The term typically refers to the fictional stories disguised as truth circulating online, but “fake news has also been invoked to discredit some news organizations’ critical reporting, further muddying discourse around fake news” (Tandoc, Lim & Ling, 2017, p. 138). This is demonstrated by the current US president’s tendency to constantly deem fact-checked, left-leaning journalism “fake news” (Meade, 2017). Trump dismissed a report from The New York Times in this manner, despite the fact that “the reporting that Trump had ordered Mueller’s firing is a) deeply sourced b) confirmed — after being first broken by The New York Times — by a number of serious and credible media outlets and c) very detailed as to how and why Trump moved to fire Mueller” (Cillizza, 2018). If you do not agree with something, is it acceptable to forbid this “fake news”?
A text may be classified as fake news, but it is not necessarily harmful. Allcott & Gentzkow (2017) argue that fake news should be defined as “articles that are intentionally and verifiably false, and could mislead readers” (p. 232). The Onion is a satirical news source, and according to Meade (2017), although its content is “technically fake news,” there is no ideological or economic motive, nor is there intent to misinform audiences. The #pizzagate conspiracy was developed to push a political agenda, while The Onion’s creators would be dismayed if anyone mistook their carefully crafted comedy for fact (Meade, 2017). Although The Onion has no malicious intent, it could confuse some Internet users. If comedic content has the potential to mislead someone in any way, is it acceptable to forbid this “fake news”?
Virtual platforms provide a space for non-journalists to reach a mass audience (Tandoc, Lim & Ling, 2017). Online media does not always come from acclaimed sources, and there is potential for citizen journalists to create work that is extremely biased, inaccurately framed or completely wrong. However, citizen journalism also reveals perspectives from the less powerful. Removing imperfect citizen journalism could be considered an infringement on free speech, and could damage Facebook’s reputation as a public sphere. If someone is slightly misinformed when sharing their opinion, is it acceptable to forbid this “fake news”?
Despite the fact that what constitutes fake news is unclear, some stories are certainly problematic, as demonstrated by #pizzagate. As a social media giant with great influence, Facebook does have a responsibility to respond to these issues. But if a precise definition of fake news cannot be determined, it becomes difficult for platforms like Facebook to eradicate content while remaining a public sphere. It has been suggested that to fight fake news, Facebook should invest in professional fact-checkers to filter out false content. However, companies like Facebook “often pride themselves on being seen as the democratic front doors for citizens by allowing editorial content control to rest, for the most part, with users” (Mihailidis & Viotty, 2017, p. 239). As Zuckerberg (2017) declares, “in a free society, it’s important that people have the power to share their opinion, even if others think they’re wrong.” Facebook must proceed carefully when removing content, because “unless done right, these steps may create more problems than they solve — and boost claims that the ‘fake news crisis’ is an attempt to impose political controls on the media” (Young, 2016). Young (2016) wonders: if Facebook became closely monitored by professional and citizen fact-checkers, would they report and remove the right content?
In attempting to resolve the crisis, removing fake news may be the least of Facebook’s concerns. Tufekci (2016) identifies another pressing issue regarding Facebook’s political influence: echo chambers. Facebook perpetuates confirmation bias by tracking the interests of each user and programming its algorithms to promote content that complements a user’s personal beliefs (Tufekci, 2016). “If finding truth is not as large a priority as finding personally relevant information, then what good is knowing how to critique a message in the first place? And if individuals are taught to question, critique and inquire about the credibility of media, it seems as if this technique can justify those who felt compelled to investigate the #pizzagate story in the first place” (Mihailidis & Viotty, 2017, p. 450). According to Borel (2017), “ideological fake news lands in the social media feeds of audiences who are already primed to believe whatever story confirms their worldview.” For this reason, Zuckerberg (2017) declares that Facebook will respond to the fake news crisis by “focus[ing] less on banning misinformation, and more on surfacing additional perspectives and information” as well as addressing “the impact of sensationalism and polarization, and the idea of building common understanding.”
Due to the complexity of the term, it is difficult for social media platforms to forbid fake news while preserving the democratic functions of the Internet. I propose that it is most important for Facebook to encourage users to exercise their digital literacy by adjusting its algorithms to showcase alternative opinions and by warning users of potentially unresearched content.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. Retrieved from https://web.stanford.edu/~gentzkow/research/fakenews.pdf
Borel, B. (2017). Fact checking won’t save us from fake news. FiveThirtyEight. Retrieved from https://fivethirtyeight.com/features/fact-checking-wont-save-us-from-fake-news/
Gottfried, J., & Shearer, E. (2016, May 26). News use across social media platforms 2016. Pew Research Center. Retrieved from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/
Mihailidis, P., & Viotty, S. (2017). Spreadable spectacle in digital culture: Civic expression, fake news, and the role of media literacies in “post-fact” society. American Behavioral Scientist, 61(4), 441–454. Retrieved from https://doi-org.proxy.lib.sfu.ca/10.1177/0002764217701217
Meade, A. (2017, August 28). The Onion in the age of Trump: “What we do becomes essential when its targets are this clownish.” The Guardian. Retrieved from https://www.theguardian.com/culture/2017/aug/28/the-onion-in-the-age-of-trump-what-we-do-becomes-essential-when-its-targets-are-this-clownish
Tandoc, E. C., Lim, Z. W., & Ling, R. (2017). Defining “fake news.” Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143
Tufekci, Z. (2016, November 15). Mark Zuckerberg is in denial. The New York Times. Retrieved from https://www.nytimes.com/2016/11/15/opinion/mark-zuckerberg-is-in-denial.html
Wardle, C. (2017, February 16). Fake news: It’s complicated. Medium. Retrieved from https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79
Young, C. (2016). Who will check Facebook’s fact checkers? The Hill. Retrieved from http://thehill.com/blogs/pundits-blog/media/310849-who-will-check-facebooks-fact-checkers
Zuckerberg, M. (2017, February 10). Building global community. Retrieved from https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634