It’s Just What You Want to Hear

Charles Long
12 min read · Feb 14, 2021

How the Prevalence of Fake News is the Result of Confirmation Bias, Not Algorithms

Photo of computer programming code.

Have you ever seen the College Humor sketches on YouTube? There's one that's one of my favorites, titled "If Google Was a Guy" (College Humor). It stars a middle-aged man playing the role of Google. He sits in a small, dingy, run-down office, and there's an unending line of people outside his door. One by one, they enter and present him their searches. The humor comes from Mr. Google's reactions. One client, a lady searching for "Vaccines cause autism," gets one million results saying they don't, represented by a large stack of folders on his desk. Mr. Google then holds up one single sheet of paper and lets her know he has one result saying they do. She dismissively ignores the stack of folders and snatches the one sheet of paper, saying, "Thank you, I knew it," and proudly walks away. As she leaves his office, he yells after her now-validated self, "JUST BECAUSE I HAVE IT DOESN'T MEAN IT'S TRUE!" How many times have you done this? I know I've done it many times: been presented with tons of information but zeroed in on the one piece that validated what I already thought was true. This is called confirmation bias. Today, many discussions say technology and social media are the cause of the spread of false information. But I insist this is untrue. Instead, I believe technology and social media have been passive participants in our own confirmation bias, and it is we who are responsible for the spread of misinformation, disinformation, and lies.

The American Psychological Association defines confirmation bias as "the tendency to gather evidence that confirms preexisting expectations, typically by emphasizing or pursuing supporting evidence while dismissing or failing to seek contradictory evidence" (Confirmation Bias). This means that, by engaging in confirmation bias, we seek out things that prove us right and ignore things that might prove us wrong. This is what the lady in the College Humor sketch engaged in. She was presented with a stack of results that told her vaccines do not cause autism but chose to leave with the single result that told her they do. We as human beings do this every day. Generally, it is unconscious. It is a form of cognitive miserliness, a way for our brains to be lazy. It makes thinking easier. If our days were filled with constantly having to challenge our beliefs, it would exhaust us, leaving no room in our minds to do anything else. Do I judge people for this? No. We all do it in some form or fashion. But it becomes most dangerous when the beliefs we spread to others are false. It is the blind leading the blind. At some point, we must take responsibility for the misinformation we pass on to others.

Some say that when it comes to technology and social media, it is not us humans who spread misinformation, but a complex thing they call algorithms. Algorithms have become a shadowy, mythical entity that no one truly understands. Last year, one almost started a war. I don't know if you remember, but there was a huge dispute over TikTok. TikTok is an app whose recommendation algorithm is so effective that it has become the most addictive app in the world; no other algorithm has produced the return visits and time spent that this one has. It became such an issue that Donald Trump moved to ban it in the United States. He claimed it was being used to spy on Americans and could be used by the Chinese government against us. He would only allow the app if American companies could control it. A deal was negotiated for the sale of the American arm of TikTok to the company Oracle. But TikTok would only agree to a monetary deal; it would not give Oracle access to its algorithm. Because of this, Donald Trump rejected the deal: no algorithm, no deal, and the President of the United States ordered TikTok banned. It was all about the algorithm, not the money. That's how important these things are.

In "What Do Social Media Algorithms Mean for You?" AJ Agrawal, a marketer and writer for Forbes, says algorithms exist "to make sure that people receive the content they actually care about." He says they are on our side; they only want to help us. They do the job of weeding out uninteresting information so that we can see what matters most to us. That is a very innocent way to put it. It implies algorithms can have no outside motives or influences besides what you, the user, want. But my favorite venture capitalist, Chamath Palihapitiya, does not paint the purely innocent picture that Agrawal does. He was an early senior executive at Facebook, and he came under fire in recent years for his honesty about the pitfalls of social media. After leaving Facebook, he turned on his own and was seen as a traitor. In a no-nonsense interview with CNBC (a redundant description, as all his interviews are no-nonsense), Chamath says, "Today we live in a world where it is easy to confuse truth and popularity. You can use money to amplify whatever you believe and get people to believe that whatever is popular is truthful and whatever is not popular is not truthful" (Palihapitiya). Chamath makes clear that algorithms are not as innocent as Agrawal would have us believe. They can be bought, sold, manipulated, and twisted to do more than just show us what we care about. Instead, they can show us whatever was paid for the most. This is true. Chamath is never one to mislead.

Our Facebook walls are filled with "sponsored" ads every day that we have to scroll past to see what our Uncle Joe posted about Cousin Jim's new baby. But the thing about these sponsored ads is, they are highly regulated. Every sponsored ad must declare itself as such. Often, the moment we see that "sponsored" label, we scroll right past; instinctively, we tend to ignore advertisements. But what if that same content was not labeled "sponsored"? What if instead, when you scrolled down to Uncle Joe's post to hear about Cousin Jim's new baby, Uncle Joe was sharing a news story about how Mark Zuckerberg is a lizard man who has secret meetings with Hillary Clinton and Barack Obama to eat babies, figure out ways to take our guns, and make us wear masks? What do you do then? You've just scrolled past a bunch of paid-for advertisements that really did you no harm and landed on Uncle Joe's post.

This is where confirmation bias and algorithms converge. One of two things will happen. If you disagree with Uncle Joe, you will think to yourself, "Uncle Joe is getting old and crazy. Look at this craziness he just posted," and you will keep on scrolling, your only new thought being "I hope someone checks on him soon." Now if you agree, if Uncle Joe's post falls in line with your beliefs, you're gonna click it. Then you're gonna click the thumbs-up button. Then you're gonna comment on it. Then you're gonna share it so all your friends can see it too! At that point, you've just programmed the algorithm. From here on out, the algorithm is going to show you more of crazy Uncle Joe's posts. It's going to show you more similar posts. It's going to show you more posts from the people who also liked and shared it, because the algorithm believes you have similar interests. Did a spooky corporation out for profits tip over all those dominoes? No. That was all you, my friend. This is how things go "viral." When everyone likes and shares, a post catches fire. Things can't go viral without the key ingredient of YOU. Now Uncle Joe's post is everywhere. Two days later it gets shared back to you by someone you didn't even originally share it with. This is how disinformation, misinformation, and fake news spread in their purest, most grassroots form; and this is how we create echo chambers.
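To make that feedback loop concrete, here is a minimal, purely hypothetical sketch in Python. It is not any platform's real ranking code; the ToyFeedRanker class and the engagement weights are invented for illustration. It only shows the basic idea: every click, like, comment, and share raises your interest score for a topic, and posts on high-scoring topics float to the top of your feed.

```python
from collections import defaultdict

# Hypothetical weights: a share tells the ranker more about you than a click.
ENGAGEMENT_WEIGHTS = {"click": 1.0, "like": 2.0, "comment": 3.0, "share": 5.0}

class ToyFeedRanker:
    def __init__(self):
        # Per-topic interest score, learned only from your own engagement.
        self.interest = defaultdict(float)

    def record_engagement(self, topic, action):
        """Update the interest profile after you interact with a post."""
        self.interest[topic] += ENGAGEMENT_WEIGHTS.get(action, 0.0)

    def rank(self, posts):
        """Order candidate posts by accumulated interest in their topic."""
        return sorted(posts, key=lambda p: self.interest[p["topic"]], reverse=True)

ranker = ToyFeedRanker()
posts = [
    {"id": 1, "topic": "local news"},
    {"id": 2, "topic": "conspiracy"},
    {"id": 3, "topic": "sports"},
]

print([p["id"] for p in ranker.rank(posts)])  # before engaging: [1, 2, 3]

# You click, like, and share Uncle Joe's conspiracy post...
for action in ("click", "like", "share"):
    ranker.record_engagement("conspiracy", action)

print([p["id"] for p in ranker.rank(posts)])  # ...and now it ranks first: [2, 1, 3]
```

The point of the sketch is that the ranker never decides what is true; it only echoes back whatever you rewarded with attention.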

Echo chambers are environments that reflect what we want to believe, strengthening our beliefs and entrenching us more deeply in them. Imagine being in a cave where all you hear is the echo of your own words. Are algorithms responsible for creating these echo chambers on social media? In the journal article "Fake News in Social Media: Bad Algorithms or Biased Users?" a group of researchers studied how echo chambers are formed. They found that "Algorithms applied in social media themselves do not form communities purely on their own as they amplify users' information behavior. The crucial element of fake news and their pathways into social media is mainly the individual users" (Zimmer et al.). So algorithms cannot do anything that the individual user does not tell them to do. They reflect the user, not the other way around. When lizard Mark goes viral, it is not because of algorithms; it is because of the individual users who want to believe it and therefore spread the misinformation.

In "Disinformation, Misinformation, and Fake News," Geysha Gonzalez of the Atlantic Council's Eurasia Center differentiates between three common types of misleading information in the media (Gonzalez). Disinformation is information purposely given to confuse. It usually mixes truth and lies to blur the line between fiction and nonfiction, and it is often spread by people with malicious motives to deceive you. Misinformation is what Uncle Joe engaged in. He may truly believe Mark Zuckerberg is a lizard man who eats babies. Believing it to be true, he shared it with his loved ones; he must protect them from the lizard people! Uncle Joe had good intentions but was spreading misinformation. Both of these, whether shared intentionally to deceive or passed along unwittingly, are what we have begun to call fake news.

Fake news is not new. It's as old as Greek myths and witch hunts. Growing up, I'd see it every time we went grocery shopping, right by the cashier on the front pages of the tabloids. Fake news is shared on television, in magazines and newspapers, and on social media, and it is devoured feverishly by those engaged in confirmation bias. When our views are extreme enough to defy a commonly understood truth, we seek out whatever validates them. Such validation is hard to come by, so when we find a source that provides it, we cling to that source. We like and share and find others who agree to help feed us the other rare sources that say the same. In "This Analysis Shows How Viral Fake News Stories Outperformed Real News on Facebook," media editor Craig Silverman gives us a surprising statistic: the top 20 fake news stories on Facebook drew 8.7 million shares, comments, and reactions, while the top 20 real news stories from major news organizations drew 1.7 million fewer. This is amazing. It says that fake news stories are more likely to be engaged with and shared than real ones. Why is this so? Because, as I said, when our views are extreme enough to defy a commonly understood truth, we seek out whatever validates us.

A person reading an ordinary news article assumes everyone is seeing the same article. It's no big deal. There's no need to seek out people who agree, to seek validation for a belief that's commonly held to be true. A fake news story does not enjoy the same frivolous indulgence. It's a rare treat, one that must be shared by those who are searching desperately for that one Google search result. Then they must share it with their friends who think as they do. And they must get as many likes and comments as possible to reinforce its truth; it's a group effort. This is confirmation bias in full effect, but on a group scale. There is not an algorithm causing "8.7 million shares, comments, and reactions" of fake news stories. Those are actual people taking actual action, creating echo chambers of like-minded people to continue believing what they want to believe.

In a study, "Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All," participants were hooked up to equipment that measured their brain activity as they perused social media. The study states:

“We found that confirmation bias dominates, with most users unable to distinguish real news from fake. Users exhibit greater cognitive activity when news headlines align with their political opinions, and they are more likely to believe them. Headlines that challenge their opinions receive little cognitive activity (i.e., they are ignored) and users are less likely to believe them” (Moravec et al.).

That study says that no amount of algorithmic tuning will make people pay attention to what they don't want to pay attention to. Participants' brains lit up when news aligned with their beliefs and fell asleep when it didn't. Did the algorithm put their brains to sleep? Did the algorithm wake them up? Or was it simply confirmation bias hungry for validation?

In Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, psychologists Margit Oswald and Stefan Grosjean write:

“Confirmation bias means that information is searched for, interpreted, and remembered in such a way that it systematically impedes the possibility that the hypothesis could be rejected … Here, the issue is not the use of deceptive strategies to fake data but forms of interpretation processing that take place more or less unintentionally.”

These two psychologists are pointing out that confirmation bias is active. Information is "searched for, interpreted, and remembered" to prove ourselves right. The issue is not "deceptive strategies to fake data"; it's not the disinformation that Geysha Gonzalez spoke of in "Disinformation, Misinformation, and Fake News," but rather our willingness to be lied to, to believe what validates us. The solution to this problem is not a "better algorithm," as that would only compound the problem: an algorithm only gives us what we want, so a better one would just give us more of it. The solution lies in us. We must make a conscious effort to differentiate real from fake. We must be willing to question ourselves and challenge our beliefs. I know, it's tough, it's something we don't want to do; but unless we prefer living in our lies (as they say, "ignorance is bliss"), it is our only option.

To blame technology for the prevalence of misinformation would be like blaming cars for car accidents instead of the drivers. Technology only does what we want it to do. Until the days foreseen by The Terminator arrive and Skynet tries to kill us all, we are the ones in control. If you want to believe that Mark Zuckerberg is a lizard man who has secret meetings with Hillary Clinton and Barack Obama to eat babies, figure out ways to take our guns, and make us wear masks, then you will. Technology and social media didn't cause that. In the 20th century A.D., you would've gotten it from the tabloids. In the 20th century B.C., you would've gotten it from the cave wall. Fake news isn't new. Conspiracy theories aren't new. Myths aren't new. And technology hasn't magnified their spread. If anything, it's given us the option to combat misinformation with the click of a button. The information superhighway can show us anything we want to know, but if all we want to know is what we already think we know, then it will show us that too.

Works Cited

Agrawal, A. “What Do Social Media Algorithms Mean for You?” Forbes, 20 April 2016, www.forbes.com/sites/ajagrawal/2016/04/20/what-do-social-media-algorithms-mean-for-you/?sh=375d022fa515. Accessed February 6, 2021.

College Humor. “If Google Was a Guy (Part 3).” YouTube, 12 July 2014, www.youtube.com/watch?v=yJD1Iwy5lUY. Accessed February 6, 2021.

“Confirmation Bias.” American Psychological Association, 2020, dictionary.apa.org/confirmation-bias. Accessed February 6, 2021.

Gonzalez, G. “Disinformation, Misinformation, and ‘Fake News.’” YouTube, uploaded by KTOO 360TV, 17 September 2019, www.youtube.com/watch?v=AemrXbSrhwg. Accessed February 6, 2021.

Grosjean, S. & Oswald, M. Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Pohl, R. Hove: Psychology Press, 2004. Print. P. 79.

Moravec, P., Minas, R., & Dennis, A. “Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense at All.” Kelley School of Business Research Paper, vol. 18, no. 87, 9 August 2018. dx.doi.org/10.2139/ssrn.3269541. Accessed February 6, 2021.

Palihapitiya, C. “Former Facebook Exec Chamath Palihapitiya On Social Media, Bitcoin, And Elon Musk (Full) | CNBC.” YouTube, uploaded by CNBC, 12 December 2017, www.youtube.com/watch?v=5zyRpq2ODrE. Accessed February 6, 2021.

Silverman, C. “This Analysis Shows How Viral Fake News Stories Outperformed Real News on Facebook.” BuzzFeed News, 16 November 2016, www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook. Accessed February 6, 2021.

Thornhill C., Meeus, Q., Peperkamp, J., & Berendt, B. “A Digital Nudge to Counter Confirmation Bias.” Frontiers in Big Data, vol. 2, no. 11, 6 June 2019, doi.org/10.3389/fdata.2019.00011. Accessed February 6, 2021.

Velshi, A. “How Fake News Grows in a Post-Fact World | Ali Velshi | TEDxQueensU.” YouTube, uploaded by TEDx Talks, 9 March 2017, www.youtube.com/watch?v=nkAUqQZCyrM. Accessed February 6, 2021.

Zimmer, F., Scheibe, K., Stock, M., & Stock, W. G. “Fake News in Social Media: Bad Algorithms or Biased Users?” Journal of Information Science Theory and Practice, vol. 7, no. 2, 2019, pp. 40–53. doi.org/10.1633/JISTAP.2019.7.2.4. Accessed February 6, 2021.
