This is the second in a series of posts I plan to write about the Internet and journalism. I think it’s especially important to think about this because we have entered the era of ‘Fake News’, where people don’t really know what to believe. And so I think journalism is under threat — and with it our trust in government.
As you read this piece, also think about how people respond to Donald Trump. That’s the lens I am using, and I think it helps explain what’s going on with the lies he tells.
Now I think I’ve said this before, but, according to some research, you have to suspend disbelief to process any information at all. That’s the concept I want to dwell on in this second Internet journalism piece. I wrote the first yesterday and called it “The Internet is for data“. Do read that piece. It’s not necessary for understanding what I’m writing here, but I think it will help.
The Spinoza view
The first time I wrote about suspension of disbelief as the cornerstone of human information processing was back in 2010. I called that one Spinoza, Descartes and suspension of disbelief in the ivory tower of economics. And it was my spin on a James Montier article that I had read a few years prior.
Here’s what I got out of James’ post on information processing:
The core of my argument will come from James Montier, now at the fund manager GMO. As a strategist at Dresdner Kleinwort Benson in 2005, he wrote a timeless piece on the debate between two 17th century philosophers René Descartes of France and Baruch de Spinoza of the Netherlands. Descartes was of the view that people process information for accuracy before filing it away in memory. Spinoza made the opposite claim, that people must suspend disbelief in order to process information. The two competing ideas were put to the test; and it appears that Spinoza was right about the need for naïve belief, something that has grave implications for investing, the subject of Montier’s essay.
So the crux here is that you first have to believe what you’re hearing or reading to process it. If you don’t believe the information first, you won’t process it.
A couple of months ago, I was thinking about this and did an internet search on Spinoza vs. Descartes. I found a lot of entries. So let me draw from a few here to further the argument.
Here’s one piece from 2015 on social cognition:
How do people decide what to believe and what to disbelieve? When it comes to deciding whether to believe what someone else is saying, people are more likely to believe others are telling the truth rather than lying, dubbed the truth bias (Bond & DePaulo, 2006; Vrij, 2008). The “Spinozan” account (Gilbert, 1991; Gilbert, Krull, & Malone, 1990; Mandelbaum, 2014) proposes that understanding an assertion means having first to accept it as true automatically. It is only after the initial acceptance that people can consider rejecting the idea. In that sense, cognition is considered a two-step process where the “unbelieving” stage follows automatic acceptance. In their seminal work, Gilbert and colleagues (1990) argued the Spinozan account explains why people are truth biased.
I only took one philosophy class in school, so I’m not well versed in this debate. But what I’m reading here says that we make an ‘eye-blink’ decision to suspend disbelief. And only after accepting that the information we are seeing or hearing is true can we process it and ultimately accept or reject it.
The Descartes view
But what if this eye-blink moment is “a moment of uncertainty” when we resist passing judgement? That 2015 piece by Street and Richardson that I referenced above takes exactly this view:
Under our account, comprehension begins with a period of uncertainty. Knowledge and past experiences can bias initial uncertainty toward believing a statement (see Clark & Chase, 1972; Mayo, Schul, & Burnstein, 2004). This bias toward believing others may appear as an automatic truth assumption when participants are forced into a truth or lie judgment. In other words, people do not automatically assume the truth of a statement. We suggest they may instead have a preferential bias dependent on experiential or situational factors that in general biases them toward believing, and if forced to judge they will hedge more toward believing than disbelieving.
In light of the evidence, we argue that the preference toward one response should not be taken as evidence that people automatically believe what they hear is the truth, but simply that it is the favored alternative if a judgment were to be elicited at that moment. Indeed, having an early bias toward believing the speaker is adaptive (even before the speaker has begun delivering his or her statement): in the long run it will be more accurate than random guessing because speakers usually do tell the truth.
That last part is what made me think of Trump. In this framing, we are biased toward believing that what we hear is true — because “speakers usually do tell the truth.” But “an early bias toward believing the speaker is adaptive” — meaning we may come to disbelieve if the situation warrants it.
As Street and Richardson put it:
if the context leads us to believe people will generally be deceptive, there should be evidence of an early bias toward disbelieving, not believing, which has been shown
The crux of this framing by Street and Richardson though is this:
It seems people do not merely believe what they are told: they can comprehend without having to automatically assign a belief value.
What does this have to do with the Internet?
A lot of the information we consume is ‘virtual’. And I mean virtual in the sense I defined in the last post — i.e. it comes through secondhand sources, without our being able to verify the original information source with our own five senses.
In the old days before the Internet, we chose to ‘believe’ specific sources of information like newspapers, radio stations or television broadcasters. And, going by the Street and Richardson analysis, our choice was built on the adaptive analysis of the trustworthiness of that source of information. We became loyal to specific newspapers, radio stations or TV networks because they provided us with entertainment and a source of information whose trustworthiness was built up over years of habitual reading, listening and viewing.
The Internet changed that.
With the click of a button, we can connect virtually to any person or any company. And that has democratized information. Before the Internet, information was scarce. And information sources that had good distribution channels and economies of scale were able to dominate news flow. With the Internet, that is no longer true.
The Internet is for data
That’s where the last post comes into play. My message there is that the Internet is about search because the Internet is a data platform first and foremost. And the goal of Internet users is to seek and to find data, oftentimes from sources they have never interacted with. If you offer a robust search algorithm as a service provider, then you are going to win a lot of customers – people looking for the lowest price, the funniest video, the best movie, or the latest on the debate between 17th century philosophers Descartes and Spinoza.
The corollary: if you make ‘search’ core to your mission as a company, you win.
Think of this in the context of news, for example.
- People have an early adaptive bias toward believing what they read, hear or see
- The familiarity heuristic entrenches that early bias toward belief for news sources that we interact with most often
- And when we search for data, the availability heuristic further entrenches our early bias toward belief for news when we see the same topic or ‘meme’ popping up over and over again.
Two things here
First, if you are a news organization, you want to rank high in search results. And since Google is the biggest search engine, you most want to rank high in Google’s search results.
For example, as English-language publishers, UK media organizations like the Guardian and the Daily Mail understand this. In the Internet age, they have quickly morphed into alternative sources of US-centric news despite their legacy focus on the UK. They know that the US audience is much bigger, and so they now seem to focus heavily on servicing the US market. When I read the Daily Mail’s comment sections on US news, I notice that the majority of the comments are from people who say they are based in the US.
Second, journalists are just as susceptible to heuristics as anyone else. For example, after an alleged Russian spy was arrested in the US, a ‘meme’ took hold on the Internet that she was in the Oval Office for a meeting between US President Trump and Russian Foreign Minister Sergei Lavrov back in May 2017.
At the time, the NY Times ran a story “How Russian Media Photographed a Closed Meeting With Trump” with the following photo:
Source: NY Times
The ‘meme’ was that the redheaded woman partially blocked from view at the back of that photo was the alleged Russian spy Maria Butina. Why?
- The availability heuristic: Butina was in the news. There were multiple photographs of her with US politicians all over the Internet, and she had even posed a question to Trump at a 2015 appearance during his presidential bid. Moreover, the NY Times noted that “Trump Bars U.S. Press, but Not Russia’s, at Meeting With Russian Officials“. Wouldn’t you assume the woman was Russian? It’s not a big leap from there to thinking she’s Maria Butina.
- The familiarity heuristic: Once a verified Twitter journalist runs through the logic in bullet point #1 and tweets the accusation, that trusted familiar source legitimizes the meme. And when the meme is legitimized by a trusted source just once, it’s off to the races. That’s enough to re-verify again and again.
- Early bias toward belief: Twitter is made for eye-blink moments. You run through the logic of bullet points #1 and #2 in an eye-blink moment and decide to retweet a meme almost instantly. How many times, for example, have you retweeted a tweet with a link without visiting the link, just because you trusted the Twitter account and liked the point they were making?
- Amplification of the heuristics: Even if the first tweet is unsure or questioning (i.e. “Is that Maria Butina in the Oval Office?”), the mere repetition and amplification of the underlying message makes it ‘true’. That’s how our brains work.
Retreating to trusted sources
So when it turns out that a meme is “fake news”, we feel duped and ashamed for falling for it. We ask ourselves: how can I make sure I don’t get fooled again?
One answer is to verify your information. But if one did verify the information in this case, one would come across all of the points in bullets 1 and 2 above. That would be verification enough for most people. That’s how the meme spread in the first place.
Another answer is to go back to the pre-Internet habit and limit your trusted sources. Let’s call this a “circle the wagons” retreat. And I call it that because I think people basically decide to reduce their window of trusted sources to those that confirm their biases — politically like-minded sources.
Putting this in the Street-Richardson context, it’s as if the adaptive judgement on the trustworthiness of a source causes greater early scepticism, less bias toward believing. And if your bias toward belief is reduced, you’re going to eventually narrow the sources that you choose to believe. Sources that challenge your closely held cultural and political beliefs will be rejected out of hand. You will circle the wagons around your core beliefs and retreat to a safe ideological space of information sources that share your worldview. Once in your safe space, you can let your defenses down and resume holding a bias toward belief, even an extreme bias – since you now know you have rid yourself of untrustworthy information sources.
So, far from democratizing information, I believe the Internet entrenches ideology. For the propagandist, the goal then is to make it onto the list of trusted sources because, then, what the propagandist says will be taken in on faith.
I think that’s what’s happening in the Internet today.