Welcome to episode 249 of the Digital Life, a show about our insights into the future of design and technology. I'm your host, Jon Follett, and with me is founder and co-host Dirk Knemeyer.
Today we're going to chat about a recent study, published in the journal Science by researchers at the MIT Media Lab and the MIT Sloan School of Management, which showed that false news spreads more quickly online than truthful news. We'll talk a little bit about the implications for our ongoing political discussions and the possible impacts on society as a result. So, let's start with a little bit about the study itself. The purpose was to scientifically break down how news stories spread online; part of the genesis for this study, of course, was to better understand how misinformation spreads on social media and the growth of that phenomenon.
So, I don't know if these were the expected results of the study, but the research, which analyzed the diffusion of all major true and false stories on Twitter from the platform's start in 2006 through 2017, concluded that false stories spread significantly farther, faster, and more broadly than true ones in all categories. So, not just politics, of course, but also business, entertainment, you name it. If it was false, it was more likely to spread and to spread farther. In fact, the researchers found that false stories were 70% more likely to be retweeted, which is pretty significant.
Now, in particular, false political stories were even a notch above that: they were more likely than other false stories to be spread. The researchers had some conclusions about why this might be the case. One conclusion was that there's a certain novelty to false information. It's information that seems unique, different, worth sharing. So, from a sheer interest standpoint, the researchers felt the novelty of the stories was a potential driving factor in why human beings were sharing them. Just to be clear, they also controlled for whether it was human beings or bots spreading the false information. They found, somewhat discouragingly, that while yes, there are bots spreading information, it is actually the human beings who are providing the added juice that makes these false stories propagate online more quickly.
So, yes, we're worried about bots, and that's a growing concern, of course, but it is the people who are driving fake stories online, or false stories rather; let's use the term false, because the word fake has become politicized, somewhat strangely, but that's how it is. So, first, let's start by examining the conclusions of the research, which I thought were interesting. Dirk, as I was saying that, you kind of smiled. What was your thought when I looked over at you just now, when I said it's the human beings who are spreading the false information?
Yeah, I mean, it's no surprise in a certain way. As I started to read the initial articles (there were a couple of them), very quickly I was like, "Oh, I know why this is happening." Eventually, the article got to it, and the word it uses is novelty. It said that the false stories, as opposed to the stories that were true, had a higher degree of novelty. That taps into a few different things about humans. One, novel stories in the sense of something that's surprising or different. That's something that would lead a person to want to share it.
Novelty in the sense that something is outrageous, that it is beyond reality in a certain way, gets someone excited and makes them want to share it. For me, it wasn't a surprise. If somebody had asked me to deconstruct this psychologically, this is about where I would have landed, although it's interesting that they have quantitative research and numbers behind it, so they can prove the phenomenon rather than just offer an intuition-level deconstruction.
Yeah. I feel like there's something to be found here in terms of breaking down how this false information underpins our discussions, whether they're about politics or entertainment or business. For me, the idea that false information is growing online and actually spreads more quickly, I find that alarming, because it didn't quite match my worldview, which is that of course facts matter and we should seek out facts to make decisions. I do think social media exposes a side of human life that was always present but not necessarily exposed to everyone all the time. So, the idea of the rumor, this false thing, back in the quaint days of the 90s when I was in high school, the idea of-
Well, high school and college. Let's just stick with the idea of gossip. That was something that spread quickly, and we didn't have social media at the time; it was word-of-mouth. So, there could be interesting, novel information about someone or something you're following or paying attention to. In the case I'm describing, maybe it's a person, and perhaps you're aware it may not be true, but it's really fun to share this information; hence the rumor mill. This, of course, is all replicated in social media and has a more permanent presence in our lives now, which makes it harder to ignore. You can certainly say, "I'm not going to pay attention to rumors."
But rumors aren't constantly scrolling on your phone in the same way. If you're in a more analog world, there isn't this constant drumbeat of pressure that misinformation applies now, because we're all attached to our devices in some way, shape, or form, checking them tens, maybe hundreds of times a day. So, social media has surfaced this level of human behavior and accelerated it to an ongoing presence in our minds. I wonder what the implications are for our ability to filter false information and make good decisions, whether it's around politics, which is always a hot topic, or around other kinds of decision-making in our lives and our businesses, etc. What are the consequences of having a more omnipresent set of misinformation? Does it matter?
Stepping back out, you covered a lot of ground there. I think gossip is a really good analog. Most gossip is false. We played telephone as little kids, and we learned that once a story goes from one person's mouth into another's ear, and into another's, and another's, it gets twisted and changed badly. So, I would say the spreading of false information via Twitter and other mechanisms is just an extension of gossip, which goes back to the earliest human communication, I would imagine. It's part and parcel of the human experience in a certain way. Its presence on a medium like Twitter is showing us the unintended consequences of removing the editorial layer from public information sharing and discourse. Twitter is basically self-editorial. Frankly, one of the reasons that false information spreads more is because once people realize something is true, there's no value there.
If it's something you know other people already know, sharing it with them is pointless. Something that's false, people aren't sure about, so they keep going, "Really?" They put it out. They're, "Oh my God! Oh my God! Oh my God!" So, it just sort of makes sense. There are downsides, and we're seeing more and more of them, to removing the editorial layer from the information that seems formal and packaged and that we consume as a culture, whether it's Twitter or Medium or Quora. It has the pluses of the wisdom of the crowds, but it has the minuses as well. It would take cataclysmic events to go back to a world where most of what we consume has a formal editorial layer, but I think we're missing it, and I think we'll start to see more and more problems, like the spreading of falsehoods, that come in an environment where that layer doesn't exist.
Yeah, I think there's a huge information filtering problem, which was at one point mitigated by the presence of the editorial layer. Of course, the editorial layer is a type of power: you are actively raising up certain things and downplaying others. At the same time, no person has the time, just the physical time, to filter information in the way we're being asked to right now. I think it's interesting to think of the level of information we're consuming now as a sort of base layer; maybe we'll look back on this and say, "How did anybody think this was worth consuming?"
We have this river of information, most of which is unfiltered and possibly not valuable in the slightest. We're taking up time, using our thinking abilities to parse this information, when information parsing could be a job for somebody, or, I hesitate to say it, it could be an algorithm. It could be any number of things that limit our access to the data in certain ways, so as to make sure we're receiving the most accurate data, the right data we're looking for. Now, whether that's a generation of products that sits on top of that data layer, I don't know, but the need is there for an information filter. Unfortunately, the filters we have now are our networks, which are all just self-validating echo chambers. So, that's another potential problem there. But suffice it to say, this has been coming for a long time as we move into the so-called information age. I think we're finding out that there's lots of data and very, very little information in there.
Certainly the rise of artificial intelligence and smart software will provide a suite of technologies well suited to providing an editorial layer on top of all this Wild West of rubbish. Then the challenge shifts to what biases are inherent in those editorial layers and how they're impacting us, which is really the problem we had with old media, except now it's moved from the biases of direct actors to the biases of indirect actors who program machines, which is an interestingly different context.
Yeah. I do a lot of electronic music programming, so when we talk about the interface layer for understanding biases, I'd love to see a bunch of knobs or something that say, "Okay, I'm going to turn up the bias on the liberal side or the conservative side. I'm going to turn the false information up a little bit, or turn it down." An interface that not only surfaces the bias but allows me to alter the feed so I can see what kind of information I would get if I were more interested in certain kinds of bias, certain kinds of editorial. Like, "Hey, I really want something that's moderate and completely 100% fact based," or, "Okay, I want an editorial slant that's more free market," those sorts of things.
Filters we can play with. We already have these filters in our heads. Why do we decide to follow this pundit and not that one? Why do we decide to listen to this band and not that one? But we're not surfacing those biases in a readily adjustable manner. We set and forget. We follow this person or this group, and then we forget why we did in the first place. So, now we've highly tuned our information feed, and there's no exposure to how that tuning took place.
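To make the "knobs" idea concrete, here is a minimal sketch of what an adjustable feed filter might look like. Everything here is hypothetical: the `political_lean` and `factuality` scores, their ranges, and the thresholds are invented for illustration; no real platform exposes these fields today.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    political_lean: float  # hypothetical score: -1.0 (liberal) .. +1.0 (conservative)
    factuality: float      # hypothetical score: 0.0 (unverified) .. 1.0 (fact-checked)

def filter_feed(stories, lean_setting=0.0, min_factuality=0.5, lean_tolerance=0.5):
    """Keep stories whose lean is within tolerance of the user's knob setting
    and whose factuality score meets the chosen floor."""
    return [
        s for s in stories
        if abs(s.political_lean - lean_setting) <= lean_tolerance
        and s.factuality >= min_factuality
    ]

feed = [
    Story("Moderate, well-sourced report", 0.0, 0.95),
    Story("Partisan rumor", 0.8, 0.1),
]

# Knobs set to "moderate and fact based": only the first story survives.
print([s.headline for s in filter_feed(feed, lean_setting=0.0, min_factuality=0.5)])
```

The point of the sketch is that the user, not the platform, turns the dials, so the tuning stays visible and adjustable rather than set-and-forget.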
That's like the system in the Redesign Democracy piece from 2013, for picking whom to follow and filtering the information you're reviewing in order to make decisions about how you vote. Same idea, slightly different context.
Yeah. Listeners, what Dirk's referring to is a piece he published called Redesign Democracy, which we'll have a link to in the resources section: some thoughts on how we might better interact with one another and our government in a democratic fashion. So, I think it's worth winding up this episode by talking about some possible ways, I don't know if cure-all is the right word, to mitigate false stories spreading online, suggested by the researchers in a New York Times article.
So, these weren't meant to be cure-alls, of course, just initial ideas. One suggestion was that we might be able to label stories, almost like a nutrition label, which I thought was interesting. I hardly look at nutrition labels; I do when I buy a thing for the first time, but I think that suffers from the same problem that's inherent in our initial decisions about who we're following or what we're reading. Still, it might be nice. I don't know what would be contained in that news story label, "99% fact free" perhaps, but it's an interesting concept.
It could even be done with existing data. On Twitter, an algorithm based on how many likes or retweets something gets or doesn't get, and the way those interactions play out, could be converted into this kind of label.
Or even, here are the top 10 fact checking sites, and based on those, this source gets a certain grade.
As long as the content itself has been filtered through the fact checking sites; the infrastructure for that isn't, I think, generally in place right now.
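The grading idea we're describing could be sketched very simply. Assuming some pipeline existed that returned true/false verdicts for a source's checked stories (which, as noted, mostly doesn't today), the conversion into a letter grade is just an accuracy rollup; the cutoffs below are arbitrary illustrations, not any real rating scheme.

```python
def grade_source(verdicts):
    """Convert hypothetical fact-check verdicts (True = story held up,
    False = story was debunked) into a letter grade for the source."""
    if not verdicts:
        return "ungraded"
    accuracy = sum(verdicts) / len(verdicts)
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if accuracy >= cutoff:
            return letter
    return "F"

# A source where only 3 of 10 checked stories held up earns an F.
print(grade_source([True, True, True] + [False] * 7))
```

As the conversation notes, a grade like this is only as good as the verification behind it, and it needs a stable mapping from account to real person to resist gaming with fresh fake accounts.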
Right. Another suggestion from that same article was that there might be certain negative consequences to spreading false stories, whether that's a series of grades that would demote accounts, or a warning: "Hey, other users, this account is notorious for spreading false stories." Of course, that would raise all kinds of issues in terms of administration, but that was another possibility: grading or demoting accounts as a result of spreading false news.
I'm a fan of that kind of stuff, but when you get down to the level of the individual, it really requires a relationship between a verifiable, actual person and an account. Otherwise, it's just nonsense, because you can keep spinning up fake accounts to do ridiculous things. But if you can get past that part of it, then there's some interesting stuff there.
Listeners, remember that while you're listening to the show, you can follow along with the things we're mentioning here in real time. Just head over to thedigitalife.com, that's just one L in the digital life, and go to the page for this episode. We've included links to pretty much everything mentioned by everybody, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. You can find the Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @jonfollett. That's J-O-N-F-O-L-L-E-T-T. Of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That's G-O-I-N-V-O dot com. Dirk?
You can follow me on Twitter @dknemeyer. That’s @D-K-N-E-M-E-Y-E-R and thanks so much for listening.
So that’s it for episode 249 of the Digital Life. For Dirk Knemeyer, I’m Jon Follett and we’ll see you next time.