Prebunker Mentality
New research in combatting misinformation suggests the truth may have a future yet. But we're solving for the wrong problem.
In the summer of 1981, five men waited in Canada to kill the president.
Most of the assassins were Middle Eastern; one was a blond-haired East German. They all had the same destination in mind: a hotel in downtown Washington, D.C. Their temporary digs would be close enough to the White House to see President Ronald Reagan’s helicopter take off from the South Lawn.
Months prior, American fighter jets had been intercepted by two Libyan planes. The Libyans fired, the Americans returned fire: The Libyan jets crashed into the Mediterranean Sea. The embarrassing ordeal riled Muammar Qaddafi. The dictator had been on an assassination kick recently, having murdered a number of his critics in Europe. Now he had decided to escalate, to kill the president.
The five assassins he dispatched were issued a mobile surface-to-air missile launcher and marching orders to take down Marine One from their hotel window.
When this plot leaked, the Reagan White House stepped up its warnings to the rogue North African state. Qaddafi faced “the most serious consequences” if the hit squads carried out their mission, the administration said. Furious meetings were held to contemplate America’s response to the Libyan threat. Invasion was on the table.
These warnings came as an awful shock to Qaddafi. He had contemplated killing Reagan, sure, as he had many times before. But he never set the plan in motion.
As questions started mounting about the evidence for this supposed plot, the White House vacillated. Maybe the assassins were in Mexico. Maybe they hadn’t arrived yet. But the plans, they insisted, were real.
Except they weren’t. This was a time of conspiracy theory and disinformation, and it was flowing from the inside out. William Casey, Reagan’s Director of Central Intelligence, had adopted a habit of simply making things up to justify covert action. And this seemed to be a prime example of his fabulations.1
Casey’s penchant for thinking outside the box would become very clear in the ensuing years, as the breadth of the Iran-Contra affair — a wild scheme to swap drugs for weapons, making partners of Iranian mullahs and Latin American death squads — came into public view.
In the introduction to a sprawling series of features in LA Weekly on the Reagan administration’s troubling penchant for disinformation, the paper’s editor Jay Levin explained that disinformation is not an “occasional tool” of rogue presidents, but “a long-standing adjunct of policy.” But things had gotten out of control.
Never before has disinformation been so well coordinated or agreed upon by all potential players, and no previous administration thought to begin almost all its initiatives, including many domestic ones, with a disinformation campaign. Disinformation has been as reflexive with this crowd as ‘spin control,’ and it feeds on itself.2
Disinformation, in other words, had become the thing you do to justify the things you shouldn’t do.
And, Levin worried, it would have downstream consequences.
The result is that there is very little issuing from the present government that Americans can believe, though few Americans know yet how pervasively disinformed they’ve been…These themes have helped create the mass psychology that the U.S. is under siege by hostile, terrorist forces at every turn and that our only hope is to rally ‘round the president and let him fight back for us.
Governments used these tactics for ever-greater ends in the decades that followed, culminating in the invasion of Iraq, which leveraged arguably the most extreme disinformation campaign ever employed by a democratic Western government.
Since then, we finally learned just how pervasively disinformed we’d been. And we all went a little mad. We began producing the disinformation ourselves.
In this era of misinformation and conspiracy theories, when distrust is corroding the contacts that allow us to communicate, a whole industry has emerged, hunting for a new strategy to bring trust back.
Fact checking? That’s so 2016. Debunking? A thing of 2020.
“Welcome,” the Washington Post wrote recently. “To ‘pre-bunking.’”
This week, on a very special Bug-eyed and Shameless, a warning that we are still solving for the wrong thing. Decades after the spooks and hawks obliterated faith in institutions, we are still trying to reverse-engineer the time-tested message of “trust us.”
It’s not going to work.
Instead, we’re going to have to do the more difficult thing.
Up for the Prebunk
In 2013, the World Economic Forum conducted a bit of futurecasting. One of the key threats to the world, they warned, would be the proliferation of “digital wildfires in a hyperconnected world.”
These wildfires, of misinformation on a massive scale, could be contained by criminal law and digital censorship — but, the WEF wrote, better not. To keep our society running on truth, “generators and consumers of social media will need to evolve an ethos of responsibility and healthy scepticism.” Institutions will need to help users get to that spot, the WEF noted.
The ensuing years saw a steady rise in research on how, exactly, we are supposed to fight these fires.
A team of researchers, in 2015, published Misinformation and How to Correct It. They surmised that countering misinformation would largely rest on three tactics: Repeat the corrections to misinformation as often as possible (fact-checking), provide context as to why the initial misinformation is believable but wrong (debunking), and warn people about misinformation before they encounter it (prebunking).
Prevention is better than cure, argued researchers from the University of Kent in 2017. Preparing people for the conspiracy theories they may soon encounter “may in some way inoculate people from the potential harm of conspiracy theories,” they found.
By 2020, experiments had chugged along. Three social scientists went so far as to develop an online game, where users played the role of a fake news con artist, generating real-sounding disinformation — and, they found, users who played the game came out of it much more resilient to actual misinformation. As the COVID-19 pandemic hit, a raft of new studies, guidebooks, and strategies proliferated — many pointed to prebunking as the killer tactic to harden populations against dangerous misinformation.
This crystallizing idea of prebunking began in the background, seen as less interesting than other intervention options. Initially there was an obsession with fact-checking (Dispatch #68), then with debunking, then content take-downs, then content warnings.
But, one by one, those tactics fell out of fashion (which we’ll talk about in the next section), so now it is time for prebunking to step up to the plate.
As the Post writes:
Modeled after vaccines, these campaigns — dubbed “prebunking” — expose people to weakened doses of misinformation paired with explanations and are aimed at helping the public develop “mental antibodies” to recognize and fend off hoaxes in a heated election year.
The Post is particularly keen to highlight a new campaign, from Google, to bring this prebunking to the masses. See for yourself:
The Post concedes that the evidence for the overall real-world efficacy of prebunking is still somewhat shaky, but it retains ample optimism:
The moves come after nearly a decade of floundering initiatives to stem voting misinformation, leading researchers to a sobering conclusion: It is nearly impossible to counter election misinformation once it has taken root online. In a year when law enforcement officials are warning that artificial intelligence could supercharge election threats, election officials say prebunking could be their best hope.
Ok, I have bad news.
For starters, this Google campaign is essentially identical to public awareness efforts that have run over the past few years — and they are, really, just variations on the media literacy education that has run for decades. (And they feel like a Corporate Memphis-era take on 90s PSAs.) To that end, they may be perfectly effective at improving overall media literacy, but we shouldn’t put too much hope in the idea that these meagre YouTube ads will make much of an impact on any of our current social ills, or disrupt any kind of concerted effort to go after our democratic system. It is also, let’s be real, cover-your-ass PR from Google, which is worsening its core product, arguably the single most important portal to truth on the internet, to maximize profit while inserting large language models into everything without much regard for the impact on trust in information. So, fuck Google.
Beyond that, while research has shown a benefit to prebunking, those same studies caution that its positive effect tends to be temporary, with decay starting in a matter of days.3 Other studies have found that prebunking is measurably less effective than debunking.4 And basically every study in this field has warned that any positive findings for these strategies may not account for the messy, complicated, and emotional realities of real information environments.5
Even if prebunking emerges as the single most promising counter-misinformation tactic — or, more likely, just becomes one tool amongst many — there’s still a big but: None of this will fix what we want it to fix.
In a new discussion paper in Nature, Misunderstanding the harms of online misinformation, five prominent researchers from across disciplines tried to reorient the conversation around misinformation. They set the table nicely with some useful statistics:
The 20% of US citizens with the most conservative information diets were responsible for 62% of visits to the 490 untrustworthy websites described above during the 2016 campaign. Similarly, 6.3% of YouTube users were responsible for 79.8% of exposure to extremist channels from July to December 2020, 85% of vaccine-sceptical content was consumed by less than 1% of US citizens in the 2016–2019 period, and 1% of Twitter users were responsible for 80% of exposures to links from dubious websites during and immediately after the 2016 US presidential campaign
The researchers conclude that this misinformation content comprises a tiny fraction of what the average social media user sees. And, even then, mere exposure to misinformation is not immediately infectious — only a tiny fraction of the tiny fraction of those who come across an anti-vaccine post will find themselves sucked into the misinformation vortex.
While various anti-misinformation strategies may have some marginal benefits for those who do interact with viral misinformation, the researchers argue a better focus ought to be on improving platform transparency to better understand the issues at play and to study more closely the vocal and radical few who are most likely to engage with this misinformation. (The authors, worth noting, have received funding from Meta and Google.)
Fundamentally, I think they’ve correctly underlined the weakness in how we engage on the topic of misinformation.
Strategies built around the idea that we can either neutralize or inoculate against misinformation get everything turned around. The people most likely to trust these prebunks and fact checks are also the most likely to view misinformation with skepticism — we are primarily training the already-well-equipped. Those who most want to believe the misinformation are also those most inclined to distrust official sources of information, be it a government agency or Google, and that includes anti-misinformation messages.
Misinformation, in that sense, is a symptom, not the disease. While misinformation may contribute to radicalization, it is not the underlying factor at play.
By treating the symptoms, we may make the underlying problem worse.
Who Will Miss the Misinformation Researchers?
I have, as may have become evident over the course of this newsletter, a bit of a bone to pick with ‘the misinformation-fighting industry.’
Not because I have a problem with the individual practitioners (I don’t). In fact, I think we’ve done those practitioners a massive disservice. By taking their research and misapplying it, asking it to fix a slew of problems it simply cannot fix, we’ve needlessly politicized their work and made them a target.
We had, for example, some good research that suggested deplatforming — that is, mass banning and enforcing rather stringent community standards on big, open social media and broadcast networks — could produce better online conversations.
But then we went and actually did it, and it gave us all sorts of externalities. While it may have made some online experiences better for some individuals — and that’s nothing to sniff at — we simply didn’t observe the overall improvement to our political discourse that we were promised. As a team of researchers found last year, after Twitter’s post-January 6 purge, mass-banning pro-insurrectionist accounts didn’t do much good. In fact, “hate speech spiked dramatically in the week of January 6th compared to the month before” while far-right platform Gab saw not only a surge in traffic but that it “became much more toxic, with hate speech rising to levels that were much higher than in previous months.”6 Other research observed similar results for Facebook’s anti-vaccine purge.7 (A new study, for what it’s worth, did observe some benefits to deplatforming in reducing misinformation on Twitter.8)
The platforms didn’t do this alone. As was popularized by the so-called ‘Twitter Files’ (which I broke down for WIRED at the time), platforms were coordinating with the U.S. government and academics to identify trends and narratives — sometimes coded as part of a foreign influence campaign, often not — that ought to be aggressively, sometimes even proactively, removed from major platforms. Recall how Twitter forbade users from posting the New York Post story about Hunter Biden’s laptop, operating under a (justifiable) concern that it could have been a Russian disinformation operation; or how Facebook took down every post suggesting that COVID-19 leaked from a lab.
While those Twitter Files might’ve been cloaked in all sorts of absurd innuendo and wild leaps, they described a real problem: Social media companies, in conjunction with government and outside experts, made some big calls in the gray zone and got some big things wrong. In the process, they wound up convincing the conspiracy-minded that the conspiracy was real. If it had worked, and actually reduced the spread of misinformation, maybe it would have been worth it. But it didn’t, and it wasn’t.
Trace the line of consequences, and you can watch the massive backlash rise in opposition to those mass deplatformings. As the insurrection was rebranded as “legitimate political discourse,” Twitter’s purge became not just an act of censorship but proof of elite collusion. It not only prompted Elon Musk’s takeover of the platform but also fuelled the popularity of Truth Social, Rumble, and a slew of other conspiracy-oriented websites. And, to cap it all off, it made misinformation researchers prime targets for attack.
Renée DiResta, one of the best misinformation researchers out there, recently wrote of how a massive harassment campaign from the far-right turned her into “CIA Renee.” The Stanford Internet Observatory, where DiResta used to work, is on the brink of total collapse. Harvard shut down its counter-misinfo program earlier this year. A number of these researchers have recently been targeted by a completely bad-faith hack-job from Matt Taibbi, and it’s likely to produce another round of Congressional harassment hearings.
If it were just that these misinformation research teams were being targeted by right-wing reactionaries, I think it would be a lot easier for us to rush to their defence. But here’s the uncomfortable parallel truth: We didn’t really value what they produce.
This isn’t to say that their work isn’t valuable. It is. But we keep demanding that these researchers furnish us with tools that will fix our current political polarization and extremism problems. We kept hoping that debunking would be a stake through a vampire’s heart, prebunking a braid of garlic.
We should know by now that social science simply isn’t that good — but we should have known it then, too.
Again, these tactics may have some general benefits. But debunking and prebunking won’t avert efforts to overturn the 2024 presidential election, nor could they have thwarted vaccine skepticism. And there is no magical trick that will fix misinformation going forward, if only we can implement it just right. And we should stop expecting these misinformation researchers to come up with such a trick, and we should stop co-opting them to implement their research on a wide scale without any consideration of the consequences.
We need to stop thinking about top-down solutions, and start thinking about bottom-up ones.
Something to Believe In
I’m no academic. I am, in fact, highly allergic to learnin’.
But if there’s one thing I always appreciate, it is academics’ ability to sink enormous amounts of time and energy into a certain strategy or idea, to stake a considerable amount of professional capital on it in the process — and then to simply move off it, should better evidence arise.
Because while the media and governments may continue tilting at windmills, convinced that all of these heavy-handed anti-misinformation tactics may yet work, the best anti-misinfo academics are already moving on to the next generation of research on how to bring truth back to society — and there’s good reason to think that they’re homing in on some smart ideas.
Kate Starbird, director of the University of Washington’s Center for an Informed Public — and a primary target for Taibbi’s smear job — recently published a fascinating study of how well-run online communities can be a super effective tool in fighting misinformation. Studying how Croatian-language Wikipedia governance had fallen to disinfo-friendly nationalists, Starbird and her fellow researchers offer a striking illustration of how important online community governance can be:
Researchers studying online disinformation have tended to focus on identifying features of problematic content and arresting the processes through which it is disseminated. Accordingly, many of the solutions currently under development to address disinformation campaigns and other influence operations in the context of Wikipedia involve the introduction of automated tools to detect problematic content and behavior. These tools may empower good faith administrators to fight “one-off” risks like vandalism more efficiently, but they do not address the more fundamental question of how various institutional arrangements condition how power is configured in online communities, and how those dynamics in turn affect information integrity outcomes. […]
As this study has shown, variation in self-governing structures at the community level can indeed result in divergent outcomes for otherwise similar communities. This suggests that, at the very least, certain governance arrangements can provide checks on the accumulation of both technical and social power in online institutions, ultimately resulting in more democratic, participatory, and resilient self-governed communities. Our work also suggests that this can reduce other social problems like disinformation.9
It’s such a great bit of research because it starts thinking, in a round-about way, like the producers and disseminators of misinformation.
If crafting conspiracy theories is a participatory sport, could the truth be disseminated in a similar way?
We don’t think about it that way, do we? We sort of accept that Wikipedia, despite our initial fears, has become an enormous source of truth and a bulwark against misinformation — but we don’t really ask why.
Starbird and her colleagues, looking at successful iterations of Wikipedia, have described clearly how an open, transparent, decentralized bureaucracy with more formal institutions produces really good results. Anyone can contribute, so long as they adhere to the clear standards and guidelines — and individual contributors are conscripted to enforce those rules. It makes for a platform that is agile, trustworthy, and transparent.
This runs pretty counter to our normal way of thinking about things, where we prefer to have single voices of truth — government, health authorities, scientists — whose word we then parrot out to the masses. But, intellectually, we know that’s a terrible way to convince people who distrust authority.
To replace “trust us, stop asking questions” with “come ask questions with us” is certainly a fascinating transition for the effort to fight misinformation. And this is a thing we can do, although it is not quick or easy. (Dispatch #47)
This problem goes far beyond the internet, and it can’t be fixed with technological solutions. Ours is a trust problem. The confluence of governments that got really comfortable lying about the big things — whether it’s Qaddafi’s hit squads or Saddam’s chemical weapons program — and the democratization of information online meant that we became much more skeptical just as we learned how frequently we’d been lied to.
In practice, it means that we’ve got a core group of people who are willing and eager to create, adopt, disseminate, and remix misinformation into conspiracy theories. Then we have a bigger chunk of people who are more discerning, but also deeply distrusting and/or viscerally partisan. That means that, as these narratives take shape, charlatans in the media and unscrupulous politicians adopt them as emotional truths. At that point, these ideas cannot be debunked or fact-checked, because they have become part of adherents’ political identities. This creates a push-pull effect between partisan and politician, as they drag each other further down the line of radicalization.
Belief in misinformation today is not a question of ignorance, but of faith. And it’s because, to the faithful, the lies are more trustworthy than our institutions.
This is how the MAGA movement came to see COVID-19 vaccines as dangerous, the election as stolen, and the January 6 insurrection as a peaceful gathering of patriots (that was also an FBI false flag).
There is no way to prebunk, deplatform, or fact-check away this problem. And it was grossly unfair of us to expect these misinformation experts to have a tonic that could heal it. But because we placed these unrealistic expectations on them, we have been disappointed when they don’t deliver. That made it so much easier for their respective institutions to ditch them when the water got choppy.
So let’s stop doing the same thing and expecting better results. We need to stop looking at this as a purely informational problem and start recognizing it as a trust problem.
To that end, institutions — academic, journalism, government — have a lot of work to do.
That’s it for this week’s edition — dispatch #99!
Look out for a special 100th edition later this week.
For Canadian subscribers: In the Toronto Star this past weekend, I had a column breaking down paranoia over foreign interference in Ottawa. I also guest-hosted Canadaland, talking about Ottawa’s efforts to finance journalism.
As always, paying subscribers can comment below. Feel free to kick off the conversation: How do you see the future of truth playing out?
Until later this week. (Hopefully.)
Target Qaddafi, New York Times. February 22, 1987
Disinformation Gate, LA Weekly. March 19, 1987
Countering Misinformation and Fake News Through Inoculation and Prebunking, Stephan Lewandowsky and Sander van der Linden
A Comparison of Prebunking and Debunking Interventions for Implied versus Explicit Misinformation, Li Qian Tay, Mark J. Hurlstone, Tim Kurz, Ullrich K. H. Ecker
Prebunking Against Misinformation in the Modern Digital Age, Cecilie S. Traberg, Trisha Harjani, Melisa Basol, Mikey Biddlestone, Rakoen Maertens, Jon Roozenbeek, and Sander van der Linden
Cross-Platform Reactions to the Post-January 6 Deplatforming, Cody Buntain, Martin Innes, Tamar Mitts, Jacob Shapiro
The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic, David A. Broniatowski, Joseph R. Simons, Jiayan Gu, Amelia M. Jamison, Lorien C. Abroms
Post-January 6th deplatforming reduced the reach of misinformation on Twitter, Stefan D. McCabe, Diogo Ferrari, Jon Green, David M. J. Lazer, Kevin M. Esterling
Governance Capture in a Self-Governing Community: A Qualitative Comparison of the Croatian, Serbian, Bosnian, and Serbo-Croatian Wikipedias, Zarine Kharazian, Kate Starbird, Benjamin Mako Hill
I'll have to go back and read all of that again. I found myself skimming ahead to see if two rather major points would come up, and unless I skimmed too lightly, they didn't.
1) Opens with the infuriating tale of Wm. Casey (and thank-you; I'm learning how quickly such crimes are forgotten). But the rest skips the obvious point that there are real conspiracies. Casey and many others in the "Intelligence Community" conspired to sell those various lies and affect policy. The first Iraq War was sold with the fake babies-thrown-from-incubators story that the entire Bush I government conspired to sell. The second, they just invented a conspiracy theory about a dictator giving his worst enemies a nuke for laughs.
How are you going to debunk unless the government itself comes clean? In the USA, at least, they have a lot of admitting to do.
2) It should start in school. School has been remiss in teaching basic logic and logical fallacies; cowardly about teaching that MLMs and most herbal remedies are scams. If you can't equip kids to avoid Herbalife, you can't protect them from Steve Bannon.
It's just plain history how Big Tobacco muddled the science on smoking, too; I think schools can get away with criticizing that. I think they could also teach about gambling addiction, the actual certainty of losing when you gamble, and look at the advertising of gambling - as a fun night out with attractive young people - versus the reality.
Broadly, they could teach that "advertising", "propaganda", and "public relations" all describe the same process, distinguished only by motive. Go over how advertising lies on many levels, study historical public relations statements versus the truth.
Remarkable how careful I have to be even suggesting that schools teach things that are inarguably true, but would harm the business model of a profitable business like Amway. Or any casino.
Very glad I came back to read it all again. This topic is SO covered, I tend to start skimming when I see a familiar intro.
The Wikipedia matter is really crucial, I think. Radical transparency rather than a lot of control: transparency is clearly the winning strategy. As it was when science was getting invented!
The other example to look at, Justin, if you do a whole piece on it, is Slashdot. May have been the first "blog" before there was such a word, certainly the first big one. Still going, despite being sold off to owners who only think about money (lost some people then, isn't as good now, but it's still going).
Slashdot lets anybody post anything, but *randomly chosen* members rate stuff up or down. There's no banning, but every rater can "shadow ban" if you will, at least for people who only read stuff with thumbs-up ratings. (Any one rater can add +1 or -1, and ratings go from -1 to 5; lots of people only read "above 3" posts.) It works!
Slashdot tackled the original huge controversies, like PC vs Mac - the rhetoric is so vicious because the stakes are so small - and didn't fold up. It handled global warming, nuclear power.
All without top-down control! Slashdot's success has not been studied enough.
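For readers who want to see how simple that mechanism really is, here is a minimal sketch of the threshold-moderation model the comment describes. The +1/-1 moderations, the -1 to 5 score range, and the habit of reading only posts rated above 3 come from the comment itself; the starting score, the names, and the random selection of raters are assumptions for illustration, not Slashdot's actual implementation.

```python
# A toy model of the threshold moderation described in the comment above.
# Scoring rules (+1/-1 moderations, scores clamped to -1..5, readers filtering
# at "above 3") come from the comment; everything else is assumed for illustration.
import random
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    score: int = 1  # assumed starting score for a new post

    def moderate(self, delta: int) -> None:
        """Apply a single +1 or -1 moderation, keeping the score within -1..5."""
        if delta not in (-1, 1):
            raise ValueError("moderations are +1 or -1 only")
        self.score = max(-1, min(5, self.score + delta))


def run_moderation(posts: list[Post], raters: list[str], rounds: int = 3) -> None:
    """Randomly chosen raters each nudge a random post up or down."""
    for _ in range(rounds):
        for _rater in random.sample(raters, k=min(5, len(raters))):
            post = random.choice(posts)
            post.moderate(random.choice((-1, 1)))  # stand-in for a human judgment


def visible(posts: list[Post], threshold: int = 3) -> list[Post]:
    """What a reader browsing at the given threshold actually sees.

    Nothing is banned or deleted; low-rated posts simply drop out of view.
    """
    return [p for p in posts if p.score >= threshold]


if __name__ == "__main__":
    thread = [Post("alice", "PC vs Mac, round 500"), Post("bob", "Nuclear power, again")]
    run_moderation(thread, raters=["carol", "dave", "erin", "frank", "grace"])
    for post in visible(thread, threshold=3):
        print(post.author, post.score, post.text)
```

In this model nothing is ever removed: a down-moderated post simply falls below most readers' chosen threshold, which is the "shadow ban, if you will" the commenter describes.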