The Substack Verschlimmbesserung
Deplatformings will continue until the Nazi situation impraves
It’s 1966, and the era’s myriad crises feel like maybe, just maybe, they’re on a path to resolution.
On the front page of the Fort Worth Star-Telegram’s morning edition was a dispatch from one of its reporters, recounting his quiet, if nerve-racking, journey down the Mekong River in Vietnam. “This is good territory we’re in now,” an American soldier told the journalist. Kids were appearing on the riverbanks, waving and shouting “Hall-o, OK,” the writer noted. That was a good sign, his military escorts explained: “Watch out when you pass through a village where there are no kids or where the kids don’t wave and say ‘OK.’ That’s the best sign there is that VC are in the area.”
Elsewhere on the page was a dispatch from the trial of Talmadge Hayer, the man who shot and killed Malcolm X. Next to that was a report from Egypt, where a former Israeli fighter pilot had just landed a biplane in the United Arab Republic, proclaiming that he was there to lead peace negotiations to avert all-out war between Israel and the Arab world.
Whatever hope was embedded in these reports soon crashed into reality. The Vietnam War only grew bloodier and more pointless. Whatever relief Hayer’s conviction brought was upset when Martin Luther King Jr. was assassinated in 1968. The freelance Arab-Israeli peace negotiations didn’t work, either: The Six-Day War broke out a year later.
Beyond the misplaced optimism was the bit of news I was actually hunting for in the paper. It appeared in the left sidebar, in a slice-of-life column written by humorist George Dolan.
Dolan, who wrote his “This is West Texas” column for three decades, relayed that the Hudson Word Mint — the work of Sam Hudson, a Boston-based freelance writer — had released its newest creation. The Mint, Dolan explained, “coins words much-needed, but previously missing from the American language.”
“The Germans have it,” Hudson told Dolan. “In their word, verschlimmbesserung, which signifies a bit of progress that has backfired, an act of human advancement which only worsens the situation. As in: A new entrance ramp added, at the wrong place exactly, to an overcrowded freeway; a piece of corrective legislation which makes the law thus corrected less comprehensible than it was, etc. etc.”
Hudson’s English equivalent: Impravement.
“Hudson promises that the verb, ‘imprave,’ will be released soon,” Dolan writes.
This week, on a very special Bug-eyed and Shameless, I want to talk about a well-intentioned push to clean up the internet. In particular: Right here, on Substack. And I want to argue that we’re thinking about this all wrong.
Earlier this month, I wrote a few words about the growing discontent with Substack’s moderation policies. (Dispatch #92) In particular, about how journalists and the media would be mad, purely on the basis of self-preservation, to abandon a trusted distribution model. I had not planned on writing much more, because I always fear gazing at my own navel. But here we are.
I wanted to come back to the topic because I think it marks a useful inflection point for the internet, media, politics, and society more broadly. Extremism is on the march, and we’re desperate for solutions to push back.
But the great abandoning of Substack is a bit of verschlimmbesserung.
If you’re out-of-the-loop on this particular controversy, let me give you the quick synopsis.
In November, writer Jonathan M. Katz wrote a piece for The Atlantic making the case that Substack “has a Nazi problem.” It’s a good case: Katz identifies a number of publications on Substack, from the small to the big, from the soft white nationalist to the overtly neo-Nazi. At least a few of them seem to be generating substantial revenue for these extremist figures. He writes that Substack’s commitment to free speech “goes beyond welcoming arguments from across a wide ideological spectrum and broadly defending anyone’s right to spread even bigotry and conspiracy theories; implicitly, it also includes hosting and profiting from bigoted and conspiratorial content.”
Katz doesn’t spell out a specific solution and, quoting journalism professor Whitney Phillips, even notes how banning fringe figures may only entrench their victim complex and further the radicalization of their fans. The real issue, he continues, is whether Substack is promoting the offending publications, and whether its technology is forging an extremist community.
The reporting kicked off a campaign to demand that Substack improve its moderation practices — enforcing, for starters, its prohibition on any content that “promotes harmful or illegal activities, including material that advocates, threatens, or shows you causing harm to yourself, other people, or animals.” Implicit in the petitioners’ letter was a demand that Substack expand those policies to forbid outright Nazi fetishization, antisemitism, and perhaps other forms of hate.
Substack, initially, demurred. As Substack co-founder Hamish McKenzie wrote:

We don't think that censorship (including through demonetizing publications) makes the problem go away — in fact, it makes it worse. We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power.
Under more pressure, and after a few rounds of back-and-forth, Substack announced that it would remove a handful of avowedly neo-Nazi publications that were clearly violating its terms of service.
Unassuaged, many Substackers opted to leave anyway. Katz moved his publication to the startup Beehiiv, arguing that Substack’s “Nazi-tolerating policy” meant that everyone was at risk. The company’s handling of the issue, particularly its defensive posture in response to the complaints, showed Substack’s allegiances: “Who wouldn’t they sell out if they saw some potential gain?” he asked. Marisa Kabas, who organized the initial campaign, wrote that Substack “is still not protecting its publishers and users against dangerous white supremacist content, and it’s certainly not protecting them against transphobic content.” Gone, too, were Garbage Day and Today in Tabs. (Another one of my favorites, Max Read, remains.)
Platformer News, probably Substack’s biggest news publication, was also packing its bags. “I’m not aware of any major US consumer internet platform that does not explicitly ban praise for Nazi hate speech,” its founder, Casey Newton, wrote. Platformer would be moving to decentralized startup Ghost.
Like everything else, these actions prompted a series of equal and opposite reactions. The usually interesting, if insufferably smug, Freddie deBoer took potshots at the departing authors and made a few compelling points along the way. Calm Down, from the enjoyably contrarian Ben Dreyfuss, declared the saga evidence of the end of deplatforming. The reliably unreliable Jesse Singal wrote a supposed takedown of Katz’s reporting that really ended up confirming most of what Katz found.
That, in a nutshell, is what has transpired here over the last few weeks. As I explained earlier this month, and in response to a few comments: I wish Substack were slightly more aggressive in banishing actual Nazi publications. And I’ve privately made that point to Substack people. I think their decision to ban the worst offenders was satisfactory.
But this isn’t a dispatch about me, or even those who have left — I think that discourse is pretty interesting for people who religiously use Substack, and probably pretty inside baseball for everybody else.
No, this is about the broader problem of what to do about the rise in misinformation, extremism, conspiracy theories, and online hate.
Having spent years trawling through online spaces where the worst of humanity lives — whether it’s Islamic State chatrooms, 4chan’s /b/ board, Hamas Telegram channels, neo-Nazi webforums, Incel digital hangouts, etc. — I have basically come around to McKenzie’s view of moderation. So here I go, trying to make that case.
The Nazis Are Always Greener On the Other Side
When Newton announced that Platformer would be going to Ghost, he conveyed a promise from Ghost’s CEO: All pro-Nazi content would be removed, “full stop.”
It’s a slightly misleading promise, though. Ghost’s platform is decentralized and open source, technological concepts that usually lend themselves to a much more laissez-faire approach to moderation. Anyone can grab Ghost’s code and go off to launch their own newsletter, much like how WordPress is the technology behind a huge variety of blogs and news sites.
Ghost does have a hosting solution — which works, more or less, like Substack. It’s those publications, I imagine, that Ghost is vowing to police.
And, sure enough, I’ve identified a couple of publications that were using Nazi iconography, hosted by Ghost, that are now offline. (Though it’s unclear if they were removed by Ghost itself.)
There are, however, still some Ghost publications that seem to be just as bad as many of the offending Substack outlets.
One relatively popular Ghost newsletter is a clearinghouse for all kinds of conspiratorial, antisemitic, and garbage content. In particular, it shares a lot of tripe about the “Khazar Mafia” — an old antisemitic myth made popular by David Icke. (Dispatch #3) This newsletter blames the Jewish people for the “Holohoax.” Another, similar publication rehashes the same antisemitic and Holocaust-denial conspiracy theories, but adds the spicy take that Eva Braun was Barack Obama’s grandmother.
There’s also anti-vaccine content, the equation of queer theory with “sanctioned pedophilia,” and other far-right nuttery. And Ghost is earning money from those publications and delivering them services in return, just like Substack does.
I don’t go through this to dunk on Ghost, which seems like a genuinely good platform. Without a doubt, it seems to have a smaller proportion of objectionable content, though that may be because it is considerably smaller and less oriented towards politics. I just want to underscore that having an open platform, where anyone can come along and set up their own publication, quickly becomes a moderation nightmare. It is manageable if you opt to stay small, but it becomes very hard if you want to get big.
Making Sacrifices
A few months ago I wrote a piece of tech futurism about what I, nonsensically, called The Bird Internet. (Dispatch #46) Imagine my glee when, during a recent conference, a speaker recommended An Illustrated Field Guide to Social Media.
The Field Guide’s editors, Chand Rajendra-Nicolucci and Ethan Zuckerman, imagined the guide might help users identify the traits and characteristics of different pieces of the internet, as though they were birds in the park. (They also challenge the notion of homophily, which I spent some time debunking in Dispatch #64. Synergy!)
The result is a fascinating look at how we want the internet to operate, how it actually works, and how we might improve it. For our purposes, they make a few really acute observations.
The Field Guide authors note that many people do want ‘civic’ platforms — that is, platforms on which we can debate, connect, share news, build societies, right wrongs, and so on. But they underscore that we have a hard time building networks for that purpose.
When Egyptian activists found that generalist sites like Facebook were “powerful in bringing angry people out into the streets [but] were far less useful in enabling careful deliberation about a way forward for the country,” they built their own alternatives. These new social media hubs, geared towards collective action and mass mobilization, were flops. They barely registered any traffic, bled money, then shut down.
A similar thing occurred with Front Porch Forum. Its founders wanted to create friendly digital networks that could map onto real-life neighborhoods, much like Nextdoor, but without all the anger. Front Porch Forum built in an automatic moderation tool, requiring that each post be vetted before appearing on the site. Nextdoor has 40 million active users a week. Front Porch has yet to expand beyond Vermont.
There are nearly as many models for platform governance as there are platforms. Our helpful internet ornithologists wrote a very useful taxonomy of social networks, which helps break that down. But suffice it to say: Creating centralized, top-down management models is extraordinarily difficult and sometimes actively bad. Facebook’s oft-changing standards have chased off QAnon, but they have also led to content from Black users being removed as “reverse racism.” Twitter’s campaign against misinformation led it to automatically ban posts about Hunter Biden’s pilfered emails, and now that system has been weaponized against trans people.

Most people want to be on open platforms. They do not like an unseen company micromanaging their interactions. Platforms that do try to enforce changing norms find themselves frequently offside. And arbitrary systems of aggressive moderation can be, and often are, used against good actors.
Diversity is the Point
The digital zoologists note that big digital services are not uniform communities. They tend to have diverse cultures and subcultures. Twitter, for example, was widely known as a spot for constant updates on news, politics, and sports — but it also had Black Twitter, weird Twitter, actual Nazis, etc. Smaller niches that the average user might never even notice.
Subcultures can influence a broader culture, but the relationships aren’t exactly simple.
Reddit was once reviled for its mean and reactionary culture. Over time, however, positive and diverse subcultures, like r/lgbt, helped enact a vibe shift. Even though it maintained some horribly misogynist communities, the Field Guide authors write, its overall culture was “surprisingly healthy and even wholesome.” It was a platform where the viciously paranoid r/The_Donald co-existed with r/aww. At worst, it seems that nasty subcultures were essentially quarantined within the larger polity. At best, it seemed the diverse nature of Reddit had a moderating influence on its angriest users.
That is, more or less, how Substack works. You can very easily subscribe to, and engage with, a single newsletter and ignore the rest. Or dive deep into your chosen interest and never leave it. There are Substack communities of knitters, bakers, entrepreneurs, and so on, and they are orders of magnitude bigger than the platform’s far-right rump. There is, increasingly, a central stream where all these various communities meet, but it is an expansion, not the core purpose.
And yet, returning to Reddit: The company did opt to ban r/The_Donald, for a variety of reasons. Its users fumed and migrated to their own platform. When researchers studied the migration, they found some good news — activity in the community decreased, as did its ability to attract new users. However, they did find “significant increases in radicalization-related signals.”
Much as with the community of Twitter users who flocked to Gab after the failed insurrection, moving users to a homogeneous community made them more radical. And of course it did. If you get banned from the neighborhood pub for excessive and colorful swearing, things are likely to get more salty if you move to Fuckhead’s Grill.
Removing extremist communities may be good, if their ability to attract new users remains low. But you can only banish so many people before the community of exiles becomes a thriving metropolis of its own. Through waves of mass deplatformings from Facebook, Twitter, YouTube, Reddit, and elsewhere, big tech banned so many people that a far-right alt-media has exploded in popularity. Rumble, the conspiracy-oriented clone designed to challenge YouTube’s strict moderation practices, now boasts 58 million monthly active users and has hosted several presidential debates. Truth Social, rife with QAnon kookery, continues to be the platform of the possible next president. Twitter is run by a guy captured by the anti-woke mind virus. (Dispatch #81)
We have so balkanized the internet, trying to force ‘good’ platforms into a civic mission that is neither desired by their users nor economically wise, that we have fostered the creation of a purpose-built radicalization machine.
Substack has rejected this pressure — likely more out of financial self-preservation than altruism — and many users have opted to self-isolate in protest. What a mistake!
We have accepted that hate-oriented communities have a corrosive effect on our broader culture without accepting that big conversations have a calming effect on those radical ideologies and their followers. Yet we know that keeping people in diverse communities is good for them, and that shunting them into homophily is unnatural and unwise.
We should want r/aww, rife with adorable cat videos, down the block from r/The_Donald. We should hope to see as little extremist content as possible, while also hoping the extremists see as many cute cats and their associated discourse as possible. And we should want these users to at least play by some basic rules, such as forbidding violent rhetoric, instead of congregating in places where bad behavior is tolerated or even encouraged.
Not every platform should be so open, of course. Some platforms should be small, or civic-minded, or kid-friendly. There is nothing wrong with nuking all far-right presence on Minecraft, for example, because we want Minecraft to be a safe and encouraging space for kids, not an outlet to hash out the politics of irregular migration. Some space on the internet should be safe by design.
But my fellow writers, especially those who cover the increasingly toxic nature of discourse and politics, should want to be on a platform used by those susceptible to Nazi demagoguery. We are writing to educate, in some cases to agitate, and occasionally to deradicalize. (Dispatch #20) You can’t do that if you’re speaking to a self-selected audience in a forum with an implied admissions test.
The Recommendation Engine
I’ve been a bit unfair to the ex-Substackers thus far, because there is one key piece to their argument I haven’t dealt with. As Casey explains:
Ghost tells us it has no plans to build the recommendation infrastructure Substack has. It does not seek to be a social network. Instead, it seeks only to build good, solid infrastructure for internet businesses. That means that even if Nazis were able to set up shop here, they would be denied access to the growth infrastructure that Substack provides them. Among other benefits, that means that there is nowhere on Ghost where their content will appear next to Platformer.
It’s both compelling and confusing.
For starters, I agree: I do not want Bug-eyed and Shameless readers being recommended Nazi blogs. But, inversely, I do want readers of Nazi blogs to be recommended Bug-eyed and Shameless.
But more importantly, I wanted to figure out whether Substack’s machinery helped push people into rabbit holes. So I set up a dummy Substack account and proceeded to follow all manner of conspiratorial and hate-oriented publications, with some help from Katz and others. After a few days of work, I had amassed a list of 71 Substack publications.
Of those 71, I would say a half-dozen are outright Nazi, both neo- and the old-school kind, and a couple are using some Nazi iconography. A bigger number are broadly white supremacist, but the majority are fairly pedestrian far-right, at least by the standards of our current climate. They range from a QAnon outlet with 47,000 subscribers to a hub for longwinded essays on Catholic chauvinism that appears to have just a few hundred. Over the course of this survey, I saw only a couple of instances that I would consider a violation of Substack’s terms: A suggestion that public health figures ought to be executed, and organizing for an unapologetic hate group.
I put together this list of publications to see how Substack’s system would respond: Would it give me more of what I want, or push me towards alternative content? The results are a mixed bag.
Certainly, the recommendations chosen by these publications tend towards the radical, but not exclusively so. Plenty of far-right Substacks are also recommending fellow writers dedicated to philosophy and health advice — stuff that probably isn’t great, but isn’t actively bad.
When it comes to algorithmic recommendations suggested by Substack itself, I saw plenty of conservative content, but it was certainly more mainstream than the newsletters I had selected for myself. There were some longwinded essays on the future of the new right, with the usual reverence for tradition and social customs; a Matt Taibbi podcast; and an interesting-but-illegible history of anonymity on the internet. I would liken it to being recommended Fox News after binging Newsmax.
At one point, Substack recommended the top healthcare-related posts from my network, which included some ‘died suddenly’ anti-vaccine hokum. But it also included an interesting dispatch from some anti-vaccine doctors who were actually fact-checking misinformation on their own side: No, they wrote, 17 million people didn’t die from getting the COVID-19 vaccines — it’s probably more like 2,000. (Still wrong, but, hey, much less wrong.)
Occasionally, however, I found Substack slipped in publications that had absolutely nothing to do with my apparent obsession with anti-wokeness and the renaissance of a Judeo-Christian world order. It suggested a nice meditation on productivity, a publication about good management from a former Amazon executive, and a Substack all about cycling. Most interestingly, it recommended a Substack about leaving Christianity.
The Substack page for my fake extremist is by no means moderate or diverse. The authors I have chosen to follow are pumping out conspiratorial, sometimes racist, occasionally borderline-violent content that pollutes my alt-homepage and dummy email inbox. That’s what I asked it to do. But the Substack algorithm lets little blades of grass poke through.
The ex-Substackers warn that while the far-right rump on Substack may be small now, this whole saga — which they blew up, mind you — might be a bat signal to more bad eggs, who will arrive and grow their proportion on the platform. That may well be true. And, if so, that equilibrium is only made worse by a mass exodus of thoughtful people.
A Real Impravement
This dispatch is not meant to simp for Substack, nor to beat up on anyone who left.
Substack needs competition, and hopefully this shift helps to create new, serious players in the newsletter publishing game. I’ll certainly keep following Platformer, The Handbasket, Garbage Day, etc. on their new homes. I hope they succeed there.
But this really isn’t about Substack at all. It’s about the future of the internet, the discourse, and our politics more broadly. Because Ghost and Beehiiv will face these challenges one day, too. And my message, despite what I’m describing here, is uncharacteristically cheery.
We have spent years trying to create ‘pure’ platforms by relegating impure users to intentionally impure spaces. In the process, the impure platforms are booming and the platforms we sought to perfect are worse than ever.
The solution cannot be to keep doing the same thing and expecting better results.
Instead, we have to accept platforms that are lumpy and imperfect. We should demand moderation that is strong, quick, and effective at dealing with illegality and targeted harassment, but that doesn’t try to untangle the lawful-but-awful debate. We should want community cultures that encourage civil discourse and positive interaction — cultures that are well-armed to push back against the bad actors who arrive. And we should want companies that use technology, including algorithms, to promote positive and diverse conversations and ideologies, including non-political stuff.
We have to accept the problem as an opportunity. People susceptible to radicalization are not inherently evil or beyond redemption. Even many influencers and leaders in the far-right space are still capable of reform and moderation. We won’t deradicalize them by writing them off.
We know that many people, but not all, tend to soften extreme views when they come face-to-face with real people who hold contrary opinions, instead of fuming over the caricature of The Other they’ve devised in their minds. More than that, people tend to abandon radical and anti-social positions when they stop obsessing over their particular ideology and relax. There’s a story of an al-Qaeda recruit who abandoned his plans to become a suicide bomber after being given some old Seinfeld tapes.
The internet, we know, is not an ideal space to deradicalize. But we can, at the very least, make it less of an ideal space to radicalize. We cannot do that if we keep blowing up everything that fails to meet our standards of impossible perfection.
That’s it for this week!
I’ve been deep in the weeds on a few bigger projects I’m excited to share in the coming weeks.
I’ve been reporting on the growing culture of political assassinations over at Foreign Policy, and I’ve got some really interesting stuff coming out there soon on the future of Gaza’s political administration.
Over at WIRED, I’ve been exploring the technological breakthrough Ukraine needs to beat Russia. It’s the first part of a series, so keep an eye out for more of that.
Right here at Bug-eyed and Shameless, I know I’ve got some Canadian readers who have been asking for a look at Pierre Poilievre and his brand of reactionary politics. I’ve really not had much to say recently, at least nothing new beyond what I wrote in 2022. (It’s a piece that now looks quite prescient, if I may say so myself.) But recently I’ve been toying with a few ideas on how to come at the Poilievre phenomenon from some new angles. So stay tuned.
As always, I’m keen to hear your comments!