
The Twitter to Riot Pipeline

In the aftermath of the Capitol Riot, Ellie Burridge examines how social media responded to the crisis, and the part it may have played in causing it, as part of our series on ‘Building a Trusted Web’.

Social media was January’s big partisan issue in the US. After inciting violence by peddling conspiracy theories about the legitimacy of November’s election, former President Donald Trump was banned from his favourite place: Twitter.

He was also banned from Facebook, Instagram, Snapchat, Twitch, Shopify, and Stripe, and his rhetoric has been limited on Reddit, Discord, YouTube, TikTok, and Pinterest. But we have to imagine that the Twitter ban was the one that cut deepest for Trump, who treated the site as a dumping ground for just about every thought that passed through his head. Over the years, he used Twitter to comment on everything from the relationships of movie stars (“Robert Pattinson should not take back Kristen Stewart. She cheated on him like a dog & will do it again--just watch. He can do much better!”) to the extrajudicial murders he thought should happen during the Black Lives Matter protests of 2020 (“when the looting starts, the shooting starts”).

For months, Twitter had been trying to reduce the amount of harm caused by Trump’s serial tweeting; the aforementioned tweet about shooting protestors “violated the Twitter Rules about glorifying violence” and was hidden unless a user clicked on it. Users were also prevented from liking or replying to it—but not from retweeting the incendiary message. Since then, Trump’s tweets have often been limited in similar ways, and Twitter introduced several features which seemed specifically targeted to curb the President’s propensity for spreading disinformation. But what happened on the sixth of January was the straw that broke the tech giant’s back, and after a couple of days of dithering, Trump was permanently banned from the platform.

Is this an infringement on free speech? Cancel culture? Are social media platforms becoming the tyrannical Big Brother from Orwell’s Nineteen Eighty-Four?

Fox News were quick to complain of censorship on their shows, all of which draw millions of viewers daily. With five people dead as a direct result of violence incited by Internet conspiracy theories, big names flocked to their still-active Twitter accounts to humble-brag about how many followers they lost during the so-called ‘Twitter Purge’, during which thousands of accounts were removed from the platform. Former Press Secretary Sarah Huckabee Sanders put her figure at ‘50k+’; Matt Schlapp (chairman of the American Conservative Union) tweeted, ‘I’m down 40 k [sic].’

The fact of the matter is that the harm done by the disinformation QAnon conspiracy theorists spread cannot be compared to the minor annoyance of losing Twitter followers.

Because that’s who the ‘Twitter Purge’ targeted: QAnon supporters. Twitter explained themselves clearly in a blog post: “Given the violent events in Washington DC, and increased risk of harm, we began permanently suspending thousands of accounts that were primarily dedicated to sharing QAnon content on Friday afternoon.” QAnon is a big-tent conspiracy theory (meaning it absorbs countless other conspiracies into one complicated web of insane nonsense), but its most high-profile claim is that Donald Trump is a lone hero trying to dismantle an international cabal of ‘establishment’ Satan-worshipping child sex traffickers. Supposedly, the trafficking is done in order to harvest the chemical adrenochrome (a compound produced by the oxidation of adrenaline) from children, which the ‘elite’ then use to get high.

Not everyone who gets sucked into QAnon believes in its more outlandish claims, but a lot of them do. And the conspiracy has been allowed to proliferate through social media, partly because of companies’ instinct to look the other way in order to protect their bottom line (Twitter stock took a $5bn hit after Trump’s ban) and partly because conspiracy theorists know how to cover their intentions using memes and unobjectionable hashtags like #SaveTheChildren. There have been several previous attempts by Facebook and Twitter to curb the number of Q-believers on their sites, but the conspiracy has only gained popularity over the past few years, with some estimates suggesting that hundreds of thousands, or even millions, of people believe in QAnon to some degree. President Joe Biden’s inauguration has helped somewhat; former QAnon adherents have been coming forward in recent weeks to denounce their beliefs now that their hero, Trump, has been toppled from his pedestal. But it is impossible to know how many of these hardcore conspiracy theorists—including, in their ranks, a sitting member of the US Congress—will be able to abandon the movement.

QAnon is not new: it’s been around for years, ever since an anonymous poster going by the name ‘Q’ started posting baseless predictions on 4chan in 2017. From such humble beginnings, the conspiracy snowballed, collecting subsidiary conspiracy theories like birtherism (the belief that former President Barack Obama was not born in the United States) and 9/11-trutherism as it went. Its adaptability has been its greatest strength, effortlessly shifting from a focus on Robert Mueller and the Russia investigation back in 2017 to peddling baseless Coronavirus conspiracies in 2020. But even without a clear unifying ideology—beyond supporting Donald Trump—QAnon believers have been causing real-world harm for years.

In 2019, the FBI acknowledged QAnon as a potential domestic terror threat. This came after a Q-supporter blocked a bridge near the Hoover Dam with an armoured vehicle containing two rifles, two handguns, and nine hundred rounds of ammunition, and after a Californian man was arrested for allegedly “planning to ‘blow up a satanic temple monument’ in the Capitol rotunda in Springfield, Illinois, to ‘make Americans aware of Pizzagate and the New World Order.’”

Since the beginning of 2019, Q-inspired violence has only escalated. Most high-profile in 2019 was the murder of a crime family leader by a 24-year-old man who had previously attempted to make citizen’s arrests of Democratic politicians Maxine Waters, Adam Schiff, and Bill de Blasio.

In June of 2020, a man livestreamed his 20-mile police chase on Facebook: “Donald Trump, I need a miracle or something. […] QAnon, help me. QAnon, help me!” His five children were in the car, which crashed at the culmination of the chase. Luckily, no one was seriously hurt.

QAnon isn’t bound by the USA’s borders: a Canadian Rangers reservist rammed his truck through Prime Minister Justin Trudeau’s gate, although Trudeau was not home at the time. The man had indicated on Instagram that he believed Bill Gates was responsible for Coronavirus, and had used hashtags alluding to popular Q-theories about adrenochrome, Pizzagate, and a murdered Democratic Party staffer.

Then, in October 2020, a woman kidnapped her son and fled, having shared pro-Trump and pro-QAnon material on Facebook. One article shared to her Facebook page claimed that Child Protective Services abducts children to drain them of adrenochrome.

See more here: https://www.theguardian.com/us-news/2020/oct/15/qanon-violence-crimes-timeline

Which brings us to what happened in Washington DC on the 6th of January 2021. Not everyone who stormed the Capitol Building believed in QAnon, but enough did. The acts of violence listed above, as well as the riot, are clearly abhorrent, but for those duped into believing they are acting to uncover a vast paedophile ring, it is easy to see why a violent response seemed justified.

Social media has not only allowed this collective delusion to come to fruition; it has actively encouraged it.

The woman who was shot dead by police on the day of the riot, Ashli Babbitt, had been tweeting regularly about QAnon and its theories since February of 2020. She posted, on average, 50 times per day, and 77 times on election day. This sort of engagement is like catnip to social media developers.

The day before the insurrection, she tweeted: “the storm [mass cleansing] is here and it is descending upon DC in less than 24 hours….dark to light!” A week before: “I will be in DC on the 6th! God bless America and WWG1WGA [where we go one, we go all]”. The phrasing clearly linked her with QAnon, which has (again) been a known domestic terror threat for years. But this woman was able to tweet and livestream to Facebook on the day of her death, saying: “We are walking to the Capitol in a mob. There is a sea of nothing but red, white and blue patriots.”

QAnon is a deeply evangelical movement, meaning that its followers are committed to recruiting as many people as they can. The main recruiting tool is social media. Grifters and believers alike prey on lonely, unbalanced people who are unsatisfied with their daily lives—people who are easy to convince that the world is out to get them. Using memes and inside jokes, they entice people to find out more. Facebook’s algorithms have frequently enabled this, directing vulnerable individuals down ‘rabbit holes’ of content linked to QAnon or its predecessor, Pizzagate. The algorithms have learned that the higher someone’s emotional engagement, the likelier they are to stay on the site. In short, these fear-mongering delusions are good for business.
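To make that incentive concrete, here is a minimal Python sketch of engagement-based feed ranking. It is purely illustrative: the fields, weights, and function names are invented for this example, not Facebook’s actual system, which is vastly more complex and proprietary. The structural point is that nothing in a scoring rule like this asks whether a post is true.

from dataclasses import dataclass

# Hypothetical illustration only: the field names and weights are invented.
@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    angry_reactions: int  # strong emotion drives engagement

def engagement_score(post: Post) -> float:
    # Shares and comments spread a post furthest, so they are weighted
    # most heavily; emotional reactions also score well.
    return (1.0 * post.likes
            + 4.0 * post.shares
            + 3.0 * post.comments
            + 2.0 * post.angry_reactions)

def rank_feed(posts: list) -> list:
    # Sorts purely by engagement: content that provokes reactions rises
    # to the top regardless of its accuracy.
    return sorted(posts, key=engagement_score, reverse=True)

Under a rule like this, a conspiratorial post that provokes thousands of shares and angry reactions will reliably outrank a sober correction that earns a handful of likes.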

During the pandemic, more people have been radicalised than ever, thanks to a toxic combination of heightened anxiety and the increased time many are spending on the internet and, specifically, on social media.

The tech giants’ unwillingness to act in meaningful ways to curb the spread of misinformation lasted far too long. It was only in the lead-up to November’s US election that Twitter began adding fact-checking labels to tweets that spread lies—but President Trump and his allies had already been pre-emptively calling the election fraudulent for months.

Too often, it seems as though social media platforms are scrambling to keep up with the monster they’ve created, only instituting changes when it is too little, too late. It was not inevitable that QAnon would become a self-sustaining entity, but we are now so far past the point of no return that it is impossible to put a stop to it. Even removing the accounts may not be enough: Q-believers now congregate in less mainstream social media niches, where their rhetoric can spread unchallenged. Had the movement been curbed sooner, far fewer would have become dedicated enough to seek out new online spaces in which to indulge their delusions.

Maybe the social media giants, Facebook and Twitter, feared accusations of killing free speech, even though, as private companies, they have no obligation to host anything that goes against their terms of service. It is true that social media platforms have a disturbing amount of power in our society, but that does not mean they should sit idly by while users advocate violence; hate speech should never be tolerated, and large-scale deception, which demonstrably leads to horrific violence, should be cracked down upon just as firmly.

Legislation in the US and UK means that internet companies are not treated as publishers; they are not held legally responsible for the content that appears on their platforms, even though they can still monetise said content by displaying ads next to it. There is no legal incentive, currently, for them to do the right thing.

More regulation for social media companies is a rare bipartisan issue in the United States, with Trump tweeting in 2020: “Republicans feel that Social Media Platforms totally silence conservatives [sic] voices. We will strongly regulate, or close them down, before we can ever allow this to happen.” Clearly, Trump is motivated by his own fragile ego—but the larger issue, that of regulating social media, is as contentious as it is important.

In the early days of the internet, there was little to no regulation regarding ‘speech’; essentially, anyone could post anything they wanted to and not face legal repercussions. For obvious reasons, this wasn’t sustainable. Hate speech is hate speech, even—or sometimes especially—when it’s online. As courts and lawmakers began to grapple with the content available on the internet, a question arose: who was liable? Was it just the individual who decided to write or post something illegal? Or did the platforms hosting it bear some responsibility for what was being shared on their corner of the internet?

The unfortunate choices US courts made during that period led to an internet landscape where platforms that did nothing to moderate content were less likely to be held liable than platforms that attempted to moderate: most notoriously, the 1995 Stratton Oakmont v. Prodigy ruling found a service liable for a user’s post precisely because it moderated its forums. This actively discouraged content hosts from interfering at all with the content they hosted—even if it was clearly illegal—because by doing nothing they could avoid liability.

The solution to this issue—at least for the US Congress—came in 1996: offer immunity to providers who moderate the content on their platforms. It was a policy with the best of intentions, but it led to its own host of problems. Section 230 of the Communications Decency Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In 2018, Section 230 was amended (via the FOSTA-SESTA legislation) so that platforms used by alleged sex traffickers could be held accountable by the law. In 2020, Donald Trump signed an executive order intended to stop Twitter from fact-checking his tweets, with other conservatives arguing that the fact checks themselves turned Twitter from platform into publisher. (Banning Trump from the platform turned out to be an effective way to circumvent this issue.) Twitter responded by stating that Section 230 “protects American innovation and freedom of expression.” Mark Zuckerberg, founder of Facebook, has said in the past that he “just believe[s] strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.”

However, President Biden is also in favour of reforming Section 230, chiefly because of the proliferation of misinformation on Facebook.

Where free speech ends in an online setting and dangerous disinformation begins is a question at the forefront of the minds of those in governments around the world. Legislation is being proposed and passed that will change the way social media works—hopefully for the better.

Trusting social media platforms to do the right thing is difficult: they’ve shown time and time again that they’re not equipped to be people’s primary source of news and their main window onto the outside world (particularly during the pandemic), even if people insist on using them that way. But the removal of Trump from social media, and the attempt to purge QAnon believers, is a step in the right direction. Between upholding a flawed definition of freedom of speech on the one hand, and protecting democracy, the lives and wellbeing of marginalised groups, and vulnerable people’s perception of reality on the other—well, I’d choose the latter every time.

Also, for the love of Orwell, stop bringing Nineteen Eighty-Four into this.

Ellie Burridge

About the author

Ellie Burridge

Ellie Burridge has a degree in English Literature and Creative Writing and multitudinous conflicting feelings about the internet and technology as we know it. Although she does not always manage to follow her own principles on the web, she hasn't ordered anything from Amazon in almost 6 months and the only outlandish conspiracy theories she ever peddled on social media involved the boyband One Direction.

"We make sure we fully understand the brief before we do anything. It means we can get the project up and running for you more quickly and we know what outcome you want from it.

We have a good reputation and we want you to recommend us or use us again yourself. If our work doesn’t bring you the results you want, you won’t come back! That’s why we have a growing team in-house to build your website, install your WebShop, set up a high-end content management system or create bespoke software applications – and to make sure it’s exactly what you want."

Iain Row

Managing Director

Cookie Notice

This website uses cookies for traffic monitoring only.

Back to top