Facebook CEO Mark Zuckerberg testifies at a hearing of the Senate Judiciary and Commerce committees, April 2018

At the very first Global Fact-Checking Summit, hosted in a stuffy conference room at the London School of Economics in June 2014, Lucas Graves felt a spirit of creativity in the air. The gathering was makeshift and intimate, with only a few dozen people present. When he was handed the three-page programme, it looked like a Microsoft Word print-out mocked up in a hurry. Graves, associate professor at the University of Wisconsin–Madison, chatted with almost everyone there. Fact-checkers were in the process of defining their fledgling industry. “There was a real sense of mutual discovery,” he said.

Five years later, when Graves flew to the University of Cape Town for the sixth iteration in June 2019, the summit had professionalised. Fact-checking had exploded to 210 organisations, from 44 in 2014. The programme had thickened to 40 pages. Hundreds of guests attended, and it wasn’t just fact-checkers; Graves could also see tech firms, platform companies, donors and funders.

Today there are over 300 fact-checking organisations, according to the Duke University Reporters’ Lab. But the rise of fact-checking is just one segment of a much broader trend: the surge of what might be called the anti-misinformation industry. An ecosystem has emerged to combat the disinformation (defined as content that’s deliberately false) and misinformation (content that’s accidentally false) that’s flourishing on social media.

This growing network is built, largely, of four types of outfit: for-profit companies, non-profit companies, charities and research institutes – all attempting to plug the holes left by inadequate government regulation and tech platforms’ lack of action. Between the US presidential election and the ongoing vaccine roll-out, the last 12 months have been the industry’s biggest test yet. Whether it has passed depends on who you ask.

Another guest who bumped elbows with Graves at the first summit was Will Moy, chief executive of Full Fact, a UK non-profit that he co-founded in 2009. “This was a time when spin was what people were worried about,” Moy, formerly assistant to a crossbench peer, told me. Full Fact focused on verifying politicians’ claims during the 2010 UK general election. It still fact-checks political statements, but when social media expanded the boundaries of public life, Full Fact refocused. Now roughly half of its work is online verification and debunking. “You see the same misinformation coming from heads of state as from memes on social media,” Moy said.

Every morning, Claire Milne, Full Fact’s acting editor, logs on and searches for the day’s misinformation. Milne and her eight-strong editorial team scour newspapers, monitor drivetime radio, scroll news sites and inspect morning TV for untruths, ready for the 10am meeting where they set the agenda. Milne focuses on three things: misinformation that could provoke real harm, that has spread widely, or that has been megaphoned by a public figure. Then they’ll splinter off to verify facts for the rest of the day – calling MPs’ offices, tracing images back to their source, debunking vaccine rumours and publishing fact-checks on their website. If this sounds like a newsroom, that’s kind of the point.

Fact-checking originated within newsrooms in the early 20th century, at American magazines like TIME and the New Yorker, as a pre-publication safety valve to stop falsehoods getting into print. Fact-checkers operating as post-publication mechanisms began spinning off from newsrooms in the 2000s. The rise of sites like FactCheck.org and PolitiFact, US non-profits set up in 2003 and 2007 respectively, was a sign of the industry’s growing professionalisation. Moy co-founded Full Fact as a British equivalent.

Against black-and-white thinking

The UK scene today is populated by non-profits like Full Fact, newsroom-adjacent bodies like Channel 4 News FactCheck or BBC Reality Check, and private companies that sell fact-checking services, like Factmata and Logically. The anti-misinformation world is “a complex ecosystem, there are more and less commercial ends of it,” Graves said.

One common criticism is that fact-checks are less viral than misinformation. Closed networks like WhatsApp and Telegram are particularly tough to reach. Worse, if checks do reach their intended targets, some research suggests debunks may end up entrenching people in their existing political views. There is, of course, good and bad fact-checking. “The more binary and polar-oriented approach is really detrimental to the problem itself,” said Samuel Woolley, assistant professor at the University of Texas at Austin. Black and white fact-checking – reducing all content to a big red cross or a green tick – can be counterproductive. Some statements are misleading or partly true. The best fact-checking “reinserts shades of grey into things that people have presented as black and white,” Moy said.

For Graves, critics often miss the real purpose of fact-checking. “It’s not simply a question of attaching a label of true or false to a claim,” he said. Judging its effectiveness means asking not just whether you can persuade misinformed people, but also whether you can discourage elites from repeating falsehoods by proving there are consequences, or drive up standards in journalism by deterring the sowing of false doubt. Fact-checking says there’s a cost to sexing up dodgy data or spinning the truth. Its purpose is “trying to shore up faith in public institutional sources of knowledge,” Graves said. At its best, it tries to glue back together society’s fractured sense of a shared epistemology.

How does fact-checking work in practice?

Fighting misinformation is slippery because the vectors that deliver it are the same channels that carry mainstream journalism and entertainment. It can be hard to tell the difference between propaganda and dogged journalism when you’re scrolling down a newsfeed. Stories from the New York Times or Breitbart look more or less the same.

In 2018, Full Fact was approached by Facebook to work as one of its independent third-party fact-checkers in a contract worth hundreds of thousands of pounds. After six months of negotiations, the checking began. Every day, staff log on to a software tool built by Facebook engineers – known simply as “The Queue” – featuring potentially harmful misinformation flagged by users or algorithms. Milne and her team then give content a rating before it’s shuttled back to Facebook, which may act to delete, downgrade or contextualise the post. But these judgements happen out of sight. Fact-checkers like Full Fact don’t know what action is taken, or why. Facebook controls all the inputs and outputs.

While Facebook’s launch of its third-party fact-checking initiative in December 2016 was hailed as a big step forward, this opacity has frustrated many. “One key crucial thing here is transparency, and that is at odds a lot of the time with the kind of mentality we see in Silicon Valley,” Woolley said. Nevertheless, for Moy, working with big tech platforms like Facebook and Google is a crucial way to make an impact in the battle against misinformation.

If fact-checkers are the engine rooms of the wider machine fighting misinformation, university research bodies are perhaps its neural command centre. Much of the way we understand and combat the problem comes from academia. “We deliberately don’t use the term fake news,” said Hannah Bailey, a doctoral candidate at the Oxford Internet Institute. “Fake news” is imprecise, not to mention politicised. Academics prefer terms like computational propaganda – the use of algorithms or automation to distribute misleading information – and mis- and disinformation.

What’s more, researchers say that a narrow focus on untruths can be constricting; there’s much more in the toolbox of information operations than outright lies. Factual but emotive content or partisan narratives are just as powerful when the aim is to change behaviour. “Truth can definitely be used and warped slightly to suit particular actors’ goals,” Bailey said. After all, in academia, misinformation belongs to the wider field of propaganda studies.

Around the time of the populist upsets of Brexit and Donald Trump’s victory in 2016, heightened attention saw money pumped into scholarly initiatives documenting, exposing and combating mis- and disinformation, at Oxford and Stanford universities among others. Today the ecosystem tracking misinformation is a rich neural network firing on all synapses.

Research institutes often partner with policymakers or tech companies. Last year, several joined forces as the Election Integrity Partnership (EIP) – also involving government, platforms and civil society – to track misinformation around the US election. The EIP’s report, The Long Fuse, outlined how, over many months, Trump built the meta-narrative of a “stolen election”. Rumours and one-off stories were constructed into an overarching conspiracy theory, with false claims flowing bottom-up as well as top-down before being amplified by pro-Trump influencers. To Trump’s followers, this narrative legitimised insurrection, culminating in a violent mob storming the Capitol in a deadly riot on the afternoon of 6 January.

The importance of pre-bunking

For Joe Ondrak, head of investigations at Logically, a British start-up which uses automation to “fight ‘fake news’, tear down echo chambers and fact-check misinformation”, that week had started like any other. The 30-year-old, based in Sheffield, was tracking extremist groups’ responses to Joe Biden’s victory, on behalf of a US agency they’d partnered with during the election. He pored over his “social listening” dashboard, tweaking documents and sheets, inspecting far-right chats. “We’re sitting in terrible channels on Telegram and Gab,” Ondrak said, measuring “the temperature of things throughout the day”. Within hours it hit fever pitch.

Logically’s chief executive Lyric Jain launched the company in 2017 with funding from MIT. Jain, 25, was motivated by the death of his grandmother, who abandoned chemotherapy after falling victim to health misinformation. Now Logically mostly helps government agencies, as well as some private companies and platforms like TikTok, navigate a hostile information environment in the UK, US and India. (Its client ethics policy rules out working with companies that threaten its mission to enhance civic discourse, Ondrak said, to avoid “unwittingly working on behalf of the bad guys”.) It also runs free consumer products like its verification app and browser extension. Logically has grown rapidly. Last year, total invested capital climbed to £8.5 million.

Logically is just one twig on the growing branch of start-up companies battling misinformation. There’s Graphika, a social network analysis firm founded in 2009, which maps the structural relationships between social media actors. There’s NewsGuard, founded in 2018, which rates news outlets’ credibility to drive up media accountability and transparency and to spot repeat spreaders of disinformation. Start-ups of this variety make money in different ways – providing services for governments, platforms or corporate clients looking out for emerging threats to their business model. Many more pop up each year.

Ahead of 6 January, Ondrak’s investigative team was tracking QAnon followers’ reactions to Trump’s loss and the movement’s failed prophecies. As a movement, QAnon was always diverse. Some followers believed a clique of cannibal paedophiles were conspiring against Donald Trump. Others were “digital soldiers” preparing for civil war. Some were simply “mom and pop folks,” Ondrak said, persuaded on YouTube that a deep state controlled the government. But now the movement was rudderless. Ondrak wondered: what next? Far-right recruitment drives had started on Telegram. “It’s easy pickings for straight-up white supremacists to swell their ranks” with disenfranchised Q followers, he said.

The goal of automated social media surveillance for Logically and others is pre-bunking disinformation before it goes viral. It’s about working “not just when there is a fire,” Jain said, but striving to “prevent these fires from occurring”. But once a threat is identified, it requires political will – from governments or platforms – to tackle it.

An opportunity to douse the rising flames arose after the election of President Joe Biden. When then Vice-President Mike Pence, who many conspiracists suspected would overturn the results, signalled he would ceremoniously rubber-stamp Biden’s victory, talk of violence online ceased to be theoretical. By this time, Logically’s partnership with the US agency had expired. However, they were so concerned that they alerted their former client, hoping their warnings would be acted upon. Then, Ondrak and Jain waited.

The limits of the anti-misinformation industry

Some question whether launching for-profit tech companies is the best route out of society’s splintered information environment. “There’s a big culture of like, ‘Tech can solve everything,’” said Emma Briant, a research associate specialising in propaganda at Bard College. “There’s a big problem with entrepreneurial-like responses to this from the same industry.” Part of what created this mess, added Woolley, the University of Texas at Austin professor, was a “lack of thought about the potential misuse of the tools” tech companies were building – compounded by a desire to “scale as quickly as possible”.

Jain says that Logically chose the start-up route over charitable or non-profit status because it offered the chance to make an impact faster. “We knew what was going to be important to us was scaling,” Jain told me, adding: “Even right now during the vaccination efforts, just the volume and the speed at which various rumours spread through the internet, it needs a degree of automation” to work. Jain said Logically is not profitable yet; that will come “in a few years”.

The wider argument critics make against the entire anti-misinformation industry – start-ups and non-profits alike – is that it attacks symptoms not causes: the inability of social media companies to get a grip on misinformation on their platforms, and the unwillingness of governments to regulate them.

“Go upstream, and everything else is small potatoes, the rest is just opportunists,” said Jennifer Grygiel, assistant professor of communication at Syracuse University. “Facebook essentially enables disinformation.” In other words, social media broke the media environment. (Facebook did not respond to requests for comment. But in October the company launched its own Oversight Board, a third-party panel with binding power over content moderation decisions.)

As the hours ticked down to 6 January, Jain and Ondrak waited and waited. When rioters marched up the steps of the Capitol, all they could do was watch. “We were struck that no action was taken,” Jain told me over Zoom in March. That evening, Ondrak and his team scrambled to document the riot online as fast as they could, so that the material could be analysed later. It was like a historian trying to write a book against the clock while her sources self-destructed. With the darkened Sheffield sky outside, Ondrak worked late into the night, archiving extremists’ posts, messages and livestreams that otherwise, by morning, would disappear for ever.

This article is from the New Humanist summer 2021 edition.