Misinformation and fake news have been part of society for thousands of years, but they have perhaps never been more prevalent than since the election of the current US president.
Donald Trump used the phrase “fake news” to attack the likes of US network CNN and the New York Times when they posed questions to him that he didn’t like. In fact, his use of the term helped propel it to become Collins Dictionary’s Word of the Year for 2017 as fans of the president echoed his own scepticism of the media.
As his one-term tenure comes to a close, Mr Trump, who has perpetuated falsehoods consistently while in office, is himself being accused of proliferating fake news.
He has repeatedly pushed baseless claims the election had been “rigged” and encouraged his followers to “stop the steal”, which they tried and failed to do as they stormed the US Capitol on Wednesday.
The danger of misinformation and fake news was laid bare as chaotic scenes unfolded in Washington, with Trump’s devotees bursting into Congress chanting his baseless claims of voter fraud.
Protesters themselves believed they had been victims of fake news with rioters on Wednesday hurling insults at journalists and destroying television cameras. Their actions come after Trump called journalists “the enemy of the people” and hit out at “lamestream media”.
In 2019, a major report into disinformation and fake news by the Digital, Culture, Media and Sport Committee said electoral law was “not fit for purpose” and should be updated to reflect the move to “microtargeted” online political campaigning. A year later, MPs are still calling for the appointment of a new regulator to tackle an “infodemic” of fake news.
And yet, nobody can agree on what it is, the extent of the problem, and how exactly to tackle it. Here’s everything you need to know.
The origins of fake news
Governments and powerful individuals have used information as a weapon for millennia, to boost their support and quash dissidence.
Octavian famously used a campaign of disinformation to aid his victory over Marc Antony in the final war of the Roman Republic. In its aftermath, he changed his name to Augustus, and dispatched a flattering and youthful image of himself throughout the Empire, maintaining its use in his old age.
In the 20th century, new forms of mass communication allowed propaganda’s scale and persuasive power to grow, particularly during wartime and in fascist regimes.
This sort of propaganda was largely funded and controlled by governments, but the blatant bias it carried waned as the ideological struggles became less apparent. Added to that, as populations became more used to mass communication, they could more easily see through it.
How did the internet and social media change things?
Before the internet, it was much more expensive to distribute information, building up trust took years, and there were much simpler definitions of what constituted news and media, making regulation or self-regulation easier.
But the rise of social media has broken down many of the boundaries that prevented fake news from spreading in democracies. In particular it has allowed anyone to create and disseminate information, especially those who have proven most adept at “gaming” how social networks operate.
Facebook and Twitter allowed people to exchange information on a much greater scale than ever before, while publishing platforms like WordPress allowed anyone to create a dynamic website with ease. In short, the barriers to creating fake news have been undone.
In addition, chat apps like WhatsApp have become hubs for disinformation and fake news, so much so that its parent company Facebook reduced the number of people users can forward a message to down to five. Other apps, such as the end-to-end encrypted app Telegram, have also allowed false information to spread more readily.
Impact of Russia, the 2016 US election and Trump
In 2018, Trump unveiled the winners of his much-touted “Fake News Awards”, escalating his already persistent attacks on a number of major US media outlets.
The brash Republican president announced the ten “honorees” using his preferred medium of Twitter, linking to a list published on the Republican Party’s website that crashed minutes after his big reveal.
Trump’s Twitter account became a hub for fake news and misinformation with the social media site making the decision to label his tweets last year. In May, Jack Dorsey’s tech giant labelled Mr Trump’s tweets for showing manipulated media. It also limited his posts for “glorifying violence” when he tweeted “when the looting starts the shooting starts” during the Black Lives Matter protests.
Hoaxes and falsehoods have been associated with the internet since its early days, but it is only in the last three years that organised, systematic misinformation campaigns, often linked to governments, have emerged and their effect on democracy and society come under scrutiny.
The 2016 US election has been seen as providing a fertile breeding ground for fake news. Some credit Mr Trump’s anti-establishment rhetoric and distrust of the mainstream media. Others blame widening partisanship, which meant readers were more prone to believe and share stories that fit their beliefs.
The rise of social media itself has also been seen as central. Sites like Facebook are accused of creating “filter bubbles”, the phenomenon of showing people things that they like or tend to agree with, and hiding those that they don’t.
Critics of Facebook and Twitter say the sites are purpose built for spreading misinformation, with the reach of a story dependent on its ability to go viral – something that often depends on sensationalism and emotional reactions more than truth itself.
Headlines such as “Pope backs Trump”, “Hillary sold weapons to ISIS”, “FBI Agent Suspected in Hillary Email Leaks Found Dead” went viral on Facebook in the run up to the election, garnering thousands of shares.
But fake news goes beyond the US. According to Freedom House, 30 governments were identified this year as paying “opinion shapers” to promote propaganda online.
So, what exactly is fake news?
One of the problems of actually doing anything about fake news is that it comes in multiple variants, from different actors and with different motives. These include but are not limited to:
- Commercially-driven sensational content: Stories that are not ideologically driven, but very often have absolutely no grounding in fact. Their key goal is to drive web traffic and, as a result, generate advertising income. Pop-up websites run by Macedonian teenagers fall into this category.
- Nation state-sponsored misinformation: The goal here isn’t revenue, but influence. Outlets in Russia or elsewhere might produce content to swing public opinion, sow division or give the illusion of support for a particular candidate or idea, either domestically or abroad. Fabricated stories can often be mixed with true or sensationalised ones.
- Highly-partisan news sites: These can conflate fact and opinion, are nakedly supportive of one political viewpoint or party, and often position themselves as alternatives to the mainstream media.
- Social media itself: Swarms of Twitter bots posting doctored or misleading photos, adverts on Facebook paid for by Russian intelligence outfits, videos on YouTube claiming terrorist incidents are hoaxes. These are not links outside of social media but are part of the social networks themselves.
- Satire or parody: Light-hearted publications such as The Onion and Daily Mash have existed well before fake news was seen as a problem.
This is not to mention news stories from mainstream outlets that turn out to be hoaxes or mistakes, but had genuinely good intentions behind them when published.
And of course, there’s Donald Trump, who constantly refers to outlets such as CNN and the New York Times as “fake news”. His administration also coined the term “alternative facts” when presented with evidence of low turnout at Trump’s inauguration.
These different definitions and motives make it incredibly difficult to effectively counter fake news. Some types, such as commercially driven fake websites, have been countered by cutting off advertising, but this does not affect misinformation campaigns, for example. Warnings on potential fake news stories have been introduced by Facebook, but this does little to combat content that exists on the site itself.
Has it had an influence?
It is hard to tell. Facebook’s chief executive Mark Zuckerberg initially said the idea that misinformation on Facebook influenced the election was a “crazy idea”, but has backtracked, saying he regrets the comments.
The sheer scale of Facebook and Twitter – 2bn and 330m users respectively – and the hours spent on them each week suggest many eyeballs have come into contact with fake news stories or misinformation campaigns.
According to a study from Stanford University, fake news websites received 159 million visits during the month of the 2016 US election, and other research has shown the most widely-shared news stories during the election were fake. The majority were also pro-Trump.
There is less evidence of fake news taking off in the UK during the Brexit vote or this year’s general election, although there is evidence that bots have been used in both.
However, there is a difference between reach and influence. The latter is a very difficult thing to measure, and this has been true of media for years – how much does the media drive beliefs, and how much does it reflect beliefs?
One less measurable effect may have been that the sheer quantity of fake news stories may have reduced trust in mainstream media – if scepticism about what people read online increases, they may not know what to think. In these situations, people tend to stick to their prejudices.
In March, a Princeton University study published in the journal Nature Human Behaviour found that Facebook was the worst purveyor of fake news. The study found that Facebook was the referrer site for untrustworthy sources 15pc of the time, considerably higher than the 1pc referral rate of Twitter.
What is being done about it?
After widespread criticism that the companies are failing to deal with fake news, Twitter, Facebook and Google have all announced measures to crack down on misinformation online.
Facebook, which has faced the most scrutiny, has enlisted fact checkers to flag disputed stories, cut off advertising revenue to fake news sites, and stepped up its review of adverts on the site. Mark Zuckerberg’s tech giant also launched a campaign to help people spot fake news as the company came under pressure from advertisers for not doing enough to clamp down on widespread disinformation.
Twitter says it has become better at dealing with bots, and also introduced a new labelling system. Twitter will now flag to users if claims are disputed and also encourage people to read an article before they retweet it.
Elsewhere, Google has pledged to introduce better algorithms to police YouTube. The company regularly publishes updates detailing accounts and channels that have been removed for promoting false information.
How do you spot it?
Spotting fake news isn’t easy: a Stanford study last year found that students were shockingly bad at distinguishing between different types of material online, whether paid for, fake or legitimate.
Facebook has a useful list of ways to spot fake news, which include checking other sources and a site’s URL.
News stories are often described as so good you couldn’t make them up, which is frequently what makes them great stories. But it is worth approaching everything with scepticism: could this really happen?