On Monday 22 November, the Misinformation Cell held its first, sold-out webinar – ‘In conversation with the Misinformation Cell’ – where our CEO Shayoni Lynn and our Head of Misinformation Cell Stefan Rollnick sat down to demystify mis- and disinformation. If you want to make sure you’re first in the queue for our next one, you can sign up for Misinformation Cell updates here.
Here were some of the main takeaways:
We can’t just ‘fix’ misinformation with technological solutions.
Misinformation is not simply a technical or scientific challenge – it has large grey areas, making it a moral challenge as well.
How we decide to draw the line through these grey areas is a complex process – fraught with accusations of ‘censorship’ – but creating a robust set of new norms requires a conversation across civil society: between academia, democratic institutions, governments, the private and third sectors, and the platforms themselves.
On the online battlefield, technology can help us maximise efficiency, but it cannot (yet) replace knowledge of the nuances of the political landscape. The best we can do for now is ensure we have a research presence in these algorithmic misinformation bubbles across social media and beyond – which Lynn’s Misinformation Cell does.
False information isn’t so easy to pin down
Drawing the line through the grey area of misinformation also means defining our terms – but is false information all that easy to define? We talked through four different lenses of false information and what they mean for our strategies for combating it. False information can range from demonstrably false claims, to misrepresented or misapplied data, to omission of information, and even editorial choices about what is considered newsworthy. This final category is the thorniest because it strays into what we see in some mainstream newspapers and media: what can best be described as ‘hyperpartisanship’. That is not to say hyperpartisanship isn’t dangerous – editorial choices that consistently associate ethnic or religious minorities with specific crimes can inflict real-world harm and violence.
Making use of the lens of vulnerability is vital to informing our strategy
By understanding what makes people vulnerable to misinformation, we can develop strategies to fight it. Some of these vulnerabilities are more benign, like ‘news-finds-me’ thinking – the belief that we don’t have to be deliberate in our search for news because we can let it jump out at us from our news feeds. Fighting this means large-scale education and communications to remind people that, despite the name, our ‘News’ Feeds aren’t a safe or reliable place to get our news.
Some of these vulnerabilities are more complex and psychological – like a lack of control, which leaves individuals more vulnerable not just to sharing misinformation, but also to displaying patterns of conspiracy thinking. Fighting this is harder, and elements of it are beyond our control as communicators (e.g. economic recovery, the end of the pandemic). Yet while it’s important to acknowledge what is beyond our control, we can also think creatively about how to use public engagement to increase individuals’ sense of agency.
Misinformation isn’t new – but it is getting worse
The first appearance of the word ‘misinformation’ in English-language encyclopedias dates back to 1816, coinciding neatly with the industrial revolution and the invention of the industrial printing press. History tells us that every time we make it easier for humans to communicate with one another, we also make it easier for those seeking to spread lies and division to do so. The difference between 2021 and 1816 – or 1939, for that matter – is that when Hitler used the new power of the radio to spread his gospel of hate, ‘The Radio’ wasn’t a singular corporate entity that benefitted from engagement with Hitler’s message and was therefore financially incentivised to promote it. In 2021, as a new generation of hate-peddlers comes of age, Big Tech is incentivised to promote their messaging because it improves their bottom line.
Progress has been made
In his 2020 Tanner Lecture, Professor Jonathan Zittrain uses an example from 2005, when a Google search for the word ‘Jews’ produced an antisemitic website called Jew Watch News as its second-highest result. Instead of downgrading the offensive website in its results, Google placed a note at the top of the page reassuring users that it was “disturbed about these results as well”. The implicit assumption was that users should blindly trust the output of the algorithm, rather than Google taking responsibility for it. Now, over 15 years later, the conversation has moved on to the point where politicians and citizens on both sides of the political spectrum are demanding that Big Tech companies take responsibility for the output of their algorithms. This is a good start, but there’s more work to do.
Don’t want to miss our next webinar?
If you weren’t able to secure your space at our first Misinformation Cell webinar, make sure you don’t miss out on the next one. Sign up to our Misinformation Cell mailing list to be the first to hear about new products and events from the Cell.