How do we know that something we hear or read is accurate? What conventions or practices help us to establish whether a new piece of information is true or false? What is the harm in repeatedly receiving incorrect information?
It seems to be getting harder and harder to take something we hear or read at face value. While misinformation is hardly a new phenomenon, the damage it can cause in an increasingly digitized and interconnected world is unprecedented, and this threat is likely to keep growing.
At the Sentinel Project, we know that misinformation has rapidly become a global phenomenon that must be better understood in order to tackle its potentially violent effects. Our organization has implemented several on-the-ground misinformation management projects to prevent and mitigate atrocities in various countries. To understand the problem more broadly, we have published a report that investigates the scope of misinformation worldwide, especially how it contributes to societal instability, conflict, and violence.
Our report offers a snapshot of misinformation management efforts worldwide by synthesizing findings from research articles, online reports, and practical initiatives that address this topic. We examined solutions that have been implemented by different organizations, technology companies, and governments. The report further looks at several of the Sentinel Project’s initiatives to highlight how misinformation management frameworks can be replicated and scaled in different contexts.
The report contains five main findings:
New characteristics – Online misinformation is distinct from other forms of misinformation. Its manner of creation, speed, reach, design, and profitability make it a much more dangerous phenomenon than its offline counterparts.
Increased susceptibility – Although research in this area is still emerging and requires further work to draw definitive conclusions, it is clear that some people are more susceptible to misinformation than others and that some populations are specifically targeted by disinformation campaigns. These two realities make it important to understand what makes a given population vulnerable to misinformation.
Ability to disrupt – Misinformation disrupts societies by creating tensions in social relations and may be linked to hate speech and physical violence. It also disrupts democracies by undermining institutional trust, interfering with elections, and exacerbating confusion around political issues.
Need for collaboration – Countering misinformation requires joint efforts between governments, public institutions, and private sector companies that flag content and home in on misinformation sources. Recent efforts to operationalize macro-level misinformation management must account for a considerable number of factors, which highlights the need for systems that can operate effectively and sustainably at scales involving massive quantities of data. However, these top-down, highly technical initiatives are not a panacea. Low-technology grassroots initiatives remain important because misinformation management is contextual and requires intensive trust building, especially in already unstable and conflict-affected settings.
Scaling initiatives – The case studies demonstrate that local-level initiatives can be scaled up to reach larger populations and replicated in different contexts while incorporating important contextual factors. Furthermore, funders and implementers must recognize that these initiatives can be resource-intensive, so achieving significant impact requires substantial long-term investment.
The report ends with seven practical points to consider in misinformation management:
Rumours and misinformation are fundamentally human phenomena, so any approach to countering them must consider human factors in order to be effective.
Technological tools can be very useful both for those seeking to spread misinformation and for those seeking to counter it.
Technology and human moderators must complement each other in misinformation management efforts since there is no single technological solution to this human problem. Technology does, however, enable humans to handle much higher volumes of data than they could on their own.
Governments have an important role to play in addressing misinformation, but they must act with restraint and with respect for fundamental rights and freedoms.
Civil society actors are critical for effectively addressing misinformation since they may better understand the nuanced contextual factors that affect the relevant populations.
Technology companies must assume more responsibility for monitoring and moderating misinformation on their platforms.
Further research is required to understand the impact of rumours and misinformation and their relationships with hate speech and physical violence.
To read the full report, click here.