Online Mis- and Disinformation: How can Democracies Respond?
A well-informed electorate is crucial to the functioning of a democracy. In the absence of a knowledgeable citizenry, democracies risk falling victim to demagoguery and tribalism. Unfortunately, this risk appears to be materializing across democratic nations, as the rise of social media – and the internet more broadly – as people’s primary source of information has meant that credible information is often drowned out by misleading material and deliberately manipulated messages. Indeed, events such as the 2016 American presidential election and the COVID-19 pandemic have highlighted how mis- and disinformation have become endemic to the online media environment. Russia’s invasion of Ukraine has likewise illustrated how foreign states can use deliberate disinformation campaigns to disseminate state propaganda worldwide. More recently, Elon Musk’s acquisition of Twitter, one of the world’s largest social media platforms, has again brought issues related to online mis- and disinformation to the forefront of public debate. While Musk argues that he intends to restructure Twitter as “a common digital town square, where a wide range of beliefs can be debated in a healthy manner,” thus citing freedom of speech as his priority, his critics allege that without responsible management, Musk’s changes to Twitter’s policies will facilitate the uninhibited spread of mis- and disinformation across the platform. Similar dynamics characterize the broader digital media environment: private enterprises are the primary agents responsible for setting their platforms’ policies on mis- and disinformation, and more restrictive policies are often met with allegations of censorship and infringement on free speech. But how do governments enter this equation? More specifically, what policy options do democratic governments have to combat online mis- and disinformation?
Before continuing, it is important to define and differentiate between ‘misinformation’ and ‘disinformation.’ The fundamental difference lies in the author’s intent. ‘Misinformation’ refers to false or inaccurate information that is spread without a deliberate intent to deceive. ‘Disinformation,’ on the other hand, is deliberately false information, crafted and spread maliciously to sow fear and suspicion among a population. In terms of impact, however, both mis- and disinformation erode the public’s access to credible information and therefore impede the effective functioning of democracy. Democratic governments have four main policy options to address online mis- and disinformation: citizen resilience/education programs; third-party monitoring; private sector engagement; and regulation.
The first policy option, citizen resilience programs, involves governments dedicating resources to educational programs and partnerships with civil society organizations with the goal of strengthening citizens’ critical thinking about online mis- and disinformation. An example is Canada’s Digital Citizen Initiative, which aims to support democracy by funding civic, news, and digital media literacy learning materials and by supporting local research groups that promote a “healthy information ecosystem.” Other similar policies, according to a special EU report, include specialised training, public conferences and debates, and more conventional forms of learning through the media; overall, the goal is to improve citizens’ media literacy. These types of citizen resilience programs recognize that a low-cost (millions rather than billions) and relatively non-intrusive approach to combating online mis- and disinformation involves policies that raise public awareness of the sources of online mis- and disinformation, of the intentions, tools, and objectives behind these phenomena, and of citizens’ own vulnerability. Another tool governments can use to foster citizen resilience is support for independent media. Democratic governments can help fund, or remove barriers to entry for, unbiased and expert reporting from more conventional news sources, thus providing citizens with a credible and reliable source of information and improving their resilience to false information online.
A second policy option democratic governments may pursue is third-party monitoring. Such a policy may involve rapid alert systems during election campaigns; the establishment of government bodies that coordinate among relevant agencies, media organizations, and civil society groups to identify and communicate trends and threats related to online mis- and disinformation; or the establishment of an independent, third-party task force with a mandate to monitor instances of online mis- and disinformation and to communicate its findings to the government and the public. For example, the G7 Rapid Response Mechanism is a transnational effort to identify and respond to foreign threats to democracy, including online disinformation, by sharing information and analysis with the public and by identifying opportunities for internationally coordinated responses. Overall, however, third-party monitoring may be more costly, in terms of both fiscal resources and administrative energy. Moreover, it may be difficult for the actors involved in third-party monitoring to reach a wide audience: not all citizens will pay attention to, or trust, these mechanisms.
Third, governments can take steps to mobilize the private sector in the fight against online mis- and disinformation. Such a policy is particularly important, as the private sector is best equipped to regulate online behaviour. Private sector engagement might involve a government negotiating memorandums of understanding or agreed codes of conduct with online media companies to implement actions and procedures that limit the spread of mis- and disinformation online. For example, in 2016, Facebook, Twitter, Microsoft, and YouTube agreed to comply with a European ‘code of conduct’ aimed at combating hate speech and terrorist propaganda across the EU. Further agreements with the private sector could encourage online platforms to ensure transparency in political advertising, eliminate fake accounts, identify bots, and cooperate with independent fact-checkers to detect and flag mis- and disinformation. Indeed, Twitter enacted a similar fact-checking procedure during the COVID-19 pandemic, flagging misleading information on its platform. The effectiveness of a policy of private sector engagement depends upon the widespread participation of social media companies and online platforms, as well as other relevant stakeholders in the private sector. The efficacy of such a policy is therefore constrained by the difficulty of enforcing cooperation from the private sector without resorting to outright regulation. That said, private sector engagement is a relatively low-cost policy.
Finally, governments may choose to directly regulate online information through legislation. This is the most contentious option: such a policy involves compelling internet companies to identify, flag, and minimize mis- and disinformation on their platforms, or to disallow and remove mis- and disinformation entirely. For example, in 2017, Germany passed legislation that forced digital platforms to delete hate speech and misinformation by imposing fines on companies that refused to comply. However, regulation has several major shortcomings. First, it risks limiting, or even criminalizing, freedom of expression. The internet has become the primary medium for individuals to express themselves and their views, and imposing limitations on online speech would therefore risk undermining everyday users’ freedom of speech. Second, it is often quite difficult to identify objectionable content online: it is not always clear what constitutes misinformation, and it can also be challenging to determine whether misleading information was deliberately spread by malicious actors.
Overall, when these policies are weighed against their costs and benefits, certain approaches appear much more favourable than others. As mentioned in the introduction, a well-informed citizenry is crucial to the effective functioning of democracy, and thus citizen resilience and reliable sources of information are essential. Consequently, policies that strengthen citizen resilience and support traditional, independent media will remain the most important tools to counteract the negative effects of online mis- and disinformation. Public education programs can be implemented at low cost, are decentralized in the sense that they do not involve direct government intervention in online platforms, and avoid the risks associated with direct regulation. Indeed, “restrictive regulation of internet platforms” is an undesirable policy, as it sets a precedent for the government to expand censorship, thereby restricting freedom of expression and generating hostility toward democratic governance. Regulation also allows actors in the private sector, like Elon Musk, to position themselves as champions of free speech fighting against intrusive governments. Nonetheless, it will be critical for governments to strengthen their cooperation with the private sector, as this policy is by far the least costly and has the most direct capacity to deter online mis- and disinformation, given that private entities possess the most legitimate and effective tools to counter mis- and disinformation on their own platforms.