While the precise conceptualization of the term misinformation remains a subject of debate, the current entry defines misinformation as any type of information which is misleading or false, regardless of intent. The COVID-19 pandemic has seen the rapid and widespread sharing of misinformation on a global scale, which has had detrimental effects on containment efforts and public health. This entry offers psychological insights to better our understanding of what makes people susceptible to believing and sharing misinformation and how this can inform interventions aimed at tackling the issue.
1. Introduction
December 2019 saw the emergence of SARS-CoV-2, a novel virus causing the coronavirus disease (COVID-19), which spread aggressively and rapidly across the globe. By 11 March 2020, the World Health Organization (WHO) had declared the outbreak a pandemic, and by 18 September 2021, there were over 226 million cases of COVID-19 and 4.7 million deaths reported worldwide.
This global crisis was paralleled by the widespread sharing of both scientific and non-scientific information surrounding COVID-19 across multiple media channels. For the first time in history, social media and technology were being used on a huge scale by public health authorities and other institutions to keep people informed, safe, and connected. Social media and technology played an essential role in the response to the pandemic, for instance, through the implementation and promotion of public health measures, the tracking and mapping of symptoms, and the prediction of outbreaks in real time. At the same time, however, this same technology facilitated the overabundant spread of information from uninformed sources, not all of it accurate or reliable. The global scale of the pandemic amplified this spread as people urgently sought out and shared information in an effort to protect themselves, their families, and their communities against the virus.
On 15 February 2020, T.A. Ghebreyesus, the Director-General of WHO, announced the concern that the omnipresence and overabundance of often conflicting and inaccurate information posed a significant challenge for public health and was jeopardizing the response to the pandemic. WHO declared that the world was facing what it termed an infodemic: "an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (p. 2). The COVID-19 infodemic saw the spread of information concerning the origin and cause of the virus and disease, the transmission of the virus and the symptoms of the disease, available prophylactics, treatments, and cures, and the impact and efficacy of interventions by public health authorities and other institutions. Amongst this information were fake news, misinformation, disinformation, and conspiracy theories, which caused many to mistrust reliable sources of information and to develop a distorted risk perception of the virus. As a result, people were less likely to adopt preventative public health behaviors, which had an adverse effect on the implementation and efficacy of containment strategies.
The management of the infodemic was soon publicly recognized by WHO as a crucial part of the response to COVID-19. On 29 June 2020, WHO held its first global infodemiology conference, which led to the publication of the WHO Public Health Research Agenda for Managing Infodemics. In this publication, WHO identified a need for research in the field of psychology to identify the factors that make people more likely to share or believe inaccurate information [9]. Understanding these factors can inform and enhance the development of innovative and creative interventions aimed at infodemic management.
This entry begins with a description of what constitutes fake news, misinformation, and disinformation, explores cases from the COVID-19 infodemic and considers the effect this has had on the societal response to the pandemic. It then goes on to explore the main psychological factors that have been found to play a role in the believability of misinformation and the role of sharing behavior. The entry ends with a description of interventions aimed at addressing misinformation.
2. Fake News, Misinformation, Disinformation, and COVID-19
The sharing of fake news is not something new. A classic example dates back to 1835, when a series of fabricated articles reporting the discovery of life and civilization on the moon was published by The Sun newspaper in New York [10]. The mid-1890s saw a surge in fake news, when two major newspapers, W.R. Hearst's New York Journal and J. Pulitzer's New York World, competed for readers by prioritizing sensationalism over accuracy [11]. The promotion of fake news facilitated its circulation on a mass scale, a strategy which was soon adopted by other newspapers in their attempts to gain popularity and which came to be known as yellow journalism [11]. Another instance began in 1933, when the Nazi government founded the Reichsministerium für Volksaufklärung und Propaganda (RMVP) to enforce Nazi ideology through the spread of carefully choreographed propaganda [12]. A similar strategy was employed by the Soviet Union, which established a unit specializing in the manufacture and dissemination of disinformation in an attempt to influence political attitudes and public opinion [13]. The sharing of fake news has also played an influential role in public health issues over the years. For instance, in 1918, fake news surrounding the emergence of the H1N1 Influenza A virus led to it being dubbed the Spanish flu, despite no evidence that it originated in Spain at all [14]. This had significant detrimental economic and psychosocial consequences due to stigmatization [14][15].
Although the sharing of fake news is nothing new, the proliferation and democratization of social media have provided a principal conduit allowing it to spread more rapidly than ever before. With this has come accelerated growth in public and scientific interest, with the term since being described as a global buzzword [16][17].
2.1. Defining Fake News, Misinformation, and Disinformation
No consensus definition of fake news currently exists [18], although various definitions have been proposed. Fake news has been referred to as news which “aesthetically resembles actual legitimate mainstream news content but that is fabricated or extremely inaccurate” [19] (p. 389) and as “false information masquerading as verifiable truth” [20] (p. 735). Based on a review of the literature, Tandoc et al. [17] proposed a typology of fake news, defining it as news satire, news parody, fabrication, manipulation, propaganda, and advertising. A similar conceptualization was proposed by Waszak et al. [21], who added to this the idea that fake news is often irrelevant. Shu et al. [18] characterized fake news as having the intent to deceive and a verifiable lack of authenticity.
The terms misinformation and disinformation are widely used in research on fake news [16]. However, the way in which these terms are used varies. Some have used the terms to distinguish false information that is spread intentionally from that which is spread unintentionally, for example, referring to misinformation as the “inadvertent sharing of false information” and disinformation as the “deliberate creation and sharing of information known to be false” [22]. More specifically, misinformation has been referred to as the publishing of “wrong information without meaning to be wrong or having a political purpose in communicating false information”, and disinformation as “manipulating and misleading people intentionally to achieve political ends” [23] (p. 24). At the same time, some have used the terms interchangeably [24][25], and others have used misinformation to mean all kinds of misleading information and disinformation to mean only that which is intentionally misleading [26][27]. In line with this, and for the purposes of this entry, the term misinformation is used to encompass all types of misleading or false information, regardless of intent.
2.2. The Role of Technology and Social Media
Advances in technology have redefined the way in which information is published, spread, and accessed [23][28]. Hardware devices, such as smartphones and tablets, are becoming more and more affordable, removing financial barriers and allowing easier access to a variety of tools [29]. A notebook in 2001, for instance, was priced at $2200, while a similar device in 2020 was priced at just $350 [29]. As a result, access to technology is rapidly and continually increasing, in what has come to be known as the democratization of technology. At the same time, technology is becoming increasingly user-friendly, providing a variety of tools with which good-quality content can easily be created. Content of higher quality is more likely to be perceived as coming from a more reliable source and is thus more likely to be believed [30]. Using this technology, such content can be instantaneously uploaded and shared online, for instance through one of the many popular, easily accessible, free-of-charge social media platforms. The democratization of technology therefore plays a pivotal role in the exchange and spread of misinformation.
Social media platforms, in particular, provide an efficient, user-friendly, highly accessible tool that allows information to be published and shared at high speed, across platforms, without vetting, and at no cost [28][31]. Social media has recently been referred to as a "powerful source for fake news dissemination" [18] (p. 23). A survey carried out by the Pew Research Center found that just over half (53%) of U.S. adults get their news from social media, with Facebook, YouTube, Twitter, and Instagram being the most popular platforms of choice [32]. In addition to changing the way that news is spread, technology has also impacted the way news looks, with tweets (messages shared on Twitter with a maximum length of 140 characters) being considered significant news [17]. Another critical factor in the spreading of misinformation is that most social media posts are accompanied by popularity ratings (e.g., likes, shares, and comments). The more popular a post appears to be, the more attention it receives, and the more likely it is to be liked, shared, and commented on, regardless of how accurate the information is [33]. Indeed, misinformation has been found to outperform accurate information when it comes to engagement and popularity [34]. The repeated sharing of posts on and between social media platforms poses a further challenge, as it makes it more difficult for users to determine the proximate source of the information [34]. What results is an online environment with a seemingly endless, rapidly flowing stream of posts, some carrying accurate information and others not, varying in appearance and level of detail, with a wide range of popularity ratings and often indistinguishable sources. This makes it very challenging for people to distinguish accurate from inaccurate information.
2.3. Misinformation during the COVID-19 Pandemic
The COVID-19 pandemic provided the ideal conditions that allowed misinformation surrounding the virus to thrive; high fear, low trust, and low confidence [35]. As a result, social media platforms were inundated with shared misinformation about the virus [36][37][38]. For instance, it was found that over 25% of the most viewed YouTube videos relating to COVID-19 (with over 62 million views worldwide) contained misinformation [39]. Other research found that 46% of the U.K. population [40] and 48% of the U.S. population [41] reported being exposed to misinformation surrounding COVID-19, with 66% of those exposed reporting repeated exposure daily [42].
Misinformation about COVID-19 ranged from conspiracy theories that the virus was bioengineered in a lab in Wuhan, China [43], to the promotion of fake cures such as adding pepper to meals, drinking or injecting bleach, and gargling lemon and salt water [44], to claims that the symptoms of COVID-19 were exacerbated by the 5G cellular network [45]. Another prevailing narrative claimed the virus was being used by B. Gates to enforce a global vaccination program and surveillance regime [35]. One of the most widespread examples of COVID-19-related misinformation was Plandemic, a conspiracy film that promoted anti-scientific health advice, such as avoiding masks because they supposedly activate the virus [46]. Even political leaders contributed to the spreading of fake news, despite having access to official information. For instance, both U.S. President D. Trump and Brazilian President J. Bolsonaro actively promoted hydroxychloroquine as an effective treatment against the virus despite the lack of scientific evidence on the efficacy of the drug [47]. This illustrates how the veracity of misinformation is often difficult to determine, as falsehood is not always blatant: even though there is no conclusive evidence of its efficacy, hydroxychloroquine was actually being studied as a potential treatment for COVID-19 [48].
2.4. Effects of Misinformation on the Societal Response to COVID-19
The detrimental effect of misinformation surrounding the COVID-19 pandemic is evident in the societal response at the behavioral level [5]. Numerous events and behaviors demonstrated the extent of the believability of misinformation, with uninformed opinion and conspiracy theories often being falsely equated with scientific evidence [20]. Research has shown that believing misinformation interferes with perceptions of the seriousness of COVID-19, causing an underestimation of the risk posed by the virus [6]. As risk perception has been found to be significantly associated with the adoption of preventative health behaviors [7], this might partially explain why so many failed to adhere to the recommended guidelines for the containment of the virus (e.g., frequent hand washing and social distancing) [49][50] and were hesitant to receive the vaccine [50][51]. An additional factor in explaining this behavior might be the increased propensity to mistrust information from expert sources that is associated with believing misinformation [50][52], leading to the adoption of avoidance behaviors over health-protective behaviors [53]. Misinformation surrounding COVID-19 has also encouraged many to try dangerous treatments with severe health consequences. For instance, the myth that COVID-19 can be cured by drinking highly concentrated alcohol led many to act on this information, resulting in over 5800 hospitalizations, 60 cases of blindness, and 800 deaths worldwide [54].
Besides influencing health behaviors, misinformation has also led to the stigmatization of various groups of people, such as Chinese people, who have faced discrimination due to biased and misleading media coverage [55]. "China Kids Stay at Home" [56] and "China is the Real Sick Man of Asia" [57] were amongst the headlines promoted by influential news companies worldwide. As a result, Chinese people have faced racial discrimination, unequal treatment, and social isolation, with negative psychological consequences including stress and anxiety [58]. This discrimination was not limited to Chinese people but extended to Asians more generally. For instance, between 19 March 2020 and 15 April 2020, 1497 instances of COVID-19-related discrimination against Asian-Americans were reported to the Asian Pacific Policy and Planning Council in the U.S. [59]. These included hate crimes such as the stabbing and attempted murder of a Burmese-American father and his four-year-old and two-year-old children, because the perpetrator mistakenly assumed the family was Chinese and therefore infecting people with the virus [60]. Other acts of violence due to misinformation included the destruction of telecommunication masts and the verbal and physical abuse of telecommunication workers in the U.S., Europe, and Australasia as a result of the 5G conspiracy theory (cf. Jolley and Paterson [61]), as well as mob attacks [5].
3. Psychological Factors Affecting the Susceptibility to Misinformation
It should not be ignored that many people were not susceptible to believing misinformation, instead adhering to expert-recommended guidelines and adopting the appropriate health-preventative behaviors [7]. This raises the question of what makes some people more susceptible to believing misinformation than others. Identifying such factors is important for informing interventions to address misinformation. This section uses psychological theory to explore what makes people susceptible to believing misinformation.
3.1. Emotionality
The rapid spread of the highly contagious SARS-CoV-2 and its associated high mortality rates meant that feelings of uncertainty and threat were rife during the COVID-19 pandemic [38]. This was exacerbated by extreme containment measures, such as social isolation and quarantine, which left many experiencing significant feelings of anger, confusion, anxiety, and stress [62]. Prolonged elevated stress-related emotions are well known to activate symptoms of depression, and this proved to be the case for many [63][64]. A systematic review and meta-analysis of research on the prevalence of stress-related emotions during the COVID-19 pandemic concluded that the pandemic had a significant detrimental effect on public mental health [64]. In one study, for instance, over one-third (34%) of Chinese people reported experiencing moderate to severe levels of stress- or anxiety-related symptoms in response to COVID-19 [65]. The stress-related emotional response to the pandemic was deemed so significant that it led to the conceptualization of COVID stress syndrome [66].
The emotional response to the COVID-19 pandemic is a crucial factor to consider for understanding the believability and spread of misinformation [38][67]. People tend to be strongly motivated to maintain a sense of control and understanding over their lives [68], and when this sense is under threat, it results in heightened feelings of anxiety [69]. In an attempt to reduce this anxiety and regain their sense of control, people “compensate with strategies that lead to greater acceptance of misconceptions” [70] (p. 3). These strategies include sense-making mechanisms, whereby information is obtained from various sources in order to make sense of a complex and unfamiliar situation, as well as having someone or something to blame and project feelings of anxiety towards [67].
Misinformation, and especially conspiracy theories, provide a narrative that lets people both make sense of a situation and place the blame somewhere, by explaining an event as the result of an influential individual's or organization's secret attempts to achieve a sinister goal [67]. They offer an appealing route to making sense of a situation, and thus to regaining a sense of control, in a way that is often clearer and simpler than that offered by accurate information. For instance, believing that China manufactured SARS-CoV-2 in a laboratory as a bioweapon offers an intelligible explanation of the origin of the virus, as well as somewhere to place the blame. The accurate account, on the other hand, which proposes that SARS-CoV-2 could have originated from the transmission of the virus from an animal to a human (a random, uncontrollable event), potentially enhances anxiety and the sense of lacking control. It has been repeatedly shown that feelings of anxiety and a lack of control foster openness to such information and are a significant driving factor underlying a higher propensity to believe misinformation and conspiracy theories (see, e.g., Douglas et al. [71]).
3.2. Motivated Reasoning
On 19 April 2020, just over a month into the COVID-19 pandemic, an American protestor shouted, “This is a free country. Land of the free. Go to China if you want communism” at a nurse who was counter-protesting the lifting of quarantine measures [72]. This was one of many incidents that demonstrated how the pandemic led to the polarization of discourse and revealed deep-rooted epistemological and political positions [20]. This was largely fueled by the actions of governors and politicians, many of whom had opposing views on COVID-19 [73]. Various governors in the U.S., for instance, discounted the recommendations of health officials, failing to fully implement social distancing measures and openly discouraging the use of face masks [73]. In addition, news coverage surrounding COVID-19 was highly polarized and politicized, with politicians appearing more frequently in the news than scientists [74]. The politicization of COVID-19 was found to have had a detrimental effect on efforts to contain the virus, mainly through the spread of misinformation [75].
There is a growing body of research that links political ideology to the societal response to the pandemic [73][76][77], with political differences having been found to be the most significant factor predicting policy preferences and the adoption of health behaviors [77]. Political ideology has also been found to underlie susceptibility to believing and sharing misinformation [73], making it a crucial factor to consider in understanding the COVID-19 infodemic. The link between political ideology and the believability of misinformation can be better understood using the psychological theory of motivated reasoning or identity-protective cognition.
Motivated reasoning posits that information processing is directed so that it protects, and is non-threatening to, an individual's existing beliefs or identity [78]. When an individual is faced with information that conflicts with either of these, they are likely to experience cognitive dissonance, a state of mental discomfort caused by conflicting attitudes or beliefs [79]. When in cognitive dissonance, people engage in thought processes that serve to minimize this discomfort. Building on this, the theory of motivated reasoning proposes that when faced with multiple sources of polarized information, people are more likely to believe information that reinforces their pre-existing beliefs (confirmation bias) and to reject information that undermines those beliefs (disconfirmation bias) [19][73][80]. Pre-existing ideological and partisan attitudes and beliefs might therefore prevent people from fact-checking information and lead to higher levels of engagement with ideologically concordant content [81]. These higher levels of engagement feed the curation algorithms of social media platforms, which present the user with content that maintains their interest and maximizes engagement. As a result, people become enclosed in a filter bubble, in which they are more likely to be exposed to information that confirms their pre-existing attitudes (selective exposure) and less likely to be exposed to diverse content [81][82]. In this way, the interaction between user engagement and algorithmic content curation contributes to the spread of misinformation, and thus presents a significant challenge in addressing the issue.
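To make this feedback loop concrete, consider the following toy simulation. It is a deliberately simplified sketch, not any platform's actual algorithm; the leaning scores, engagement model, and feed size are all invented for illustration. It shows how a ranker that optimizes only for observed engagement can drift a feed toward a user's pre-existing leaning:

```python
import random

random.seed(0)

# Toy model: each post has a political leaning in [-1, 1]; the simulated
# user engages more with posts close to their own leaning (confirmation bias).
USER_LEANING = 0.8

def engagement_probability(post_leaning: float) -> float:
    """Concordant content is more likely to be clicked or shared."""
    return max(0.05, 1.0 - abs(USER_LEANING - post_leaning))

# The curator knows nothing about accuracy or politics; it only tracks
# engagement per post and ranks by it.
posts = [{"leaning": random.uniform(-1, 1), "clicks": 0, "views": 0}
         for _ in range(200)]

def score(post: dict) -> float:
    # Smoothed click-through rate; the optimistic prior lets unseen
    # posts still surface occasionally.
    return (post["clicks"] + 1) / (post["views"] + 2)

for _ in range(50):  # 50 feed refreshes
    feed = sorted(posts, key=score, reverse=True)[:10]
    for post in feed:
        post["views"] += 1
        if random.random() < engagement_probability(post["leaning"]):
            post["clicks"] += 1

final_feed = sorted(posts, key=score, reverse=True)[:10]
mean_leaning = sum(p["leaning"] for p in final_feed) / len(final_feed)
print(f"user leaning: {USER_LEANING:+.2f}, "
      f"mean leaning of final feed: {mean_leaning:+.2f}")
# The feed drifts toward the user's own leaning: a filter bubble emerges
# from nothing more than engagement-based ranking.
```

Nothing in the ranking logic refers to the user's politics or to the truth of the content; selective exposure falls out of the engagement signal alone, which is precisely the dynamic described above.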
Although there is much evidence that political ideology is associated with believability (cf. Pennycook and Rand [19]), it is essential to consider that this effect has been found to be smaller than that of the accuracy of the information [83]. In other words, information that is accurate but politically discordant is more likely to be believed than misinformation that is politically concordant [19]. Since accuracy is a more significant predictor of susceptibility to misinformation than political concordance, this raises the question of why misinformation is ever believed at all. One explanation is offered by Pennycook and Rand [84], who suggest that the issue lies in whether or not people are able to accurately determine the integrity of information; that is, susceptibility to misinformation might actually be better explained by a lack of reasoning than by motivated reasoning.
3.3. Cognitive Reasoning
Dual-process theory characterizes human cognition as comprising two distinct thinking styles: intuitive or autonomous thinking (system 1 processing) and analytic, rational, or reflective thinking (system 2 processing) [85][86]. The distinction between the two is demonstrated by performance on conflict tasks: the intuitive but incorrect response results from system 1 thinking, which is speedy, effortless, and requires minimal working memory resources, whereas the correct response requires system 2 thinking, which is effortful, deliberate, and demands more working memory. Humans, however, tend to instinctively avoid resource-demanding processes whenever possible [87]; as Pennycook and Rand [84] put it, "humans are cognitive misers, in that resource-demanding cognitive processes are typically avoided" (p. 2). This is problematic when it comes to discerning truth from falsehood, given that analytic reasoning supports sound judgment [88].
There is a growing body of research providing support for the association between analytic reasoning processes and skepticism about epistemically ambiguous information [86]. For instance, a greater tendency to engage in analytic thinking is linked to the detection of pseudo-profound bullshit [86] as well as the rejection of conspiracy theories [89], including those related to COVID-19 [90]. However, more recent research is highlighting the critical role of prior knowledge in susceptibility to misinformation. It has been shown, for example, that the association between analytic reasoning and the rejection of misinformation is significantly stronger when the information is more obviously inaccurate [84], which suggests that an individual’s prior knowledge is an important factor underlying susceptibility to misinformation. This is supported by the finding that scientific reasoning has been found to be a stronger predictor of COVID-19 conspiracy theory beliefs than analytic thinking [90].
Scientific reasoning refers to having scientific knowledge and applying its "methods or principles of scientific inquiry to reasoning and problem-solving situations" [91] (p. 173). To better understand the role of scientific reasoning in the believability of misinformation, it is important to note the distinction between a belief (an attitude that is based on realistic, factual evidence) and an epistemically suspect belief (a belief which is not supported by factual evidence and which conflicts with current knowledge) [90]. For example, the belief that methanol can cure COVID-19 conflicts with the factual evidence that methanol is toxic for human consumption. People with better scientific reasoning skills tend to hold the deep-rooted belief that scientific knowledge provides the most accurate conceptualization of the world [90]. As a result, they also tend to hold beliefs which are supported by scientific evidence and therefore hold fewer epistemically suspect beliefs [92]. It seems, then, that having pre-existing scientific knowledge, and actually stopping to apply this knowledge through analytic reasoning, makes people less susceptible to believing COVID-19 misinformation.
3.4. Heuristics
Cognitive psychology proposes heuristics as thought processes underlying intuitive thinking that ease the cognitive load of making judgments [93]. The word heuristic originates from the Ancient Greek word εὑρίσκω, meaning to find [93]. It refers to the process whereby an individual makes a decision based on a general rule of thumb, with very little cognitive reasoning involved [93]. Heuristics offer useful shortcuts for making a quick judgment call; however, the judgment is not guaranteed to be optimal or rational [93]. Simon [94] describes heuristics as satisficing: offering solutions that are good enough for the situation at hand, but which could be optimized. Research has shown that heuristics are used extensively in decision making in a variety of contexts (cf. Horne et al. [93]).
A recent study assessed the use of heuristics in judging the veracity of COVID-19-related information [93]. Participants were shown a variety of news headlines relating to COVID-19 and were asked whether or not they believed the information and why. The researchers found that heuristics were used extensively in making these judgments and that they fell into three broad categories: self-cognitive heuristics, content heuristics, and source heuristics. Self-cognitive heuristics are based on how far the information aligns with the individual's beliefs, previous experiences, or pre-existing knowledge; the greater the alignment, the more likely the individual is to accept the information as accurate. Participants made judgments by comparing the news to beliefs such as "vaccinations are proven safe and everyone should get vaccinated" or previous experiences such as "I use the oil and it works". Content heuristics consider supporting evidence, bias, accuracy, coherency, and writing style; an example is "they use derogatory terms such as libtard which is an obvious sign that the article is biased". Source heuristics concern whether the source of the information is perceived as accurate and include judgments such as "the Chicago Sun-Times is a reputable paper".
The most commonly used heuristics were belief alignment heuristics (29%), followed by knowledge alignment heuristics (22%), with just 6% of judgments based on perceived accuracy. These findings provide further support for the argument that a lack of cognitive reasoning, and therefore a failure to accurately judge the veracity of information, underlies susceptibility to believing misinformation [84].
4. Social Media Sharing
The simple click or tap of a like or share button is all it takes for the instantaneous dissemination of information. Advances in social media technology have granted users the ability to share information across multiple platforms simultaneously. A survey carried out by the Pew Research Center found that around three-quarters (73%) of U.S. adults are multiplatform users [95]. Of Facebook users, for example, 91% also use Instagram, 90% use Twitter, 90% use LinkedIn, 89% use Pinterest, 89% use Snapchat, 85% use WhatsApp, and 81% use YouTube [95]. This creates an expansive online network, allowing for the seamless sharing of information, with every single share significantly expanding its reach. If, for example, an individual with 1000 Twitter followers shared a post, which in turn was shared by just 10% of those followers (each to their own network of 1000 followers), the post would effortlessly reach 100,000 people [96]. This illustrates how conducive social media is to the rapid spreading of information, and how sharing exacerbated the spread of misinformation that led to the COVID-19 infodemic [2]. Exploring social media sharing behavior can therefore enhance our understanding of the spread of misinformation during the COVID-19 pandemic.
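The arithmetic of this example generalizes across further rounds of sharing. The short sketch below is an illustrative toy model using the hypothetical follower count and share rate from the example above; it deliberately ignores audience overlap, so it gives an upper bound on unique viewers:

```python
def estimated_reach(followers: int, share_rate: float, generations: int) -> int:
    """Cumulative audience of a post spread through successive re-shares.

    Simplifying assumptions, as in the example above: every sharer has the
    same number of followers, a fixed fraction of each audience re-shares,
    and audiences do not overlap.
    """
    total_reach = 0
    sharers = 1  # the original poster
    for _ in range(generations):
        audience = sharers * followers        # people who see this generation
        total_reach += audience
        sharers = int(audience * share_rate)  # those who pass it on
    return total_reach

# The example from the text: 1000 followers each, a 10% re-share rate.
# The first generation reaches 1000 people; 100 of them re-share,
# reaching a further 100 x 1000 = 100,000.
print(estimated_reach(followers=1000, share_rate=0.10, generations=2))  # 101000
```

Because the number of sharers multiplies at each generation, reach grows geometrically, which is why even a modest share rate is enough to carry a post far beyond its original audience.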
Understanding Sharing Behavior
Shared information is often mistakenly assumed to have been shared because the individual believed it [19]. However, recent research has shown that this is not necessarily the case and that information is shared for a variety of reasons [19]. In one study, for instance, participants were presented with false claims about COVID-19 and asked whether or not they would share them on social media [97]. The perceived accuracy of a statement did not play a significant role in the intention to share it, with intentions to share being 91% higher than judgments of accuracy. These findings demonstrate that people are willing to share COVID-19-related information without being certain of its accuracy. Pennycook and Rand [19] identify three possible explanations for this disconnect between sharing intention and accuracy judgment: the preference-based account, the confusion-based account, and the inattention-based account.
Consistent with the theory of motivated reasoning [85], the preference-based account proposes that people prioritize their political identity or moral viewpoints over accuracy and truth. From this perspective, people share misinformation, even when they know it is inaccurate, for reasons driven by their political or moral ideology. These reasons might include virtue signaling [98], social dominance orientation [99], furthering a political agenda [77], or simply finding the information interesting [30]. However, according to Pennycook et al. [83], just 16% of shared misinformation is driven by preference-based motives.
According to the confusion-based account, people mistakenly but genuinely believe the misinformation they share to be accurate. This perspective is supported by the findings of Pennycook et al. [83], which showed that only one-third (33%) of shared misinformation was believed and two-thirds (67%) was shared as a result of confusion. Based on these findings, a significant amount of the information shared online can be explained by confusion.
The inattention-based account suggests that while people generally prefer to share only accurate information, they fail to do so due to distractions in the online social media environment. Social media environments have become an attention economy, in which posts compete for user attention and provide distractions that hinder analytic thinking processes [64]. Much content is generated and posted with the aim of capturing as much attention as possible, often through the posting and sharing of ideologically extreme material in the hope of achieving high popularity ratings (likes, comments, and shares). Popularity ratings have repeatedly been shown to attract more attention, regardless of information veracity [33][96]. Such an environment is therefore likely to make people who engage in intuitive rather than analytic reasoning [100] especially susceptible to believing and sharing misinformation. Indeed, analytic reasoning, besides being associated with a higher tendency to reject misinformation [89], is also associated with sharing behavior based on more accurate judgments of information veracity [97] and with the sharing of more reliable information [101]. A lack of analytic thinking therefore seems to be a source of misjudgments when it comes to sharing information on social media.
5. Interventions for Addressing Misinformation
Current interventions for addressing misinformation fall into four broad categories: algorithmic, corrective, legislative, and psychological [102]. Algorithmic approaches use machine learning, network analysis, and natural language processing to detect misinformation [18]. A ranking algorithm then downranks any information classified as problematic, making it less likely that users will see it. Although these approaches have been implemented by social media companies including Google and Facebook, they have not been entirely effective, for two main reasons. Firstly, it is not always easy to ascertain the veracity of information; the truth is not always black and white. Algorithmic approaches therefore run the risk of false positives and unjustified censorship, which is what happened at Facebook in 2017 [103]. Secondly, misinformation evolves rapidly, as the COVID-19 infodemic has proven. For algorithmic approaches to remain effective, they would need to evolve at the same pace, which is difficult given that even classifiers trained to detect misinformation were unequipped for novel claims surrounding COVID-19 [19].
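For illustration, the general pattern (train a text classifier, then downrank content by its predicted risk) might be sketched as follows. This is a minimal toy, assuming scikit-learn is available; the training examples, labels, and ranking rule are invented for this sketch, and production systems are vastly more elaborate:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = flagged as misinformation). A real
# system would train on large, professionally fact-checked corpora.
texts = [
    "Drinking bleach cures COVID-19",
    "5G towers activate the virus",
    "Vaccines contain microchips for surveillance",
    "Gargling salt water eliminates the coronavirus",
    "Frequent hand washing reduces transmission of the virus",
    "Clinical trials show the vaccine is safe and effective",
    "Social distancing slows the spread of COVID-19",
    "Masks reduce the emission of respiratory droplets",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def rank_feed(posts: list[str]) -> list[str]:
    """Order posts so that higher predicted misinformation risk sinks lower."""
    risk = classifier.predict_proba(posts)[:, 1]  # P(misinformation)
    return [post for _, post in sorted(zip(risk, posts), key=lambda pair: pair[0])]

print(rank_feed(["Bleach cures the virus", "Masks reduce droplet emission"]))
```

Both weaknesses noted above are visible even in this toy: the classifier is only as reliable as its labels (false positives downrank legitimate content), and claims outside its training distribution, such as novel COVID-19 narratives, are scored essentially at random.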
Corrective approaches attempt to debunk misinformation through fact-checking and correction [104]. Fact-checking initiatives such as PolitiFact [105] and Snopes [106] check and debunk major headlines, publishing the results on their websites. The evidence on the efficacy of fact-checking is mixed, with some studies supporting its effectiveness in addressing misinformation and others suggesting it could actually increase belief in misinformation [107][108]. Since it is impossible to fact-check every story, stories which have not been checked may be mistakenly assumed to have been verified and therefore regarded as accurate. Fact-checking and debunking approaches were unable to handle the surge of information during the COVID-19 pandemic; they are simply not scalable [109].
Some countries have adopted a legislative approach to tackling misinformation, introducing new regulation and legislation. For instance, France introduced the Fake News Law, which placed restrictions on the information media companies were allowed to publish [110]. A similar initiative was implemented in the U.K., where a specialist unit was set up to counter false claims about the COVID-19 pandemic [111]. The concern with such initiatives, however, arises from granting a single organization the power to decide what counts as accurate and what does not [102]. EUvsDisinfo (a European Union-funded group dedicated to tackling misinformation), for instance, was subject to heavy criticism, including from Dutch politicians, for infringing freedom of speech, and it was subsequently proposed that the initiative be scrapped altogether [112].
5.1. Psychologically-Informed Interventions for Addressing Misinformation
The shortcomings of the approaches described above have led scientists to turn to psychology, education, and the behavioral sciences in search of more effective interventions for addressing the sharing and spread of misinformation [102]. Two promising psychologically-informed approaches, namely inoculation (or prebunking) and accuracy prompts, are detailed below.
5.1.1. Inoculation
Misinformation has been described as something which "spreads through networks much like a real virus 'infecting its host' and rapidly transmitting falsehoods from one mind to another" [5] (p. 3). The non-psychological interventions described above all attempt to correct misinformation after the damage has already been done and face various difficulties in doing so. Researchers have now shifted their focus to a more proactive prebunking (i.e., preemptive debunking) or inoculation against misinformation [5][19][102][109].
This approach is based on inoculation theory, which draws an analogy from immunology [113]. Inoculation theory posits that, in the same way that vaccines work through exposure to a weakened version of a virus, preemptive exposure to weakened examples of misinformation might make people more immune to it, and less susceptible to believing it [113]. In what van der Linden et al. [5] term a persuasion inoculation, individuals are presented with misinformation that has been weakened by the addition of two elements [5][102][113]. The first is a forewarning that the individual is about to be exposed to counter-attitudinal content (the affective basis), which is thought to elicit feelings of threat and trigger the protection of pre-existing beliefs. The second is a preemptive refutation of counterarguments (the cognitive basis), which essentially teaches and informs the user by modelling the counterarguing process. The information is weakened to the point where it does not actually persuade the person but is enough to trigger protective responses such as enhanced analytical thinking [113]. Following this experience, the individual develops mental antibodies to misinformation and will likely deploy them when exposed to similar challenges in the real world, thus reducing their susceptibility to misinformation. A meta-analysis of research on inoculation theory concluded that inoculation is indeed effective at protecting attitudes from persuasion [114].
The Bad News Game [115] is an award-winning online browser game which puts inoculation theory into practice. It uses a simulated social media environment in which the user plays the role of a misinformation creator and learns about the spreading of misinformation in an engaging way (cf. van der Linden and Roozenbeek [102] for a detailed description). Similar to this is Go Viral! [116], a practical application of inoculation theory developed by WHO in collaboration with the U.K. government, specifically aimed at inoculating people against COVID-19 misinformation. This game focuses on building resistance to three techniques used on social media to manipulate people: fearmongering, conspiracy theories, and the use of fake experts. Research has shown that these games significantly improve the ability to identify and resist misinformation [102][117].
One limitation of such approaches is identified by Pennycook and Rand [84], who note that they are opt-in: people have to voluntarily choose to engage with the inoculation technique. The problem with this is that people who are low on cognitive reflection and most susceptible to misinformation (and therefore most in need of inoculation) are also less likely to participate in such activities. Shorter forms of inoculation (e.g., presenting digital media literacy tips) have proven effective in helping people to determine news veracity [118], and these may be more scalable and have greater reach.
5.1.2. Accuracy Prompts
As noted above, the inattention-based account of misinformation sharing posits that people generally want to share only information that is accurate, and that one of the main reasons underlying the spread of misinformation on social media is the failure to accurately determine its veracity prior to sharing [83][97]. Research has shown that when people's attention is shifted towards accuracy, they can better distinguish misinformation from accurate information [97]. Accuracy prompts encourage exactly this, by having people rate the accuracy of information prior to making a judgment about sharing it. This approach is appealing because it does not rely on software to identify and distinguish between accurate information and misinformation (as is the case with algorithmic approaches [102]). In addition, accuracy prompts are easily scalable (unlike fact-checking, for instance, which is time-consuming and does not cover all information [109]). Accuracy prompts thus provide a promising approach to tackling misinformation in a way which preserves user autonomy and encourages users to act on their own desire to avoid sharing misinformation [83].
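In interface terms, the intervention amounts to a single extra step in the share flow. The sketch below is a hypothetical, minimal rendering of that flow; the function names and console interaction are invented for illustration, and real deployments embed the prompt in the platform's sharing interface:

```python
def share_with_accuracy_prompt(post_text: str, share_action) -> None:
    """Hypothetical share flow with an accuracy-prompt interstitial.

    The rating itself is discarded: the intervention works by shifting
    the user's attention towards accuracy, not by gating the share [97].
    """
    print(f'You are about to share:\n  "{post_text}"')
    input("To what extent is this claim accurate? (1 = not at all, 5 = very): ")
    if input("Share this post? (y/n): ").strip().lower() == "y":
        share_action(post_text)

# Example wiring with a stand-in share action.
share_with_accuracy_prompt(
    "Gargling salt water eliminates the coronavirus",
    share_action=lambda text: print("Shared:", text),
)
```

Note that nothing here blocks or downranks content; the design choice is to nudge reflection while leaving the sharing decision entirely with the user, consistent with the autonomy-preserving rationale above.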
6. Concluding Remarks
The COVID-19 infodemic has offered a stark warning about the sheer scale and detrimental effects of misinformation, highlighting the importance of understanding its spread as well as its consequences for public health and everyday life. The evidence presented in this entry has revealed a variety of factors underlying the failure to discern misinformation from truth, including values and beliefs, political ideology, and scientific knowledge. Overall, however, it seems that susceptibility to misinformation is mainly due to people failing to actually stop and think about the information they are exposed to. In the chaotic online environment created by social media, intuitive reasoning trumps analytic reasoning, resulting in endless impulsive clicks and taps, mindless shares, and superficially alluring popularity ratings. In response to these findings, interventions aimed at tackling misinformation are shifting their focus from remedial to preventative approaches, with creative initiatives that encourage people to think more deeply before they act. Although there is still some way to go in fully understanding the spread of misinformation, the field of psychology is providing valuable insights and promising avenues for future research.