Disinformation refers to false or misleading information created with the deliberate intention to deceive and cause individual or societal harm. It is typically distinguished from misinformation, which involves falsehoods shared without deceptive intent, and from malinformation, which uses accurate information in misleading or harmful ways. Terms often used interchangeably in public debate—such as fake news, propaganda, and conspiracy theories—describe related but distinct phenomena with differing aims and methods. The term derives from the Soviet concept of dezinformatsiya, originally associated with covert influence operations and strategic deception. Over time, however, its meaning has expanded to encompass a wide range of manipulative practices enacted by both state and non-state actors. Disinformation can take textual, visual, and multimodal forms, including fabricated images and AI-generated content such as deepfakes. Motivations vary and may include political influence, economic gain, ideological mobilisation, or efforts to stigmatise specific groups. Although these practices have long historical precedents, digital and platformised communication environments have amplified their scale, speed, and persuasive potential. This entry provides a narrative overview and conceptual synthesis structured around four dimensions: the history of disinformation, the supply and diffusion mechanisms, the psychological, social, and narrative drivers, and the interventions designed to mitigate its impact.
The literature increasingly regards disinformation as a socio-technical challenge rather than merely a knowledge deficit
[1][2]. This perspective emphasises that false and misleading content emerges not only from individual misunderstandings but also from the interaction between human cognition, social dynamics, and technological infrastructures. During the 2016–2020 period, amid growing concern over so-called infodemics, research coalesced around the “information disorder” framework proposed by Wardle and Derakhshan
[3], which distinguishes misinformation, disinformation, and malinformation and supplied a shared vocabulary for debates about measurement and policy. The literature also highlights conceptual challenges in distinguishing misinformation from disinformation, particularly when the intentions of those who share false content are difficult to ascertain, an issue explored in depth in recent philosophical analyses
[4].
The modern use of the term disinformation reflects this evolution. Western scholarship typically treats it as a loan translation of Soviet
dezinformatsiya, a term reported in intelligence contexts from the 1920s and subsequently codified in Soviet reference works
[5][6][7]. English usage of the term increased markedly after the 1950s, and it entered major dictionaries from the 1980s onward, in step with Cold War debates about propaganda and active measures
[5][7][8]. This diffusion underscores a key distinction in the literature: while techniques of deception have deep historical roots, the modern category term disinformation—and its policy salience—are products of 20th-century statecraft.
In recent years, three research streams have developed concurrently. The first concerns supply and diffusion: who produces fabricated content, how it spreads and persists, and how platform incentives shape visibility and reach
[9]. The second examines susceptibility, integrating cognitive, affective, social, and narrative mechanisms to elucidate why falsehoods persist and why corrections frequently leave residual influence
The third assesses the efficacy of interventions designed to counter disinformation, encompassing prebunking, media literacy initiatives, accuracy prompts, interface friction, post hoc corrections, and provenance or policy safeguards
[11][12].
Cumulative evidence increasingly clarifies who is susceptible and which interventions work. An individual participant data meta-analysis (31 experiments; 11,561 U.S. participants; 256,337 headline judgments) distinguishes discrimination ability (the capacity to differentiate true from false headlines) from response bias (a general inclination to classify items as true or false). It demonstrates that displaying sources alongside headlines enhances discrimination, with heterogeneous gains across subgroups, and identifies demographic and cognitive moderators
[13]. A complementary toolbox synthesis in Nature Human Behaviour delineates nine individual-level interventions, matching each strategy to its objectives and to the strength of the available evidence
[12]. A meta-analysis of media literacy interventions encompassing 49 experiments (N = 81,155) indicates a moderate overall effect on resilience (d = 0.60), with enhanced outcomes for multi-session programs, in cultures with higher uncertainty avoidance, and among college students compared to crowdsourced adults
[14].
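One common way to formalise the contrast between discrimination ability and response bias is signal detection theory, which separates a participant's sensitivity (d′) from their overall tendency to answer “true” (criterion c). The sketch below is an illustrative computation only; the function name and the example acceptance rates are assumptions for demonstration, not the analysis pipeline of the meta-analysis cited above.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Signal-detection measures for truth discernment.

    hit_rate: proportion of true headlines rated "true".
    false_alarm_rate: proportion of false headlines rated "true".
    Returns (d_prime, criterion): discrimination ability and response bias.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity: higher = better discrimination
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # bias: negative = leans toward answering "true"
    return d_prime, criterion

# Hypothetical participant who accepts 80% of true and 30% of false headlines:
d, c = sdt_measures(0.80, 0.30)
# d ≈ 1.37 (reasonable discrimination), c ≈ -0.16 (slight bias toward "true")
```

On this framing, an intervention that improves discrimination raises d′, whereas one that merely makes people more sceptical of everything shifts c without changing d′, which is why the two quantities are reported separately.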
Together, these strands reflect a growing consensus that disinformation must be understood systemically, as an interplay between producers, environments, and audiences. This entry synthesises these perspectives while offering an accessible overview of the history, drivers, and countermeasures of disinformation.