    Fake News in Social Networking


    Fake news is defined as news that is intentionally and demonstrably false, or as any information presented as news that is factually incorrect and designed to mislead the news consumer into believing it to be true.

    1. Introduction

    The term fake news originally referred to false and often sensationalist information disseminated under the guise of legitimate news. The term's use has since evolved, however, and it is now considered synonymous with the spread of false information on social media [1]. Notably, according to Google Trends, the term "fake news" gained significant popularity in Brazil between 2017 and 2018, peaking in October 2018, when the Brazilian presidential election took place.

    Fake news is defined as news that is intentionally and demonstrably false [2], or as any information presented as news that is factually incorrect and designed to mislead the consumer into believing it is true [3]. Sharma et al. argue that these definitions are restricted by the type of information or the intention to deceive and, therefore, do not capture the broad scope of the term's current use. Thus, Sharma et al. define fake news as news or messages published and propagated through the media that contain false information, regardless of the means and motives behind it [1]. Despite the lack of a clear consensus on the concept, the most widely accepted formal definition interprets fake news as news that is intentionally and verifiably false. Two aspects of this definition stand out: intention and authenticity. The first concerns the dishonest intention of deceiving the reader; the second concerns the possibility of verifying that the information is false.

    Fake news can be distinguished by the means employed to distort information. The news content can be completely fake, entirely manufactured to deceive the consumer, or it can be deceptive content that employs misleading information to address a particular topic. There is also imposter content, which simulates genuine sources that are, in fact, false. Other fraudulent characteristics of fake news include manipulated content, such as headlines and images that do not match the content conveyed, and false-context content, in which legitimate elements and content are presented in a false context.

    Fake news also serves different motives or intentions: harming or discrediting people or institutions; generating financial gains by increasing the placement and viewing of online publications; influencing and manipulating public opinion; promoting discord; or, simply, amusement. All of these are identified as motivations for the creation and dissemination of fake news.

    Several concepts compete and overlap with the concept of fake news. A synthesis of these related concepts, which are not considered fake news, is listed as follows [2][4][5][6]:

    (1) Satires and parodies, which embed humorous content using sarcasm and irony; their deceptive character can feasibly be identified;

    (2) Rumors that do not originate from news events but are publicly accepted;

    (3) Conspiracy theories, which are not easily verifiable as true or false;

    (4) Spam, commonly described as unwanted messages, mainly e-mail; spam is any advertising campaign that reaches readers via social media without being wanted;

    (5) Scams and hoaxes, which are motivated just for fun or to trick targeted individuals;

    (6) Clickbait, which uses thumbnail images or sensationalist headlines to convince users to access and share dubious content; clickbait is closer to a type of false advertising;

    (7) Misinformation, which is created unintentionally, without a specific origin or intention to mislead the reader;

    (8) Disinformation, which is pieces of information created with the specific intention of confusing the reader.

    The characteristics of these types of fraudulent content are compared to the fake news in Table 1.

    Table 1. Fake news-related terms and concepts.

    Concept                 Authenticity     Intention          Reported as News
    Satires and Parodies    False            Not Bad            No
    Rumors                  Unknown          Unknown            Unknown
    Conspiracy Theories     Unknown          Unknown            No
    Spam                    Possibly True    Bad/Advertising    No
    Scams and Hoaxes        False            Not Bad            No
    Clickbait               Possibly True    Advertising        No
    Disinformation          False            Bad                Unknown
    Misinformation          False            Unknown            Unknown

    2. Fake News Characterization

    The growth of communication mediated by social media is one of the main factors driving the changing characteristics of current fake news [1]. Individuals' inability to accurately discern fake news from legitimate news leads to the continued sharing of, and belief in, false information on social media [2][7][8][9]. It is difficult for an individual to differentiate between what is true and what is false while being overwhelmed by misleading information received repeatedly. Furthermore, individuals tend to trust fake news because of the current public distrust of traditional communication media. Additionally, fake news is often shared by friends or confirms prior beliefs, which, for the individual, makes it more reliable than the discredited mass media. In this context, identifying fake news is more critical than identifying other types of misleading information, since fake news is usually presented with elements that imbue it with authenticity and objectivity, making it relatively easier for it to gain the public's trust.

    Social media and collaborative information sharing on online platforms also encourage the spread of fake news, an effect called the echo chamber effect [10]. Three factors foster this effect in the perception and sharing of fake news: naive realism, in which individuals more easily believe information aligned with their points of view; confirmation bias, in which individuals seek and prefer to receive information that confirms their existing views; and the theory of normative influence, in which individuals choose to share and consume socially safe options as a means of acceptance and affirmation within a social group [10]. These factors drive individuals to seek, consume, and share information in line with their views and ideologies. As a consequence, individuals tend to form connections with ideologically similar individuals. Complementarily, social network recommendation algorithms tend to personalize content recommendations to match an individual's or group's preferences. These behaviors lead to the formation of echo chambers and filter bubbles, in which individuals are less exposed to conflicting points of view and are isolated in their own information bubbles [1][11]. The confinement of fake news in echo chambers, or information bubbles, tends to increase its survival and dissemination. This is because confinement gives rise to the phenomenon of social credibility, in which people's perception of the credibility of a piece of information increases if others also perceive it as true, since individuals tend to consider information to which they are repeatedly exposed as true [9].

    The spreading patterns of fake news on social media have often been studied to identify characteristics that help discriminate between fake and legitimate news. The problem of identifying fake news can be defined in several ways. It can be cast as binary classification: false or true, rumor or not, hoax or not. It can also be cast as multi-class classification, with labels such as true, almost true, partially true, mainly false, or false, or as unverified rumor, true rumor, false rumor, or non-rumor [12]. The main difference among these problem definitions stems from the annotation schemes or application contexts of different datasets. Typically, datasets are collected from annotated statements on fact-checking websites, such as Politifact, Full Fact, Volksverpetzer, and Agência Lupa. These sites reflect the labeling scheme used by the specific fact-checking organization.
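    A common preprocessing step that connects these definitions is collapsing a fact-checker's multi-class ratings into the binary labels used by many detection models. The sketch below assumes a hypothetical five-level rating scheme; actual labels vary by fact-checking organization.

```python
# Illustrative only: collapse multi-class fact-check ratings into binary
# fake/legitimate labels. The five rating names here are hypothetical;
# each real fact-checking site uses its own scheme.
BINARY_LABEL = {
    "true": "legitimate",
    "mostly-true": "legitimate",
    "half-true": "fake",      # design choice: partial truth counts as fake
    "mostly-false": "fake",
    "false": "fake",
}

def to_binary(rating: str) -> str:
    """Map a multi-class fact-checker rating to a binary label."""
    return BINARY_LABEL[rating.strip().lower()]

print(to_binary("Mostly-False"))  # -> fake
```

    Where partially true statements land is itself a design choice that differs across datasets, which is one reason results on different benchmarks are hard to compare.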

    Sharma et al. identify three characteristics relevant to identifying fake news: the sources or promoters of the news, the content of the information, and the users' responses when receiving the news on social networks [1]. The source or promoters of the news strongly influence the news's truthfulness rating. However, Sharma et al. highlight that lists of possible fake news sources are not exhaustive and that the domains used to spread the news can be falsified [1]. It is also important to emphasize that social networks are populated by bots, which are fake or compromised accounts controlled by humans or programs to present and promote information. Such bots accelerate the propagation of true and false information almost equally, aiming to build the bot accounts' credibility and reputation [13]. The second important feature is the content of the spread information, one of the main characteristics analyzed to classify news as true or false. Oliveira et al. observe that the dissemination of fake and legitimate news in Brazil behaves statistically differently according to the sum of the relative frequencies of the words used in the content; fake news tends to use fewer relevant words than legitimate news [14]. Other textual characteristics include the use of social words, self-references, statements of denial, complaints, and generalizing terms. Fake news tends to exhibit less cognitive complexity, fewer exclusive words, more negative-emotion words, and more action words [11]. Finally, user responses on social media provide auxiliary information for detecting fake news. User responses are important because, in addition to exhibiting propagation patterns, they are more difficult to manipulate than the information's content; moreover, user responses sometimes contain obvious information about the truth [2]. In the form of likes, shares, replies, or comments, user engagement contains information that is captured in the structure of propagation trees, which indicate the path of the information flow. This information includes temporal information in timestamps, textual information in user comments, and profile information about the users involved in the engagement [1].

    Characterizing the information's source, propagation, content, and user responses allows different fake news identification techniques to be defined. For instance, identification can be based on feedback from the propagation pattern; on natural language processing applied to message content, combined with machine learning mechanisms; or on user intervention. This paper focuses on solutions based on the analysis of news content.
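    To illustrate the shape of a content-based pipeline, the following is a minimal similarity-based classifier over bag-of-words vectors, assuming small hypothetical labeled example sets. Real content-based detectors use far richer linguistic features and trained models; this toy only sketches the idea of comparing a new text against labeled content.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Word-count vector of a text (lowercased, whitespace-tokenized)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, fake_examples, legit_examples):
    """Label a text by its higher mean similarity to labeled examples."""
    t = bag_of_words(text)
    fake = sum(cosine(t, bag_of_words(e)) for e in fake_examples) / len(fake_examples)
    legit = sum(cosine(t, bag_of_words(e)) for e in legit_examples) / len(legit_examples)
    return "fake" if fake > legit else "legitimate"

# Hypothetical toy training examples.
fake_examples = ["shocking secret cure they hide",
                 "you will not believe this shocking miracle"]
legit_examples = ["government publishes annual budget report",
                  "city council approves new transit budget"]
print(classify("shocking miracle cure revealed", fake_examples, legit_examples))  # -> fake
```

    In practice, such lexical similarity would be one feature among many; the textual cues discussed above (negative-emotion words, self-references, cognitive complexity) would enter as additional dimensions of the feature vector.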


    Fake News Spreading Process


    Several entities, individuals, and organizations interact to disseminate, moderate, and consume fake news on social networks. The plurality of actors involved makes the problem of identifying and mitigating the spread of fake news even more complicated. The dissemination of fake news relies heavily on social media, to the detriment of traditional media, due to social media's large scale and reach and its ability to share content collaboratively. Social media websites have become the most popular vehicle for fake news dissemination due to the increasing ease of access to, and popularization of, computer-mediated communication and Internet access [15]. Moreover, while in traditional journalism the responsibility for creating content rests with the journalist and the editorial organization, moderation on social networks varies widely; each social media platform is subject to different moderation rules and content regulations. Information is consumed mainly by the general public, which constitutes a growing number of social media users. The growth in information consumption through social media increases the risk of fake news causing widespread damage [1].

    Sharma et al. highlight three different actors in the spread of fake news: the adversary, the fact-checker, and the susceptible user [1]. Adversaries are malicious individuals or organizations that often pose as ordinary social network users using bot or real accounts [13]. Adversaries can act either as a source or as a promoter of fake news, and these accounts also act in groups, propagating sets of fake news. The fact-checker comprises the various fact-verification organizations that seek to expose or confirm news whose veracity is in doubt. Verifying news often relies on fact-checking journalism, which depends on human verification; however, there are also automated technological solutions that aim to detect fake news for companies and consumers by assigning credibility scores to web content using artificial intelligence. Finally, the susceptible user is the social network user who receives questionable content but is unable to distinguish fake from legitimate news and thus ends up propagating the fake news on their own social network, even without intending to contribute to the proliferation of fraudulent content.

    The entry is from 10.3390/info12010038


    1. Sharma, K.; Qian, F.; Jiang, H.; Ruchansky, N.; Zhang, M.; Liu, Y. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–42.
    2. Zhou, X.; Zafarani, R. A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities. ACM Comput. Surv. 2020, 53.
    3. Golbeck, J.; Mauriello, M.; Auxier, B.; Bhanushali, K.H.; Bonk, C.; Bouzaghrane, M.A.; Buntain, C.; Chanduka, R.; Cheakalos, P.; Everett, J.B.; et al. Fake News vs Satire: A Dataset and Analysis; WebSci ’18; Association for Computing Machinery: New York, NY, USA, 2018; pp. 17–21.
    4. Rubin, V.L.; Chen, Y.; Conroy, N.J. Deception detection for news: Three types of fakes. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, Silver Spring, MD, USA, 6–10 November 2015; p. 83.
    5. Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explor. Newslett. 2017, 19, 22–36.
    6. Chen, Y.; Conroy, N.J.; Rubin, V.L. Misleading online content: Recognizing clickbait as false news. In Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection, New York, NY, USA, 13 November 2015; pp. 15–19.
    7. Wang, W.Y. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; pp. 422–426.
    8. Rubin, V.L. On deception and deception detection: Content analysis of computer-mediated stated beliefs. In Proceedings of the 73rd ASIS&T Annual Meeting on Navigating Streams in an Information Ecosystem; American Society for Information Science: Silver Spring, MD, USA, 2010; Volume 47, p. 32.
    9. Rubin, V.; Conroy, N.; Chen, Y.; Cornwell, S. Fake news or truth? using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection; Association for Computational Linguistics: San Diego, CA, USA, 17 June 2016; pp. 7–17.
    10. Shu, K.; Mahudeswaran, D.; Wang, S.; Lee, D.; Liu, H. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. Big Data 2020, 8, 171–188.
    11. Fuller, C.M.; Biros, D.P.; Wilson, R.L. Decision support for determining veracity via linguistic-based cues. Decis. Support Syst. 2009, 46, 695–703.
    12. Sharma, S.; Sharma, D.K. Fake News Detection: A long way to go. In Proceedings of the 2019 4th International Conference on Information Systems and Computer Networks (ISCON), Mathura, UP, India, 21–22 November 2019; pp. 816–821.
    13. Davis, C.A.; Varol, O.; Ferrara, E.; Flammini, A.; Menczer, F. BotOrNot: A System to Evaluate Social Bots. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW ’16 Companion; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2016; pp. 273–274.
    14. de Oliveira, N.R.; Medeiros, D.S.V.; Mattos, D.M.F. A Sensitive Stylistic Approach to Identify Fake News on Social Networking. IEEE Signal Process. Lett. 2020, 27, 1250–1254.
    15. Mattos, D.M.F.; Velloso, P.B.; Duarte, O.C.M.B. An agile and effective network function virtualization infrastructure for the Internet of Things. J. Internet Serv. Appl. 2019, 10, 6.