Information Disorder

Learning to Recognize Fake News

by Francesco Biondo (Volume editor), Gevisa La Rocca (Volume editor), Viviana Trapani (Volume editor)
©2022 Edited Collection 242 Pages
Open Access


Table Of Contents

  • Cover
  • Title
  • Copyright
  • About the editors
  • About the book
  • This eBook can be cited
  • Contents
  • Preface (by Ferdinando Trapani)
  • Part I Technology and News on Web
  • The proposed solution: The fake news algorithm project and verification of results (Massimiliano Aliverti)
  • Robot reporters, machine learning and disinformation: How artificial intelligence is revolutionizing journalism (Angelo Paura)
  • Geofacts: A geo-reliability tool to empower fact-checking (Simone Avolicino, Marianna Di Gregorio, Marco Romano, Monica Sebillo, Giuliana Vitiello, Massimiliano Aliverti, Ferdinando Trapani)
  • Part II Communication and Society
  • The mediatization of disinformation as a social problem: The role of platforms and digital media ecology (Gevisa La Rocca)
  • Collective memory and the challenges of digital journalism (Guido Nicolosi)
  • Disinformation, emotivism and fake news: Polarising impulses and the breakdown of social bonds. Why the true-to-life can seem true (Francesco Pira)
  • Part III Justice and Misinformation
  • The marketplace of ideas and its externalities: Who pays the cost of online fake news? (Francesco Biondo)
  • Freedom of information and fake news: Is there a right to good information? (Laura Lorello)
  • Correctness of judicial information and impartiality of the judge: The distortions of the media criminal trial (Caterina Scaccianoce)
  • Extra computationem nulla salus? Considerations on democracy, fake news and blockchain (Stefano Pietropaoli)
  • Part IV Information and Misinformation Design
  • Packaging and plastic are synonymous with waste: But is that really the case? (Anna Catania)
  • Citizen journalism and social innovation: Digital platforms for qualitative implementation of participatory journalism (Serena Del Puglia)
  • “Fake it ‘til you make it”: The designer playground for crafting prototypes, orchestrating frauds and pushing the ecological transition (Salvatore Di Dio, Mauro Filippi and Domenico Schillaci)
  • The form of written thought (Cinzia Ferrara and Marcello Costa)
  • Natural light in the architectural interior: Fake news on the Caravaggio of Palermo (Santo Giunta)
  • Environment, information, fake news (Benedetto Inzerillo)
  • Re-thinking news: Information design and “antibody” contents (Francesco Monterosso)
  • From the Panopticon to the freedom to communicate in the city space (Ferdinando Trapani)
  • Fake news: A design-driven approach (Viviana Trapani)
  • The authors



Preface

by Ferdinando Trapani

The Smart Specialisation Strategy (RIS3) comprises the national and regional innovation strategies for smart, sustainable and inclusive growth in the European Union, co-funded by the European Commission. Its general objective is to concentrate European resources on emerging technology areas that can be developed in each region, focusing on building local knowledge rather than transferring external technological resources.

The Sicily Region, with the ERDF Operational Programme 2014–2020, Action 1.1.5 for “Support for the technological advancement of companies through the financing of pilot lines and early product validation and large-scale demonstration actions”, has selected the Fake News project in the effort to support the technological development of tools to control information exchange on the Web to counter the phenomenon of disinformation.

The “Fake News” initiative was implemented by the University of Palermo as a partner supporting the lead partner It.Hub/Blasting News (Milan-Lugano) and was articulated in six phases. The partners contributed in different ways: the university provided expertise in the humanities (sociology of communication, law and information design), and the lead partner provided advanced technology (ICT). This publication is part of the project’s dissemination and, in many ways, its conclusion: it presents the outcome of the academic research carried out by Sicilian faculty, with contributions from other scholars who participated in the project and supported it through transdisciplinary critical analysis.

The Fake News project was developed as a social project to advance the idea of a plural, open and dialectical society. One product of social action is public opinion, which directly and indirectly influences policy decisions, including those concerning the control and prospects of social innovation, thus exerting pressure on any kind of democratic regime. In non-democratic regimes, public opinion is strongly influenced by the ruling power. Disinformation hinders the free process of public opinion building by using various means to negatively influence public opinion, widening the chasm between decision-making power and the active citizenry, who in turn need to be properly informed in order to contribute usefully and transparently to achieving publicly shared goals.

The volume is divided into four parts that reflect the cognitive path the project followed: from technological (ICT) questions to social ones, from reflections on the impact of disinformation on law and the safeguarding of public information to considerations on its implications for visual communication, architecture and urban planning.

Based on these studies, we believe it is possible to open a new field of study in which social studies can find a way to engage with other crucial disciplines to build connections between society, justice and quality of communication in the transformation of the places and spaces of the physical and virtual city.

Massimiliano Aliverti

The proposed solution: The fake news algorithm project and verification of results

Abstract: We describe an attempt to build an algorithm able to estimate how likely a piece of online content is to be considered “fake news”, based on the analysis of the article text through artificial intelligence and machine learning, particularly Natural Language Processing, and on the analysis of contextual information such as website authority and author realness. The end goal of this algorithm is to promote a new preventive approach to identifying fake news content, empowering readers to assess for themselves what is to be considered “fake news” and what, on the contrary, is trustworthy and reliable content.

Keywords: ICT, fact-checking, algorithm, machine learning, misinformation, computing machinery


To date, the most common solutions to the “fake news” problem in the online world have primarily used a reactive approach, mainly working to take down misinformation before it goes viral. Social networks like Facebook or Twitter have mostly used this approach to limit the spread of false or misleading content within their digital environments. As Adam Mosseri, Facebook VP of News Feed, declared back in 2017: “We cannot become arbiters of truth ourselves – it’s not feasible given our scale, and it’s not our role. Instead, we’re working on better ways to hear from our community and work with third parties to identify false news and prevent it from spreading on our platform” (Mosseri, 2017).

Due to their immense scale of operations, the social networks’ typical course of action is to remove fake news from distribution as soon as it is flagged as a potential problem or as likely to harm their audience. Social media also act by removing the economic incentives for traffickers of misinformation, on the premise that the motivation behind posting fake news is mainly financial (Olan et al., 2022).

The practical steps that have been used so far by social networking companies include (Mosseri, 2017):

(a) Identification of false news through community reporting and through third-party fact-checking organizations, so that its spread can be limited and made uneconomical. For example, if reading an article makes people significantly less likely to share it, social networks take this as a sign that the story has misled people in some way. Social networks have also made it easier to report a false news story, allowing users to flag stories as false and subsequently demoting them in their content feeds. Lastly, companies like Facebook and Twitter started programs with independent third-party fact-checking organizations: whenever such an organization identifies a story as false, they typically link to a corresponding article explaining why and, again, demote the story in content feeds.

(b) Making it difficult for individuals or organizations posting fake news to buy sponsored advertising on their social media platforms, thus removing a strong financial incentive that typically boosts the practice even further.

(c) Applying advanced technological tools such as AI and machine learning to detect accounts responsible for spamming and posting false news, then removing them from their platforms. Social networks have started to take a hard line against this activity and typically block millions of fake accounts each day, most of them shortly after their creation. For example, around the US presidential election, between October and December 2020, Facebook alone disabled more than 1.3 billion fake accounts created on its platform (Rosen, 2020).

However, we strongly believe that a reactive approach is not always the most effective way to tackle the fake news problem. Any social intervention that merely reacts to an emerging or existing problem affecting individuals or communities implicitly assumes that the harm has already occurred and that little can be done about it. A preventive approach, by contrast, relies on a proactive process that involves (a) forward-looking diagnostics to assess risk factors potentially affecting vulnerable individuals or communities and (b) providing tools to prevent them from suffering negative effects or from aggravating those risks (Santana & Juana, 2021).

By proposing a preventive approach, we would like to give readers a simple tool – in the form of an algorithm-based web application – to assess for themselves whether to trust a news article or news source and, whenever trust is not warranted, to ask additional questions and to search for and compare different sources of information. This aligns with the underlying purpose of any social work: to promote changes that improve people’s quality of life and the environments they live in.


The technical solution

To tackle the fake news issue in the online and social media environment, we built an algorithm that identifies potential false-news signals at the website and news-article level and provides readers with recommendations on how best to interpret the content of a news article in light of its trustworthiness. The algorithm combines four different areas of analysis into a synthetic score which returns one of three possible results: (a) likely to be fake news, (b) several elements needed to determine the trustworthiness of the content are missing, but not enough to consider it fake news, and (c) unlikely to be fake news. For simplicity of interpretation, each result is associated with a colour: (a) red when the news article is very likely to be fake, (b) orange when elements are missing, and (c) green when the article appears trustworthy.

The areas of analysis we investigated are the following:

  • Analysis at a website level;
  • Analysis at an author level;
  • Analysis at an entity level;
  • Sentiment analysis both at a document level and at a sentence level.
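As an illustration of how the four per-area scores above might feed a single colour-coded verdict, the fragment below averages hypothetical scores and maps the result to the three outcomes described earlier. The score range, the averaging and the thresholds are placeholder assumptions for exposition only; the real weights and cut-offs are not disclosed by the project.

```python
# Hypothetical sketch: fold four per-area scores (website, author,
# entity, sentiment), each assumed to lie in [-1, 1], into one of the
# three traffic-light verdicts. Thresholds are illustrative placeholders.

def classify(scores, low=-0.3, high=0.3):
    """Map a list of per-area scores to a (colour, verdict) pair."""
    synthetic = sum(scores) / len(scores)  # naive unweighted average
    if synthetic <= low:
        return "red", "likely to be fake news"
    if synthetic < high:
        return "orange", "missing elements to determine trustworthiness"
    return "green", "unlikely to be fake news"

# Order of areas: website, author, entity, sentiment
print(classify([0.8, 0.6, 0.9, 0.7]))      # → green verdict
print(classify([-0.9, -0.5, -0.4, -0.6]))  # → red verdict
print(classify([0.1, -0.2, 0.3, 0.0]))     # → orange verdict
```

Any monotonic aggregation with two cut points would produce the same three-way partition; the sketch only shows the shape of the mapping, not the patented scoring.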

Website analysis

The goal of website analysis is to identify suspicious signals on the website hosting the news content under evaluation. The basic assumption is that any legitimate publisher can be clearly identified and should provide physical proof of its existence in the form of an address or contact information.

To execute this analysis, we inspect the page’s HTML for a few distinctive elements, in particular those expressed through schema.org, a structured-data markup vocabulary widely supported by search engines and social media. The original goal of this on-page markup is to help search engines and social media understand the information on web pages and provide richer information to readers (Guha et al., 2015). We tactically repurpose schema.org metadata to identify the following distinctive elements and apply scoring scheme A:

Tab. 1: Variable and scoring scheme A

Variable | Scoring scheme
Can the publisher name be found on the web page? | +“n” (positive) score if present (a); -“n” (negative) score if not present
Can a physical address be found on the web page? | +“n” (positive) score if present; -“n” (negative) score if not present
Can any contact information (telephone or email) be found on the web page? | +“n” (positive) score if present; -“n” (negative) score if not present

(a) Note: real scores won’t be provided throughout this paper because they constitute an industrial secret pending patent approval of the algorithm.
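A minimal sketch of how scoring scheme A could be applied in practice: scan a page’s HTML for schema.org-style markers of publisher name, postal address and contact details, adding or subtracting a score for each check. The regular expressions and the magnitude N below are illustrative assumptions, not the project’s patented implementation, whose real scores are undisclosed.

```python
# Hypothetical sketch of scoring scheme A over raw HTML.
# Each check contributes +N if its schema.org-style marker is found,
# -N otherwise. N = 1 is a placeholder; real values are secret.
import re

N = 1  # placeholder score magnitude

CHECKS = {
    "publisher": r'itemprop=["\']name["\']|"publisher"',
    "address":   r'itemprop=["\']address["\']|"PostalAddress"',
    "contact":   r'itemprop=["\'](telephone|email)["\']|"ContactPoint"',
}

def score_website(html: str) -> int:
    """Sum of +N/-N over the three presence checks of Tab. 1."""
    score = 0
    for label, pattern in CHECKS.items():
        score += N if re.search(pattern, html) else -N
    return score

page = '''
<div itemscope itemtype="https://schema.org/NewsMediaOrganization">
  <span itemprop="name">Example News</span>
  <div itemprop="address">Via Roma 1, Palermo</div>
</div>
'''
print(score_website(page))  # name and address found, no contact: +N +N -N = 1
```

A production system would parse the markup properly (e.g. JSON-LD or microdata extraction) rather than pattern-match raw HTML; the regexes here only make the +n/-n bookkeeping concrete.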


ISBN (Hardcover)
Open Access
Publication date: 2023 (March)
Berlin, Bern, Bruxelles, New York, Oxford, Warszawa, Wien, 2022. 242 pp., 34 fig. b/w, 7 tables.
