title: Spreading of fake news, competence, and learning: kinetic modeling and numerical approximation
authors: Franceschi, Jonathan; Pareschi, Lorenzo
date: 2021-09-28
journal: Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences
DOI: 10.1098/rsta.2021.0159

The rise of social networks as the primary means of communication in almost every country in the world has simultaneously triggered an increase in the amount of fake news circulating online. This fact became particularly evident during the 2016 U.S. political elections and even more so with the advent of the COVID-19 pandemic. Several research studies have shown how the effects of fake news dissemination can be mitigated by promoting greater competence through lifelong learning and discussion communities, and more generally through rigorous training in the scientific method and broad interdisciplinary education. The urgent need for models that can describe the growing infodemic of fake news has been highlighted by the current pandemic. The resulting slowdown in vaccination campaigns due to misinformation, and more generally the inability of individuals to discern the reliability of information, is posing enormous risks to the governments of many countries. In this research, using the tools of kinetic theory, we describe the interaction between fake news spreading and the competence of individuals through multi-population models in which fake news spreads analogously to an infectious disease, with different impact depending on the level of competence of individuals. The level of competence, in particular, is subject to evolutionary dynamics due to both social interactions between agents and external learning dynamics. The results show how the model is able to correctly describe the dynamics of diffusion of fake news and the important role of competence in their containment.

With the rise of the Internet, connections among people have become easier than ever; so have the availability of information and its accessibility. As such, the Internet is also a source of fake news, created and shared for a variety of reasons, including malicious purposes. Such compartmental descriptions are also sensible in terms of matching the model with the available data. In this paper we follow this pathway: borrowing ideas from kinetic theory [15, 27], we combine a classical compartmental approach inspired by epidemiology [18, 20] with a kinetic description of the effects of competence [28, 29]. We refer also to the recent work [6] concerning evolutionary models for knowledge.

In fact, the first wave of initiatives addressing fake news focused on news production by trying to limit citizen exposure to fake news. This can be done by fact-checking, labeling stories as fake, and eliminating them before they spread. Unfortunately, this strategy has already been proven not to work: it is indeed unrealistic to expect that only high-quality, reliable information will survive. As a result, governments, international organizations, and social media companies have turned their attention to digital news consumers, and particularly children and young adults. From national campaigns in several countries to the OECD, there is a wave of action to develop new curricula, online learning tools, and resources that foster the ability to "spot fake news" [26].
It is therefore of paramount importance to build models capable of describing the interplay between the dissemination of fake news and the creation of competence among the population. To this end, the approach we have followed in this paper falls within the recent socio-economic modeling described by kinetic equations (see [27] for a recent monograph on the subject). More precisely, we adapted the competence model introduced in [28, 29] to a compartmental model describing fake news dissemination. Such a model allows us to introduce competence not as a static feature of the dynamics but as an evolutionary component, taking into account both learning through interactions between agents and possible interventions aimed at educating individuals in the ability to identify fake news. Furthermore, in our modeling approach agents may have memory of a fake news item, and as such be permanently immune to it once it has been detected; alternatively, the fake news may not have any inherent peculiarities that would make it memorable enough for the population to immunize themselves against it in the future. The approach can be easily adapted to other compartmental models present in the literature, like the ones previously discussed [5, 25, 32].

The rest of the manuscript is organized as follows. In Section 2 we introduce the structured kinetic model describing the spread of fake news in the presence of different competence levels among individuals. The main properties of the resulting kinetic models are also analyzed. Next, Section 3 is devoted to the study of the Fokker-Planck approximation of the kinetic model and to the derivation of the corresponding stationary states in terms of competence. Several numerical results are then presented in Section 4 that illustrate the theoretical findings and the capability of the model to describe transition effects in the spread of fake news due to the interaction between epidemiological and competence parameters. Some concluding remarks are reported in the last section, together with details on the theoretical results and the numerical methods in two separate appendices.

In this section, we introduce a structured model for the dissemination of fake news in the presence of different levels of skill among individuals in detecting the actual veracity of information, by combining a compartmental model in epidemiology and rumor-spreading analysis [14, 18] with the kinetic model of competence evolution proposed in [29]. We consider a population of individuals divided into four classes: the oblivious ones, still not aware of the news; the reflecting ones, who are aware of the news and are evaluating how to act; the spreaders, who actively disseminate the news; and the silent ones, who have recognized the fake news and do not contribute to its spread. Terminology, when describing these compartmental models, is not fully established; however, the dominant one, inspired by epidemiology, refers to the definitions provided by Daley [14] of a population composed of ignorant, spreader and stifler individuals. The class of reflecting agents can be referred to as a group that has a time delay before taking a decision and entering an active compartment [5, 25]. Notation, i.e., the choice of letters to represent the compartments, is even more scattered and somewhat confusing. In Table 1, for the readers' convenience, we have summarized some of the different possible choices of letters and terminology found in the literature.
Table 1. Category names in the different compartmental models found in the literature.

  SEIR (this paper) | DK [13, 14] | ISR [32] | SEIZR [5]         | SEIZ [25]
  Susceptible       | Ignorant    | Ignorant | Susceptible       | Susceptible
  Exposed           | -           | -        | Idea incubator    | Exposed
  Infectious        | Spreader    | Spreader | Idea adopter      | Infectious
  Removed           | Stifler     | Stifler  | Skeptic/Recovered | Skeptic

Given the widespread use of epidemiological models compared to fake news models, and in order to make the analogies easier to understand, we chose to align with the notation conventionally used in epidemiology. Therefore, in the rest of the paper we will describe the population in terms of susceptible agents (S), who are the oblivious ones; exposed agents (E), who are in the time-delay compartment after exposure and before shifting into an active class; infectious agents (I), who are the spreaders; and finally removed agents (R), who are aware of the news but not actively engaging in its spread. Note that this subdivision of the population does not take into account the actual beliefs of agents about the truth of the news, so that removed agents, for instance, need not actually be skeptical, nor do the spreaders need to actually believe the news. To simplify the mathematical treatment, as in the original works by Daley and Kendall [13, 14], we ignored the possible 'active' effects of the removed population interacting with other compartments and producing immunization among susceptibles (the role of skeptic individuals in [5, 25]) and remission among spreaders (the role of stiflers in [32]). Of course, the model easily generalizes to include these additional dynamics.

The main novelty in our approach is to consider an additional structure on the population based on the concept of competence of the agents, here understood as the ability to assess and evaluate information. Let us suppose that agents in the system are completely characterized by their competence x ∈ X ⊆ R_+, measured in a suitable unit. We denote by f_S(x, t), f_E(x, t), f_I(x, t) and f_R(x, t) the competence distributions at time t > 0 of susceptible, exposed, infectious and removed individuals, respectively. Aside from natality or mortality concerns (i.e., the social network is a closed system: nobody enters or leaves it during the diffusion of the fake news, which is a common assumption based on the average lifespan of fake news), we therefore have

  ∫_X ( f_S(x, t) + f_E(x, t) + f_I(x, t) + f_R(x, t) ) dx = 1,

which implies that we will refer to

  S(t) = ∫_X f_S(x, t) dx,  E(t) = ∫_X f_E(x, t) dx,  I(t) = ∫_X f_I(x, t) dx,  R(t) = ∫_X f_R(x, t) dx

as the fractions of the population that are susceptible, exposed, infectious or removed, respectively. We also denote by m_S(t), m_E(t), m_I(t) and m_R(t) the corresponding mean competence levels.

The fake news dynamics proceeds as follows: a susceptible agent gets to know the news from a spreader. At this point, the now-exposed agent evaluates the piece of information (the reflecting, or delay, stage) and decides whether to share it with other individuals, turning into a spreader themselves, or to keep silent, removing themselves from the dissemination process. When the dynamics is independent of the knowledge of individuals, the model can be expressed by the following system of ODEs

  dS/dt = -β S I + (1 - α) γ I,
  dE/dt = β S I - δ E,
  dI/dt = (1 - η) δ E - γ I,                            (1)
  dR/dt = η δ E + α γ I,

with S + E + I + R = 1, and where β is the contact rate between the class of the susceptible and the class of the infectious, δ is the rate at which agents make their decision about spreading the news or not, 1 - η is the portion of agents who become infectious, and γ is the rate at which spreaders remove themselves from the compartment, due, e.g., to loss of interest in sharing the news or forgetfulness. Finally, α is related to the specificity of the fake news and the probability of individuals to remember it. A probability of 0 means that the fake news does not have any inherent peculiarity (e.g., in terms of content, structure, style, . . . ) that can make it memorable enough for the population to 'immunize' against it in the future, while a probability of 1 means that agents retain the full ability not to fall for that fake news a second time. The various parameters have been summarized in Table 2.

Table 2. Relevant parameters in the fake news dynamics.

  Parameter | Definition
  β         | contact rate between susceptible and infectious individuals
  1/δ       | average decision time on whether or not to spread the fake news
  η         | probability of deciding not to spread the fake news
  1/γ       | average duration of a fake news
  α         | probability of remembering the fake news

The diagram of the SEIR model (1) is shown in Figure 1.
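To make the structure of system (1) concrete, the following minimal Python sketch integrates the four equations written above with SciPy. The parameter values, the initial condition and the time horizon are purely illustrative and are not taken from the paper.

```python
# Minimal sketch: integrate the macroscopic system (1) with illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

beta, delta, eta, gamma, alpha = 0.5, 1.0, 0.1, 0.2, 0.2   # illustrative values only

def fake_news_ode(t, y):
    S, E, I, R = y
    dS = -beta * S * I + (1.0 - alpha) * gamma * I
    dE = beta * S * I - delta * E
    dI = (1.0 - eta) * delta * E - gamma * I
    dR = eta * delta * E + alpha * gamma * I
    return [dS, dE, dI, dR]

y0 = [0.99, 0.0, 0.01, 0.0]                 # initial fractions, S + E + I + R = 1
sol = solve_ivp(fake_news_ode, (0.0, 100.0), y0, max_step=0.5)
print("final fractions (S, E, I, R):", sol.y[:, -1])
```

Setting alpha = eta = 0 in the same routine reproduces the SEIS-type behavior discussed next, in which no permanent immunization is attained.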
It is straightforward to notice that when α and η are zero, system (1) reduces to a classic SEIS epidemiological model. This is consistent with treating the dissemination of non-specific fake news in a population as the spread of a disease with multiple strains, for which a durable immunization is never attained. In this case system (1) has two equilibrium states: a disease-free equilibrium state (1, 0, 0) and an endemic equilibrium state P̃ = (S̃, Ẽ, Ĩ), where

  S̃ = γ/β = 1/R_0,   Ẽ = (γ/(γ + δ)) (1 - 1/R_0),   Ĩ = (δ/(γ + δ)) (1 - 1/R_0),

and R_0 = β/γ is the basic reproduction number. It is known [21] that if R_0 > 1 the endemic equilibrium state P̃ of system (1) is globally asymptotically stable. If instead α > 0 or η > 0, there is also the possibility of becoming permanently immunized against fake news with those traits; moreover, both infectious and exposed agents eventually vanish, leaving only the susceptible and removed compartments populated. In the case of maximum specificity of the fake news, i.e., α = 1, the stationary equilibrium state has the form (S_∞, 0, 0, 1 - S_∞), where S_∞ is the solution of a nonlinear equation involving the initial datum S_0 = S(t = 0). We refer to [5, 25, 32] for the inclusion of additional interaction dynamics, taking into account counter-information effects due to the removed population acting against the susceptible and the infectious, and for the relative analysis of the resulting equilibrium states.

In the following, we combine the evolution of the densities according to the SEIR model (1) with the competence dynamics proposed in [29]. We refer to the degree of competence that an individual can gain or lose in a single interaction with the background as z ∈ R_+; in what follows we denote by C(z) the bounded-mean distribution of z, satisfying

  ∫_{R_+} C(z) dz = 1,   ∫_{R_+} z C(z) dz = m_B < +∞.

Assuming a susceptible agent has a competence level x and interacts with another agent belonging to one of the compartments of the population and having a competence level x_*, their levels after the interaction will be given by the binary exchange rules (5), where λ_S(·) and λ_BS(·) quantify the amount of competence lost by susceptible individuals through the natural process of forgetfulness and the amount gained by susceptible individuals from the background, respectively. λ_CJ(·), instead, models the competence gained through the interaction with members of the class J, with J ∈ {S, E, I, R}; a possible choice for λ_CJ(·) is proportional to the characteristic function χ(x ≥ x̄), where x̄ ∈ X is a minimum level of competence required of the agents to increase their own skills through interactions. Finally, κ_SJ and κ̃_SJ are independent and identically distributed zero-mean random variables with the same variance σ, which account for the non-deterministic nature of the competence acquisition process.
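Since the explicit exchange rules (5)-(8) are not reproduced here, the short sketch below implements a single binary update of the linear type used in the competence models of [28, 29]; the functional form, the parameter names and the uniform sampling of κ are illustrative assumptions rather than the authors' exact rule.

```python
# Illustrative single binary competence update, modeled on the linear rules of [28, 29].
# This is an assumed form for expository purposes, not the paper's exact interaction rule.
import numpy as np

rng = np.random.default_rng(0)

def competence_update(x, x_star, z, lam=0.05, lam_C=0.025, lam_B=0.025,
                      kappa_max=0.05, x_bar=0.5):
    """Post-interaction competence of an agent with competence x meeting an agent
    with competence x_star, given a background sample z."""
    gain_from_other = lam_C if x >= x_bar else 0.0   # learning from others only above x_bar
    kappa = rng.uniform(-kappa_max, kappa_max)       # zero-mean random fluctuation
    x_new = (1.0 - lam) * x + gain_from_other * x_star + lam_B * z + kappa * x
    # crude safeguard; in the paper nonnegativity follows from the bounds on the parameters
    return max(x_new, 0.0)

print(competence_update(x=0.8, x_star=1.2, z=rng.uniform(0.0, 1.5)))
```

With λ = λ_C + λ_B, the competence gained on average from the other agent and from the background balances the amount lost by forgetfulness, which is the conservative choice used later in the numerical tests.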
The binary interactions involving the exposed agents can be defined similarly, as in (6); the same holds for the interactions concerning the infectious fraction of the population, as in (7); and finally for the interactions regarding the removed agents, as in (8). It is reasonable to assume that both the processes of gain and loss of competence, through the interaction with other agents or with the background in (5)-(8), are bounded from below by zero. Therefore we suppose that, if J, H ∈ {S, E, I, R}, λ_J ∈ [λ_J^-, λ_J^+] with λ_J^- > 0 and λ_J^+ < 1, and λ_CJ(x), λ_BJ(x) ∈ [0, 1], then κ_HJ may, for example, be uniformly distributed in a symmetric interval chosen so that the post-interaction competence levels remain nonnegative.

In order to combine the SEIR compartmental model with the evolution of the competence levels given by equations (5)-(8), we introduce the interaction operators Q_HJ(·, ·) following the standard Boltzmann-type theory [27]. As earlier, we will denote by J a suitable compartment of the population, i.e., H, J ∈ {S, E, I, R}, and we will use the brackets ⟨·⟩ to indicate the expectation with respect to the random variable κ_HJ. Thus, if ψ(x) is an observable function, the action of Q_HJ(f_H, f_J)(x, t) on ψ(x) is given, for H = S, E, I and R respectively, by the weak forms (9)-(12):

  ∫_X Q_HJ(f_H, f_J)(x, t) ψ(x) dx = ⟨ ∫_{X×X} ∫_{R_+} ( ψ(x') - ψ(x) ) f_H(x, t) f_J(x_*, t) C(z) dz dx_* dx ⟩,

where the post-interaction competence x' is defined by (5) for H = S, by (6) for H = E, by (7) for H = I, and by (8) for H = R. All the above operators preserve the total number of agents as the unique interaction invariant, corresponding to ψ(·) ≡ 1. The kinetic system then reads

  ∂f_S/∂t = -K(f_S, f_I)(x, t) + (1 - α(x)) γ(x) f_I(x, t) + Σ_J Q_SJ(f_S, f_J)(x, t),
  ∂f_E/∂t = K(f_S, f_I)(x, t) - δ(x) f_E(x, t) + Σ_J Q_EJ(f_E, f_J)(x, t),
  ∂f_I/∂t = (1 - η(x)) δ(x) f_E(x, t) - γ(x) f_I(x, t) + Σ_J Q_IJ(f_I, f_J)(x, t),          (13)
  ∂f_R/∂t = η(x) δ(x) f_E(x, t) + α(x) γ(x) f_I(x, t) + Σ_J Q_RJ(f_R, f_J)(x, t),

where the sums run over J ∈ {S, E, I, R} and the function

  K(f_S, f_I)(x, t) = f_S(x, t) ∫_X β(x, x_*) f_I(x_*, t) dx_*

is responsible for the contagion, β(x, x_*) being the contact rate between agents with competence levels x and x_*. In the above formulation we also assumed β, γ, δ, η and α to be functions of x. Note that, clearly, the most important parameters influenced by individuals' competence are β(x, x_*), since individuals have the highest rates of contact with people belonging to the same social class, and thus with a similar level of competence; δ(x), as individuals with greater competence invest more time in checking the authenticity of information; and η(x), which characterizes individuals' decision to spread fake news. On the other hand, the values of γ and α may be assumed to be less influenced by the level of expertise of individuals.
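As a small illustration of how the contagion term in (13) can be evaluated numerically, the sketch below approximates K(f_S, f_I)(x, t) on a uniform competence grid with the trapezoidal rule; the densities and the kernel β(x, x_*) used here are illustrative choices, not the ones adopted later in the numerical tests.

```python
# Sketch: evaluate the contagion term K(f_S, f_I)(x) = f_S(x) * ∫ beta(x, x_*) f_I(x_*) dx_*
# on a uniform competence grid. Densities and kernel below are illustrative choices.
import numpy as np

x = np.linspace(0.0, 5.0, 201)
f_S = np.exp(-(x - 2.0) ** 2)               # illustrative susceptible competence density
f_I = 0.05 * np.exp(-(x - 1.0) ** 2)        # illustrative infectious competence density

def beta(xi, xj, beta0=4.0, Delta=2.0):
    # illustrative kernel: contacts mainly between agents with similar competence levels
    return beta0 * (np.abs(xi - xj) <= Delta)

B = beta(x[:, None], x[None, :])            # matrix of contact rates beta(x_i, x_j)
K = f_S * np.trapz(B * f_I[None, :], x, axis=1)
print("total incidence:", np.trapz(K, x))
```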
In this section we analyze some of the properties of the Boltzmann system (13). First, let us consider the reproduction ratio in the presence of knowledge. By integrating system (13) with respect to x, and considering only the compartments of individuals which may disseminate the fake news, we obtain the evolution equations for E(t) and I(t). In this derivation we used the fact that the Boltzmann interaction terms describing knowledge evolution among agents preserve the total number of individuals and therefore vanish. Following the analysis in [4], and omitting the details for brevity, we obtain a reproduction number (15) generalizing the classical one.

Next, following [15], we can prove uniqueness of the solution of (13) in the simplified case of constant parameters β, γ, δ, η, α ∈ [0, 1]. In this case, exploiting the fact that the interaction operator Q(·, ·) has a natural connection with the Fourier transform by choosing its kernel e^{-ixξ} as test function, we can analyze the system (13) with the Fourier transforms of the densities as unknowns. Indeed, given a function f(x) ∈ L^1(R_+), its Fourier transform is defined as

  f̂(ξ) = ∫_{R_+} f(x) e^{-iξx} dx.

The system (13) then becomes a system (16) for the transformed densities, where the operators Q̂_HJ(f̂_H, f̂_J) are defined in terms of the Fourier transforms of their arguments for J ∈ {S, E, I, R} through kernels A_HJ, with H, J ∈ {S, E, I, R}, defined in (17). We suppose that the parameters satisfy the condition (18) on the quantity ν = max_{H,J ∈ {S,E,I,R}} A_HJ, which will prove useful in the proof.

As in [15], we recall a class of metrics which is of natural use in bilinear Boltzmann equations [27]. Let f and g be probability densities. Then, for s > 0 we define

  d_s(f, g) = sup_{ξ ∈ R, ξ ≠ 0} |f̂(ξ) - ĝ(ξ)| / |ξ|^s,          (19)

which is finite whenever f and g have equal moments up to the integer part of s, or up to s - 1 if s is an integer. We then have a uniqueness result (Theorem 1); for the details of the proof we refer to Appendix A.

A highly useful tool for obtaining analytical information on the large-time behavior of Boltzmann-type models is given by scaling techniques, in particular the so-called quasi-invariant limit [27], which allows one to derive the corresponding mean-field description of the kinetic model (13). Indeed, let us consider the case in which the interactions between agents produce small variations of the competence. We scale the quantities involved in the binary interactions (5)-(8) accordingly, as in (20), where J ∈ {S, E, I, R}, together with the functions involved in the dissemination of the fake news, as in (21). We denote by Q^ε_HJ(·, ·) the scaled interaction terms. Omitting the dependence of the mean values on time and re-scaling time as t → t/ε, we obtain, up to O(ε), a reduced system, where we used a Taylor expansion for small values of ε of the observables in (9)-(12) together with the scaled interaction rules (5)-(8). Letting ε → 0, following [27], from the computations of the previous section we formally obtain the Fokker-Planck system (22)-(25), where now the Boltzmann interaction operators are replaced by drift-diffusion terms.

We can consider the system for the mean values associated with (22)-(25) in the case of constant epidemiological parameters, with m(t) = m_S(t) + m_E(t) + m_I(t) + m_R(t). In the case α > 0 or η > 0, we know that E(t) → 0, I(t) → 0, S(t) → S_∞ and R(t) → R_∞ = 1 - S_∞ due to mass conservation, so that m_E(t) → 0 and m_I(t) → 0 as well. Thus, adding all the equations together shows that m(t) → m_B, i.e., m_S^∞ + m_R^∞ = m_B. At this point, adding together equations (22) to (25) gives a closed equation for the total density, which has as solution an inverse Gamma density (31). It is straightforward to see that the scaled Gamma densities are solutions of the system (22)-(25). If, instead, α = η = 0, we find again the same solution as (31), but in this case J → J̃, where the J̃ are defined as in (2). In Figure 2 we report two examples of the stationary solutions where we chose the competence variable z to be uniformly distributed in [0, 1]: in the first case (left) we considered α = η = 0, while in the second case (right) we set α = 0.2 and η = 0.1.

In this section we present some numerical tests to show the characteristics of the model in describing the dynamics of fake news dissemination in a population with a competence-based structure. To begin with, we validate the Fokker-Planck model obtained as the quasi-invariant limit of the Boltzmann equation: we do so through a Monte Carlo method for the competence distribution (see [27], Chapter 5 for more details). Next, we approximate the Fokker-Planck system (22)-(25) by generalizing the structure-preserving numerical scheme [30], in order to explore the interplay between competence and dissemination dynamics in the more realistic case of epidemiological parameters dependent on the competence level (see Appendix B). Lastly, we investigate how the diffusion of fake news impacts differently on different classes of the population, defined in terms of their capability of interacting with information.
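Before the tests, the following rough Monte Carlo sketch gives the flavor of the particle approximation used for the competence dynamics: N particles interact pairwise with the illustrative linear rule introduced earlier, plus exchanges with a background of mean m_B, and the population mean relaxes towards m_B. It is a simplified stand-in for the Nanbu-type algorithms of [27], not the authors' implementation, and all numerical values are illustrative.

```python
# Rough particle Monte Carlo for the competence dynamics alone (no epidemic structure).
# Uses the illustrative linear interaction rule sketched earlier; not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)
N, n_steps = 10_000, 2_000
lam_C, lam_B = 0.0375, 0.0375
lam = lam_C + lam_B                     # conservative choice: gains balance forgetfulness on average
kappa_max, m_B = 0.05, 0.75

x = rng.uniform(0.0, 2.0, size=N)       # initial uniform competence distribution on [0, 2]
for _ in range(n_steps):
    idx = rng.permutation(N)
    i, j = idx[: N // 2], idx[N // 2:]  # random disjoint interaction pairs
    z_i = rng.uniform(0.0, 2.0 * m_B, size=N // 2)   # background samples with mean m_B
    z_j = rng.uniform(0.0, 2.0 * m_B, size=N // 2)
    k_i = rng.uniform(-kappa_max, kappa_max, size=N // 2)
    k_j = rng.uniform(-kappa_max, kappa_max, size=N // 2)
    xi, xj = x[i], x[j]
    x[i] = np.maximum((1 - lam) * xi + lam_C * xj + lam_B * z_i + k_i * xi, 0.0)
    x[j] = np.maximum((1 - lam) * xj + lam_C * xi + lam_B * z_j + k_j * xj, 0.0)

print("steady-state mean competence:", round(x.mean(), 3), " background mean m_B:", m_B)
```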
In this test we show that the mean-field Fokker-Planck system (22)-(25), obtained under the quasi-invariant scaling (20) and (21), is a good approximation of the Boltzmann model (13) when ε ≪ 1. We do so by using a Monte Carlo method with N = 10^4 particles, starting with a uniform distribution of competence f_0(x) = (1/2) χ(x ∈ [0, 2]), where χ(·) is the indicator function, and performing various iterations until the stationary state was reached; next, the distributions were averaged over the following 500 iterations. We considered constant competence-related parameters λ_CJ = λ_BJ and λ_J = λ_CJ + λ_BJ, as well as a constant variance σ for the random variables κ_HJ. In Figure 3 we plot the results for (λ, σ) = (0.075, 0.150) (circle-solid, teal) and for (λ, σ) = (0.001, 0.002) (square-solid, ochre): those choices correspond to scaling regimes of ε = 0.075 and ε = 0.001, respectively, with µ = 2. Finally, we assumed that m_B = 0.75 (left) and m_B = 1 (right). Directly comparing the equilibrium of the Boltzmann dynamics with the explicit analytic solution of the Fokker-Planck regime shows that, if ε is small enough, the Fokker-Planck asymptotics provide a consistent approximation of the steady states of the kinetic distributions.

For this test, we applied the structure-preserving scheme to system (22)-(25) in a more realistic scenario featuring an interaction term dependent on the competence level of the agents, as well as a competence-dependent delay during which agents evaluate the information and decide how to act. In this setting, we refer to the recent Survey of Adult Skills (SAS) made by the OECD [26]: in particular, we focus on competence understood as a set of information-processing skills, especially through the lens of literacy, defined [26] as "the ability to understand, evaluate, use and engage with written texts in order to participate in society". The SAS is an international, multi-year effort in the framework of the OECD's PIAAC (Programme for the International Assessment of Adult Competencies); one of the peculiarities that makes it interesting in our case is that it was administered digitally to more than 70% of the respondents. Digital devices are arguably the most important vehicle for information diffusion in OECD countries, which helps to keep consistency. Literacy proficiency was defined through 6 increasing levels; we therefore consider a population partitioned into 6 classes based on the competence level of their members, equated to the (normalized) score of the SAS literacy proficiency test. Thus, we chose a log-normal-like initial competence distribution f(x), with parameters ξ̄ = 5, μ̄ ≈ 0.85 and σ̄ ≈ 0.22 chosen to make f(x) agree with the empirical findings in [26]. The computational domain is restricted to x ∈ [0, 5] and stationary boundary conditions have been applied as described in Appendix B. Initial distributions for the epidemiological compartments were set accordingly. The contact rate β(x, x_*) was chosen with ∆ = 2, on the hypothesis that interactions occur more frequently among people with a similar competence level and are higher for people with lower competence levels. The rate δ(x) at which the information is evaluated by the agents, who thereby exit the exposed class, was set with δ_L = 1, δ_R = 5, a = 2 and b = 2.5. Here we are taking into account that people with higher efficacy at identifying fake news spend significantly more time on their evaluations than people with lower efficacy [23]. In this specific test case the time range for the evaluation of the information spans between 1 day and about 5 hours. These values were purposely chosen rather large compared to realistic values in order to also highlight the behavior of the exposed compartment.
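The explicit formula for δ(x) is not reproduced above. Purely as an illustration of a rate that increases smoothly from δ_L to δ_R with competence, one can use a logistic interpolation driven by the quoted parameters a and b; the logistic form itself is our assumption, not necessarily the expression used in the paper.

```python
# Illustrative competence-dependent evaluation rate: logistic interpolation between
# delta_L = 1 and delta_R = 5 (rates per day). The logistic shape is an assumption;
# only the endpoint values and the parameters a = 2, b = 2.5 are quoted from the text.
import numpy as np

def delta(x, delta_L=1.0, delta_R=5.0, a=2.0, b=2.5):
    return delta_L + (delta_R - delta_L) / (1.0 + np.exp(-a * (x - b)))

for xi in np.linspace(0.0, 5.0, 6):
    print(f"x = {xi:.1f}   delta = {delta(xi):.2f} /day   mean evaluation time = {24.0 / delta(xi):.1f} h")
```

With these values the mean evaluation time 1/δ(x) ranges from roughly one day for the least competent agents to about five hours for the most competent ones, consistently with the range quoted in the text.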
Finally, we set γ = 0.2, which corresponds to an average fake news duration of 5 days, and α = 0.2, so that individuals have a moderate probability of remembering the fake news and becoming immune to it, and we assume η = 0.1; namely, in this test we do not relate the decision whether or not to spread the fake news to the level of competence.

We investigate the relation between the dissemination-related component of the model and the competence-related one, which entails that agents can learn, i.e., increase their competence level, both from the background and from direct binary interactions. We work under the assumption that λ_CJ + λ_BJ = λ_J, which is a conservative choice: the expected value of the competence gained through interactions cancels out the one lost due to forgetfulness. In this learning process two main parameters are involved: λ = λ_J (i.e., all compartments have the same learning rate) and m_B, the mean of the background competence variable z. For what concerns the dissemination-related component, instead, the main factor is the reproduction number R_0 defined in (15). Hence, we measured the differences in the spread of fake news when varying these three parameters. In Figure 4 (left) we show the highest portion of spreaders in relation to the background mean m_B and to the reproduction number R_0; in the right panel λ is plotted against R_0. To perform the test, we leveraged the structure-preserving numerical scheme [30], whose details are presented for convenience in Appendix B. In both panels of Figure 4 we see transition effects: the learning process triggered by the competence dynamics is capable of slowing down the dissemination of fake news in the population, even to the point of preventing it from taking place. In the first case, the background mean m_B, i.e., the mean of the distribution of the background competence variable z, which we assumed uniformly distributed, varies between 0.03125 and 0.25, while the reproduction number R_0 varies between 1.1 and 10. In the second case, we left R_0 unchanged, while λ varies between 0.0125 and 0.125 with a background mean m_B = 0.125. We can see that the background mean has a more pronounced impact on the slowing of the diffusion of fake news, with a steeper transition effect.
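As a caricature of the post-processing behind Figure 4, the snippet below sweeps R_0 for the competence-free system (1), reusing the solver sketched earlier, and records the peak of I(t), i.e., the "highest portion of spreaders". It deliberately ignores the competence dynamics, so it only illustrates how the plotted quantity is measured, not the kinetic computation that produces the actual figure; all values are illustrative.

```python
# Caricature of the quantity plotted in Figure 4: peak spreader fraction versus R0
# for the competence-free system (1). Competence dynamics are ignored on purpose.
import numpy as np
from scipy.integrate import solve_ivp

delta, eta, gamma, alpha = 1.0, 0.1, 0.2, 0.2       # illustrative values

def peak_spreaders(R0, t_end=300.0):
    beta = R0 * gamma
    def rhs(t, y):
        S, E, I, R = y
        return [-beta * S * I + (1 - alpha) * gamma * I,
                beta * S * I - delta * E,
                (1 - eta) * delta * E - gamma * I,
                eta * delta * E + alpha * gamma * I]
    sol = solve_ivp(rhs, (0.0, t_end), [0.99, 0.0, 0.01, 0.0], max_step=0.25)
    return sol.y[2].max()

for R0 in (1.1, 2.0, 5.0, 10.0):
    print(f"R0 = {R0:4.1f}   peak spreader fraction = {peak_spreaders(R0):.3f}")
```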
In this final test we considered how much of an impact the competence level can have on the dissemination of fake news in the population. We simulated the mean-field model (22)-(25) assuming the same competence-dependent contact rate β(x, x_*) of Test 2, in this case with β_0 = 4, as well as the same delay rate δ(x) and the same γ, but we additionally assume that the decision whether or not to spread a fake news item is affected by the level of competence. This is somewhat controversial in the literature, since other factors, like the age of individuals, also affect this behavior (tests carried out on young people have shown that the decision to share fake news is independent of competence, in contrast to what happens in older people; see [23, 26]). To emphasize this effect we assume a competence-dependent probability η(x) with k = 0.1, so that individuals with a high level of competence rarely decide to spread fake news. In Figure 5 we report the time evolution of the distributions of susceptible (top left), exposed (top right), infected (bottom left) and removed (bottom right) agents with competence parameters λ_BJ = λ_CJ = 0.125, λ_J = λ_BJ + λ_CJ and m_B = 0.125, in the case α = 0.1. In Figure 6, instead, we show the evolution with the same parameters except for a larger probability α = 0.9 of remembering the fake news.

In Figure 7 we show the relative numbers of susceptible, exposed, infected and removed agents, on the left for α = 0.1 and on the right for α = 0.9. To measure the effects of competence, we considered the curve of the infected agents depending on their literacy levels according to [26], for x ∈ [0, 5]:

• below level 1: scoring less than 175/500 (x < 1.75);
• level 1: scoring between 176/500 and 225/500 (1.75 < x < 2.25);
• level 2: scoring between 226/500 and 275/500 (2.25 < x < 2.75);
• level 3: scoring between 276/500 and 325/500 (2.75 < x < 3.25);
• level 4: scoring between 326/500 and 375/500 (3.25 < x < 3.75);
• level 5: scoring more than 375/500 (x > 3.75).

Figure 8. Infected agents for each literacy level [26]. Left: α = 0.1; right: α = 0.9.

Figure 8 shows clearly that the more competent the individuals, the less they contribute to the spread of fake news, in perfect agreement with the transition effects observed in Test 4.2 due to the interplay between competence and the dissemination dynamics. Moreover, we can see how the probability α of remembering the fake news influences its dissemination in the population: a lower probability implies a higher peak of infected agents for each competence level, as well as a slower spread overall.

In this paper, we introduced a compartmental model for fake news dissemination that also considers the competence of individuals. In the model, the concept of competence is not introduced as a static feature of the dynamics, but as an evolutionary component that takes into account both learning through interactions between agents and interventions aimed at educating individuals in the ability to detect fake news. From a mathematical viewpoint, the competence dynamics has been introduced as a Boltzmann interaction term in the corresponding system of differential equations. A suitable scaling limit permits us to recover the corresponding Fokker-Planck models and then the resulting stationary states in terms of competence. These, in agreement with [29], are given explicitly by Gamma distributions. The numerical results demonstrate the model's ability to correctly describe the interplay between fake news dissemination and individuals' level of competence, highlighting transition phenomena, related to the level of expertise, that allow fake news to spread more rapidly. Future developments of the model will be considered in particular in the case of networks, in order to describe the spread of fake news on social networks and to present plausible scenarios useful to limit the spread of false information. This can be done by following an approach similar to that of kinetic models for opinion formation on networks [1]. Another challenging aspect concerns the matching of the model with realistic data, which requires the introduction of quantitative aspects that are not always easy to identify [25, 36]. One of the main applications will be related to combating misinformation in the vaccination campaign against COVID-19.

Appendix A. We provide here a proof of Theorem 1. The proof is identical to that of [15]; we develop it here, too, for completeness, only in the case α = η = 0. If H ∈ {S, E, I} we have (35), where the second equality follows from mass conservation. In (35) we have defined Q̂^+(f̂_H)(ξ, t) as in (36), where A_HJ has been defined originally in (17).
Thus, in the case α = η = 0, system (16) takes an explicit form. To ensure positivity of all coefficients on the right-hand side (since I(t) < 1 and β, γ, δ < 1), we can add f̂_J(ξ, t) to each side, where J equals S, E and I in the first, second and third equation, respectively, obtaining the equivalent system (37), in which all coefficients on the right-hand side are positive. Now, let f̂_J(ξ, t) and ĝ_J(ξ, t) be two solutions of the system (37). We look at the time behavior of the d_2 metric of their difference, where the Fourier-based metric was defined in (19); therefore we define

  h_J(ξ, t) = ( f̂_J(ξ, t) - ĝ_J(ξ, t) ) / |ξ|^2,   J ∈ {S, E, I, R}.

We see that the metric and the h_J are related by (38), namely d_2(f_J, g_J)(t) = sup_ξ |h_J(ξ, t)|. By their definition, the h_J are solutions of the system (39), where the operators L^+(f̂_H)(ξ, t), with H ∈ {S, E, I, R}, are defined in terms of the gain parts of the interaction operators. We can rewrite L^+(f̂_H)(ξ, t) in full as in (40), with the expectation ⟨·⟩ put outside for convenience. As shown, e.g., in [27], and since f_J and g_J are solutions of the SEIS system for the masses and the mean values, we can profitably bound the addends in the sum on the right-hand side of (40) in terms of the functions h_H and h_J. So we obtain a differential inequality for the h_J. Multiplying both sides of (39) by e^{2t}, integrating from 0 to t and taking the suprema, and using mass conservation, we can bound sup_ξ |h_J(ξ, t)| in terms of its initial value. Recalling the relation (38), we obtain the thesis of Theorem 1.

Appendix B. Here we provide some details on the structure-preserving numerical scheme [30] for the general class of nonlinear Fokker-Planck equations of the form

  ∂g(x, t)/∂t = ∇_x · [ B[g](x, t) g(x, t) + ∇_x ( D(x) g(x, t) ) ],   g(x, 0) = g_0(x),          (41)

where t ≥ 0, x ∈ X ⊆ R^d, d ≥ 1 and g(x, t) ≥ 0 is the unknown distribution function. As mentioned above, B[g] is a bounded aggregation operator and D(·) models diffusion. The scheme [30] follows the work of Chang and Cooper [9] to construct a numerical method which preserves features of the solution such as its large-time behavior. If we examine system (22)-(25), we notice it has a structure of the form

  ∂f(x, t)/∂t = E(f(x, t)) + F[f](x, t),          (42)

where f(x, t) = (f_S(x, t), f_E(x, t), f_I(x, t), f_R(x, t))^T, E(f(x, t)) is a vector accounting for the dissemination dynamics, and F[f](x, t) = (F_J[f](x, t))^T_{J ∈ {S,E,I,R}}. Here we recognize that the J-th entry of F[f](x, t) is precisely the right-hand side of equation (41) in dimension d = 1, with a suitable choice of B[f_J] when α > 0 and with D(x) = (σ/2) x^2. Hence, if we consider system (22)-(25) in the form (42), we can apply the structure-preserving numerical scheme [30] to it: considering a spatially uniform grid x_i ∈ X, such that x_{i+1} - x_i = ∆x, and denoting x_{i±1/2} = x_i ± ∆x/2, the discretization of the J-th component of (42) can be obtained by [30]

  d f_{J,i}(t)/dt = ( F_{i+1/2}(t) - F_{i-1/2}(t) ) / ∆x + E_{J,i}(t),          (43)

where the numerical fluxes F_{i±1/2} are built with the Chang-Cooper-type weights

  δ_{i+1/2} = 1/λ_{i+1/2} + 1/( 1 - exp(λ_{i+1/2}) ),

and finally λ_{i+1/2} is defined in terms of the drift and the diffusion evaluated at the cell interface x_{i+1/2} (see [30]). Integration with respect to the competence level was performed using a Gauss-Legendre quadrature with 6 points. Notice also that we need to truncate the domain of computation for x > 0: following [30], we imposed on the last grid point x_{N+1} the quasi-stationary condition of [30] on the ratio f_{N+1}(t)/f_N(t). Time integration of (43) was performed using a semi-implicit scheme which, upon choosing ∆t = O(∆x), preserves the nonnegativity of the solution (see [30]).
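To give a flavor of the flux construction in (43), the sketch below assembles Chang-Cooper-type fluxes for a one-dimensional equation of the form (41), rewritten in flux form with an effective drift, diffusion D(x) = σx²/2, and no-flux boundaries. The drift, the parameter values and the boundary treatment are illustrative stand-ins (the paper uses the operator B[f_J] and a quasi-stationary condition at the right boundary); only the interface weight δ_{i+1/2} follows the formula quoted above.

```python
# Sketch of a Chang-Cooper-type flux assembly for the 1D equation (41) rewritten in
# flux form  dg/dt = d/dx [ C(x) g + D(x) dg/dx ],  with effective drift C = B + D'
# and D(x) = sigma*x^2/2. Drift, parameters and boundaries are illustrative choices.
import numpy as np

sigma, m_B, N = 0.1, 0.75, 200
x = np.linspace(0.0, 5.0, N + 1)            # grid points x_i, i = 0, ..., N
dx = x[1] - x[0]
xh = 0.5 * (x[:-1] + x[1:])                 # cell interfaces x_{i+1/2}

D = lambda s: 0.5 * sigma * s ** 2          # diffusion D(x) = sigma * x^2 / 2
C = lambda s: (s - m_B) + sigma * s         # illustrative effective drift B + D'

def rhs(g):
    """(F_{i+1/2} - F_{i-1/2}) / dx with zero fluxes at the domain boundaries."""
    Ch, Dh = C(xh), D(xh)
    lam = dx * Ch / np.maximum(Dh, 1e-14)               # lambda_{i+1/2}
    lam = np.where(np.abs(lam) < 1e-6, 1e-6, lam)       # avoid 0/0 in the weight formula
    delta = 1.0 / lam + 1.0 / (1.0 - np.exp(lam))       # Chang-Cooper weight quoted above
    F = np.zeros(N + 2)                                 # F[0] = F[N+1] = 0: no-flux boundaries
    F[1:-1] = Ch * ((1.0 - delta) * g[1:] + delta * g[:-1]) + Dh * (g[1:] - g[:-1]) / dx
    return (F[1:] - F[:-1]) / dx

g = np.exp(-x)
g /= np.trapz(g, x)                         # illustrative normalized initial density
print("mass production (should be ~0):", dx * rhs(g).sum())
```

Coupling such a routine with the dissemination terms E_J and a semi-implicit time integrator would give a scheme of the type described above.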
References

[1] Opinion dynamics over complex networks: Kinetic modelling and numerical methods.
[2] Social media and fake news in the 2016 election.
[3] Stewardship of global collective behavior. Proceedings of the National Academy of Sciences.
[4] Hyperbolic compartmental models for epidemic spread on networks with uncertain data: Application to the emergence of Covid-19 in Italy.
[5] The power of a good idea: Quantitative modeling of the spread of ideas from epidemiological models.
[6] Convergence of knowledge in a stochastic cultural evolution model with population structure, social learning and credibility biases.
[7] Information Dissemination in Scale-Free Networks: Profusion Versus Scarcity.
[8] How to model fake news.
[9] A practical difference scheme for Fokker-Planck equations.
[10] An epidemic model of rumor diffusion in online social networks.
[11] Automatic deception detection: Methods for finding fake news.
[12] Vaccination strategies against COVID-19 and the diffusion of anti-vaccination views.
[13] Epidemics and rumours.
[14] Epidemic Modelling: An Introduction. Cambridge Studies in Mathematical Biology.
[15] Wealth distribution under the spread of infectious diseases.
[16] Epidemic threshold in structured scale-free networks.
[17] Fake news: A definition.
[18] The mathematics of infectious diseases.
[19] Epidemiological modeling of news and rumors on Twitter.
[20] Contributions to the mathematical theory of epidemics. II. The problem of endemicity.
[21] Lyapunov functions and global properties for SEIR and SEIS epidemic models.
[22] Small world effect in an epidemiological model.
[23] How college students evaluate and share "fake news" stories. Library & Information.
[24] Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA.
[25] Using an Epidemiological Model to Study the Spread of Misinformation during the Black Lives Matter Movement.
[26] OECD. Skills Matter: Additional Results from the Survey of Adult Skills. OECD Skills Studies.
[27] Interacting multiagent systems. Kinetic equations and Monte Carlo methods.
[28] Wealth distribution and collective knowledge. A Boltzmann approach.
[29] Kinetic models of collective decision-making in the presence of equality bias.
[30] Structure preserving schemes for nonlinear Fokker-Planck equations and applications.
[31] Epidemic spreading in scale-free networks.
[32] Daley-Kendal models in fake-news scenario.
[33] CSI: A hybrid deep model for fake news detection.
[34] The diffusion of misinformation on social media: Temporal pattern, message, and source.
[35] Defensive modeling of fake news through online social networks.
[36] FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Fake News Detection on Social Media: A Data Mining Perspective.
[37] Fake News Risk: Modeling Management Decisions to Combat Disinformation.
[38] The agenda-setting power of fake news: A big data analysis of the online media landscape from.
[39] An overview of online fake news: Characterization, detection, and discussion. Information Processing and Management.
[40] Fake news propagate differently from real news even at early stages of spreading.

This work was partially supported by MIUR (Ministero dell'Istruzione, dell'Università e della Ricerca) PRIN 2017, project "Innovative numerical methods for evolutionary partial differential equations and applications", code 2017KKJP4X.