key: cord-0638394-yxzkbu3w
authors: Johnson, N. F.; Velasquez, N.; Leahy, R.; Restrepo, N. Johnson; Jha, O.; Lupu, Y.
title: Not sure? Handling hesitancy of COVID-19 vaccines
date: 2020-09-17
journal: nan
DOI: nan
sha: 94f0b85134adca2db78efd5217f57726d6244079
doc_id: 638394
cord_uid: yxzkbu3w

From the moment the first COVID-19 vaccines are rolled out, there will need to be a large fraction of the global population ready in line. It is therefore crucial to start managing the growing global hesitancy toward any such COVID-19 vaccine. The current approach of trying to convince the "no"s cannot work quickly enough, nor can the current policy of trying to find, remove and/or rebut every individual piece of COVID and vaccine misinformation. Instead, we show how this can be done in a simpler way by moving away from chasing misinformation content and focusing instead on managing the "yes-no-not-sure" hesitancy ecosystem.

Such a high level of hesitancy risks everyone's health 3,4,5 and hopes of a 'return to normal'. It risks the diversity of volunteers for Phase III trials, and the billions of dollars allocated to vaccine development. It risks the likelihood of people accepting future COVID booster shots. And it undermines trust in existing vaccines for other diseases, as well as in more general public health advice.

Trying to change the minds of hardcore "no"s 1 will be too slow and too hard. Not only has anti-vaccination sentiment been around since vaccines were first created; our study of the social media ecosystem shows that anti-vaccination communities actually grew 6 in size and resilience during 2019, despite the fact that measles outbreaks were proliferating throughout 2019 and despite the fact that the measles vaccine already exists and has a strong safety record. Worse, their opposition to a future COVID vaccine has now gone into overdrive (Fig. 1A, red nodes). They won't be changing their minds any time soon.

A better approach is to focus on the "not sure"s: not just the 31% who specifically responded "not sure" in the latest poll 1, but also the large number of "yes" respondents who say they will delay receiving the vaccine until others have had it. Our analysis of the online Facebook ecosystem (of which the poll appears crudely representative) reveals well in excess of 100 million such "concerned" individuals. Each is a member of a community comprising 10 to 1,000,000 like-minded fans of a particular topic that is typically unrelated to vaccines, e.g. pet lovers, parent school groups, yoga fans, foodies or alternative health followers (green nodes, Fig. 1A). Since members of the same community tend to trust each other on this one topic or lifestyle choice (e.g. pet care, best choice of kindergarten, wine or organic blueberries), they also tend to listen when their community starts talking or posting about COVID and vaccines. And their growing collective concern has led their communities to form links with anti-vaccination communities (red nodes, Fig. 1A).

But how can the hesitancy of these "concerned" communities be reduced? Given the vicious circle of hesitancy and misinformation, one might try ramping up the current content-focused policy approach of labeling, removing or debunking specific stories. But this is not practical given the escalating number and nature of such stories and the need to act now, before vaccine rollout. It may also backfire.
Take the often-cited story that Bill Gates is planning to put semiconductor markers into COVID vaccines, so that medical records can be scanned from our families' foreheads at school or work like a can of supermarket soup. Both elements are indeed true: semiconductor quantum dots 7 can be excellent biomarkers, and The Gates Foundation has been involved in funding related research as well as COVID vaccine development. So while there is zero likelihood that this story will play out, it is in principle scientifically possible. Classifying it as 'misinformation' and removing it could therefore spark claims of stifling free speech, while saying it is 'wrong science' amplifies the debate over what is 'right science'. Both fuel the misinformation fire, and both are now rampant within the "concerned" communities (green nodes).

Worse, Facebook itself cannot find all the misinformation within its own platform, and leaving pieces untouched can wrongly suggest to users that they are true. Moreover, misinformation also flows freely within and between other platforms, such as 4Chan, which are beyond Facebook's control 8. So to do all this well, public health agencies and vaccine manufacturers would have to become more expert in social media than Facebook, which is again impossible.

Then come the content flavors. The entire establishment health enterprise ("Blue", Fig. 1B) can do little more than put out statements that are scientifically correct, which means almost by definition that they are quite standard and plain: vanilla. By contrast, the anti-establishment health subpopulation ("Red", Fig. 1B) collectively offers the concerned-and-engaged subpopulation ("Green", Fig. 1B) all sorts of tempting flavors of narrative. These range from the lack of any long-term safety record for a COVID vaccine, which is of course technically true; to claims that the human immune system offers a superior form of resistance, which is also hard to disprove given the limited understanding of COVID biology; to claims of hidden agendas of governments and big pharma, which again are hard to disprove given the highly political nature of the COVID vaccine race and the billions being invested to secure future batches; to the fact that science is still struggling to give precise answers to seemingly straightforward (but actually highly complex) questions such as best choices for school opening hours, numbers in a class, and mask design. Blue cannot hope to win such a content-chasing war quickly enough for vaccine rollout. It would get bogged down in virtual whack-a-mole across the ever-expanding multiverse of interconnected social media platforms, with an ever-expanding set of stories to tackle as different vaccine candidates from different countries come online and different political decisions loom.

Here we suggest a more immediate and less resource-intensive approach that leverages what pro-establishment messaging (Blue) already has at its disposal, i.e. when to engage in messaging and at what level in terms of volume. Red and Green feed off of Blue's activity, and Green also feeds off of Red (Fig. 1B). Figure 1C shows how powerful this approach of managing the ecosystem could be, using a simple, undergraduate-level model (Fig. 1B) that combines Newman's gossip model 9 and Strogatz's relationship dynamics model 10, and which is backed up by empirical findings and theory from studies of online opinion formation 11 and conflicts 12, as well as Ref. 6. There are very few parameters (Fig. 1B) and each is physically meaningful.
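To make the structure of such a model concrete, here is one minimal way it could be written down; the notation below (couplings c_BG, c_RB, c_RG, c_GB, baselines B_0, R_0, G_0, relaxation rate k, and Red's drive s_R) is an illustrative choice, not necessarily the exact parameterization used in Fig. 1B and the SOM:

dB/dt = -k (B - B_0) + c_BG (G - G_0)
dR/dt = -k (R - R_0) + c_RB (B - B_0) + c_RG (G - G_0) + s_R
dG/dt = -k (G - G_0) + c_GB (B - B_0) + c_RG (R - R_0)

Here B, R and G are the messaging activity levels of the establishment (Blue), anti-establishment (Red) and concerned (Green) subpopulations; the relaxation terms -k (X - X_0) keep each subpopulation near its baseline output, in the spirit of Strogatz-style relationship dynamics; the coupling terms pass activity between subpopulations, in the spirit of gossip-style spreading; and s_R represents Red's own agenda-driven output. The signs of the couplings are what distinguish the four policies described next.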
Most importantly, this simple model reproduces the main features of the evolution of the Reds, Blues and Greens from the start of COVID until now (see SOM). The advantage of this type of ecosystem analysis for policymakers is that the predictions are available as precise, plug-and-play formulae (see SOM) that can be evaluated by hand or with a simple calculator app on a phone; no computer simulations or coding are required. Interestingly, it predicts 4 distinct classes of outcomes and hence 4 classes of policy (Fig. 1C), each of which could be implemented immediately as discussed below. As with any other issue such as climate change, framing policy discussions in terms of calculated behaviors with quantifiable and testable predictions is far more powerful than vaguer verbal arguments about what might work. Immediate action is essential: if things are left to carry on as they are, Fig. 1A predicts that Red's stronghold will not only continue growing, it will draw in and likely tip an increasing fraction of Greens, which will seriously undermine all future COVID vaccine rollouts and renewals.

Policies 1-4 show the trade-offs for Blue: on the one hand, Blue must get communities such as pet lovers and yoga fans away from concern about COVID and vaccines, and back to their real interests, i.e. reduce the green curve in Fig. 1C. At the same time, it must keep the red curve (Red) under control. And it must do all this using the blunt instrument of its own messaging activity and without feeding the infodemic frenzy, i.e. Blue's average output must remain steady (blue curve).

Policy 1 shows what happens if Blue mirrors Red and Green's messaging activity, in the scenario that Red and Green are also doing the same to each other, i.e. all the couplings in Fig. 1B are positive. This is like becoming louder when the other is loud, and quieter when the other is quiet. The green ("concerned") curve initially increases and hence gets worse, but then settles to a stable value. Red activity drops to a lower steady value. While not a dramatic improvement in overall hesitancy, it is in principle possible to choose the positive coupling values such that total support is above the estimated herd immunity threshold. Policymakers would, however, have to warn the public that things will initially feel slightly worse before improving.

Policy 2 differs from Policy 1 in that it shows what happens if Blue has high messaging activity levels when Green is low and vice versa, i.e. the Blue-Green coupling is negative. This is like becoming louder when the other is quiet, and quieter when the other is loud. Though Red activity drops, which is desirable, Green escalates dramatically, which is undesirable.

Policy 3 is the opposite of Policy 1 in that Blue, Red and Green all have negative feedback, i.e. all the couplings in Fig. 1B are negative. So each becomes louder when the others are quiet and vice versa. The outcome is good all round: not only does Red activity drop to a steady state, Green drops dramatically and keeps decreasing over time.

Policy 4 differs from Policy 3 in that Red and Green now have positive feedback with each other, i.e. the Red-Green coupling is positive, so Red and Green become louder when the other is loud, and quieter when the other is quiet. This policy has the advantage over Policy 3 that Red and Green both eventually keep decreasing over time. However, Green initially gets worse before reaching a turning point, which can be calculated and hence predicted exactly.

Which of these 4 policies is most suitable will depend on the current values of the couplings in Fig. 1B.
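For readers who want to experiment, here is a minimal numerical sketch of the toy model written down above. The coupling names (c_bg, c_rg, c_rb), baselines, drive term and all numerical values are illustrative assumptions; this is not the Fig. 1B model itself and is not intended to reproduce Fig. 1C quantitatively, only to show how each policy corresponds to a different choice of coupling signs.

import numpy as np

def simulate(c_bg, c_rg, c_rb, T=50.0, dt=0.01):
    """Forward-Euler integration of the toy Blue/Red/Green activity model."""
    k, s_r = 1.0, 0.2                    # relaxation rate and Red's own drive (assumed values)
    B0, R0, G0 = 1.0, 0.5, 0.5           # baseline activity levels (arbitrary units)
    B, R, G = B0, R0 + 0.3, G0 + 0.3     # start with elevated Red and Green activity
    steps = int(T / dt)
    traj = np.zeros((steps, 3))
    for i in range(steps):
        traj[i] = (B, R, G)
        dB = -k * (B - B0) + c_bg * (G - G0)                          # Blue: steady baseline, modulated by Green
        dR = -k * (R - R0) + c_rb * (B - B0) + c_rg * (G - G0) + s_r  # Red: feedback plus its own drive
        dG = -k * (G - G0) + 0.3 * (B - B0) + c_rg * (R - R0)         # Green: listens to Blue and to Red
        B, R, G = B + dB * dt, R + dR * dt, G + dG * dt
    return traj

policies = {
    "Policy 1 (all couplings positive)":              (+0.4, +0.4, +0.4),
    "Policy 2 (Blue-Green coupling negative)":        (-0.4, +0.4, +0.4),
    "Policy 3 (all couplings negative)":              (-0.4, -0.4, -0.4),
    "Policy 4 (Red-Green positive, others negative)": (-0.4, +0.4, -0.4),
}
for name, couplings in policies.items():
    traj = simulate(*couplings)
    print(f"{name:48s} final Red = {traj[-1, 1]:.2f}, final Green = {traj[-1, 2]:.2f}")

Of course, the point made above stands: the steady states of such a linear toy model can be written down in closed form, so in practice the plug-and-play formulae in the SOM make even this level of simulation unnecessary.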
But the key is that each policy can be analyzed and compared ahead of implementation on a case-by-case basis, depending on Blue's level of control over the various parameters in Fig. 1B, using the plug-and-play formulae in the SOM. Hence the system can be nudged toward the estimated levels required for herd immunity.

These policies also apply to other situations where there is competition between establishment messaging (Blue), anti-establishment messaging (Red) and a background population (Green) whose 'hearts and minds' can tip the balance 13,14. For example, the approach could help with the contentious climate-change narratives circulating around the September 2020 wildfires in California, and it could kickstart the needed public engagement before quantum information technologies are unleashed 15.

Nor does the analysis apply only to hesitancy online. By working with epidemiologists, more detail can be added to Fig. 1B by incorporating how communities are interconnected within each subpopulation and how the messaging spreads, yielding a fuller theory of infodemic spreading within a heterogeneous population 16,17. Moreover, the role of specific content could be included using machine learning 18, with different types of misinformation having different coupling values in Fig. 1B, while cleverer use of human psychology could enhance the model's realism 19,20,21.

References

Why the anti-vaxxer movement is gaining ground amid a pandemic. The Straits Times.
A lack of information can become misinformation.
Of virality and viruses: the anti-vaccine movement and social media.
Misinformation, Crisis, and Public Health - Reviewing the Literature V1.0. Social Science Research Council.
The online competition between pro- and anti-vaccination views.
Quantum dots; few-body, low-dimensional systems.
Hate multiverse spreads malicious COVID-19 content online beyond individual platform control.
Nonlinear Dynamics and Chaos.
Emergent dynamics of extremes in a population driven by common information sources and new social media algorithms.
Anomalously slow attrition times for asymmetric populations with internal group dynamics.
Counteracting Dangerous Narratives in the Time of COVID-19.
Structure and tie strengths in mobile communication networks.
Quantum for All.
Covid-19 infodemic reveals new tipping point epidemiology and a revised R formula.
Science vs conspiracy: Collective narratives in the age of misinformation.
Quantifying COVID-19 Content in the Online Health Opinion War Using Machine Learning.
The Strength of Social Norms Across Human Groups.
Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study.
The science of fake news.

Supporting Online Material (SOM)

For simplicity here, let us suppose that Blue will continue to put out scientific messaging and advice as COVID vaccine research develops. This output does not depend on the gossip going on within Red and Green, hence the direction of the arrow and coupling in the model. Green absorbs this (see SI) and to some extent so does Red, but Red also has its own activity toward Blue and Green. So these couplings (Fig. 1B) are largely in Red's control, not in Blue's. Hence Fig. 1C focuses on Blue only being able to control its activity level with respect to Green.

Proof that the model reproduces the features in the empirical data during 2020, despite having very few parameters:

[Figure: empirical data (left panels) and model predictions (right panels) for the number of communities (i.e. clusters, each of which is a node in Fig. 1A): (top) clusters that subscribe to other clusters broadcasting COVID narratives and hence are 'listening to COVID narratives'; (bottom) clusters that are broadcasting COVID narratives to other clusters and hence are 'talking COVID narratives'.]
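As an illustration of the kind of plug-and-play formula referred to in the main text, consider the toy notation introduced earlier (again an illustrative sketch, not the SOM's actual parameterization) in the simplified limit where Blue holds a constant baseline output, so that only Red and Green evolve, with mutual Red-Green coupling c_RG, relaxation rate k and constant Red drive s_R. Setting the time derivatives to zero gives the steady-state activity levels

R* - R_0 = k s_R / (k^2 - c_RG^2),    G* - G_0 = c_RG s_R / (k^2 - c_RG^2),

which can indeed be evaluated by hand: for |c_RG| < k, a negative Red-Green coupling pulls Green's steady activity below its baseline, while a positive Red-Green coupling leaves it elevated.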