key: cord-0124070-rz331bxf
authors: Weber, Derek; Falzon, Lucia; Mitchell, Lewis; Nasim, Mehwish
title: Promoting and countering misinformation during Australia's 2019-2020 bushfires: A case study of polarisation
date: 2022-01-10
journal: nan
DOI: nan
sha: 75da77c3456893c65d24e7d495b42109806ecfa5
doc_id: 124070
cord_uid: rz331bxf

During Australia's unprecedented bushfires in 2019-2020, misinformation blaming arson resurfaced on Twitter using #ArsonEmergency. The extent to which bots were responsible for disseminating and amplifying this misinformation has received scrutiny in the media and academic research. Here we study Twitter communities spreading this misinformation during the population-level event, and investigate the role of online communities and bots. Our in-depth investigation of the dynamics of the discussion uses a phased approach -- before and after reporting of bots promoting the hashtag was broadcast by the mainstream media. Though we did not find many bots, the most bot-like accounts were social bots, which present as genuine humans. Further, we distilled meaningful quantitative differences between two polarised communities in the Twitter discussion, resulting in the following insights. First, Supporters of the arson narrative promoted misinformation by engaging others directly with replies and mentions using hashtags and links to external sources. In response, Opposers retweeted fact-based articles and official information. Second, Supporters were embedded throughout their interaction networks, but Opposers obtained high centrality more efficiently despite their peripheral positions. By the last phase, Opposers and unaffiliated accounts appeared to coordinate, potentially reaching a broader audience. Finally, unaffiliated accounts shared the same URLs as Opposers over Supporters by a ratio of 9:1 in the last phase, having shared mostly Supporter URLs in the first phase. This foiled Supporters' efforts, highlighting the value of exposing misinformation campaigns. We speculate that the communication strategies observed here could be discoverable in other misinformation-related discussions and could inform counter-strategies.
People share an abundance of useful information on social media during crises (Bruns and Liang, 2012; Bruns and Burgess, 2012). This information, if analysed correctly, can rapidly reveal population-level events such as imminent civil unrest, natural disasters, or accidents (Tuke et al, 2020). Not all content is helpful, however: different entities may try to popularise false narratives using sophisticated social bots and/or engaging humans. The spread of such mis- and disinformation not only makes it difficult for analysts to use Twitter data for public benefit (Nasim et al, 2018) but may also encourage large numbers of people to adopt the false narratives, causing social disruption and polarisation, which may then influence public policy and action, and thus can be particularly dangerous during crises (Singer and Brooking, 2019; Kušen and Strembeck, 2020; The Soufan Center, 2021; Scott, 2021).

This paper expands our previous work (Weber et al, 2020), presenting deeper analysis of a case study of the dynamics of misinformation propagation, and the communities which promote or counter it, during one such crisis. We demonstrate that polarised groups can communicate and use social media in very different ways even when they are discussing the same issue; in effect, these can be considered communication strategies, as the groups are promoting their narrative and trying to convince others to accept their position.

The 2019-2020 Australian 'Black Summer' bushfires (a.k.a. wildfires) burnt over 16 million hectares, destroyed over 3,500 homes, caused at least 33 human and a billion animal fatalities, 1 and attracted global media attention. During the bushfires, as in other crises, social media provided a mechanism for people in the fire zones to give on-the-ground reports of what was happening around them, a way for those outside (including authorities and media) to get insight into the events as they occurred, but also a way for the broader community to connect and process the imagery and experiences through discussion. The lack of the traditional information mediator or gatekeeper role played by the mainstream media permits factual errors, misinterpretation and outright bias to proliferate on social media unchecked, in a way they could not in decades past.

Our analysis of online discussion at this time shows:
• Significant Twitter discussion activity accompanied the Australian bushfires, influencing media coverage.
• Clearly discernible communities in the discussion had very different interpretations of the ongoing events.
• In the midst of the discussion, false narratives and misinformation circulated on social media, much of it seen during previous crises, including specific statements that:
- the bushfires were mostly caused by arson;
- preventative backburning efforts had been reduced due to green activism (previously presented in 2009 2);
- Australia commonly experiences such bushfires (previously put forward in 2013 3); and
- climate change is not related to bushfires.
All of these statements and their associated narratives were refuted officially, including via a state government inquiry which found that of 11,744 fires, only "11 were lit with intention to cause a bush fire" (NSW Bushfire Inquiry, 2020, p.29). In particular, the arson figures being disseminated online were incorrect, 4 preventative backburning has increasingly limited effectiveness, 5 and its use has not been curbed to appease environmentalists. 6

Analysis of the networks of different interactions in the data reveals how central these groups became and to what degree they connected to each other and the broader discussion. Content and co-activity analyses highlight how the different groups used hashtags, external articles and other sources to promote their narratives. Finally, an analysis of bot-like behaviour seeks to replicate Graham and Keller's findings (2020) and explores the most bot-like contributors in detail, including their contribution to the overall discussion.

This paper expands upon our original work, which was presented at the 2nd Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM) in 2020 (Weber et al, 2020), by providing:
• An examination of polarised and unaffiliated accounts' behaviour and content over time at the group level, which shows how the Opposers were mostly active only in Phase 2 and the majority of Supporter and Unaffiliated activity appeared in response to that in Phase 3;
• A specific research question addressing the polarised accounts' behaviours, other than retweeting, via SNA measures and visualisations, to explore how central to the discussion they were, and to what degree they interacted with each other and the broader discussion community;
• A specific research question addressing apparent coordinated retweeting, hashtag use and link sharing behaviour at the group level, via the analysis of new visualisations;
• A specific research question addressing the country of origin of polarised accounts and other active participants, and to what degree the groups received 'external' support, which is addressed with manual examination and categorisation of accounts' self-reported location descriptions, finding that a significant minority of non-Australians were present in the discussion;
• An exploration of inauthentic behaviour via hashtag use and tweet text patterns, finding Supporters engaged in aggressive trolling behaviour more than Opposers;
• An examination of the contribution of the most bot-like accounts to the discussion, with close examination of five in particular, raising questions regarding the distinction between bot behaviour and highly repetitive human behaviour;
• Comparison with a further contemporaneous contentious discussion, namely the #brexit discussion at a time when the United Kingdom was in the final stages of separating from the European Union; and
• An expanded literature review and updated sources, including independent reviews of the bushfires that have occurred since the publication of the original conference paper.

The contribution of this work includes:
1. Insights into the evolution of a misinformation campaign deliberately exaggerating the role of arson and downplaying the role of climate change in a catastrophic weather event;
2. Characterisation of two polarised communities active in the discussion with distinct agendas and communication strategies, also considered within the context of the broader discussion;
3. A further dataset contemporaneous with the original period, augmenting those published in Weber et al (2020); and
4. Methods and approaches for examining the behaviour and interaction of polarised communities in the context of the broader discussion, including co-activity analysis and statistical measures of community homophily.

The study of Twitter during crises and times of political significance is well established (Bruns and Liang, 2012; Bruns and Burgess, 2012; Flew et al, 2014; Marozzo and Bessi, 2017), and has provided recommendations to governments and social media platforms alike regarding its exploitation for timely community outreach. The Australian Queensland State Government was praised for its use of social media to manage communication during devastating floods (Bruns and Burgess, 2012), and analyses of coordinated behaviour have revealed significant organised anti-lockdown behaviour during the COVID pandemic (Magelinski and Carley, 2020; Loucaides et al, 2021) and in the lead up to the January 6 Capitol Riots in America (Scott, 2021; Ng et al, 2021). The continual presence of trolling and bot behaviour diverts attention and can confuse the public at times of political significance, whether it is to generate artificial support for policies and their proponents (Keller et al, 2017; Rizoiu et al, 2018; Woolley and Guilbeault, 2018), harass opponents (Keller et al, 2017; CREST, 2017) or just pollute existing communication channels (Woolley, 2016; Nasim et al, 2018; Kušen and Strembeck, 2020). Malign actors can also foster online community-based conflict (Datta and Adar, 2019; Mariconti et al, 2019) and polarisation (Conover et al, 2011; Garimella et al, 2018; Morstatter et al, 2018; Villa et al, 2021).

Misinformation on social media has also been studied (Kumar and Shah, 2018; Starbird, 2019; Starbird and Wilson, 2020; Singer and Brooking, 2019), with growing attention to its overall effect on society (Starbird, 2019; Carley, 2020), but many relevant current events are yet to be explored in the peer-reviewed literature. Because social media has become such a mainstay of modern communication, misinformation on social media is often amplified on the mainstream media (MSM), or by prominent individuals, often when it aligns with their ideological outlook, which then feeds back into social media as people discuss it further. 10 Such cycles have been known to be deliberately fostered (Benkler et al, 2018; Starbird and Wilson, 2020; Badham, 2021). Patterns of fire-related misinformation similar to those observed on #ArsonEmergency were repeated in the US during Californian wildfires in mid-2020, even causing armed vigilante gangs to form to counter non-existent Antifa activists who were blamed for the fires on social media. 11 Arson has again been blamed for the 2021 fires around the Mediterranean, throughout southern Europe and in northern Africa, 12 even as the United Nations' Intergovernmental Panel on Climate Change released its sixth Assessment Report stating that humans' effect on climate is now "unequivocal" (IPCC, In Press). Furthermore, when the misinformation involved relates to conspiracy theories involving public health measures during a global pandemic, the risk is that adherents will turn away from other evidence-based policies (Ball and Maxmen, 2020; Brazil, 2020; The Soufan Center, 2021).

A particular mixed-method investigation of the disinformation campaign against the White Helmets rescue group in Syria is useful to consider here (Starbird and Wilson, 2020).
Starbird & Wilson identified two clear corresponding clusters of pro-and anti-White Helmet Twitter accounts and used them to frame an investigation of how external references to YouTube videos and channels compared with videos embedded in Twitter. They found the anti-White Helmet narrative was consistently sustained through "sincere activists" and concerted efforts from Russian and alternative news sites. These particularly exploited YouTube to spread critical videos, while the pro-White Helmet activity relied on the White Helmets' own online activities and sporadic media attention. Other researchers have found similar patterns (Benkler et al, 2018; Jamieson, 2020; Pacheco et al, 2020) . This interaction between supporter and detractor groups and the media may offer insight into activity surrounding similar crises. We propose the following research questions to guide our exploration of Twitter activity over an 18 day period during the 2019-20 Australian "Black Summer" bushfires: To what extent can online misinformation campaigns be discerned? Are there discernible groups of accounts driving the misinformation, and if so how are they doing it? RQ2 How did the spread of arson narrative-related misinformation differ between phases, and did the spread of the hashtag #ArsonEmergency differ from other emergent discussions (e.g., #AustraliaFire and #brexit)? RQ3 How did the online behaviour of those who prefer the arson narrative differ from those who refute or question it? How was it affected by media coverage exposing how the #ArsonEmergency hashtag was being used? RQ4 How central were the communities to the discussion and how insular were they from each other and the broader discussion? RQ5 How did the communities make use of retweets, hashtags and URLs to promote their narrative? What evidence is there of coordination? RQ6 To what degree did the polarised groups receive support from outside Australia? RQ7 To what degree was the spread of misinformation facilitated or aided by trolls and/or automated bot behaviour engaging in inauthentic behaviour? In the remainder of this paper, we describe our mixed-method analysis and the datasets used. A timeline analysis is followed by the polarisation analysis. The revealed polarised communities are compared from behavioural and content perspectives, as well as through bot analysis. Answers to the research questions are summarised and we conclude with observations and proposals for further study of polarised communities. The primary dataset, 'ArsonEmergency', consists of 27,546 tweets containing this term posted by 12,872 unique accounts from 31 December 2019 to 17 January 2020. The tweets were obtained using Twitter's Standard search Application Programming Interface (API) 13 by combining the results of searches conducted with Twarc 14 on 8, 12, and 17 January 2020. As a contrast, the 'AusFire' dataset comprises tweets containing the term 'AustraliaFire' over the same period, made from the results of Twarc searches on 8 and 17 January 2020. 'AusFire' contains 111,966 tweets by 96,502 accounts. Broader searches using multiple related terms were not conducted due to time constraints and in the interests of comparison with Graham and Keller's findings . Due to the use of Twint 15 in that study, differences in dataset were likely but expected to be minimal. Differences in datasets collected simultaneously with different tools have been previously noted . 
Live filtering was also not employed, as the research started after Graham and Keller's findings were reported. Twitter may have removed inauthentic content in the time between it being posted and our searches being conducted, as part of its data cleaning routines. For these reasons, some of the content observed by Graham and Keller was expected to be missing from our dataset. This lack of consistency between social media datasets for comparative analyses is a growing challenge recently identified in the benchmarking literature (Assenmacher et al, 2021). A final contrast dataset was obtained via the Real-Time Analytics Platform for Interactive Data Mining (RAPID) (Lim et al, 2018), consisting of tweets containing the term '#brexit' during the same period. It contains 187,792 tweets by 78,216 accounts.

This study focuses on about a week of Twitter activity before and after the publication of the ZDNet article (Stilgherrian, 2020). Prior to its publication, the narratives that arson was the primary cause of the bushfires and that fuel load caused the extremity of the blazes were well known in the conservative media (Barry, 2020). The ZDNet article was published at 6:03am GMT (5:03pm AEST 16) on 7 January 2020, and was then reported more widely in the MSM morning news, starting around 13 hours later. We use these temporal markers to define three dataset phases:
• Phase 1: Before 6am GMT, 7 January 2020;
• Phase 2: From 6am to 7pm GMT, 7 January 2020; and
• Phase 3: After 7pm GMT, 7 January 2020.

Since late September 2019, Australian and international media had reported on the bushfires around Australia, including stories and photos drawn directly from social media, as those caught in the fires shared their experiences. No one hashtag had emerged to dominate the online conversation and many were in use, including #AustraliaFires, #ClimateEmergency, #bushfires, and #AustraliaIsBurning. The use of #ArsonEmergency was limited in Phase 1, with the busiest hour having around 100 tweets, but there was an influx of new accounts in Phase 2. Of all 927 accounts active in Phase 2 (responsible for 1,207 tweets), 824 (88.9%) had not posted in Phase 1 (which had 2,061 active accounts). 1,014 (84%) of the tweets in Phase 2 were retweets, more than 60% of which were retweets promoting the ZDNet article and the findings it reported. Closer examination of the timeline revealed that the majority of the discussion occurred between 9pm and 2am AEST, possibly inflated by a single tweet referring to the ZDNet article (at 10:19 GMT), which was retweeted 357 times. In Phase 3, more new accounts joined the conversation, but the day/night cycle indicates that the majority of discussion was local to Australia (or at least its major timezones).

The figures above raise the question: is this growth in accounts using a term typical? As a contrast, we considered tweets in the same period containing the term 'AustraliaFire' and compared the growth in the accounts using the term over time. #AustraliaFire was one of the terms used in Graham and Keller's analysis as a contrast to #ArsonEmergency. Figure 2 shows that the patterns of growth of the users of the two terms differed considerably.
One notable difference between the use of these terms is that 'AustraliaFire', though employed more than 'ArsonEmergency', did not receive the same degree of media exposure around the 7th of January. As a further contrast, we also examined the growth in uses of the term '#brexit' in a collection based on just that keyword. Given the use of #brexit was well established and the period did not include any notable Brexit-related events, we offer it as an example of steady activity. Use of 'AustraliaFire' clearly had a significant period of growth early in January, at a different point to 'ArsonEmergency'.

The term 'ArsonEmergency' (sans '#') was used for the Twarc searches, rather than '#ArsonEmergency', to capture tweets that did not include the hashtag symbol but were relevant to the discussion. This was done to capture discussions of the term, in which participants deliberately chose to avoid using the term in a way that would contribute to the hashtag discussion (i.e., by including the hashtag symbol). We refer to this as meta-discussion, i.e., discussion about the discussion.

Fig. 2 Growth patterns in the number of users using the terms 'ArsonEmergency' and 'AustraliaFire' (which includes use with the '#' prefix as well as without), and '#brexit' over the same period.

Fig. 3 Counts of tweets using the terms 'ArsonEmergency' and 'AustraliaFire' without a '#' symbol from the period 2-15 January 2020 in meta-discussion regarding each term's use as a hashtag (counts outside this period were zero).

We sought to understand how much of the discussion relating to #ArsonEmergency was, in fact, meta-discussion. Of the 27,546 tweets in the 'ArsonEmergency' dataset, only 100 did not use the term with the '#' symbol (0.36%), and only 34 of the 111,966 'AustraliaFire' tweets did the same (0.03%), so it is clear that very little of the discussion was meta-discussion. That said, there were several days on which tens of tweets seemed to be involved in meta-discussion, as shown in Figure 3. These coincide with Phase 2, when the story reached the MSM, and then again a few days later, possibly as a secondary reaction to the story (commenting on the initial reaction to the story on the MSM). The small number of uses in the meta-discussion implies that most use of the term 'ArsonEmergency' without the hash or pound symbol was a deliberate, rather than an incidental, part of the discussion. Examination of these particular tweets confirms this; we present examples in Table 1.

Table 1 Examples of meta-discussion referring to the #ArsonEmergency hashtag without including it directly, by removing or separating the leading '#' character:
• "Research from QUT shows that 'some kind of a disinformation campaign' is pushing the Twitter hashtag # ArsonEmergency. There is no arson emergency. https://t.co/ URL"
• "@ ACADEMIC @ JOURNALIST Venn Diagram of "ArsonEmergency" with "Qanon" and "Agenda21" conspiracies could be interesting UNIMPRESSED EMOJI"
• "suggest @AFP @NSWpolice ,@Victoriapolice as this misinformation is likely to cause panic & distress in Bushfire hit communties. This link is US news but it contains saliant facts about arrests. https://t.co/ URL"
• "When retweeting, remove hashtag from 'arsonemergency' https://t.co/ URL"
• "@ JOURNALIST #!ArsonEmergency -a notag."
Fig. 4 The retweet network. On the left in blue is the Opposer community, which countered the arson narrative promoted by the red Supporter community on the right. Nodes represent users. An edge from node A to B means that account A retweeted one of B's tweets. Node size corresponds to indegree centrality, indicating how often the account was retweeted.

As our aim is to learn who was promoting #ArsonEmergency and its related misinformation, we first looked to the retweets. Retweets are the primary mechanism for Twitter users to reshare tweets to their own followers. Retweets reproduce a tweet unmodified, except to include an annotation indicating which account retweeted them. There is no agreement on whether retweets imply endorsement or alignment. Metaxas et al (2015) studied retweeting behaviour in detail by conducting user surveys and studying over 100 relevant papers referring to retweets. They conclude that when users retweet, it indicates interest and agreement, as well as trust in not only the message content but also in the originator of the tweet. This opinion is not shared by some celebrities and journalists, who put a disclaimer on their profile: "retweets ≠ endorsements". Metaxas et al (2015) also indicated that the inclusion of hashtags strengthens the agreement, especially for political topics. Other motivations, such as the desire to signal to others to form bonds and manage appearances (Falzon et al, 2017), serve to further imply that even if retweets are not endorsements, we can assume they represent agreement or an appeal to like-mindedness at the very least.

Given the highly connected nature of Twitter data and our aim of exploring human social behaviour, modelling our data as networks to facilitate social network analysis is a logical step (Brandes and Erlebach, 2005). Using nodes to represent individuals, edges can be used to represent the flow of information and influence and the strength of those connections. We conducted an exploratory analysis on the retweet network built from the 'ArsonEmergency' dataset, which is shown in Figure 4. The nodes represent Twitter accounts and are sized by indegree (i.e., frequency of being retweeted). An edge between two accounts shows that one retweeted a tweet of the other. Using conductance cutting (Brandes et al, 2007), we discovered two distinct well-connected communities, with a very low number of edges between them. Next, we selected the top ten most retweeted accounts from each community, manually checked their profiles, and hand labelled them as Supporters and Opposers of the arson narrative accordingly. 17 The accounts have been coloured accordingly in Figure 4: the 497 red nodes are accounts that promoted the narrative (the Supporters), while the 593 blue nodes are accounts that opposed them (the Opposers).

The term #ArsonEmergency had different connotations for each community. Supporters used the hashtag to reinforce and promote their existing beliefs about climate change, while Opposers used it to refute the arson theory. The arson theory was a topic on which people held strong opinions, resulting in the formation of the two strongly connected communities. Such polarised communities typically do not admit much information flow between them, hence members of such communities are repeatedly exposed to similar narratives, which then further strengthens their existing beliefs. Such closed communities are also known as echo chambers, and they limit people's information space. The retweets tend to coalesce within communities, as has also been shown for Facebook comments (Nasim et al, 2013).

17 Labelling was conducted by the first two authors independently and then compared. Account labelling is available on request.
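A sketch of how such a retweet network can be assembled from the collected tweets, assuming standard Twitter v1.1 payloads in a JSONL file (the filename is illustrative). Conductance cutting (Brandes et al, 2007) is not available in NetworkX, so the partition itself would be computed with a separate tool; here we only build the graph and derive the indegree used to size nodes in Figure 4:

```python
# Sketch: directed, weighted retweet network (edge A -> B records that
# account A retweeted one of B's tweets).
import json
import networkx as nx

G = nx.DiGraph()
with open("arson_emergency.jsonl") as f:
    for line in f:
        tweet = json.loads(line)
        rt = tweet.get("retweeted_status")
        if rt:
            u = tweet["user"]["screen_name"]  # the retweeter
            v = rt["user"]["screen_name"]     # the original author
            w = G.get_edge_data(u, v, {"weight": 0})["weight"]
            G.add_edge(u, v, weight=w + 1)

# Weighted indegree reflects how often each account was retweeted.
indegree = dict(G.in_degree(weight="weight"))
```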
These two groups, Supporters and Opposers, and those users Unaffiliated with either group, are used to frame the remainder of the analysis in this paper. The relative behaviour of the communities over the collection period, shown in Figure 5, informs several key observations. The first is the impact of the story reaching the MSM: the peaks of both Opposer and Unaffiliated contributions are on the morning of Phase 3, immediately after the story appeared on the morning bulletins. Despite the much greater number of Unaffiliated accounts (11,782), their peak is only a little more than twice that of the 593 Opposer accounts. Unaffiliated and Supporter accounts are active during the entire collection, but Supporters' activity is prominent each day in Phase 3, and peaks on the second day of Phase 3. That peak might have occurred as a response to the previous peak, as by that time the news would have had a full day to disseminate around the world. By reaching a broader audience via the MSM, more Supporter accounts may have been drawn into the online discussion. Finally, a clear diurnal effect can be seen with daily peaks of activity occurring during Australian daytime hours, implying that the majority of the activity is domestic; analysis of the 'lang' field in the tweets 18 confirmed that over 99% of tweets used 'en' (English, 90.5%) or 'und' (undefined, 8.7%).

User behaviour on Twitter can be examined through the features used to connect with others and through content. Here we consider how active the different groups were across the phases of the collection, and then how that activity manifested itself in the use of mentions, hashtags, URLs, replies, quotes and retweets. In Phase 1, Supporters used #ArsonEmergency nearly fifty times more often than Opposers (2,086 to 43), which accords with Graham and Keller's findings that the false narratives were significantly more prevalent on that hashtag compared with others in use at the time (Stilgherrian, 2020; Graham and Keller, 2020). This use is roughly proportional to the number of tweets posted by the two groups, however (Table 2). Overall in that Phase, Supporters used 22 times as many hashtags as Opposers. In Phase 2, during the Australian night, Opposers countered with three times as many tweets as Supporters, using fewer hashtags, more retweets, and half the number of replies, demonstrating different behaviour to Supporters, who actively used hashtags in conversations. Manual inspection and content analysis confirmed this to be the case. This is evidence that Supporters wanted to promote the hashtag as a way to promote the narrative. Interestingly, Supporters, having been relatively quiet in Phase 2, responded strongly, producing 64% more tweets in Phase 3 than Opposers. They used proportionately more of all interactions except retweeting, including many more replies, quotes, and tweets spreading the narrative with multiple hashtags, URLs and mentions. In short, Opposers tended to rely more on retweets, while Supporters engaged directly and were more active in the longer phases.

Overall, as shown in the bottom section of Table 2, Supporter accounts tweeted much more often than other accounts, and used more hashtags, mentions, quotes, replies and URLs, but retweeted less often than both Opposers and Unaffiliated accounts. This suggests that Supporters were generating their own content (not just retweeting it) and attempting to engage with others through the use of platform features, implying a high degree of motivation on their part.
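Counts like those in Table 2 can be tallied directly from the tweet payloads; a minimal sketch, assuming v1.1 payloads and a hypothetical affiliation mapping derived from the community labelling above (the empty dict is a placeholder):

```python
# Sketch: per-group counts of tweets, hashtags, mentions, URLs, retweets,
# quotes and replies, from v1.1 tweet objects.
import json
from collections import Counter, defaultdict

affiliation = {}  # hypothetical: user id_str -> 'Supporter' | 'Opposer'
counts = defaultdict(Counter)  # group -> feature counts

with open("arson_emergency.jsonl") as f:
    for line in f:
        t = json.loads(line)
        g = affiliation.get(t["user"]["id_str"], "Unaffiliated")
        counts[g]["tweets"] += 1
        counts[g]["hashtags"] += len(t["entities"]["hashtags"])
        counts[g]["mentions"] += len(t["entities"]["user_mentions"])
        counts[g]["urls"] += len(t["entities"]["urls"])
        counts[g]["retweets"] += "retweeted_status" in t
        # retweets of quote tweets also carry is_quote_status, hence the exclusion
        counts[g]["quotes"] += (t.get("is_quote_status", False)
                                and "retweeted_status" not in t)
        counts[g]["replies"] += t.get("in_reply_to_status_id") is not None
```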
Figure 7 shows Supporters and Opposers using these interactions, engaging with each other and others in the network. Supporters are particularly tightly and centrally clustered in the mention network, which is a reflection of their attempts to actively engage directly (rather than only indirectly, such as with hashtags). They are more diffusely located in the reply network and the quote network, sharing similar network positions to Unaffiliated accounts. This is less to do with the amount of activity (i.e., the number of replies or tweets) and more to do with how they connect with others. The Opposer accounts that appear in the networks are not as centrally located nor as tightly clustered.

Fig. 7 The largest connected components from directed, weighted networks built from (a) the replies, (b) the mentions, and (c) the quotes, linking from one account to another when it replied to, mentioned, or quoted the other, laid out by extracting the quadrilateral Simmelian backbone (Serrano et al, 2009; Nocaj et al, 2014). Edges are sized by weight, indicating the frequency of connections, and coloured by source node affiliation. Thicker edges have greater weight. Nodes are sized by outdegree (indicating the replies, mentions and quotes they used) and coloured by affiliation: red nodes are Supporters, blue are Opposers, and green are Unaffiliated. The replies component has 1,580 nodes and 2,308 edges, the mentions component has 2,984 nodes and 5,670 edges, and the quotes component has 915 nodes and 1,230 edges.

To provide a more objective analysis of the structural properties of these networks and the accounts within them, we employ a variety of centrality measures and k-core analysis. We also use the assortativity coefficient and a variation of Krackhardt and Stern's E-I Index (Krackhardt and Stern, 1988) as measures of homophily. Centrality measures provide an indication of the importance of a node within a network, the k-core of a node describes how deeply embedded it is within its network based on its connectivity, 19 and assortativity measures the degree to which accounts in the same groups connect to each other (i.e., their degree of homophily). The reader is referred to Newman (2010) for an introduction to these concepts. The E-I Index is a simple ratio of the internal edges that connect members of a labelled group to each other, I, compared with the external edges connecting to nodes outside the group, E:

E-I Index = (E − I) / (E + I)

Both the assortativity coefficient and the E-I Index lie within [−1, 1], but the values are reversed: high assortativity coefficients and low E-I indices indicate highly homophilous networks in which nodes connect mostly with others in the same group. Our E-I Index implementation addresses both the availability of edge weights 20 and imbalances in the size of the polarised groups of interest. It does this by summing the weights of edges (rather than just their number) and then normalising the sums, so that what is considered is the proportion of the edge weight sum that connects outside the group compared to inside the group. We refer to this measure as the modified E-I Index in the remainder of this work.

19 "A k-core is a maximal subset of vertices such that each is connected to at least k others in the subset." (Newman, 2010, p.196)
20 Edge weights are ignored in the implementation of the E-I Index in the version of NetworkX (Hagberg et al, 2008) that we used, version 2.5, which is why we implemented our own.
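A minimal sketch of the modified E-I Index as described above (summed edge weights, normalised by their total), assuming a NetworkX graph and a node-to-group mapping; this illustrates the definition rather than reproducing the exact implementation used here:

```python
# Sketch: modified E-I Index over a weighted graph. Returns a value in
# [-1, 1]; -1 means all edge weight is internal (homophilous), +1 external.
import networkx as nx

def modified_ei_index(G: nx.Graph, groups: dict) -> float:
    """groups maps node -> group label; unlabelled pairs count as external."""
    internal = external = 0.0
    for u, v, data in G.edges(data=True):
        w = data.get("weight", 1.0)
        if groups.get(u) is not None and groups.get(u) == groups.get(v):
            internal += w
        else:
            external += w
    total = internal + external
    return (external - internal) / total if total else 0.0
```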
Though the location of Supporter and Opposer accounts in the networks in Figure 7 gives the impression that Supporters are more central in each network, the statistics presented in Table 3 facilitate a more nuanced interpretation. In the reply, mention and quote networks, Supporters and Opposers make up only a small fraction of the overall networks (shown as a percentage in the Nodes column). Supporter betweenness scores are much higher than Opposers' in the reply and mention networks, and even twice as high in the quote network (though still very low). Closeness scores are more weighted towards the Opposers, implying that even though they are not centrally positioned, they remain directly linked to more of the network than the Supporters. The mean degree centrality of Supporters is again higher than Opposers' for all networks, reflecting their tendency to directly reach out to a wider audience than Opposers, who relied mostly on retweets to disseminate their message. The eigenvector centrality scores are higher for Opposers in the reply and mention networks, suggesting they are more connected to important nodes in the network and perhaps were more efficient at selecting their interaction targets, while their lower score for the quote network probably reflects the fact that they used quotes far less (139 uses to Supporters' 789). The centrality scores suggest that the Opposers were less centrally located, but well connected, while Supporters were more centrally positioned (reflected in their relatively high betweenness scores).

k-core analysis. The question of how tightly clustered the nodes are can be addressed with k-core analysis. This analysis progressively breaks a network down to sets of nodes that have at least k neighbours, so nodes on the periphery are discarded first, while highly connected nodes form the 'core' of the network. The result is that the higher the k-core of a particular node (i.e., the highest k-core of which it is a member), the more embedded in the network it is. Figure 8 shows the proportions of each group's members (of those present in each network) in each core. We can immediately see that across all networks, more Supporters have higher k-core values than both Opposers and the Unaffiliated. In fact, while the majority of Opposers and Unaffiliated are on the periphery of the networks, Supporters are relatively evenly spread throughout the networks' cores. This implies that more of the Supporters were active in reaching out to many alters, something that is also reflected in their higher use of mentions, replies and quotes per account, as shown in Table 2.

Fig. 8 The distributions of k-core values for accounts in the reply, mention and quote networks. Nodes with higher k-core values are more deeply embedded in their network. The percentage refers to the proportion of each group's accounts with a given k-core value.

The homophily measures introduced above provide an indication of how insular the groups were in their interactions, and here we also apply them to the retweet network for comparison (Table 4).
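The k-core distributions of Figure 8 can be reproduced with NetworkX's core_number; a sketch, using the same hypothetical affiliation mapping as before:

```python
# Sketch: per-group distribution of k-core values. core_number assigns each
# node the largest k for which it belongs to the k-core.
import networkx as nx
from collections import Counter, defaultdict

def kcore_distribution(G, groups: dict) -> dict:
    H = nx.Graph(G)  # undirected view; k-cores are defined on undirected ties
    H.remove_edges_from(nx.selfloop_edges(H))  # core_number disallows self-loops
    dist = defaultdict(Counter)
    for node, k in nx.core_number(H).items():
        dist[groups.get(node, "Unaffiliated")][k] += 1
    return dist
```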
As expected, the vast majority of edges involving Supporters and Opposers in the retweet network are homophilic, leading to a very low E-I Index and a very high assortativity coefficient. Even when the broader network is introduced (i.e., re-introducing all Unaffiliated nodes), the E-I Index remains very low, dominated by the polarised groups. Polarisation is maintained between Supporters and Opposers in the other interaction networks, but to a lesser degree, with the most separation observed in the quote network, again drawing our attention to the fact that Supporters used quotes more than Opposers. It is immediately clear that, outside of retweets, Supporters were much more active, and were biased towards connecting to members of their own group. The degree of activity is notable because, although the two groups were of similar size, there were fewer Supporters (497) than Opposers (593). Opposers were also heavily biased to connect to other Opposers via replies and quotes, but not so for mentions. The proportional view makes the bias in connectivity clear: while the raw numbers of interactions from Opposers may be low, they strongly preferred to connect to themselves, while Supporter bias is less pronounced for mentions, replies and quotes, despite their much higher raw numbers of interactions. Figure 9a shows raw counts of interactions, while Figure 9b shows the proportions of interactions from each source group to each target group.

The concentration of the narrative among certain voices also requires attention. To consider this, Table 5 shows the degree to which accounts were retweeted by the different groups, by phase and overall. Unaffiliated accounts relied on a smaller pool of accounts to retweet than both Supporters and Opposers in each phase and overall, which is reasonable to expect given the far greater number of Unaffiliated accounts, and the majority of the accounts the Unaffiliated retweeted most were Opposers. Thus Supporters and Opposers made up the majority of the most retweeted accounts, and arguably influenced the discussion more than Unaffiliated accounts.

When contrasting the content of the two affiliated groups, we considered the hashtags and external URLs used. A hashtag can provide a proxy for a tweet's topic, and an external URL can refer a tweet's reader to further information relevant to the tweet; therefore tweets that use the same URLs and hashtags can be considered related. To discover how hashtags were used, rather than simply which were used, we developed co-mention networks (visualised in Figure 10). In these networks, each node is a hashtag in its lower case form, sized by degree centrality; edges represent an account using both hashtags (not necessarily in the same tweet); and the edge weight represents the number of such accounts in the dataset. Nodes are coloured according to the affiliation of the accounts that used them. We removed the #ArsonEmergency hashtag (as nearly every tweet in the dataset contained it) as well as edges having weight less than 5. Opposers used a smaller set of hashtags, predominantly linking #AustraliaFires 21 with #ClimateEmergency and a hashtag referring to a well-known publisher. In contrast, Supporters used a variety of hashtags in a variety of combinations, mostly focusing on terms related to 'fire', but only a few with 'arson' or 'hoax', and linking to #auspol and #ClimateEmergency. Manual inspection of Supporter tweets found many containing only a string of hashtags, unlike the Opposer tweets.

21 Capitals are re-introduced to hashtags used in the discussion for readability.
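The co-mention networks of Figure 10 can be constructed as follows; a sketch, assuming v1.1 payloads, which (as above) drops #ArsonEmergency and edges of weight less than 5:

```python
# Sketch: hashtag co-mention network. Two (lowercased) hashtags are linked
# with weight equal to the number of accounts that used both, not
# necessarily in the same tweet.
import json
from collections import defaultdict
from itertools import combinations
import networkx as nx

tags_by_account = defaultdict(set)
with open("arson_emergency.jsonl") as f:
    for line in f:
        t = json.loads(line)
        for h in t["entities"]["hashtags"]:
            tags_by_account[t["user"]["id_str"]].add(h["text"].lower())

C = nx.Graph()
for tags in tags_by_account.values():
    tags.discard("arsonemergency")  # present in nearly every tweet
    for h1, h2 in combinations(sorted(tags), 2):
        w = C.get_edge_data(h1, h2, {"weight": 0})["weight"]
        C.add_edge(h1, h2, weight=w + 1)

C.remove_edges_from([(u, v) for u, v, w in C.edges(data="weight") if w < 5])
```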
Notably, the #ClimateChangeHoax node has a similar degree to the #ClimateChangeEmergency node, indicating Supporters' skepticism of climate science, but perhaps also that Supporters were attempting to join or merge the discussion communities defined by those hashtags in order to pollute the predominant hashtag of the #ClimateChangeEmergency community with a counter-narrative (Woolley, 2016; Nasim et al, 2018).

Fig. 10 The hashtag co-mention networks: (a) Supporter hashtags. (b) Opposer hashtags.

The distribution of hashtag uses for the ten most frequently used by each group (which overlap but are not identical), omitting the ever-present #ArsonEmergency, is shown in Figure 11. It indicates that Opposers focused slightly more strongly on a small set of hashtags, while Supporters spread their use of hashtags over a broader range (and thus used even their most frequent hashtags less often than Opposers did). Unaffiliated accounts used their frequently used hashtags more often than both groups by the 4th hashtag, possibly due to the much greater number of accounts being active but less focused in their hashtag use. A second hashtag appeared in fewer than 20% of each group's tweets.

Fig. 11 Hashtag uses per tweet for the ten most used hashtags for Supporters, Opposers and the Unaffiliated, omitting #ArsonEmergency. Opposers used hashtags more frequently than Supporters, but after the second hashtag, Unaffiliated accounts used more than either polarised group.

Manual inspection of Supporter tweets revealed that many replies consisted solely of "#ArsonEmergency" (e.g., one Supporter replied to an Opposer 26 times in under 9 minutes with a tweet consisting of just the hashtag). This kind of behaviour, in addition to inflammatory language in other Supporter replies, suggests a degree of aggression, though aggressive language was also noted among Opposers. Tweets that included more than 5 hashtags made up only 1.7% of Opposer tweets, but 2.8% of Supporter tweets and 2.1% of Unaffiliated tweets. Further analysis of inauthentic behaviour is addressed in Section 4.2.

A statistical examination of how Supporters and Opposers used hashtags also revealed significant levels of homophily when considering only Supporters and Opposers, but less so when the hashtag use of Unaffiliated accounts was included. We created an account network by linking accounts that used the same hashtags, considering 'partisan' hashtags (those used by only one of the two groups) and omitting uses of the ten most common hashtags (cf. Figure 12). For accounts u and v, which used a set of hashtags {h1, h2, ..., hn} in common, where each account x used a hashtag h with a frequency of h_x, the weight of the undirected edge {u, v} between u and v is given by

w({u, v}) = Σ_{i=1..n} min((h_i)_u, (h_i)_v),

i.e., the sum, over the hashtags the two accounts share, of the lesser of their two usage frequencies. When we consider only the 14,777 edges between or within the Supporter and Opposer groups (excluding all edges to adjacent Unaffiliated accounts), the modified E-I Index falls to −0.964 with a corresponding assortativity coefficient of 0.966, which indicates the great majority of such edges were homophilic (i.e., within groups). Given we started with hashtags unique to each group, a degree of homophily is not surprising; however, these very strong results imply that not many of the co-occurring hashtags each group used overlapped either. These results are clearly evident in a visualisation of the network (Figure 12a).
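Under the edge-weight definition just given, the account network of Figure 12a can be assembled as follows; a sketch, assuming 'usage' maps each account to its per-hashtag frequencies (restricted, as above, to the partisan tweets):

```python
# Sketch: account network weighted by common hashtag use. For each hashtag
# both accounts used, the edge gains the lesser of their two frequencies.
from itertools import combinations
import networkx as nx

def hashtag_account_network(usage: dict) -> nx.Graph:
    """usage: account -> {hashtag: frequency}. O(n^2) pairs; fine as a sketch."""
    A = nx.Graph()
    for u, v in combinations(usage, 2):
        common = usage[u].keys() & usage[v].keys()
        if common:
            w = sum(min(usage[u][h], usage[v][h]) for h in common)
            A.add_edge(u, v, weight=w)
    return A
```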
Fig. 12 Two networks built from the tweets containing 'partisan' hashtags (omitting uses of the ten most common hashtags). (a) Accounts using partisan and co-occurring hashtags. Left: Supporter (red) and Opposer (blue) nodes are linked when they mention the same hashtag, and are laid out with a force-directed algorithm. Red edges connect Supporters, blue connect Opposers, while green edges connect across the groups. Edge width is proportional to edge weight. Isolates have been removed. Though some polarisation should be expected given the partisan hashtags provide a natural axis of polarisation, it is notable quite how little overlap there is in use of the co-occurring hashtags. (b) Partisan and co-occurring hashtags. Right: Supporter partisan hashtags (red), Opposer partisan hashtags (blue) and co-occurring hashtags (green) are linked when mentioned by the same account (potentially in different tweets). Nodes are laid out with the quadrilateral Simmelian backbone algorithm (Serrano et al, 2009; Nocaj et al, 2014), and edges are coloured according to their contribution (backbone strength). Edge widths represent edge weights. The separate clusters in the bottom left are included as they had been linked to the most common hashtags prior to their removal. The clusters apparent in the account network (left) are caused by the fact that partisan hashtags are rarely co-mentioned (right); instead they are clearly co-mentioned with a variety of distinct hashtags, implying that although Supporters and Opposers were polarised in their hashtag use, they also had distinct sub-communities within their discussions (using hashtags as a proxy for discussion topic).

Quickly returning to the network of hashtags co-mentioned in the partisan tweets, we can see the clusters in the account network (Figure 12a) are caused by the fact that the accounts rarely used multiple partisan hashtags together (otherwise there would be clusters of partisan hashtags); instead, whenever a tweet included a partisan hashtag, it also included one or a few of a variety of non-partisan hashtags, which are represented by clusters of green nodes in Figure 12b.

URLs in tweets can be categorised as internal or external. Internal URLs refer to other tweets in retweets or quotes, while external URLs are often included to highlight something about their content, e.g., as a source to support a claim. By analysing the URLs, it is possible to gauge the intent of a tweet's author by considering the reputation of the source or the argument offered. We categorised 23 the ten URLs most used by each of the Supporters, Opposers, and Unaffiliated accounts across the three phases, and found a significant difference between the groups. URLs were assigned to one of four categories:
NARRATIVE: Articles used to emphasise the conspiracy narratives by prominently reporting arson figures and fuel load discussions.
CONSPIRACY: Articles and web sites that take extreme positions on climate change (typically arguing against predominant scientific opinion).
DEBUNKING: News articles providing authoritative information about the bushfires and related misinformation on social media.
OTHER: Other web pages.

URLs posted by Opposers were concentrated in Phase 3 and were all in the DEBUNKING category, with nearly half attributed to Indiana University's Hoaxy service (Shao et al, 2016), and nearly a quarter referring to the original ZDNet article (Stilgherrian, 2020) (Figure 13a). In contrast, Supporters used many URLs in Phases 1 and 3, focusing mostly on articles emphasising the arson narrative, but with references to a number of climate change denial or right-wing blogs and news sites (Figure 13b).
To investigate whether coordinated dissemination of content was occurring, we performed co-retweet, co-hashtag and co-URL analyses (Weber and Neumann, 2020), searching for sub-communities of accounts that retweeted the same tweets, or shared the same hashtags, URLs, or URL domains, within the same timeframe (denoted by γ). Regarding the URLs, Figure 13 indicates the nature of the external links referred to, but not the distributions of the URLs or their domains, which is the aim of these co-activity analyses. The analyses result in weighted networks consisting of the sub-communities as disconnected components of accounts, the edge weights of which indicate the frequency of co-linking or of co-mentioning a hashtag. Further, to examine how the sub-communities relate to one another, we can then re-introduce the URLs and domains as explicit 'reason' nodes in these networks, making them bigraphs in which communities are joined according to these 'reason' nodes (Weber and Neumann, 2021).

The largest components of the co-retweet network (γ=1 minute), shown in Figure 15, demonstrate that the polarisation observed in the retweet network (Figure 4) is still evident, as expected, but what is particularly notable is the absence of tight cliques amongst the Supporter nodes, which, as promoters of the arson narrative, were originally thought to include a large proportion of bots (Stilgherrian, 2020; Graham and Keller, 2020). Cliques would indicate accounts all retweeting the same tweets within the same timeframe, a signal associated with automation, but also with high popularity (i.e., increasing the number of interested accounts increases the chance that they co-retweet accidentally). Cliques are visible amongst the 103 Opposers and many of the 966 Unaffiliated accounts (and could also be due to simple popularity and coincidence), but rare amongst the 233 Supporters. Instead, their connection patterns imply real people seeing and retweeting each other's retweets: for example, account A sees a tweet and retweets it, which is then seen by account B (within 1 minute), and then account C sees that and retweets it as well, but longer than 1 minute after A. A 1 minute window is quite large for the purposes of identifying botnets, so this would indicate a lack of evidence of retweeting bots amongst the Supporters.

A further item to note is the degree of support offered by the Unaffiliated accounts, which co-retweet with Opposer accounts far more frequently than with Supporter accounts in the coordination networks presented in Figure 15. This observation raises the question of whether some of the Unaffiliated accounts may, in fact, be Opposers that were simply not captured in the application of conductance cutting community detection to the retweet network; they may have been captured with modification of the detection parameters.
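A sketch of the co-retweet detection itself, pairing accounts that retweeted the same original tweet within γ of one another (γ = 1 minute here, as above); v1.1 payloads are assumed and the filename is illustrative:

```python
# Sketch: co-retweet coordination network. Edge weights count how often two
# accounts retweeted the same tweet within GAMMA of each other.
import json
from datetime import datetime, timedelta
from collections import defaultdict
from itertools import combinations
import networkx as nx

GAMMA = timedelta(minutes=1)
FMT = "%a %b %d %H:%M:%S %z %Y"  # v1.1 'created_at' format

retweeters = defaultdict(list)  # original tweet id -> [(time, account)]
with open("arson_emergency.jsonl") as f:
    for line in f:
        t = json.loads(line)
        rt = t.get("retweeted_status")
        if rt:
            when = datetime.strptime(t["created_at"], FMT)
            retweeters[rt["id_str"]].append((when, t["user"]["id_str"]))

CR = nx.Graph()
for events in retweeters.values():
    events.sort()  # pairs below are then ordered, so t2 >= t1
    for (t1, a), (t2, b) in combinations(events, 2):
        if a != b and t2 - t1 <= GAMMA:
            w = CR.get_edge_data(a, b, {"weight": 0})["weight"]
            CR.add_edge(a, b, weight=w + 1)
```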
As using a hashtag in a tweet can increase its reach to observers of the hashtag as well as one's followers, coordinated promotion of a hashtag is a mechanism to disseminate one's message (Varol et al, 2017), as well as to pollute a discussion space (Woolley, 2016; Nasim et al, 2018). Given how frequently hashtags are used, we chose a tight timeframe of 1 minute and excluded #ArsonEmergency from our co-hashtag analysis. The two largest components discovered highlight the polarisation between the Supporter and Opposer communities (Figure 16). The ring formation amongst the Supporters and their small node sizes indicate less activity spread across a wider variety of hashtags; Opposers were more active and more focused in the hashtags they used. These findings reinforce those of Section 3.3.1 but also highlight the support of Unaffiliated accounts, the most active of which appear to support the Opposers.

Fig. 16 The two largest connected components of the co-hashtag coordination network (γ=1 minute, excluding #ArsonEmergency), with nodes sized by the number of tweets they posted in the discussion. Red nodes are Supporters, blue are Opposers, green are Unaffiliated, and edge widths are sized by the frequency of co-hashtag activity.

For human users, grassroots-style coordinated co-linking should be visible in 'human' timeframes, such as within 10 minutes, allowing time for users to see each other's tweets. The polarisation evident in the retweet network is also evident in the co-linking networks (γ=10 minutes) shown in Figure 17, especially when considering only the Supporter and Opposer networks (Figure 17a). When we examine the co-linking in context in Figure 17b, along with the contributions of Unaffiliated accounts, we can see that, again, Unaffiliated accounts co-acted with Opposer accounts far more often than with Supporters, which appear relatively isolated compared with the concentrated co-linking in the Opposer/Unaffiliated clusters on the right.

Fig. 17 The coordination networks resulting from co-URL analysis (γ=10 minutes), with nodes sized by indegree: (a) the co-URL coordination network including only Supporters (in red) and Opposers (in blue); (b) the co-URL coordination network laid out according to the network's quadrilateral Simmelian backbone (Serrano et al, 2009; Nocaj et al, 2014). Red circular nodes are Supporters, blue are Opposers, and the green remainder are Unaffiliated accounts. Edge width and darkness indicate the frequency of co-linking.

Here, cliques represent groups of accounts sharing the same URLs, but it is unclear whether each clique represents a different URL or simply a different time window. To consider that, we need to introduce 'reason' nodes, representing the shared URLs, to create account/URL bigraphs. Figure 18 shows the resulting account/URL bigraph, which includes annotations indicating the websites hosting the most shared articles (referred to by the URLs). As expected, there is clear polarisation around the URLs, but it is also immediately clear how focused the Opposer accounts were on a small number of URLs, similar to their use of hashtags. The blue Opposer nodes link mostly to three URLs: the original ZDNet article (Stilgherrian, 2020), the Hoaxy website, and an article in The Guardian relating to online misinformation during the bushfires. 24

Fig. 18 The account/URL bigraph resulting from co-URL analysis (γ=10 seconds), annotated with the websites hosting highly shared articles. Pale green triangular nodes are the URLs, sized by indegree. Red circular nodes are Supporters, blue are Opposers, and the green remainder are Unaffiliated accounts.
The Supporter community's use of URLs is more dispersed, and includes MSM sites, with the addition of a large cluster of Supporters and Unaffiliated accounts around an article on The Daily Chrenk, the website of an Australian blogger promoting the arson narrative. It is notable that two Australian Broadcasting Corporation (ABC) articles are so centrally located amongst the Supporters, as these were classified as DEBUNKING articles. When we consider the co-domain bigraph (Figure 19), however, it is clear that the ABC domain binds the polarised Supporter and Opposer communities together, along with, interestingly, The Guardian and the URL shortener bit.ly. One bit.ly link appeared much more frequently than others, and it resolved to a Spanish news article on online bushfire misinformation. 25

Fig. 19 The account/domain bigraph resulting from co-domain analysis (γ=10 seconds), annotated with the websites hosting highly shared articles. Pale green triangular nodes are the URL domains, sized by indegree. Red circular nodes are Supporters, blue are Opposers, and the green remainder are Unaffiliated accounts. Two zones of contrasting, highly linked-to domains are highlighted: one primarily used to support the arson narrative, and one primarily used to debunk it.

Highlighted in the co-domain bigraph are two zones of domains that appear mostly linked to one or the other of the Supporter and Opposer nodes, which, again, appear polarised in the network. The domains in these zones align with the URL categories, with Opposers referring to domains hosting DEBUNKING URLs and Supporters referring to domains hosting NARRATIVE URLs. A few domains are referred to very frequently by individual nodes (visible as dark, large edges), and these are often social media sites, such as YouTube, Instagram, and Facebook. The analyses of this variety of co-activities emphasise that the polarisation observed in the retweet network permeates the groups' collaborative efforts. The evidence indicates that Opposers engaged in apparently coordinated action much more than Supporters did; however, given the significant contribution of Unaffiliated accounts, it is unclear whether this was deliberate or merely a reflection of high popularity (especially given the considerably greater number of Unaffiliated accounts active in the discussion).

Fig. 20 The self-reported locations of Supporter, Opposer and Unaffiliated accounts. The number in brackets indicates how many accounts were evaluated. The Miscellaneous category was used for locations which described a physical location but were vague, e.g., Earth, whereas Other was used for whimsical entries, e.g., "Wherever your smartphone is." or "Spot X".

Given the global effect of climate change, any prominent contentious discussion of it is likely to draw in participants from other timezones. Although the activity patterns in Figure 1 indicate the majority of activity aligned with Australian timezones, a deeper analysis of the self-reported account 'location' fields in tweets revealed that only 88% of active 26 participants were Australian (Figure 20). (Tweets can contain geolocation information but rarely do: only 127 tweets in the 'ArsonEmergency' dataset had any geolocation information, and 114 of those were posted in Australia.) Based on the self-reported locations, more Supporters declared locations outside Australia (23%) than Opposers (11%), but the biggest proportion of non-Australian participants were Unaffiliated, perhaps drawn in by the international news. It is unclear whether the international accounts were drawn in to aid the Supporters or the Opposers in Phase 3, but we know the articles the Unaffiliated shared changed to DEBUNKING in that Phase, and that Unaffiliated accounts appeared to coordinate with Opposers. More detail can be found in Appendix A.

The analysis reported in ZDNet (Stilgherrian, 2020) indicated widespread bot-like behaviour by using the tweetbotornot 27 R library.

26 We considered all Supporters, Opposers, plus all Unaffiliated accounts that tweeted at least three times, and who populated the field.
27 https://github.com/mkearney/tweetbotornot
Our analysis had two goals: 1) attempt to replicate Graham and Keller's findings in Phase 1 of our dataset; and 2) examine the contribution of bot-like accounts detected in Phase 1 in the other phases. Specifically, we considered the questions:
• Does another bot detection system find similar levels of bot-like behaviour?
• Does the behaviour of any bots from Phase 1 change in Phases 2 and 3?
We evaluated 2,512 or 19.5% of the accounts in the dataset using Botometer (Davis et al, 2016), including all Supporter and Opposer accounts, plus all Unaffiliated accounts that posted at least three tweets either side of Graham and Keller's analysis reaching the MSM (i.e., the start of Phase 3). Botometer evaluates accounts relying on over a thousand features drawn from six categories, and provides a structured analysis report of an account, rating various of its features for 'botness'. The report includes a "Complete Automation Probability" (CAP), a Bayesian-informed probability that the account in question is "fully automated", as well as a rating that assumes an account is English-speaking, which is different from the language-agnostic rating. This does not accommodate hybrid accounts (Grimme et al, 2018) and only uses English training data (Nasim et al, 2018), leading some researchers to use conservative ranges of CAP scores for high confidence that an account is human (<0.2) or bot (>0.6) (e.g., Rizoiu et al, 2018). We adopt that categorisation. Table 6 shows that the majority of accounts were human and contributed more than any automated or potentially automated accounts. The distributions of English and CAP scores, for all tested accounts overall and in Phase 1 only (when few Opposers were active), and separately for Supporters and Opposers, are shown in Figure 21. The distributions appear broadly similar across these groupings; a t-test indicated, however, that the overall and Phase 1 score distributions do not share the same mean (p < 0.05 for both the CAP and English scores), while a Mann-Whitney test found insufficient evidence that the Supporter and Opposer score distributions differ (p > 0.05 for both scores). The contrast between these results and the reported findings (Stilgherrian, 2020) is likely due to a number of factors, but the primary one is differences in our datasets. Graham and Keller used the collection tool Twint (which avoids the Twitter API and instead uses the Twitter web user interface (UI) directly) to focus on results from Twitter's web UI when searching for #ArsonEmergency. Only 812 tweets appeared in both datasets, and even those were restricted to Phase 1. Of the 315 accounts in common, 100 were Supporters and 5 were Opposers, implying that those Supporter accounts had already been flagged by misinformation researchers as having previously engaged in questionable behaviour. The size of our dataset and the greater number of accounts we tested is likely to have skewed our Botometer results towards typical users. There are also differences between the bot analysis tools: Botometer's CAP score is focused on non-hybrid, English accounts, whereas tweetbotornot may provide a more general score, taking into account troll-like behaviour.
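For concreteness, the conservative CAP categorisation adopted above amounts to simple thresholding. A minimal sketch, using made-up scores rather than real Botometer output:

```python
# Conservative CAP-score categorisation: human < 0.2, bot > 0.6,
# anything in between treated as uncertain. Scores below are illustrative.
def categorise(cap_score: float) -> str:
    """Map a Botometer CAP score to a conservative label."""
    if cap_score < 0.2:
        return "likely human"
    if cap_score > 0.6:
        return "likely bot"
    return "uncertain"

for account, cap in [("acct_a", 0.05), ("acct_b", 0.45), ("acct_c", 0.83)]:
    print(account, categorise(cap))
# acct_a likely human
# acct_b uncertain
# acct_c likely bot
```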
The content and behaviour analysis discussed above certainly indicates Supporters engaged more with replies and quotes, consistent with other observed trolling behaviour (Mariconti et al, 2019) or "sincere activists" (Starbird and Wilson, 2020). Follow-up work by Graham and Keller's research group has focused on such "activists", finding that they appeared to coordinate their activities with prominent public figures and media outlets as part of a broader and longer-running disinformation campaign spanning the months surrounding the period we have focused on. Finally, it should be noted that, at the time of writing, the tweetbotornot library has been replaced with a new version in a completely separate library, tweetbotornot2 (https://github.com/mkearney/tweetbotornot2), in which the bot rating system has been changed and is now more conservative. In this way, the original findings in January 2020 may have been an artifact of the original implementation; however, the polarised communities discovered since are certainly real and worthy of study.

Deeper analysis of the most bot-like accounts (those with a Botometer (Davis et al, 2016) CAP rating of 0.8 or more) revealed that the kinds of bot-like accounts present in each community differed significantly in a few primary respects (see Table 7). For convenience, we will refer to these accounts as "bots", but given they all present as genuine human users, they all also qualify as "social bots" (Cresci, 2020) and therefore are likely to be tools for influence. The accounts were re-examined in October, 2020, and screenshots taken of their Twitter profiles (see Figures 22 and 23; Supporter bot 2's account had been deleted, so a mock-up based on its last known tweet in the ArsonEmergency corpus is presented in Figure 22b). Two of the Supporter accounts appear to be American supporters of US President Donald Trump, while the third presents as an Australian indigenous woman from Tasmania who is also an active Trump supporter. The Opposer accounts include one with very little personal detail, mentioning only a hashtag for decentralised finance (a field of cryptocurrency in which blockchain technology is used to avoid financial institutions in transactions; https://theconversation.com/decentralised-finance-calls-into-question-whether-the-crypto-industry-can-ever-be-regulated-151222) in its description, and one that presents as a left-wing individual.

Fig. 22 Supporter accounts with a Botometer rating higher than 0.8, implying a high degree of bot-like traits: (a) Supporter bot 1; (b) Supporter bot 2, which was suspended (this mock-up is based on data from the collection); (c) Supporter bot 3. Personal details have been obscured. Screenshots of accounts were obtained in mid October, 2020. One account was found to have been deleted when checked in December, 2020.

Together, the five accounts contributed 81 tweets over the 18-day collection period, 73 by the Supporters (including 59 from Bot 3) and 4 each from the Opposer bots. This suggests they had very limited opportunity to have an impact on the discussion. All accounts had been active for at least eighteen months, up to a maximum (at the time of the collection) of nearly four years. The variations in posting rates highlight the fact that Botometer's ensemble classifier will catch accounts that do not have high posting rates (e.g., Opposer bot 2 posted only approximately 25 tweets per year, but had been suspended by December, 2020).
The reputation score, defined as the ratio of an account's follower count to the sum of its follower and friend counts, is a measure considered desirable enough to be worth manipulating through follower fishing (Dawson and Innes, 2019), yet even the bots' reputation scores are not very different (other than that of Opposer bot 2, which seems to be a rarely used account). In fact, the primary distinction between the Supporter and Opposer bots is the magnitude of their friend and follower counts: Supporter bots had an average of 18.8k friends and 18.5k followers, compared with Opposer bots' averages of 512.5 friends and 276 followers. By October, 2020, over nine months later, the two remaining Supporter bots, bots 1 and 3, had increased their friend and follower counts significantly: bot 1 had 1.7k more friends and 1.1k more followers, while bot 3 had 14.5k more friends and 13.4k more followers (count changes are in thousands, as the figures were obtained from the profile screenshots). Over the same period, bot 1 had posted another 36.3k tweets (a 77% increase, at more than 130 tweets per day) and bot 3 had posted another 157.3k tweets (a 45% increase, at nearly 600 tweets per day). Bots 1 and 3 had been created 6 days apart and, in January, 2020, both had been running for just over three years. In contrast, Opposer bot 1 had lost one follower and reduced the number of accounts it followed by 9, but added just over 10k tweets (approximately 37 tweets per day), while Opposer bot 2 had increased the accounts it followed by 148%, added one follower and posted only 25 tweets.

It is not clear why these accounts are so different. It is possible these accounts are, in fact, merely highly motivated people, who spend a significant amount of time curating their Twitter feeds to include material they prefer and then retweet almost everything they see to simply promote their preferred narrative. This accords with recent observations that Twitter increasingly consists of retweets of official sources and celebrities and tweets with URLs, and that, rather than being a town square of public discussion, it should be treated as an "attention signal", which highlights the "stories, users and websites resonating" at a given time (Leetaru, 2019). These accounts appear driven to amplify that "attention signal" for ideological reasons, for the most part (Opposer bot 2's tweeting motivations are unclear). What also stands out is that the Supporter bots differ distinctly from the rest of the Supporter community, which relied much less on retweets than the Opposer community did.

Figure 24 shows the activity patterns for the Supporter and Opposer bot accounts, and also for the 15 Unaffiliated accounts that had been suspended by the time the bot analysis was conducted (at the end of January 2020). The Opposer contribution is small and occurs in Phase 2 and the first day of Phase 3, clearly responding to the MSM news, while the Supporter bots are active in the lead-up to Phase 2 and well into Phase 3, engaging in the ongoing discussion, though their activity patterns indicate that if they are bots tweeting frequently, then their tweets mostly avoided using #ArsonEmergency (and thus were not captured in our collection). The Unaffiliated accounts are also mostly active only on the day the story reached the MSM and the following day, and their contribution was limited to only 32 tweets. Aggressive language was observed in both Supporter and Opposer content, but the hashtag and mention use provide the most insight into potential inauthentic behaviour (Weedon et al, 2017).
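To make the arithmetic above concrete, the following sketch computes the reputation score and a daily posting rate from two profile snapshots. The snapshot dates and all counts are illustrative, loosely resembling Supporter bot 1's reported changes; they are not data from the study.

```python
# Reputation = followers / (followers + friends); posting rate is the change
# in tweet count divided by the days between two profile snapshots.
from dataclasses import dataclass
from datetime import date

@dataclass
class Snapshot:
    when: date
    tweets: int
    friends: int
    followers: int

def reputation(s: Snapshot) -> float:
    return s.followers / (s.followers + s.friends)

def daily_posting_rate(earlier: Snapshot, later: Snapshot) -> float:
    return (later.tweets - earlier.tweets) / (later.when - earlier.when).days

# Hypothetical snapshots: ~36.3k added tweets over ~9 months (~77% increase).
jan = Snapshot(date(2020, 1, 17), tweets=47_000, friends=18_000, followers=17_500)
oct_ = Snapshot(date(2020, 10, 15), tweets=83_300, friends=19_700, followers=18_600)

print(f"reputation: {reputation(oct_):.2f}")               # ~0.49
print(f"tweets/day: {daily_posting_rate(jan, oct_):.0f}")  # ~133
```

A reputation near 0.5 means an account follows roughly as many accounts as follow it, which is why the friend/follower magnitudes, rather than the ratio itself, distinguished the Supporter and Opposer bots.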
Supporters used more hashtags and more mentions in tweets than Opposers in general (Table 2), and posted individual tweets with many more of each (the number of tweets with at least 14 hashtags or 5 mentions was 50), though a small proportion of Unaffiliated accounts used even more hashtags in their tweets (a maximum of 27). Supporters posted tweets consisting of only hashtags, mentions and a URL in various combinations (i.e., eschewing actual content) far more frequently than Opposers or Unaffiliated accounts, on a per-account basis, particularly in Phase 3 (see Table 8). Using hashtags and mentions in these numbers is a way to increase the reach of a message (though, ironically, it often leaves little space for the message itself), but can also be used to attack others or pollute hashtag-based discussion communities (Woolley, 2016; Nasim et al, 2018). In one notable instance, a Supporter account posted 26 highly repetitive tweets to an Opposer account within 9 minutes, including only the #ArsonEmergency hashtag in the majority of them (Figure 25). In six tweets, other accounts were mentioned, including prominent Opposer and Unaffiliated accounts, perhaps in the hope that they would engage by retweeting and thus draw in their own followers.

Our discussion addresses the research questions we posed in Section 1.5.

RQ1 Discerning misinformation-sharing campaigns. Analysis revealed two distinct polarised communities, each of which amplified particular narratives. The content posted by the most influential accounts in each of these communities shows Supporters were responsible for the majority of arson-related content, while Opposers countered the arson narrative, debunking the errors and false statements with official information from community authorities and fact-check articles. Prior to the release of the ZDNet article, the discussion on the #ArsonEmergency hashtag was dominated by arson-related content. In that sense, the misinformation campaign was most effective in Phase 1, but only because its audience was small. Once the audience grew, as the hashtag received broader attention, the conversation became dominated by the Opposers' narrative and related official information.

RQ2 Differences in the spread of information across phases and other discussions. We regarded URLs and hashtags as proxies for narrative and studied their dissemination, finding distinct differences between the groups and their activity in different phases. In Phase 1, only Supporters and Unaffiliated accounts shared URLs, the most popular of which were in the NARRATIVE category, but by the third Phase, the most popular URLs shared were DEBUNKING in nature by a ratio of 9 to 1, and NARRATIVE URLs were shared only by Supporter accounts. Although it is unclear whether this change in sharing behaviour was due to changes in opinions or the influx of new accounts, there was certainly a changing of the guard: of the 2,061 accounts active in Phase 1, fewer than 40% (787) remained active in the later phases. The #ArsonEmergency discussion's growth rate followed a similar pattern to a related hashtag that appeared around the same time (#AustraliaFire), but it was clearly different from that of a well-established discussion (#brexit).

RQ3 Behavioural differences over time and the impact of media coverage.
Supporters were more active in Phases 1 and 3 and used more types of interaction than Opposers, especially replies and quotes, implying a significant degree of engagement, whether as trolls or as "sincere activists" (Starbird and Wilson, 2020). The ZDNet article (Stilgherrian, 2020) also affected activity, spurring Opposers and others to share the analysis it reported.

RQ4 Position of communities in the discussion network. Supporter efforts to engage with others in the discussion resulted in them being deeply embedded in the discussion's reply, mention and quote networks and having correspondingly high centrality values. Our k-core analysis showed they were evenly distributed throughout the networks, from the periphery to the cores. Despite Opposers staying more on the periphery of the networks, they maintained high closeness and eigenvector centrality scores, meaning they stayed connected to more of the network than Supporters, and certainly to more important nodes in the network. This may imply that Supporters, though highly connected, did not connect as efficiently as Opposers for the purpose of spreading their narrative. Both Opposer and Supporter groups were highly insular with respect to each other across a variety of network analyses, but they connected strongly to the broader community according to E-I indices and assortativity.

RQ5 Content dissemination and coordinated activity. Analyses of hashtag and URL use revealed further evidence of the gap between Supporters and Opposers, not just in terms of connectivity, as discussed above, but also in terms of narrative. Supporters used a variety of hashtags to reach greater audiences, to disrupt existing communication channels, or to otherwise harass. In doing so, they exhibited less evidence of coordination than Opposers, who were focused in both the hashtags and URLs they used, supported by or in concert with the much greater number of Unaffiliated accounts. Analysis of co-activities (namely co-retweeting, and co-URL and co-hashtag instances) suggested a lack of botnets in the discussion and that some Unaffiliated accounts and Opposers were coordinating their URL sharing, appearing together in cliques that are often attributed to automation (e.g., Pacheco et al, 2020). The apparent coordination could, however, be attributed to high levels of popularity driven by increased activity in Phase 3 (i.e., coincidence due to high numbers of discussion participants), and the co-activities of Supporters indicated the presence of genuine human users more than any automated coordination. Further analysis using account/URL bigraphs showed that Opposers and Unaffiliated accounts were focused on sharing a small set of URLs, compared with Supporters' greater variety. These findings imply that the Supporter community members, for all their attempts to engage with others via replies, mentions and hashtags, and despite becoming deeply embedded in the interaction networks, remained relatively isolated from a narrative perspective.

RQ6 Support from non-Australian accounts. Based on manual inspection of accounts' free-text 'location' fields, the Supporter group included more non-Australian accounts than the Opposers, with the greatest number of non-Australian accounts being Unaffiliated with either; the vast majority of all groups, however, indicated they were located in Australia (>70%). Despite the large number of Unaffiliated accounts present in Phase 1 (1,680), the majority joined the discussion in Phase 3, likely bringing in the majority of non-Australian accounts.
Investigations of content dissemination also revealed that Opposers received the majority of Unaffiliated support, resulting in a shift from a majority of narrative-aligned article shares in Phase 1 to a majority of debunking article shares in Phase 3, so it is possible that this also included non-Australian support. Given most accounts do not report their location, and locations have not been verified, this conclusion remains speculative.

RQ7 Support from bots and trolls. We found very few bots and their impact was limited: only 0.8% (20 of 2,512) had a Botometer (Davis et al, 2016) CAP score above 0.6, while 96.6% (2,426) were highly likely to be human (CAP < 0.2). In contrast, Graham and Keller had found many more bots (46%) and fewer humans (< 20%) in their smaller sample (Stilgherrian, 2020; Graham and Keller, 2020). The affiliated 'bot' accounts, on closer examination, may not all have been automated, but the ones with bot-like posting rates could certainly be classed as 'social bots' (Cresci, 2020) given their appearance as genuine human users. Aggressive language was observed in both affiliated groups, but troll-like tweet text patterns including only hashtags, mentions and URLs (i.e., without content terms) were employed far more often by Supporters. Distinguishing deliberate baiting from honest enthusiasm (even with swearing) is non-trivial (Starbird and Wilson, 2020), but identifying targeted tweets lacking content is a more tractable approach to detecting inauthentic and potentially malicious behaviour.

Further research is required to examine the dynamic aspects of the social and interaction structures formed by groups involved in spreading misinformation, to learn more about how to better address the challenge they pose to society. Future work will draw more on social network analysis based on interaction patterns and content (Bagrow et al, 2019), as well as developing a richer, more nuanced understanding of the Supporter community itself, including revisiting the polarised accounts over a longer time period and consideration of linguistic differences. A particular challenge is determining a social media user's intent when they post or repost content, which could help distinguish between disinformation intended to deceive and merely biased presentation of data or misinformation that aligns with the user's worldview.

The study of polarised groups, their structure and their behaviour, during times of crisis can provide insight into how misinformation can enter and be maintained in online discussions, as well as provide clues as to how it can be removed. The #ArsonEmergency activity on Twitter in early 2020 provides a unique microcosm in which to study the growth of a misinformation campaign before and after its public exposure. That exposure (Stilgherrian, 2020), and the subsequent associated MSM attention, are likely to have contributed to this countering effect, given the significant increase in discussion participants in Phase 3. This highlights the value in publicising research into misinformation promotion activities. We speculate that the communication patterns documented in this study could be communication strategies discoverable in other misinformation-related discussions, such as those relating to vaccine conspiracies (Broniatowski et al, 2018), COVID-19 anti-lockdown regulations (Loucaides et al, 2021), challenging election results (Scott, 2021; Ng et al, 2021) or QAnon (The Soufan Center, 2021), and could help inform the design and development of counter-strategies.

Supplementary information.
This paper includes appendices with further detail of analyses conducted. All data was collected, stored and analysed in accordance with Protocol H-2018-045, as approved by the University of Adelaide's human research ethics committee. The datasets collected and analysed during the current study (the identifiers of the tweets, as per Twitter's terms and conditions) are available at https://github.com/weberdc/socmed_sna. The code used in this work is also available at https://github.com/weberdc/socmed_sna.

To learn more, we examined the 'location' field in the 'user' objects in the tweets. This is a free-text field users can populate as they wish, and it contains a great variety of information, not all of which is accurate, but the majority of populated fields are at least meaningful locations (88%). We manually coded the 'location' for each Supporter and Opposer account, and then the 'location' values that appeared more than once for the Unaffiliated accounts (Table A1). The majority of contributors in each group are from Australia, but the Supporters and Unaffiliated accounts included more non-Australian but English-speaking contributions than Opposers. The larger proportion of American and UK contributions in the Unaffiliated accounts may be due to an influx of highly-motivated users who joined the discussion after Graham's analysis (Stilgherrian, 2020) reached the MSM; it is thought that public debate over climate change is less settled in those countries. This is borne out by the increased number of unique Unaffiliated accounts in Phase 3.

Behaviour

Aggressive and profane language was observed in content posted by both Supporters and Opposers, but our observations include behaviour that could be regarded as inauthentic (Weedon et al, 2017), including trolling. We examined the frequency of hashtags and mentions appearing in tweets by Supporters, Opposers and the remainder of accounts, as well as identifying inflammatory behaviour through manual inspection. The 288 Supporters and 149 Opposers in the mention network mentioned their own community slightly more than they mentioned Opposers and Supporters, respectively, with 710 edges (E-I Index of −0.14). When Unaffiliated accounts are considered (resulting in a mention network of 3,206 nodes and 5,825 edges, a subset of the one shown in Figure 7b (main paper), which omits Unaffiliated-Unaffiliated edges), the combined E-I Index for Supporters and Opposers rises to 0.7, suggesting a clear preference to mention Unaffiliated accounts. An analysis of contemporaneous co-mentions also reveals that Supporter accounts mentioned the same accounts in quick succession much more frequently than Opposers did, but that one prominent Opposer account was mentioned by many other accounts (Figure B1). It is clear the highly mentioned Opposer is a target for other accounts, with many pairs of co-mentioners mentioning only the Opposer. A second (Unaffiliated) account is also highly mentioned, lying just below the Opposer account in the figure, though it appears to be mentioned more often by Supporter accounts, while the Opposer is more often mentioned by Unaffiliated accounts. The Opposer account is a prominent left-wing online personality mentioned more than 2,400 times in the dataset, while the Unaffiliated account had been suspended by the end of January 2020, just after the collection period, and was mentioned over 350 times in the dataset.
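For clarity, the E-I index used above compares a group's external ties (E) with its internal ties (I) as (E − I)/(E + I), ranging from −1 (entirely insular) to +1 (entirely outward-facing). A minimal sketch with a hypothetical edge list and affiliation map:

```python
# E-I index: (external - internal) / (external + internal) over a set of edges,
# given each node's group affiliation. Data below is illustrative only.
def e_i_index(edges, affiliation):
    """edges: iterable of (u, v) pairs; affiliation: dict node -> group label."""
    external = internal = 0
    for u, v in edges:
        if affiliation[u] == affiliation[v]:
            internal += 1
        else:
            external += 1
    return (external - internal) / (external + internal)

affiliation = {"s1": "Supporter", "s2": "Supporter", "o1": "Opposer", "o2": "Opposer"}
edges = [("s1", "s2"), ("o1", "o2"), ("s1", "o1")]
print(round(e_i_index(edges, affiliation), 2))  # (1 - 2) / 3 = -0.33
```

On this reading, the −0.14 reported above indicates slightly more within-community mentions than cross-community ones, while 0.7 indicates a strong preference for mentioning outside accounts.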
The largest Unaffiliated mentioning account (the circular green node on the right of the large connected component) appears to support the arson narrative and also promotes a number of QAnon-related hashtags (The Soufan Center, 2021).

Fig. B1 The account/mention bigraph resulting from a co-mention analysis, connecting accounts with black edges when they mentioned the same account within 60 seconds. Purple edges connect accounts with the accounts they mention, which are shown as triangles. Node colour indicates affiliation: red nodes are Supporters; blue nodes are Opposers; green nodes are Unaffiliated accounts; and yellow nodes are accounts that were mentioned but did not post a tweet in the dataset. Node size indicates the number of tweets they contributed to the corpus or, for mentioned accounts, their degree (reflecting the number of times they were mentioned).

Tweets that include many hashtags or mentions can stand out in a timeline, because the vast majority of tweets include very few, if any. By including many hashtags, a tweet may be seen by anyone searching by those hashtags, thereby increasing its potential audience. Including many mentions may be a way to draw other participants into an ongoing conversation, or at least to inform them of an opinion or other information. Figure B2 shows that all groups trended similarly, and that Supporters posted more tweets with many hashtags than Opposers did (although they tweeted nearly twice as often). Unaffiliated accounts used the most hashtags in tweets, with more than 100 Unaffiliated tweets including 19 or more hashtags. Given the great numbers of Unaffiliated accounts and tweets, these can be regarded as outliers (making up less than 1% of their contribution).

Supporters used many mentions more often than Opposers (Figure B3): Opposers used at most 5 mentions, and did so on fewer than 10 occasions, while Supporters did the same more than 50 times; in fact, Supporters used more than 5 mentions in 369 tweets. In a few tweets, 45 or more mentions appear; however, analysis of this phenomenon revealed that Twitter accumulates mentions from tweets that have been replied to. One reply tweet including 50 mentions was a simple reply in a reply chain that stretched back to 2018. Many replies in the chain had mentioned one or two other accounts, and these were then incorporated as implicit mentions in any replies to them. Unfortunately, from the point of view of the data provided by the Twitter API, it is unclear whether mentions in a reply are manually added by the respondent or included implicitly, as they simply appear at the start of the tweet text.

Although using many hashtags and mentions may expose inauthentic behaviour, trolling involves broad or direct attacks or simple provocation, and is exposed through use of platform features as well as the content of posts. Patterns of activity that appeared provocative included repetitions of tweets consisting of only:
• one or more hashtags;
• one or more hashtags and a trailing URL;
• one or more mentions with one or more hashtags; and
• one or more mentions with one or more hashtags and a trailing URL.
The frequencies of the occurrence of these text patterns in tweets by each group, in each phase and overall, are shown in Table 8 (main paper). The majority of these behaviours were present in Phase 3. Although Unaffiliated accounts certainly used some of these patterns, Supporters made much more use of them, particularly more than Opposers (Figure B4). A minimal sketch of detecting these patterns follows below.
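The sketch below approximates the combinations listed above with a single regular expression (it also admits mention-only tweets, a simplification); it is illustrative, not the study's implementation.

```python
# Detect 'content-free' tweets: only hashtags and/or mentions, with an
# optional trailing URL, and no other content terms.
import re

TOKEN = r"(?:#\w+|@\w+)"
CONTENT_FREE = re.compile(rf"^\s*{TOKEN}(?:\s+{TOKEN})*\s*(?:https?://\S+)?\s*$")

tests = [
    "#ArsonEmergency",                                # hashtag only
    "#ArsonEmergency #auspol https://example.org/x",  # hashtags + trailing URL
    "@someone #ArsonEmergency",                       # mention + hashtag
    "Fires near us today #ArsonEmergency",            # has content terms
]
for t in tests:
    print(bool(CONTENT_FREE.match(t)), repr(t))
# True, True, True, False
```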
Many of the instances of hashtags followed by a URL are quote tweets, where the URL is the link to the quoted tweet; these are attempts to disseminate the quoted tweet to a broader audience (engaged through the hashtags). Finally, inspection of the ten most retweeted tweet contributors revealed that three were Supporters, one was Unaffiliated, and the remainder were Opposers (including five of the top six).

As expected, the most prominently used hashtag for all communities was #ArsonEmergency; however, it is clear that there are other commonly occurring hashtags. Table C2 shows the top ten hashtags used by the Supporters, Opposers and Unaffiliated in each phase, as well as the number of tweets in which they appeared. In Phase 1, it is clear that the Supporters are trying to engage with existing climate change emergency discussion communities, as well as the media (#7news) and broader political discussion (#auspol). The few Opposer tweets seem to be poking fun at the discussion (e.g., #RelevanceDepravationEmergency, #PoliticalBSEmergency), while the Unaffiliated tweets are very broadly about the bushfires, though #ClimateChangeHoax is the third most used hashtag. In the brief Phase 2, Supporters appear to be more concentrated in their promotion of the arson narrative (using #ClimateCriminals and #ecoterrorism) into the #auspol political discussion. Opposers seem to focus almost exclusively on #ArsonEmergency rather than any other hashtags, while the Unaffiliated still follow, to some extent, the Supporters' lead with hashtags related to the arson narrative. Finally, in Phase 3, Supporters focus mostly on #ArsonEmergency itself, briefly blaming an environmental political party, referring to hoaxes, and even reversing the attack by accusing others of being #ArsonDeniers. Opposers are firmly focused on #ArsonEmergency but start referring to an individual prominent in the media industry commonly seen as advocating against dealing with climate change. By this stage, the Unaffiliated accounts are starting to follow the Opposers' lead, discussing emergency- and fire-related hashtags.

Table C2 The top ten hashtags used by the Supporters, Opposers, and Unaffiliated communities in each phase. Hashtags have been compared case-insensitively, in the same way Twitter does. The tag anon 1 in Phase 3 refers to the same redacted identity as in Figure 10b (main paper).

References

Benchmarking crisis in social media analytics: A solution for the data-sharing problem
Australia is not actually an evil dictatorship
Media Watch: News Corp's Fire Fight. Australian Broadcasting Corporation
Network Propaganda
Network Analysis: Methodological Foundations
Engineering graph clustering: Models and experimental evaluation
Fighting flat-Earth theory
Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate
ARC Centre of Excellence for Creative Industries and Innovation
Tools and methods for capturing Twitter data during natural disasters
Social cybersecurity: an emerging science
Political polarization on Twitter
A decade of social bot detection
Russian interference and influence measures following the 2017 UK terrorist attacks
Extracting inter-community conflicts in Reddit
BotOrNot: A system to evaluate social bots
How Russia's Internet Research Agency built its disinformation campaign
Representation and analysis of Twitter activity: A dynamic network perspective
Social media and its impact on crisis communication: Case studies of Twitter use in emergency management in Australia and New Zealand
Polarization on social media
Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight. The Conversation
Like a virus: the coordinated spread of coronavirus disinformation. Commissioned report, Centre for Responsible Technology
Changing perspectives: Is it sufficient to detect social bots?
Exploring Network Structure, Dynamics, and Function using NetworkX
Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change
Cyberwar
How to manipulate social media: Analyzing political astroturfing using ground truth data from South Korea
'Coordinated inauthentic behaviour' and other online influence operations in social media spaces. Presented at the Annual Conference of the Association of Internet Researchers
Informal Networks and Organizational Crises: An Experimental Simulation
False information on web and social media: A survey
Community interaction and conflict on the Web
You talkin' to me? Exploring human/bot communication patterns during riot events
Twitter users mostly retweet politicians and celebrities. That's a big change
RAPID: Real-time Analytics Platform for Interactive Data Mining
How Germany became ground zero for the COVID infodemic
Detecting coordinated behavior in the Twitter campaign to Reopen America. Presented at the Center for Informed Democracy & Social-cybersecurity annual conference
A synchronized action framework for responsible detection of coordination on social media
"You know what to do": Proactive detection of YouTube videos targeted by coordinated hate attacks
Analyzing polarization of social media users and news sites during political campaigns
What do retweets indicate? Results from user survey and meta-review of research
From Alt-Right to Alt-Rechts: Twitter analysis of the 2017 German federal election
On commenting behavior of Facebook users
Real-time detection of content polluters in partially observable Twitter networks
Networks: An Introduction
Coordinating narratives and the Capitol Riots on Parler. SBP-Brims Disinformation Challenge
Untangling hairballs - from 3 to 14 degrees of separation
NSW Bushfire Inquiry (2020) Final report of the NSW Bushfire Inquiry. State inquiry report
Unveiling coordinated groups behind White Helmets disinformation
The role and influence of socialbots on Twitter during the 1st 2016 U.S. Presidential debate. In: ICWSM. AAAI Press
Capitol Hill riot lays bare what's wrong with social media
Extracting the multiscale backbone of complex weighted networks
Hoaxy: A platform for tracking online misinformation
Anatomy of an online misinformation network
Likewar: The Weaponization of Social Media
Disinformation's spread: bots, trolls and all of us
Cross-Platform Disinformation Campaigns: Lessons Learned and Next Steps
Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations
Twitter bots and trolls promote conspiracy theories about Australian bushfires
Quantifying the Q Conspiracy: A Data-Driven Approach to Understanding the Threat Posed by QAnon. Special report
Pachinko prediction: A Bayesian method for event prediction from social media data
Early detection of promoted campaigns on social media
Echo chamber detection and analysis
Who's in the gang? Revealing coordinating communities in social media
Amplifying influence through coordinated behaviour in social networks
(2020) #ArsonEmergency and Australia's "Black Summer": Polarisation and misinformation on social media
Exploring the effect of streamed social media data variations on social network analysis
Information operations and Facebook. White Paper, Facebook
United States: Manufacturing consensus online
Automating power: Social bot interference in global politics

Acknowledgments. The authors acknowledge support from the Australian Research Council's Discovery Projects funding scheme (project DP210103700) and thank Graham and Keller for access to their datasets for comparison. The authors have no relevant funding, financial or non-financial interests to disclose.