key: cord-0179908-zlg2fs0a
authors: Fister, Iztok; Fister, Karin; Fister, Iztok
title: Discovering associations in COVID-19 related research papers
date: 2020-04-06
doc_id: 179908
cord_uid: zlg2fs0a

Abstract

The COVID-19 pandemic has already proven itself to be a global challenge. It shows how vulnerable humanity can be. It has also mobilized researchers from different sciences and different countries in the search for a way to fight this potentially fatal disease. In line with this, our study analyzes the abstracts of papers related to COVID-19 and coronavirus research using association rule text mining, in order to find the most interesting terms, on the one hand, and the relationships among them, on the other. Then, a method called information cartography is applied for extracting structured knowledge from the huge amount of mined association rules. On the basis of these methods, the purpose of our study is to show how researchers have responded to similar epidemic/pandemic situations throughout history.

1 Introduction

When we look at the COVID-19 Global Cases web site [5, 2] maintained by the Center for Systems Science and Engineering at Johns Hopkins University, we can observe with concern the extent of the pandemic, on the one hand, and the exponential increase in the number of infected people around the world, on the other. While the conditions have stabilized in China, the circumstances in Europe and the USA have become critical. The number of infected people in Italy, the pandemic's epicenter in Europe, had reached almost 60,000 at the time of writing (i.e., on 23 March 2020), while the number of deaths was quickly approaching 7,000.

Consequently, these circumstances have mobilized scientists from different domains around the world to try to find a way to throttle the coronavirus. These endeavors are not only the domain of researchers in medical laboratories, who are searching for a new vaccine, but also of the mass of researchers from various other scientific disciplines who are indirectly affected. Here, data scientists also play an important role.

The present study combines the Association Rule Text Mining (ARTM) [3] method with information cartography [4]. The former is a data mining method used to search for interesting terms and their mutual relations in the form of association rules. This method parses text documents and highlights the words that stand out according to appropriate measures. These words are called terms, while the relationships among them are described in the form of association rules. The latter is devoted to extracting structured knowledge from the huge amount of association rules generated in the first step. In line with this, the concept of information cartography is applied [11], which is capable of creating structured summaries of information and visualizing them in the form of metro maps. Just as a real metro map helps travelers understand their surroundings, a metro map of information helps users understand information landscapes [10]. Visualizations with metro maps can even tell stories to users and provide them with good directions. In essence, a metro map consists of a set of metro lines, where each metro line interprets the same story from a different aspect. Metro stops on these lines introduce salient pieces of information (i.e., a definite term), while the interrelations among these pieces carry the plot of the story.
Recently, this methodology has been applied to understanding information in many areas [9, 12]. In this study, the concept of metro maps serves as the basis for exploring the extracted knowledge.

The proposed method consists of the following steps: text preprocessing, generation of an ARTM database, association rule simplification, term graph generation, metro map construction, and exploration of the extracted knowledge. In the first step, the interesting terms are extracted from a collection of observed paper abstracts. The association rules are generated from the set of mined terms in the second step. The third step is devoted to rule simplification, where rules with multiple antecedents and multiple consequents are decomposed into a set of simple rules consisting of one antecedent and one consequent. These simple rules serve as building blocks for creating a term graph, in which a source node X and a sink node Y are connected by a directed arc whenever there is a simple association rule X ⇒ Y (fourth step). In the fifth step, the metro map is created from the term graph. Finally, the knowledge hidden in the metro map is explored.

The method was applied to a collection of paper abstracts found in the CORD-19 dataset [8] in order to show how researchers have responded to similar epidemic/pandemic situations throughout history. Indeed, the results of the performed experiments reveal an increase in the number of terms referring to such situations.

The remainder of the paper is structured as follows. Section 2 introduces the materials and methods used in our study. In Section 3, the experiments are described and the obtained results are analyzed. A discussion of the results can be found in Section 4, while the paper is concluded with Section 5, which also provides an outline of future work.

2 Materials and methods

The proposed method consists of three components:

• ARTM: text preprocessing and generation of an ARTM database,
• information cartography: association rule simplification, term graph generation, and metro map construction,
• exploration of the extracted knowledge.

The purpose of ARTM is to generate a database of the most interesting terms. Information cartography enables us to extract structured knowledge from the huge amount of association rules in the form of metro maps. The extracted knowledge, in the form of the terms constituting the particular metro lines, serves as a set of keywords for matching the terms of paper abstracts found in a huge database.

Text preprocessing step. Here, punctuation marks are removed as a first step. As a result, only words delimited by spaces remain in the document. Some words, like definite and indefinite articles (e.g., the, a, an), connective words (e.g., and, also, then), conjunctions (e.g., but, when, because), and verbs (e.g., is, done), represent so-called stop words, and must be removed next. The result of this removal is a sequence of terms. Then, the terms undergo term frequency calculation, where their occurrences are not only counted, but also weighted. Here, a Term Frequency/Inverse Term Frequency (TF/ITF) weighting scheme is used that assigns higher weights to rarely occurring terms. A sketch of this step is given below.
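The preprocessing step can be illustrated with a minimal Python sketch. The stop-word list below is only a fragment for illustration; the full list used in the study is not specified in the paper.

```python
import re

# Illustrative fragment of a stop-word list (articles, connective words,
# conjunctions, and verbs); the study's full list is not given here.
STOP_WORDS = {"the", "a", "an", "and", "also", "then", "but", "when", "because", "is", "done"}

def preprocess(document: str) -> list[str]:
    """Strip punctuation, lower-case the text, and drop stop words,
    leaving the sequence of terms used in the subsequent steps."""
    # Remove punctuation marks, so that only space-delimited words remain.
    words = re.sub(r"[^\w\s]", " ", document.lower()).split()
    # Remove stop words; the remaining words are the terms.
    return [word for word in words if word not in STOP_WORDS]

print(preprocess("The coronavirus, when it spreads, is a global challenge."))
# -> ['coronavirus', 'it', 'spreads', 'global', 'challenge']
```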
The TF/ITF weighting scheme is defined as follows. For a given term $z_j$, for $j = 1, \ldots, M$, occurring in document $d_i$, for $i = 1, \ldots, N$, the term frequency is expressed as:

$$tf(z_j, d_i) = \frac{n(d_i, z_j)}{|d_i|}, \quad (1)$$

where $n(d_i, z_j)$ denotes the number of occurrences of term $z_j$ in document $d_i$, and $|d_i|$ is the total number of terms in document $d_i$. On the other hand, the inverse term frequency is expressed as:

$$itf(z_j) = \log \frac{N}{n(d \mid z_j)}, \quad (2)$$

where $n(d \mid z_j)$ denotes the number of documents $d$ containing the term $z_j$, and $N$ is the total number of documents. Furthermore, the weighted frequency of the term $z_j$ in document $d_i$ is represented as a vector of weights $\mathbf{w}_i = (w_{i,1}, \ldots, w_{i,M})$, where each element $w_{i,j}$ is expressed as:

$$w_{i,j} = tf(z_j, d_i) \cdot itf(z_j). \quad (3)$$

Finally, the transaction database is generated from the relevant documents by moving each vector $\mathbf{w}_i$, representing the weighted frequencies of all terms in the corresponding document, into the transaction database. In this way, the transaction database is very similar to a market basket, except that the weighted frequencies are put into the transactions instead of the value of one.

Generation of ARTM database step. The ARTM problem is defined formally as follows. Let us assume a set of documents $D = \{d_1, \ldots, d_N\}$ and a set of terms $Z = \{z_1, \ldots, z_M\}$, where $N$ denotes the maximum number of documents and $M$ the maximum number of terms, respectively. Additionally, a matrix of weights $W$ of dimension $N \times M$ is given, where each element $w_{i,j}$ represents the frequency weight of term $z_j$ in document $d_i$, calculated according to the TF/ITF weighting scheme. The task of the generation is then to select the binary vector $\mathbf{y} = (y_1, \ldots, y_M)^T$, determining the presence or absence of the corresponding terms in the solution, such that the scalar product of the weights with $\mathbf{y}$ is maximal (Eq. (4)). Let us mention that the variable $K$ denotes the maximum number of terms in an association rule. Actually, the selected elements of the vector $\mathbf{y}$ determine the set $Y = \{z_j \mid y_j = 1, \text{ for } j = 1, \ldots, M\}$, which is a subset of $Z$, in other words $Y \subset Z$. Let us note that the values of the vector are initially set to zero. Obviously, the problem is defined as an optimization problem and can be solved using any of the well-known stochastic population-based, nature-inspired algorithms. In our study, Particle Swarm Optimization (PSO) [7] was selected for this purpose. Interested readers who would like to see the detailed implementation of this algorithm are invited to consult the paper of Fister et al. [3].
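A compact Python sketch of the TF/ITF weighting and of a fitness function for the term-selection problem follows. Summing the scalar product over all documents and rejecting candidates that select more than K terms are our assumptions for illustration; the exact formulation of Eq. (4) and the constraint handling in [3] may differ.

```python
import math
from collections import Counter

def tfitf_matrix(docs: list[list[str]], vocab: list[str]) -> list[list[float]]:
    """Build the N x M weight matrix W with w[i][j] = tf(z_j, d_i) * itf(z_j)."""
    N = len(docs)
    # n(d | z_j): the number of documents containing term z_j.
    doc_freq = Counter(term for doc in docs for term in set(doc))
    W = []
    for doc in docs:
        counts = Counter(doc)
        row = []
        for term in vocab:
            if doc_freq[term] == 0:
                row.append(0.0)  # term absent from the corpus: zero weight
            else:
                tf = counts[term] / len(doc)        # tf(z_j, d_i) = n(d_i, z_j) / |d_i|
                itf = math.log(N / doc_freq[term])  # itf(z_j) = log(N / n(d | z_j))
                row.append(tf * itf)
        W.append(row)
    return W

def term_selection_fitness(y: list[int], W: list[list[float]], K: int) -> float:
    """Scalar product of the weights with the binary vector y, summed over all
    documents (an assumed reading of Eq. (4)); a PSO would maximise this value."""
    if sum(y) > K:
        return float("-inf")  # assumed penalty: more than K terms selected
    return sum(sum(w_ij * y_j for w_ij, y_j in zip(row, y)) for row in W)
```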
The concept of information cartography is applied in order to extract knowledge from an archive of association rules mined from text [4, 12], where this knowledge is visualized in the form of metro maps. The metro map is formally defined as $\mathcal{M} = (G, \Pi)$, where $G = (T, E)$ denotes a term graph with vertices $T = \{X_1, \ldots, X_N\}$, representing terms, and edges $E = \{r_1, \ldots, r_M\}$, representing simple rules, together with the incident function $\psi_G$ that associates an ordered pair $\psi_G(r_k) = (X_i, Y_j)$ with the directed edge $r_k$ whenever there exists a simple association rule of the form $X_i \Rightarrow Y_j$; $\Pi$ represents a set of paths in $G$. In these definitions, the variables $N$ and $M$ denote the maximum number of vertices and the maximum number of edges, respectively.

Association rule simplification. A simple association rule consists of only one antecedent and one consequent, where the former is mapped to the source node $X_i \in G$ and the latter to the sink node $Y_j \in G$ of the corresponding term graph, while the path $X_i \to Y_j$ leads from the source to the sink node. In general, the association rules in the archive consist of multiple antecedents and multiple consequents, in other words:

$$X_1 \wedge \cdots \wedge X_p \Rightarrow Y_1 \wedge \cdots \wedge Y_q.$$

The simple association rules are obtained from such mined rules by pairing each antecedent with each consequent, in other words:

$$X_i \Rightarrow Y_j, \quad \text{for } i = 1, \ldots, p \text{ and } j = 1, \ldots, q.$$

In this process of rule simplification, $p \times q$ simple rules are obtained, representing the directed edges in the term graph.

Term graph generation. The simple association rules are the building blocks from which the term graph is constructed. In the term graph, each simple rule $X_i \Rightarrow Y_i$, for $i = 1, \ldots, p \times q$, where $p$ designates the maximum number of antecedents and $q$ the maximum number of consequents, respectively, denotes a directed arc from the source node $X_i$ to the sink node $Y_i$. However, a node can appear in this graph as: (1) an antecedent only, (2) a consequent only, or (3) an antecedent in some rules and a consequent in others. Consequently, the nodes are divided into three subsets, i.e., $Ante(T)$, $Cons(T)$, and $Mixed(T)$. In the term graph $G$, the terms in the antecedent subset $X \in Ante(T)$ represent source nodes with indegree zero, the terms in the consequent subset $Y \in Cons(T)$ are sink nodes with outdegree zero, while the terms in the mixed subset $X|Y \in Mixed(T)$ denote the internal nodes with both indegree and outdegree higher than zero. In summary, the antecedent set consists of nodes suitable for the starting metro stops on the metro lines, the consequent set provides the final metro stops, while the mixed set determines the intermediate metro stops and outlines a definite path towards achieving a certain final destination. Both steps are sketched below.
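A minimal Python sketch of rule simplification and node classification; the data representation (lists of antecedents and consequents per rule) is our choice for illustration.

```python
from itertools import product

# A mined rule maps a list of antecedents to a list of consequents:
# X_1 ^ ... ^ X_p  =>  Y_1 ^ ... ^ Y_q  (representation assumed for illustration).
Rule = tuple[list[str], list[str]]

def simplify(rule: Rule) -> list[tuple[str, str]]:
    """Decompose a mined rule into its p * q simple rules X_i => Y_j."""
    antecedents, consequents = rule
    return list(product(antecedents, consequents))

def classify_nodes(simple_rules: list[tuple[str, str]]) -> tuple[set[str], set[str], set[str]]:
    """Partition the term-graph nodes into Ante(T), Cons(T), and Mixed(T)."""
    sources = {x for x, _ in simple_rules}  # nodes appearing as antecedents
    sinks = {y for _, y in simple_rules}    # nodes appearing as consequents
    mixed = sources & sinks                 # antecedent in one rule, consequent in another
    return sources - mixed, sinks - mixed, mixed

simple = simplify((["virus", "fever"], ["quarantine", "isolation"]))
# -> [('virus', 'quarantine'), ('virus', 'isolation'),
#     ('fever', 'quarantine'), ('fever', 'isolation')]
```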
Metro map construction. The task of metro map construction is to find a set of metro lines, where each metro line starts at a particular starting metro stop $X_i \in Ante(T)$ and finishes at a particular final metro stop $Y_i \in Cons(T)$, while the intermediate metro stops connect the starting metro stop with the final one. This is achieved by selecting proper simple rules from the term graph, such that the sink node of the $i$-th simple rule is the source node of the $(i+1)$-th simple rule, in other words:

$$X_0 \Rightarrow Y_0, \; X_1 \Rightarrow Y_1, \; \ldots, \; X_{n-1} \Rightarrow Y_{n-1}. \quad (8)$$

The terms $Y_i$ for $i = 0, \ldots, n-1$ in Eq. (8) can be avoided due to the equivalence $Y_i \equiv X_{i+1}$. As a result, a sequence of implication rules is given, as follows:

$$X_0 \Rightarrow X_1 \Rightarrow \cdots \Rightarrow X_n. \quad (9)$$

According to the standard rules of mathematical logic, Eq. (9) can be transformed as follows:

$$X_0 \wedge X_1 \wedge \cdots \wedge X_{n-1} \Rightarrow X_n, \quad (10)$$

asserting that the conjunction of the first $n$ terms implying the final consequent is equivalent to the sequence of implications over the $n+1$ terms. Obviously, Eq. (10) is easier to apply in the interpretation of the obtained results.

The algorithm for constructing the metro map for visualizing the association rules needs to fulfill the following four objectives:

• minimum line coherence,
• maximum map size,
• high coverage,
• high structure quality.

The minimum line coherence limits the number of intermediate metro stops that may appear on a metro line. The maximum map size expresses that we are interested in covering our information domain by using a number of metro lines as close to $K$ as possible. The coverage estimates how well the selected metro lines exploit the terms in the transaction database. In line with this, the lift measure of an association rule, $\mathrm{Lift}(X \Rightarrow Y)$, is used, which is expressed as:

$$\mathrm{Lift}(X \Rightarrow Y) = \frac{\mathrm{Supp}(X \cup Y)}{\mathrm{Supp}(X) \cdot \mathrm{Supp}(Y)}.$$

Let it be noted that the characteristic of this measure is that the higher the value, the stronger the association. Additionally, the coverage $\mathrm{Cov}(\pi)$ of a whole metro line $\pi \in \Pi$ is obtained by aggregating the values $\mathrm{Lift}(r)$ over the simple association rules $r$ (i.e., rules of the form $X \Rightarrow Y$) constituting the line. Finally, the coverage of the metro map is the simple average over all the proposed metro lines, in other words:

$$\mathrm{Cov}(\mathcal{M}) = \frac{1}{|\Pi|} \sum_{\pi \in \Pi} \mathrm{Cov}(\pi). \quad (15)$$

The metro map structure quality refers to the diversity of the metro lines, where we are interested in those metro lines that differ in their intermediate stops as much as possible. This relation (Eq. (16)) averages the dissimilarity of the intermediate stops over all $C = \binom{|\Pi|}{2}$ pairs of metro lines. In summary, the quality of a solution evaluates the constructed metro map according to two objectives: the coverage (according to Eq. (15)) and the structure quality (according to Eq. (16)). Both are combined in a linear combination (Eq. (17)), in which the weight variable $w$ indicates the influence of the second term on the total fitness value, and $n_i$ is the number of metro lines. However, each solution is subject to the minimum line coherence and the maximum map size, as explained previously. A stochastic population-based, nature-inspired evolutionary algorithm was used for the implementation of the metro map construction. Interested readers are invited to consult the paper of Fister et al. [4] for more details about the implementation of the evolutionary algorithm.
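The coverage computation can be sketched as follows in Python. Only the lift measure and the map-level average (Eq. (15)) are defined above; aggregating a line's lift values by their mean is our assumption for illustration.

```python
def lift(supp_xy: float, supp_x: float, supp_y: float) -> float:
    """Lift(X => Y) = Supp(X u Y) / (Supp(X) * Supp(Y)); higher values
    indicate a stronger association."""
    return supp_xy / (supp_x * supp_y)

def line_coverage(rule_lifts: list[float]) -> float:
    """Coverage of one metro line; the mean of the per-rule lift values is an
    assumed aggregation, since the exact formula is not reproduced above."""
    return sum(rule_lifts) / len(rule_lifts)

def map_coverage(lines: list[list[float]]) -> float:
    """Coverage of the metro map: the simple average of the line coverages (Eq. (15))."""
    return sum(line_coverage(line) for line in lines) / len(lines)
```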
Exploration of extracted knowledge. Normally, the created metro map of ARTM information is visualized in the sense of real metro maps, where each metro line consists of a particular number of metro stops. Some metro lines proceed straightforwardly, while others interrelate with one another. Obviously, these relationships affect the plot of the story and highlight special events that can occur either unexpectedly or as an ordinary consequence of some process operation. In our study, we are interested in identifying the terms that occur in the best metro map according to the fitness function evaluation. These terms then serve as keywords for searching for the knowledge hidden in the abstracts of papers saved in a huge database. The results of these experiments are then visualized using traditional statistical visualization techniques.

3 Experiments and results

The proposed search method introduces two stochastic, nature-inspired, population-based algorithms: the former searches for the optimal binary vector y, in which the value 0 determines that the corresponding term is absent from the solution and the value 1 that it is present; the latter is reserved for constructing the metro map. Both algorithms are controlled by parameters that ensure their proper operation. The parameter setting used during the experimental work is illustrated in Table 1.

The study was divided into two parts. In the first part, the ARTM was conducted on the CORD-19 dataset [8], where its non-commercial subset was taken into consideration, and the quality of solutions was evaluated by maximizing Eq. (4). The second part was applied to the MEDLINE database, which consists of medical scientific papers. Here, the abstracts of all the papers found in the database were parsed using the Pubmed Parser tool in Python [1]. In this case, the quality of solutions was estimated using Eq. (17).

The purpose of our experimental work was to show how researchers have responded to similar epidemic/pandemic situations throughout history. In line with this, the ARTM method was applied to the CORD-19 dataset. The results of the method are presented in the word cloud in Fig. 1, from which it can be seen that terms like "cell", "protein", "infection", and "patient" occur most frequently in the observed abstracts of the papers.

The goal of our research was to show how researchers reacted to epidemic/pandemic events in the past. In line with this, the terms in the association rules constituting the best metro map according to the fitness function were extracted (i.e., 44 such terms), from which those terms that do not have any connection with medicine were eliminated (i.e., 18 terms). The 26 terms that remained are illustrated in Table 2, where the eliminated terms are denoted as crossed-out text. All the remaining regular terms (i.e., 21 after removing repetitions of the same words) entered the second phase of the experiment, where they were used as keywords for searching the abstracts of the medical papers maintained in the MEDLINE database from the year 1955 onward. All abstracts matching at least 30 % of the keywords contribute to the final outcome (as sketched below). The numbers of hits are depicted in Fig. 2. Interestingly, the number of hits increased almost exponentially from the year 1955 until 2019, although there are some periods of stagnation (e.g., the year 2013, or the period from 2014 to 2018). This means that terms like "viruses", "quarantine", and "h7n9" have appeared with greater frequency in line with the appearance of the different viruses behind the epidemic/pandemic events of recent years. Once again, an increase can be observed in the year 2019.
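A minimal Python sketch of the matching used to produce Fig. 2; the representation of abstracts as (year, set-of-terms) pairs is our choice for illustration.

```python
def matches(abstract_terms: set[str], keywords: set[str], threshold: float = 0.3) -> bool:
    """An abstract counts as a hit when it contains at least 30 % of the keywords."""
    return len(abstract_terms & keywords) >= threshold * len(keywords)

def hits_per_year(abstracts: list[tuple[int, set[str]]], keywords: set[str]) -> dict[int, int]:
    """Count the matching abstracts per publication year, as plotted in Fig. 2."""
    hits: dict[int, int] = {}
    for year, terms in abstracts:
        if matches(terms, keywords):
            hits[year] = hits.get(year, 0) + 1
    return hits

# Illustrative call on hypothetical data:
example = [(2013, {"virus", "quarantine", "cell"}), (2019, {"protein"})]
print(hits_per_year(example, keywords={"virus", "quarantine", "h7n9", "pneumonia"}))
# -> {2013: 1}
```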
4 Discussion

The results of the study showed that epidemic/pandemic events have strongly affected the production of new scientific papers over the course of history. These papers, in turn, were inspired by the emergence of new viruses or by mutations of old ones. A historical analysis revealed the biggest increase in the number of papers in the years 2013 and 2014. This increase correlates with the outbreak of the MERS disease. Interestingly, at that time preprints were not as popular as they are today [6]. Therefore, many papers struggled through a long review process and appeared many months after the outbreak. Among the terms found using the proposed method, the papers referring to any aspect of coronavirus research were mostly distinguished by the type of the virus (e.g., RNA), its clinical manifestation (e.g., pneumonia) and consequences (e.g., quarantine), its familiarity (e.g., H7N9), or the virus description (e.g., pathogen). Some terms found in the study, like mitochondrial and pseudoknots, were harder to interpret.

5 Conclusion

The COVID-19 pandemic has affected the lives of people all over the world. Social isolation and quarantine have stopped the world for many months. Moreover, the catastrophe reached its zenith at the time of writing. Although no one in the world expected a pandemic of such dimensions, the situation has shown how susceptible humanity can be. The purpose of the study was to show how researchers have responded, through the subjects of their papers, to similar epidemic/pandemic situations throughout history. This study analyzed the abstracts of the papers found in the CORD-19 dataset using the ARTM method, and extracted the knowledge hidden in the large amount of mined association rules with the metro map methodology. The extracted terms were then used as keywords to search the abstracts of papers collected in the MEDLINE database. The results of the study showed that the number of papers including the terms proposed by the metro map method has increased exponentially over the course of history. In future work, we will try to relate these findings to the increased usage of antiviral drugs. We speculate that a higher consumption of antiviral drugs may lead to the development of more pathogenic strains like SARS-CoV-2.

References

[1] Pubmed Parser: A Python parser for PubMed Open-Access XML subset and MEDLINE XML dataset.
[2] An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases.
[3] Population-based metaheuristics for association rule text mining.
[4] Information cartography in association rule mining.
[5] Coronavirus COVID-19 Global Cases.
[6] Are preprints the future of biology? A survival guide for scientists.
[7] Particle swarm optimization.
[8] COVID-19 Open Research Dataset (CORD-19).
[9] Metro maps of science.
[10] Trains of thought: Generating information maps.
[11] A metro map can tell a story, as well as provide good directions.
[12] Information cartography: Creating zoomable, large-scale maps of information.