key: cord-0256681-9s4fop7s
authors: Nicholson, David N.; Rubinetti, Vincent; Hu, Dongbo; Thielk, Marvin; Hunter, Lawrence E.; Greene, Casey S.
title: Linguistic Analysis of the bioRxiv Preprint Landscape
date: 2021-05-25
journal: bioRxiv
DOI: 10.1101/2021.03.04.433874
sha: 94273704ff5cba3223d881c84a4606f7d7e34ebf
doc_id: 256681
cord_uid: 9s4fop7s

Preprints allow researchers to make their findings available to the scientific community before they have undergone peer review. Studies on preprints within bioRxiv have largely focused on article metadata and how often these preprints are downloaded, cited, published, and discussed online. A missing element that has yet to be examined is the language contained within the bioRxiv preprint repository. We sought to compare and contrast linguistic features within bioRxiv preprints and published biomedical text as a whole, as this is an excellent opportunity to examine how peer review changes these documents. The most prevalent features that changed appear to be associated with typesetting and mentions of supplementary sections or additional files. In addition to text comparison, we created document embeddings derived from a preprint-trained word2vec model. We found that these embeddings are able to parse out different scientific approaches and concepts, link unannotated preprint-peer-reviewed article pairs, and identify journals that publish papers linguistically similar to a given preprint. We also used these embeddings to examine factors associated with the time elapsed between the posting of a first preprint and the appearance of a peer-reviewed publication. We found that preprints with more versions posted and more textual changes took longer to publish. Lastly, we constructed a web application (https://greenelab.github.io/preprint-similarity-search/) that allows users to identify the journals and articles most linguistically similar to a bioRxiv or medRxiv preprint, as well as to observe where the preprint would be positioned within the landscape of published articles.

The dissemination of research findings is key to science. Initially, much of this communication happened orally [1]. During the 17th century, the predominant form of communication shifted to personal letters shared from one scientist to another [1]. Although the first journals were created earlier, scientific journals did not become a predominant mode of communication until the 19th and 20th centuries [1,2,3]. While scientific journals became the primary method of communication, they added high maintenance costs and long publication times to scientific discourse [2,3]. One solution scientists have adopted for these issues is to communicate through preprints, which are scholarly works that have yet to undergo the peer review process [4,5]. Preprints are commonly hosted on online repositories, where users have open and easy access to these works. Notable repositories include arXiv [6], bioRxiv [7], and medRxiv [8]; however, there are over 60 different repositories available [9].

The burgeoning uptake of preprints in the life sciences has been examined through research focused on metadata from the bioRxiv repository. For example, life science preprints are being posted at an increasing rate [10]. Furthermore, these preprints are being rapidly shared on social media, routinely downloaded, and cited [11]. Some preprint categories are shared on social media by both scientists and non-scientists [12].
About two-thirds to three-quarters of preprints are eventually published [13,14], and life science articles with a corresponding preprint version are cited and discussed more often than articles without one [15,16,17]. Preprints take an average of 160 days to be published in the peer-reviewed literature [18], and those with multiple versions take longer to publish [18].

The rapid uptake of preprints in the life sciences also poses challenges. Preprint repositories receive a growing number of submissions [19]. Linking preprints with their published counterparts is vital to maintaining the consistency of scholarly discourse but is challenging to perform manually [16,20,21]; errors and omissions in this process result in missing links and consequently erroneous metadata. Furthermore, repositories based on standard publishing tools are not designed to show how the textual content of preprints is altered by the peer review process [19]. Some scientists have expressed concern that competitors could scoop them by making results available before publication [19,22]. Preprint repositories by definition do not perform in-depth peer review, which can result in posted preprints containing inconsistent results or conclusions [17,20,23,24]; however, an analysis of preprints posted at the beginning of 2020 revealed that most underwent only minor changes as they were published [25]. Despite a growing emphasis on using preprints to examine the publishing process within the life sciences, how these findings relate to the text of all documents in bioRxiv has yet to be examined.

Textual analysis uses linguistic, statistical, and machine learning techniques to analyze and extract information from text [26]. For instance, scientists have analyzed the linguistic similarities and differences of biomedical corpora [27,28]. Scientists have also provided the community with a number of tools that aid future text mining systems [29,30,31], as well as advice on how to train and test future text processing systems [32,33,34].

Here, we use textual analysis to examine the bioRxiv repository, placing particular emphasis on understanding the extent to which full-text research can address hypotheses derived from the study of metadata alone. To understand how preprints relate to the traditional publishing ecosystem, we examine the linguistic similarities and differences between preprints and peer-reviewed text and observe how linguistic features change during the peer review and publishing process. We hypothesize that preprints and biomedical text are largely similar, especially when controlling for the differential uptake of preprints across fields. Furthermore, we hypothesize that document embeddings [35,36] provide a versatile way to disentangle linguistic features and serve as a suitable medium for improving preprint repository functionality. We test these hypotheses by producing a linguistic landscape of bioRxiv preprints, detecting preprints that change substantially during publication, and identifying journals that publish manuscripts linguistically similar to a target preprint. We encapsulate our findings in a web app that projects a user-selected preprint onto this landscape and suggests journals and articles that are linguistically similar.
Our work reveals how linguistically similar and dissimilar preprints are to peer-reviewed text, quantifies the linguistic changes that occur during the peer review process, and highlights the feasibility of document embeddings with respect to preprint repository functionality and peer review's effect on publication time.

Text analytics is generally comparative in nature, so we selected three relevant text corpora for analysis: the bioRxiv corpus, which is the target of the investigation; the PubMed Central Open Access (PMCOA) corpus, which represents the peer-reviewed biomedical literature; and the New York Times Annotated Corpus (NYTAC), which serves as a representative of general English text.

bioRxiv [7] is a repository for life science preprints. We downloaded an XML snapshot of this repository on February 3rd, 2020, from bioRxiv's Amazon S3 bucket [37]. This snapshot contained the full text and image content of 98,023 preprints. Preprints on bioRxiv are versioned, and in our snapshot, 26,905 of the 98,023 preprints had more than one version. When preprints had multiple versions, we used the latest one unless otherwise noted. Authors submitting preprints to bioRxiv can select one of twenty-nine different categories and tag the type of article: a new result, a confirmatory finding, or a contradictory finding. A few preprints in this snapshot were later withdrawn from bioRxiv; when a preprint is withdrawn, its content is replaced with the reason for withdrawal. As there were very few withdrawn preprints, we did not treat them as a special case.

PubMed Central (PMC) is a digital archive maintained by the United States National Institutes of Health's National Library of Medicine (NIH/NLM) that contains full-text biomedical and life science articles [38]. Paper availability within PMC mainly depends on the journal's participation level [39]. Articles in PMC's Open Access subset are available for bulk download and text mining [30,43]. PMC also contains a resource that holds author manuscripts that have already passed the peer review process [44]. Since these manuscripts have already been peer-reviewed, we excluded them from our analysis, as the scope of our work focuses on the beginning and end of a preprint's life cycle. We downloaded a snapshot of the PMCOA corpus on January 31st, 2020. This snapshot contained many types of articles: literature reviews, book reviews, editorials, case reports, research articles, and more. We used only research articles, which aligns with the intended role of bioRxiv, and we refer to these articles as the PMCOA corpus.

We used CrossRef [46] to identify bioRxiv preprints linked to a corresponding published article. We accessed CrossRef on July 7th, 2020, and successfully linked 23,271 preprints to their published counterparts. Of those 23,271 preprint-published pairs, only 17,952 had a published version present within the PMCOA corpus. For analyses that involved published links, we focused on this subset of preprint-published pairs.

We compared the bioRxiv, PMCOA, and NYTAC corpora to assess the similarities and differences between them. We used the NYTAC corpus as a negative control to assess how similar the two life science repositories are to each other compared with non-life-science text. All corpora contain both words and non-word entities (e.g., numbers or symbols), which we refer to together as tokens to avoid confusion.
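The corpus-level metrics described next can be computed with standard NLP tooling. Below is a minimal sketch, not the authors' released code, of tallying a few of these statistics with spaCy; the toy document list is an illustrative stand-in:

```python
import spacy

# The paper used spaCy's "en_core_web_sm" model (version 2.2.3).
nlp = spacy.load("en_core_web_sm")

def corpus_stats(documents):
    """Tally documents, tokens, sentences, and stopwords for a corpus."""
    n_tokens = n_sentences = n_stopwords = 0
    for doc in nlp.pipe(documents):
        n_tokens += len(doc)                               # all tokens, word or not
        n_sentences += sum(1 for _ in doc.sents)
        n_stopwords += sum(token.is_stop for token in doc)
    return {
        "documents": len(documents),
        "tokens": n_tokens,
        "sentences": n_sentences,
        "stopwords": n_stopwords,
        "avg_document_length": n_tokens / len(documents),
    }

print(corpus_stats(["Preprints let researchers share findings before peer review."]))
```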
We calculated the following characteristic metrics for each corpus: the number of documents, the number of sentences, the total number of tokens, the number of stopwords, the average length of a document, the average length of a sentence, the number of negations, the number of coordinating conjunctions, the number of pronouns, and the number of past-tense verbs.

spaCy is a lightweight and easy-to-use Python package designed to preprocess and filter text [47]. We used spaCy's "en_core_web_sm" model [47] (version 2.2.3) to preprocess all corpora and filter out 326 spaCy-provided stopwords. Following that cleaning process, we calculated the frequency of every token across all corpora. Because many tokens were unique to one corpus or another and observed at low frequency, we focused on the union of the top 0.05% (~100) most frequently occurring tokens within each corpus. We generated a contingency table for each token in this union and calculated the odds ratio along with its 95% confidence interval [48].

We measured corpus similarity by calculating the Kullback-Leibler (KL) divergence across all corpora, along with the token enrichment analysis. This metric measures the extent to which two distributions differ: a low KL divergence indicates that two distributions are similar, and a high value indicates that they are dissimilar. The optimal number of tokens for calculating the KL divergence is unknown, so we calculated this metric over a range from the 100 to the 5,000 most frequently occurring tokens shared between two corpora.

We sought to build a language model to quantify the linguistic similarity of biomedical preprints and articles. Word2vec is a suite of neural networks designed to model the linguistic features of words based on their appearance in text. These models are trained either to predict a word from its sentence context, called a continuous bag of words (CBOW) model, or to predict the context from a given word, called a skip-gram model [35]. Through these prediction tasks, both networks learn latent linguistic features that are useful for downstream tasks, such as identifying similar words. We used gensim [49] (version 3.8.1) to train a CBOW [35] model over all the main text within each preprint in the bioRxiv corpus. Determining the best number of dimensions for word embeddings can be a nontrivial task; however, it has been shown that optimal performance lies between 100 and 1,000 dimensions [50]. We chose to train the CBOW model using 300 hidden nodes, a batch size of 10,000 words, and 20 epochs. We set a fixed random seed and used gensim's default settings for all other hyperparameters.

Once trained, every token present within the CBOW model is associated with a dense vector representing the latent features captured by the network. We used these word vectors to generate a document representation for every article within the bioRxiv and PMCOA corpora. For each document, we used spaCy to lemmatize each token and then took the average of the vectors for every lemmatized token present in both the CBOW model and the individual document [36]. Any token present in the document but absent from the CBOW model is ignored during this calculation.
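A minimal sketch of this training and averaging procedure follows, using gensim 3.x parameter names (`size`, `iter`); the toy sentences and the seed value are illustrative assumptions, as the paper fixes a seed but does not report it:

```python
import numpy as np
import spacy
from gensim.models import Word2Vec

nlp = spacy.load("en_core_web_sm")

# Toy corpus: each document is pre-split into sentences, each a list of tokens.
sentences = [
    ["preprints", "share", "findings", "before", "review"],
    ["peer", "review", "changes", "preprint", "text"],
]

model = Word2Vec(
    sentences,
    sg=0,              # 0 selects the CBOW architecture
    size=300,          # 300 hidden nodes ("vector_size" in gensim >= 4)
    batch_words=10000, # batch size of 10,000 words
    iter=20,           # 20 epochs ("epochs" in gensim >= 4)
    min_count=1,
    seed=100,          # illustrative seed value
)

def document_vector(text):
    """Average the vectors of lemmatized tokens found in the CBOW model."""
    lemmas = [token.lemma_ for token in nlp(text)]
    vectors = [model.wv[lemma] for lemma in lemmas if lemma in model.wv]
    return np.mean(vectors, axis=0)  # tokens missing from the model are ignored

embedding = document_vector("Peer review changes preprint text.")
```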
We sought to visualize the landscape of preprints and determine the extent to which their representation as document vectors corresponded to author-supplied document labels. We used principal component analysis (PCA) [51] to project bioRxiv document vectors into a low-dimensional space. We trained this model using scikit-learn's [52] implementation of a randomized solver [53] with a random seed of 100, an output of 50 principal components (PCs), and default settings for all other hyperparameters. After training, every preprint within the bioRxiv corpus is assigned a score for each generated PC.

We sought to uncover the concepts captured by the generated PCs and used the cosine similarity metric to examine them. This metric takes two vectors as input and outputs a score between -1 (most dissimilar) and 1 (most similar). We used this metric to score the similarity between each generated PC and every token within our CBOW model. We report the top 100 positively and negatively scoring tokens as word clouds, where the size of each word corresponds to the magnitude of similarity and color represents a positive (orange) or negative (blue) association.

The bioRxiv maintainers have automated procedures to link preprints to peer-reviewed versions, and many journals require authors to update preprints with a link to the published version. However, this automation is primarily based on exact matching of specific preprint attributes. If authors change the title between the preprint and published version (e.g., [54] and [55]), this change will prevent bioRxiv from automatically establishing a link. Furthermore, if the authors do not report the publication to bioRxiv, the preprint and its corresponding published version are treated as distinct entities despite representing the same underlying research. We hypothesized that close proximity in the document embedding space could match preprints with their corresponding published versions. If this finding holds, we could use this embedding space to fill in links missed by existing automated processes.

We used the subset of preprint-paper pairs annotated in CrossRef, as described above, to calculate the distribution of distances between preprints and their published versions. This distribution was calculated by taking the Euclidean distance between each preprint's embedding coordinates and the coordinates of its corresponding published version. We also calculated a background distribution, which consisted of the distance between each preprint with an annotated publication and a randomly selected article from the same journal. We compared the two distributions to determine whether they differed, as a significant difference would indicate that this embedding method can tell preprint-published pairs apart. Following this comparison, we calculated distances between preprints without a published-version link and PMCOA articles that had not been matched to a corresponding preprint. We filtered out any potential links with distances greater than the minimum value of the background distribution, as we considered such pairs to be true negatives. Lastly, we binned the remaining pairs based on percentiles of the annotated-pairs distribution: [0, 25th percentile), [25th percentile, 50th percentile), [50th percentile, 75th percentile), and [75th percentile, minimum background distance).
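The following sketch illustrates this distance comparison and binning with synthetic stand-in embeddings; the array shapes and values are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for 300-dimensional document embeddings: annotated preprint and
# published pairs (constructed to be close) and random same-journal articles.
preprints = rng.normal(size=(100, 300))
published = preprints + rng.normal(scale=0.1, size=(100, 300))
same_journal = rng.normal(size=(100, 300))

observed = np.linalg.norm(preprints - published, axis=1)       # annotated pairs
background = np.linalg.norm(preprints - same_journal, axis=1)  # background

# Candidate links farther apart than the closest background pair are treated
# as true negatives; the remainder are binned by percentiles of the observed
# distribution for manual curation.
cutoff = background.min()
q25, q50, q75 = np.percentile(observed, [25, 50, 75])

def assign_bin(distance):
    if distance >= cutoff:
        return None  # filtered out as a true negative
    if distance < q25:
        return "[0, 25th)"
    if distance < q50:
        return "[25th, 50th)"
    if distance < q75:
        return "[50th, 75th)"
    return "[75th, cutoff)"

print(assign_bin(observed.mean()))
```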
We randomly sampled 50 articles from each bin and shuffled the four sets to produce a list of 200 potential preprint-published pairs in randomized order. We supplied these pairs to two co-authors, who manually determined whether each link between a preprint and a putative matched version was correct or incorrect. After this curation process, we encountered eight disagreements between the reviewers. We supplied these disagreements to a third scientist, who carefully reviewed each case and made a final determination. Using this curated set, we evaluated the extent to which distance in the embedding space reveals valid but unannotated links between preprints and their published versions.

Preprints take varying amounts of time to be published. We sought to measure the time required for preprints to appear in the peer-reviewed literature and compared this measurement across author-selected preprint categories as well as across individual preprints. First, we queried bioRxiv's application programming interface (API) to obtain the date a preprint was posted to bioRxiv as well as the date it was accepted for publication. We measured time elapsed as the difference between the date a preprint was first posted on bioRxiv and its publication date. Along with the time elapsed, we also recorded the number of different preprint versions posted to bioRxiv. Using these data, we applied the Kaplan-Meier estimator [56], via the KaplanMeierFitter function from the lifelines [57] (version 0.25.6) Python package, to calculate the half-life of preprints across all preprint categories within bioRxiv; preprints that had yet to be published were treated as censored observations. There were a limited number of cases in which authors appeared to post preprints after the publication date, which results in a negative time difference, as previously reported [58]. We removed these preprints from this analysis, as they are incompatible with the rules of the bioRxiv repository.

Following the half-life calculation, we measured the textual difference between preprints and their corresponding published versions by calculating the Euclidean distance between their embedding representations. This metric can be difficult to interpret in the context of textual differences, so we sought to contextualize the meaning of one distance unit. We did so by randomly sampling, with replacement, pairs of preprints from the bioinformatics topic area, as this category is well represented within bioRxiv and contains a diverse set of research articles. We calculated the distance between the two preprints in each of 1,000 sampled pairs and reported the mean, and we repeated this procedure using all preprints within bioRxiv. These two means serve as benchmarks for comparison, as distance units are only meaningful relative to other distances within the same space.

We then performed linear regression to model the relationship between a preprint's version count and its time to publication, and a second linear regression to measure the relationship between document embedding distance and time to publication. For this analysis we retained preprints with negative times in the regression models and observed that they had minimal impact on the results. We visualize the version-count regression model as a violin plot and the document-embedding regression model as a square bin plot.
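A minimal sketch of the half-life and regression calculations, assuming a small illustrative table of preprints (the paper's full dataset comes from the bioRxiv API):

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

# Illustrative records: days from first posting to publication, whether the
# preprint has been published (0 = censored), and the number of versions.
df = pd.DataFrame({
    "days": [120, 200, 90, 400, 365, 150],
    "published": [1, 1, 1, 0, 1, 1],
    "versions": [1, 2, 1, 3, 2, 1],
})

# Kaplan-Meier estimate of time to publication; unpublished preprints are
# treated as censored observations.
kmf = KaplanMeierFitter()
kmf.fit(df["days"], event_observed=df["published"])
print(kmf.median_survival_time_)  # the "half-life" reported per category

# Linear regression of time to publication on version count.
pub = df[df["published"] == 1]
slope, intercept = np.polyfit(pub["versions"], pub["days"], 1)
print(f"each additional version adds ~{slope:.0f} days")
```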
Preprints are more likely to be published in journals that publish content similar to the work in question. We assessed this claim by building classifiers based on document and journal representations. First, we removed all journals that had fewer than 100 papers in the PMC corpus. We held out our preprint-published subset (see the section 'Mapping bioRxiv preprints to their published counterparts' above) and treated it as a gold-standard test set, using the remainder of the PMCOA corpus for training and initial evaluation of our models.

Some journals publish articles in a focused topic area, while others cover many topics. Likewise, some journals publish at most a few hundred papers per year, while others publish ten thousand or more. Accounting for these characteristics, we designed two approaches: one centered on manuscripts and another centered on journals. For the manuscript-based approach, we identified the manuscripts most similar to the query preprint and evaluated where those documents were published. We embedded each query article into the space defined by the word2vec model (see the section 'Constructing a Document Representation for Life Sciences Text' above), selected the manuscripts closest to the query by Euclidean distance, and returned the journals in which those articles were published, along with the articles that led to each journal being reported. Because this approach allows journals that publish at high frequency to dominate the results, we constructed a journal-based approach to accompany it. Here we identified the most similar journals by constructing a journal representation in the same embedding space, computed as the average embedding of all published papers within a given journal. We then projected a query article into the same space and returned the journals closest to the query. Both models were constructed using the scikit-learn k-nearest neighbors implementation [59] with the number of neighbors set to 10, an appropriate number for our use case. We consider a prediction a true positive if the correct journal appears within the reported list of neighbors, and we evaluate performance using 10-fold cross-validation on the training set along with evaluation on the test set.

We developed a web application that places any bioRxiv or medRxiv preprint into the overall document landscape and identifies similar papers and journals. The application downloads a PDF version of any preprint hosted on the bioRxiv or medRxiv servers, uses PyMuPDF [60] to extract text from the downloaded PDF, and feeds the extracted text into our CBOW model to construct a document embedding representation. We pass this representation to our journal and manuscript search to identify journals based on the ten closest neighbors among individual papers and journal centroids. We implemented this search using the scikit-learn implementation of k-d trees; to run it more cost-effectively in a cloud computing environment with limited memory, we sharded the k-d tree into four trees. The app provides a visualization of the article's position within our training data to illustrate the local publication landscape.
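A minimal sketch of this query pipeline follows, assuming the `document_vector` averaging function sketched earlier and toy journal centroids; the PDF path and the small neighbor count are illustrative (the deployed app uses ten neighbors over the full journal set):

```python
import fitz  # PyMuPDF; older releases name the method page.getText()
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pdf_to_text(path):
    """Extract plain text from a preprint PDF."""
    with fitz.open(path) as pdf:
        return "".join(page.get_text() for page in pdf)

# Toy journal centroids: in the real system, each centroid is the average
# embedding of all of a journal's published papers.
rng = np.random.default_rng(0)
journal_names = ["Journal A", "Journal B", "Journal C"]
centroids = rng.normal(size=(3, 300))

knn = NearestNeighbors(n_neighbors=3, algorithm="kd_tree").fit(centroids)

# In production the query would be document_vector(pdf_to_text("preprint.pdf")).
query = rng.normal(size=(1, 300))
_, indices = knn.kneighbors(query)
print([journal_names[i] for i in indices[0]])
```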
We used SAUCIE [61], an autoencoder designed to cluster single-cell RNA-seq data, to build a two-dimensional embedding space that could be applied to newly generated preprints without retraining, a limitation of other approaches we explored for visualizing entities expected to lie on a nonlinear manifold. We trained this model on the document embeddings of PMC articles that did not have a matching preprint version, using the following parameters: a hidden size of 2, a learning rate of 0.001, a lambda_b of 0, a lambda_c of 0.001, and a lambda_d of 0.001, for 5,000 iterations. When a user requests a new document, we project that document onto the generated two-dimensional space, allowing the user to see where their preprint falls within the landscape. We present our recommendations as a shortlist and provide access to our network visualization on our website (see Software and Data Availability).

Our manuscript describes a large-scale analysis of bioRxiv. Concurrent with our work, another set of authors performed a detailed curation and analysis of a subset of bioRxiv [25] focused on preprints posted during the initial stages of the COVID-19 pandemic. That curated analysis was designed to examine preprints at a time of increased readership [62] and includes preprints posted from January 1st, 2020 to April 30th, 2020 [25]. We sought to contextualize this subset, which we term "Preprints in Motion" after the title of the preprint [25], within our global picture of the bioRxiv preprint landscape. We extracted all preprints from the set reported in Preprints in Motion [25] and retained those with entries in the bioRxiv repository. We manually downloaded the XML versions of these preprints and mapped them to their published counterparts as described above. We used PubMed Central's DOI converter [63] to map the published articles' DOIs to their respective PubMed Central IDs. We retained articles included in the PMCOA corpus and performed a token analysis, as described above, to compare these preprints with their published versions. As above, we generated document embeddings for every obtained preprint and published article, projected the preprint embeddings onto our publication landscape to visually observe the dispersion of this subset, and performed a time analysis that paralleled our approach for the full set of preprint-publication pairs to examine the relationship between linguistic change and time to publication.

The preprint landscape is rapidly changing, and the number of bioRxiv preprints in our data download (71,118) was nearly double that of a recent study that reported on a snapshot with 37,648 preprints [13]. Because the rate of change is rapid, we first analyzed category data and compared our results with previous findings. As in previous reports [13], neuroscience remains the most common category of preprints, followed by bioinformatics (Supplemental Figure S1). Microbiology, which was fifth in the most recent report [13], has now surpassed evolutionary biology and genomics to move into third. When authors upload their preprints, they select from three result category types: new results, confirmatory results, or contradictory results. We found that nearly all preprints (97.5%) were categorized as new results, consistent with reports on a smaller set [64]. Taken together, these results suggest that while bioRxiv has experienced dramatic growth, how it is being used appears to have remained consistent in recent years.
Figure 1: A. The Kullback-Leibler divergence measures the extent to which the token distributions, not specific tokens, differ from each other. The token distributions of the bioRxiv and PMC corpora are more similar to each other than either biomedical corpus is to the NYTAC one. B. The significant differences in token frequencies between the corpora appear to be driven by the fields with the highest uptake of bioRxiv, as terms from neuroscience and genomics are relatively more abundant in bioRxiv. We plot the 95% confidence interval for each reported token. C. Of the tokens that differ between bioRxiv and PMC, the most abundant in bioRxiv are "et" and "al", while the most abundant in PMC is "study". D. The significant differences in token frequencies between preprints and their corresponding published versions often appear to be associated with typesetting and supplementary or additional materials. We plot the 95% confidence interval for each reported token. E. The tokens with the largest absolute differences in abundance appear to be stylistic.

Documents within bioRxiv were slightly longer than those within PMCOA, but both were much longer than those from the control (NYTAC) (Table 1). The average sentence length, the fraction of pronouns, and the use of the passive voice were all more similar between bioRxiv and PMC than either was to NYTAC (Table 1). The Kullback-Leibler (KL) divergence of token frequency distributions between bioRxiv and PMCOA was low, especially among the top few hundred tokens (Figure 1A). As more tokens were incorporated, the KL divergence increased but remained much lower than when either biomedical corpus was compared against NYTAC. These findings support our notion that bioRxiv is linguistically similar to the PMCOA repository.

Terms like "neurons", "genome", and "genetic", which are common in genomics and neuroscience, were more common in bioRxiv than PMCOA, while terms associated with clinical research, such as "clinical", "patients", and "treatment", were more common in PMCOA (Figures 1B and 1C). When controlling for the differences in the document collections to identify textual changes associated with the publication process, we found that tokens such as "et" and "al" were enriched in bioRxiv, while typesetting-related tokens such as "-" were enriched in PMCOA (Figures 1D and 1E). Furthermore, we found that specific changes appeared to be related to journal styles: "figure" was more common in bioRxiv, while "fig" was relatively more common in PMCOA. Other changes appeared to be associated with increasing reference to content external to the manuscript itself: the tokens "supplementary", "additional", and "file" were all more common in PMCOA than bioRxiv, suggesting that journals are not simply replacing one token with another but that there are more mentions of such content after peer review. Taken together, these results suggest that the structure of the text within preprints on bioRxiv is similar to that of published articles within PMCOA. The differences in uptake across fields are supported both by differences in authors' categorization of their articles and by the text within the articles themselves. At the level of individual manuscripts, the terms that change the most appear to be associated with typesetting, journal style, and an increasing reliance on additional materials after peer review.

Document embeddings provide a means to categorize the language of documents in a way that takes into account the similarities between terms [36,65,66]. We found that the first two PCs separated articles from different author-selected categories (Figure 2A). Certain neuroscience papers appeared to be more associated with the cellular biology direction of PC1, while others seemed to be more associated with the informatics-related direction (Figure 2A).
This suggests that the concepts captured by the PCs were not exclusively related to field. Visualizing token-PC similarity revealed tokens associated with particular research approaches (Figures 2B and 2C). The token associations of PC1 show the separation of cell biology and informatics-related fields, with tokens such as "empirical", "estimates", and "statistics" depicted in orange and "cultured" and "overexpressing" shown in blue (Figure 2B). The associations of PC2 show the separation of bioinformatics and neuroscience, with tokens such as "genomic", "genome", and "genomes" depicted in orange and "evoked", "stimulus", and "stimulation" shown in blue (Figure 2C). Examining PC1 values across all author-selected categories revealed an ordering of fields from cell biology to informatics-related disciplines (Figure 2D). These results suggest that a primary driver of the variability in the language used in bioRxiv could be the divide between informatics and cell biology approaches. A similar analysis for PC2 suggested that neuroscience and bioinformatics present a similar language continuum (Figure 2E), supporting the notion that bioRxiv contains an influx of neuroscience- and bioinformatics-related research results. For both of the top two PCs, the submitter-selected category of systems biology was near the middle of the distribution and had a relatively large interquartile range compared with other categories (Figures 2D and 2E), suggesting that systems biology is a broader subfield containing both informatics and cellular biology approaches. Examining the top five and bottom five preprints along PC1 within the systems biology field reinforces PC1's dichotomous theme; the preprints at the cell biology extreme [72,73,74,75,76] were focused on cellular signaling and protein activity. We provide the rest of our 50 generated PCs in our online repository (see Software and Data Availability).

Figure 3: A. Preprints are closer in document embedding space to their corresponding peer-reviewed publications than they are to random papers published in the same journal. B. Potential preprint-publication pairs that are unannotated but within the 50th percentile of all annotated preprint-publication pairs in the document embedding space are likely to represent true preprint-publication pairs. We depict the fraction of true positives over the total number of pairs in each bin. Accuracy is derived from the curation of a randomized list of 200 potential pairs (50 per quantile) performed in duplicate, with a third rater used in cases of disagreement. C. Most preprints are eventually published. We show the publication rate of preprints since bioRxiv first began. The x-axis represents months since bioRxiv started, and the y-axis represents the proportion of preprints published given the month they were posted. The light blue line represents the publication rate previously estimated by Abdill et al. [13]. The dark blue line represents the updated publication rate using only CrossRef-derived annotations, while the dark green line includes annotations derived from our embedding space approach. The horizontal lines represent the overall proportion of preprints published as of the time of the annotation snapshot.

Distances between preprints and their corresponding published versions were nearly always lower than distances between preprints and random articles published in the same journal (Figure 3A). This suggests that embedding distances can identify documents with similar textual content.
Approximately 98% of our 200 candidate pairs with an embedding distance in the 0-25th and 25th-50th percentile bins were scored as true matches (Figure 3B). These two bins contained 1,542 preprint-article pairs, suggesting that many preprints may have been published but never connected with their published versions. There is a particular enrichment of published-but-unlinked preprints within the 2017-2018 interval (Figure 3C). We expected a higher proportion of such preprints before 2019 (many of the most recent preprints may simply not have been published yet); however, observing relatively few missed annotations before 2017 ran counter to our expectations. There are several possible explanations for this increasing fraction of missed annotations. As the number of preprints posted on bioRxiv grows, it may be harder for bioRxiv to establish links between preprints and their published counterparts simply due to the scale of the challenge. It is possible that the set of authors participating in the preprint ecosystem is changing and that new participants are less likely to report missed publications to bioRxiv. Finally, as familiarity with preprinting grows, authors may be posting preprints earlier in the process, and the metadata fields that bioRxiv uses to establish links may be less stable.

Figure 4: B. The y-axis indicates the number of days elapsed between the first version of a preprint posted on bioRxiv and the date at which the peer-reviewed publication appeared. The density of observations is depicted in the violin plot with an embedded boxplot. C. Preprints with more substantial text changes took longer to be published. The x-axis shows the Euclidean distance between the document representations of the first version of a preprint and its peer-reviewed form. The y-axis shows the number of days elapsed between the first version of a preprint posted on bioRxiv and its publication. The color bar on the right represents the density of each hexbin in this plot, with denser regions shown in a brighter color.

The process of peer review includes several steps, which take variable amounts of time [79], and we sought to measure whether publication time differs between author-selected categories of preprints (Figure 4A). Of the most abundant preprint categories, microbiology was the fastest to be published (140 days, (137, 145 days) [95% CI]) and genomics was the slowest (190 days, (185, 195 days) [95% CI]) (Figure 4A). We did observe category-specific differences; however, these differences were generally modest, suggesting that the peer review process does not differ dramatically between preprint categories. One exception was the scientific communication and education category, which took substantially longer to be peer-reviewed and published (373 days, (373, 398 days) [95% CI]). This hints that there may be differences in the publication or peer review process, or culture, that apply to preprints in this category. Examining peer review's effect on individual preprints, we found a positive correlation between the number of preprint versions and the time elapsed until publication (Figure 4B). Each new version added an additional 51 days before a preprint was published. This duration seems broadly compatible with the time it takes to receive reviews and revise a manuscript, suggesting that many authors may be updating their preprints in response to peer review or other external feedback.
The embedding space allows us to compare preprints with their published versions and determine whether the degree of change a document undergoes relates to the time it takes to be published. Distances in this space are arbitrary and must be compared to reference distances. We found that the average distance between two randomly selected papers from the bioinformatics category was 4.470, while the average distance between two randomly selected papers from all of bioRxiv was 5.343. Preprints with large embedding-space distances from their corresponding peer-reviewed publications took longer to publish (Figure 4C): each additional unit of distance corresponded to roughly forty-three additional days. Overall, our findings support a model in which preprints that are reviewed multiple times or require more extensive revisions take longer to publish.

We developed an online application that returns a list of the published papers and journals closest to a query preprint in document embedding space. This application uses two k-nearest-neighbor classifiers, which achieved better performance than our baseline model (Supplemental Figure S2), to identify these entities. Users supply our app with digital object identifiers (DOIs) from bioRxiv or medRxiv, and the corresponding preprint is downloaded from the repository. Next, the preprint's PDF is converted to text, and this text is used to construct a document embedding representation. This representation is supplied to our classifiers to generate a list of the ten papers and journals with the most similar representations in the embedding space (Figures 5A, 5B, and 5C). Furthermore, the requested preprint's location in this embedding space is displayed on our interactive map, and users can select regions to identify the terms most associated with them (Figures 5D and 5E). Users can also explore the terms associated with the top 50 PCs derived from the document embeddings and how those PCs vary across the document landscape.

Figure 5: A. Starting from the home screen, users can paste in a bioRxiv or medRxiv DOI, which sends a request to bioRxiv or medRxiv. Next, the app preprocesses the requested preprint and returns a listing of (B) the top ten most similar papers and (C) the ten closest journals. D. The app also displays the location of the query preprint within the PMC landscape. E. Users can select a square within the landscape to examine its statistics, including the top journals by article count in that square and the odds ratios of tokens.

The Preprints in Motion collection included a set of preprints posted during the first four months of 2020. We examined the extent to which preprints in this set were representative of the patterns we identified from our analysis of all of bioRxiv. As with all of bioRxiv, typesetting tokens changed between preprints and their paired publications, and our token-level analysis identified patterns consistent with our findings across bioRxiv (Figures 6A and 6B). However, in this set we also observed changes likely associated with the fast-moving nature of COVID-19 research: the token "2019-ncov" became less frequent while "sars" and "cov-2" became more frequent, likely due to the shift in nomenclature from "2019-nCoV" to "SARS-CoV-2". The Preprints in Motion set was not strongly colocalized in the linguistic landscape, suggesting that the collection covers a diverse set of research approaches (Figure 6C). Preprints in this collection were published faster than the broader set of bioRxiv preprints (Figures 6D and 6E).
The relationships between time to publication and the number of versions (Figure 6D) and between time to publication and the amount of linguistic change (Figure 6E) were both absent in the Preprints in Motion set. Our findings suggest that Preprints in Motion changed during publication in ways that align with changes in the full preprint set, but that peer review was accelerated in ways that broke the time dependences observed for bioRxiv as a whole.

bioRxiv is a constantly growing repository of life science preprints. The majority of research involving bioRxiv has focused on preprint metadata; the language contained within these preprints had not previously been systematically examined. Throughout this work, we analyzed the language within these preprints and how it changes in response to peer review. Our global corpora analysis found that writing within bioRxiv is consistent with the biomedical literature in the PMCOA repository, suggesting that bioRxiv is linguistically similar to PMCOA. Token-level analyses between bioRxiv and PMCOA suggested that the significant differences are driven by research fields; for example, patient-related research is more prevalent in PMCOA than in bioRxiv. This observation is expected, as preprints focused on medicine are supported by the complementary medRxiv repository [8]. Token-level analyses of preprints and their corresponding published versions suggest that peer review may focus on data availability and on incorporating extra sections into published papers; however, future studies are needed to ascertain individual token-level changes as preprints move through the publication process.

Document embeddings are a versatile way to examine the language contained within preprints, to understand peer review's effect on preprints, and to provide extra functionality for preprint repositories. Examining the linguistic variance within document embeddings of life science preprints revealed that the largest source of variability was informatics, a divide that bisects the majority of life science research categories that have integrated preprints into their publication workflows. Preprints are typically linked with their published articles via bioRxiv manually establishing links or via authors self-reporting that their preprint has been published; however, gaps can occur when preprints change in appearance across multiple versions or when authors do not notify bioRxiv. Our work suggests that document embeddings can help fill in these missing links. Furthermore, our analysis reveals that the publication rate for preprints is higher than previously estimated, even though our analysis can only account for published open-access papers. Our results raise the lower bound of the total preprint publication fraction; the true fraction is necessarily higher. Future work, especially work that aims to assess the fraction of preprints that are eventually published, should account for the possibility of missed annotations.

Publication times were generally similar across preprint categories; one exception is the scientific communication and education category, which contained preprints that took much longer to publish. Regarding individual preprints, each new version adds several weeks to a preprint's time to publication, which is roughly aligned with authors making changes after a round of peer review; furthermore, preprints that undergo substantial changes take longer to publish. Overall, these results illustrate that bioRxiv is a practical resource for gaining insight into the peer review process.
Lastly, we found that document embeddings were associated with the journal at which a work was eventually published. We trained two machine learning models to identify journals that publish papers linguistically similar to a query preprint. Our models achieved a considerably higher fold change over the baseline model, so we constructed a web application that makes our models available to the public and returns a list of the papers and journals that are linguistically similar to a bioRxiv or medRxiv preprint.

References

- Quantifying and contextualizing the impact of bioRxiv preprints through automated social media audience segmentation. Jedidiah Carlson.
- Tracking the popularity and outcomes of all bioRxiv preprints. Richard J Abdill.
- Kou Amano. Proceedings of the Association for Information Science and Technology.
- The relationship between bioRxiv preprints, citations and altmetrics.
- Releasing a preprint is associated with more attention and citations for the peer-reviewed article. Darwin Y Fu.
- An Exploratory Qualitative Study of Adoption, Practices, Drivers and Barriers. Andrea Chiarelli, Rob Johnson, Stephen Pinfield.
- The Need for Speed: How Quickly Do Preprints Become Published Articles? Rachel Herbert, Kate Gasson.
- Technical and social issues influencing the adoption of preprints in the life sciences. Naomi C. Penfold, Jessica K.
- Day-to-day discovery of preprint-publication links. Guillaume Cabanac, Theodora Oikonomidi.
- On the value of preprints: An early career researcher perspective. Sarvenaz Sarabipour.
- Preprints in motion: tracking changes between posting and journal publication.
- Textual Analysis in Accounting and Finance: A Survey. Tim Loughran.
- The textual characteristics of traditional and Open Access scientific journals are similar. Karin Verspoor, K Bretonnel Cohen.
- A survey on annotation tools for the biomedical literature. M. Neves, U. Leser. Briefings in Bioinformatics.
- PubTator central: automated concept annotation for biomedical full text articles. Chih-Hsuan Wei.
- Coreference annotation and resolution in the Colorado Richly Annotated Full Text (CRAFT) corpus of biomedical journal articles. K. Bretonnel Cohen, Arrick Lanfranchi.
- The structural and content aspects of abstracts versus bodies of full text journal articles are different. K Bretonnel Cohen.
- A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools.
- From POS tagging to dependency parsing for biomedical event extraction. Dat Quoc Nguyen.
- Efficient Estimation of Word Representations in Vector Space. Tomas.
- Distributed Representations of Sentences and Documents. Quoc V. Le, Tomas Mikolov. arXiv.
- The GenBank of the published literature. R.
- Gold open access: the best of both worlds. M. A. G. van der Heyden, T. A. B. van Veen. Netherlands Heart Journal.
- Author Manuscripts in PMC.
- CrossRef Text and Data Mining Services. Rachael Lammey. Insights: the UKSG journal.
- Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. Matthew Honnibal, Ines Montani (2017).
- Odds Ratio. Steven Tenny, Mary R. Hoffman. StatPearls.
- Software Framework for Topic Modelling with Large Corpora. Radim Řehůřek.
- On the Dimensionality of Word Embedding. Zi Yin, Yuanyuan Shen. arXiv.
- Machine learning in Python.
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. Nathan Halko, Per-Gunnar Martinsson.
- The Drosophila Cortactin Binding Protein 2 homolog, Nausicaa, regulates lamellipodial actin dynamics in a Cortactin-dependent manner. Applewhite. Cold Spring Harbor Laboratory.
- The Drosophila protein, Nausicaa, regulates lamellipodial actin dynamics in a Cortactin-dependent manner.
- CamDavidsonPilon/lifelines: v0 … Jlim13. Zenodo.
- Machine Learning in Python. Fabian Pedregosa, Gaël Varoquaux.
- Aune. Cold Spring Harbor Laboratory.
- Altmetric Scores, Citations, and Publication of Studies Posted as Preprints. Stylianos Serghiou.
- Efficient Vector Representation for Documents through Corruption. Minmin Chen.
- Document Network Projection in Pretrained Word Embedding Space. Antoine Gourru, Adrien Guille.
- Conditional Robust Calibration (CRC): a new computational Bayesian methodology for model parameters estimation and identifiability analysis. Fortunato Bianconi.
- Machine learning of stochastic gene network phenotypes. Kyemyung Park, Thorsten Prüstel.
- Notions of similarity for computational biology models. Ron Henkel.
- GpABC: a Julia package for approximate Bayesian computation with Gaussian process emulation. Evgeny Tankhilevich.
- SBpipe: a collection of pipelines for automating repetitive simulation and analysis tasks. Piero Dalle Pezze.
- Spatiotemporal proteomics uncovers cathepsin-dependent host cell death during bacterial infection. Joel Selkrig.
- Systems analysis by mass cytometry identifies susceptibility of latent HIV-infected T cells to targeting of p38 and mTOR pathways. Linda E. Fong, Victor L.
- NADPH consumption by L-cystine reduction creates a metabolic vulnerability upon glucose deprivation.
- Inhibition of Bruton's tyrosine kinase reduces NF-kB and NLRP3 inflammasome activity preventing insulin resistance and microvascular disease.
- AKT but not MYC promotes reactive oxygen species-mediated cell death in oxidative culture. Dongqing Zheng.
- FPtool: a software tool to obtain in silico genotype-phenotype signatures and fingerprints based on massive model simulations. Guido Santos.
- Bromodomain inhibition reveals FGF15/19 as a target of epigenetic regulation and metabolic control. Chisayo Kozuka, Vicencia Sales.
- Peer review and the publication process. Parveen Azam Ali.

The authors would like to thank Ariel Hippen Anderson for evaluating potential missing preprint-to-published-version links. We also thank Richard Sever and the bioRxiv team for their assistance with access to, and support with questions about, preprint full text downloaded from bioRxiv. This work was supported by grants from the Gordon and Betty Moore Foundation (GBMF4552) and the National Institutes of Health's National Human Genome Research Institute (NHGRI) under awards T32 HG00046 and R01 HG010067. Marvin Thielk receives a salary from Elsevier Inc., where he contributes NLP expertise to health content operations. Elsevier did not restrict the results or interpretations that could be published in this manuscript. The opinions expressed here do not reflect the official policy or positions of Elsevier Inc.

Figure S1: Neuroscience and bioinformatics are the two most common author-selected topics for bioRxiv preprints.

Figure S2: Both classifiers outperform the randomized baseline when predicting a paper's journal endpoint.
This bar graph shows each model's accuracy with respect to predictions on the training and test sets.