key: cord-0234830-aufhbrt2
authors: Hill, Ryan; Yin, Yian; Stein, Carolyn; Wang, Dashun; Jones, Benjamin F.
title: Adaptability and the Pivot Penalty in Science
date: 2021-07-14

The ability to confront new questions, opportunities, and challenges is of fundamental importance to human progress and the resilience of human societies, yet the capacity of science to meet new demands remains poorly understood. Here we deploy a new measurement framework to investigate the scientific response to the COVID-19 pandemic and the adaptability of science as a whole. We find that science rapidly shifted to engage COVID-19 following the advent of the virus, with scientists across all fields making large jumps from their prior research streams. However, this adaptive response reveals a pervasive "pivot penalty", where the impact of the new research steeply declines the further the scientists move from their prior work. The pivot penalty is severe amidst COVID-19 research, but it is not unique to COVID-19. Rather, it applies nearly universally across the sciences and has been growing in magnitude over the past five decades. While further features condition pivoting, including a scientist's career stage, prior expertise and impact, collaborative scale, the use of new coauthors, and funding, we find that the pivot penalty persists and remains substantial regardless of these features, suggesting that the pivot penalty acts as a fundamental friction that governs science's ability to adapt. The pivot penalty not only holds key implications for the design of the scientific system and the human capacity to confront emergent challenges through scientific advance, but may also be relevant to other social and economic systems, where shifting to meet new demands is central to survival and success.

To capture prior proximity to COVID-related work, we collect the references in all the COVID-19 articles and count the total citations that every scientist's pre-2020 work receives among this set (excluding self-citations); this allows us to define a measure for how COVID-relevant a given author's prior work has been. We find that a scientist's propensity to write a COVID-19 paper is strongly predicted by the relevance of his or her prior work (Fig. 1C). Among all authors who published in 2020, those who had not participated in producing this prior body of work had a 4.8% chance of writing a COVID-19 paper. By contrast, those who participated most substantially in producing the relevant corpus of prior work had a 50.1% probability of writing at least one COVID-19 paper, corresponding to a nearly eight-fold increase over the baseline rate. Fig. S2 considers alternative author proximity metrics, based on (i) title keyword similarity between COVID-19 papers and the scientist's pre-pandemic research or (ii) a simple count of the author's relevant prior papers, uncovering similar findings. Together, these results show that scientists are far more likely to write COVID-19 papers when their prior work is quantifiably more relevant. Yet at the same time, such already-proximate scientists are rare. The vast majority of scientists who published in 2020 (87.1%) have no prior papers in the reference corpus of COVID-19 work (Fig. 1C). Further, because this group is so large, these scientists collectively account for 67% of all researchers who wrote at least one COVID paper.
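To make the proximity measure concrete, the following is a minimal sketch of the citation-based proximity count under simple dictionary-style inputs; the data structures and field names are illustrative assumptions, not the study's actual schema.

```python
from collections import defaultdict

def covid_proximity(covid_papers, pre2020_authors):
    """Count, for each author, how often COVID-19 papers cite that author's
    pre-2020 work, excluding self-citations.

    covid_papers: iterable of dicts with 'authors' (set of author ids) and
        'references' (set of cited paper ids) -- illustrative field names.
    pre2020_authors: dict mapping pre-2020 paper id -> set of author ids.
    """
    proximity = defaultdict(int)
    for paper in covid_papers:
        citing_authors = paper["authors"]
        for ref_id in paper["references"]:
            # Skip references that fall outside the pre-2020 corpus
            for author in pre2020_authors.get(ref_id, ()):
                if author not in citing_authors:  # exclude self-citations
                    proximity[author] += 1
    return proximity
```

An author's propensity to write a COVID-19 paper can then be tabulated against this count, as in Fig. 1C.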
These findings suggest that, amidst a strong response from the scientific community, scientists were substantially adapting their research streams to engage COVID-19, prompting us to further examine the degree of scientists' pivots from their prior work and how successful these pivots are.

To quantify scientific pivots for individual scientists, both for COVID-19 papers and more generally in scientific research, we introduce a cosine-similarity metric (Fig. 2A) that measures the extent to which a given paper departs from a scientist's prior body of work. Specifically, for an author $i$ and a focal paper $j$, we calculate a vector $\vec{r}_j$ representing the distribution of journals referenced by $j$. Similarly, we count the frequency with which different journals are referenced in the union of $i$'s prior work, defining a vector $\vec{w}_i$. The pivot measure, $\Phi_{ij}$, is then defined as 1 minus the cosine of these two vectors:

$$\Phi_{ij} = 1 - \frac{\vec{r}_j \cdot \vec{w}_i}{\lVert \vec{r}_j \rVert \, \lVert \vec{w}_i \rVert}.$$

The measure $\Phi_{ij}$ thus takes the value 0 ("zero pivot") if the focal paper draws on the exact same distribution of journals as the author's prior work and takes the value 1 ("full pivot") if the focal paper draws entirely on a novel set of journals. The measure featured in the main text calculates pivoting in the focal paper compared to the prior three years of the author's work. We also calculate our measure using all prior work of a given author, arriving at similar results (see SM and Fig. S3).

We find that scientists who write COVID-19 related papers make unusually large pivots. Fig. 2B examines all 2020 papers and shows that pivot sizes, which in general tend to be widely dispersed, are substantially larger for COVID-19 papers than for other work. While fields inherently differ in their propensities to produce COVID-19 research (Fig. 1B, Fig. S1), we find that scientists in every field undertake unusually large pivots when writing COVID-19 related papers (Fig. 2D). Thus, uncommonly large individual pivots are a universal phenomenon of COVID-19 within all fields of science. Even in the most proximate fields, like medical microbiology, scientists are pivoting to an unusual degree. Indeed, we consider again scientists with relatively near or distant positions to the reference corpus of COVID-19 work (Fig. 1C). While pivot sizes decrease among scientists who were ex-ante closer to the COVID-19 corpus, as one would expect, even those most closely situated to COVID-19 exhibit unusually high pivot sizes compared to their own prior work (Fig. S4). Altogether, COVID-related papers represent an unusual degree of departure from the authors' usual work, which appears universally across scientific fields and even among authors whose prior research was most closely related to COVID-19.

As scientists shift into new areas, a central question is how impactful their work becomes. Consider first the 37 million papers published from 1970 to 2020, presenting general findings for all of science, both historically and today. To quantify impact, we calculate the paper-level hit rate, a binary indicator for whether a given paper was in the upper 5% of citations received within its field and year [36]. Fig. 3A reveals a striking and fundamental fact: larger pivots are systematically associated with lower impact. Indeed, we observe a large and monotonic decrease in the average hit rate as the pivot size rises. The lowest-pivot work is high impact 7.4 percent of the time, 48% higher than the baseline rate, whereas the highest-pivot work is high impact only 1.8 percent of the time, a 64% reduction from the baseline.
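As a concrete illustration, below is a minimal sketch of this pivot calculation, assuming journal-reference counts are available as simple dictionaries; the function and variable names are illustrative, not the authors' production code.

```python
import numpy as np

def pivot_size(focal_journal_counts, prior_journal_counts):
    """Pivot measure: 1 minus the cosine similarity between the journal
    distribution referenced by the focal paper and the journal distribution
    referenced by the author's prior work. Inputs are dicts mapping
    journal name -> reference count (an illustrative sketch)."""
    journals = sorted(set(focal_journal_counts) | set(prior_journal_counts))
    r = np.array([focal_journal_counts.get(j, 0) for j in journals], dtype=float)
    w = np.array([prior_journal_counts.get(j, 0) for j in journals], dtype=float)
    if r.sum() == 0 or w.sum() == 0:
        return np.nan  # undefined without references on both sides
    cosine = r.dot(w) / (np.linalg.norm(r) * np.linalg.norm(w))
    return 1.0 - cosine

# A paper citing the same journals as the prior work gives a near-zero pivot;
# a paper citing entirely new journals gives a full pivot of 1.0.
print(pivot_size({"Nature": 3, "Cell": 1}, {"Nature": 5, "Cell": 2}))  # ~0
print(pivot_size({"Lancet": 4}, {"Nature": 5, "Cell": 2}))             # 1.0
```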
Furthermore, quantifying this "pivot penalty" over time reveals another pervasive finding: the relationship between pivot size and impact has become increasingly negative over the past five decades (Fig. 3B). These findings generalize widely across science. Studying separately each of the 154 subfields, we find that the negative relationship between impact and pivot size holds for 93% of fields, and the increasing severity of the pivot penalty over time occurs in 88% of all scientific fields (Table S1). Thus, in science, larger pivots present a substantial and increasing penalty in the impact of the work. In balancing the tension between exploiting one's existing research streams and exploring new ones [31], these findings highlight the difficulty of venturing into new areas. The growing impact advantage of narrower research ambits is consistent with scientists becoming increasingly specialized as scientific knowledge deepens [16, 46] and heightens the general concern in science communities that research with wide reach, interdisciplinary character, or novel orientations, while perhaps highly valued, is challenging [14, 47-49].

In the context of this general and growing pivot penalty, we next consider papers published in 2020, separating them into COVID and non-COVID (red vs blue lines in Fig. 3C). Given that 2020 papers have had relatively little chance to receive citations [50], we feature a journal placement measure for impact, where each journal is assigned the historical hit rate of its publications within its field and year (see SM S2.2). We find that, overall, there is a large impact premium associated with COVID-19 papers, as reflected by the visible upward shift in journal placement, consistent with the extreme interest in the pandemic. At the same time, the negative relationship between pivoting and impact appears in both groups and is especially steep for COVID research. Thus, scientists who traveled further from their prior work to write COVID-19 papers were not immune to the pivot penalty; rather, they produced research with substantially less impact on average relative to low-pivot COVID papers. Crucially, the COVID impact premium is mostly offset by the unusually large pivots associated with COVID research. Comparing papers at the median pivot size within each group, the journal placement of COVID papers appears on par with non-COVID papers, despite the clear impact premium conferred upon COVID research. Indeed, the upper 45% of COVID-19 papers by pivot size turn out to have lower average journal placement than papers with less than or equal to the median pivot size among non-COVID work. We see a similar surge in interest, and penalty for high pivots, using citations to the specific papers (Fig. S5). Overall, we observe two extremely strong yet sharply contrasting relationships regarding impact. On the one hand, COVID-related work has experienced a large impact premium; on the other hand, greater pivot size markedly predicts less impactful work. These findings unearth a central tension in the adaptability of science: while engaging a high-demand area has value, pivoting exhibits offsetting penalties, as traversing larger distances from one's prior work predicts substantially less impactful outcomes.

The pivot penalty in Fig. 3 raises the question of which scientists engaged COVID-19 research. One common hypothesis [20, 22] posits that younger scientists will be more likely to engage novel research streams. Yet here we find that older scientists were disproportionately more likely to pivot into COVID research.
These results hold for a large majority of fields (Fig. S6A-B). Together, these results indicate that exceptionally high impact scientists were leaders in engaging COVID-19 research - yet much less so among the young, suggesting that science overall may be missing out on valuable contributions and perspectives. Indeed, while bringing established field leaders into the COVID-19 landscape might be seen as auspicious for science's response, these results suggest that emerging field leaders - younger cohorts who may have fresh ideas and otherwise help scale up the adaptive response - were substantially left out.

Teamwork may be an additional and critical feature for understanding adaptability. Indeed, not only are teams increasingly responsible for producing high-impact and novel research [28, 33, 36, 38], they can also aggregate individual expertise [16], extending the reach of individual scientists and promoting subject-matter flexibility [29, 54]. We find that team size has indeed been larger for COVID-19 papers than is typical in the respective field. Compared to field means, COVID-19 papers see 1.5 additional coauthors on average (a 28% increase in team size, Fig. S6C), supporting the broad importance of collaboration in science's response to the pandemic. Further, COVID-19 authors have worked to an unusual degree with new coauthors (Fig. 4B), rather than existing collaborators, engaging new coauthors at a much higher rate than (1) that same scientist did in prior years, (2) that same scientist did on their other 2020 papers, and (3) other matched, non-COVID-19 scientists do. Moreover, engaging new coauthors tends to be associated with larger pivots (Fig. S7A-B). Overall, these results are consistent with teamwork expanding reach [16, 39, 40]. Yet at the same time, these new collaborators mostly came from within the same institution.

We further probe science's adaptability through the lens of funding. Drawing on over 600 different funding sources worldwide, we measure the proportion of grant-supported research for all 2020 papers, separating them into COVID and non-COVID (red vs blue lines in Fig. 4C). There is a large and monotonic decrease in grant-supported research as pivot size increases for both COVID and non-COVID papers. And this relationship is especially pronounced for COVID-19 papers, which are substantially less likely to cite a funding source across all pivot sizes (Fig. 4C). Hence, while COVID papers are characterized by larger pivots, grant-supported research disproportionately features small pivots. These findings are natural to the extent that existing projects are tied to specific agendas, so that large pivots in general, and COVID-19 pivots in particular, tend to occur without acknowledging specific grants. Further examining COVID-specific funding mechanisms of the NIH and NSF, we find that COVID-specific grants were issued rapidly, peaking from late spring to early fall (Fig. S8). Yet at the same time, among the funded PIs who published a COVID-19 paper in 2020, 59% (NIH) and 32% (NSF) had published at least one COVID-19 paper before receiving the grant, suggesting that adaptive funding may still lag early adaptive research efforts. Overall, these results show that grant funding in science is associated with less pivoting, and appears to favor scientists engaged in consistent rather than adaptive research streams.
In the case of COVID-19, despite the adaptive efforts by funders such as the NSF and NIH, science's response came largely without project-specific funding, especially in the critical, early phase of the pandemic.

Together, Fig. 4 and Fig. S9 examine how these features condition the pivot penalty. We find that, regardless of the scientist's individual characteristics, funding status, or collaborative orientation, the pivot penalty persists. We further use regression methods to control for all these features together, finding that, net of all these features, the pivot penalty remains substantial in magnitude (Fig. 4D). Together, these results suggest that the pivot penalty acts as a fundamental friction that limits science's ability to adapt.

Science must regularly adapt to new opportunities and challenges. The COVID-19 pandemic has produced enormous demand for relevant scientific research [55] and provides a large-scale event with which to examine the adaptability of science. We find that science faces systematic challenges to pivoting research, which may not be easy to overcome. Although an extremely broad frontier of science engaged in COVID-related research, the highest-impact work came from those pre-situated closely to the subject, and even these individuals were pivoting to a personally unusual degree. COVID-19 authors tended to be high-impact scientists, but not young ones; they often worked in large teams and with novel coauthors, and they pivoted largely without project-specific funding. Most importantly, the response betrays a deep tension, where the further people pivot, the less impactful the work tends to be. This 'pivot penalty' appears not only among COVID-19 research but also across the sciences, and it has become increasingly pronounced over the past five decades.

Overall, the findings suggest that, despite the strong response by the scientific community to the pandemic, science faces fundamental constraints on its capacity to adapt, with potentially first-order policy implications. While funding systems and science institutions can develop programs to facilitate rapid response, as seen in this pandemic, funding may come with substantial lags. And during this pandemic, many researchers, including highly productive young researchers, stayed on the sidelines, suggesting untapped opportunities to create more forceful responses through policy and institutional design. Ultimately, however, the pre-positioning of scientists appears to be a fundamental constraint, which has implications for both emergencies and ordinary times. In an emergency context such as the coronavirus pandemic, the world is constrained by the scientific expertise and relatively proximate scientists already in place when the crisis hits. This suggests that success against the next pandemic or other emergency depends on the ability to scale and position science beforehand. Indeed, in Louis Pasteur's famous words, "chance favors only the prepared mind." Without pre-pandemic developments in scientific knowledge, including mRNA vaccines [56], and the pre-positioning of relevant human capital, the pandemic would likely have been still more costly. Outside the emergency context, these findings also challenge the notion of outsiders offering fresh insights, further emphasizing the importance of expertise and the accumulation of knowledge. Research on creative search has documented the benefits of exploration [31] and out-of-the-box thinking [22, 57, 58].
The pivot penalty suggests that, while individuals are indeed capable of traversing long distances, domain-specific expertise is increasingly crucial for producing impactful outcomes. This expertise advantage and the associated pivot penalty have implications for science institutions in their hiring and funding strategies in pursuit of generating high-impact work. They may further inform organizational survival; for example, leading businesses often fail to successfully exploit new technologies and must choose workforce strategies to facilitate adaptation [59-61]. Our findings have further implications for the development and nurturing of the scientific workforce as a whole. Portfolio theory points to diversified investments as a key tool to manage risk [62]. But the pivot penalty suggests that, unlike typical investments, adjustments to the science portfolio are governed by deep inertia [63]. From this perspective, investing in a large and diverse "bench" of scientists becomes essential from a risk management standpoint, advancing the capacity of humanity to respond effectively to developing threats and opportunities. Viewing science as a range of specialized endeavors with increasingly specialized researchers [16, 46], a "prepared mind" in science means a "collectively prepared mind." A diverse portfolio of high-scale investments can then play essential roles in both advancing human progress and seizing high social returns in ordinary times [22, 64] while also expanding human capacity to confront novel challenges. Science faces evolving demands from many areas - from artificial intelligence to genetic engineering to climate change - creating complex issues, risks, and urgency. This paper shows generally that pivoting research directions is both hard and costly, with scientists' pivots facing a growing impact penalty, which governs the adaptability of science. Given the uniqueness of the pandemic, it is notable that the pivot penalty applies not just to COVID-19 research but also across science as a whole.

Code availability
Code will be made freely available.

Publication data
Our primary publication dataset is based on Dimensions, a data product by Digital Science [1, 2]. Dimensions is one of the world's largest citation databases, including scientific publications from journals, conference proceedings, books and chapters, and preprint servers. Publication data are updated on a daily basis, allowing us to collect reference information in a timely manner. Here we collected all publications added to the Dimensions database before December 31, 2020 (116 million papers in total). For each paper we obtain its title, publishing venue, list of authors, affiliation(s), publication date, fields of study, references, number of citations received, and acknowledged funding information. We restrict our analysis to the 45.2 million papers with at least five references. We do this for two reasons. First, pivot size - our primary variable of interest - uses reference information as a proxy for a paper's knowledge sources. Second, this restriction helps filter out non-research articles and incomplete records. A manual check of a random sample of excluded papers shows that many are commentary or editorial pieces that cite very few references, while some others reflect a lack of reference data sharing between publishers and the database. Since our primary interest is papers closely related to the COVID-19 pandemic, we limit the keyword search for COVID-related terms to the title and abstract, yielding 95.5K COVID-related papers.
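As a minimal illustration of these processing steps, the sketch below assumes a pandas DataFrame layout and a partial, assumed keyword list; the study's full search-term list is not reproduced here.

```python
import pandas as pd

# Illustrative keyword list (assumed); the study's actual search terms may differ.
COVID_TERMS = ["covid-19", "sars-cov-2", "coronavirus"]

def prepare_corpus(papers: pd.DataFrame) -> pd.DataFrame:
    """Keep papers with at least five references and flag COVID-related papers
    via a keyword search over title and abstract.
    Assumed columns: 'n_references', 'title', 'abstract'."""
    papers = papers[papers["n_references"] >= 5].copy()
    text = (papers["title"].fillna("") + " " + papers["abstract"].fillna("")).str.lower()
    papers["is_covid"] = text.str.contains("|".join(COVID_TERMS), regex=True)
    return papers
```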
Author name disambiguation and affiliation disambiguation
One important yet challenging step in science of science studies is author name disambiguation. Dimensions has developed a systematic algorithm using both internal information derived from papers (e.g., affiliations and citations) and external author profile information (e.g., ORCID). Within our sample, 84.6% of records in the data are assigned author IDs. Dimensions has also mapped raw affiliation strings to GRID whenever possible, offering unique affiliation IDs for 75.1% of records in the data. In sections of our analysis that use author-level data, such as pivot size and new collaborators, the analysis is restricted to papers with at least one disambiguated author.

Field classification
Dimensions also implements classification approaches to assign fields under the FOR (Field of Research) system, a two-level field classification (22 level-0 fields and 154 level-1 fields). Across all of Dimensions, 94.3% and 88.6% of papers have at least one level-0 and level-1 field, respectively. The median number of level-0 (L0) fields per paper is 1, and the median number of level-1 (L1) fields per paper is 1. In analyses where we calculate field-specific means, such as Fig. 1B and 2D, we associate papers with each of the fields they belong to when a paper has multiple fields. For all other uses of field effects, including calculating citation percentiles or using field fixed effects in a regression, we group papers based on their distinct combination of fields.

Grant data
We also use scientific grant data from Dimensions. Dimensions collects over 5 million granted projects from over 600 funders across the world. For each project, the data include the project title, investigators, funder, funding amount, internal project number from the funder, and project duration. Name disambiguation for investigators and publication authors shares the same ID system in Dimensions, allowing us to examine the funding situation of each author. Here we focus on all grants with end dates no earlier than 2019 to approximate the set of recently funded investigators/authors. In addition, Dimensions combines funding and publication records as well as text mining of acknowledgement statements to infer whether a paper is supported by a funder or a specific grant.

For part of the funding analysis, we focus our attention on two major funding organizations in the United States: the National Science Foundation (NSF) and the National Institutes of Health (NIH). We look broadly at all papers and individuals funded by the NSF and NIH, as well as the papers and authors specifically funded by these organizations for COVID-19 research. One technical challenge here is that COVID-related grants may not be directly searchable following the approach in S1.1, as funders like the NIH distributed a large amount of COVID grant money as supplementary funding to existing projects, even when the original project does not appear directly related to COVID. To this end, we acquire the list of COVID grants from the official reporting systems of the NIH and NSF, i.e., querying NIH grants marked as "NIH COVID-19 Response" from NIH RePORTER and searching for "COVID" in NSF RAPID awards. We remove grants with start dates earlier than 2020 and link the rest to Dimensions grant data using internal grant numbers, yielding 1,375 NIH projects and 1,054 NSF projects.
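A minimal sketch of this linking step is shown below, assuming the award lists and grant records are available as flat files with an internal grant-number column; the file names and column names are assumptions, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical exports: COVID-specific awards from the funders' reporting
# systems and a grant-database extract keyed by internal grant number.
covid_awards = pd.read_csv("covid_awards.csv", parse_dates=["start_date"])
grants = pd.read_csv("grant_database.csv")

# Drop awards starting before 2020, then join on the internal grant number.
covid_awards = covid_awards[covid_awards["start_date"] >= "2020-01-01"]
linked = covid_awards.merge(grants, on="grant_number", how="inner",
                            suffixes=("_award", "_grant"))
print(len(linked), "COVID-specific awards linked via internal grant numbers")
```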
In analyses where we compare authors' 2020 papers to their prior research, we focus on a fixed set of established authors with pre-2020 publication records.

Our primary measure of paper impact is an indicator for whether an article is in the 95th percentile or higher of citations compared to articles published in the same year with the same L1 fields. We additionally measure impact in 2020 using journal placement rather than citations because of the limited time for citations to accrue to recently published papers. For the journal impact measure, we mirror the citation measure by calculating the share of papers in a given journal that reach the 95th percentile of citations (within its field and year), averaged over 2000-2019. This metric is intended to capture both the likelihood of becoming a hit paper and the authors' perception of a paper's impact and contribution based on journal placement.

Our primary measure of an author's prior proximity to COVID research counts the references in COVID papers to the scientist's pre-2020 papers. Specifically, we consider the references from all 2020 COVID papers to all pre-2020 papers and count the number of citations to papers by each disambiguated author in Dimensions (excluding self-citations). Alternatively, we count the number of an author's distinct pre-2020 papers that were cited at least once by a COVID paper (again excluding self-citations), which produces similar results.

One potential limitation of using COVID citations as a measure of proximity is that they are correlated with an author's prior impact; i.e., the authors may be especially highly cited in general, not just especially proximate to COVID topics. To account for this, we developed an alternative method to measure proximity based on title similarity. This second method is less susceptible to bias toward more prolific scholars, and we repeated our analyses, finding similar results. More specifically, to construct the alternative measure, we first parse the titles of all 2020 papers into individual words. Then, for each word $w$, we compute the relative frequency:

$$\mathrm{RelFreq}(w) = \frac{(\text{No. of COVID papers } w \text{ appears in} + 1)\,/\,(\text{No. of COVID papers})}{(\text{No. of non-COVID papers } w \text{ appears in} + 1)\,/\,(\text{No. of non-COVID papers})}.$$

High values of RelFreq imply that the word is more commonly found in COVID paper titles than in non-COVID paper titles. We restrict to words that appear in the titles of at least 20 COVID papers. From here, we further restrict to the top 500 words according to this relative frequency measure. To further ensure the quality of these keywords, we also go through the final word list by hand, allowing us to identify words that took on a different meaning due to COVID. For example, "princess" appears as a common COVID word because of the "Diamond Princess" cruise ship. However, prior to COVID it had no relation to coronaviruses, so we remove it. We use this modified word list to then "score" paper titles. For every paper title, we look up the relative frequency measure for each word. The paper's score is then the average across all the title words. We then aggregate up to the author level by averaging the scores of all of an author's pre-2020 papers. This title-based approach provides an alternative to the citation-based approach for assessing the relative proximity of scientists to COVID-19 subject matter in their research prior to 2020, further indicating the robustness of our results (Fig. S2B).
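The following is a minimal sketch of the relative-frequency construction and title scoring described above; the tokenization details and the restriction of scoring to the keyword list are simplifying assumptions.

```python
import re
from collections import Counter

def relative_frequency(covid_titles, noncovid_titles, min_covid_papers=20, top_k=500):
    """Smoothed relative frequency of each title word in COVID vs non-COVID
    papers, keeping the top-scoring words (a sketch of the measure above)."""
    def doc_freq(titles):
        counts = Counter()
        for title in titles:
            counts.update(set(re.findall(r"[a-z]+", title.lower())))
        return counts

    covid_df, noncovid_df = doc_freq(covid_titles), doc_freq(noncovid_titles)
    n_covid, n_noncovid = len(covid_titles), len(noncovid_titles)
    scores = {}
    for word, c in covid_df.items():
        if c < min_covid_papers:
            continue  # require the word in at least 20 COVID paper titles
        scores[word] = ((c + 1) / n_covid) / ((noncovid_df.get(word, 0) + 1) / n_noncovid)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k])

def score_title(title, word_scores):
    """Score a title as the average RelFreq of its words; words outside the
    keyword list are ignored in this simplified sketch."""
    words = [w for w in re.findall(r"[a-z]+", title.lower()) if w in word_scores]
    return sum(word_scores[w] for w in words) / len(words) if words else 0.0
```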
To track a scientist's engagement with new collaborators over time, we first construct a set of collaborators for each author-paper pair, tracking coauthorship interactions among all the disambiguated authors in the established author set (see S2.1). We then sort all publications in one's career by publication date and sequentially calculate the number of new collaborators on each paper. We focus only on established authors in order to measure interactions between authors with independent research portfolios, rather than new graduate students or post-docs without a research track record. To further understand the characteristics of these new coauthors, we calculate their major field of research (both level-0 and level-1) before the paper's publication year. For this analysis, we require the focal author and the collaborator to have at least 5 previous publications. We also compare the affiliation information of new collaboration pairs (based on disambiguated GRID IDs) to see if the focal author and collaborator share at least one common affiliation. Together, these measurements allow us to count the number of new collaborators coming from the same or a different field or affiliation. If either the focal author or the collaborator has missing field or affiliation data, the pair is marked "unknown" and excluded from the same/different categorization.

To examine relationships between two variables (Fig. 3A-C and Fig. 4A,C,D), we use a binned scatterplot [5] to show correlations. The advantage of a binned scatterplot is that it presents statistical relationships without imposing a linear or other functional form on the data. In Fig. 3A, for example, we order the sample of papers by average pivot size along the x-axis and split the observations into 20 evenly sized groups. Each marker is then placed at the mean (x, y) value within its group. Similarly, in Fig. 3B, the set of papers from each decade are separately binned into 20 groups according to average pivot size and the mean citation hit rate is plotted for each bin.

In specific cases (Fig. 3C, 4C, and 4D), we use regression methods to present the binned scatterplot net of control variables. For example, in Fig. 3C we are interested in the relationship between pivot size and impact, holding the scientific field fixed. To implement this analysis, we residualize both our pivot size and impact measure (95th percentile citations) as follows [6, 7]. We first estimate the following OLS models:

$$\mathrm{Pivot}_j = \alpha_P + F_j' \beta_P + \epsilon_{P,j}, \qquad \mathrm{Impact}_j = \alpha_I + F_j' \beta_I + \epsilon_{I,j},$$

where $F_j$ is a vector of L1 field fixed effects and $\epsilon_{P,j}$ and $\epsilon_{I,j}$ are idiosyncratic error terms. We use the estimated parameter values $\hat{\alpha}_P$, $\hat{\alpha}_I$, $\hat{\beta}_P$, and $\hat{\beta}_I$ to residualize our pivot size and impact measures:

$$\widetilde{\mathrm{Pivot}}_j = \mathrm{Pivot}_j - \hat{\alpha}_P - F_j' \hat{\beta}_P, \qquad \widetilde{\mathrm{Impact}}_j = \mathrm{Impact}_j - \hat{\alpha}_I - F_j' \hat{\beta}_I.$$

We then apply the binned scatterplot technique to the residualized data (adding back the mean values of pivot size and impact to properly scale the plots). In Fig. 3C, we perform this residualization separately for COVID papers and non-COVID papers and plot the resulting relationships on the same axes. In Fig. 4C, we replace the impact measure as the dependent variable with a binary indicator for whether the paper acknowledges a grant. For the wide-ranging multivariate regression results presented in Fig. 4D, we follow the same residualization procedure but add further controls to the F vector. The additional controls include fixed effects for average prior impact groups, author age groups, team size, the number of new collaborators, and an indicator variable for whether the paper was funded.
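A minimal sketch of this residualized binned-scatterplot construction is shown below, assuming a pandas DataFrame with illustrative column names ('pivot', 'impact', and categorical fixed-effect columns); it is an approximation of the procedure above, not the authors' code.

```python
import pandas as pd

def binned_scatter(df, x="pivot", y="impact", fe_cols=None, n_bins=20):
    """Residualized binned scatterplot: partial out categorical fixed effects
    from both x and y (within-group demeaning, equivalent to OLS on the group
    dummies), add back the sample means, then average within equal-sized bins."""
    df = df.copy()
    if fe_cols:
        for col in (x, y):
            group_mean = df.groupby(fe_cols)[col].transform("mean")
            df[col] = df[col] - group_mean + df[col].mean()  # residualize, re-center
    # Split observations into n_bins equal-sized groups by (residualized) x
    df["bin"] = pd.qcut(df[x].rank(method="first"), n_bins, labels=False)
    return df.groupby("bin")[[x, y]].mean()

# Usage: binned_scatter(papers, fe_cols=["field"]) yields the 20 (x, y) points
# plotted in a figure like Fig. 3C, net of field fixed effects.
```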
In Fig. S9A-D, we split the COVID and non-COVID samples into groups based on the median prior impact, career age, team size, and number of new collaborators. In Fig. S9E, we split the sample based on whether the paper was connected to a funding source. The four series in each plot follow the same residualization approach used in Fig. 3C, again controlling for field fixed effects.

Figure S2: Propensity to write a COVID-19 paper based on prior work. (A) This figure plots the probability that an author wrote a COVID paper against the number of papers by that author that were cited by at least one COVID paper (excluding self-citations). (B) This figure plots the probability of writing a COVID paper by topic proximity as measured by title similarity (S2.3). The inset plots show the fraction of authors that fall into three ranges of the relevant prior proximity score.

Figure S3: Pivot size measured against an author's full publication history. In the main text, we measure pivot size by comparing the author's focal paper with that author's prior three years of work. Here we examine pivot size using the entire history of that author's work. (A) The large shift in pivot size for COVID papers is evident when pivot size is measured by comparing 2020 papers to all past work. This shift is comparable to Fig. 2B, where pivot size is measured using only papers published in the prior 3 years. (B) The negative relationship between pivot size and impact is similar in slope when using the full career pivot metric here or the 3-year metric as shown in Fig. 3C.

Figure S8: Timing of COVID-specific grants and grantees' first COVID-19 publications. (A) The dates of NIH COVID-specific grant receipt and the date of the first paper publication for the authors that received these grants. (B) The dates of NSF RAPID COVID-specific grant receipt and the date of the first paper publication for the authors that received these grants. The NIH grants came later in the year than the NSF grants, with principal investigators often publishing their first COVID-19 papers earlier in the year.

Figure S9: The pivot penalty is robust across many team and paper characteristics. Following Fig. 3C in the main text, this figure further separates COVID and non-COVID papers into sub-groups. (A) Prior Impact divides papers above and below the median value of author impact, averaged across the coauthors in the team. (B) Age divides papers above and below the median average career age of the authors. (C) Team Size divides papers above and below the median team size. (D) New Teams divides papers into "old" and "new" teams. New teams are defined as those where at least one co-author pair had not appeared on any papers prior to 2020. We focus this definition on established authors with at least 5 prior papers, with the sample restricted to teams of 2 to 10 co-authors. (E) Funded Paper divides papers based on whether the publication was linked to any funding source as defined in S1.3.

Table S1: Pivot-impact relationship by field. This table shows that a large majority of fields exhibit negative relationships between pivot size and impact. Further, this relationship is becoming more negative over time. In the 2000-2019 rows, impact is measured as an indicator for being in the 95th percentile of citations by year and field. In the 2020 rows, impact is measured as journal hit rate, or the probability that a paper will reach the 95th percentile of citations based on journal placement. In all rows, only fields with at least 20 papers are included in the share, with the number of qualifying fields listed for each row.
In Panel A, the sign of the relationship is estimated within each field using a linear regression of impact on pivot size. In Panel B, we add to the field-specific regressions an interaction between pivot size and year to estimate the change in slope over time.

References
Extinction risk from climate change
Adapt: Why success always starts with failure
Dynamic capabilities and strategic management. Strategic Management Journal
Collapse: How societies choose to fail or succeed
Science, the Endless Frontier: A report to the President
The Economics of Crisis Innovation Policy: A Historical Perspective
The dual frontier: Patented inventions and prior scientific advance
What is societal impact of research and how can it be assessed? A literature survey
Team assembly mechanisms determine collaboration network structure and team performance
Grand challenges and great opportunities in science, technology, and public policy. Science
A brief history of synthetic biology
COVID-19: a new challenge for human beings
Managing the risks of extreme events and disasters to advance climate change adaptation: special report of the Intergovernmental Panel on Climate Change
Topic evolution, disruption and resilience in early COVID-19 research
The burden of knowledge and the "death of the renaissance man": Is innovation getting harder? The Review of Economic Studies
Incentives and creativity: evidence from the academic life sciences
How to improve peer review at NIH: a revamped process will engender innovative research. The Scientist
The structure of scientific revolutions
Planck's principle. Science
A simple model of herd behavior. The Quarterly Journal of Economics
Does science advance one funeral at a time?
As science evolves, how can science policy?, in Innovation Policy and the Economy
Quantifying patterns of research-interest evolution
Tradition and innovation in scientists' research strategies
COVID-19 research risks ignoring important host genes due to pre-established research patterns. eLife, 2020
The elasticity of science
Atypical combinations and scientific impact
Creativity in scientific teams: Unpacking novelty and impact. Research Policy
Measuring technological innovation over the long run
Exploration and exploitation in organizational learning. Organization Science
Beyond local search: boundary-spanning, exploration, and impact in the optical disk industry. Strategic Management Journal
Increasing trend of scientists to switch between topics
Recombinant uncertainty in technological search. Management Science
Understanding the onset of hot streaks across artistic, cultural, and scientific careers
The increasing dominance of teams in production of knowledge
Multi-university research teams: shifting impact, geography, and stratification in science. Science
Large teams develop and small teams disrupt science and technology
Why and Wherefore of Increased Scientific Collaboration
Fresh teams are associated with original and multidisciplinary research
Scientific teams and institutional collaborations: Evidence from US universities
Coevolution of policy and science during the pandemic
COVID-19 publications: Database coverage, citations, readers, tweets, news, Facebook walls, Reddit posts. Quantitative Science Studies
How a torrent of COVID science changed research publishing - in seven charts
Consolidation in a crisis: Patterns of international collaboration in early COVID-19 research
The World as I See It
Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond
Institute of Medicine, Facilitating Interdisciplinary Research
Quantifying long-term scientific impact
The science of science
Toward a more scientific science
Age dynamics in scientific creativity
Beat COVID-19 through innovation
The story behind COVID-19 vaccines
Old masters and young geniuses: The two life cycles of artistic creativity
Age and scientific genius
The innovator's dilemma: when new technologies cause great firms to fail
Ambidextrous organizations: Managing evolutionary and revolutionary change. California Management Review
Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms
Foundations of portfolio theory. The Journal of Finance
Does government R&D policy mainly benefit scientists and engineers? The American Economic Review
A Calculation of the Social Returns to Innovation
Immigration and Entrepreneurship in the United States
Lockdowns and Innovation: Evidence from the 1918 Flu Pandemic
The product space conditions the development of nations

Supplementary references
Dimensions COVID-19 publications, datasets and clinical trials
Dimensions: Bringing down barriers between scientometricians and data. Quantitative Science Studies
Large-scale comparison of bibliographic data sources: Scopus, Web of Science, Dimensions, Crossref, and Microsoft Academic
Coevolution of policy and science during the pandemic
Binned Scatterplots: Introducing binscatter and exploring its applications
Partial time regressions as compared with individual trends
Seasonal adjustment of economic time series and multiple regression analysis