key: cord-0273357-g0i16wp9
authors: Krebs, Catharine E.; Lam, Ann; McCarthy, Janine; Constantino, Helder; Sullivan, Kristie
title: Animal-reliance bias in publishing is a potential barrier to scientific progress
date: 2022-03-27
journal: bioRxiv
DOI: 10.1101/2022.03.24.485684
sha: ded0208a67cc995b6369ed0780273fea31d19057
doc_id: 273357
cord_uid: g0i16wp9

Publication of scientific findings is a fundamental part of the research process that pushes innovation and generates policies and interventions that benefit society, but it is not without biases. Publication bias is generally recognized as a journal's preference for publishing studies based on the direction and magnitude of results. However, early evidence of a new type of bias related to publication has emerged in which editorial policy and/or peer reviewers or editors request that animal data be provided to validate studies produced using nonanimal, human biology-based approaches. We describe herein "animal-reliance bias" in publishing: a reliance on or preference for animal-based methods where they may not be necessary or where nonanimal-based methods may be suitable, which affects the likelihood of a manuscript being accepted for publication. To gather evidence of the occurrence of this emerging type of publication bias, we set out to collect the experiences and perceptions of scientists and reviewers related to animal- and nonanimal-based experiments during peer review. To this end, we created a survey with 33 questions that was completed by 90 respondents working in several areas of the biological and biomedical fields. Twenty-one survey respondents indicated that they have carried out animal-based experiments for the sole purpose of anticipating reviewer requests. Thirty-one survey respondents indicated that they have been asked by peer reviewers to add animal experimental data to their nonanimal study; 14 of these felt that it was sometimes justified, and 11 did not think the request was justified. The data presented strongly support the occurrence of animal-reliance bias and indicate that status quo and conservatism biases may explain such attitudes among peer reviewers and editors. To our knowledge, this is the first study to recognize the existence of animal-reliance bias in publishing, and it provides the basis for developing strategies to prevent this practice.
Publication of scientific results is a necessary step in the dissemination and implementation of biomedical advances and plays a central role in the progress of researchers' careers. It is a fundamental part of the research process that promotes knowledge sharing, pushes innovation, and contributes to the development of research standards and interventions that benefit society. However, scientific publication in the biological and biomedical fields has been greatly affected by practices such as publication bias, which is currently most recognized as a journal's preference for publishing studies based on the direction and magnitude of results, while disfavoring those without statistical significance [1].

More recently, early evidence of what may be a new type of publication bias in the biomedical and health sciences has emerged. This type of bias occurs when the likelihood of a study being published is affected by the methods used. For instance, a 2018 preprint of a study using human airway epithelial organoids and resected tissue from patients with cystic fibrosis included no animal-based experiments [2]. However, in the accepted version of the same study published in the EMBO Journal, additional animal-based experiments were presented in which tumor airway organoids were transplanted into immunocompromised mice [3]. According to Hans Clevers, the senior author of the study and research group leader at Utrecht University in the Netherlands, de novo animal data were produced and added to the new version of the manuscript because of requests by peer reviewers [4]. Since then, Donald Ingber at the Wyss Institute has written a perspective article asking: "Is it Time for Reviewer 3 to Request Human Organ Chip Experiments Instead of Animal Validation Studies?" [5]. In this article, the author questions why animal data are still considered the gold standard by reviewers in biomedical research while presenting evidence that organ chips may better suit this purpose. This question reinforces the anecdote of the airway organoid study described above, revealing a publication bias caused by a status quo reliance on animal-based methods: that is, bias created by editorial policy, peer reviewers, or editors insisting on the inclusion of animal-based evidence, sometimes as a condition for acceptance of the manuscript. In this paper, we examine this phenomenon, which we term "animal-reliance bias." Specifically, animal-reliance bias denotes a reliance on or preference for animal-based methods despite the availability of potentially suitable nonanimal-based methods.
Animal-reliance bias may occur during the review of grant applications as well, but the current study focuses on animal-reliance bias in publishing.

Nonanimal-based methods have advanced a great deal in the past decade, addressing the limitations of animal-based methods with their ability to replicate the molecular, cellular, and physiological mechanisms of human diseases more reliably and to translate preclinical findings into safe and effective treatments for patients in a more predictive manner. Human organoids, or stem cell-derived 3D culture systems, can replicate the structure and function of human organs [6] and have been used to model many human organ systems, including the brain [7], lung [8], and intestines [9]. Organoids have been used in a variety of applications, such as investigating infectious diseases, genetic disorders, and cancer, and to enable precision medicine [6]. Similarly, organ chips are composed of human stem or primary cells, but the cells are situated in microfluidic systems that allow fluid flow to mimic dynamic processes like circulation and tissue-tissue interfaces [5]. Nonanimal-based methods like organoids and organ chips are powerful tools that can replace the use of animals in many applications, but there remain many barriers to their uptake, including biases like the one described here. Thus, animal-reliance bias may have far-reaching consequences, including (i) the continued use of animal-based research methods despite their unreliability in modeling diseases and predicting success in clinical trials, (ii) stifled use and further development of nonanimal methods, despite purported commitments to the principles of the 3Rs, and (iii) misdirected prioritization of federal research dollars. There is therefore a scientific and moral imperative to address this bias and mitigate its consequences for researchers and patients.

Although anecdotal accounts of animal-reliance bias have been described, there is not yet, to our knowledge, a published study that describes or presents evidence of animal-reliance bias in publishing. To begin investigating this issue systematically, we designed a survey study. The aim was to understand the experiences and perceptions of scientists and reviewers related to animal-reliance bias in publishing. To this end, we created a survey and shared it through social media and our private channels. This paper presents evidence indicating that animal-reliance bias in publishing does exist, and we discuss its many implications and present solutions to address it.

There were a total of 90 respondents. Nine respondents indicated that they had zero peer-reviewed publications, and an additional 13 respondents failed to complete the survey questions about their research and experiences in publishing peer review. These respondents were excluded from analysis, resulting in a sample size of 68 respondents (Table 1). There was a broad distribution across research fields. The fields that were most represented were medicine and clinical research (28%), molecular and cellular biology (21%), neuroscience (15%), pharmacology (16%), and toxicology (19%; S1 Table 1). The geographic representation of respondents was not evenly distributed. The largest number of respondents primarily worked in the United States (32%), followed by South Korea (15%; S1 Table 1). Most respondents (75%) worked in academia or for a research institution (Table 1). There were slightly more male respondents (51%; Table 1).
Most respondents held a research doctorate as their highest earned degree (76%; Table 1). To assess the level of seniority and extent of experience in peer review, respondents were asked how many years it had been since they received their highest earned degree and how many peer-reviewed publications they had, both of which showed fairly even distributions (Table 1).

To help understand respondents' internal biases related to the nature of the research methods they regularly use, respondents were asked about the animal and nonanimal methods used in their research. Respondents' answers were slightly skewed toward using nonanimal research models, samples, data, or materials (68%) versus animal research models, samples, data, or materials (56%; Table 2). Asked another way, 44% of respondents indicated that they never use animal experiments, while 56% indicated that they use animals at least rarely (Table 2). These data indicate a possible bias in the perceptions of respondents' experiences related to requests for animal experiments during manuscript review. When asked how often they use animal experiments for the sole purpose of anticipating reviewer requests for them, most respondents (69%) indicated that they never do this (Table 2). However, eight respondents indicated that they do this rarely, nine do this sometimes, and four do this often, indicating that journals' perceived reliance on animal evidence may contribute to the performance of potentially unnecessary experiments.

To assess researcher experiences during manuscript peer review regarding requests for animal experiments, survey prompts were provided regarding requests by reviewers to add animal experimental data to a nonanimal study. The sample size (n=68) was too small to determine the extent to which this occurs. As such, we consider the answers to these questions as indicators of the presence of animal-reliance bias rather than a quantification of the frequency of occurrence or of the proportion of researchers to whom this happens. Thirty-two respondents indicated that they have never been asked for inclusion of animal data, whereas 31 respondents indicated that this has happened at least one time (Fig 1).

Fig 1. Responses to question 12: "During manuscript submission peer review, how many times have you been asked for animal experimental data to be added to a study that otherwise had no animal-based experiments?" Thirty-two respondents indicated that they have never been asked for inclusion of animal data; 31 respondents indicated that this has happened at least one time. The percent of respondents answering in each category is on top of each bar (total N=68).

Of the 31 respondents who indicated that additional animal-based data was requested, only three thought that the request was justified, 14 felt that it was sometimes justified, and 11 did not think the request was justified. Three respondents did not provide an answer to this question. When asked to elaborate on their answers in an open-ended prompt, further information was gained about researcher experiences (Box 1). These respondents were then asked if they complied with the request for additional animal-based experiments to be added to a study that otherwise had no animal experiments and to elaborate on their answers, to which 14 indicated that they did not comply, nine indicated that they sometimes complied, and five indicated that they did comply (Box 2).
Box 1. Responses to question 14: open-ended elaborations to question 13, "Did you feel the requested additional animal-based experiments were justified?"; grouped by whether respondents felt it was justified.

Animal models are seen as a way to validate in vitro data. This is [ironic].
The study was about heterogeneity of cancer cells from human tissue samples. It was irrelevant to do an experiment on mice.
The reviewer simply claimed that the human cellular models we used were not useful models to study viral infection, implying that only animal models would be acceptable.
Referees ask for animal experiments because it is customary to do so in the field of biophysics, toxicology not because it is necessary.
Many researchers are unaware about the potential of in vitro, in silico methods and human based models.
The animal [studies] that were asked were to only [emphasize] the results we obtained in vitro. [N]o other reason.
[M]erely requested to confirm in vitro data, no added value.
The need for validation of human organoid data with animal studies... just because journal reviewers were used to this.
They wanted human in vitro data to be "validated" against an animal model.
We elucidated a [mechanism of action] using only gene expression data from hepatocytes in vitro. We were asked to perform the same in vivo, to confirm the findings.
Sometimes the reviewers identify critical gaps in knowledge, these are valuable peer reviews. Other times it seems like they ask for animals out of habit. We refuse.
This is even more difficult and hard to deal with when it comes to grant reviews.
Animal based models are not always the best in specific research.
In cancer research some level of animal data is sometimes required.
In some circumstances the added information would indeed provide extra validation, but it was not essential.
Some ethical and social limitations can be overcome and long-term complete observation can be made.

Box 2. Responses to question 16: open-ended elaborations to question 15, "Did you comply with the request(s)?"; grouped by whether respondents complied.

With limited time/money along with ethical concern, request was rejected and paper was withdrawn.
We submit[ed] in a different journal.
Reviewer three said words to the effect of "there are no mouse experiments in this study therefore I reject it for this journal." They said nothing about the rest of the study. I submitted to a different journal.
We submitted the manuscript to another journal.
Submitted to another journal as we [don't] have an animal breeding facility as we comply with 3R.
My group is purely computational. We wrote a rebuttal to the editor, no success.
Submitted elsewhere (high impact) accepted within 2 months.
Justified why I shouldn't have to.
The paper was withdrawn and used only as chapter in my PhD thesis.

Respondents who indicated that they complied with the request(s) were then asked who performed the additional animal experiments, to which five indicated that their own lab(s) or lab(s) of coauthors did, two indicated that additional collaborators were brought in, and five indicated that some combination of their own labs and additional collaborators performed the experiments. When asked to elaborate on these answers, some respondents indicated that coauthors or members of their own lab had access to animal facilities, and one respondent indicated that a lab member had "experience with performing in vivo procedure" (Box 3).

We have IACUCs in limited circumstances.
If we need to do the work, we do it; although replacement is important, we're not at the level of 100% replacement yet.
A coauthor had this facility.
A member of my lab had access to animal facility and experience with performing the in vivo procedure.
I do not have an animal lab. Sometimes they are performed by the coauthors.

Respondents who indicated that they did not comply with the request(s) for additional animal experimental evidence were then asked if they have ever had a manuscript rejected because they did not comply. Nine respondents indicated that they have not had a manuscript rejected for not complying with a request for animal experiments, and eight indicated that they weren't sure, but nine indicated that they have. Elaborations on these answers are provided in Box 4. Finally, respondents were asked in an open-ended prompt to provide any additional details about their experiences being asked for additional evidence during manuscript review (Box 5).

Box 4. Responses to question 20: open-ended elaborations to question 19, "Have you ever had a manuscript rejected because you did not comply with a request for animal experimental data to be added to a study that otherwise had no animal-based experiments?"; grouped by whether this has happened to them.

I just submitted to other journals if animal studies were requested.
Cannot remember a case with over 400 publications.
[Normally], when in vivo data were provided, articles were accepted.
Usually, that's not only the reason.

Box 5. Additional details shared by respondents about their experiences during manuscript review.

... to not do animal experiments.
One scientist at a conference proudly announced they had used 2,000 mice in one paper. I went to another seminar where the whole talk was about a virus study in primates; at the end they showed that the effect in primates was totally different to that in humans, therefore negating their entire study.

Fifty-seven respondents answered whether they serve as manuscript reviewers (several had dropped out of the survey by this point); six indicated that they never serve as a manuscript reviewer for scientific journals, and the remaining 51 were routed to a series of questions specifically for reviewers (S1 Fig). These respondents were then asked, in their capacity as a reviewer for a scientific journal, how often they requested animal experimental data to be added to a study that otherwise had no animal-based experiments. Of these 51 respondents, 36 said they never made these requests, but 15 indicated that they did (Fig 2). We then asked these 15 respondents for what reasons they made these requests, to which one responded due to a "Request from editor," six responded due to "Preference for animal-based methods," and seven responded because they were "Unaware of nonanimal or alternative methods for the given hypothesis." Respondents were asked to elaborate on the previous answer (Box 6). Finally, respondents had the opportunity to provide any additional thoughts as reviewers they wanted to share (Box 7).

Fig 2. Responses to question 26: "In your capacity as a reviewer for a scientific journal, how often do you request animal experimental data to be added to a study that otherwise had no animal-based experiments?" Thirty-six respondents indicated that they never made these requests; 15 indicated that they did, at frequencies ranging from less than once per year to more often. The percent of respondents answering in each category is on top of each bar (total N=51).

My main request for additional data is when controls are missing; this seems to be the most common missing thing these days. I would say it is more of a problem than it was ~20 years ago as people rush to publish.
In some cases the original animal experiment was not conducted properly or without the right controls. Then I would make suggestions.
If I am asked to review a paper that is entirely based on mouse models for example, I will probably not accept to review this because I don't work with mouse models myself. If there are lots of other molecular and cellular experiments with a few mouse experiments, then I would accept that paper to review.
Often researchers are not aware of newly developed methods. As referee it is important to share info.
I still didn't have to ask for additional experiments to improve a manuscript, I don't have that much experience being both principal investigator or reviewer.
I don't like animal experiments and consider most of them unnecessary, so it is very unlikely that I would request additional experiments with animals for a publication.
The type of data I have requested is better or additional statistical analysis.
I get data since rationale is there. Reproducibility and real time data is required for drugs.
Sometimes additional experiments are needed to gather further mechanistic understanding of described results, and nonanimal experiments serve this purpose.

In this study, we set out to investigate the occurrence of animal-reliance bias in publishing through a survey initially completed by 90 respondents working in several areas of the biological and biomedical fields. The data collected indicate that animal-reliance bias in publishing does occur. Here we discuss the potential causes and implications of this type of bias and possible solutions to address it.

To our knowledge, there was no prior specific method or strategy to investigate the occurrence and frequency of animal-reliance bias in publishing. Thus, using the SurveyMonkey platform, we designed a survey called "Survey to Assess Journal and Reviewer Requests for Evidence in Animals." Based on our survey, we cannot tell how frequently this type of bias occurs because our sample size of 90 was too small to conduct statistical tests. Furthermore, it is possible that our dissemination strategy may have led to a nonrepresentative sample comprised of researchers who would like to avoid using animals in research. Indeed, the survey was partially disseminated through our professional networks and social media channels, which predominantly include researchers who preferentially or exclusively use nonanimal methods. Thus, a larger survey that is sufficiently powered and representatively sampled will be necessary to further elucidate the animal-reliance bias phenomenon.

The survey asked a series of questions aimed at understanding the experiences and perceptions of both authors and reviewers during the manuscript submission process. A total of 21 respondents said they have carried out animal-based experiments for the sole purpose of anticipating reviewer requests for them (Table 2). A total of 31 respondents said they have been asked at least once for animal experimental data to be added to a study that otherwise had no animal-based experiments (Fig 1). A total of 15 respondents said that, in their capacity as reviewers, they have made such requests, several indicating that they did so for reasons including "Preference for animal-based methods" and being "Unaware of nonanimal or alternative methods for the given hypothesis" (Fig 2). The data presented here strongly support the occurrence of animal-reliance bias in publishing, which represents a novel type of bias in scientific publishing.
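To give a rough sense of the scale a sufficiently powered follow-up survey might require, the standard sample-size formula for estimating a population proportion can be sketched; the values below are illustrative only, with the margin of error, confidence level, and expected proportion all assumed rather than drawn from our data.

```python
# Illustrative sample-size sketch (assumed values, not from the survey):
# how many respondents would be needed to estimate the proportion of
# researchers asked for animal data, to a given margin of error?
import math

def required_sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Standard formula n = z^2 * p * (1 - p) / e^2 for a simple random sample.

    p = 0.5 is the conservative choice, since p * (1 - p) is maximized there.
    z = 1.96 corresponds to a 95% confidence level.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(required_sample_size(0.05))  # ~385 respondents for a +/-5% margin
print(required_sample_size(0.10))  # ~97 respondents for a +/-10% margin
```

Under these conservative assumptions, several hundred respondents, recruited through channels that do not preselect for attitudes toward animal research, would be needed before frequency estimates could be reported with reasonable precision.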
In a broader sense, bias in scientific publishing can be defined as "a systematic prejudice that prevents the accurate and objective interpretation of scientific studies" [10]. According to a handbook chapter on publication bias in the Oxford Handbook of the Science of Science Communication, "the importance of addressing publication bias is as fundamental as the importance of promoting good science in general" [11]. Bias in the scientific literature may be of different types. Most commonly, publication bias is recognized as any practice that distorts the true value of scientific results, producing misleading conclusions that are based on incomplete reporting of results (only positive results are published) or on results that imply associations, cause-effect relations, prevalence, and other measures that are not supported by the available data or cannot be reproduced by different research groups. Several methodologies have been suggested for identifying these types of biases in the scientific literature. One idea, proposed by Ioannidis and Trikalinos in 2007, is based on an exploratory test that evaluates whether the number of studies presenting statistically significant findings in the scientific literature is higher than expected [12] (a minimal computational sketch of this test appears below); another is to search ethical review boards, data repositories, and registries for information on studies planned and/or intended to be carried out, and to check whether these were ever published and how they were reported [13]. The Catalogue of Bias lists more than 50 distinct types of biases in clinical research and health-related studies, yet none describe the bias we report here (www.catalogofbias.org).

The type of publication bias we investigated in this study is different from what is commonly recognized as publication bias, and it is not expected to be associated with missing data or findings that are never submitted or accepted for publication due to the quality of the finding itself (mostly negative findings) [1]. Instead, it is related to a preference for certain methods to obtain specific data when there may be no scientific justification for such a preference. Thus, animal-reliance bias may be described as a systemic and internal bias within the scientific community, which demonstrates an inherent problem within the culture of science.

One major issue stemming from this bias may be a misattribution of biomedical advances to the use of animal-based experiments, despite the foundational work being done in nonanimal, human-specific experimental systems. For example, physiological breathing motion, and not the presence of immune cells, is the main reason cancer patients develop edema when treated with IL-2, a finding obtained using the lung alveolus chip [14]. More recently, the lung alveolus chip helped to elucidate how bacterial lipopolysaccharide endotoxin stimulates intravascular thrombosis [15]. Currently, this chip is being used to study infection by SARS-CoV-2 and to test FDA-approved drugs as potential candidates for treating COVID-19 [15]. There is no scientific basis to justify a request for animal-based experiments to be performed to "validate" these types of findings obtained with technologies that did not rely on animals, and yet it has been reported that such requests have been made [4].
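As a concrete illustration of the Ioannidis and Trikalinos exploratory test mentioned above, the following is a minimal sketch of its underlying logic: given an estimated statistical power for each study in a body of literature, the expected number of significant findings is the sum of the per-study powers, and the observed count can be compared against that expectation with a one-sided binomial test. The power values and observed count here are hypothetical, and the published test offers more refined variants than this simple comparison.

```python
# Minimal sketch of an excess-significance check in the spirit of
# Ioannidis & Trikalinos (2007). The per-study power estimates and the
# observed count of significant studies below are hypothetical.
from scipy.stats import binom

powers = [0.35, 0.42, 0.50, 0.28, 0.61, 0.45, 0.38, 0.55]  # estimated power per study
observed_significant = 7  # hypothetical number of studies reporting p < 0.05

n = len(powers)
expected_significant = sum(powers)     # expected count = sum of per-study powers
p_expected = expected_significant / n  # average probability of a significant result

# One-sided binomial test: probability of seeing at least the observed count
# if significant results arose at the rate the power estimates predict; a
# small p-value suggests an excess of significant findings in the literature.
p_value = binom.sf(observed_significant - 1, n, p_expected)

print(f"observed {observed_significant}/{n} significant, expected {expected_significant:.2f}")
print(f"one-sided p-value for excess significance: {p_value:.4f}")
```

Treating all studies as sharing the average power is a simplification; the point is only to show how "more significant findings than the literature's power can support" can be made quantitative.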
Because there is no reason to expect that a human-based experimental system will produce the same results as an animal experiment, such validation requests skew expectations, confuse the scientific literature, and may even increase the use of animals in research.

Animal-reliance bias in publishing aligns with the concept of conservatism bias, which is bias against groundbreaking and innovative research. In a presentation at the First International Congress on Peer Review in Biomedical Publication, held in Chicago in 1989, David Frederick Horrobin warned the audience about how peer review, and more specifically conservatism bias, may hinder scientific progress. In his speech, Horrobin listed around 20 cases of innovations that faced resistance without scientific justification, including the paper describing the tricarboxylic acid cycle, a discovery for which Hans Krebs was awarded a Nobel Prize in 1953 [16]. In fact, some of the most cited papers ever did not pass peer review at first, and some took years to be accepted [17].

In our study, it was revealing that, when acting as reviewers, respondents described the influence of editor requests, personal preference for animal-based methods, or lack of awareness of nonanimal methods. These results are in line with a study conducted to investigate peer reviewers' biases in accepting results produced using methods not considered conventional. In that study, the authors found that reviewers tend to reject studies that use practices they consider unconventional, which is supported by other studies showing that reviewers' judgment of the quality of a scientific study relies on their prior beliefs [19]. In fact, a recent study that aimed to investigate how a given animal model is chosen for studying a specific human condition found that the selection of an animal model is based mainly on availability, while the probability that results obtained using the model will translate to humans is not a criterion for choosing it [20]. In 2013, a study comparing gene expression levels and genomic responses to inflammation in humans who suffered trauma, burn, or endotoxemia with the corresponding mouse models for these conditions received many critiques and was rejected for publication by a number of journals because it presented evidence of virtually no correlation between humans and the animal counterparts for the conditions investigated. Studies that contradict long-held beliefs may also find resistance in the scientific community, and difficulty being accepted for publication, regardless of how solid the scientific evidence is [21].

In our study, authors who indicated that they had been asked by reviewers to add animal-based experiments to a study that otherwise had no animal-based experiments were asked if they thought the requests were justified. Only three responded yes, while 14 responded sometimes and 11 responded no. When asked to elaborate on this question, eight respondents expressed that the requested experiment would not add value to the study or implied that the request was made out of habit, because editors and reviewers are more familiar with animal data (Box 1). Indeed, the preference for long-standing conventions may interfere with people's judgment even when data indicate otherwise. This lack of flexibility may put scientists who work exclusively with nonanimal methods at a disadvantage in publishing.
Moreover, studies conducted completely without the use of animals may be underrepresented in journals, creating a false impression that findings from these types of studies cannot be translated to clinical trials without the inclusion of animal data. This convention may delay scientific progress and the full acceptance of technologies that do not rely on animal data. Animal-reliance bias also adds to the number of published papers that have produced de novo animal data to "validate" findings originally produced with technologies that do not rely on animals, giving the impression that findings obtained with technologies such as microphysiological systems, which better mimic human biology, require additional evidence that can only be provided by animal data. It should be noted that animals often need only mimic certain human biological factors, such as gene expression, to be considered valid experimental models.

Although the current study does not comprehensively explore the consequences of animal-reliance bias, its impact may be significant and may contribute to the continued use of animal-based research methods, the stifled use and further development of nonanimal methods, and the misdirected prioritization of federal research dollars. Thus, strategies to combat this bias are needed, even as researchers seek to further understand the causal factors behind its existence.

Two ways to address animal-reliance bias in publishing may be peer review training and accreditation and a two-stage review process. A study that explored practical solutions for reducing publication bias reported that editors, reviewers, and researchers differ on what they see as a solution to the problem of publication bias, but most recognized it as a problem associated with peer review [22]. In that study, one of the suggested solutions for reducing publication bias was peer review training and accreditation, even though this option was not well accepted by editors in general. Interestingly, the study shows that this option is especially favored by early-career scientists (<10 years of experience), indicating that these individuals may see the need for training to be prepared for reviewing the work of their peers. The two-stage review process, another of the suggested solutions, in which editors and reviewers decide whether to recommend a paper for publication based on the article's introduction and methods sections only, was considered the best option for reducing publication bias. The rationale behind this idea is that the decision regarding publication should not depend on a study's results but on its design and the methods it used. It would therefore be important to investigate how the two-stage review process could address animal-reliance bias in publishing, as this bias seems to stem primarily from the methods used rather than the results obtained.

The practice of open peer review endeavors to improve the process of peer review by increasing transparency, mirroring the goals of the broader open science movement [23]. With open peer review, aspects of the peer review process are made public, such as the names of the reviewers assigned to a paper and the reviewers' comments, including any requests for adding animal data. A 2000 randomized controlled trial of open peer review found that open reviews were of higher quality, were more courteous, and took longer to complete than closed reviews [24].
In a 2015 survey of editors aiming to assess the impact of an open review pilot, 33% of editors identified an improvement in the overall quality of review reports, and 70% of those editors said that the pilot resulted in reports that are "more in depth and constructive for authors to improve the quality of their manuscript" [25]. Transparency is a crucial component of rigorous science, especially with regard to animal research, a notion upheld by the U.S. National Institutes of Health, the largest funder of biomedical research in the world [26]. Open peer review may therefore be a useful tool in addressing animal-reliance bias by encouraging reviewers to take more time to produce more thoughtful and higher quality reviews. In addition, open peer review would make it known to readers when experimental results were obtained at the request of reviewers as opposed to being conceived as part of the original study plan.

The "validation" of human-specific models with animal-based data often lacks scientific justification, and ethical principles require that it be avoided. It is unlikely that performing de novo animal experiments to generate data to be added to a study originally conceived without the use of animals can adhere to the 3Rs principle; depending on the country or region, ethics committees may not (or should not) approve uses of animals that were not part of the original study plan.

Currently, the causal factors leading to animal-reliance bias in publishing are not fully understood. It is possible that in some cases reviewers and editors are unfamiliar with the strengths of nonanimal methods in providing mechanistic knowledge and testing novel therapeutics. As described above, this may result in peer review requests or submission requirements for more conventional, animal-based methods to be used to produce data to "validate" scientific findings previously obtained using animal-free methods. Other explanations behind animal-reliance bias in publishing may be: (i) the occurrence of status quo bias, defined as a preference for the current state of affairs, which is based on emotional rather than objective reasons; (ii) the policy at some journals that findings should be validated in vivo, which contributes to the perpetuation of animal-reliance bias in publishing; and (iii) regulatory testing requirements, which may also suffer from this type of bias.

When evaluating the occurrence of any type of bias, the possibility that conflicts of interest (CoI) are at the root of the problem, or contribute to its perpetuation, cannot be ruled out. Specifically, financial and professional relationships with animal-based companies or advocacy groups promoting animal-based experiments can increase peer reviewers' or editors' internal animal-reliance bias. It should be noted that while authors are virtually always asked to disclose any CoI they may have when submitting a paper for publication, the same request does not always apply to reviewers or editors. This situation is observed even in journals that subscribe to the rules of professional organizations such as the Committee on Publication Ethics and the International Committee of Medical Journal Editors, both of which recommend that reviewers and editors disclose any conflicts of interest they may have. Requiring CoI disclosure from all those involved in peer review, and not only authors, may help address animal-reliance bias in publishing [27].
A different type of conflict may also take place with animal ethics committees.

This is the first study, to our knowledge, to describe evidence of animal-reliance bias in publishing, and it provides the basis for developing strategies to minimize its effects and address some of its potential causal factors. Additional work is needed to further characterize animal-reliance bias, evaluate how pervasive it is within the many disciplines of science, and identify what role it may play beyond publishing, especially in the review of grant applications. Elimination of this type of bias will ultimately improve science communication and transparency and will help advance the development and adoption of methods in biomedical research that do not rely on animals, are more human-relevant, and aid in translation to the clinic.

Because we were only interested in experiences during peer review, respondents who indicated that they had zero peer-reviewed publications were routed to the end of the survey. The survey was designed and collected using SurveyMonkey. A short introduction to the survey was provided to respondents before the survey questions were presented, which included a brief statement of purpose as well as a confidentiality statement: "This survey aims to better understand the circumstances under which animal-derived data is requested to be added to a study performed without the use of animals as a condition for publication. The questions of this survey were designed to collect the experiences and perceptions of both researchers and reviewers on this topic. We thank you in advance for your participation and encourage you to disseminate this survey to your colleagues. Your responses will remain anonymous, except to surveyors in the case that you choose to provide your name and contact details, which will remain confidential. Responses will not be used for a discriminatory purpose."

Because survey respondents may consider animal and nonanimal experiments to have an array of different meanings, and to avoid any confusion on how to classify types of study, we defined the two concepts in the introduction of the survey:

Animal-based experiment: An experiment performed in a living non-human animal or in a non-human animal-derived organ, tissue, or other biological product, e.g., an animal in vivo

Responses to question 25. The percent of respondents answering in each category is on top of each bar (total N=57).

Catalogue of bias: publication bias
Long-term expanding human airway organoids for disease modelling. bioRxiv
Long-term expanding human airway organoids for disease modeling
The coming of age of organoids
Is it Time for Reviewer 3 to Request Human Organ Chip Experiments Instead of Animal Validation Studies? Advanced Science
Human organoids: model systems for human biology and medicine
Modeling Sporadic Alzheimer's Disease in Human
Three-Dimensional Human Alveolar Stem Cell Culture Models Reveal Infection Response to SARS-CoV-2
SARS-CoV-2 productively infects human gut enterocytes
Medical journal peer review: Process and bias
Publication bias in science: What is it, why is it problematic, and how can it be addressed? The Oxford Handbook of the Science of Science Communication
An exploratory test for an excess of significant findings
Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy
Primary Human Lung Alveolus-on-a-chip Model of Intravascular Thrombosis for Assessment of Therapeutics
A human disease model of drug toxicity-induced pulmonary edema in a lung-on-a-chip microdevice
The history of the tricarboxylic acid cycle
Rejecting and resisting Nobel class discoveries: Accounts by Nobel Laureates
National Institute of Allergy and Infectious Diseases, NIH. Minimizing Unconscious Bias in Peer Review
A randomized controlled study of reviewer bias against an unconventional therapy
Tradition, not science, is the basis of animal model selection in translational and applied research
Genomic responses in mouse models poorly mimic human inflammatory diseases
The perceived feasibility of methods to reduce publication bias
Open peer review: promoting transparency in open science
Open peer review: A randomised controlled trial
Is open peer review the way forward? In: Reviewers' Update
Statement on enhancing rigor, transparency, and translatability in animal research
Editors' and authors' individual conflicts of interest disclosure and journal transparency. A cross-sectional study of high-impact medical specialty journals

Thank you to Marcia Triunfol for her contributions to the survey and manuscript. Thank you to Lindsay Marshal and Bianca Marigliani for their remarks and suggestions on how to improve our manuscript. And thank you to the survey respondents for taking the time to share their experiences with us.