Reproducibility and replicability in science: A Sisyphean task
Renee M Borges
Journal of Biosciences, 25 February 2022. DOI: 10.1007/s12038-022-00259-6

The world received a confidence booster in the power of the scientific method, having witnessed and participated in the recent development of successful vaccines against SARS-CoV-2. The world also got a peek into scientific controversies and the clamour for more transparency and data sharing, as well as the requirements of rigorous testing, adequate sample sizes, false positives, false negatives, risk probabilities, and population variation. For an interested lay person, or even for a practising scientist, this was the equivalent of a crash course, on the world stage, in how science is done, warts and all, but one in which science triumphed in the end.

... currently manned by idealistic and energetic students and early career researchers who believe that the truth must out. Yusuf Hannun, Director of Stony Brook University's Cancer Centre in New York, says that one of the ''biggest frustrations as a scientist [is] that it is so hard to know which exciting results are sturdy enough to build on'' (Hannun 2021). Hannun blames some of this unfortunate situation on inadequate training, which has often resulted in wasted resources, and champions the incorporation of a set of replicated experiments into all schools of science, so that young practitioners are made aware of the need for careful replication and of the rigour of doing science. Another important reason for such hastily reported and often unverified research is the pressure to continually publish large numbers of papers, so that stories are carved up into disjointed elements in which the overall question is obscured and the data themselves are inadequately authenticated.
Richard Harris (2017), a respected science journalist, has written a hard-hitting book on the financial resources wasted in the pursuit of sloppy and ill-advised science. One clear hurdle for reproducibility is the authenticity of data. Furthermore, what is considered biologically significant is certainly not the same as what is statistically significant. There are many pitfalls; replication studies and an explicit requirement for greater rigour will address many of them, perhaps especially by curbing some of the causes of irreplicability. These include genuine errors; intentional deception; mistakes arising from accelerated processes without adequate quality control measures; errors that arise in large collaborations, where it is impossible to individually vouch for all sets of data; interdisciplinary research in which not all scientists are aware of the limitations and quality of the different data sets; and, finally, intentional manipulation or misinterpretation of data due to political interference.

Should we then expect less of a scientific paper (Camerer et al. 2018; Amaral and Neves 2021)? Should we skim a scientific paper just for ideas, and accept that many of its claims may not bear up under scrutiny? Would we have to examine the scientific pedigree of the authors and, hoping that a good scientific culture has been inculcated, thereby have more confidence in the findings? How can results be standardised? Which results should be standardised? Is the inherent variation in the system sufficiently interesting that it too needs to be documented and explored, so that causal mechanisms are understood in a more nuanced and context-dependent manner (Bryan et al. 2021)? In the rush to standardise, are we losing out on population-level variability and response? I submit that more science academies the world over should grapple with these demons and come up with good measures for grounding research findings in solid and verifiable reality.
In starting out as the Chief Editor of Journal of Biosciences, I took heart from May Berenbaum's editorial when she recently began as Chief Editor of PNAS. The editorial is titled ''On zombies, struldbrugs, and other horrors of the scientific literature'' (Berenbaum 2021). I leave it to you to find out what her horrors are, and to imagine what mine might be. Journal of Biosciences is committed to publishing papers of broad scientific interest that adhere strictly to the principles of scientific ethics and good scientific practice. Plagiarism identification software has reduced the embarrassment of pilfered text, but only vigilant editors, reviewers and readers can identify pilfered ideas. A robust network of sensitised professionals is what Journal of Biosciences will depend on to bring the best science, with the fine attributes of reproducibility and replicability, into its ambit.

References

Amaral and Neves 2021 Reproducibility: expect less of the scientific paper.
Berenbaum 2021 On zombies, struldbrugs, and other horrors of the scientific literature.
Bryan et al. 2021 Behavioural science is unlikely to change the world without a heterogeneity revolution.
Camerer et al. 2018 Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.
Retracted articles in the biomedical literature from Indian authors.
What does research reproducibility mean?