key: cord-314288-6vh7dvad
authors: Leibovici, L.; Allerberger, F.; Cevik, M.; Huttner, A.; Paul, M.; Rodríguez-Baño, J.; Scudeller, L.
title: Submissions and publications in Corona times
date: 2020-05-15
journal: Clin Microbiol Infect
DOI: 10.1016/j.cmi.2020.05.008
doc_id: 314288
cord_uid: 6vh7dvad

In times of the COVID-19 pandemic, we at CMI struggle between the urge to bring data to readers as soon as possible and the necessity to publish trustworthy, robust material. The number of submissions to CMI during the first 4 months of 2020 increased by 60% compared with the same months in 2019. At its peak, over 40 articles were submitted on a single day, most of them concerning COVID-19. The majority were rejected, many immediately by the editors without peer review. The decisions were not easy. We would like to explain our decisions according to the type of submission. We hope thereby to save authors, editors and peer-reviewers time and to avoid unnecessary effort.

Letters to the Editor: Under this format we publish interesting case reports and case series, and as expected we received many letters describing one (or a few) patients with presentations of COVID-19 that had not been described before. When infected people are counted in the hundreds, it makes sense to describe a small number of patients with an atypical presentation. But when they are counted in the millions, clinical or laboratory findings in one or two patients might be just a coincidence (e.g. a non-specific finding that may ultimately be due to another condition in a patient with COVID-19, or a false-positive result for the virus in a patient with another disease), and we would like to see larger and well-described denominators.

Commentaries: We have received many commentaries claiming that known remedies (vitamins, anti-diabetes treatments, anti-hypertensive and anti-inflammatory drugs, antimicrobials and antiviral treatments) could be effective against COVID-19. The chain of evidence was nebulous at best, and we have decided not to publish such articles. We are happy to publish commentaries describing how people, hospitals or other healthcare settings have dealt with the challenges of managing COVID-19, as well as opinion articles discussing the implications of the available evidence for clinical practice.

Narrative reviews: We have also received many reviews attempting to address all questions and data relevant to the virus and the pandemic, some of which were lengthy monographs. We decided not to publish content that would likely be well known to an informed practitioner. We are pleased to consider in-depth, updated reviews addressing aspects of the pandemic relevant to our readers [1,2].

We are happy to publish systematic reviews in which the synthesis is greater than what can be gleaned from the original studies [3]. However, systematic reviews of observational studies, and especially meta-analyses of such studies assessing the effectiveness of drugs, are problematic. Bias by indication is always suspected, and we cannot be sure that the adjustments made in the original studies have taken care of all relevant biases. Systematic reviews and meta-analyses of small observational studies, with ingrained biases, are not helpful. Critical assessment of such studies, without an artificial attempt to combine the results, might be valuable.

Observational studies: We now have a good idea, from published studies, of the clinical course of COVID-19, and further descriptions of small groups of patients have little to add.
Several studies have already been published on risk factors for symptomatic disease, severe disease and death in patients affected by the virus. However, large studies will allow a better look at sub-groups of interest and at interactions between risk factors, and will permit external validation of prognostic models. Multi-centre and multi-national prospective studies can both address the concern that some risk factors are local and point to true differences between populations.

We (and others) see a problem with observational studies comparing one treatment with another treatment or with no treatment. Since the choice of treatment was made by the practitioners, we have no way to capture and correct for all factors that influenced this decision. Was the new treatment given to the worst patients? Or, the other way around, to those patients perceived to stand a better chance? Or (in some settings) to those who could pay? Or to opinionated patients with opinionated families? Or was it prescribed by some physicians, or units, and not by others? It is difficult to assume that the new treatments were given at random. We should also be concerned about publication bias: authors are more likely to submit for publication observational studies with a positive result than those with a negative result. Observational studies can be of interest: they can generate hypotheses; serve as the basis for randomized controlled trials (RCTs); or, if convincingly negative, lower the priority for RCT testing of the treatment. But to provide convincing results, we would like to see efforts to compare like with like and to adjust for confounders in large cohorts; data carefully collected and presented in full; outcomes that matter to patients; and correct ascertainment and counting of outcomes. We expect careful descriptions of the methods and results [4,5].

The drive to develop better diagnostics for a new disease (faster, more accurate or even cheaper) is understandable. Considering our readers, we publish studies on diagnostics only if they are tested on clinical samples, and preferably in clinical situations [6]. We expect the sample size to be large enough to offer confidence in the results [7].

We believe RCTs to be the major building blocks of evidence-based medicine, and are happy to consider them for publication [8]. RCTs are not free of problems: small sample sizes, studies stopped before recruitment was completed, or studies that are not up to methodological standards. We wish to honor the goodwill of the patients who agreed to participate in a study and were promised that the results would be published to help other patients.

The most telling summary was provided by the cover letter of one of the submissions: "Please publish our article within 3-4 days. The data might be falsified (sic) in a few days". We, as editors, do not want to publish data that will be outdated or falsified within a few days (or a few months, for that matter).

Editorial note: not peer-reviewed.

References
[1] CMI readers' survey 2019.
[2] CMI readers' survey.
[3] The CMI welcomes systematic reviews.
[4] Reporting methods of observational cohort studies in CMI.
[5] Observational studies examining patient management in infectious diseases.
[6] How to: evaluate a diagnostic test.
[7] Sample size calculations for diagnostic studies.
[8] Randomized controlled trials in CMI.