key: cord-0896628-oof166bd authors: Madhugiri, Venkatesh S.; Nagella, Amrutha Bindu; Uppar, Alok Mohan title: An analysis of retractions in neurosurgery and allied clinical and basic science specialties date: 2020-10-16 journal: Acta Neurochir (Wien) DOI: 10.1007/s00701-020-04615-z sha: e318973e0d700ccadcd85336b62599fad94d00ba doc_id: 896628 cord_uid: oof166bd BACKGROUND: As the volume of scientific publications increases, the rate of retraction of published papers is also likely to increase. In the present study, we report the characteristics of retracted papers from clinical neurosurgery and allied clinical and basic science specialties. METHODS: Retracted papers were identified using two separate search strategies on PubMed. Attributes of the retracted papers were collected from PubMed and the Retraction Watch database. The reasons for retraction were analyzed. The factors that correlated with time to retraction were identified. Detailed citation analysis for the retracted papers was performed. The retraction rates for neurosurgery journals were computed. RESULTS: A total of 191 retractions were identified; 55% pertained to clinical neurosurgery. The most common reasons for retraction were plagiarism, duplication, and compromised peer review. The countries associated with the highest number of retractions were China, USA, and Japan. The full text of the retraction notice was not available for 11% of the papers. A median of 50% of all citations received by the papers occurred after retraction. The factors that correlated with a longer time to retraction included basic science category, the number of collaborating departments, and the H-index of the journal. The overall rate of retractions in neurosurgery journals was 0.037%. CONCLUSIONS: The retraction notice needs to be freely available on all search engines. Plagiarism checks and reference checks prior to publication of papers (to ensure no retracted papers have been cited) must be mandatory. 
Mandatory data deposition would help overcome issues with data and results. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1007/s00701-020-04615-z) contains supplementary material, which is available to authorized users. As of 2012, approximately 0.01-0.02% of all published papers in the biomedical and life sciences had been retracted [7, 9]. The reasons for the retraction of published papers and the rates of retraction vary across scientific disciplines. Data fabrication/falsification, plagiarism, research misconduct, and errors in the articles have been identified as common reasons for retraction [12, 16, 17]. In open-access journals, one of the most common reasons for retraction is a compromised peer review process [16]. Often, only limited information regarding the reasons for retraction is available, making it difficult to distinguish retractions due to genuine errors from those due to misconduct [17]. The rate of retraction in a specialty can act as an indirect indicator of the efficacy of the institutional and post-publication checkpoints in addressing errors and fraud in that specialty. In clinical disciplines, these mechanisms are of particular importance, since published results often lead to changes in clinical practice and thus directly impact patient safety and outcomes. It is thus important to periodically review the number of retractions, and the reasons for them, in every clinical specialty. (This article is part of the Topical Collection on Neurosurgery general.) An analysis of retractions in neurosurgery was carried out in 2017; however, that study was confined to papers that dealt with clinical neurosurgery [20].
Moreover, a detailed analysis of the pre- and post-retraction citations received by the retracted papers was not carried out in that study. In the present study, we analyzed retractions in clinical neurosurgery as well as in allied clinical and basic science specialties. We describe the characteristics of the retracted papers and identify how widely the information contained therein was disseminated, by performing citation analysis for the retracted papers. We also examined what proportion of the citations received by the retracted papers occurred after retraction. A two-pronged strategy was followed to identify retracted publications in clinical neurosurgery, allied clinical specialties, and basic sciences pertinent to neurosurgery. Step 1: The NLM catalog containing all the journals referenced in the NCBI database (https://www.ncbi.nlm.nih.gov/nlmcatalog/journals/) was searched using 4 search strings to identify journals relevant to neurosurgery. The search strings were "neurosurgery," "spine," "stroke," and "neuro oncology." The results obtained from these 4 searches were collated and duplicates eliminated, to obtain a list of journals relevant to neurosurgery and its subspecialties (Fig. 1). Step 2: The main PubMed database (https://pubmed.ncbi.nlm.nih.gov) was searched with the names of the journals identified in step 1 entered in the search field. Two filters, viz., "Retracted publication" and "Retraction of publication," were activated to identify retracted papers and retraction notices published in each of the journals. For instance, the search strategy for the journal Acta Neurochirurgica would be "Acta Neurochir"[Journal] AND (Retracted Publication[sb] OR Retraction of Publication[sb]). The retracted articles from each journal were collated, and a database of retracted articles from the selected journals was generated.
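The journal-by-journal search in step 2 can be scripted against the NCBI E-utilities esearch endpoint. The sketch below is a minimal illustration, not the study's actual tooling; the function names are ours, and the query string mirrors the Acta Neurochirurgica example above.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_retraction_query(journal: str) -> str:
    """Build the PubMed term used in step 2 for a single journal."""
    return (f'"{journal}"[Journal] AND '
            '(Retracted Publication[sb] OR Retraction of Publication[sb])')

def fetch_retracted_pmids(journal: str, retmax: int = 500) -> list:
    """Submit the query to E-utilities and return the matching PMIDs."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": build_retraction_query(journal),
        "retmax": retmax,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]
```

For example, `fetch_retracted_pmids("Acta Neurochir")` would return the PMIDs of retracted papers and retraction notices in this journal.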
The PubMed database was directly searched for retracted papers using various search strings (e.g., "neurosurgery," "stroke," "neuro oncology") and with two filters, viz., "Retracted publication" and "Retraction of publication," applied (Fig. 1). The results generated by search strategies 1 and 2 were merged and duplicates eliminated to generate a database of retracted papers. All papers, especially those from allied clinical specialties and the basic sciences, were screened to ensure that they were germane to neurosurgery as a specialty. Basic science papers were included only if they pertained to a topic of direct relevance to neurosurgery, e.g., spinal cord injury or glioblastoma. Data regarding the selected papers were extracted from two sources: PubMed and the Retraction Watch database (http://retractiondatabase.org/RetractionSearch.aspx). Current journal citation metrics (2-year cites per document and H-index) were obtained from the Scimago website (https://www.scimagojr.com). The level of evidence for each paper was categorized on the basis of the guidelines published by the Oxford Centre for Evidence-Based Medicine [22]. The reasons for retraction were extracted from the retraction notices, accessed either on PubMed Central or on the journal website. If the full text of the retraction notice was not available on PubMed or PubMed Central and was behind a paywall on the journal website, it was categorized as "not available." The actual reasons for retraction were sorted into the following 5 categories: (1) genuine error, paper withdrawn by authors (GEWA); (2) genuine error, paper withdrawn by journal (GEWJ); (3) withdrawn by authors, reason not known (WA); (4) withdrawn by journal, reason not known (WJ); and (5) misconduct by authors (MA).
We then collated these into 3 category reasons for retraction: genuine errors (GEWA and GEWJ), indeterminate (WA and WJ), and misconduct (category 5). The journals that had published the retracted papers included in this analysis were screened to identify "pure" neurosurgery journals, i.e., journals that only published articles relevant to neurosurgery (clinical papers or basic science and translational research). The total number of indexed articles published in each "pure" neurosurgery journal was obtained by searching the PubMed database with the name of the journal. The number of retracted articles that had been published in each journal was divided by the total number of indexed articles published in that journal to obtain the retraction rate for that journal. The number of citations (pre- and post-retraction) to each paper was identified from Google Scholar. Post-retraction citations were identified by applying the time filter on Google Scholar, using the year subsequent to the year of retraction. For instance, if a paper had been retracted in 2017, all citations to the paper in the year 2018 and subsequently were counted as post-retraction citations. Factors that correlated with the time to retraction and the number of post-retraction citations were identified. All analyses were carried out on Stata (v14, Stata Corp, College Station, Texas) and Microsoft Excel (v16.16). A total of 191 papers published in 108 journals were included in the analysis (Fig. 1). The raw dataset for this study has been deposited in a data repository and can be found at https://doi.org/10.5281/zenodo.4091508. The oldest retracted paper was published in May 1989 and the most recent in January 2020. Of the 191 retracted papers, 78 (40.8%) were basic science papers of direct relevance to neurosurgery, 8 papers (4.2%) were from allied clinical specialties pertinent to clinical neurosurgery, and 105 papers (55%) were clinical neurosurgery papers (Fig. 2).
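The year-filter rule described above (only citations in years strictly after the retraction year count as post-retraction) can be expressed as a short helper. This is an illustrative sketch, not the study's actual code.

```python
def split_citations(citation_years, retraction_year):
    """Split a paper's citations into pre- and post-retraction counts.

    Follows the rule used in the study: a citation counts as
    post-retraction only if it falls in a year strictly after
    the year of retraction.
    """
    pre = sum(1 for y in citation_years if y <= retraction_year)
    post = sum(1 for y in citation_years if y > retraction_year)
    return pre, post
```

For a paper retracted in 2017, citations dated 2018 onward are counted as post-retraction, while citations in 2017 itself count as pre-retraction.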
The year-wise distribution of retracted papers is displayed in Fig. 3. Most retractions occurred in the past 5 years (2015-2020; n = 127, 66.5%). The lag between publication and retraction ranged from 0 to 252.6 months (0-21 years). The median time to retraction was 16.2 months (IQR 5.1-36.5). Fourteen papers (7.3%) had a time to retraction of 0 months, which possibly meant either that errors in the paper were detected soon after publication or that the papers were withdrawn by the authors soon after publication. The median time to retraction varied significantly by group: 17.25 months (IQR 30.4) for basic science papers, 28.9 months (IQR 35) for allied clinical science papers, and 10.2 months (IQR 28.3) for clinical neurosurgery papers (p = 0.006). There was no significant correlation between the number of authors (Pearson's r = 0.015, p = 0.8) or the number of institutions involved (r = −0.077, p = 0.29) and the time to retraction. In single-institute studies, the number of collaborating departments correlated with the time to retraction (r = 0.22, p = 0.033). The H-index of the journal correlated positively with the time to retraction (r = 0.21, p = 0.0027). The time to retraction was not significantly different between open-access (OAJ) and hybrid model journals (HMJ) or between free to publish (FP) and pay to publish (PP) model journals. The median time to retraction also varied significantly by the collated category reasons for retraction (vide supra, "Methods"): 7.13 months (IQR 20.27) for genuine errors, 7.1 months (IQR 16.17) for the indeterminate category, and 22.33 months (IQR 31.4) for retractions due to misconduct (χ2 = 26.1, p = 0.001). Ninety-eight (51.3%) of the retracted papers were multi-institutional collaborations. The median number of collaborating institutions involved in multi-institutional papers was 2 (range 1-22).
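The correlations reported above are Pearson coefficients, which can be reproduced with a few lines of standard-library Python. The data below are fabricated for illustration only, not the study's dataset.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated illustration: journal H-index vs. time to retraction (months).
h_index = [20, 45, 60, 90, 120, 150]
months_to_retraction = [5.0, 9.5, 14.0, 20.0, 26.5, 33.0]
r = pearson_r(h_index, months_to_retraction)  # strongly positive for this sample
```

A positive r, as found in the study for journal H-index, indicates that papers in higher-H-index journals tended to take longer to be retracted.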
In single-institution studies, the median number of collaborating departments was 1 (range 1-4). One-third of all retracted papers (n = 60, 31.4%) were single-institute and single-department papers. The median number of authors was 5.5 (range 1-37); only 8 of the retracted papers (4.2%) were single-author papers. The median number of countries of origin of the authors involved in the retracted papers was 1 (range 1-8). One hundred eighty papers (94.2%) originated from a single country. Overall, retracted papers originated from 31 countries (Table 1). Since some papers originated from more than one country, there were 208 country attributions for the 191 retracted papers. The countries associated with the highest number of retractions were China (n = 93, 44.7%), the USA (n = 39, 18.8%), and Japan (n = 9, 4.3%). The largest number of retracted papers pertained to neuro-oncology (30.9%), followed by spine (22.5%) and neuro-trauma (18.8%) (Table 2). The full texts of the retraction notices were available on PubMed or PubMed Central for 71 papers (37.1%) and on the journal website for 169 papers (88.5%). The full texts of the retraction notices (and thus the reasons for retraction) were not available for 21 papers (11%). Nine of these 21 papers had been withdrawn either by the authors or by the journal and were categorized as WA or WJ and as "indeterminate reason for retraction" in the collated category list. Thus, for 12 papers (6.3%), absolutely no information regarding the reason(s) for retraction was available, whereas the remaining 179 could be categorized. The median number of reasons for retraction was 1 (range 1-3). Overall, 199 actual reasons for retraction were given for 170 papers (Table 3).
The most common reasons for retraction overall were plagiarism (18.59%), duplication of articles (14.59%), and compromised peer review (12.56%) (Table 3). The 3 most common reasons for retraction for papers published before 2010 were duplication of the article (26.1%), plagiarism (17.4%), and concerns regarding the data and outright fabrication/falsification of data (13.04% each). The 3 most common reasons for retraction for papers published after 2010 (2010-2020) were plagiarism (18.9%), compromised peer review (16.3%), and authorship issues (12.4%). The distribution of the retracted papers across the collated category reasons was genuine errors (15.1%), misconduct (67.6%), and indeterminate (17.3%) (Table 4). The current median 2-year citations per document (2y-CPD) statistic (equivalent to Impact Factor®) of the journals in which the 191 retracted papers were published was 2.9 (range 0.18-25). The current median H-index of these journals was 90. The list of journals that had published the retracted articles is displayed in Supplementary Table 1. The majority of the retracted papers had been published in HMJ (n = 126, 66%). Similarly, the majority of the articles were published in journals that were of the FP model (n = 130, 68.1%). The rate of retractions due to compromised peer review was not different between the OAJ and HMJ or between the PP and FP journals. Similarly, there was no significant difference in the rate of papers withdrawn due to misconduct between the OAJ and HMJ or between the PP and FP journals (Table 5). Twenty-two journals that exclusively published papers of relevance to neurosurgery were identified and categorized as "pure" neurosurgery journals (Table 6). The journal Cureus was included even though it is a multi-specialty journal, since it has a separate channel for neurosurgery and the number of papers pertaining to neurosurgery could easily be identified, bringing the total to 23. Thus, the retraction rate was computed for these 23 journals.
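The per-journal retraction rate described in the Methods (retracted articles divided by total indexed articles) reduces to a one-line computation. A sketch, using the pooled figures reported in this analysis:

```python
def retraction_rate(n_retracted: int, n_indexed: int) -> float:
    """Retraction rate for a journal, as a percentage of its indexed output."""
    return 100.0 * n_retracted / n_indexed

# Pooled figures for the 23 "pure" neurosurgery journals in this study:
# 55 retractions across 150,051 indexed papers.
overall = retraction_rate(55, 150051)  # ~0.037%
```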
The rates of retraction ranged from 0.013% (European Spine Journal) to 0.25% (Cureus). The median rate of retractions was 0.037% (IQR 0.67). Six journals had a retraction rate lower than the average computed rate for biomedical sciences, which was 0.02% in 2012 [9]. These were European Spine Journal, Child's Nervous System, Spine, Neurosurgery, Journal of Neurosurgery, and Acta Neurochirurgica (Table 6). The median rate of retractions for OAJ was higher (0.16%, IQR 0.1) than for HMJ (0.026%, IQR 0.04); this difference was significant (z = −2.43, p = 0.015). The median rate of retractions did not differ on the basis of the payment model (PP vs FP). The rate of retractions did not correlate with the 2y-CPD or the H-index of the journals. As of May 2012, PubMed contained 19,974,272 citations and 2047 retracted papers, amounting to an overall retraction rate of 0.01% [9]. Compared with this, the 55 retractions across the 150,051 papers published in the 23 "pure" neurosurgery journals selected for this analysis amounted to an overall retraction rate of 0.037% (Table 6). The rate of retractions in neurosurgery journals was significantly higher than that for the life sciences as a whole (χ2 = 28.95, p < 0.0001). The types of retracted papers are displayed in Table 7. Laboratory/basic sciences studies formed 44% of the retracted papers. Among the retracted clinical papers, cohort studies and case series formed the largest group. The distribution of the papers across the levels of evidence is displayed in Fig. 4. Fourteen papers (7.3%) presented level 1 evidence; one of these was a systematic review of randomized controlled trials (RCTs) and the others were individual RCTs. The median time to retraction of these papers was 18.2 months (range 0-92 months). The reasons for retraction were available for all but one of the level 1 papers. The most common reasons for retraction of these papers were falsification of data and errors in the methods of the study.
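The comparison of the neurosurgery retraction rate against the PubMed-wide rate is a 2 x 2 chi-square test. The sketch below applies the standard uncorrected Pearson formula to the counts reported above; the exact statistic depends on the totals and the continuity correction used, so the value need not match the reported χ2 = 28.95.

```python
def chi2_2x2(a, b, c, d):
    """Uncorrected Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Retracted vs. non-retracted counts: "pure" neurosurgery journals
# vs. PubMed as a whole, using the figures reported in this analysis.
chi2 = chi2_2x2(55, 150051 - 55, 2047, 19974272 - 2047)
# chi2 far exceeds 10.83 (the df = 1 critical value for p = 0.001),
# so the difference in rates is significant.
```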
The category reasons for retraction of these papers were genuine errors (n = 1, 7.1%), misconduct (n = 9, 64.3%), and indeterminate (n = 4, 28.6%). The median number of authors for these papers was 6, and the median number of participating institutes was 2.5. All but 4 of these studies were multi-institutional; however, all the papers in this category were of single-country origin. The most common countries of origin of these retracted studies were China and the USA, with 6 papers (42.8%) each. The median number of authors, institutes, and countries involved was not different between level 1 and other papers. The median number of citations received by retracted papers in this study was 6 (IQR 18). The number of citations received by each paper was normalized to the number of years since publication (citations per year). The median cites per year (CPY) was 0.91 (IQR 2.23) per retracted paper. Several papers continued to receive citations even after they were retracted. The median number of post-retraction citations received by these papers was 4 (IQR 8). Post-retraction citations formed a median of 50% of the total number of citations received by the retracted papers. The median post-retraction CPY was 0.39 (IQR 1.24). The median post-retraction CPY varied by the category of the paper: 0.54 (IQR 1.37) for basic sciences, 1.38 (IQR 3.42) for allied clinical specialty papers, and 0.23 (IQR 0.98) for clinical neurosurgery papers; these differences were significant (χ2 = 9.3, p = 0.0096). The time to retraction correlated with the total number of citations received (r = 0.3364, p < 0.001) and with the proportion of post-retraction citations (r = −0.34, p < 0.001). The median number of citations received post-retraction was higher for level 1 papers (9.5, IQR 45) than for other papers (median 3, IQR 8); this difference was significant (z = −3.2, p = 0.002).
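The CPY normalization used above (citations divided by years since publication) and its median can be sketched as follows; the sample values are hypothetical, not taken from the study's data.

```python
from statistics import median

def cites_per_year(citations: int, years: float) -> float:
    """Normalize a citation count to citations per year."""
    return citations / years

# Hypothetical papers: (post-retraction citations, years since retraction).
post_cpy = [cites_per_year(c, y) for c, y in [(4, 10), (12, 6), (1, 3)]]
median_post_cpy = median(post_cpy)  # 0.4 for this sample
```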
The median CPY post-retraction was also higher for level 1 papers (1.9, IQR 3.4) than for other papers (median 0.33, IQR 1.1); this difference was significant as well (z = −3.4, p = 0.0007). The post-retraction CPY was not different between HMJ and OAJ (p = 0.15) or between PP and FP journals (p = 0.7). The other factors (besides level of evidence) that correlated with the post-retraction CPY were the 2y-CPD of the journal (r = 0.17, p = 0.02) and the journal H-index (r = 0.16, p = 0.024). As the volume of scientific literature increases exponentially, the number of retractions is likely to increase. Erosion of scientific integrity and author misconduct are becoming major concerns in the current milieu. In one survey, 1.97% of authors admitted to misconduct (fabrication, falsification, or modification of data or results), and up to 33.7% admitted to other questionable practices [8]. This is an alarming statistic by any measure. Even more alarming is the fact that in surveys that sought information regarding the behavior of respondents' colleagues, admission rates were 14.12% for falsification and 72% for other questionable research practices [8]. Formal mechanisms to detect malpractice and to prevent compromised data from being published are thus of great importance. The first checkpoint in the publication process is the editorial review, where many compromised manuscripts are rejected even prior to peer review and before plagiarism checks are carried out. However, this study found that despite the existence of many plagiarism-check software services, plagiarism remains one of the most common reasons for retraction of neurosurgical articles, even for those published after 2010. This could imply that several "low-impact" journals do not routinely use such services, and this is a lacuna that needs to be addressed. The next checkpoint in the publication cycle is the peer review process. However, this process is far from foolproof.
A compromised peer review process was responsible for 12.6% of all the retractions analyzed in this study and for 16.3% of the retractions that occurred after 2010. The rates of retractions due to compromised peer review did not vary between HMJ and OAJ or between PP and FP model neurosurgical journals. It is notable, therefore, that even the "standard" HMJs had a significant incidence of compromised peer review. Compromised or inadequate peer review processes can lead to disastrous consequences. Possibly the most infamous example of the global and lasting impact of compromised data published in a "high-impact" journal is the paper postulating a link between vaccines and autism in children [23]. Similarly, several high-profile retractions have occurred during the ongoing COVID-19 pandemic as well [14, 15]. The Lancet retracted a paper that wrongly raised concerns about the safety of hydroxychloroquine in the treatment of COVID-19 [15]. On the same day, the New England Journal of Medicine retracted a paper that falsely concluded that angiotensin-converting enzyme (ACE) inhibitors and angiotensin-receptor blockers (ARBs) were safe in patients with COVID-19 [14]. Although the papers were retracted relatively early, the fallout had already occurred. Based on the findings reported in these papers, the World Health Organization prematurely suspended the hydroxychloroquine arm of the SOLIDARITY trial over safety concerns. This arm of the trial was restarted once the papers were retracted, but there was unnecessary delay in the completion of the trial. The urgent need for information regarding a novel pathogen, and thus the pressure to publish any paper related to the ongoing pandemic, has possibly led to compromised peer review processes across journals. The number of retracted publications related to COVID-19 now stands at 32 [1].
While these examples indubitably emphasize the need for effective peer review processes, they also highlight the need for active post-publication scrutiny by the scientific community at large. The Lancet and the New England Journal of Medicine papers were retracted when post-publication reviews detected that the entire database on which the papers had been based was possibly fabricated [14, 15]. Post-publication review also led to the retraction of a paper that found that cloth masks were as efficacious as surgical masks in preventing the spread of the novel coronavirus from the cough droplets of infected individuals [4]. Dissemination of wrong data from this paper could have had disastrous consequences for healthcare workers across the world. One of the cardinal requirements for an effective post-publication review process is the ability to replicate all the experiments and analyses in a published paper. This depends on the availability of datasets for every published research paper, as well as a detailed description of the experimental and statistical methodologies employed. However, relying entirely on post-publication peer review, as some new journals do, could lead to poor quality control of papers prior to publication, resulting in unacceptably high rates of retraction. For instance, the present study found that the neurosurgery channel of the journal Cureus, which largely relies on post-publication peer review for quality control, had the highest rate of retraction among neurosurgical journals (0.25%). Thus, a combination of effective editorial, peer, and post-publication review mechanisms is required for optimum quality control. The median time to retraction for neurosurgery articles was 16.2 months (IQR 5.1-36.5). Basic science and allied specialty papers, as well as multi-department collaborative studies, had a longer time to retraction. Journals with a higher H-index took longer to retract compromised papers.
This is of concern, since the "high-impact" journals are also the ones from which articles are heavily referenced and cited. Thus, a longer time to retraction for papers in these journals would lead to wide dissemination of false data. The time to retraction was much longer when the reason for retraction was misconduct (22.33 months) than when papers were retracted due to genuine errors (7.13 months). The lag time to retraction correlated with the number of citations that a paper received, and thus with how widely the compromised findings were disseminated. A median of 50% of citations to retracted neurosurgery papers occurred after they were retracted. Moreover, papers that reported level 1 evidence, and that are therefore more likely to change clinical practice, received more post-retraction citations than other papers. The issue of compromised information continuing to be cited and disseminated after retraction is of concern [13]. For instance, despite the well-publicized retraction of the compromised Wakefield vaccine-autism paper in 2010, a study in 2019 found that 8.2% of papers that had cited the retracted paper reported attempts to replicate the discredited findings. Of equal importance is the fact that 28.2% of the papers that were published after 2010 and that had cited the retracted paper did not mention the retraction in relation to the citation [19]. Thus, it is imperative to reduce the time to retraction and to expeditiously complete independent reviews once a paper has been flagged. It is also essential to prominently display the retracted status of a paper in all search engines and reference management software. The country from which the highest number of retracted papers in neurosurgery originated was China (44.7%), followed by the USA (18.8%) and Japan (4.3%). Previous studies have also found that China, the USA, and Japan are associated with the highest numbers of retractions in various scientific disciplines [3, 9].
The reasons for retraction varied by country. China, the United States, Germany, and Japan accounted for the majority of retractions due to fraud or suspected fraud, whereas the USA, Japan, China, and India accounted for most cases of plagiarism [9]. In countries such as China and India, there could be enormous institutional pressure to publish, both to demonstrate returns on the investment in science and for career advancement [5, 10, 18]. Similar pressures could exist in countries such as the USA and Japan, owing to competition for grants and tenured positions. There are several reasons why scientific papers are retracted. In a study of 134 retraction notices issued on the open-access platform BioMed Central, the majority were due to misconduct (76%). This included compromised peer review (33%), plagiarism (16%), and data falsification/fabrication (7%). "Honest" errors accounted for only 13% of the total [16]. In an analysis of the reasons for retraction in human studies overall, misconduct (51%), errors (14.4%), and duplication (12.6%) accounted for the largest shares [12]. A similar trend was seen in anesthesiology, where misconduct (specifically fraud or data fabrication) accounted for nearly 50% of all retractions [17]. This pattern was replicated across surgical specialties as well; plagiarism, data falsification, and duplication were the most common reasons for retraction in surgery, orthopedics, and obstetrics and gynecology [6, 11, 21]. Consistent with published data, the present study found that plagiarism (18.59%), duplication of articles (14.59%), and compromised peer review (12.56%) were the most common reasons for retraction of neurosurgical papers. When categorized, genuine errors accounted for only 15% of the retracted neurosurgery papers, whereas misconduct accounted for 67.6% of all retractions.
Despite the wide availability of plagiarism- and reference-check software, the continued high incidence of retractions due to plagiarism can only imply that these resources are not universally employed. The reasons for retraction were not available for 11% of neurosurgical papers. This is a serious issue that needs to be addressed; ideally, the full text of every retraction notice should be made freely available. The majority of retracted papers in neurosurgery had been published in hybrid model (HM) journals (n = 126, 66%). The majority of the publishing journals did not levy mandatory publication or article processing fees. The numbers of neurosurgery papers retracted due to compromised peer review and misconduct were not different between open-access (OA) and HM journals or between the PP and FP journals. However, when retractions in "pure" neurosurgery journals were examined, the median rate of retractions for OAJ was higher (0.16%) than for HMJ (0.026%). Thus, although the eventual reasons for retraction may not differ between the HMJ and OAJ, the higher rate of retractions in OAJ emphasizes the need for better quality control in neurosurgical OAJs. The present analysis estimated the overall rate of retractions in neurosurgery journals at 0.037%, vis-a-vis 0.01% for the life sciences as a whole. The rate of retractions in neurosurgery was lower than the rate for some basic science disciplines, such as genetics (0.15%), and higher than that for some allied medical branches, such as nursing (0.029%) [2, 7]. Data regarding the number and rate of retractions are not available for several scientific and medical disciplines. It is critical that such analyses be undertaken periodically for each specialty, so as to monitor the state of the science being published in that specialty. There is, thus, an urgent need for standardization when reporting the reasons for retraction.
Standard categories would ensure that uniformity is maintained in communications regarding retracted papers. Citations to retracted papers should be minimized; for this reason, we have avoided citing any of the retracted papers included in this study. When necessary, the retraction notice rather than the actual retracted paper could be cited. The major reasons for retraction in neurosurgery and allied specialties were plagiarism, duplication of articles, and compromised peer review. The majority of retractions occurred due to misconduct. The median time to retraction was more than a year. Basic science papers, a larger number of collaborating departments, and a higher H-index of the journal were all associated with a longer time to retraction. Papers continued to accrue citations even after retraction; the category of the paper, level 1 evidence, the 2y-CPD and H-index of the journal, and the time to retraction all correlated with the post-retraction CPY statistic. The rate of retractions in open-access neurosurgery journals was higher than that in hybrid model journals. The rate of retractions in neurosurgery journals overall appears to be higher than the rate in the life sciences as a whole. There is an urgent need to address the standardization of communications regarding retraction and to prevent the accrual of post-retraction citations. Conflict of interest: The authors declare that they have no conflict of interest. Ethical approval: For this type of study, formal consent and/or ethical committee approval is not required. Informed consent: This article does not contain any studies with human participants performed by any of the authors. References: An "alarming" and "exceptionally high" rate of COVID-19 retractions?
Retraction of publications in nursing and midwifery research: a systematic review
The ethics of scholarly publishing: exploring differences in plagiarism and duplicate publication across nations
Notice of retraction: Effectiveness of surgical and cotton masks in blocking SARS-CoV-2
To publish and perish: a Faustian bargain or a Hobson's choice
Plagiarism and data falsification are the most common reasons for retracted publications in obstetrics and gynaecology
Reasons for and time to retraction of genetics articles published between 1970 and 2018
How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data
Misconduct accounts for the majority of retracted scientific publications
Publish or perish? Analysis of retracted articles in the surgical literature
Exploring the characteristics, global distribution and reasons for retraction of published articles involving human research participants: a literature survey
The (lack of) impact of retraction on citation networks
Retraction: cardiovascular disease, drug therapy, and mortality in Covid-19
Retraction: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis
Why articles are retracted: a retrospective cross-sectional study of retraction notices at BioMed Central
Reasons for article retraction in anesthesiology: a comprehensive analysis
Publish or perish in China
Assessment of citations of the retracted article by Wakefield et al with fraudulent claims of an association between vaccination and autism
Retraction of neurosurgical publications: a systematic review
Retractions in orthopaedic research: a systematic review
Retraction: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.