title: A cross-sectional study analyzing the quality of YouTube videos as a source of information for COVID-19 intubation
authors: Arslan, Baris; Sugur, Tayfun; Ciloglu, Osman; Arslan, Ali; Acik, Vedat
date: 2021-11-15
journal: Braz J Anesthesiol
DOI: 10.1016/j.bjane.2021.10.002

INTRODUCTION: There are many possible sources of medical information; however, the quality of the information varies. Poor-quality or inaccurate resources may be harmful if providers trust them. This study aimed to analyze the quality of coronavirus disease 2019 (COVID-19)-related intubation videos on YouTube. METHODS: The term "COVID-19 intubation" was searched on YouTube. The top 100 videos retrieved were sorted by relevance, and 37 videos were included. The video demographics were recorded. The quality of the videos was analyzed using an 18-point checklist designed for evaluating COVID-19 intubation. Videos were also evaluated using general video quality scores and the modified Journal of the American Medical Association score. RESULTS: The educational quality was graded as good for eight (21.6%) videos, moderate for 13 (35.1%) videos, and poor for 16 (43.2%) videos. The median safe COVID-19 intubation score (SCIS) was 11 (IQR = 5-13). The SCISs indicated that videos prepared in an intensive care unit were higher in quality than videos from other sources (p < 0.05). The length of the video was predictive of quality (area under the curve = 0.802, 95% CI = 0.658-0.945, p = 0.10). CONCLUSIONS: The quality of YouTube videos for COVID-19 intubation is substandard. Poor-quality videos may provide inaccurate knowledge to viewers and potentially cause harm.
Keywords: Airway management, COVID-19, Coronavirus, Hand washing, Intubation

YouTube (www.youtube.com) is the second most visited website in the world, behind only Google. [1] Free and easy access makes YouTube one of the most popular sources of information. Given its popularity and accessibility, YouTube offers invaluable opportunities for the dissemination of medical information. However, unfiltered information may be unscientific, misleading, or even harmful. [2, 3] Intubation of a patient with COVID-19 carries a high risk for healthcare providers because of the highly contagious nature of the disease, which is transmitted by droplets or aerosols. Although there are some videos on YouTube about COVID-19 intubation, their quality has not been evaluated. We therefore aimed to assess the quality of COVID-19 intubation videos accessible on YouTube. The term "COVID-19 intubation" was searched on YouTube on May 9, 2020. The only search filter used was the "sort by" filter set to "relevance", which is the default for a typical YouTube search. Using previously described methods, and on the assumption that users rarely go beyond the first 100 videos for a specific search term, only the first 100 videos were evaluated. [2] The search was performed in the most current version of Google Chrome in incognito mode, with the cache cleared and all available updates enabled. The main researcher prescreened the top 100 videos and created a watch list.
First, two of the researchers (BA, TS) independently reviewed and scored the videos; then a third researcher (OC) reviewed and resolved any remaining discrepancies between the first two researchers. Only videos in English (or with comments or subtitles in English) were included, as English is a global language. Duplicates and irrelevant videos were excluded. Videos without a demonstration of intubation or that were unrelated to COVID-19 were also excluded. Videos that met the study criteria were assessed in terms of video length, total number of views, days online, daily views, likes, dislikes, upload source, recording location, general video quality score, JAMA score, and COVID-19 intubation score. The upload source was classified as an intensive care unit, an emergency room, or an operating room. When the upload source could not be determined, it was classified as "other". Approval by the Institutional Review Board for this report was unnecessary because only publicly accessible data were used. There were no validated evaluation tools available to assess online information regarding the intubation of COVID-19 patients. Thus, to determine the educational quality of video content, the authors BA and TS created a novel 18-point Safe COVID-19 Intubation Score (SCIS) based on a recently published clinical consensus statement and current recommendations. [4, 5] The SCIS consists of 18 items covering preparation, equipment, number of staff members, prevention measures, and precautions related to COVID-19 intubation recommendations. One point was assigned for each item fulfilled, resulting in a maximum possible score of 18 points (Table 1). The quality of videos was graded based on the SCIS as (1) good, if SCIS > 13; (2) moderate, if 7 < SCIS ≤ 13; and (3) poor, if SCIS ≤ 7. The reliability of the videos was assessed using the modified JAMA benchmark criteria.
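The scoring and grading rule above can be sketched in a few lines. The thresholds are those stated in the text; the function names and the example checklist are illustrative, not part of the study's materials.

```python
# Sketch of the SCIS grading rule described above: 18 binary checklist
# items, one point each (maximum 18). Thresholds follow the text;
# function names and the example are illustrative, not the paper's own.

def scis_score(items_fulfilled):
    """Sum one point per fulfilled checklist item (expects 18 items)."""
    if len(items_fulfilled) != 18:
        raise ValueError("SCIS has exactly 18 checklist items")
    return sum(1 for fulfilled in items_fulfilled if fulfilled)

def scis_grade(score):
    """Map a SCIS score to the study's quality grade."""
    if score > 13:
        return "good"
    if score > 7:
        return "moderate"
    return "poor"

# A video fulfilling 11 of 18 items (the study's median score)
# grades as "moderate":
median_video = [True] * 11 + [False] * 7
print(scis_grade(scis_score(median_video)))  # moderate
```

Note that the boundary scores fall on the lower grade of each pair: a score of exactly 13 grades as moderate, and exactly 7 as poor, matching the "≤" comparisons in the text.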
[6] The JAMA benchmark assesses the reliability of online information based on four parameters: authorship, attribution, disclosure, and currency. One point is given for each parameter, and a score of four points indicates the highest-quality information. To evaluate general video quality, the authors used a variation of the parameters defined in the Evaluation of the Video Media Guidelines. This tool consists of four sections (content, production, users, and presentation free of bias); the authors chose only the first three for the current study. These sections were previously used in another similar study. [7] Each parameter was evaluated with a Likert-type scale from 0 to 5: 0 = does not apply; 1 = very unsatisfying; 2 = unsatisfying; 3 = regular; 4 = satisfying; and 5 = very satisfying. Therefore, each video could reach a maximum score of 70. Data were analyzed using IBM SPSS Statistics version 21.0 software (IBM Co., Armonk, NY, USA). The data distribution was assessed using a Shapiro-Wilk test. Numerical variables are presented as medians (interquartile range, IQR) and categorical variables are reported as frequencies. Pairwise group comparisons of numeric variables were performed using Mann-Whitney U tests, while Kruskal-Wallis tests were used for comparisons of three or more groups. Categorical data were analyzed using Fisher's exact test. Spearman's rho correlation test was used to assess the correlation between the parameters. Interrater reliability (IRR) was separately calculated for the SCIS using Cohen's kappa coefficient (κ). Kappa values were interpreted according to the criteria defined by Landis and Koch. [8] The cutoff points were obtained by evaluating the best Youden index (sensitivity + specificity - 1) and the maximum area under the receiver operating characteristic (ROC) curve. A p-value < 0.05 was considered significant.
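To illustrate two of the statistics named above, the following is a minimal pure-Python sketch of Cohen's kappa and of a Youden-index cutoff search. The authors ran their analysis in SPSS; these functions are illustrative implementations of the underlying formulas, not the study's actual analysis code.

```python
# Illustrative implementations of Cohen's kappa (interrater agreement)
# and the Youden index J = sensitivity + specificity - 1, used to pick
# an ROC cutoff. Not the SPSS routines the authors used.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each category's marginal proportions.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

def best_youden_cutoff(values, labels):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1,
    classifying value >= cutoff as positive. `labels` are 0/1."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and not y)
        j = tp / positives + (1 - fp / negatives) - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

For example, with hypothetical video lengths [1, 2, 3, 4] minutes and quality labels [0, 0, 1, 1], the search returns a cutoff of 3 with J = 1.0, since that threshold separates the groups perfectly.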
The SCIS positively correlated with the general video quality score, JAMA score, and video length (rho = 0.875, p < 0.001; rho = 0.552, p < 0.001; and rho = 0.508, p = 0.001, respectively). The recording location was an intensive care unit for 12 (32.4%) videos, an operating room for eight (21.6%) videos, an emergency room for six (16.2%) videos, and other places for 11 (29.7%) videos. The SCIS and general video quality scores were significantly higher for intensive care unit-based videos than for the other videos (p < 0.05). The main finding of this study was that YouTube videos do not provide sufficient and comprehensive educational information for COVID-19 intubation. Poor results were twice as frequent as good results in terms of SCIS. More importantly, hand hygiene, double gloving, and doffing, which are key steps in preventing contamination, were demonstrated in only a limited number of videos (16.2%, 24.3%, and 10.8% of videos, respectively). The median SCIS of the videos was 11, which also indicates low quality. Our findings are consistent with the results of previous studies, which found low-quality scores for various medical conditions. [3] A report evaluating the quality of regional anesthesia videos found that half of the videos were of poor quality in relation to the procedure technique. [9] Similarly, a study on the brachial plexus also showed low-quality scores. [7] Umut et al. recently assessed endotracheal intubation videos on YouTube using their specific intubation scoring system, which included 15 items. They reported a mean score of 4.6/15 (± 2.7) among videos posted by academics. [10] This study demonstrates that most of the videos related to COVID-19 intubation on YouTube are of poor quality, as many omit key steps to prevent COVID-19 transmission during the procedure. Also, there was no correlation between the number of views and the quality of the content. As such, many viewers may obtain information from low-quality materials.
The authors declare no conflicts of interest.

References
1. Youtube.com Competitive Analysis, Marketing Mix and Traffic - Alexa
2. The reliability of bariatric surgery videos in YouTube platform
3. YouTube as a source of information on immunization: A content analysis
4. Anaesthesia and caring for patients during the COVID-19 outbreak
5. Recommendations for endotracheal intubation of COVID-19 patients
6. Assessing, controlling, and assuring the quality of medical information on the Internet: Caveant lector et viewor - Let the reader and viewer beware
7. YouTube as an informational source for brachial plexus blocks: evaluation of content and educational value
8. The measurement of observer agreement for categorical data
9. YouTube as an information source of spinal anesthesia, epidural anesthesia and combined spinal and epidural anesthesia
10. Evaluation of the content, quality, reliability and accuracy of YouTube videos regarding endotracheal intubation techniques