key: cord-0590208-wr2v9i1a
authors: Cohen, Joseph Paul; Dao, Lan; Morrison, Paul; Roth, Karsten; Bengio, Yoshua; Shen, Beiyi; Abbasi, Almas; Hoshmand-Kochi, Mahsa; Ghassemi, Marzyeh; Li, Haifang; Duong, Tim Q
title: Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning
date: 2020-05-24
journal: nan
DOI: nan
sha: e25686037f3bac6132718cce051c5fc0df705bb8
doc_id: 590208
cord_uid: wr2v9i1a

The need to streamline patient management for COVID-19 has become more pressing than ever. Chest X-rays provide a non-invasive (potentially bedside) tool to monitor the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia based on frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general), which can inform escalation or de-escalation of care as well as monitoring of treatment efficacy, especially in the ICU. Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement as well as the degree of opacity. A neural network model that was pre-trained on large (non-COVID-19) chest X-ray datasets is used to construct features for COVID-19 images which are predictive for our task. This study finds that training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with 1.14 mean absolute error (MAE) and our lung opacity score (range 0-6) with 0.78 MAE. All code, labels, and data are made available at https://github.com/mlmed/torchxrayvision and https://github.com/ieee8023/covid-chestxray-dataset

As the first countries explore deconfinement strategies [Wilson & Moulson, 2020], the death toll of COVID-19 keeps rising [O'Grady et al., 2020]. The increased strain caused by the pandemic on healthcare systems worldwide has prompted many physicians to resort to new strategies and technologies. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool to monitor the progression of the disease [Yoon et al., 2020; Ng et al., 2020]. As early as March 2020, Chinese hospitals used artificial intelligence (AI)-assisted computed tomography (CT) imaging analysis to screen COVID-19 cases and streamline diagnosis [Jin et al., 2020]. Many teams have since launched AI initiatives to improve triaging of COVID-19 patients (i.e., discharge, general admission, or ICU care) and allocation of hospital resources (e.g., directing patients from non-invasive to invasive ventilation) [Strickland, 2020]. While these recent tools exploit clinical data, practically deployable CXR-based predictive models remain lacking.

In this work, we build and study a model which predicts the severity of COVID-19 pneumonia, based on CXRs, to be used as an assistive tool when managing patient care. The ability to gauge the severity of COVID-19 lung infections can be used for escalation or de-escalation of care, especially in the ICU. An automated tool can be applied to patients over time to objectively and quantitatively track disease progression and treatment response.

We used a retrospective cohort of 94 posteroanterior (PA) CXR images from a public COVID-19 image data collection [Cohen et al., 2020b]. While the dataset currently contains 153 images, it contained only 94 images at the time of the experiment, all of which were included in this study. All patients were reported COVID-19 positive and were sourced from many hospitals around the world from December 2019 to March 2020.
The images were de-identified prior to our use and there were no missing data. The male/female ratio was 44/36, with a mean age of 56±14.8 years (55±15.6 for males and 57±13.9 for females).

Radiological scoring was performed by three blinded experts: two chest radiologists (each with at least 20 years of experience) and a radiology resident. They staged disease severity using a scoring system adapted from [Wong et al., 2019], based on two parameters: extent of lung involvement and degree of opacity.

1. The extent of lung involvement by ground glass opacity or consolidation for each lung (right lung and left lung separately) was scored as: 0 = no involvement; 1 = <25% involvement; 2 = 25-50% involvement; 3 = 50-75% involvement; 4 = >75% involvement. The total extent score ranged from 0 to 8 (right lung and left lung together).

2. The degree of opacity for each lung (right lung and left lung separately) was scored as: 0 = no opacity; 1 = ground glass opacity; 2 = consolidation; 3 = white-out. The total opacity score ranged from 0 to 6 (right lung and left lung together).

A spreadsheet was maintained to pair filenames with their respective scores. The Fleiss kappa for inter-rater agreement was 0.45 for the opacity score and 0.71 for the extent score.

Prior to the experiment, the model was trained on seven public datasets, none of which contained COVID-19 cases. These seven datasets were manually aligned to each other on 18 common radiological finding tasks in order to train a single model on all datasets at once (atelectasis, consolidation, infiltration, pneumothorax, edema, emphysema, fibrosis, effusion, pneumonia, pleural thickening, cardiomegaly, nodule, mass, hernia, lung lesion, fracture, lung opacity, and enlarged cardiomediastinum). For example, "pleural effusion" from one dataset was treated as equivalent to "effusion" from another dataset. In total, 88,079 non-COVID-19 images were used to train the model on these tasks.

In this study, we used a DenseNet model [Huang et al., 2017] from the TorchXRayVision library [Cohen et al., 2020c;a]. DenseNet models have been shown to predict pneumonia well [Rajpurkar et al., 2017]. Images were resized to 224×224 pixels, utilizing a center crop if the aspect ratio was uneven, and the pixel values were scaled to [-1024, 1024] for training. More details about the training can be found in [Cohen et al., 2020a].

Before processing the COVID-19 images, a pre-training step was performed using the seven datasets to train feature extraction layers and a task prediction layer (shown in Figure 1). This pre-training step used a large set of data in order to construct general representations of lungs and other aspects of CXRs that could not have been learned from the small set of COVID-19 images available. Some of these representations are expected to be relevant to our downstream tasks. There are a few ways we can extract useful features from the pre-trained model, as detailed in Figure 1. As with the images from the non-COVID-19 datasets used for pre-training, each image from the COVID-19 dataset was preprocessed (resized, center-cropped, rescaled), then processed by the feature extraction layers and the task prediction layer of the network. The network was trained on the existing datasets before its weights were frozen; COVID-19 images were then processed by the frozen network to generate features that were used in place of the images.
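To make this pipeline concrete, below is a minimal sketch of the preprocessing and feature extraction using the public TorchXRayVision API. The weight identifier, the input filename, and the manual pooling of the convolutional features into a 1024-dimensional vector are assumptions for illustration; the released library and training code are the authoritative reference.

```python
import skimage.io
import torch
import torch.nn.functional as F
import torchvision.transforms
import torchxrayvision as xrv

# DenseNet pre-trained on multiple (non-COVID-19) CXR datasets; the weight
# name is an assumption -- see the TorchXRayVision docs for released weights.
model = xrv.models.DenseNet(weights="densenet121-res224-all")
model.eval()

# Preprocess one CXR: scale pixels to [-1024, 1024], center-crop, resize to 224x224.
img = skimage.io.imread("covid_cxr.jpg")   # hypothetical input file
img = xrv.datasets.normalize(img, 255)     # map [0, 255] -> [-1024, 1024]
if img.ndim == 3:
    img = img.mean(axis=2)                 # collapse RGB to a single channel
transform = torchvision.transforms.Compose([
    xrv.datasets.XRayCenterCrop(),
    xrv.datasets.XRayResizer(224),
])
img = transform(img[None, ...])            # add channel dim, then crop/resize
x = torch.from_numpy(img).unsqueeze(0).float()  # shape: (1, 1, 224, 224)

with torch.no_grad():
    # 1024-dimensional intermediate features from the frozen convolutional layers.
    feats = F.adaptive_avg_pool2d(F.relu(model.features(x)), (1, 1)).flatten(1)
    # Pre-sigmoid outputs for the 18 aligned tasks from the task prediction layer.
    logits = model.classifier(feats)

lung_opacity = logits[0, model.pathologies.index("Lung Opacity")]
```

Either the 1024-dimensional `feats` vector or a subset of the 18 `logits` can then stand in for the image itself, as described next.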
As was the case with images from the seven non-COVID-19 datasets, the feature extraction layers produced a representation of the 94 COVID-19 images as a 1024-dimensional vector, and the fully connected task prediction layer then produced outputs for each of the 18 original tasks. We build models on the pre-sigmoid outputs. Linear regression was performed to predict the aforementioned scores (extent of lung involvement and opacity) using the following sets of features in place of the image itself:

1. Intermediate network features - the result of the convolutional layers applied to the image, a 1024-dimensional vector which is passed to the task prediction layer;

2. 18 outputs - each image was represented by the 18 outputs (pre-sigmoid) from the pre-trained model;

3. 4 outputs - a hand-picked subset of outputs (pre-sigmoid) containing radiological findings more frequent in pneumonia (lung opacity, pneumonia, infiltration, and consolidation);

4. Lung opacity output - the single output (pre-sigmoid) for lung opacity, used because it is closely related to this task. This feature is distinct from the opacity score that we aim to predict.

For each experiment performed, the 94-image COVID-19 dataset was randomly split into a train and test set, roughly 50/50 (a minimal sketch of this protocol follows below). Multiple timepoints from the same patient were grouped into the same split so that no patient spanned both sets. Sampling was repeated throughout training in order to obtain a mean and standard deviation for each performance metric. Because linear regression was used, no early stopping was needed to prevent the model from overfitting. Therefore, the criterion for determining the final model was simply the mean squared error (MSE) on the training set.

In order to ensure that the models are looking at reasonable aspects of the images [Reed & Marks, 1999; Zech et al., 2018; Viviano et al., 2019], a saliency map is computed by taking the gradient of the output prediction with respect to the input image (i.e., how much the prediction would change if a pixel were changed); this computation is also sketched below. To smooth the saliency map, it is blurred using a 5×5 Gaussian kernel. Keep in mind that these saliency maps have limitations and offer only a restricted view into why a model made a prediction [Ross et al., 2017; Viviano et al., 2019].

Quantitative performance metrics. The single "lung opacity" output as a feature yielded the best correlation (0.80), followed by the 4-output feature set (lung opacity, pneumonia, infiltration, and consolidation) at 0.79 (Tables 1 and 2). Building a model on only a few outputs provides the best performance. The mean absolute error (MAE) is useful to understand the error in units of the scores that are predicted, while the mean squared error (MSE) helps to rank the different methods by their furthest outliers. One possible reason that fewer features work best is that having fewer parameters prevents overfitting. Some features could serve as proxy variables for confounding attributes such as sex or age, and excluding them prevents such confounders from hurting generalization performance. Hand-selecting feature subsets which are intuitively related to this task imparts domain knowledge as a bias on the model, which improves performance. Thus, the top performing model (using the single "lung opacity" output as a feature) is used for the subsequent qualitative analysis.
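The evaluation protocol above (patient-grouped, roughly 50/50 splits, repeated to obtain a mean and standard deviation) can be sketched with scikit-learn as follows. The arrays `X`, `y`, and `patient_ids`, and the number of repetitions, are hypothetical placeholders, not the study's actual data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GroupShuffleSplit

# Stand-ins for the real data: pre-sigmoid outputs (here k=1, e.g. "lung
# opacity" alone), severity scores, and one patient id per image.
rng = np.random.default_rng(0)
X = rng.normal(size=(94, 1))
y = rng.integers(0, 9, size=94).astype(float)
patient_ids = rng.integers(0, 60, size=94)

# Grouping by patient ensures no patient spans both the train and test sets.
splitter = GroupShuffleSplit(n_splits=100, test_size=0.5, random_state=0)
maes = []
for train_idx, test_idx in splitter.split(X, y, groups=patient_ids):
    reg = LinearRegression().fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], reg.predict(X[test_idx])))
print(f"MAE: {np.mean(maes):.2f} ± {np.std(maes):.2f}")
```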
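The saliency computation (input gradient followed by Gaussian smoothing) can likewise be sketched as below, reusing the pooled-feature pattern from the earlier snippet. Under the best linear model, the gradient of a single pre-sigmoid output is proportional to the gradient of the predicted severity score. The choice sigma=1 is an assumption; the text only specifies the 5×5 kernel size.

```python
import torch
import torch.nn.functional as F
from scipy.ndimage import gaussian_filter

def saliency_map(model, x, output_index):
    """Gradient of one pre-sigmoid output w.r.t. an input of shape (1, 1, 224, 224)."""
    x = x.clone().requires_grad_(True)
    feats = F.adaptive_avg_pool2d(F.relu(model.features(x)), (1, 1)).flatten(1)
    logits = model.classifier(feats)
    logits[0, output_index].backward()       # populate x.grad for this output
    grad = x.grad[0, 0].abs().numpy()
    # sigma=1 with truncate=2 yields a 5x5 kernel; the exact sigma is an assumption.
    return gaussian_filter(grad, sigma=1, truncate=2)
```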
Qualitative analysis of predicted scores. Figure 2 shows the top performing model's (using the single "lung opacity" output as a feature) predictions against the ground truth scores (given by the blinded experts) on held-out test data. The majority of the data points fall close to the line of unity. The model overestimates scores between 1 and 3 and underestimates scores above 4. However, the predictions generally seem reasonable given the agreement of the raters.

Studying learned representations. In Figure 3, we explore what the representation used by one of the best models looks at, in order to identify signs of overfitting and to gain insight into the variation of the data. A t-distributed stochastic neighbor embedding (t-SNE) [van der Maaten & Hinton, 2008] is computed on all data (even images that were not scored) in order to project the features into a two-dimensional (2D) space (a minimal sketch of this projection follows at the end of this section). Each CXR is represented by a point in a space where relationships to other points are preserved from the higher-dimensional space. The cases of the survival group tend to cluster together, as do the cases of the deceased group. This clustering suggests that the score predictions align with clinical outcomes.

Inspecting saliency maps. In Figure 4, we study images which were not seen by the model during training. For most of the results, the model is correctly looking at opaque regions of the lungs. Figure 4b shows no signs of opacity and the model is focused on the heart and diaphragm, likely a sign that these are used as an intensity reference when determining what qualifies as opaque. In Figures 4c and 4d, we see erroneous predictions.

In the context of a pandemic and the urgency to contain the crisis, research has increased exponentially in order to alleviate the burden on healthcare systems. However, many prediction models for diagnosis and prognosis of COVID-19 infection are at high risk of bias and model overfitting, and are poorly reported, so their claimed performance is likely optimistic [Wynants et al., 2020]. In order to prevent premature implementation in hospitals [Ross, 2020], tools must be robustly evaluated along several practical axes [Wiens et al., 2019; Ghassemi et al., 2019; Cohen et al., 2020a]. Indeed, while some AI-assisted tools might be powerful, they do not replace clinical judgment, and their diagnostic performance cannot be assessed or claimed without a proper clinical trial [Nagendran et al., 2020].

Our model's ability to gauge the severity of COVID-19 lung infections could be used for escalation or de-escalation of care as well as monitoring treatment efficacy, especially in the intensive care unit (ICU) [Toussie et al., 2020]. The use of a score combining geographic extent and degree of opacity allows clinicians to compare CXR images with each other using a quantitative and objective measure. This can also be done at scale for large-scale analyses.

Existing work focuses on predicting severity from a variety of clinical indicators, including findings from chest imaging [Jiang et al., 2020; Shi et al., 2020]. Models such as the one presented in this work can complement and improve those models and potentially help to make decisions from CXR as opposed to CT. Challenges in creating such a predictive model include labelling the data, achieving good inter-rater agreement, and learning a representation which will generalize to new images when the number of labelled images is so low.
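Before turning to limitations, here is the promised sketch of the t-SNE projection used for Figure 3, via scikit-learn. The `features` array (one 1024-dimensional row per image) is a hypothetical stand-in for the intermediate network features.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for the (n_images, 1024) intermediate network features.
rng = np.random.default_rng(0)
features = rng.normal(size=(153, 1024))

# Project to 2D; points close in feature space stay close in the plot.
embedding = TSNE(n_components=2, random_state=0).fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], s=10)
plt.title("t-SNE of CXR features")
plt.show()
```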
In the case of building a predictive tool for COVID-19 CXR images, the lack of a public database made it difficult to conduct large-scale robust evaluations. The small number of samples prevented proper cohort selection, which is a limitation of this study and exposes our evaluation to sample bias. However, we used a model trained on a large dataset for related tasks, which provided us with a robust, unbiased COVID-19 feature extractor and allowed us to learn only two parameters for our best linear regression model. Restricting the complexity of the learned model in this way reduces the possibility of overfitting. Our evaluation could be improved if we were able to obtain new cohorts labelled with the same severity score to ascertain the generalization of our model. Also, it is unknown whether these radiographic scores of disease severity reflect actual functional or clinical outcomes, as the public data do not include such outcomes. We make the images, labels, model, and code from this work public so that other groups can perform follow-up evaluations.

References

A large chest x-ray image dataset with multi-label annotated reports
On the limits of cross-domain generalization in automated X-ray prediction
Preparing a collection of radiology examinations for distribution and retrieval
Practical guidance on artificial intelligence for health-care data
Densely Connected Convolutional Networks
A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. In AAAI Conference on Artificial Intelligence
Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning
Towards an Artificial Intelligence Framework for Data-Driven Prediction of Coronavirus Clinical Severity. Computers, Materials & Continua
Military Medical Research
MIMIC-CXR: A large publicly available database of labeled chest radiographs
Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation
Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies in medical imaging
Imaging Profile of the COVID-19 Infection: Radiologic Findings and Literature Review. Radiology: Cardiothoracic Imaging
Covid-19 death toll surpasses 2,000 in one day and 100,000 total worldwide
Neural smithing: supervised learning in feedforward artificial neural networks
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
AI used to predict Covid-19 patients' decline before proven to work
Host susceptibility to severe COVID-19 and establishment of a host risk score: Findings of 487 cases outside Wuhan
AI Can Help Hospitals Triage COVID-19 Patients
Clinical and Chest Radiography Features Determine Patient Outcomes in Young and Middle Age Adults with COVID-19. Radiology
Underwhelming Generalization Improvements From Controlling Feature Attribution
ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases
Do no harm: a roadmap for responsible machine learning for health care
Children in Spain allowed to play outdoors as country eases COVID-19 lockdown
Frequency and Distribution of Chest Radiographic Findings in COVID-19 Positive Patients
Prediction models for diagnosis and prognosis of covid-19 infection: Systematic review and critical appraisal
Chest Radiographic and CT Findings of the 2019 Novel Coronavirus Disease (COVID-19): Analysis of Nine Patients Treated in Korea
Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study

This research is based on work partially supported by the CIFAR AI and COVID-19 Catalyst Grants. This work utilized the supercomputing facilities managed by Compute Canada and Calcul Quebec. We thank AcademicTorrents.com for making data available for our research. This project was approved by the University of Montreal's Ethics Committee (#CERSES-20-058-D).