key: cord-1012193-qrwo846j
authors: Martinez-Lacalzada, M.; Viteri-Noel, A.; Manzano, L.; Fabregate-Fuente, M.; Rubio-Rivas, M.; Luis Garcia, S.; Arnalich Fernandez, F.; Beato Perez, J. L.; Calvo Manuel, E.; Constanza Espino, A.; Freire Castro, S. J.; Loureiro-Amigo, J.; Pesqueira Fontan, M.; Pina, A.; Alvarez Suarez, A. M.; Silva Asiain, A.; Garcia Lopez, B.; Luque del Pino, J.; Sanz Canovas, J.; Chazarra Perez, P.; Garcia Garcia, G. M.; Millan Nunez-Cortes, J.; Casas Rojo, J. M.; Gomez Huelgas, R.
title: Predicting critical illness on initial diagnosis of COVID-19: Development and validation of the PRIORITY model for outpatient applicability.
date: 2020-11-30
journal: nan
DOI: 10.1101/2020.11.27.20237966
sha: 31cac566ad3d554f620284ac6b2c308deffbeead
doc_id: 1012193
cord_uid: qrwo846j

OBJECTIVE To develop and validate a prediction model, based on clinical history and examination findings on initial diagnosis of COVID-19, to identify patients at risk of critical outcomes.
DESIGN National multicenter cohort study.
SETTING Data from the SEMI (Sociedad Española de Medicina Interna) COVID-19 Registry, a nationwide cohort of consecutive COVID-19 patients presenting in 132 centers between March 23 and May 21, 2020. Model development used data from hospitals with ≥ 300 beds, and validation used data from hospitals with < 300 beds.
PARTICIPANTS Adults (age ≥ 18 years) presenting with a COVID-19 diagnosis.
MAIN OUTCOME MEASURE Composite of in-hospital death, mechanical ventilation, or admission to an intensive care unit.
RESULTS There were 10,433 patients: 7,850 (main outcome rate 25.1%) in the model development cohort and 2,583 (main outcome rate 27.0%) in the validation cohort. The clinical variables in the final model were age, cardiovascular disease, moderate or severe chronic kidney disease, dyspnea, tachypnea, confusion, systolic blood pressure, and SpO2 ≤ 93% or supplementary oxygen requirement at presentation. The developed model had a C-statistic of 0.823 (95% confidence interval [CI] 0.813 to 0.834) and a calibration slope of 0.995. In external validation, the C-statistic was 0.792 (95% CI 0.772 to 0.812) and the calibration slope was 0.872. The model showed a positive net benefit, in terms of hospitalizations avoided, for predicted probability thresholds between 3% and 79%.
CONCLUSIONS Among patients presenting with COVID-19, easily obtained basic clinical information had good discrimination for identifying patients at risk of critical outcomes, and the model showed good generalizability. A model-based online prediction calculator provided with this paper can facilitate triage of patients during the pandemic.

The clinical spectrum of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection ranges from an asymptomatic state to critical illness; the symptomatic profile is called coronavirus disease 2019, or COVID-19. 1,2 As of November 25, 2020, the COVID-19 pandemic has affected more than 57 million people worldwide and has led to nearly 1,300,000 deaths. 3 Notably, Spain has been one of the countries with the highest number of patients with COVID-19. 4 To optimize the use of limited healthcare resources, it is essential to identify, as early as possible, patients at high risk of progressing to critical illness requiring admission to an intensive care unit (ICU) or mechanical ventilation, or leading to death.
To date, studies of COVID-19 prognostic factors 5-15 have focused on laboratory test results and in-hospital data obtained following admission. They have tended not to include clinical variables that could easily be obtained from the history and examination carried out on initial assessment in an outpatient setting. One machine learning model has addressed basic clinical features, 16 but it was restricted to predicting mortality and lacks wider generalizability. A critical appraisal of the COVID-19 models 14 has shown poor reporting and a high risk of bias. Recently published, well-developed models 9-13 rely on radiological examinations and laboratory measurements such as blood counts, creatinine, lactate dehydrogenase, direct bilirubin, urea, and C-reactive protein levels. These tests are not available on initial assessment or in resource-limited settings. Prediction models based on easy-to-collect data, without imaging or laboratory measures, have previously been developed for other infectious diseases, e.g. during meningitis epidemics and for pneumonia. 17-19 Management of COVID-19, a global health emergency, would likewise benefit from a readily applicable prediction model that can be used on initial diagnosis without the need for radiological or laboratory tests. Therefore, we developed and externally validated a prediction model, based on easily obtained clinical measures at presentation with a confirmed COVID-19 diagnosis, to identify patients at risk of developing critical outcomes.

This study was based on the SEMI (Sociedad Española de Medicina Interna) COVID-19 Registry, the Spanish national registry of COVID-19 patients. 20 It is an ongoing multicenter nationwide cohort of consecutive patients hospitalized for COVID-19 across Spain. At the inception of the cohort, patients were confirmed as COVID-19 cases, defined as a positive result on real-time reverse-transcription polymerase chain reaction (RT-PCR) for SARS-CoV-2 in nasopharyngeal swab specimens or sputum samples. Exclusion criteria were age under 18 years, subsequent admissions of the same patient, and refusal or withdrawal of informed consent. Clinical baseline data, history of previous medication, known comorbidities, and laboratory and imaging variables were collected on admission. In addition, treatments administered, complications during hospitalization, and status on the day of discharge and/or 30 days after diagnosis were obtained. The Registry's characteristics have been described in detail previously. 20 The SEMI-COVID-19 Registry was approved by the Provincial Research Ethics Committee of Malaga (Spain) and by the Institutional Research Ethics Committee of each participating hospital.

For this study, we used Registry data from patients admitted to 132 hospitals between March 23 and May 21, 2020. Development and validation cohorts were defined according to hospital size: model development was performed on a cohort of patients from hospitals with at least 300 beds, and the model was validated on a separate cohort from hospitals with fewer than 300 beds.
This approach was taken to examine the external validity of the prognostic model 21 in a setting of lower complexity than the development setting. 22 The study was reported following the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) recommendations. Comorbidities considered as candidate predictors included moderate or severe chronic kidney disease (including a history of dialysis), malignancy (solid tumor, leukemia or lymphoma), chronic liver disease, and immunocompromised status (autoimmune diseases, solid-organ transplant recipients, HIV infection, or previous immunosuppressive treatment including systemic steroids). Clinical signs and symptoms were cough, arthromyalgia, ageusia/anosmia, asthenia/anorexia, headache, gastrointestinal symptoms, fever (defined as temperature ≥ 38 °C or history of fever), systolic blood pressure, heart rate, tachypnea (respiratory rate > 20 breaths per minute), pulmonary rales, confusion, dyspnea, and peripheral oxygen saturation by pulse oximetry (SpO2) ≤ 93% on room air or supplementary oxygen requirement at admission. 26

To improve consensus on model applicability, a one-round online questionnaire was conducted among a multidisciplinary panel of 24 physicians involved in COVID-19 clinical management at nursing homes, emergency departments, primary care centers, and hospitalization wards (six per setting). The panelists were asked to rate, on a 1-to-9 Likert scale, the availability/reliability of each predictor and its ability to predict the outcome, as well as the best way to combine rarely occurring predictors and the maximum number of variables the model should contain. Agreement was considered to exist when ≤ 7 panelists rated outside the 3-point region containing the median. 27 Owing to the global public health emergency status of the COVID-19 pandemic, this study was conducted without the opportunity for patient and public involvement.

The predictive model, called PRIORITY, is presented as a formula for estimating the probability of the critical COVID-19 illness outcome, together with an associated web-based calculator. To develop and validate the model, patients' characteristics were summarized as frequencies and percentages for categorical variables and as means and standard deviations (SD) for continuous variables. Statistical analysis was performed with R software version 4.0.0 (The R Foundation for Statistical Computing), using the mice, mfp, glmnet, pROC, and rmda packages.

Model development: Missing values in the potential predictors were imputed using single imputation. A stochastic single-imputation dataset was created for each cohort (development and validation) as the first of a series of datasets generated by multiple imputation by chained equations. Single imputation was chosen as a reasonable alternative to handling multiple completed datasets, given the relatively small amount of missing data. 28 Quantitative variables were kept continuous to avoid loss of prognostic information, and non-linear relationships were modelled using multivariable fractional polynomials with a maximum of two degrees of freedom. 29 The least absolute shrinkage and selection operator (LASSO) method 30 was used to identify a parsimonious set of potential predictors of critical illness.
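As a concrete illustration of the single-imputation step described above, the sketch below takes the first completed dataset from a chained-equations imputation in R. It is a minimal sketch, not the authors' code, and the data frame names (dev_raw, val_raw) are hypothetical.

```r
# Minimal sketch of the single-imputation step (not the authors' code).
# 'dev_raw' and 'val_raw' are hypothetical data frames containing the candidate
# predictors and the outcome for the development and validation cohorts.
library(mice)

impute_once <- function(df, seed = 1) {
  imp <- mice(df, m = 1, maxit = 10, seed = seed, printFlag = FALSE)
  complete(imp, 1)  # first (and only) completed dataset
}

dev <- impute_once(dev_raw)
val <- impute_once(val_raw)
```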
We selected the regularization penalty parameter (λ) that minimized the 10-fold cross-validation mean squared error (MSE), subject to the maximum number of predictors agreed by the expert panel. This subset of predictors was then entered into a logistic regression model, and those that were statistically significant (p < 0.05) were retained for the final model. The model coefficients were expressed as odds ratios (OR), and 95% confidence intervals (95% CI) were obtained using 1,000 bootstrap samples.

Model performance: We used Nagelkerke's R2 to evaluate the overall predictive accuracy of the model. Overall discriminatory ability was assessed using the C-statistic, i.e. the area under the receiver operating characteristic curve (AUC ROC), with 95% CI obtained by stratified bootstrap resampling. Calibration of the model was assessed graphically in a plot with predictions by deciles of risk on the x-axis and observed proportions of outcomes on the y-axis, together with a locally estimated scatterplot smoothing (LOESS) curve. We obtained an overfitting-corrected estimate of the calibration slope from the calibration plot by bootstrapping 1,000 resamples; a well-calibrated model has a slope of 1. 31

Model validation: To estimate the reproducibility of the model's predictions for the underlying population from which the data originated (internal validation), potential overfitting and optimism were assessed by 10-fold cross-validation. 30 Moreover, to assess the model's stability and generalizability to different settings, we externally validated the final model in a separate cohort of patients admitted to smaller hospitals (< 300 beds). The use of a less complex hospital setting also helped to assess model generalizability. 21 We report the same measures of performance as in the model development cohort.

To assess the impact of imputation of missing values, we carried out a complete-case analysis, using for model development only those patients with complete data on the potential predictors. We also developed a full model with no restriction on the maximum number of predictors (selecting the λ at which the MSE was within one standard error of the minimum MSE). We then developed an alternative model using linear continuous predictors instead of fractional polynomial terms.

Decision curve analysis: We undertook decision curve analysis (DCA) 32 to assess the clinical usefulness of the predictive model in terms of net benefit (NB) when used to prioritize hospital referral of patients most likely to require critical care. Across the whole range of decision threshold probabilities (pt), the net benefit of the model was compared with the default strategies of treating all or no patients. The NB was calculated as the percentage of true positives minus the percentage of false positives weighted by the "harm-to-benefit" ratio, pt/(1 − pt). We represented NB versus pt in a decision curve plot. The benefit of the prediction model was also quantified as the reduction in avoidable hospitalization referrals per 100 patients, calculated as (NB of the model − NB of treat-all)/(harm-to-benefit ratio) × 100. 32 The choice of threshold probability pt will vary across regions, according to changing epidemiological situations and the availability of health resources, taking into account that the intervention would consist of referring the patient to a hospital.
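To make the selection and validation steps above concrete, here is a minimal R sketch under stated assumptions: it is not the authors' code, all variable names are hypothetical, binary predictors are assumed coded 0/1, the fractional-polynomial transformations are taken for illustration as those reported in the results (age squared, reciprocal of systolic blood pressure squared), and λ is chosen as lambda.min rather than with the additional panel-agreed cap on the number of predictors.

```r
# Sketch of LASSO selection, refitting, and performance assessment
# (assumptions as noted in the text above; not the authors' code).
library(glmnet)
library(pROC)

# Candidate predictor matrix: fractional-polynomial terms plus 0/1 predictors.
x <- model.matrix(~ I(age^2) + I(1 / sbp^2) + cardiovascular_disease +
                    chronic_kidney_disease + dependency + dyspnea + tachypnea +
                    confusion + low_spo2_or_oxygen, data = dev)[, -1]
y <- dev$critical_illness

set.seed(1)
cv_fit <- cv.glmnet(x, y, family = "binomial", nfolds = 10, type.measure = "mse")
coefs  <- coef(cv_fit, s = "lambda.min")
selected <- rownames(coefs)[as.vector(coefs) != 0][-1]   # drop the intercept

# Refit an unpenalised logistic regression on the LASSO-selected predictors.
fit <- glm(reformulate(selected, response = "critical_illness"),
           family = binomial, data = dev)

# Apparent discrimination (C-statistic) in the development cohort.
dev_pred <- predict(fit, type = "response")
auc(roc(dev$critical_illness, dev_pred))

# External validation: discrimination and calibration slope in the
# smaller-hospital cohort.
val_pred <- predict(fit, newdata = val, type = "response")
auc(roc(val$critical_illness, val_pred))
coef(glm(val$critical_illness ~ qlogis(val_pred), family = binomial))[2]
```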
At a low threshold, false negatives are minimized at the expense of unnecessary referrals. At a high threshold, patients would be referred less frequently, but some high-risk patients might not be referred to hospital.

From a total of 11,523 patients in the SEMI-COVID-19 Registry, 10,433 were included in the analysis: 7,850 in the development cohort and 2,583 in the validation cohort. From an initial list of 29 candidate variables, the expert panel reached agreement on 21 potential predictors for further evaluation in the predictive model; chronic liver disease, previous medication (ACE inhibitors and ARBs), cough, arthromyalgia, ageusia/anosmia, asthenia/anorexia, headache, and gastrointestinal symptoms were therefore excluded. Consensus was also achieved on including between 5 and 9 variables in the final model. These 21 potential predictors of critical illness were entered into the LASSO predictor selection process. A subset of 9 variables was retained as the best predictors of critical illness (eFigure 1): age squared, moderate or severe dependency, cardiovascular disease, moderate or severe chronic kidney disease, dyspnea, tachypnea, confusion, the reciprocal of systolic blood pressure squared, and SpO2 ≤ 93% or supplementary oxygen requirement. A multivariable logistic regression model was then fitted with these 9 variables. All of them, except moderate or severe dependency, were statistically significant independent predictors of critical illness and were therefore included in the final prediction model (Table 2). Based on the logistic regression model, the probability of critical COVID-19 illness can be calculated from the fitted coefficients reported in Table 2; the formula is implemented in the accompanying web-based calculator.

The model showed good discrimination in the development cohort, with an apparent C-statistic of 0.823 (95% CI 0.813 to 0.834) (Figure 1a). After bootstrap resampling, the agreement between observed outcomes and predicted probabilities in the development cohort showed good calibration, with a slope of 0.995 (Figure 2a). The validation cohort included 2,583 patients (main outcome rate 27.0%). In external validation, the model had a C-statistic of 0.792 (95% CI 0.772 to 0.812) (Figure 1b) and a calibration slope of 0.883 (Figure 2b).

We carried out a complete-case analysis selecting as the development cohort the 5,513 patients with complete data on the 21 potential predictors and the outcome. The resulting model had the same predictors as the final model with imputed data. R2 was 0.324, with an apparent C-statistic of 0.813 (95% CI 0.800 to 0.823) and a slope of 0.992. Next, using the original development cohort, we fitted a model with no restriction on the maximum number of predictors, which resulted in a model with 15 variables, adding sex, moderate or severe dependency, diabetes mellitus, malignancy, immunocompromised status, pulmonary rales, and heart rate cubed to the 8 predictors in the PRIORITY model. R2 was 0.348, with a C-statistic of 0.832 (95% CI 0.821 to 0.842) and a calibration slope of 0.995. Likewise, we fitted an alternative model using linear continuous predictors instead of fractional polynomial terms.
The linear term for systolic blood pressure was not a significant predictor of critical illness, while moderate or severe dependency was included in the model. R2 was 0.339, with a C-statistic of 0.819 (95% CI 0.809 to 0.830) and a slope of 0.996.

The decision curve analysis (Figure 3) showed a positive net benefit for threshold probabilities (pt) between 3% and 79%, compared with the default strategies (treat-all or treat-none). For low thresholds, below 3%, the net benefit of the model was comparable to managing all COVID-19 patients as if they would progress to critical illness (treat-all strategy). Table 3 presents estimates of the net benefit of using the model and of the reduction in avoidable hospitalization referrals for different probability thresholds.

We developed and validated a new clinical risk model to predict COVID-19 critical illness based on eight simple clinical features easily available on initial assessment in out-of-hospital settings. The model was well calibrated, had good discrimination, and performed robustly in an external validation cohort. Moreover, it showed a potential clinical benefit in a variety of scenarios covering different healthcare situations over a range of threshold probabilities, highlighting its practical usefulness. Its web-based calculator can facilitate immediate application by frontline clinicians facing the current COVID-19 peak.

This study has several methodological strengths maximizing internal and external validity. 23 To the best of our knowledge, this is the first generalizable COVID-19 predictive model built with simple clinical information for use in outpatient settings, excluding imaging and laboratory data. We developed and validated the model in a large, multicenter, national cohort. Our cohort was twice as large as that of the previous model using simple information, 16 and one of the largest among all previously published models. 9-16 Our model excluded readmissions, which focuses the analysis on the question of interest, i.e. the need for triage of patients at their first COVID-19 presentation. Moreover, the methodology was rigorous, avoiding biases that affected previous studies, 14 and we complied with the recommendation to avoid data-driven predictor selection. 14

The strength of our findings should be interpreted in light of some limitations. Although we carefully selected easily available clinical and demographic variables that could be collected in an outpatient setting, the data were obtained at the time of hospital admission. In this regard, it should be kept in mind that during the first COVID-19 peak many patients were hospitalized despite low symptom severity, as part of prudent management when little was known about the clinical disease course. We used registry data collected under the healthcare pressure of the pandemic peak, so data quality may vary across centers.
Notably, missing data per predictor variable were relatively infrequent. To reduce the impact of data loss we used imputation, and a sensitivity analysis showed that the model with imputed data was robust compared with the complete-case model. The complete-case dataset was 27% smaller than the imputed dataset, which compares favourably with a previous model using radiology and laboratory tests, 12 where the complete-case dataset was 35% smaller; our rate of patients with missing data was therefore lower. The impact of other assumptions adopted in model development was also evaluated. For example, restricting the maximum number of predictors to 8 (as recommended by the expert panel to enhance usability in clinical practice) did not limit model performance compared with a 15-predictor model developed without restrictions (R2 0.346 versus 0.348; C-statistic 0.823 versus 0.832, respectively). Considering the balance between strengths and limitations, our model is ready to be applied as a triage tool within the context of an evaluative study, to consolidate the evidence on its effectiveness in practice.

An external validation study of 22 previously published prognostic models, including those based on laboratory and radiological data, showed that oxygen saturation and age were the most discriminating univariable predictors of in-hospital mortality, and that none of the multivariable models outperformed these individual predictors. 15 It is important to point out that the PRIORITY model, despite its simplicity, showed performance similar to previously published models that included imaging and laboratory data. For example, our model (apparent C-statistic 0.823, 95% CI 0.813 to 0.834; external validation C-statistic 0.792, 95% CI 0.772 to 0.812) would be expected to dominate, in health-economic terms, the model of Knight et al. 12 (apparent C-statistic 0.79, 95% CI 0.78 to 0.79; external validation C-statistic 0.77, 95% CI 0.76 to 0.77), because it would not incur the costs of imaging and laboratory tests.

Our model could be applied for triage, using easily measurable variables available in outpatient settings, to identify high-risk patients for referral to hospital. The DCA (Figure 3) provides information to underpin clinical management and policy-making under COVID-19 pandemic pressure. The PRIORITY model has potential value, yielding a higher net benefit than the default strategies of treat-all or treat-none (hospitalize all or hospitalize none) over a range of risk thresholds that could be considered relevant in clinical practice. For example, under pandemic peak pressure or in low-resource healthcare systems, policy-makers may consider a cut-off point of up to 20%, a threshold associated with a greater reduction in unnecessary critical care admissions. However, in situations with low numbers of COVID-19 cases and little risk of overwhelming critical care capacity, a lower threshold may be considered.
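To illustrate how the quantities discussed above behave at a chosen threshold, the sketch below implements the net-benefit and avoided-referrals formulas given in the Methods. It is a minimal sketch, not the authors' code: the vectors of predicted probabilities and observed outcomes are hypothetical, and the full decision curve could be produced with dedicated tools such as the rmda package named in the statistical analysis section.

```r
# Net benefit and reduction in avoidable hospitalization referrals at a given
# threshold probability pt (formulas as defined in the Methods; variable names
# are hypothetical, not taken from the study code).
net_benefit <- function(pred, y, pt) {
  referred <- pred >= pt
  tp <- mean(referred & y == 1)       # true positives per patient
  fp <- mean(referred & y == 0)       # false positives per patient
  tp - fp * pt / (1 - pt)             # weighted by the harm-to-benefit ratio
}

nb_treat_all <- function(y, pt) {
  mean(y == 1) - mean(y == 0) * pt / (1 - pt)
}

# Referrals avoided per 100 patients relative to the treat-all strategy.
referrals_avoided <- function(pred, y, pt) {
  (net_benefit(pred, y, pt) - nb_treat_all(y, pt)) / (pt / (1 - pt)) * 100
}

# Example call at a 20% threshold (val_pred and outcomes as in the earlier sketch):
# referrals_avoided(val_pred, val$critical_illness, pt = 0.20)
```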
For example, a 5% cut-off could be appropriate for decisions on early referral to hospital care, minimizing the risk that critical illness develops without ward-level inpatient monitoring. We recommend objectively defining specific cut-off points according to the circumstances and the availability of health resources. This approach would allow patients below the risk threshold to be managed as safely as possible within the community.

In summary, we developed and validated a new prediction model, called PRIORITY, to estimate the risk of critical illness in patients with COVID-19, based on eight clinical variables easily measurable in out-of-hospital settings. This model could help in the triage of outpatients at risk of critical COVID-19 illness. The study provides underpinning evidence to inform decision-making in health systems under pandemic pressure.

We gratefully acknowledge all the investigators who participate in the SEMI-COVID-19 Registry. This research did not receive external funding. The authors declare no conflict of interest.

Table 3 footnote. Net benefit: percentage of true positives minus the percentage of false positives weighted by the ratio pt/(1 − pt). Reduction in avoidable hospitalization referrals per 100 patients: (net benefit of the model − net benefit of treat-all)/(pt/(1 − pt)) × 100.

Figure 1. Area under the receiver operating characteristic curve (AUC ROC) of the predictive model for critical illness among patients hospitalized with COVID-19. (a) AUC ROC in the development cohort, n = 7,850 patients from hospitals with at least 300 beds. (b) AUC ROC in the validation cohort, n = 2,583 patients from hospitals with fewer than 300 beds. 95% confidence intervals (CI) computed with 1,000 bootstrap replicates.

Figure 2. Calibration curves of the model predicting COVID-19 critical illness. (a) Development cohort, n = 7,850 patients from hospitals with at least 300 beds. (b) Validation cohort, n = 2,583 patients from hospitals with fewer than 300 beds.
In the upper panel, the x-axis represents model predictions and the y-axis the observed critical illness rates; circles represent deciles of risk according to the model predictions, plotted against the observed critical illness rate; linear and local (LOESS) regression lines are shown to visualize the agreement between observed and predicted values. The lower panel shows the histogram of predicted critical illness risk across the range of predictions.

Figure 3. Decision curves of the predictive model for severe COVID-19. The x-axis represents threshold probabilities and the y-axis the net benefit.

References
Features of 20 133 UK patients in hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: prospective observational cohort study
Clinical Characteristics of Coronavirus Disease 2019 in China
COVID-19 situation reports
Consumo y Bienestar Social. Enfermedad por el coronavirus
Clinical Characteristics of 138 Hospitalized Patients with 2019 Novel Coronavirus-Infected Pneumonia in Wuhan, China
Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study
Presenting Characteristics, Comorbidities, and Outcomes Among 5700 Patients Hospitalized With COVID-19 in the New York City Area
Characteristics and predictors of death among 4,035 consecutively hospitalized patients with COVID-19 in Spain
Clinical risk score to predict in-hospital mortality in COVID-19 patients: a retrospective cohort study
Development and Validation of a Clinical Risk Score to Predict the Occurrence of Critical Illness in Hospitalized Patients With COVID-19
Development and validation of a prediction model for severe respiratory failure in hospitalized patients with SARS-Cov-2 infection: a multicenter cohort study (PREDI-CO study)
Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal
Systematic Evaluation and External Validation of 22 Prognostic Models among Hospitalised Adults with COVID-19: An Observational Cohort Study
Clinical features of COVID-19 mortality: development and validation of a clinical prediction model
Prognostic scores for use in African meningococcal epidemics
Hypoxaemia in Mozambican children <5 years of age admitted to hospital with clinical severe pneumonia: clinical features and performance of predictor models
High reliability in respiratory rate assessment in children with respiratory symptomatology in a rural area in Mozambique
Clinical characteristics of patients hospitalized with COVID-19 in Spain: Results from the SEMI-COVID-19 Network
Assessing the generalizability of prognostic information
Registro de Altas de los Hospitales Generales del Sistema Nacional de Salud
Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration
Diagnosis and treatment of adults with community-acquired pneumonia. An official clinical practice guideline of the American Thoracic Society and Infectious Diseases Society of America
International Survey to Establish Prioritized Outcomes for Trials in People With Coronavirus Disease
Clinical guide for the management of emergency department patients during the coronavirus pandemic
The RAND/UCLA Appropriateness Method User's Manual. 1st Edition
Clinical Prediction Models: A Practical Approach to Development, Validation and Updating. 2nd ed.
Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling
The elements of statistical learning: data mining, inference, and prediction. 2nd ed.
Towards better clinical prediction models: seven steps for development and an ABCD for validation
Decision curve analysis: a novel method for evaluating prediction models