key: cord-1008552-p1ivp9rl
authors: Fu, Y.; Zhao, L.; Zheng, H.; Sun, Q.; Yang, L.; Li, H.; Xie, J.; Sue, X.; Li, F.; Li, Y.; Yang, W.; Pei, Y.; Wang, J.; Wu, X.; Zheng, Y.; Tian, H.; Gu, M.
title: Report on Three Round COVID-19 Risk Blind Tests by Screening Eye-region Manifestations
date: 2021-06-25
DOI: 10.1101/2021.06.23.21258626
sha: 23562ec0799818a63fd4f266a72685234225589a
doc_id: 1008552
cord_uid: p1ivp9rl

The Coronavirus disease 2019 (COVID-19) has affected more than one hundred million people since 2019. Although various COVID-19 vaccines now protect millions of people in many countries, the worldwide rise in asymptomatic cases and newly discovered mutated strains calls for more sensitive COVID-19 testing during this turnaround period. Unfortunately, it is still nontrivial to develop a new, fast COVID-19 screening method with easier access and lower cost, owing to the technical and cost limitations of the current testing methods in medical resource-poor districts. On the other hand, a growing body of clinical evidence reports ocular manifestations in COVID-19 patients [1], which inspired this project. We have conducted joint clinical research since January 2021 in ShiJiaZhuang City, Hebei Province, China, approved by the ethics committee of The Fifth Hospital of ShiJiaZhuang of Hebei Medical University. We undertook several blind tests of COVID-19 patients together with Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. Meanwhile, as an important part of the ongoing global COVID-19 eye test program conducted by AIMOMICS since February 2020, we propose a new fast screening method that analyzes eye-region images captured by common CCD and CMOS cameras. This could reliably provide rapid COVID-19 risk screening with sustainable, stable, high performance in different countries and races. For this clinical trial in ShiJiaZhuang, we compare and analyze 1194 eye-region images of 115 patients, including 66 COVID-19 positive patients, 44 rehabilitation patients (nucleic acid changed from positive to negative), and 5 liver patients, as well as 117 healthy people. Remarkably, we consistently achieved very high results (> 0.94) in terms of both sensitivity and specificity in our blind tests of COVID-19 patients. This confirms the viability of fast COVID-19 screening based on eye-region manifestations. Notably, the results lead to conclusions similar to those of the other clinical trials of the global COVID-19 eye test program [1]. We hope that this series of ongoing global COVID-19 eye test studies, and the potential rapid, fully self-performed COVID-19 risk screening solution, can be inspiring and helpful to more researchers in the world soon. Our model for COVID-19 rapid prescreening has the merits of lower cost, fully self-performed operation, non-invasiveness, and real-time results, and thus enables continuous health surveillance. We further implement it as openly accessible APIs and provide a public service to the world. Our pilot experiments show that our model is ready to be used in all kinds of surveillance scenarios, such as infrared temperature measurement devices at airports and stations, or delivered directly to the smartphones of target groups as a packaged application.

In December 2019, the novel coronavirus disease 2019 (COVID-19) broke out globally. The pathogenic virus was named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).
As of June 2, 2021, there were about 170 million confirmed cases and 3.7 million deaths worldwide, and the World Health Organization (WHO) had declared COVID-19 a public health emergency of international concern [2, 3]. SARS-CoV-2 belongs to the family Coronaviridae, a large family of enveloped, positive-sense single-stranded RNA viruses. Previous studies confirmed that six coronaviruses can infect humans, including human coronavirus 229E (HCoV-229E), HCoV-OC43, HCoV-NL63, HCoV-HKU1, Middle East respiratory syndrome coronavirus (MERS-CoV), and SARS-CoV [4]; SARS-CoV-2 is the seventh member of the coronavirus family that can infect humans [5]. When infected with SARS-CoV-2, the most common symptoms are fever, dry cough, and fatigue; less common symptoms include expectoration, diarrhea, headache, hemoptysis, chest pain, anorexia, myalgia, chills, nausea, and vomiting [6, 7]. In addition, small cohort studies and case reports have described dysosmia and taste disorders such as anosmia, phantosmia, parosmia, ageusia, and dysgeusia [8-10]. However, an accurate diagnosis of COVID-19 cannot be made from the patient's clinical symptoms alone. At present, imaging, nucleic acid, and serum antibody tests are the most common diagnostic methods [11], but they are limited to laboratory or hospital environments and demand expert-level operation. Thus, patients must be tested in the hospital and wait some time for results. These constraints greatly limit the deployment of these state-of-the-art detection methods at city scale and prevent real-time patient tracking.

With further research, many studies have reported that COVID-19 patients can present with various ocular manifestations, such as hyperemia, increased secretion, epiphora, edema, follicular conjunctivitis, scleritis, photophobia, foreign body sensation, itchiness, and so on [11-15]. Recently, deep learning based classification networks have been widely used to support disease diagnosis and management [16, 17]. Inspired by these observations, we propose a rapid COVID-19 risk screening and diagnosis model based on deep learning applied to eye-region images captured by a normal CCD or CMOS camera or a cellphone. We formulate COVID-19 identification as a binary classification task, with the corresponding eye images as the input to the classification model. We trained the models on data from ShiJiaZhuang and Shanghai. The performance was measured by the area under the receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score. After more than three months of study and clinical trials, we found that confirmed COVID-19 cases present consistent eye pathological signs, including the asymptomatic cases.

This study included 20 subjects in the pre-test dataset (train and validation) and 212 subjects in the blind test dataset. The subject-level performance of the COVID-19 prescreening model in the combined study achieved an AUC of 0.940 (95% CI, 0.888-0.992), 0.990 (95% CI, 0.972-1.000), and 0.975 (95% CI, 0.940-1.000) in the first, second, and third blind tests, respectively. As part of the ongoing global COVID-19 eye test program by AIMOMICS since February 2020, collaborative research worldwide and across racial boundaries has been set up to build a more massive database and further refine and improve the model performance.
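To make the binary classification setup concrete, a minimal sketch in PyTorch is given below. The ResNet-18 backbone, the 224x224 input size, and the optimizer are illustrative assumptions only, not the exact architecture or training recipe used in this study.

```python
# Minimal sketch of a binary eye-region classifier (COVID-19 vs. non-COVID-19).
# Backbone, input size, and optimizer are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models, transforms

# ImageNet-style preprocessing that would be applied to each eye-region photo (assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model() -> nn.Module:
    """Pretrained CNN backbone with a 2-way classification head."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

model = build_model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of preprocessed eye-region images.
images = torch.randn(8, 3, 224, 224)   # stand-in for real eye-region crops
labels = torch.randint(0, 2, (8,))     # 1 = COVID-19 positive, 0 = control
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

At inference time, the per-image softmax scores from such a network would be combined into a subject-level decision, as reported in the blind-test results.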
In the past and ongoing globally registered clinical trials, we show that this model can successfully distinguish COVID-19 patients from healthy persons, pulmonary patients other than COVID-19 (e.g., pulmonary fungal infection, bronchopneumonia, chronic obstructive pulmonary disease, and lung cancer), liver patients, diabetes patients, and ocular patients. The experimental results reveal that patients with COVID-19 have ocular features that differ from those of others, which can be used to differentiate them from the general public. This convenient eye-region image diagnosis method can help disease-control researchers to fully understand the prevalence and pathogenicity of the virus across different ages, times, regions, climates, environments, occupations, and populations with underlying diseases, and guide effective prevention and control measures against COVID-19. Meanwhile, our model for COVID-19 rapid prescreening has the merits of lower cost, fully self-performed operation, non-invasiveness, and real-time results, and thus enables continuous surveillance. We further implement it as openly accessible APIs and provide a public service to the world. Essentially, our model is also open to other forms of embedding for fast COVID-19 screening, such as all kinds of infrared temperature measurement devices at airports and stations, or delivery directly to the smartphones of target groups as a packaged application. We believe a system implementing such an algorithm can assist rapid large-scale screening and real-time follow-up, and can be inspiring and helpful for encouraging more research on this topic.

Keywords: COVID-19, eye-region image, symptom classification, deep-learning, rapid screening

The data for the study come from the following three resources:

• COVID-19 patients: all COVID-19 cases were acquired from January to March 2021 by the Fifth Hospital of Shijiazhuang, Hebei Medical University, Shijiazhuang, China. All patients were diagnosed according to the diagnostic criteria of the National Health Commission of China and confirmed by RT-PCR detection of viral nucleic acids. The photos as well as the patients' information were collected. Meanwhile, we obtained the epidemiological data, medical history, clinical characteristics, laboratory tests, and treatment history of the enrolled patients from electronic medical records and nursing records. In particular, all patients were given chest X-rays or CT scans. Some of the data needed to be supplemented and confirmed, which we obtained through direct communication with the doctor at the bedside. The demographics, basic characteristics, clinical characteristics, and outcomes of the collected COVID-19 patients are summarized in Table 1. All patients were photographed with two smartphones according to our data-collection guidelines.

• The data included 1194 photographs (pre-test 101, blind test 1093) from 232 participants (pre-test 20, blind test 212). We randomly merged the pre-test dataset into the previous work for model updating. The blind test is divided into 3 tests.

• In the pre-test dataset, 20 COVID-19 patients were included; the blind test dataset comprises 66 COVID-19 patients and 166 control group participants. The demographic and clinical characteristics of the COVID-19 patients are shown in Table 1.
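The sketch below illustrates, under an assumed directory layout, how photographs can be grouped by participant so that the pre-test subjects flow into model updating while every blind-test subject remains strictly held out; the paths and helper names are hypothetical, not the project's actual data-management code.

```python
# Hypothetical illustration of keeping the pre-test / blind-test split at the
# subject level, so no participant's photos appear in both training and testing.
# The directory layout (split/subject_id/photo.jpg) is an assumption for this sketch.
from collections import defaultdict
from pathlib import Path

def group_by_subject(root: str) -> dict[str, list[Path]]:
    """Map each subject ID to all of that subject's eye-region photos."""
    groups = defaultdict(list)
    for photo in Path(root).glob("*/*.jpg"):
        groups[photo.parent.name].append(photo)
    return dict(groups)

pretest = group_by_subject("data/pretest")        # 20 subjects, merged into training
blind_test = group_by_subject("data/blind_test")  # 212 subjects, never trained on

# Sanity check: training and blind-test subject sets must not overlap.
assert not set(pretest) & set(blind_test), "subject leakage between splits"
```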
Table 1. Summary of Development (Pre-test: Training/Validation) and Testing Datasets. In the blind test, the detail of the control group subjects is denoted by 'total (HL/RH/LV)'.

LV (the liver patients) were diagnosed with hepatopathy. HL (the healthy volunteers) were recruited from individuals who had undergone physical examination with no obviously abnormal results and no contact history with COVID-19 patients. All subjects were tested for COVID-19, and no participants other than the COVID-19 patients tested positive for viral nucleic acids during the following days. No death events were observed in this study.

For each participant, 3-10 photographs of the eye region were captured with common CCD and CMOS cameras, assisted by doctors or healthcare workers. The same plain shooting mode and parameters were used, and shooting filters were avoided. The photos were captured in good lighting conditions, not against dark or red backgrounds. The image resolution is at least 1900x500 at 96 dpi. The average time for taking a set of 5 eye photos is around 1 minute. In the re-examination step, all data that cannot reveal the details of the eyes are discarded.

The study was conducted in accordance with the principles of the Declaration of Helsinki. All participants provided written informed consent at the time of recruitment. This study was approved by the ethics committee of The Fifth Hospital of ShiJiaZhuang of Hebei Medical University (approval No.: 2021005) and registered with the ClinicalTrials.gov PRS (ClinicalTrials.gov ID: NCT04907981).

First, we observed the pictures with the unaided eye and found that, among the 212 patients included (the 20 pre-test patients were not counted), 21 patients (9.9%) had ocular manifestations. In the COVID-19 group, 17 patients (36.9%) showed ocular manifestations (Figure 1), including conjunctival congestion (N=13), increased secretion (N=4), conjunctival hemorrhage (N=1), and ptosis (N=1), while only 4 cases (9.1%) among the rehabilitation patients showed conjunctival congestion (N=4) and ptosis (N=2). No ocular manifestation was observed in the liver disease and healthy control groups.

Then, to evaluate the study, we developed a model from the previous work and the Hubei pre-test data, using the eye-region photos in the dataset. The blind test group includes COVID-19 positive patients, with a control group of rehabilitation patients (nucleic acid changed from positive to negative), liver patients, and healthy people. As shown in Figure 2, the experiment is divided into 3 blind tests. During the blind test on each group, the data from the previous blind test is used as extra training data. For the COVID-19 category, the average sensitivity target is no less than 80%, which would demonstrate the efficacy of our method in distinguishing COVID-19 patients.
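The three-round protocol can be summarized schematically as below; all datasets and functions here are stand-ins that only illustrate the ordering (train, evaluate blind, then fold the unblinded round into the training pool), not the actual training code.

```python
# Sketch of the three-round blind-test protocol: each round is scored on data
# the model has never seen, and only afterwards is that round folded into the
# training pool for the next round. All data and functions are placeholders.
from typing import List, Tuple

def update_model(train_pool: List[Tuple[str, int]]) -> dict:
    """Placeholder for re-training / fine-tuning the classifier."""
    return {"trained_on": len(train_pool)}

def evaluate_round(model: dict, round_data: List[Tuple[str, int]]) -> float:
    """Placeholder for computing sensitivity/specificity/AUC on one blind round."""
    return 0.0

previous_and_pretest = [("pretest_img.jpg", 1)] * 101            # pre-test images
blind_rounds = [[("round_img.jpg", 0)] * 300 for _ in range(3)]  # three blind tests

train_pool = list(previous_and_pretest)
for round_data in blind_rounds:
    model = update_model(train_pool)           # train before seeing the round
    score = evaluate_round(model, round_data)  # blind evaluation
    train_pool += round_data                   # unblinded data joins training
```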
In the first blind test, the pre-test data is combined to improve the model from the previous work. During the second and third blind tests, the previous blind test data is used as extra data for training and validation. The pipeline of our proposed method, illustrated in Figure 3, is composed of two steps.

To measure the performance of the binary classification network, we define a set of metrics, including the area under the receiver-operating-characteristic curve (AUC), sensitivity, specificity, and accuracy. A bootstrapping strategy with 1000 replicates was used to estimate the 95% confidence intervals of the metrics.

Although the overall classification system performed well, misjudgments also appeared. In the blind test, the true positive, false positive, false negative, and true negative results are summarized in confusion matrices at both the subject level and the image level; the details are shown in Table 3. To investigate the interpretability of the model, we conducted a visual analysis of the key areas of the model's attention during the classification process. These key areas were converted into heat maps based on gradients and activation maps by GradCAM. For the case-study heatmaps in Figure 5, we found that the attention for both non-COVID-19 and COVID-19 mainly covers the iris, the upper and lower eyelids, and the inner and outer eye corners.

Various publications have shown ocular involvement with the coronavirus family [21-23]. In our study, ocular symptoms occurred in about 36% of COVID-19 patients, with conjunctivitis as the main manifestation, which is consistent with the results of previous studies [15, 24]. SARS-CoV-2 infects the body through binding to the angiotensin-converting enzyme 2 (ACE2) receptor, which is widely expressed in a variety of cells, primarily in the lungs, kidneys, heart, gastrointestinal tract, and liver [27]. Recently, it has also been reported that ACE2 is expressed in the conjunctiva [28]. Its expression in the conjunctiva may also be related to the occurrence of COVID-19 conjunctivitis. It is worth noting that some recovered patients still have ocular manifestations, and some of them also test positive in the model test. Studies have shown that the ocular findings of rehabilitated patients are accompanied by a significant increase in the level of inflammatory factors, which indicates that the eye involvement is not caused by the direct invasion and destruction of SARS-CoV-2; rather, the virus triggers a wide range of pro-inflammatory cytokine and chemokine responses in conjunctival and limbal epithelial cells, resulting in a proliferation of cytokines, apoptosis of peripheral corneal epithelial cells, and reactive inflammatory injury. Therefore, we believe that longer and more frequent follow-up is necessary for the recovered patients.
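The bootstrap estimate of the 95% confidence intervals can be reproduced roughly as sketched below; the 1000 replicates follow the description above, while the percentile method, the scikit-learn AUC call, and the toy labels and scores are illustrative assumptions rather than the study's actual evaluation code.

```python
# Sketch of a percentile bootstrap for attaching 95% confidence intervals to a
# metric such as AUC. The 1000 replicates match the text; everything else is
# an illustrative assumption (including the toy labels and scores at the end).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_score, metric=roc_auc_score, n_boot=1000, alpha=0.05):
    """Point estimate and percentile-bootstrap CI for a metric on paired labels/scores."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # AUC needs both classes
            continue
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_score), (lo, hi)

# Toy example with made-up scores (not the study's data).
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3, 0.6, 0.35])
auc, (ci_lo, ci_hi) = bootstrap_ci(labels, scores)
print(f"AUC {auc:.3f} (95% CI {ci_lo:.3f}-{ci_hi:.3f})")
```

Sensitivity and specificity intervals can be obtained the same way by passing a different metric function to bootstrap_ci.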
Meanwhile, there are some limitations to this study. First, the study sample size is small and most COVID-19 patients were collected from East Asia (China). A larger multicenter study covering patients of more diverse races would be necessary to test the performance of the ocular-surface-feature-based deep learning system. Finally, the pathological significance of the features extracted from COVID-19 patients should be carefully interpreted and re-verified by ophthalmologists.

Our deep learning method in this clinical trial study discriminates over 85% of COVID-19 positive patients from the test group, including asymptomatic patients. Notably, eye-examination technology has already been used to screen a variety of diseases, such as diabetes and kidney disease. In this paper, we proposed a deep learning model for rapid COVID-19 risk screening with eye-region images. Different from previous studies, which utilize RT-PCR or CT imaging, the input of our system is a face image or binocular image captured by common CCD and CMOS cameras. Combined with the development of deep learning, it enables real-time COVID-19 screening in two respects: sample acquisition and testing.

We are conducting large-scale experiments to further validate the effectiveness and efficacy of our algorithm as a new screening method. On the other hand, due to privacy policies and the difficulty of data collection across countries and racial groups, we have further investigated and extended the capability of our algorithm in few-shot learning settings. We have achieved sustainable, stable, high performance in a series of registered clinical trials in different countries, thanks to our previous work in the computer vision and machine learning communities. We believe that this study can be inspiring and helpful for encouraging more research in this direction, and can provide effective and rapid assistance for clinical risk screening, especially during outbreaks.

For scientific research, we need some eye-region data. We promise that the data will not be used commercially, only for scientific research. The specific requirements are as follows:
1. The eyes in the image need to be clear; a total of five required angles are shown on the left.
2. When taking photos, please do not wear cosmetic products such as contact lenses.
3. When taking photos, please do not use beauty camera modes or post-production beauty filters; we need the original images.

1) For the above sampling, the same model of mobile phone or shooting equipment must be used, to prevent the sampling data domain from being affected by the equipment.
2) If this condition cannot be met, the same model of mobile phone should be used to collect all the sampling images at the same time, to minimize device data-domain interference.
3) The same shooting parameters must be used, and beauty, soft-light, and other shooting filters must not be used.
4) The image resolution of the eye region is at least 1900x500 at 96 dpi.
5) The shooting environment should be well lit and bright. Photos should not be taken against dark or red backgrounds; a white background is best.
A simple automated check of the resolution and lighting requirements is sketched after this list.
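The check below is a hypothetical helper that flags photos violating the resolution and brightness requirements above; the brightness threshold and the function name are illustrative choices of ours, not values prescribed by the guidelines.

```python
# Hypothetical pre-submission check for the eye-photo guidelines above:
# resolution of at least 1900x500 and a reasonably bright (non-dark) image.
# The brightness cut-off of 60/255 is an illustrative assumption only.
from PIL import Image, ImageStat

MIN_WIDTH, MIN_HEIGHT = 1900, 500
MIN_MEAN_BRIGHTNESS = 60  # assumed threshold on the 0-255 grayscale mean

def check_eye_photo(path: str) -> list[str]:
    """Return a list of guideline violations for one eye-region photo."""
    problems = []
    with Image.open(path) as img:
        width, height = img.size
        if width < MIN_WIDTH or height < MIN_HEIGHT:
            problems.append(f"resolution {width}x{height} below {MIN_WIDTH}x{MIN_HEIGHT}")
        mean_brightness = ImageStat.Stat(img.convert("L")).mean[0]
        if mean_brightness < MIN_MEAN_BRIGHTNESS:
            problems.append(f"image too dark (mean gray {mean_brightness:.0f})")
    return problems

# Example usage: check_eye_photo("subject_001/left_eye.jpg") -> [] if acceptable.
```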
All participants provided written informed consent at the time of recruitment. This study was approved by the ethics committee of The Fifth Hospital of ShiJiaZhuang of Hebei Medical University. The study was conducted in accordance with the Declaration of Helsinki (1964) and its successive amendments. All other authors declare no competing interests. We would like to thank Dr. DaiErHei and Dr. Yu Liu for their kind assistance with this research project.

References
[1] A New Screening Method for COVID-19 based on Ocular Feature Recognition by Machine Learning Tools
[2] A Novel Coronavirus from Patients with Pneumonia in China
[3] Going global - Travel and the 2019 novel coronavirus
[4] Hosts and Sources of Endemic Human Coronaviruses
[5] The genetic sequence, origin, and diagnosis of SARS-CoV-2
[6] Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study
[7] Clinical features of patients infected with 2019 novel coronavirus in Wuhan
[8] Virological assessment of hospitalized patients with COVID-2019
[9] The neuroinvasive potential of SARS-CoV2 may play a role in the respiratory failure of COVID-19 patients
[10] Characteristics of and Important Lessons From the Coronavirus Disease 2019 (COVID-19) Outbreak in China: Summary of a Report of 72314 Cases From the Chinese Center for Disease Control and Prevention
[11] Diagnosing COVID-19: The Disease and Tools for Detection
[12] Ocular Symptoms among Nonhospitalized Patients Who Underwent COVID-19
[13] Ocular manifestations of a hospitalised patient with confirmed 2019 novel coronavirus disease
[14] Episcleritis as an ocular manifestation in a patient with COVID-19
[15] Characteristics of Ocular Findings of Patients With Coronavirus Disease 2019 (COVID-19) in Hubei Province
[16] Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning
[17] COVID-19 Artificial Intelligence Diagnosis using only Cough Recordings
[18] Ocular manifestations and clinical characteristics of 535 cases of COVID-19 in Wuhan, China: a cross-sectional study
[19] Ophthalmic and Neuro-ophthalmic Manifestations of Coronavirus Disease
[20] Update and Recommendations for Ocular Manifestations of COVID-19 in Adults and Children: A Narrative Review
[21] Evaluation of coronavirus in tears and conjunctival secretions of patients with SARS-CoV-2 infection
[22] 2019-nCoV transmission through the ocular surface must not be ignored
[23] Revisiting the dangers of the coronavirus in the ophthalmology practice
[24] Conjunctivitis can be the only presenting sign and symptom of COVID-19
[25] Structural basis for the recognition of the 2019-nCoV by human ACE2
[26] Structure of the SARS-CoV-2 spike receptor-binding domain bound to the ACE2 receptor
[27] ACE2 receptor polymorphism: Susceptibility to SARS-CoV-2, hypertension, multi-organ failure, and COVID-19 disease outcome
[28] Mechanism of the action between the SARS-CoV S240 protein and the ACE2 receptor in eyes
[29] Relapsing viral keratoconjunctivitis in COVID-19: a case report
[30] Late manifestation of follicular conjunctivitis in ventilated patient following COVID-19 positive severe pneumonia
[31] COVID-19 and Eye: A Review of Ophthalmic Manifestations of COVID-19
[32] How to trust unlabeled data? Instance Credibility Inference for Few-Shot Learning
[33] Transductive multi-view zero-shot learning