authors: Pham, Chi; Poorzargar, Khashayar; Nagappa, Mahesh; Saripella, Aparna; Parotto, Matteo; Englesakis, Marina; Lee, Kang; Chung, Frances
title: Effectiveness of consumer-grade contactless vital signs monitors: a systematic review and meta-analysis
date: 2021-07-09
journal: J Clin Monit Comput
DOI: 10.1007/s10877-021-00734-9
note: Chi Pham and Khashayar Poorzargar have contributed equally to this manuscript.

The objective of this systematic review and meta-analysis was to analyze the effectiveness of contactless vital sign monitors that utilize a consumer-friendly camera versus medical-grade instruments. A multiple-database search was conducted from inception to September 2020. Inclusion criteria were as follows: studies that used a consumer-grade camera (smartphone/webcam) to examine contactless vital signs in adults; evaluated the non-contact device against a reference medical device; and used the participants' face for measurement. Twenty-six studies were included in the review, of which 16 were included in the Pearson's correlation meta-analysis and 14 in the Bland-Altman meta-analysis. Twenty-two studies measured heart rate (HR) (92%), three measured blood pressure (BP) (12%), and three measured respiratory rate (RR) (12%). No study examined blood oxygen saturation (SpO2). Most studies had a small sample size (≤ 30 participants) and were performed in a laboratory setting. Our meta-analysis found that consumer-grade contactless vital sign monitors were accurate in comparison to a medical device in measuring HR. Current contactless monitors have limitations such as sensitivity to motion and poor lighting and a lack of automatic face tracking. Currently available consumer-friendly contactless monitors measure HR accurately compared to standard medical devices. More studies are needed to assess the accuracy of contactless BP and RR monitors. Implementation of contactless vital sign monitors for clinical use will require validation in a larger population and in a clinical setting, expanded to encompass other vital signs including BP, RR, and SpO2.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10877-021-00734-9.

The COVID-19 pandemic has drastically impacted the healthcare system and has changed the way in which healthcare is delivered. Virtual care in the form of telemedicine transitioned from being a novelty to a necessity during the COVID-19 pandemic [1]. Current telemedicine methods are facilitated through telephone or video calls. Telephone visits have the advantage of being universally available to all patients but lack the ability to facilitate a physical examination. The current standard for virtual care is a video call, which allows health care providers to visually examine patients [2]. The measurement of vital signs is part of in-person medical visits, but it has not been integrated into the current telemedicine system. Vital sign measurements that can be obtained with a consumer-grade camera, such as a webcam or smartphone camera, can offer patients a standard of care that closely resembles a visit to the clinic. These contactless monitors require the user to take a short video of their face using a mounted smartphone or webcam. The video is then processed by a computer algorithm that can detect changes in light reflection off the facial skin and convert them into vital sign measurements.
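To illustrate the general principle behind this processing (remote photoplethysmography, or rPPG), the sketch below estimates HR from the dominant spectral peak of the mean green-channel intensity of a facial region of interest. This is a minimal illustration in R, not the algorithm of any specific application reviewed here; the vector of per-frame intensities is assumed to have already been extracted from the video.

```r
# Minimal rPPG-style heart rate estimate (illustrative only).
# `green` holds the mean green-channel intensity of the facial region of
# interest for each video frame, sampled at `fps` frames per second.
estimate_hr <- function(green, fps = 30) {
  x <- green - mean(green)                    # remove the DC (average brightness) component
  n <- length(x)
  freq <- (0:(n - 1)) * fps / n               # frequency axis of the FFT, in Hz
  power <- Mod(fft(x))^2                      # periodogram of the intensity signal
  band <- freq >= 0.7 & freq <= 4             # plausible pulse band: 42-240 beats per minute
  peak <- freq[band][which.max(power[band])]  # dominant pulse frequency
  peak * 60                                   # convert Hz to beats per minute
}

# Example with a synthetic 30-s recording: a 1.2 Hz (72 bpm) pulse plus noise
fps <- 30
t <- seq(0, 30, by = 1 / fps)
green <- 100 + 0.5 * sin(2 * pi * 1.2 * t) + rnorm(length(t), sd = 0.2)
estimate_hr(green, fps)  # ~72 bpm, up to the frequency resolution of the recording
```

Real applications add face detection and tracking, motion and illumination compensation, and more robust signal separation, which is why the limitations discussed later (motion, poor lighting, lack of automatic face tracking) matter in practice.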
Provided that the buttons on the device are pressed by another individual, this process does not require the user to come into contact with the phone or webcam, which gives it the potential to act as electronic personal protective equipment in public clinics or hospitals. Given the ubiquity of smartphones, vital sign monitors that only require a camera are more accessible than existing self-monitoring alternatives such as smartwatches or automatic blood pressure cuffs.

The primary objective of this systematic review and meta-analysis is to assess the effectiveness and accuracy of consumer-grade contactless vital signs measuring technologies, such as smartphone cameras or webcams, in comparison to standard medical devices.

This systematic review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [3] and was registered in PROSPERO (CRD42020210938). The inclusion criteria were as follows: (1) use of contactless technologies to measure vital signs [blood pressure (BP), heart rate (HR), respiratory rate (RR), and hemoglobin oxygen saturation (SpO2)] in adults 18 years and older; (2) comparison against a reference medical device; and (3) use of a consumer-grade camera (smartphone or webcam) with the participants' face as the region of interest (ROI). The exclusion criteria were as follows: (1) conference papers, case series, and case reports; and (2) non-English articles.

We searched for articles published in the following electronic databases from inception (1946) to September 2020: MEDLINE (Ovid), PubMed-Not-MEDLINE (NLM), Embase, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews. Citations were also searched to capture potentially missed articles. The search strategy was designed by an information specialist experienced in systematic reviews (ME). The search terms included "non-contact", "cell phones", "webcam", "camera", "vital signs", "heart rate", "respiratory rate", "blood pressure", and "oximetry"; all search terms are listed in the Supplementary File. The search yielded 16,927 articles after duplicates were removed. After screening the titles and abstracts, 97 articles were assessed for full-text eligibility. An additional 37 articles were identified from citations. A total of 26 studies with 1,937 patients were included in the qualitative analysis. The PRISMA diagram of the search results is shown in Fig. 1.

Two reviewers (KP and AS) independently performed the title, abstract, and full-text screening. A third reviewer (CP) and a senior author (FC) were consulted on any discrepancy that was not resolved by discussion. Data were extracted by three reviewers (CP, KP, and AS) and managed using Excel. Two reviewers (KP and AS) divided the studies and extracted the data, while a third reviewer (CP) independently extracted data from all studies for comparison. We extracted the vital sign(s) of interest, the number of participants, age, ethnicity, eligibility criteria, image technology, reference method(s), distance from camera to facial region, position of the camera, source of light, performance of the new technology compared to the reference method, and reported limitations. Data on Bland-Altman bias and its standard deviation, as well as Pearson's correlation coefficients, were extracted from 14 and 16 studies, respectively.
WebPlotDigitizer was used to extract Bland-Altman values from Bland-Altman plots, and Pearson's correlation coefficients were calculated from the Bland-Altman values when possible.

Two reviewers (KP and AS) independently assessed the quality of the methodology reporting with respect to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) [4]. We used a modified GRRAS reporting tool based on a review of image-based, non-contact vital sign monitoring methods [5]. The quality of the statistical analyses was assessed by assigning a rating from 0 to 3, based on a classification of the statistical methodologies and reported measurements described in each study. The standard rating of 3 was assigned if Bland-Altman plot/limits of agreement analysis, intraclass correlation coefficient, Lin's concordance correlation coefficient, and British Standards reproducibility/repeatability coefficients were reported [5]. Studies that met the standard but did not account for repeated measurements per subject [6] scored a rating of 2. Studies that used other acceptable statistical methods, or a mixture of standard and inappropriate methods, scored a rating of 1. Studies that used inappropriate statistical methods scored a rating of 0. Each study was given a score out of 3 for both population/set-up reporting and statistical analysis. We gave an overall study reporting quality rating based on the aggregate number of points for each study, with a maximum score of 6. Studies with scores of 5 or 6 were assessed as 'Good', those with scores of 3 or 4 were 'Fair', and those with scores of 2 or less were 'Weak'. We contacted authors via email to inquire about missing data but did not receive any responses.

We used the Comprehensive Meta-Analysis software, the R program, and the "meta" package for our analysis. Continuous data are presented as mean ± standard deviation and discrete data as percentages. To determine the agreement between non-contact vital sign devices and reference devices, we conducted a meta-analysis on the final list of Pearson's correlation coefficient values using the Hunter-Schmidt method [7]. All vital signs were assessed separately, and studies were compared only with other studies that used the same reference device. A random-effects meta-analysis was used to pool the mean difference in the Bland-Altman and standard error meta-analyses. To assess the variance in outcomes of non-contact devices in comparison to standard medical-grade instruments, we conducted a meta-analysis on Bland-Altman values from different studies [8]. To explore heterogeneity and the stability of the pooled estimate, we used a "one study removed" meta-analysis and a cumulative meta-analysis to further confirm our results. Additionally, a meta-regression analysis was done, with age, male gender, year of publication, and sample size as confounders. Publication bias was assessed by visual inspection of the funnel plot, Egger's test, and Begg's test. A two-tailed p value less than 0.05 was considered significant.

The demographics and characteristics of the studies are shown in Table 1. Participants' age in all studies ranged from 18 to 89 years. Ethnicity was reported in 65% of studies, which included participants from all skin colour categories of the Fitzpatrick scale (Table 1) [9-25]. Most studies had a small sample size (≤ 30 participants) and were done in a laboratory setting.
Twenty-three studies (88%) reported testing their technology in healthy volunteers, two studies tested their technology in a patient population with cardiovascular disease [14, 26], and one study did not report its participant population [27]. For vital sign measurements, twenty-two studies measured HR (92%), three measured BP (12%), and three measured RR (12%). No study examined SpO2. The pulse oximeter was the most commonly used reference device (54%). Electrocardiogram (ECG) (5/26) [11, 14, 24, 26, 28], sphygmomanometer (4/26) [9, 12, 29, 30], HR monitor (1/26) [31], and respiratory monitors such as chest belts (4/26) [13, 17, 18, 32] were also used as reference devices (Table 2).

All studies used cameras that recorded in the visible light spectrum. The majority of studies used multiple camera channels [9-20, 22-25, 28-30, 32-35] and two studies used a single channel [21, 31]. Most studies used a distance of 0.3 to 2 m between the camera and the participant [9, 11-13, 15-18, 29-31, 34]. Seventy percent of studies used 30 frames per second [9, 11, 13, 14, 16, 20, 21, 23, 25, 28, 31-34] and the remainder used 15 frames per second [15, 17-19, 22, 23, 29]. Camera resolution mostly ranged from 320 × 240 pixels to 1280 × 720 pixels [11, 13-18, 21-23, 25, 26, 28, 30-34]. Lighting conditions were variable; most studies used ambient lighting or sunlight [9, 11, 13, 15-20, 22-26, 28-30, 32-34] and three used lighting as an experimental condition [20, 23, 33]. Of the eighteen studies that reported the duration of vital sign measurements, most recorded between 0.5 and 3 min [9, 12, 13, 15-20, 22-24, 28-35]. The majority of studies (92%) measured participants at rest in a sitting position [9, 10, 12, 13, 15-26, 28-35]. Only three studies were done in a standing position [23, 25, 35] and two in a supine position [14, 35]. Five studies measured participants in motion and used motion as an experimental condition (Table 2). Only the specifications of the non-contact camera used in each study were recorded (Table 2).

Using the adapted Harford et al. 2019 GRRAS quality assessment tool (Supplementary Table S1), four studies were assessed as 'Good', sixteen were assessed as 'Fair', and five were deemed 'Weak'. Most studies did not report their study population and experimental set-up in detail. No study included an a priori description of its statistical analysis. The majority of studies used an inappropriate statistical analysis method to measure agreement between the contactless method and the reference device.

Meta-analysis was performed on HR only, as there were only three studies each on BP and RR. Sixteen studies on HR were included in the Pearson's correlation meta-analysis, with values obtained by either direct extraction [10, 13, 15, 17-20, 24, 26, 29, 31] or manual calculation [22, 23, 30, 32, 33]. The random-effects meta-analysis on Pearson's values (Fig. 2) showed that the pooled weighted correlation coefficient was 0.962 (95% CI 0.905 to 0.985; p < 0.001; prediction interval −0.0839 to 0.99). The overall inter-study heterogeneity (I²) was 93%, while the between-study variance (Tau²) was 0.873 (Tau = 0.934). The weights of all studies were evenly spread despite overall low participant counts (< 50).
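As an illustration of how a pooled correlation of this kind can be computed, the sketch below uses the R "meta" package mentioned in the methods. The study labels, correlations, and sample sizes are placeholders rather than the values extracted in this review, and pooling untransformed correlations (sm = "COR") is shown as one common way to approximate a Hunter-Schmidt-style analysis; it is not presented as the authors' exact code.

```r
library(meta)

# Placeholder study-level data (NOT the values extracted in this review):
# each row is one study's Pearson correlation between the contactless
# monitor and its reference device, with the study's sample size.
dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  r     = c(0.97, 0.92, 0.99, 0.88),
  n     = c(25, 18, 40, 12)
)

# Pool untransformed correlations; the printed output includes the
# random-effects estimate with its 95% CI, I^2, and tau^2.
m <- metacor(cor = r, n = n, studlab = study, data = dat, sm = "COR")
summary(m)
forest(m)   # forest plot of study-level and pooled correlations

# Sensitivity and publication-bias checks of the kind reported in this review
metainf(m)                                       # "one study removed" analysis
metacum(m)                                       # cumulative meta-analysis
metabias(m, method.bias = "linreg", k.min = 4)   # Egger's regression test
metabias(m, method.bias = "rank",   k.min = 4)   # Begg's rank test
```

The `k.min` argument is lowered here only because the placeholder data set has four studies; with the 16 HR studies in this review the package default would apply.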
To explore the influence of individual studies on the pooled correlation coefficient and on heterogeneity, we conducted a "one study removed" meta-analysis (Supplementary Figure S1). With this analysis, the pooled correlation coefficient varied from a highest value of 0.96 (after removal of Coppetti et al. [26]) to a lowest value of 0.95 (after removal of Yan et al. [24]). After removal of Coppetti et al. [26] and Yan et al. [24], heterogeneity decreased to 79%, suggesting that these two studies contributed the most heterogeneity to the pooled correlation coefficient (Supplementary Table S2). The results were further confirmed by a cumulative meta-analysis, which did not change the final inference. We identified age, male gender, year of publication, and sample size as confounders and conducted a meta-regression analysis. Meta-regression by age, male gender, year of publication, and sample size showed that the results were similar across these confounders, since the slope of the effect size was not significantly altered (Supplementary Table S3, Figure S4). On visual inspection of the funnel plot, there was no publication bias (Supplementary Figure S3). The absence of publication bias was further confirmed by Egger's test (p = 0.595) and Begg's test (p = 0.458).

Fourteen studies were included in the Bland-Altman standard error of the mean meta-analysis [10, 11, 13, 15, 17, 19, 20, 22, 24, 26, 30, 32, 33, 35]. The pooled estimate of the mean difference between the non-contact and standard methods for HR detection was 0.36 bpm, with a pooled 95% confidence interval of −1.22 to 1.95 (Fig. 3).

Three studies examined the accuracy of contactless BP monitors [9, 12, 29]. Luo et al. recorded average prediction biases ± error SDs of 0.39 ± 7.30 mmHg for systolic BP and −0.20 ± 6.00 mmHg for diastolic BP compared to a sphygmomanometer (CNAP® Monitor 500/Biopac) [12]. Yang et al. recorded −0.4 ± 6.7 mmHg for systolic BP and 1.2 ± 7.0 mmHg for diastolic BP compared to a mercury sphygmomanometer [9]. Both studies validated the 'Anura' application and found that the contactless monitor fell within the key accuracy threshold of 5 ± 8 mmHg set by the Association for the Advancement of Medical Instrumentation (AAMI) standard [9, 12]. Gonzalez et al. indicated that their contactless HR and BP monitor had a high pooled correlation (0.97) compared to a sphygmomanometer (Omron Hem-790IT) but did not explicitly provide data for BP measurement alone [29].

Three studies examined the accuracy of contactless RR monitors [18, 20, 32]. Two studies found that RR measurement was less accurate than HR measurement [18, 20], while one study found that RR and HR measurements had comparable accuracy [32]. Sanyal et al. recorded a correlation value of 0.66 for RR compared to 0.92 for HR when comparing their contactless RR monitor against self-reported measurement [20]. Poh et al. recorded a correlation value of 0.94 for RR compared to 1.00 for HR when comparing their contactless RR monitor against a chest belt respiration sensor [18]. Wei et al. calculated correlation values per measurement: RR ranged from 0.90 to 0.98 (mean = 0.96, median = 0.96) and HR ranged from 0.91 to 0.98 (mean = 0.95, median = 0.96), showing comparable accuracy between the contactless method and a breathing apparatus sensor (model HKH-11B, Hefei Huake Info Technology Co.) [32].
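For the Bland-Altman portion of the heart rate analysis reported above, each study contributes a mean difference (bias) between the contactless and reference devices together with its standard error, and these can be pooled with the generic inverse-variance routine of the same R package. The sketch below is again illustrative only: the biases and standard errors are placeholders, and in practice the standard errors would be derived from the published Bland-Altman statistics (for example, the standard deviation of the differences divided by the square root of the sample size).

```r
library(meta)

# Placeholder study-level Bland-Altman summaries (NOT the extracted data):
# `bias` is each study's mean difference (contactless minus reference, in bpm)
# and `se` is the standard error of that mean difference.
ba <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  bias  = c(0.2, -0.8, 1.1, 0.4),
  se    = c(0.5, 0.9, 0.6, 1.2)
)

# Random-effects, generic inverse-variance pooling of the mean differences,
# analogous to the pooled bias and 95% confidence interval reported above.
mb <- metagen(TE = bias, seTE = se, studlab = study, data = ba, sm = "MD")
summary(mb)
forest(mb)
```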
Motions ranging from head movements and back-and-forth body movement to light exercise have been reported to affect the accuracy of the contactless method [9-11, 14, 16, 34]. Because only normotensive participants were used to train the predictive models, three studies reported more errors in the lower and higher ranges of HR and BP measurement [9, 11, 12]. Two studies reported racial homogeneity as a potential limitation, yet found that skin tone did not influence the predictive model's accuracy [9, 12]. As for camera specifications, lack of automatic face tracking [15, 32] and low frame rate and resolution [11, 16, 33] decreased the accuracy of vital sign prediction. Poor lighting [9, 16, 30, 33, 34] and facial obstruction [20] were reported as limitations for capturing the participant's face.

Our review included contactless vital sign monitors that can be used by the public. Four applications are currently available on the Apple App Store for consumer use. The 'Anura' application (Nuralogix Inc., Canada) measures BP and was deemed accurate compared to a sphygmomanometer by two studies [9, 12]. The 'Cardiio' application (Cardiio Inc., Cambridge, USA) was validated in two studies with contradictory results [24, 26]. One study found the application to be accurate using the iPhone 6, which has a camera with higher resolution and frame rate for video recording [24], while another study found it inaccurate using the iPhone 5/4 when compared to ECG for measuring HR [26]. The remaining applications, 'Whats My Heart Rate' (Vitrox Technologies, Malaysia) [26] and 'Cardio Buddy' (Azumio Inc., USA) [35], were reported as not accurate or effective in measuring HR compared to a pulse oximeter.

To date, this is the first systematic review and meta-analysis in the literature to evaluate the accuracy of consumer-friendly contactless vital sign monitors that utilize a smartphone or webcam. We found that contactless vital sign monitors were accurate when compared to standard medical devices for HR measurement. The pooled mean difference in estimated heart rate between the contactless vital sign monitors and the reference devices was 0.36 bpm, with a 95% confidence interval extending to 1.95 bpm. The consumer-friendly contactless vital sign monitors showed a high correlation of 0.962 with standard medical devices. Almost all contactless devices measured HR, while only three studies each measured BP [9, 12, 29] and RR [18, 20, 32]. Due to the small number of studies on RR and BP, we were not able to perform a meta-analysis on these parameters. Two of the three studies that measured BP, both using the same application, found that the contactless monitor achieved the key accuracy criterion of the AAMI standard [9, 12]. Two of the three studies that measured RR found that the contactless RR monitor had lower accuracy when compared to its reference device [18, 20]. We did not find any study using this new technology for SpO2. SpO2 is a vital sign that is essential for medical evaluations and should be added to future contactless monitoring technology. To date, only studies that used high-performing, expensive, non-consumer-grade cameras have attempted to measure SpO2 [36, 37]. More than half of the studies used the pulse oximeter as a reference medical device for measuring HR, while other studies used ECG [11, 14, 24, 26, 28] or an HR monitor [31].
The pulse oximeter uses photoplethysmography (PPG) technology, which is similar to the imaging PPG (iPPG) technology implemented in contactless devices. The similarity between the two optical methods supports the pulse oximeter's use as a reference device [38, 39]. Melanin concentration and skin pigmentation can affect PPG technologies in recording pulse rate [40], which raises the importance of testing camera detection in a wide range of skin types. Our review shows that the current contactless technology has been validated in all skin types, since 65% of studies included all possible skin types.

Most studies used cameras that captured videos at 30 frames per second, while resolution ranged from 320 × 240 pixels to 1280 × 720 pixels. Three studies observed that higher resolution correlates with higher measurement accuracy and suggested that a frame rate of at least 15 fps should be used in all contactless monitors [11, 15, 16]. As camera technology improves with higher specifications, the accuracy of these technologies should also improve over time.

The majority of studies used healthy volunteers in a laboratory setting to validate their contactless technology. Only two studies tested contactless technology on patients: one in patients with atrial fibrillation [14] and one in patients with heart problems in a hospital setting [26]. Couderc et al. noted that contactless HR measurement using video plethysmography is useful for at-home monitoring of patients with atrial fibrillation, highlighting the utility of this new technology in chronically ill patients [14]. In order to validate the use of this new technology in clinical settings, studies will need to include patients with comorbidities to assess its performance on a wide range of vital signs. Validation of the contactless technology in clinical settings will be crucial to factor in movement, variable lighting, and patient perceptions of ease of use.

To date, the American National Standards Institute (ANSI)/Association for the Advancement of Medical Instrumentation (AAMI)/International Organization for Standardization (ISO) 81060-2:2013 standard has not been developed for contactless, camera-based vital sign measuring technologies [9], which makes it difficult to determine whether currently available technologies are adequate for clinical use. Further work is needed in this area. A variety of technological barriers still restrict the accuracy of contactless technology and must be resolved before widespread implementation. Motion [9-11, 14, 16, 34], poor lighting [16, 34], and manual face tracking [15, 31, 32] were reported as limitations for accuracy. Only four contactless vital sign monitors are currently available for consumer use, three of which measure HR [24, 26, 35] and one of which measures BP [9, 12]. It is important to note that the majority of studies were conducted on healthy participants, and the accuracy of contactless monitors with abnormal vital signs has yet to be determined. To truly assess accuracy, the contactless monitors must be referenced against ground-truth measurements, such as ECG for HR and a mercury column manometer for BP.

Contactless, camera-based vital sign monitors are accessible and can greatly advance virtual health care. Tools for remote patient home monitoring are relevant and necessary as a way to encourage social distancing measures and as a bedside tool during the COVID-19 pandemic [41].
Effective and widespread contactless camera-based monitoring can free up hospital beds for very sick patients by allowing health care providers to monitor moderately sick patients at home, and can reduce person-to-person contact in clinical settings [41, 42]. Compared to other self-monitoring alternatives like smartwatches and automatic blood pressure cuffs, the ubiquity of smartphones and webcams makes this technology accessible to the general population. Although existing vital sign monitors (automatic blood pressure cuffs, SpO2 monitors) range from $50 to $100, many individuals do not possess such devices or have access to them. While novel smartwatches can measure all vital signs, they are expensive ($200-$500), and features such as SpO2 measurement have not been clinically validated for most brands. Overall, contactless vital sign monitors need further development before widespread implementation, and measurement of SpO2 is essential as part of the vital signs. It is crucial that contactless vital sign monitors are tested for accuracy in a large and diverse population before being used in a clinical setting.

One of the main limitations of the findings of this meta-analysis is the absence of randomized controlled trials with large sample sizes. Although no evidence of publication bias was observed, the pooled analysis found high heterogeneity, primarily attributable to differences in the patient populations, varying technology, study outliers [10, 18, 26, 35], and possibly the use of different algorithms/methods in developing the contactless technology over time. Our meta-analysis may be underpowered for pooled estimates, increasing the chance of type II errors. Some of these limitations, and other unknown confounding factors inherent to prospective observational studies, may have contributed to bias in the pooled estimates and their dispersion. Despite these limitations, our meta-regression analysis offers a comprehensive analysis of the available evidence on the accuracy of contactless monitors in measuring vital signs.

In future studies, it will be useful to evaluate specific technologies for measuring specific vital signs across different populations. Studies that aim to validate their contactless technology should use a medical-grade reference device, aim to measure a wide range of vital signs, and test their device in a diverse patient population in order to determine the technology's practical application in clinical scenarios. We recommend using Bland-Altman analysis to test for agreement between contactless devices and medical devices, since this method has been widely used in agreement research [43]. A modified Clarke Error Grid analysis can also be used to quantify the clinical accuracy of HR, RR, and BP measurements and their consequences for clinical decision-making [44]. This analysis has been used in studies to validate the accuracy of wearable vital sign monitors [45, 46]. The use of Pearson's correlation without another statistical test, such as Bland-Altman analysis, is inappropriate, since Pearson's correlation coefficients alone do not give a full picture of the agreement and variance between the devices being compared [47]. Future regulations on contactless monitors should cover standardization of the reporting of results [48] and guidelines on how to validate these technologies for clinical use, such as comparison to specific medical devices and experiments with adequate sample sizes.
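For an individual validation study of the kind recommended above, the core Bland-Altman computation is straightforward. The sketch below uses simulated paired heart-rate readings purely for illustration; it computes the bias and the 95% limits of agreement and draws the classic difference-versus-mean plot.

```r
# Minimal Bland-Altman agreement analysis for one validation study
# (simulated paired readings; illustrative only).
set.seed(1)
reference   <- rnorm(30, mean = 75, sd = 10)              # e.g., pulse oximeter HR (bpm)
contactless <- reference + rnorm(30, mean = 0.5, sd = 2)  # camera-based estimate (bpm)

diffs <- contactless - reference
bias  <- mean(diffs)                          # mean difference (bias)
loa   <- bias + c(-1.96, 1.96) * sd(diffs)    # 95% limits of agreement

bias
loa

# Classic Bland-Altman plot: difference against the mean of the two methods
means <- (contactless + reference) / 2
plot(means, diffs,
     xlab = "Mean of methods (bpm)",
     ylab = "Contactless minus reference (bpm)")
abline(h = c(bias, loa), lty = c(1, 2, 2))
```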
This systematic review and meta-analysis showed that contactless vital sign monitors that utilize a consumer-friendly camera, such as a webcam or smartphone camera, are accurate for measuring HR when compared to a medical device. More studies on contactless BP and RR monitoring are needed to assess their accuracy. In order for contactless vital sign monitors to be implemented for clinical use, they must be validated in a larger and more diverse population and expanded to encompass all important vital signs, including blood oxygen saturation.

References

Virtually perfect? Telemedicine for Covid-19
Telehealth transformation: COVID-19 and the rise of virtual care
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
Guidelines for reporting reliability and agreement studies (GRRAS) were proposed
Availability and performance of image-based, non-contact methods of monitoring heart rate, blood pressure, respiratory rate, and oxygen saturation: a systematic review
Agreement between methods of measurement with multiple observations per individual
Methods of meta-analysis: correcting error and bias in research findings
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach
Preliminary assessment of video-based blood pressure measurement according to ANSI/AAMI/ISO 81060-2:2013 guideline accuracy criteria: Anura smartphone app with transdermal optical imaging technology
Detail-preserving pulse wave extraction from facial videos using consumer-level camera
Remote heart rate monitoring - assessment of the FaceReader rPPG by Noldus
Smartphone-based blood pressure measurement using transdermal optical imaging technology
Continuous wavelet filtering on webcam photoplethysmographic signals to remotely assess the instantaneous heart rate
Detection of atrial fibrillation using contactless facial video monitoring
Signal recovery in imaging photoplethysmography
Measuring pulse rate with a webcam
Non-contact, automated cardiac pulse measurements using video imaging and blind source separation
Advancements in noncontact, multiparameter physiological measurements using a webcam
Video pulse rate variability analysis in stationary and motion conditions
Algorithms for monitoring heart rate and respiratory rate from the video of a user's face
Non-contact heart rate monitoring by combining convolutional neural network skin detection and remote photoplethysmography via a low-cost camera
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
Robust efficient estimation of heart rate pulse from video
Resting and postexercise heart rate detection from fingertip and facial photoplethysmography using a smartphone camera: a validation study
Non-contact detection of cardiac rate based on visible light imaging device. Optics and Photonics for Information Processing VI: International Society for Optics and Photonics
Accuracy of smartphone apps for heart rate measurement
Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range
Measurement of heart rate variability using off-the-shelf smart phones
Non-contact heart rate and blood pressure estimations from video analysis and machine learning modelling applied to food sensory responses: a case study for chocolate
Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam
Remote heart rate measurement from face videos under realistic situations
Non-contact, synchronous dynamic measurement of respiratory rate and heart rate based on dual sensitive regions
Blood pulsation measurement using cameras operating in visible light: limitations
Improving video-based heart rate monitoring
Concurrent validity of resting pulse-rate measurements: a comparison of 2 smartphone applications, the Polar H7 belt monitor, and a pulse oximeter with Bluetooth
Noncontact simultaneous dual wavelength photoplethysmography: a further step toward noncontact pulse oximetry
Contactless multiple wavelength photoplethysmographic imaging: a first step toward "SpO2 camera" technology
Pulse oximetry
A new look at the essence of the imaging photoplethysmography
Influence of skin type and wavelength on light wave reflectance
Technological developments and strategic management for overcoming the COVID-19 challenge within the hospital setting in Israel
Exploring the adoption of telemedicine and virtual software for care of outpatients during and after COVID-19 pandemic
Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review
Evaluating clinical accuracy of systems for self-monitoring of blood glucose
Vital signs monitoring with wearable sensors in high-risk surgical patients: a clinical validation study
Reliability of wireless monitoring using a wearable patch sensor in high-risk surgical patients at a step-down unit in the Netherlands: a clinical validation study
Measuring agreement, more complicated than it seems
A broader look: camera-based vital sign estimation across the spectrum