title: Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement
authors: Liu, Xin; Fromm, Josh; Patel, Shwetak; McDuff, Daniel
date: 2020-06-06

Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic, and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an ARM CPU and achieve state-of-the-art accuracy while running at over 150 frames per second, which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.

The SARS-CoV-2 (COVID-19) pandemic is transforming the face of healthcare around the world [1, 2]. One example of this is the sharp increase (by more than 10x) in the number of medical appointments held via telehealth platforms because of the increased pressures on healthcare systems, the desire to protect healthcare workers, and restrictions on travel [2]. Telehealth includes the use of telecommunication tools, such as phone calls and messaging, and online health portals that allow patients to communicate with their providers. The Centers for Disease Control and Prevention recommends the "use of telehealth strategies when feasible to provide high-quality patient care and reduce the risk of COVID-19 transmission in healthcare settings" 1. Performing primary care visits from a patient's home reduces the risk of exposing people to infections, increases the efficiency of visits, and facilitates care for people in remote locations or who are unable to travel. These are longstanding arguments for telehealth and will still be valid after the end of the current pandemic. Healthcare systems are likely to maintain a high number of telehealth appointments beyond the current pandemic [3]. Despite the longstanding promise of telehealth, it is difficult to provide a similar level of care on a video call as during an in-person visit. The physician can diagnose a patient based on observations and self-reported symptoms; however, in most cases they cannot objectively assess the patient's physiological state. This means that physicians have to make decisions (e.g., recommending a trip to the ER) without important data. In the case of COVID-19, there are severe cardiopulmonary (heart and lung related) symptoms that are difficult to evaluate remotely. The symptoms seen in patients have been linked to acute respiratory distress syndrome [4], myocardial injury, and chronic damage to the cardiovascular system. Experts suggest that particular attention should be given to cardiovascular protection during treatment [5].
The development of more accurate and efficient non-contact cardiopulmonary measurement technology would give remote physicians access to the data needed to make more informed decisions. Beyond telehealth, the same technology could impact passive health monitoring, improving the standard of care for infants in the NICU [6]. Cameras can be used to measure physiological signals, including heart rate, respiration, and blood oxygenation levels [7, 8, 9], based on facial videos [10, 11]. Non-contact cardiopulmonary measurement involves capturing subtle changes in light reflected by the body caused by physiological processes. Imaging methods can be used to measure volumetric changes in blood close to the surface of the skin (known as photoplethysmography or PPG) and mechanical motion of the body due to the cardiac pulse (known as ballistocardiography or BCG). The PPG and BCG signals provide complementary information to one another and also contain information about respiration due to respiratory sinus arrhythmia [12]. Respiratory signals (breathing) can also be recovered from motion-based analyses of the head and torso [13]. Computer vision for remote cardiopulmonary measurement is a growing field; however, there is room for improvement in the existing methods. First, accuracy of measurements is critical to avoid false alarms or misdiagnoses. The US Food and Drug Administration (FDA) mandates that testing of a new device for cardiac monitoring should show "substantial equivalence" in accuracy with a legal predicate device 2, which means a contact sensor. This standard has not yet been met. Second, designing models that run on-device helps reduce the need for high-bandwidth Internet connections, making telehealth more practical and accessible. Moreover, camera-based cardiopulmonary measurement is a highly privacy-sensitive application. These data are personally identifiable, combining videos of a patient's face with sensitive physiological signals. Therefore, streaming and uploading data to the cloud to perform analysis is not ideal. Finally, the ability to run at high frame rates enables opportunistic sensing (e.g., obtaining measurements each time you look at your phone) and helps capture waveform dynamics that could help detect atrial fibrillation [14] and hypertension [15]. We propose a novel multi-task temporal shift convolutional attention network (MTTS-CAN) to address the challenges of privacy, portability, and precision in contactless cardiopulmonary measurement. Our end-to-end MTTS-CAN leverages a temporal shift module to perform efficient temporal modeling that offsets various sources of motion noise without any additional computational overhead, an attention module to improve signal source separation, and a multi-task mechanism to share the intermediate representation between pulse and respiration and jointly estimate both simultaneously. By combining these three techniques, our proposed network can run on an ARM CPU and achieve state-of-the-art accuracy and speed. The contributions of this paper are to 1) present an accurate and efficient approach to perform on-device, real-time spatial-temporal modeling of vital signals, 2) evaluate our system and show state-of-the-art performance on two large public datasets, and 3) provide an implementation of the core tensor operations required for MTTS-CAN using a modern deep learning compiler, along with an on-device latency evaluation across different architectures showing that MTTS-CAN runs at more than 150 frames per second.
Our code, models, and video figures are provided in the supplementary materials. Camera-based Physiological Measurement: Early work established that the blood volume pulse can be extracted by analyzing skin pixel intensity changes over time [10, 11]. These methods are grounded in optical models (e.g., the Lambert-Beer law (LBL) and Shafer's dichromatic reflection model (DRM)) that provide a framework for modeling how light interacts with the skin. However, traditional signal processing techniques are quite sensitive to noise from other sources in video data, including head motions and illumination changes [7, 12]. To help address these issues, some approaches incorporate prior knowledge about the physical properties of the patient's skin [16, 17]. Although effective, these handcrafted signal processing pipelines make it difficult to capture the complexity of the spatial and temporal dynamics of physiological signals in video. Neural network based approaches have been successfully applied using the BVP or respiration as the target signal [9, 18, 19, 20], but these methods still struggle with effectively combining spatial and temporal information while maintaining a low computational budget.

[Figure 1: Starting from previous work that presented a 2D-CAN [30], we introduce a fully 3D-CAN, a 2D-3D Hybrid-CAN in which the appearance branch takes a single frame, and our proposed temporal shift CAN. Each of these models can be applied in a single- or multi-task manner.]

More recently, researchers have investigated on-device remote camera-based heart rate variability measurement using facial videos from smartphone cameras [21]. However, their proposed architecture takes approximately 200 ms per frame for inference, which is insufficient for real-time performance, and was not evaluated on public datasets. Efficient Temporal Models: Yu et al. [19] have shown that applying 3D convolutional neural networks (CNNs) significantly improves performance and achieves better accuracy compared to using a combination of 2D CNNs and recurrent neural networks. The benefit of 3D CNNs implies that incorporating temporal data in all layers of the model is necessary for high accuracy systems. However, direct temporal modeling with 3D CNNs requires dramatically more compute and parameters than 2D based models. Beyond reducing computational cost, there are several reasons why efficient non-contact physiological measurement models that run on device are highly desirable. Temporal Shift Modules (TSM) [22] provide a clever mechanism that can replace 3D CNNs without reducing accuracy while requiring only the computational budget of a 2D CNN. This is achieved by shifting the tensor along the temporal dimension, facilitating information exchange across multiple frames. TSM has been evaluated on the tasks of video recognition and video object detection and achieved superior performance in both latency and accuracy. Xiao et al. [23] further used pretrained TSM based residual networks as a backbone followed by two attention modules for reasoning about human-object interactions. The difference between that work and ours is that they apply attention modules as a head on top of pretrained TSM based residual feature maps, while we apply two attention modules to the intermediate feature maps generated by regular 2D CNNs with TSM. Machine Learning and COVID-19: Researchers have explored the use of machine learning from various perspectives to help combat COVID-19 [24].
Recent studies have shown that applying convolutional neural networks to CT scans can help extract meaningful radiological features for COVID-19 diagnosis and facilitate automatic pulmonary CT screening as well as cough monitoring [25, 26, 27, 28]. Researchers have also looked at the correlation between resting heart rate generated from wearable sensors and COVID-19 related symptoms and behaviors at population scale [29]. For our optical basis we start with Shafer's Dichromatic Reflection Model (DRM), as in prior work [17, 9]. Specifically, we aim to capture both spatial and temporal changes and the relationship between multiple physiological processes. Let us start with the RGB values captured by the camera, given by:

C_k(t) = I(t) · (v_s(t) + v_d(t)) + v_n(t)    (1)

where I(t) is the luminance intensity level, modulated by the specular reflection v_s(t) and the diffuse reflection v_d(t). The quantization noise of the camera sensor is captured by v_n(t). Following [17] we can decompose I(t), v_s(t) and v_d(t) into stationary and time-dependent parts:

v_d(t) = u_d · d_0 + u_p · p(t)    (2)

where u_d is the unit color vector of the skin-tissue, d_0 is the stationary reflection strength, and u_p is the relative pulsatile strength caused by hemoglobin and melanin absorption;

v_s(t) = u_s · (s_0 + Φ(m(t), p(t)))    (3)

where u_s denotes the unit color vector of the light source spectrum, s_0 and Φ(m(t), p(t)) denote the stationary and varying parts of the specular reflections, and m(t) denotes all the non-physiological variations such as flickering of the light source, head rotation, and facial expressions; and

I(t) = I_0 · (1 + Ψ(m(t), p(t)))    (4)

where I_0 is the stationary part of the luminance intensity, and I_0 · Ψ(m(t), p(t)) is the intensity variation observed by the camera. As in [9] we can disregard products of time-varying components as they are relatively small:

C_k(t) ≈ u_c · I_0 · c_0 + u_c · I_0 · c_0 · Ψ(m(t), p(t)) + u_s · I_0 · Φ(m(t), p(t)) + u_p · I_0 · p(t)    (5)

where u_c · c_0 = u_s · s_0 + u_d · d_0 denotes the stationary skin reflection. However, unlike previous work, which modeled the pulse and respiration signals as independent [30], we leverage the fact that p(t) actually captures a complex combination of both pulse and respiration information. Specifically, both the specular and diffuse reflections are influenced by related physiological processes. Respiratory sinus arrhythmia (RSA) is a rhythmical fluctuation in heart periods at the respiration frequency [31]. Furthermore, the respiration and pulse signals both cause outward motions of the body in the form of chest and head motions. Thus we can say that the physiological process p(t) is a complex combination of both the blood volume pulse, b(t), and the respiration wave, r(t). Thus, p(t) = Θ(b(t), r(t)), and the following equation gives a more accurate representation of the underlying process:

C_k(t) ≈ u_c · I_0 · c_0 + u_c · I_0 · c_0 · Ψ(m(t), Θ(b(t), r(t))) + u_s · I_0 · Φ(m(t), Θ(b(t), r(t))) + u_p · I_0 · Θ(b(t), r(t))    (6)

Since b(t) and r(t) are so closely intertwined, a temporal multi-task learning approach would seem optimal for this problem and at the very least could leverage redundancies between the two signals.
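To make the model above concrete, the following toy simulation renders a single color channel according to Eq. (6). It is purely illustrative: the waveform shapes, coupling strengths, and noise levels are our assumptions, not values from the paper.

```python
import numpy as np

# Toy rendering of the DRM-based model in Eqs. (1)-(6): one color channel of a
# synthetic skin pixel whose physiological term couples pulse and respiration.
fs = 30.0                                   # camera frame rate (fps)
t = np.arange(0, 10, 1 / fs)                # 10 seconds of video
hr_hz, br_hz = 1.2, 0.25                    # 72 bpm pulse, 15 breaths/min

r = np.sin(2 * np.pi * br_hz * t)           # respiration wave r(t)
# RSA: respiration modulates the instantaneous pulse frequency, so
# p(t) = Theta(b(t), r(t)) is a coupled process, not two independent ones.
b = np.sin(2 * np.pi * hr_hz * t + 0.3 * np.sin(2 * np.pi * br_hz * t))
p = b + 0.4 * r                             # combined physiological term

i0_c0 = 0.8                                 # stationary luminance/skin product
m = 0.05 * np.sin(2 * np.pi * 0.1 * t)      # slow head-motion term m(t)
noise = np.random.normal(0, 0.002, t.size)  # camera quantization noise v_n(t)
# Stationary reflection + motion-modulated intensity + pulsatile term + noise.
pixel = i0_c0 * (1 + m) + 0.02 * p + noise
```

Note how the physiological term is an order of magnitude smaller than the stationary and motion terms, which is exactly why source separation and temporal modeling matter in the architecture that follows.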
Efficient Spatial-Temporal Modeling: To achieve state-of-the-art performance in on-device optical cardiopulmonary measurement, an architecture should have the ability to: 1) efficiently learn spatial features that map raw RGB values to latent representations corresponding to the pulse and respiratory signals, as well as temporal features that offset various noise signals (e.g., head motion, ambient illumination, skin tone, etc.), 2) learn the relationships between associated physiological processes, and 3) work in real-time to support various telehealth deployments. Our solution is a novel temporal shift convolutional attention architecture (Fig. 1D), which we systematically compare to its variants (Fig. 1A-C) to illustrate its benefits. Because of the strong performance shown in prior work [9], our architecture leverages a two-branch structure with a spatial attention module (Fig. 1A). One branch is used for motion modeling, and the other branch for extracting meaningful spatial (i.e., facial) features. However, this design fails to capture temporal dependencies beyond consecutive frames and thus is still vulnerable to many sources of noise. Perhaps the simplest way to introduce a strong temporal dependency is a 3D-CAN that leverages 3D convolutions to model temporal relationships (Fig. 1B), which is similar to the model used in [19] but adds an attention module. However, since 3D convolutions incur quadratic computational cost compared to 2D convolutions, it is not feasible to achieve real-time on-device performance using a primarily 3D architecture. Therefore, we present a Hybrid-CAN architecture that is more computationally efficient than a purely 3D model. Hybrid-CAN combines a 2D-CAN and a 3D-CAN to maintain temporal modeling while leveraging more efficient 2D convolutions where possible. Since spatial position changes between adjacent frames are subtle, using 3D convolutions in the appearance branch is unnecessary. As Fig. 1C illustrates, the input of the appearance branch is a single frame generated by averaging N (window size) adjacent frames. Although Hybrid-CAN reduces computational cost significantly, the computational overhead from 3D convolutions in the motion branch is still not tolerable if we want to achieve real-time inference on low-end mobile platforms (ideally at least 60 FPS). Therefore, we introduce TS-CAN to remove the 3D convolution operations from the architecture entirely while preserving spatial-temporal modeling. TS-CAN has two major additional components: the temporal shift module (TSM) [22] and the attention module. TSM performs tensor shifting before the tensor is fed into the convolutional layer, as visualized in Fig. 2. More specifically, TSM splits the input tensor into three chunks across the channel dimension. Then, it shifts the first chunk to the left by one place (advancing time by one frame) and shifts the second chunk to the right by one place (delaying time by one frame). Both shifting operations are along the temporal axis, and the third chunk remains unchanged. It is worth noting that tensor shifting does not add any additional parameters to the network, but does enable information exchange among neighboring frames; a minimal sketch of the shift operation is given below. We use TSM in the motion branch to mimic the effects of 3D convolution, while the appearance branch in TS-CAN is the same as in Hybrid-CAN and only takes a single averaged frame. By doing so, the model not only significantly reduces computational time by calculating the attention mask only once, but also captures most of the pixels that contain skin and reduces camera quantization error.
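The shift described above can be written in a few lines. This is a minimal sketch, assuming a 5-D (batch, time, height, width, channels) layout and zero-padding at the window edges; the function name is ours, and the real model applies this inside each motion-branch convolution block.

```python
import tensorflow as tf

def temporal_shift(x, fold_div=3):
    """Shift 1/3 of the channels one frame forward and 1/3 one frame backward.

    x: tensor of shape (batch, time, height, width, channels).
    The shift adds no parameters; it only mixes information across frames.
    """
    c = x.shape[-1]
    fold = c // fold_div
    # First chunk: shift left along time (frame t sees features from t+1).
    left = tf.pad(x[:, 1:, :, :, :fold],
                  [[0, 0], [0, 1], [0, 0], [0, 0], [0, 0]])
    # Second chunk: shift right along time (frame t sees features from t-1).
    right = tf.pad(x[:, :-1, :, :, fold:2 * fold],
                   [[0, 0], [1, 0], [0, 0], [0, 0], [0, 0]])
    # Third chunk: left unchanged.
    return tf.concat([left, right, x[:, :, :, :, 2 * fold:]], axis=-1)
```

Applying this shift to the motion-branch tensor immediately before each 2D convolution lets the convolution mix features from three neighboring frames at 2D cost.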
Attention on Temporal Shift: Given the many different sources of noise described in the previous section, naively shifting an input tensor in time will introduce extra temporal information, including noise, into our representation. It is therefore important to attend to the pixels carrying physiological signals, or we risk amplifying noise. We therefore propose inserting an attention module alongside TSM to minimize the negative effects introduced by tensor shifting and to enable the network to focus on the target signals. The spatial and temporal distributions of physiological signals are not uniform across human skin. Soft-attention masks can assign higher weights to shifted pixels with stronger signal in the intermediate representations produced by the convolutional operations. More concretely, our attention modules are the bridges between the appearance branch and the motion branch (see Fig. 2). Soft attention masks are generated via 1×1 convolutions before the pooling layers. The attention mask is calculated as in Equation 7, where k is the index of a layer, ω^k is the 1×1 convolution kernel followed by a sigmoid activation function σ(·), and X_a^k is the appearance representation. l1 normalization is applied to soften the extreme values in the mask and ensure the network avoids pixel anomalies. Finally, we perform an element-wise product with the corresponding representation X_m^k from the motion branch:

Z^k = (H_k W_k · σ(ω^k X_a^k + b^k)) / (2 ||σ(ω^k X_a^k + b^k)||_1) ⊙ X_m^k    (7)

where H_k and W_k are the height and width of the feature map at layer k. Multi-Task TS-CAN: We now have an efficient on-device architecture to predict physiological signals in real time. However, we still have two independent networks, one for estimating the blood volume pulse and another for the respiration signal. This doubles the computational cost and prevents information sharing across these related physiological signals. As we know that pulse and respiration are linked, we propose a multi-task variant of our network (see Fig. 2). This shrinks the computational budget by approximately 50%, and the tasks of estimating BVP and respiration can share an intermediate representation. The loss function of this multi-tasking TS-CAN (MTTS-CAN) is described in Eqn. 8, where b(t) is the gold-standard BVP waveform, r(t) is the gold-standard respiration waveform, and b'(t) and r'(t) are the respective predictions from the model:

L = α · (1/T) Σ_{t=1..T} (b(t) − b'(t))² + (1 − α) · (1/T) Σ_{t=1..T} (r(t) − r'(t))²    (8)
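As a reference for Equations 7 and 8, the sketch below implements the attention mask and the α-weighted joint loss. It is a minimal TensorFlow sketch: the layer and function names are ours, and the mean-squared-error form of each task term is an assumption consistent with the α-weighted structure of Eqn. 8.

```python
import tensorflow as tf

class AttentionMask(tf.keras.layers.Layer):
    """Soft attention mask bridging the appearance and motion branches (Eq. 7)."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # 1x1 convolution producing a single-channel saliency map.
        self.conv = tf.keras.layers.Conv2D(1, kernel_size=1)

    def call(self, x_a):
        sal = tf.sigmoid(self.conv(x_a))                # (batch, H, W, 1)
        h = tf.cast(tf.shape(x_a)[1], sal.dtype)
        w = tf.cast(tf.shape(x_a)[2], sal.dtype)
        # l1 normalization softens extreme values so that single-pixel
        # anomalies cannot dominate the mask.
        norm = 2.0 * tf.reduce_sum(tf.abs(sal), axis=[1, 2, 3], keepdims=True)
        return h * w * sal / norm

def multitask_loss(b_true, r_true, b_pred, r_pred, alpha=0.5):
    """Alpha-weighted joint pulse/respiration loss (Eqn. 8); MSE assumed."""
    l_pulse = tf.reduce_mean(tf.square(b_true - b_pred))
    l_resp = tf.reduce_mean(tf.square(r_true - r_pred))
    return alpha * l_pulse + (1.0 - alpha) * l_resp
```

The mask is applied as an element-wise product with the motion-branch features, e.g., masked = motion_feats * AttentionMask()(appearance_feats), broadcasting the single-channel mask across all motion channels.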
We compare our methods to four approaches for pulse measurement (POS [17], CHROM [16], ICA [12], and 2D-CAN [9]) and two for respiration measurement (2D-CAN [9] and ARM [13]). Other than DeepPhys [9], we are not aware of methods that work for both pulse and respiration measurement. We run our experiments using the following datasets. AFRL [32]: 300 videos of 25 participants (17 males) were recorded at 658×492 resolution. Fingertip reflectance photoplethysmograms were used to record ground-truth signals for training our network, and electrocardiograms were recorded for evaluating performance (this is the medical gold-standard). Each participant was recorded six times with increasing head motion in each task. The participants were asked to sit still for the first two tasks and then perform three motion tasks, rotating their head about the vertical axis with an angular velocity of 10, 20, and 30 degrees/second, respectively. In the final task, participants were asked to reorient their head randomly once every second to one of nine predefined locations. The six recordings were each repeated in front of two backgrounds. MMSE-HR [33]: 102 videos of 40 participants were recorded at 25 fps, capturing 1040×1392 resolution images during spontaneous emotion elicitation experiments. The gold-standard contact signal was measured via a Biopac MP150 system 3, which provided the pulse rate at 1000 Hz and was updated after each heartbeat. These videos feature smaller but more spontaneous motions than those in the AFRL dataset, including facial expressions. Respiration measurements were not provided. Experiment Details: At a high level, all our proposed networks share a similar two-branch architecture. Each branch has four convolutional layers. An average pooling layer and a dropout layer are placed after the second and fourth convolutional layers, as shown in Fig. 2. The different architectures in Fig. 1 require different convolutional operations (e.g., the 3D-CAN requires 3D CNNs). To preprocess the input of the appearance branch, we downsample each frame c(t) to 36×36, which balances maintaining spatial resolution quality and suppressing camera noise [34]. For the motion branch, we calculate normalized frames from every two adjacent frames as (c(t + 1) − c(t))/(c(t) + c(t + 1)). The normalized frames are less vulnerable to changes in brightness and skin appearance than the raw frames c(t) and reduce the chance of overfitting to particular datasets. Our system is implemented in TensorFlow [35]. We trained our proposed MTTS-CAN architectures using the Adadelta optimizer [36] with a learning rate of 1.0, batch size of 32, kernel size of 3×3, pooling size of 2×2, and dropout rates of 0.25 and 0.5. The final model was chosen after the training converged (12 epochs on the respiration task and 24 epochs on the pulse task). We implemented the 2D-CAN, 3D-CAN and Hybrid-CAN as baselines to compare against our proposed architectures. For the 3D and Hybrid models the training scheme is similar to TS-CAN, but we use a kernel size of 3×3×3 and a pooling size of 2×2×2. We used a window size of 10 frames in all temporal models to provide a fair comparison for our proposed architectures. We picked α = 0.5 for the multi-tasking loss function in MTTS-CAN so that the pulse and respiration estimates are treated equally (the pulse and respiration signals were both normalized in amplitude). To calculate the performance metrics, we post-processed the outputs of all methods in the same way using a 2nd-order Butterworth filter (cut-off frequencies of 0.75 and 2.5 Hz for HR and 0.08 and 0.5 Hz for BR). For the AFRL data, we divided the dataset into 30-second windows with no overlap. For the MMSE-HR dataset we used a window size equal to the number of frames in each video. We then computed four standard metrics for each window: mean absolute error (MAE), root mean squared error (RMSE), and correlation (ρ) in heart/breathing rate estimations, and the corresponding BVP/respiration signal-to-noise ratio (SNR) [16]. Details of the calculation of these metrics, training code, architecture and the trained models are available in the supplementary material. On-Device Evaluation: Our proposed architectures were deployed on an open-source embedded system called the Firefly-RK3399 4 for latency evaluation. This embedded system has two large Cortex-A72 cores and four small Cortex-A53 cores. Although the RK3399 also has a mobile Mali GPU, we focus our evaluation on the CPU so that our proposed end-to-end architecture can generalize to any ARM-based mobile platform and IoT device. In this work, we extend a deep learning compiler stack, TVM [37], to support the core temporal shift operation required for TS-CAN. TVM takes a high-level description of a function and generates highly optimized low-level code for a targeted device. More specifically, our TVM-based on-device system first converts a TensorFlow graph to a Relay graph [38] and compiles the code for the Firefly-RK3399 using LLVM. We take advantage of TVM's scheduling primitives to generate efficient low-level LLVM code that accelerates expensive operations such as 2D and 3D convolutions. Comparison with the State-of-the-Art: For the AFRL dataset, all 25 participants were randomly divided into five folds of five participants each (same folds as in [9]). The learning models were trained and tested via five-fold cross-validation using data from all tasks.
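For concreteness, the sketch below implements the two network inputs and the rate post-processing described above. It is a minimal sketch assuming NumPy and SciPy; frame resizing is elided, the per-window standardization and the epsilon in the denominator are our assumptions, and the dominant-frequency rate estimate stands in for the paper's peak-based post-processing.

```python
import numpy as np
from scipy import signal

def preprocess(frames):
    """Build the motion and appearance inputs from a window of raw frames.

    frames: float array (T, H, W, 3), already resized to 36x36 upstream.
    """
    c = frames.astype(np.float64)
    # Normalized difference frames: robust to brightness and skin appearance.
    motion = (c[1:] - c[:-1]) / (c[1:] + c[:-1] + 1e-7)
    motion = motion / (np.std(motion) + 1e-7)   # per-window standardization
    # The appearance branch sees a single frame: the average over the window.
    appearance = c.mean(axis=0)
    return motion, appearance

def estimate_rate(wave, fs=30.0, band=(0.75, 2.5)):
    """Band-pass the predicted waveform and return a rate in cycles/minute.

    band: (0.75, 2.5) Hz for heart rate, (0.08, 0.5) Hz for breathing rate,
    matching the 2nd-order Butterworth cut-offs used in the paper.
    """
    nyq = fs / 2.0
    b, a = signal.butter(2, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = signal.filtfilt(b, a, np.asarray(wave, dtype=np.float64))
    # Rate from the dominant in-band frequency of the filtered signal.
    freqs, psd = signal.periodogram(filtered, fs=fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(psd[in_band])]
```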
The evaluation metrics are averaged over the five folds and shown in Table 1. All of our proposed models outperform the 2D-CAN and the other baselines. Hybrid-CAN and 3D-CAN achieve similar accuracy, reducing MAE by 50% on pulse and 20% on respiration measurement. The hybrid model has lower computational cost and is therefore preferable. TS-CAN also surpasses the 2D-CAN, by more than 43% on pulse and 20% on respiration measurement. We also evaluated multi-tasking versions of TS-CAN and Hybrid-CAN, which we call MTTS-CAN and MT-Hybrid-CAN respectively. We observe that there is not an accuracy benefit from the multi-tasking model variants relative to the single-task versions, because the network must use almost all of the same parameters for both tasks. However, the MT models require half as much computation and half as many parameters as running the pulse and respiration models separately, which is a considerable benefit. To test whether our model can generalize to videos with a different resolution, background, and lighting, we trained our proposed models on AFRL and tested on the MMSE-HR dataset. Our proposed TS-CAN, Hybrid-CAN and 3D-CAN reduce errors by 25-50% compared to the 2D-CAN (see Table 1). Furthermore, MTTS-CAN and MT-Hybrid-CAN both perform quite strongly, showing that it is possible to share the representations between pulse and respiration. Computation Cost and Latency: Fig. 3A and the last column of Table 1 show that MTTS-CAN and TS-CAN are the fastest architectures of those evaluated, taking 6 ms and 12 ms per frame of inference, respectively. It is worth noting that TS-CAN is 40% faster than the 2D-CAN because of the unique design of the appearance branch, which executes only once and provides the generated attention mask to all the frames in the motion branch. MT-Hybrid-CAN and Hybrid-CAN achieve 13 ms and 26 ms inference times, respectively; this is approximately double that of our TS-based methods due to the cost of 3D convolutions relative to 2D convolutions. The 2D-CAN not only has a higher latency than TS-CAN, but its accuracy is also significantly lower. It is not surprising that the 3D-CAN achieved the worst inference speed, because it has costly 3D convolutions in both of its branches. Latency is important because we want our models to run at as high a frame rate as possible. 30 fps is the bare minimum required to accurately measure heart rate variability and subtle waveform dynamics, and 100 fps would be preferable, increasing the precision at which we could measure inter-beat and systolic-diastolic intervals [39] and helping with non-invasive blood pressure measurement [15] and detecting atrial fibrillation (AFib) [14]. Temporal Modeling: Capturing such waveform dynamics requires good temporal modeling; we therefore compared several designs to improve this. Our proposed MTTS-CAN, TS-CAN, MT-Hybrid-CAN, Hybrid-CAN and 3D-CAN all outperform the 2D-CAN and the other baseline methods. This is consistent with prior work that found a 3D-CNN without attention outperformed a 2D-CNN (without attention) [19]. We would anticipate that the focus on modeling the temporal aspects of the physiological waveform would lead to greater resilience to noise. We perform a systematic evaluation on videos with varying velocities of angular (rotational) head motion. The results are shown in Table 2. As expected, all the proposed temporal models perform particularly strongly on tasks with greater velocity head motion, reducing the error on the most challenging task (6) by over 75%. Moreover, as Fig.
3B illustrates, although tensor shifting provides important temporal information, it also introduces extra noise. The results in Table 1 indicate that our attention module is effective at separating the signal from the added noise. Multi-task Learning: Comparing our MT models with the non-MT models, we observe that the MT models do not reduce the error in pulse and respiration rate estimates. But they do significantly improve the efficiency of inference, as shown in Fig. 3A, which is critical on resource-constrained mobile platforms. Moreover, in order to estimate heart rate and respiration rate from a video, there are a number of mandatory pre-processing and post-processing steps in the pipeline, such as down-sampling images, computing averaged frames, and calculating the number of peaks. Since MTTS-CAN only takes 6 ms for one frame of inference, real-time inference is still eminently feasible even with the pre-processing overhead. Also, memory is a valuable resource on edge devices, and MTTS-CAN only requires half the memory to store its parameters compared to running separate TS-CAN models. We believe MTTS-CAN can be deployed, and will be especially useful, in resource-constrained settings. Applications of MTTS-CAN: The low latency and high accuracy of our system open the door to many other applications. For example, it could be used to improve the measurement of heart rate variability (HRV), a measure of the variation in the time between consecutive heartbeats. Tracking the subtle changes between consecutive heartbeats requires low latency like that provided by MTTS-CAN. Contactless and on-device HRV tracking will enable numerous novel applications in mental health and personalized health. Beyond health applications, MTTS-CAN could also potentially be applied to various computer vision tasks that require on-device computation, such as activity recognition and video understanding. Telehealth and the SARS-CoV-2 pandemic have acutely highlighted the need for accurate and computationally efficient cardiovascular and pulmonary sensing. We have presented a novel multi-task temporal shift convolutional attention network (MTTS-CAN) that improves on the state-of-the-art in both of these dimensions. Non-contact camera-based vital sign monitoring has great potential as a tool for telehealth. Our proposed system can promote global health equity and make healthcare more accessible for those in rural areas or those who find it difficult to travel to clinics and hospitals in person (perhaps because of age, mobility issues or care responsibilities). These needs are likely to be particularly acute in low-resource settings. Non-contact sensing has other potential benefits for measuring the vitals of infants, who ideally would not have contact sensors attached to their delicate skin. Furthermore, due to the exceptionally fast inference speed, the computational budget required for our proposed system is minimal. Therefore, people who cannot afford high-end computing devices will still be able to access the technology. While low-cost, ubiquitous sensing democratizes physiological measurement, it presents other challenges. If measurement can be performed from only a video, what happens if we detect a health condition in an individual when analyzing a video for other purposes? When and how should that information be disclosed? If the system fails in a context where a person is in a remote location, it may lead them to panic.
It is also important to consider how such technology could be used by "bad actors" or applied with negligence and without sufficient forethought about the implications. Non-contact sensing could be used to measure personal physiological information without the knowledge of the subject. Law enforcement might be tempted to apply this in an attempt to detect individuals who appear "nervous" via signals such as an elevated heart rate or irregular breathing, or an employer may surreptitiously screen prospective employees for health conditions without their knowledge during an interview. These applications would set a very dangerous precedent and would be illegal in many cases. Just as is the case with traditional contact sensors, it must be made very transparent when these methods are being used, and subjects should be required to consent before physiological data are measured or recorded. There should be no penalty for individuals who decline to be measured. Ubiquitous sensing offers the ability to measure signals in more contexts, but that does not mean that this should necessarily be acceptable. Just because cameras may be able to measure these signals in new contexts, or with less effort, does not mean they should be subject to any less regulation than existing sensors. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) and the HIPAA Privacy Rule set a standard for protecting sensitive patient data, and there should be no exception with regard to camera-based sensing. In the case of videos, there should be particular care in how videos are transferred, given that significant health data can be contained within the channel. That was one of the motivations for designing our methods to run on-device, as it can minimize the risks involved in data transfer.

References

[1] The role of telemedicine during the COVID-19 epidemic in China: Experience from Shandong province.
[2] Telehealth for global emergencies: Implications for coronavirus disease 2019 (COVID-19).
[3] Embracing telemedicine into your otolaryngology practice amid the COVID-19 crisis: An invited commentary.
[4] Pathological findings of COVID-19 associated with acute respiratory distress syndrome. The Lancet Respiratory Medicine.
[5] COVID-19 and the cardiovascular system.
[6] Non-contact physiological monitoring of preterm infants in the neonatal intensive care unit.
[7] Non-contact, automated cardiac pulse measurements using video imaging and blind source separation.
[8] Non-contact measurement of oxygen saturation with an RGB camera.
[9] DeepPhys: Video-based physiological measurement using convolutional attention networks.
[10] Heart rate measurement based on a time-lapse image.
[11] Remote plethysmographic imaging using ambient light.
[12] Advancements in noncontact, multiparameter physiological measurements using a webcam.
[13] Non-contact video-based vital sign monitoring using ambient light and auto-regressive models.
[14] Diagnostic performance of a smartphone-based photoplethysmographic application for atrial fibrillation screening in a primary care setting.
[15] Cuffless single-site photoplethysmography for blood pressure monitoring.
[16] Robust pulse rate from chrominance-based rPPG.
[17] Algorithmic principles of remote PPG.
[18] Visual heart rate estimation with convolutional neural network.
[19] Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks.
[20] Heart rate estimation from facial videos using a spatiotemporal representation with convolutional neural networks.
[21] VitaMon: Measuring heart rate variability using smartphone front camera.
[22] TSM: Temporal shift module for efficient video understanding.
[23] Reasoning about human-object interactions through dual attention networks.
[24] Leveraging data science to combat COVID-19: A comprehensive review.
[25] A deep learning algorithm using CT images to screen for corona virus disease (COVID-19).
[26] Deep learning system to screen coronavirus disease 2019 pneumonia.
[27] FluSense: A contactless syndromic surveillance platform for influenza-like illness in hospital waiting areas.
[28] AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app.
[29] Coronavirus effects on pregnant women in the world.
[30] DeepMag: Source specific motion magnification using gradient ascent.
[31] Respiratory sinus arrhythmia: Autonomic origins, physiological mechanisms, and psychophysiological implications.
[32] Recovering pulse rate during motion artifact with a multi-imager array for non-contact imaging photoplethysmography.
[33] Multimodal spontaneous emotion corpus for human behavior analysis.
[34] Exploiting spatial redundancy of image sensor for motion robust rPPG.
[35] TensorFlow: A system for large-scale machine learning.
[36] Adadelta: An adaptive learning rate method.
[37] TVM: An automated end-to-end optimizing compiler for deep learning.
[38] Relay: A new IR for machine learning frameworks.
[39] Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera.