key: cord-0557106-p4qruah1
authors: Mishra, Simoni
title: Simulation Study and At Home Diagnostic Tool for Early Detection of Parkinsons Disease
date: 2021-11-22
journal: nan
DOI: nan
sha: e866b51f548cd0d3f0b48c141e5660e95a53581f
doc_id: 557106
cord_uid: p4qruah1

Hypomimia is a condition in the early stages of Parkinson's disease that limits the movement of facial muscles, restricting the accurate depiction of facial expressions. Also known as facial masking, this condition is an early symptom of Parkinson's disease, a neurodegenerative disorder that affects movement. To date, no specific test exists to diagnose the disease. Instead, doctors rely on the patient's medical history and symptoms to confirm its onset, delaying treatment. This study aims to develop a diagnostic tool for Parkinson's disease utilizing the Facial Action Coding System, a comprehensive system describing all facially discernible movement. In addition, this project generates image datasets by simulating faces from action unit sets for both Parkinson's patients and non-affected individuals through code. The model is trained using supervised learning and neural networks. Model efficiency is compared between Convolutional Neural Network and Vision Transformer models, and the models are later tested with a test program by passing a Parkinson's-affected image as input. The future goal is to develop a self-administered mobile application (PDiagnosis) that uses the model to determine the probability of Parkinson's progression.

One of Parkinson's disease's (PD) early signs is the gradual loss of facial mobility, producing a "mask-like" appearance. This study investigates facial expressivity in simulated PD patients and Control populations and detects whether a subject shows signs of Parkinson's progression. In this work, 22,000 images were generated from a neutral face using the Facial Action Coding System.
These images were generated for four facial expressions (happy, surprise, disgust, and neutral) for both Control and Parkinson's-affected subjects. Clinical trials with human subjects are often impractical due to many limitations, but virtual clinical trials with simulated subjects are slowly gaining popularity as a way to evaluate and optimize concepts and technologies. Due to the lack of datasets containing faces of Parkinson's patients, this study replaced human patients with simulated facial images of both Control and Parkinson's-affected patients.

The Facial Action Coding System (FACS) is a coding technique, developed by many researchers, for describing facial muscle movements and expressions. Each muscle movement corresponds to a specific facial action unit. Action Units (AUs) are the fundamental actions of individual muscles or muscle groups, external representations of muscle movements defined by their appearance on the face. OpenFACS is an open-source, FACS-based software package that allows the simulation of facial expressions (Cuculo & D'Amelio); it relies on AUs to create muscle movements. The OpenFACS API is free, open-source software available on GitHub, and it is used in this project to simulate Parkinson's images.

[Table: the facial action units defined by OpenFACS and their associated facial muscles (Ekman & Friesen, 1978); for example, Disgust = AU4+AU7+AU9.]

This study required data from Parkinson's subjects, which was not feasible to collect due to lack of access to labs and human subjects during the COVID-19 pandemic. As an alternative, patients were generated and simulated with the OpenFACS software, which produces the necessary PNG files with facial expressions. The software requires a Linux platform to operate.
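The expression-to-AU mapping can be written as a small lookup table. Only the Disgust set (AU4+AU7+AU9) is stated explicitly in the text; the Happy and Surprise sets below are the standard EMFACS prototypes and are an assumption here, not taken from the study:

```python
# Expression -> FACS action unit numbers.
# Disgust is taken from the text; Happy and Surprise are the
# standard EMFACS prototypes (assumed, not stated in the study).
EXPRESSION_AUS = {
    "happy":    [6, 12],        # cheek raiser + lip corner puller (assumed)
    "surprise": [1, 2, 5, 26],  # brow raisers + upper lid raiser + jaw drop (assumed)
    "disgust":  [4, 7, 9],      # brow lowerer + lid tightener + nose wrinkler (from text)
    "neutral":  [],             # no activated action units
}
```

A table like this makes it straightforward to iterate over all four expressions when driving the simulator.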
For this purpose, a system was configured by installing Ubuntu OS. The images were generated by calling the OpenFACS API's "sendAU" method, which expects a set of action units describing the facial muscles involved in the image. Images were generated for the following facial expressions: Happy, Disgust, Surprise, and Neutral. However, the current OpenFACS API does not include any functionality to export or screen-capture in any format, such as an image or video. After some trial and error, an operating-system command (the "import" screen-capture utility) was found and used to save the images to the computer for training the models.

A few research articles were used to understand the different muscle movements in Parkinson's cases. The action units for each expression in Parkinson's were gathered by compiling the charts published in an article by Wu and Gonzalez together with previous research articles, and were computed using the variance from a prior study (Table 3). The action units shown above (Figure 2) were used as the base action units for each expression when generating simulated Parkinson's and Control images. The NumPy random module was used to apply a randomized variance to the speed and intensity of the muscle movements, distinguishing Parkinson's cases from Control cases. The action units for each emotion were passed as a JSON string to the "sendAU" method of the OpenFACS API, and the simulated images were saved by invoking the operating system's "import" command. In total, 11,000 Parkinson's cases (images) and 11,000 Control cases (images) were simulated, with 2,750 images generated per expression. The code that generates the images via the OpenFACS sendAU API can be found in the appendix.
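A minimal sketch of this generation loop is shown below. The exact wire format of OpenFACS's sendAU interface, the port number, and the intensity ranges used here are assumptions for illustration only; what comes from the text is the use of a JSON AU string, NumPy-randomized variance for Parkinson's versus Control intensity, and the OS-level "import" command for saving screenshots:

```python
import json
import os
import socket
import numpy as np

rng = np.random.default_rng(42)

def make_au_payload(base_aus, parkinsons=False):
    """Build a JSON AU message for OpenFACS's sendAU interface.

    Parkinson's cases get a damped intensity (hypomimia) with random
    variance; the scale ranges below are illustrative assumptions.
    """
    scale = rng.uniform(0.2, 0.5) if parkinsons else rng.uniform(0.8, 1.0)
    aus = [{"au": f"AU{n:02d}", "value": round(5.0 * scale, 2)} for n in base_aus]
    return json.dumps({"AUs": aus})

def send_and_capture(payload, out_png, host="127.0.0.1", port=50000):
    """Send the AU set to a running OpenFACS instance, then save a
    screenshot with the ImageMagick `import` command, as in the text.
    (Host, port, and message framing are assumed, not from the study.)"""
    with socket.create_connection((host, port)) as s:
        s.sendall(payload.encode("utf-8"))
    os.system(f"import -window root {out_png}")  # OS-level screen capture
```

Looping this over the four expressions, with 2,750 randomized payloads per expression and per group, would yield the 11,000 + 11,000 images described above.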
For all four expressions, the process was repeated for both Parkinson's and Control images, yielding a total of 22,000 simulated images across the Disgust, Surprise, Neutral, and Happy expressions (Ricciardi et al., 2017). The data was split between the training and test datasets in a 75%/25% ratio.

Initially, the model was built using a CNN, which has specialized applications in image recognition (Singh, 2021). The CNN model consists of three layer types: convolutional, pooling, and fully connected layers. Each image was resized to a 150 x 150-pixel square. The initial CNN model was run from 10 epochs up to 100 epochs while measuring accuracy, efficiency, and issues such as overfitting; at 50 epochs, the model was found to be effective. Studying the model summary and the TensorBoard charts suggested that the pooling layers were losing a lot of information, so another attempt was made with the ViT model. Although the Transformer architecture behind ViT originated in text-based tasks, it works efficiently on images by treating each input as a sequence of square image patches (Panteleyev, 2017). Each patch is flattened into a single vector, which lets the model learn from every patch. The study used data augmentation and regularization to improve the model.

The CNN is a multilayer deep learning model in which activations pass from the neurons of one layer to the next, and it provides promising results in image processing. A trained CNN has hidden layers whose neurons correspond to possible abstract representations of the input features. Because images are naturally non-linear, the rectifier (ReLU) activation function was applied to increase non-linearity. The figure below shows the CNN model summary for this study. A dropout layer was applied to avoid overfitting, and the images were shuffled to reduce variance and provide better accuracy.
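The 75/25 split can be sketched as a shuffled index partition (a hypothetical helper; the study does not show its exact splitting code):

```python
import numpy as np

def split_75_25(images, labels, seed=0):
    """Shuffle paired arrays and split them 75% train / 25% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))  # shuffling reduces variance
    cut = int(0.75 * len(images))       # 75% boundary
    train, test = idx[:cut], idx[cut:]
    return images[train], labels[train], images[test], labels[test]
```

On the study's 22,000 images this yields 16,500 training and 5,500 test samples.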
In the first few runs, the accuracy of the first epoch was 71% and increased as the epochs progressed. A dropout layer was then added to generalize the model and reduce overfitting: with dropout, neurons in the current layer randomly disconnect from the successive layer, so the network cannot rely on any single connection, which reduces the probability of overfitting. From the model summary: the Flatten layer converts the 2D feature maps into a 1D vector of 17 x 17 x 64 = 18,496 values. TensorBoard generates histograms showing the changes in loss and accuracy after every epoch as the data passes through the CNN model; the chart to the right shows that accuracy increased as the epochs increased.

The ViT model was trained on the same number of simulated Parkinson's and Control subjects as the CNN model. The small number of images used in the study was a limitation for this model, and a few issues were observed, which were improved through optimization and regularization. As shown in Figure 13, after a few epochs the model was underfitting, which was difficult to optimize.

A test program was created to test the model by inputting simulated Parkinson's and non-Parkinson's images. As mentioned earlier, the test result provides a way to detect Parkinson's from everyday activities. These findings provide evidence that early detection of Parkinson's through images and facial expressions could soon become a reality using mobile applications. The OpenFACS API allows the generation of animated 3D output; however, due to a technical constraint, the 3D output could not be saved for analysis, so 2D images were generated instead. Future work will include 3D images. An effort is currently underway to create a mobile application that uses this model to read a picture from the user's mobile device and assess whether the user shows signs of Parkinson's disease. The mobile application uses TensorFlow Lite to run the model on the device.
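The quoted 17 x 17 x 64 flatten size is consistent with three unpadded 3x3 convolutions, each followed by 2x2 max-pooling, applied to the 150 x 150 input; that exact layer layout is an assumption here, since the text shows only the resulting summary. A quick arithmetic check:

```python
def conv_pool_side(size, n_blocks, kernel=3, pool=2):
    """Spatial side length after n (valid conv -> max-pool) blocks."""
    for _ in range(n_blocks):
        size -= kernel - 1  # a 'valid' 3x3 convolution shrinks each side by 2
        size //= pool       # non-overlapping 2x2 max-pooling halves it
    return size

side = conv_pool_side(150, n_blocks=3)  # 150 -> 74 -> 36 -> 17
flat = side * side * 64                 # 64 filters in the final conv layer
print(side, flat)                       # prints: 17 18496
```

The result matches the 18,496-value flatten output reported in the model summary.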
This proof-of-principle study demonstrated two approaches: it used the Facial Action Coding System to generate simulated Parkinson's subjects for study, and it then created a model to detect Parkinson's from the hypomimia cases. The goal is to use this model in a mobile application called PDiagnosis, an at-home detection tool that monitors the progression of Parkinson's from the user's daily activities, such as when taking pictures, or by integrating with facial recognition software (Patil, 2021). The results of this study open new avenues for future research and may serve as a source of hypothesis generation for future researchers. The approach of generating simulated images from action units can be applied to other studies without using human subjects, and PDiagnosis can be extended to use human subjects and a high volume of real images to improve accuracy.

References
- Facial expressions can detect Parkinson's disease: Preliminary evidence from videos collected online
- OpenFACS: an open-source FACS-based 3D face animation system
- Facial expression processing is not affected by Parkinson's disease but by age-related factors. Frontiers in Psychology
- Spontaneous and posed facial expression in Parkinson's disease
- Facial emotion recognition and expression in Parkinson's disease: An emotional mirror mechanism? PLOS ONE
- The ultimate guide to emotion recognition from facial expressions using Python
- Objectifying facial expressivity assessment of Parkinson's patients: Preliminary study
- How To Implement Object Recognition on Live Stream
- Study on Seizure Detection from the Features of EEG
- Facial Action Coding System (FACS): A Visual Guidebook
- A neural network underlying intentional emotional facial expression in neurodegenerative disease
- Methods and approaches on emotions recognition in neurodegenerative disorders: A review
- FACS: Facial Action Coding System
- Recognizing action units for facial expression analysis
- Evaluation of EMG processing techniques using Information Theory