key: cord-0975060-q1janj1n
authors: Meivel, S.; Indira Devi, K.; Uma Maheswari, S.; Vijaya Menaka, J.
title: Real time data analysis of face mask detection and social distance measurement using Matlab
date: 2021-02-20
journal: Mater Today Proc
DOI: 10.1016/j.matpr.2020.12.1042
sha: 46eedc01f0a2b586fda7a23754165b792545b197
doc_id: 975060
cord_uid: q1janj1n

This paper describes face mask detection in Matlab for complex images in a dataset. Matlab's Faster R-CNN implementation and a dedicated dataset allotment are used for mask detection, and complex pictures are managed with facial recognition packages. The Faster R-CNN methodology is applied in security systems and medical systems. The proposed work balances face localization, color changes, brightness changes, and contrast changes; segmentation and feature extraction are used to localize the face in a person's image. We chose the R-CNN, Fast R-CNN, and Faster R-CNN algorithms for mask detection and social distance measurement. Regions with Convolutional Neural Networks (R-CNN) are based on mixing pictures, pixel prediction, and specific enhancements. The main objective is to solve multiple, multitask picture detection problems at high speed. The methodology is used for face detection and for detecting unmasked persons in a face database.

Coronavirus disease (COVID-19) is caused by a member of the coronavirus family. The pandemic has locked down the world and caused a global recession. The virus spreads mostly through droplets produced by sneezing and coughing. Regular hand washing, wearing a face mask, and maintaining social distance in public places are preventive measures that slow the spread of the virus. First, a face detection method is applied to extract the face region from the entire image; a color-based [] face detection algorithm is applied for face detection. Extracting the face is a challenging task because of variations in face structure and skin color.
Pose variation, lighting conditions on the face, and the edges of the camera output also affect accurate face localization. After face localization, a face mask detection step is applied to determine whether a face is wearing a mask, and an alarm is implemented to signal when a person is not wearing one. Social distancing is also a crucial step in controlling the spread of the virus, so this paper proposes a combined approach for mask detection and social distance monitoring (see Fig. 1). High-quality images are required to detect faces with and without masks, and the model is trained using classification on the dataset. Different faces can confuse manual feature extraction, which cannot reliably predict masked or unmasked pictures. Biometric sensors are used for face capture and detection, with faces characterized by the nose, lips, eyebrow size, and eyes. Multi-face issues are reduced using the R-CNN algorithm; in multimedia applications with video streaming, fast detection of unmasked faces is possible using Faster R-CNN with 97% accuracy. Face localization and facial communication are important for security purposes, so the framework is tuned for face detection even on degraded dataset images. Human face color changes with skin concealment and skin structure, and person counts depend on the lighting and the edges of the output. Picture pixels are examined to separate or merge face features using image segmentation when feature extraction is required. Distinct pictures with good contrast, good brightness, and clear facial features are used to generate face surfaces. Skin, hair color, nose size, eyebrow length, and the darkness of the eyes are the features most used to improve face detection accuracy. Color data are stored in the dataset and used to identify the face surface [3]. Skin color varies with the brightness and contrast of the photo: when contrast is increased, colors shift toward white.
When the contrast of the picture is decreased, colors shift toward black. Skin color is also affected by changing facial conditions, the overall size of the face, relocation of facial features, and the age or sex of the subject. After accurate face detection, the face contains some dark portions, such as the eyes and eyebrows, and some lighter skin-colored portions, such as the cheeks and forehead. It is therefore possible to validate face skin color and characterize each pixel shade in a picture by restricting it to a region of a color space. In colorimetry, imaging, and picture processing, different color spaces have different properties, and each model has its own skin-color subspaces and structures; characterization techniques differ between color subspaces. The RGB space contains red, green, and blue components, and a color image is formed by combining these three. The YCbCr color space is used by the JPEG picture format: Y contains the luminance, while Cb and Cr represent the color differences. The face area is divided into squares of skin-like pixels and non-skin-like pixels, since the detected face contains both dark and skin-like portions. First, the skin-color area is characterized: it is better to characterize skin-like pixels based on examples in the color space. This is a tentative distribution obtained by examining distinctive designs; it is simple to use and gives good results, although the quality of results depends on the model and the exactness of the rules. Skin-like color pixels are first extracted to form a set characterizing the skin-color subspace according to the characterization rule; pixels of similar skin color form a set that differs from the non-skin-color pixels. In parametric models, the remaining portion, which contains the non-skin region, forms another set, and boundaries between the skin-color and non-skin-color subspaces are formed to separate them.
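The YCbCr-based skin classification described above can be sketched as a simple per-pixel rule. The paper uses Matlab; this minimal Python sketch is for illustration only, and the Cb/Cr thresholds are common literature values assumed here, not the paper's exact characterization rule.

```python
# Illustrative skin-color classification in YCbCr space.
# The Cb/Cr thresholds below are assumptions for illustration,
# not the paper's exact characterization rule.

def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (JPEG/JFIF convention)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_pixel(r, g, b):
    """Classify a pixel as skin-like if (Cb, Cr) falls in the skin subspace."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

# Example: a typical skin tone vs. a pure blue pixel.
print(is_skin_pixel(220, 170, 140))  # True (skin-like)
print(is_skin_pixel(0, 0, 255))      # False (not skin-like)
```

Working in Cb/Cr rather than RGB makes the rule largely independent of luminance, which is why YCbCr is singled out in the text as well suited to face identification.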
The non-skin region contains the eyebrows, eyes, and other portions that are not similar to the skin-color subspace. A binary-pattern dataset was assigned an adequate number of varied skin picture designs, trialed and trained 400 times; at least 1,000 faces must be stored in the database for higher accuracy using R-CNN. A 256x256x256 framework was built to represent the database in color estimation, storing RGB pictures and image distances, with skin color points set to 1 and non-skin color points set to 0. The contrast value stores the pixel range of the picture in the database, and this guide harmonizes the pictures stored in the color zone. A correlation-based methodology is used on the database to detect facial color regions; the correlation method overcomes facial recognition problems better than other methods. The YCbCr, HSV, and RG color spaces are derived from RGB, and YCbCr is one of the most suitable color spaces for facial identification from RGB color data. Face detection is a crucial step in this paper. After accurate face detection, the skin-color subspace and non-skin-color subspace contain the pixel values corresponding to the skin and non-skin portions, respectively: the skin-color subspace contains pixel values of the skin-color regions, while the non-skin-color subspace contains pixel values that do not resemble skin color. To find the skin and non-skin pixel values, we propose a 3x3 pixel square that is run over the whole face region. The mean of the pixel values within the square is estimated as it slides across the face image; each mean is compared with the values of the other squares, and on that basis we distinguish the skin-color region from the non-skin-color region. The mean of the pixels in a square gives higher values for the skin-color region than for non-skin regions.
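The 3x3 mean-square scan described above can be sketched as a sliding window over a grayscale image, labeling each window 1 (skin-like, high mean) or 0 (non-skin, low mean). The fixed threshold of 128 is an assumption for illustration; the paper compares each mean against the other squares rather than against a fixed value.

```python
# Sketch of the 3x3 mean-square scan: slide a 3x3 window over a
# grayscale image (a 2D list of pixel values) and label each window
# position 1 (skin-like, high mean) or 0 (non-skin, low mean).
# The threshold of 128 is an assumption for illustration.

def window_means(image):
    """Return the mean of every 3x3 window of a 2D list of pixel values."""
    h, w = len(image), len(image[0])
    means = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            total = sum(image[i + di][j + dj]
                        for di in range(3) for dj in range(3))
            row.append(total / 9.0)
        means.append(row)
    return means

def label_regions(image, threshold=128):
    """Assign 1 to high-mean (skin-like) windows, 0 to low-mean ones."""
    return [[1 if m > threshold else 0 for m in row]
            for row in window_means(image)]

# Bright (skin-like) block on the left, dark (eye/eyebrow) block on the right.
face = [[200] * 3 + [40] * 3 for _ in range(3)]
print(label_regions(face))  # [[1, 1, 0, 0]]
```

Running the window both horizontally and vertically, as the text suggests, amounts to this same scan over the full image grid; averaging suppresses isolated noisy pixels so region boundaries come out cleaner than a per-pixel rule would give.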
We can also assign different values to the squares, such as one for squares with high mean and low dispersion, which correspond to skin-color regions, and zero for squares with low mean and high dispersion, which correspond to non-skin regions. This 3x3 pixel square can also be run horizontally as well as vertically for better filtering of skin-color and non-skin-color regions. A stochastic framework distinguishes skin-like and non-skin-like colors separately based on the mean of the 3x3 pixel square. In a non-stochastic framework, by contrast, the luminance component of the picture is subjected to a wavelet transform; in this paper the Daubechies type-2 (db2) wavelet transform is used. There is expressive detail in the vertical direction because of the eye regions, lips, and so on, so point-to-point coefficients at the first level of decomposition are taken in the vertical direction. A low-pass filter is used because the face is large, and the coefficients are assumed to spread more vertically. We use a specially averaged sinusoid-like function, rescale the field of coefficients onto a two-fold guide measure, and multiply them with this guide. The COVID-19 pandemic has locked down the world and caused a global recession. The virus spreads mostly through droplets produced by sneezing and coughing; regular hand washing, wearing a face mask, and maintaining social distance in public places are preventive measures that slow its spread. In this paper, a coronavirus mask-detection system is built with a deep learning model. It includes semi-automatic data labeling, model training, and GPU code generation for real-time inference. Some test data were added to the sample mask data to show whether a person is wearing a mask. In the figure, the Prepare Data stage collects data through data access, pre-processing, ground-truth labeling, and simulation-based data generation.
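The db2 wavelet step mentioned above can be illustrated with one level of the Daubechies 4-tap filter bank on a 1-D signal. The paper applies the transform to the luminance component in Matlab; this Python sketch only shows the filter bank itself, under the assumption of periodic signal extension, not the full 2-D decomposition.

```python
import math

# One level of the Daubechies db2 (4-tap) discrete wavelet transform on a
# 1-D signal, with periodic extension. Illustrative sketch only; the paper
# applies db2 to the 2-D luminance component.

S3 = math.sqrt(3.0)
# Orthonormal db2 low-pass filter taps.
LO = [(1 + S3) / (4 * math.sqrt(2)), (3 + S3) / (4 * math.sqrt(2)),
      (3 - S3) / (4 * math.sqrt(2)), (1 - S3) / (4 * math.sqrt(2))]
# Matching high-pass filter via the quadrature-mirror relation.
HI = [LO[3], -LO[2], LO[1], -LO[0]]

def dwt_db2(signal):
    """Return (approximation, detail) coefficients, downsampled by 2."""
    n = len(signal)  # assumed even
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(LO[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(HI[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail

sig = [3.0, 7.0, 1.0, 1.0, -2.0, 5.0, 4.0, 6.0]
ca, cd = dwt_db2(sig)
# Orthonormality: the transform preserves the signal's energy.
print(sum(x * x for x in sig))
print(sum(a * a for a in ca) + sum(d * d for d in cd))
```

Applied along image columns, the detail coefficients of this filter respond strongly to horizontal edges such as the eyes and lips, which is the vertical-direction emphasis the text describes.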
After data preparation, the Train Model stage proceeds through model design, hyperparameter tuning, model exchange across frameworks, and hardware-accelerated training. Once the trained model is generated, the Deploy System stage is set up through iteration and refinement, which includes multiplatform code generation, embedded deployment, and enterprise deployment. Images are labeled using automated labeling with a pre-trained facial recognition model. The object detection model is trained based on the Single-Shot Multibox Detector (SSD) and You Only Look Once v2 (YOLOv2), and CUDA code is generated to accelerate inference speed (see Fig. 2). One file is used for ground-truth labeling with a semi-automated, trained model and a dataset; it interfaces the original data with the labeled data and is a Matlab live (mlx) code file. This file contains a training model covering evaluation, framework architecture creation, and data segmentation, and an API is used in analyzing the SSD and YOLOv3 files. After the training model is finished, the next step is testing on still images and live streaming data: the trained model is tested for facial recognition first on still images, then on existing video, and finally on live images. Testing on live images requires supporting packages, such as a USB webcam package; when a USB webcam is connected, the system installs it automatically (see Fig. 3). Another file implements a simple concept: the code converts images in order to detect unmasked faces. A strain of the illness can affect a person with COVID-19 up to 14 days before testing. SARS-CoV and MERS-CoV respiratory syndromes damage the lungs after infection, so people's heartbeats, the distance between people, and mask wearing are monitored to reduce spreading. During the isolation period, some people escape from hospitals under pressure; at that time, this system identifies those persons and where they may spread the virus.
The Matlab software handles the analysis of detecting people and of where they may be spreading the virus (see Fig. 4). The You Only Look Once v2 (YOLOv2) file identifies pedestrians by foot recognition after the model is trained using classification. A bird's-eye-view concept is used together with masked-person detection via the Faster R-CNN algorithm: social distance is measured between points (x, y) in the bird's-eye (top-down) position on the ground plane. The system finds movement in the bird's-eye coverage images and estimates each passer-by's (x, y) position. One file extracts the different datasets and provides ground-truth information; it is a Matlab-supported file that can be connected to the trained and tested files to compute the final social distance. The pedestrian detection model is trained from ground-truth data using the extract-people GT mlx file; it converts the normal view to a bird's-eye view and measures the (x, y) distance between people (see Fig. 7). After the model is extracted and trained on the dataset, the social distancing script Matlab file runs with the pretrained model; it covers the calibration tasks from the bird's-eye view, finds the extracted images, and locates their distances. When the social distancing script file is run, it measures the distance between the nearest two points, and social distance can be found using Faster R-CNN on streaming video (see Fig. 8). Another file detects the social distance between two persons when the lightweight mlapp file is run. Once the unmask-detecting file and the social-distance-measuring file are built using the Faster R-CNN algorithm, the system automatically shows who is unmasked and who is not keeping social distance. The social distance threshold is set to 3 m in this video streaming Matlab file. For benchmarking, social distance was measured 50 times to assess accuracy, with the distance measured from the movements of two objects.
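The distance check above reduces to flagging every pair of bird's-eye (x, y) points closer than the 3 m threshold. The paper does this in Matlab after a homography to the ground plane; this Python sketch assumes that projection is already done and that coordinates are in meters.

```python
import math

# Sketch of the social-distance check: given pedestrian positions already
# projected to bird's-eye ground-plane coordinates (in meters), flag every
# pair closer than the 3 m threshold used in the paper. The homography
# producing these (x, y) points is assumed to have been applied already.

def ground_distance(p, q):
    """Euclidean distance between two (x, y) ground-plane points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def violations(points, threshold=3.0):
    """Return index pairs of pedestrians closer than `threshold` meters."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if ground_distance(points[i], points[j]) < threshold:
                pairs.append((i, j))
    return pairs

# Three pedestrians: 0 and 1 are 2 m apart, 2 is far from both.
people = [(0.0, 0.0), (2.0, 0.0), (10.0, 10.0)]
print(violations(people))  # [(0, 1)]
```

Measuring in the top-down view matters because pixel distances in the original camera view are distorted by perspective; after the ground-plane projection, a single metric threshold applies everywhere in the frame.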
High-contrast and high-brightness images are adjusted using the Faster R-CNN algorithm when complex images are captured. Turned faces, faces wearing glasses, bearded faces, and scarf-covered faces are easily detected using Faster R-CNN with 93.4% accuracy, even across varied images and different color images. Both applications, mask detection and social distance measurement, are handled using the CNN algorithm with features extracted in the Matlab toolbox.

References

[1] A method for nose tip location and head pose estimation in 3D face data.

Further Reading

[2] Production of Mandarin lexical tones: auditory and visual components.
[3] Rigid vs non-rigid face and head motion in phone and tone perception. Annual Conference of the International Speech Communication Association.
[4] Encara2: real-time detection of multiple faces at different resolutions in video streams.
[5] Active contours without edges.
[6] Eyebrow raises in dialogue and their relation to discourse structure, utterance function and pitch accents in English.
[7] Joint gender-, tone-, vowel-classification via novel hierarchical classification for annotation of monosyllabic Mandarin word tokens.
[8] Computer-vision analysis reveals facial movements made during Mandarin tone production align with pitch trajectories.
[9] Eyebrow movements and vocal pitch height: evidence consistent with an ethological signal.
[10] Tracking eyebrows and head gestures associated with spoken prosody.
[11] Empirical analysis of detection cascades of boosted classifiers for rapid object detection.
[12] Exploratory undersampling for class-imbalance learning.
[13] An iterative image registration technique with an application to stereo vision.
[14] Multiresolution gray-scale and rotation invariant texture classification with local binary patterns.
[15] Inferring statistically significant features from random forests.
[16] Optical phonetics and visual perception of lexical and phrasal stress in English.
[17] On the interdependence of tonal and vocalic production goals in Chinese.
[18] Detection and tracking of point features.
[19] Locating nose-tips and estimating head poses in images by tensorposes.
[20] Random forests. Machine Learning.
[21] Rozpoznávání akustického signálu řeči s podporou vizuální informace (Recognition of the acoustic speech signal supported by visual information). Dissertation, Technical University of Liberec.
[22] Adaptive color space switching for face tracking in multicolored lighting environments.
[23] Detection of moving shadows using mean shift clustering and a significance test.
[24] A color-based method for face detection.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.