key: cord-0738002-uwhtrep2 authors: Jain, Nikita; Gupta, Vedika; Shubham, Shubham; Madan, Agam; Chaudhary, Ankit; Santosh, K. C. title: Understanding cartoon emotion using integrated deep neural network on large dataset date: 2021-04-21 journal: Neural Comput Appl DOI: 10.1007/s00521-021-06003-9 sha: ef5f8c1eb0117e71b7edbfba198efe239e71fa71 doc_id: 738002 cord_uid: uwhtrep2

Emotion is an instinctive or intuitive feeling as distinguished from reasoning or knowledge. It varies over time, since it is a natural instinctive state of mind deriving from one's circumstances, mood, or relationships with others. Since emotions vary over time, it is important to understand and analyze them appropriately. Existing works have mostly focused on recognizing basic emotions from human faces; emotion recognition from cartoon images, however, has not been extensively covered. Therefore, in this paper, we present an integrated Deep Neural Network (DNN) approach that deals with recognizing emotions from cartoon images. Since state-of-the-art works do not provide large amounts of data, we collected a dataset of 8K images of two cartoon characters, 'Tom' & 'Jerry', with four different emotions, namely happy, sad, angry, and surprise. To the best of our knowledge, these are the largest data for cartoon emotion classification, and they are available for research purposes. The proposed integrated DNN approach, trained on a large dataset consisting of animations of both characters (Tom and Jerry), correctly identifies the character, segments their face masks, and recognizes the consequent emotions with an accuracy score of 0.96. The approach utilizes Mask R-CNN for character detection and state-of-the-art deep learning models, namely ResNet-50, MobileNetV2, InceptionV3, and VGG16, for emotion classification. In our study, VGG16 outperforms the others in classifying emotions, with an accuracy of 96% and an F1 score of 0.85. The proposed integrated DNN outperforms the state-of-the-art approaches.

An emotion is a physiological state of mind that is subjective and is constituted by associated thoughts, feelings, and behavioral responses that are homogeneous. Recognizing emotions is highly useful in artificially intelligent systems, enabling such systems to recognize and predict human emotions and thereby enhance the productivity and effectiveness of working with computers. Recently, automatic emotion recognition has become a popular research field involving researchers from industry as well as academia, specializing in artificial consciousness, computer vision, brain computing, physiology, and, more recently, deep learning. Its popularity stems from its wide range of potential applications. One popular research work in the field of emotion recognition was conducted by Ekman and Friesen [1], who classified emotions into six basic expressions (happiness, sadness, disgust, anger, fear, and surprise) and stated that they are universal. Their research has become a benchmark for the evaluation of studies conducted in the field of emotion detection. Emotion recognition has been gaining momentum over the past few years. It can be performed by analyzing data in any medium: text [2-5], audio or speech [6-9], video, or images [10-13]. Research in the field of emotion recognition has provided valuable data on the emotional state of patients [14], responses to advertisements, and even on times of crisis like the coronavirus pandemic [15]. Emotion recognition from images generally involves facial expressions or gestures.
Facial emotions are a form of non-verbal communication that conveys both the emotional state and the behavioral intentions of an individual. The task of recognizing such emotions can be performed on facial image data of human beings, animals, or any real-world entity. This task has several applications in a variety of fields, ranging from medicine [16-18] and e-learning [19-22] to entertainment [23], marketing [24-26], and even the judiciary [27]. The application of facial emotion recognition is not limited to reading emotions from human faces; it can also be implemented to detect the emotions of animated characters or cartoons. Cartoons are mostly made keeping in mind the entertainment and suitability of viewers (especially children), and they are often filled with various kinds of emotions that are portrayed in multiple forms by the same character. The motivation behind the current research lies in the fact that plenty of emotions are portrayed in cartoons, even by the same character, and animated cartoons provide an opportunity to extract emotions from these characters (from one or more videos). This idea of identifying emotions is useful in cases where parents or guardians want to choose a category of cartoon (sci-fi, comic, humor, mystery, or horror) based on their child's interest or suitability. To identify human faces, an image can be segmented using the OpenCV library [28]. However, the underlying algorithm fails to detect other real-world entities [29] (cartoons, in this paper). Therefore, we propose a generic approach that uses a popular method, Mask R-CNN, to efficiently segment objects. Furthermore, no existing dataset could be found online, owing to the time-consuming nature of preparing data from videos for this task, which encompasses character identification as well as emotion identification. Therefore, a dataset is built in this regard (currently with two cartoon characters, Tom & Jerry) that can also be utilized in different applications if publicly released on the web. Moreover, the dataset is extensible to any number of cartoon characters while keeping the generic approach/implementation the same. The current work deals with recognizing emotions from the facial expressions of cartoon characters. The objective is to find out whether DNNs can be deployed to extract and recognize emotions from cartoons. Even though emotion recognition has been extensively performed on human facial images, recognizing emotions from cartoon images is still an under-explored area. To handle this, a novel integrated DNN approach has been developed to identify emotions from cartoon characters, wherein the faces of the characters are segmented into masks using the Mask R-CNN technique. These generated masks are further used as input to the emotion recognition model to recognize the emotions of the character. For the analysis conducted in this paper, two cartoon characters, Tom and Jerry, have been taken into account. The recognized emotions fall into the following categories: Sad, Happy, Angry, and Surprise. The deployed approach gives an F-score of 0.85 when implemented on the created dataset of the two characters used here, viz. Tom and Jerry. The proposed approach is generic and scalable for recognizing emotions. The rest of the paper is organized as follows: Sect. 2 puts forward the existing literature and datasets for emotion recognition from animated images, including cartoons.
Section 3 outlines the contributions of this paper. Section 4 discusses the materials used in this work: dataset collection, dataset preparation, and the deep neural networks in detail. Section 5 presents the methodology of the proposed work. Section 6 shows the experimental analysis, results, and comparison with the state of the art. Section 7 concludes the paper with a discussion of the obtained results, their evaluation against the baseline methods, and the scope for further enhancement.

Existing research on emotion recognition from facial images is extensive. Facial expressions give accurate information allowing the viewer to differentiate between various negative and positive emotions; however, the evidence relates to posed emotions only [30]. For emotion recognition, one of the works has been conducted by [31], where a dataset prepared from several photo sites such as Flickr, Tumblr, and Twitter has been classified into five emotions, viz. Love, Happiness, Violence, Fear, and Sadness. The authors have tested various pre-trained Convolutional Neural Network (CNN) models, such as VGG-ImageNet, VGG-Places205, and ResNet-50, out of which ResNet-50 performed best, giving an accuracy of 73% after fine-tuning. Another contribution is by [32], where human emotions learned from 2D images have been transferred to a 3D animated cartoon character for further classification into seven emotions (joy, sad, anger, fear, disgust, surprise, and neutral). The authors have proposed a fused CNN architecture (f-CNN), giving a total recognition rate of 75.5%: a CNN is initially trained on a human expression dataset, followed by transfer learning-based classification to analyze the relationship of emotion transfer from 2D human images to 3D cartoon images. The recognition rate here is a parameter that shows how well an animated character can simulate a human face emotion. Based on the emotion transfer concept mentioned in the previous paragraph, the authors in [33] have proposed a human animated-face emotion classification approach where human expressions are simulated using an animated face. The experiments have been performed on a dataset of 50,000 annotated (cartoonish) face images of several human-stylized characters. Using a modified CNN architecture, the experiments obtained different expression recognition rates for all the mentioned emotions (compared in the Conclusion section). However, recent articles [34, 35] have argued that recognizing the facial emotions of specific cartoon characters adds another challenge to emotion detection, since cartoons usually depict extreme levels of emotion that are not otherwise seen in or captured from human faces. The authors have also shown, through interview-based experiments, that emotion recognition on specific cartoon faces requires a higher processing intensity and speed than on real faces during the early processing stage. The author in [29] gives another pioneering contribution specific to cartoon character emotion recognition, using Haar Cascades [28] and a modified CNN architecture for character detection and emotion classification, respectively, from cartoon movie videos. Classifying three emotions (Happy, Angry, and Surprise), the experiments achieved a classification accuracy of 80%. However, the author claims that an improvement in accuracy can be achieved by transfer learning and hence proposes it as an open problem, thereby contributing a public dataset of 1,600 emotion-labeled cartoon character images.
Recently, several datasets [36, 37] intended for experimentation on cartoon face detection have been contributed to the state of the art. These datasets, however, only contain the character labels and do not provide any information about emotion labels for those characters. Hence, contributing a dataset of emotion-labeled cartoon faces becomes another significant contribution of this work. Such a dataset can be used for prospective research if it is released to the public. Applications of emotion recognition, as [38] points out, include avoidance, alerting, production, tutoring, and entertainment. The focus of this article is to address the applications of training and entertainment. This can allow a computer to recognize emotion in an animated cartoon automatically; it could, for example, generate subtitles (text or audio) explaining and teaching the emotions of the characters throughout the video to children. The latter relies on the fact that cartoons are a form of entertainment for adults and especially for children. For example, a recommendation system can be designed where an animated cartoon has an emotion rating outlining which characters possess various emotions in an episode. Although there has been much work on human facial emotion recognition [39, 40], the existing literature on emotion recognition from cartoons is limited and has scope for extensive work. The mentioned contributions do not provide any emotion-labeled cartoon dataset (obtained from cartoon videos) and instead propose animated data simulating specific human facial expressions. However, the contouring of features in non-human faces (such as cartoon characters) is different from that in human faces, which requires specific detection methods. Also, the existing character detection algorithms that enable efficient emotion classification mainly rely on default libraries and modified CNN architectures that help in feature extraction. Such methods miss detecting a real-world entity (here, a specific cartoon face), thereby giving low emotion recognition accuracy (ref. Sect. 6). The major contributions of this paper are presented in Sect. 3 as follows. Integrating a DNN and validating it on a fairly large (with respect to state-of-the-art works) amount of data to understand and analyze emotions is the primary aim of the study. This translates into multiple objectives: (a) The proposed integrated DNN includes Mask R-CNN for cartoon character detection and well-known deep learning architectures/models, namely VGG16, InceptionV3, ResNet-50, and MobileNetV2, for emotion classification. Compared to the state-of-the-art works, the use of Mask R-CNN makes a difference in terms of performance (ref. Sect. 6). Further, employing multiple deep learning architectures/models provides a fair comparison among them. (b) As no state-of-the-art works provide a large amount of data for validation, we created a dataset of 8,113 images and annotated them for emotion classification. This provides a means to appropriately quantify/test the DNNs, and the dataset is available for research purposes (upon request). The proposed approach, as depicted in Fig. 1, works by collecting and preparing a dataset from videos downloaded from a popular YouTube channel (ref. Sect. 4.1), followed by character detection and consequent emotion classification through an integrated DNN. In this paper, a custom dataset consisting of images extracted from Tom and Jerry episodes was created. The images needed pre-labeled emotion classes for supervised classification.
The labeling process is defined in Sect. 4.1. Currently, no such dataset is publicly available. To correctly recognize cartoon emotion from a given input video or image frame, the first and foremost step is character detection, followed by accurate face segmentation. The episodes for extracting the images were identified and then downloaded in MP4 format from the Jonni Valentayn channel on YouTube using a downloading tool named Videoder. The frames were extracted from the downloaded videos at a ratio of 1:15 in JPG format using OpenCV. From the extracted frames, a training dataset was generated for the Mask R-CNN model, which is later used to classify and segment Tom & Jerry's faces from the input frames. For the dataset prepared to train the Mask R-CNN model, the extracted frames were classified into two classes, Tom and Jerry. Each frame has an associated class, which specifies that the cartoon character's face is present in the frame. A frame can have Tom's face, Jerry's face, or both. For preprocessing, each data frame is augmented with a JSON file that stores the frame name, the cartoon character name, and the X-Y coordinates of the corresponding cartoon face. The X-Y coordinates of the face were marked using a labeling tool, the VGG Image Annotator (VIA). The marked regions were Tom's face and Jerry's face. The Mask R-CNN model learns these Regions of Interest through the X-Y coordinates of the cartoon characters' faces marked through the VIA tool. Frames with unknown faces were left unmarked. Figure 2 shows a screenshot of the output given by the VIA tool, where Tom's face is marked. The dataset consists of 10K images drawn from 28 Tom and Jerry animations. The dataset generated from the Mask R-CNN model is annotated with four emotions: Happy, Angry, Sad, and Surprise. For both cartoon characters, around 1,000 images depicting each of the four emotions were manually segregated. In total, 8,113 images (or masks) of size 256 × 256 were used for training after discarding poor-quality images. Three independent annotators were employed to annotate the masked faces manually. These annotators are skilled and well versed in the field of animation and multimedia. The annotators were asked to interpret each masked face with one of the identified labels in the dataset. The quality of annotation was measured by two standard agreement parameters, Inter-Indexer Consistency (IIC) [41] and Cohen's Kappa [42], with values of 92.06% and 84.9%, respectively. The masked faces of both cartoon characters, Tom and Jerry, were annotated and then labeled in the order Sad, Happy, Angry, and Surprise, with the emotions of Jerry recorded first and the emotions of Tom next in a similar manner. The masked face images with their respective labels obtained after annotation (as described above) were used to train the emotion recognition model. Figure 3 shows the distribution of emotion labels used for training. A fair distribution of emotions was used to have balanced supervised classification training. The further process to obtain the masked faces from the character images (as prepared) is explained in Sect. 4.3.
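As a concrete illustration of the frame-sampling step described above (every 15th frame saved as a JPG using OpenCV), the following is a minimal sketch; the file names, output directory, and function name are illustrative assumptions rather than the authors' code.

```python
import os
import cv2  # OpenCV

def extract_frames(video_path, out_dir, every_nth=15):
    """Save every 15th frame of a downloaded episode as a JPG image."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()        # frames are returned in BGR order
        if not ok:
            break
        if idx % every_nth == 0:      # 1:15 sampling ratio
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example usage (paths are placeholders):
# extract_frames("episode.mp4", "frames/")
```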
This section provides an explanation of the working of CNNs along with the other fundamental concepts used. (i) Convolution Operation The convolution of an input of size I × J with a kernel of size M × N produces an output feature map indexed by positions (a, b), where 0 ≤ a ≤ I + M + 1 and 0 ≤ b ≤ J + N + 1. (ii) Pooling Operation Along with convolution layers, CNNs use pooling layers such as max-pooling and average pooling. Thus, for an image of size H × W with a filter of size k and stride s, the size of the output is (H − k)/s + 1. (v) Activation Function ReLU is the most regularly used nonlinear function since it provides better performance than its alternatives. Some of the commonly used activation functions are mathematically expressed as follows: ReLU(x) = max(0, x), while Leaky ReLU and Parametric ReLU replace the zero slope for negative inputs with a small fixed slope and a learnable slope, respectively, i.e., f(x) = max(αx, x).

Mask R-CNN is an extension of Faster R-CNN. Faster R-CNN gives two outputs for every object: a class label and the bounding box coordinates. In Mask R-CNN, a third branch for the output of the object mask is added, which enables it to perform instance segmentation. The added branch predicts the object mask in parallel with the existing branches performing classification and localization. For appropriate instance segmentation, pixel-level segmentation is performed, which requires more precise alignment than just the bounding boxes. Hence, Mask R-CNN uses an RoI (Region of Interest) pooling layer known as the RoIAlign layer, so that much more precise regions can be mapped for segmentation. The backbone of Mask R-CNN is a standard convolutional neural network (such as ResNet-50 or ResNet-101), which helps in the extraction of features. Its early layers detect features like edges and corners. After passing through the backbone network, the given image is transformed into a 32 × 32 × 2048 feature map. Figure 4 is the visual model of the Mask R-CNN architecture. The Region Proposal Network (RPN) is a lightweight neural network that scans the image in a sliding-window fashion to find the regions that contain objects. The RPN scans over these regions (also known as anchors) using the backbone feature map instead of scanning directly over the image, enabling it to run faster and more efficiently; consequently, it avoids duplicate calculations by reusing extracted features. The use of the RoIAlign layer fixes the location misalignment caused by quantization in the RoIPool layer used in Faster R-CNN. Figure 5 shows the use of the spatial transformer and the bilinear sampling kernel. Bilinear interpolation is used to compute the exact floating-point values of the input features at four regularly sampled locations in each RoI bin, and the results are then aggregated. Figure 6 shows the use of the spatial transformer and the bilinear sampling kernel in the RoIAlign operation. The target feature value at location i in channel c is obtained from the sampler as V_i^c = Σ_n Σ_m U_nm^c k(x_i^s − m) k(y_i^s − n), where U is the input feature map, V is the output, and x_i^s and y_i^s are the sampling coordinates at location i. After applying the bilinear sampling kernel, the equation transforms to V_i^c = Σ_n Σ_m U_nm^c max(0, 1 − |x_i^s − m|) max(0, 1 − |y_i^s − n|), i.e., the kernel interpolates the value at (x_i^s, y_i^s) from its nearest pixels and places it at the output location (x_i^o, y_i^o).
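To make the bilinear sampling kernel concrete, the sketch below interpolates one feature-map channel at a floating-point location and averages four regularly spaced samples inside an RoI bin, which is the core operation RoIAlign performs. It is a plain NumPy illustration of the equations above; the exact sample placement and function names are assumptions, not a library implementation.

```python
import numpy as np

def bilinear_sample(U, x, y):
    """Interpolate a (H, W) feature channel U at a floating-point location (x, y).

    The value is a weighted sum of the four nearest integer pixels, with
    weights max(0, 1 - |x - m|) * max(0, 1 - |y - n|) as in the bilinear kernel.
    """
    h, w = U.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    value = 0.0
    for n in (y0, y0 + 1):
        for m in (x0, x0 + 1):
            if 0 <= n < h and 0 <= m < w:
                value += U[n, m] * max(0.0, 1 - abs(x - m)) * max(0.0, 1 - abs(y - n))
    return value

def roi_align_bin(U, x1, y1, x2, y2):
    """Aggregate (average) four regularly sampled points inside one RoI bin."""
    xs = np.linspace(x1, x2, 4)[1:3]   # two interior sample columns
    ys = np.linspace(y1, y2, 4)[1:3]   # two interior sample rows
    return float(np.mean([bilinear_sample(U, x, y) for y in ys for x in xs]))
```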
The objective multi-task loss function, which includes the classification loss L_cls, the bounding box location loss L_bbox, and the segmentation loss for the mask L_mask, is defined as L = L_cls + L_bbox + L_mask. It can also be written as L(c, g, b^g, z) = L_cls(c, g) + L_bbox(b^g, z) + L_mask, where c is the predicted class, g is the GT (ground truth) class, b^g is the predicted bounding box for class g, and z is the GT bounding box. The classification loss is defined as L_cls(c, g) = −log(c_g). L_mask is the mean binary cross-entropy computed over the k masks of size a × a; the per-pixel sigmoid output helps in pixel-wise binary classification and allows one mask for each class, hence eliminating competition. This definition of L_mask allows Mask R-CNN to generate masks for every class without competition between the classes; only the classification branch is used to predict the class label of the output mask. This process decouples mask and class prediction. Since the considered case includes a per-pixel sigmoid and a binary loss, the masks across classes do not compete and hence provide good instance segmentation results.

(a) VGG16: VGG16 is a deep convolutional neural network which, as its name suggests, is 16 layers deep. It is trained on the ImageNet database and takes an input of size 224 × 224. An image is passed through a group of convolutional layers with a small receptive field, i.e., 3 × 3 kernels with stride 1. Three fully connected layers after the convolutional layers perform the classification. Figure 7 shows the complete architecture of VGG16 with all its layers. (b) ResNet-50: ResNet stands for Residual Networks. Instead of relying on the depth of the network to learn more features, residual networks try to learn features from the residual of the previous layer, which helps in improving accuracy and also helps to solve the problem of vanishing gradients. ResNet has many variants, out of which ResNet-50, which is 50 layers deep, has been used. (c) MobileNetV2: MobileNetV2 uses bottleneck layers with short connections between these bottlenecks, which allows the network to be implemented even on mobile devices. It is 53 layers deep and has also been trained on the ImageNet database. Table 1 describes the number of layers, total parameters, and trainable parameters for the different CNN architectures used in this paper, i.e., VGG16, InceptionV3, ResNet-50, and MobileNetV2.

Facial expressions are a significant contributor to interpersonal communication [43]. In this paper, Mask R-CNN is used for character face detection, which separates the foreground pixels from the background pixels using a bounding box that segments the face [44]. The model takes images or videos as input and extracts masks of Tom's face or Jerry's face from them. Algorithm 1 specifies the step-wise methodology adopted for character face detection with Mask R-CNN, also explained in the subsequent sections: (i) Frame extraction and resizing The colored frames (from videos) are extracted using OpenCV and are then converted from BGR (Blue, Green, Red) to RGB (Red, Green, Blue) color order. The frame extraction process used here is explained in Sect. 4. Mask R-CNN takes input images of the same size; hence, all the images are resized to a fixed dimension of 1280 × 1280, and the aspect ratio is maintained using padding. (ii) Mask generation and image storage The resized image (H' × W') is given to Mask R-CNN as input to detect the faces of Tom and Jerry. It then returns a dictionary for each image with four key-value pairs (parameters) concerning the detected face: (a) bounding box coordinates (y1, x1, y2, x2), which are generated around the cartoon's face, (b) a class id for the cartoon character (1 for Tom and 2 for Jerry), (c) binary masks, and (d) a mask confidence score.
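As an illustration of how the per-frame detection output listed in step (ii) might be consumed, the sketch below filters a result dictionary by its mask confidence score. The key names follow the common Matterport-style Mask R-CNN convention and are an assumption; the paper specifies only the four fields, not their exact interface.

```python
# Hypothetical per-frame detection output, shaped like the dictionary in step (ii):
#   result = {"rois":      (N, 4) boxes as (y1, x1, y2, x2),
#             "class_ids": (N,)   with 1 = Tom, 2 = Jerry,
#             "masks":     (H, W, N) binary masks,
#             "scores":    (N,)   mask confidence scores}

CHARACTERS = {1: "tom", 2: "jerry"}

def select_faces(result, min_score=0.9):
    """Keep only confident Tom/Jerry detections from one frame."""
    faces = []
    for i, score in enumerate(result["scores"]):
        if score < min_score:
            continue                              # discard low-confidence detections
        faces.append({
            "character": CHARACTERS.get(int(result["class_ids"][i]), "unknown"),
            "box": tuple(result["rois"][i]),      # (y1, x1, y2, x2)
            "mask": result["masks"][:, :, i],     # full-frame binary mask
            "score": float(score),
        })
    return faces
```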
After obtaining the bounding boxes and refining them, the instance segmentation model generates the masks for each detected object. The masks are soft masks (with float pixel values) of size 28 × 28 during training. Finally, the predicted masks are rescaled to the bounding box dimensions using padding and scaling factors to generate binary masks (0 and 1). Here, 1 denotes the region of the detected face and 0 denotes the rest of the image. The detected face is marked with a mask confidence score that signifies the confidence in recognizing the mentioned character. As shown in Fig. 8, the score of detecting Tom's face is 0.999. These masks can be overlaid on the original image to visualize the final output. The processed masks generated are shown in Fig. 9. As can be observed from the figure, the employed Mask R-CNN model detects straight faces, tilted faces, and even faces surrounded by objects such as a helmet (ref. Fig. 9). Using the bounding box coordinates (y1, x1, y2, x2) of the detected face, images of size (y2 − y1) × (x2 − x1) containing only the faces of Tom and Jerry were cropped from the original image. Since the original image and the binary mask have the same size, each RGB pixel value of the image is multiplied by the corresponding binary value in the mask. The binary value 0, when multiplied with any pixel value, results in 0 (representing black); this removes extra features from the image. The binary value 1, when multiplied with any pixel value, leaves it unchanged. Hence, only the pixel values of the face are retained in this mask (hereafter called segmented masks). After this, the cropped images and segmented masks are resized to 256 × 256. The cropped images contain extra features (background regions), whereas the segmented masks contain only the features required for emotion classification.

Transfer learning uses the weights and knowledge gained from solving a specific problem and applies that knowledge to solve other, similar tasks. It helps in leveraging the weights and biases of different state-of-the-art models, without the need for vast amounts of data or extensive computational capabilities. The final step includes fine-tuning the model by unfreezing specific parts of it and re-training it on the new data with a small learning rate. The generic pipeline followed for emotion classification is as follows: (a) Preprocessing of segmented masks The segmented masks (of size 256 × 256) are received as output from the character detection stage using Mask R-CNN (ref. Sect. 5.1). These masks are resized to 224 × 224 for the classification of emotions using the four baseline deep neural networks. The images are then converted into tensors, whose values, originally in the range 0 to 255, are normalized to the range 0 to 1. Afterward, data batches are created, each having 32 images, for input into the emotion classification model. Figure 10 shows an example of a data batch. Four deep neural networks are then trained using transfer learning from the data batches created. Based on the results (ref. Sect. 6), the best-trained model for every deep neural network is used for the classification of emotions. A snapshot of the results obtained by this proposed end-to-end approach is shown in Fig. 11; for instance, 0.96 is an emotion confidence score depicting an angry 'Tom'. To apply the deep neural networks to this fairly large data collection, the data first go through a preprocessing phase; this phase, which uses the Mask R-CNN model, produces the segmented masks (ref. Sect. 5.1).
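A minimal sketch of preprocessing step (a), assuming the 256 × 256 segmented masks are stored on disk in one sub-folder per emotion label (an illustrative layout, not stated by the authors): the images are resized to 224 × 224, their pixel values are scaled from the 0 to 255 range to 0 to 1, and they are grouped into batches of 32 using TensorFlow/Keras utilities.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # input size expected by the four backbone networks
BATCH_SIZE = 32

def make_batches(mask_dir):
    """Load segmented masks, resize to 224 x 224, normalize to [0, 1], batch by 32.

    mask_dir is assumed to contain one sub-directory per class, e.g.
    mask_dir/happy_jerry/..., mask_dir/angry_tom/... (hypothetical layout).
    """
    ds = tf.keras.utils.image_dataset_from_directory(
        mask_dir, image_size=IMG_SIZE, batch_size=BATCH_SIZE)
    rescale = tf.keras.layers.Rescaling(1.0 / 255)   # 0..255 -> 0..1
    return ds.map(lambda images, labels: (rescale(images), labels))

# train_batches = make_batches("segmented_masks/train")   # path is a placeholder
```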
Further, these masks are given as input to the four deep neural network models described in Sect. 4.3. The emotion of a particular character is then recognized with an emotion confidence score (the probability of the recognized emotion for the respective character) scaled in the range of 0 to 1. Classifying emotions on a large dataset is a multi-class classification problem. In this paper, each sample in the prepared dataset has been categorized into one of eight (4 × 2) different classes. Standard metrics, namely precision, recall, F1-score, and accuracy, have been computed for each class for the purpose of evaluation. The precision score for the 'happy_jerry' label is the number of correctly recognized 'Jerry' images with happy emotion out of the total number of images recognized as 'happy_jerry'. From Table 5 in Appendix I, the number of accurately detected 'happy_jerry' images is 198 out of 221 total recognized 'happy_jerry' labels, resulting in a precision score of 0.90 (198/221) for the proposed approach, whereas the precision scores of the other three models, i.e., InceptionV3, MobileNetV2, and ResNet-50, are 0.71, 0.63, and 0.74, respectively, for the 'happy_jerry' label. Next, the recall for the 'happy_jerry' label is the number of correctly recognized 'Jerry' images with happy emotion out of the number of actual 'Jerry' images with happy emotion. As inferred from Table 5 in Appendix I, the recall of the proposed approach for the 'happy_jerry' label is reported in Table 2, whereas the recall scores of the other three models are comparatively lower for this label. For a multi-class classification problem, the F1-score is a preferable metric because there may be a large number of actual negatives; it balances precision and recall. For instance, the F1-score for 'happy_jerry' is 0.88 for the proposed approach, whereas the other approaches, namely InceptionV3, MobileNetV2, and ResNet-50, result in F1-scores of 0.75, 0.66, and 0.64, respectively, for the same label. Among these models, VGG16 has outperformed the rest in terms of precision, recall, and F1-score, as shown in Table 2. The table also depicts a combined classification report of the four models on which experimentation has been conducted. The accuracy score for each emotion class (here 8) for the two characters taken in this work is also shown in Table 2. The combined accuracy for a particular emotion (say 'sad') is calculated by averaging that emotion over both characters. For example, the 'sad' emotion accuracy comes out to be 95%, which is an average of the two emotion classes (sad for Tom and sad for Jerry). Therefore, the combined accuracy for each emotion comes out to be Happy (97%), Sad (95%), Angry (96%), and Surprise (96%). Overall scores for each metric are calculated by averaging the scores obtained from each class, depicted as the weighted average and micro-average; the latter is calculated to handle any imbalance in the class distribution. VGG16 shows a weighted average of 0.85 across all metrics, compared to 0.75, 0.66, and 0.64 for the other three models, whereas its micro-average comes out to be 0.85, outperforming the rest of the models. The results of these models without the use of Mask R-CNN are shown in Table 3. Without Mask R-CNN, these models perform poorly, as they are unable to learn the features of the characters' faces that are required for emotion recognition. This happens because the models learn unnecessary features from the background of the image, which are not required, since the preprocessing stage, i.e., Mask R-CNN, is not used to segment the faces.
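For reference, the per-class scores discussed above follow the standard definitions; the sketch below shows how such a classification report could be produced with scikit-learn. The label strings other than 'happy_jerry' are hypothetical stand-ins for the eight character-emotion classes.

```python
from sklearn.metrics import classification_report

LABELS = [
    "sad_jerry", "happy_jerry", "angry_jerry", "surprise_jerry",
    "sad_tom", "happy_tom", "angry_tom", "surprise_tom",
]

def report(y_true, y_pred):
    """Per-class precision, recall, and F1 for the eight character-emotion classes.

    y_true / y_pred are lists of label strings. For example, precision for
    'happy_jerry' = correctly predicted 'happy_jerry' / all predicted 'happy_jerry'
    (198 / 221 = 0.90 in the confusion table reported in Appendix I).
    """
    return classification_report(y_true, y_pred, labels=LABELS, digits=2)
```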
VGG16 outperformed the other three models (ref. Sect. 6.1) when trained on the created dataset of 8K labeled images of Tom and Jerry. This section draws out certain additional results for the best-performing model (VGG16). Figure 12a-d shows the precision-recall curves, which depict the trade-off between precision and recall for all four DNNs. In an ideal scenario with high recall and precision values, a larger area under the curve is obtained. Here, the area under the curve is large, indicating high precision (accurate results) and high recall (a majority of positive results returned); this signifies that both the false-positive rate and the false-negative rate are low. Using VGG16, the micro-average precision-recall score evaluates to 0.91 over all classes, the highest among all evaluated DNNs. The relationship between the false positives and the true positives can be assessed using the AUC (Area Under Curve) of the ROC curve, also shown in Fig. 12a-d for all DNNs. As shown, the micro-average ROC score for VGG16 evaluates to 0.91, which exceeds the scores obtained by the other three DNNs. The figure also shows a macro-average ROC-AUC score for VGG16, which returns the average without considering the proportion of each label in the dataset. A model is said to be a good fit if it is able to generalize and learn the features from the training data without overfitting or under-fitting; such a model can generalize the features and perform well even on unseen data. In Fig. 13b and c, the models ResNet-50 and MobileNetV2 are unable to generalize the features; hence, even though they perform well on the training data with accuracy nearing 1, they are unable to give similar results on unseen data, as can be seen from the fluctuation in the validation accuracy curves. The proposed model, depicted in Fig. 13a, outperforms the above-stated models and is able to generalize, giving an average accuracy score of 0.96. As shown in Fig. 13d, the InceptionV3 model is unable to learn enough features from the training data compared to our proposed model and thus gives lower scores for all the evaluation metrics. The figure also shows the training and validation accuracies recorded and plotted using TensorBoard from the logs saved while training.
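The transfer-learning setup behind the best-performing model can be sketched as follows: a VGG16 backbone pre-trained on ImageNet, a small classification head for the eight character-emotion classes, and fine-tuning of a few unfrozen layers with a small learning rate. This is a minimal Keras sketch under those assumptions; the head size, optimizer, and number of unfrozen layers are illustrative, not the authors' exact configuration.

```python
import tensorflow as tf

def build_emotion_classifier(num_classes=8, unfreeze_last=4):
    """VGG16 backbone plus a small head; only the last few backbone layers are fine-tuned."""
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = True
    for layer in base.layers[:-unfreeze_last]:   # freeze everything except the last block
        layer.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # small learning rate
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# model = build_emotion_classifier()
# model.fit(train_batches, validation_data=val_batches, epochs=10)
```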
Inspired by the fact that DNN models require a fairly large amount of data, in our study we have created a dataset (source: https://github.com/TheSSJ2612/Cartoon_Emotion_Dataset/releases). A fully fair comparison is not possible since we have created a new dataset for our proposed tool, which we call the integrated DNN. However, even though the datasets and the number of emotions vary in the literature, we find it interesting to report previous works focused on emotion classification. In what follows, we revisit previous works (ref. Table 4) and check how fair a comparison can be made. In the presented comparison, one of the existing works has proposed an emotion recognition model on human animated faces [33]. The adopted methodology includes training two different CNNs to recognize human expressions and stylized character expressions independently. A shared embedding feature space is then created by mapping human faces to character faces using transfer learning, resulting in an accuracy score of 0.89. Similarly, other contributions [45, 46] have evaluated the proposed approaches on animated characters (not specific to a cartoon character) from various sources like books, video games, etc. A different kind of approach generates 3D animated faces using human facial expressions by transferring the emotion features to a 3D character face using the semi-supervised learning model 'ExprGen' [47].

(Table 4: Comparison between the proposed approach and the existing methods: Hill [29], Li et al. [45], Ma et al. [46], Aneja et al. [33], and Aneja et al. [47].)

The above-referred state-of-the-art methods, Li et al. [45], Ma et al. [46], and Aneja et al. [47], do not provide an overall end-to-end model accuracy score and instead present an accuracy score for each classified emotion label, as shown in Table 4. Hill [29] is the only similar contribution in the state of the art where the author has proposed an end-to-end emotion recognition model on cartoon videos; this approach gives an overall accuracy score of 0.80. Figure 14 depicts the improvement of the integrated DNN model (contributed in this paper) using Mask R-CNN over the existing methodology [29]. As mentioned earlier, even though the datasets vary from one work to another, we find that our study (with the integrated DNN tool) performs better on the dataset of size 8,113.

Recognizing emotions from the facial expressions of faces other than human beings is an interesting and challenging problem. Although the existing literature has endeavored to detect and recognize objects, recognizing emotions has not been extensively covered. Therefore, in this paper, we have presented an integrated Deep Neural Network (DNN) approach that successfully recognizes emotions from cartoon images. We have collected a dataset of 8K images of two cartoon characters, 'Tom' & 'Jerry', with four different emotions, namely happy, sad, angry, and surprise. The proposed integrated DNN approach has been trained on this large dataset and has correctly identified the character, segmented their face masks, and recognized the consequent emotions with an accuracy score of 0.96. The approach has utilized Mask R-CNN for character detection and state-of-the-art deep learning models, namely ResNet-50, MobileNetV2, InceptionV3, and VGG16, for emotion classification. The experimental analysis has shown that VGG16 outperforms the others with an accuracy of 96% and an F1 score of 0.85. The proposed integrated DNN has also outperformed the state-of-the-art approaches. The work would be beneficial to animators, illustrators, and cartoonists. It can also be used to build a recommender system that allows users to associatively select an emotion and cartoon pair. Studying the emotions encased in cartoons also extracts other allied information, which, if combined with artificial intelligence, can open a plethora of opportunities, for instance, recognizing emotions from body gestures.

References
1. Measuring facial movement
2. Emotion detection from text
3. Aspect-based sentiment analysis of mobile reviews
4. Movie Prism: A novel system for aspect level sentiment profiling of movies
5. Social emotion classification of short text via topic-level maximum entropy model
6. Emotion Recognition from Speech (arXiv preprint)
7. Emotion recognition of audio/speech data using deep learning approaches
8. Speech emotion classification using machine learning algorithms
9. Speech emotion classification with the combination of statistic features and temporal features
10. Emotion detection algorithm using frontal face image
11. Emotion recognition in the wild from videos using images
12. A deep learning based analysis of the big five personality traits from handwriting samples using image processing
13. Emotion recognition system in images based on fuzzy neural network and HMM. In: 5th IEEE International Conference on Cognitive Informatics
14. Developing multimodal intelligent affective interfaces for telehome health care
15. An emotion care model using multimodal textual analysis on COVID-19
16. Facial emotion recognition in patients with bipolar I and bipolar II disorder
17. Convolutional neural network based Alzheimer's disease classification from magnetic resonance brain images
18. Cell image analysis for malaria detection using deep convolutional network. Intelligent Decision Technologies
19. Data fusion for real-time multimodal emotion recognition through webcams and microphones in e-learning
20. Design and implementation of affective e-learning strategy based on facial emotion recognition
21. Facial emotion recognition with transition detection for students with high-functioning autism in adaptive e-learning
22. Affective e-learning: Using "emotional" data to improve learning in pervasive learning environment
23. A linguistic rule-based approach for aspect-level sentiment analysis of movie reviews
24. Linguistic-based emotion analysis and recognition for measuring consumer satisfaction: an application of affective computing
25. Generating aspect-based extractive opinion summary: drawing inferences from social media texts
26. Towards robust real-time valence recognition from facial expressions for market research applications
27. Facial emotion recognition in Scottish prisoners
28. Research of usage of Haar-like features and AdaBoost algorithm in Viola-Jones method of object detection. In: International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics
29. Deep Learning for Emotion Recognition in Cartoons (Unpublished master's dissertation). The University of Lincoln
30. Facial expressions of emotion
31. Emotion detection and sentiment analysis of images
32. Deep-emotion: Facial expression recognition using attentional convolutional network
33. Modeling stylized character expressions via deep learning
34. An event-related potential comparison of facial expression processing between cartoon and real faces
35. Iconic faces are not real faces: enhanced emotion detection and altered neural processing as faces become more iconic
36. iCartoonFace: A Benchmark of Cartoon Person Recognition
37. ToonNet: a cartoon image dataset and a DNN-based semantic classification system
38. Emotion recognition in human-computer interaction
39. Person-independent facial expression recognition method based on improved Wasserstein generative adversarial networks in combination with identity aware
40. Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and nonlinear feature selection
41. Rolling L (1981) Indexing consistency, quality and efficiency
42. How good is that agreement
43. Automatic analysis of facial expressions: the state of the art
44. Face Detection and Segmentation Based on Improved Mask R-CNN
45. Speech-driven cartoon animation with emotions
46. Guidelines for depicting emotions in storyboard scenarios
47. Learning to generate 3D stylized character expressions from humans

Appendix I: See Table 5.
Authors' contributions: All authors have equally contributed toward the formation of this paper.
Funding: Not applicable.
Conflicts of interest: The authors declare that they have no competing interests.
Availability of data and material: https://github.com/TheSSJ2612/Cartoon_Emotion_Dataset/releases