key: cord-0908952-jqpdhiko
authors: Paul, Sushil Kumar; Bouakaz, Saida; Rahman, Chowdhury Mofizur; Uddin, Mohammad Shorif
title: Component-based face recognition using statistical pattern matching analysis
date: 2020-07-18
journal: Pattern Anal Appl
DOI: 10.1007/s10044-020-00895-4
sha: 92ac15e765686a4f925d83cd05cac194dd4eb123
doc_id: 908952
cord_uid: jqpdhiko

The aim of this research is to develop a fusion concept for component-based face recognition algorithms through feature analysis of binary facial components (BFCs), which are invariant to illumination, expression and pose variations and to partial occlusion. The features are analyzed using statistical pattern matching concepts, namely a combination of Chi-square (CSQ) values, Hu moment invariants (HuMIs), absolute differences of probabilities of white pixels (AbsDifPWPs) and geometric distance values (GDVs). Each grayscale face image is cropped by applying the Viola-Jones face detection algorithm to a face database with variations in illumination, appearance, pose and partial occlusion against complex backgrounds. After illumination correction through a histogram linearization technique, the grayscale face components (eyes, nose and mouth regions) are extracted using 2D geometric positions. The binary face image is created by applying the cumulative probability distribution function with Otsu's adaptive thresholding method, from which the BFCs (eyes, nose and mouth regions) are extracted. Five statistical pattern matching tools are developed for recognition: the standard deviation of CSQ values computed from probabilities of white pixels (PWPs), the standard deviation of HuMIs computed from Hu's seven moment invariants, AbsDifPWPs, GDVs and pixel intensity values (PIVs). GDVs are determined between pairs of corresponding facial corner points (FCPs), nine of which are extracted from the binary whole face and the BFCs. PIVs are determined as L2 norms between the grayscale values of whole faces and those of the face components. Experiments are performed on the BioID Face Database using these pattern matching tools with appropriate threshold values combined through logical and conditional operators, and the proposed method gives the best results from a true positive rate (TPR) perspective.
Human face recognition plays a significant role in identifying persons in many real-world computer vision applications, such as identification, authentication, security, surveillance, human-computer interaction, antiterrorism and psychology. It is not an intrusive technique (it conveys no health risks, such as coronavirus transmission), and nothing needs to be touched during acquisition. Moreover, the face is a rich source of nonverbal information about human behavior [1]. Because of the complex and multidimensional structure of the face, recognition demands considerable processing and computation. On the other hand, face perception is the most developed visual perceptual ability in human beings: infants prefer to look at faces and try to recognize them from shortly after birth [2], and most people spend more time looking at faces than at any other type of object. Humans cannot recognize an unlimited number of different faces, but a computer can, given its large memory and processing capability. Although automatic face recognition is not a new topic, the challenge of developing an appropriate face recognition algorithm remains unsolved.
The main problems in face recognition are variations in illumination and lighting, different expressions and poses, partial occlusion, the complex and multidimensional structure of the face, and cluttered backgrounds. Face recognition is the task of identifying one or more persons in images. Algorithms for face recognition typically extract facial features, such as facial corner points and facial component values, and compare them with a database to find the best match. Facial feature extraction is the initial stage of face recognition in computer vision. The most significant facial features are the facial corner points and the eye, nostril and mouth areas. The facial corner points are the eye corners, nostrils, nose tip and mouth corners; the eye, nostril and mouth areas are likewise key feature regions for face recognition. Using all of this information together with appropriate pattern matching tools, the recognition decision is accomplished [3-8]. A face recognition algorithm performs the matching needed to identify similar faces and reject dissimilar face images from the face database. Face recognition techniques fall into two principal categories: appearance based and model based. In appearance-based techniques, holistic (global), facial-component (local) and fusion (hybrid) feature extraction approaches play the vital roles in the classification and recognition of faces. In the global (holistic) approach, the entire information is extracted as a single vector from the whole face image. In local feature-based methods, facial component features such as the eyes, nose and mouth are extracted from face images. The eyes are robust to facial expressions and occlusions because the inter-ocular distance is nearly constant across people and the eyes are unaffected by a mustache or beard [7, 9]. The nose indicates head pose, and its nostrils and nose tip are the symmetry points between the right and left halves of the face; they are likewise robust to facial expressions. The mouth is also a key component for face recognition, as mouth length and the lips convey facial expression information [7]. Feature-based approaches can perform well under diverse imaging conditions. The fusion (hybrid) method combines the global and local concepts to achieve the desired outcome. Holistic features therefore need not be used when exact and accurate facial component feature extraction is possible [1]. Various holistic approaches have been used for face recognition, such as Eigenfaces [10], Fisherfaces [11], independent component analysis (ICA) [12], moment invariants [13] and the discrete cosine transform (DCT) [14]. Component feature-based techniques describe faces by their components under various schemes, for example components with support vector machines (SVM) [15], LDA [16] and 3D models [17]. Zhang et al. [18] proposed a holistic illumination-invariant face recognition approach that uses only high-frequency components; useful identity information is lost because the low-frequency components are discarded. Wada et al. [19] proposed a holistic expression-invariant face recognition system based on a constrained optical flow algorithm; it has a high computational cost, and the reported results are not satisfactory. As a model-based approach, Weyrauch et al. [17] constructed a component feature-based algorithm using 3D morphable models.
It defined fourteen component features but applied only nine of them because of high computational complexity. Bonnen et al. [20] developed a component-based technique using heterogeneous concepts, which is overly intricate. Hua [21] developed a pose-invariant component-based face recognition technique, but it carries a high computational cost and does not perform well. Turk and Pentland [10] used Eigenfaces, but that approach cannot incorporate additional training data into an existing PCA projection matrix and is not invariant to changes in shape, pose and expression [11, 22]. Kapoor and Mathur [23] developed three kinds of moment invariants, including Hu moment invariants, and achieved poor results. Harguess and Aggarwal [24] compared the average half-face with the full frontal face across six types of face recognition algorithms, with unimpressive results. Nabatchian et al. [13] applied nine moment invariants (MIs) but obtained poor results for all except the pseudo-Zernike moment invariants (PZMI). Brunelli and Poggio [25] applied geometric feature and template matching concepts to frontal-view faces under identical illumination conditions, using facial component features. Sushil et al. [3, 4] developed component-based face recognition algorithms using statistical pattern matching concepts but did not obtain good results from a TPR perspective. The holistic and model-based techniques mentioned above have high computational costs and are not suitable under variations in illumination, expression, pose and partial occlusion. It is therefore indispensable to develop low-computational-cost component-based face recognition algorithms that can include new features and discard or modify existing ones, summarizing the similarity decision as a fusion concept: a test face is declared similar if any one of its facial component features matches the corresponding facial component feature in the reference database. This establishes invariance to different illuminations, expressions, pose variations and partial occlusions for better performance. The main objective of this research is that the different pattern matching algorithms recognize both the same images and different images of the same individual over the entire face database; consequently, the combination of multiple algorithms should provide the same TPR percentage for each individual. The purposes of this work are: (1) to extract crucial facial features such as the eyes, nostrils and mouth regions as the key components and to apply a shape-invariant concept to these components; (2) to develop a technique invariant to multiple imaging conditions, namely illumination, expression, pose variations and partial occlusion; (3) to analyze which facial features need to be included or discarded to achieve better TPR performance; (4) to require only simple mathematical calculations in constructing the desired pattern matching tools; and (5) to form new recognition algorithms by combining two or more pattern matching tools. As a fusion-based similarity technique, five statistical pattern matching tools are constructed for the face recognition algorithms: the standard deviation of CSQ values [3], the standard deviation of Hu moment invariants (HuMIs) [4], geometric distance values (GDVs) [5-8], absolute differences of probabilities of white pixels (AbsDifPWPs) [3, 4] and pixel intensity values (PIVs) [3, 4].
In this research, the facial component feature extraction concept is based on BFCs and mainly employs three independent methods: probabilities of white pixels (PWPs) with the CSQ formula [3], shape similarity with Hu moment invariants (HuMIs) [4], and geometric distance values (GDVs) between nine corresponding FCPs of the test face and the reference database images, using each individual facial component and the whole face [5-8]. The PWPs depend on facial appearance and expression but are independent of illumination variations. The PWP is calculated as the number of white pixels divided by the total number of pixels of each binary face component. To establish the matching technique without complex processing, only a normalization idea is applied (the probabilities of white and black pixels of each binary facial component sum to one), which strengthens invariance to facial appearance and expression. The matching of each component is done with the Chi-square (CSQ) formula [3]. The shape is essentially the same across all images of one individual (person) but differs between people. Shape similarity is therefore established as a statistical pattern matching concept using Hu's seven nonlinear moment functions, chosen for their invariance characteristics. A BFC is invariant if its moment values remain constant under changes in scale, rotation, translation and/or reflection of each image component, ensuring independence from appearance, expression, pose and lighting variations; the method is computationally efficient because it requires no prior information about the face model [4]. The geometric distance values (GDVs) used for recognition are calculated from nine automatically extracted corresponding FCPs, namely the eye corner points, mouth corner points, nostrils and nose tip, between the test face and the reference database images [5-8]. The benefit of FCPs is that they are completely independent of facial appearance, expression, illumination, pose variations and partial occlusion and require the least computational cost [8]. The absolute difference of probabilities of white pixels (AbsDifPWPs) is calculated between the binary test face and the binary reference database images. The pixel intensity values (PIVs) between the grayscale test face and the reference database images are determined using the L2 norm, known as the Euclidean distance formula [3, 4, 8]. In this paper, we present six types of recognition algorithms: (i) CSQ [3], (ii) HuMIs [4], (iii) CSQ + HuMIs [3, 4], (iv) CSQ + FCPs, (v) HuMIs + FCPs and (vi) CSQ + HuMIs + FCPs. Algorithms (i)-(iii) are existing methods, and algorithms (iv)-(vi) are developed here for feature analysis. The major contributions of this paper are: (1) construction of a technique invariant to multiple imaging conditions and a shape-invariant concept using binary facial components; (2) development of new recognition algorithms by combining two or more pattern matching tools; (3) a decision framework for finding an appropriate pattern matching tool from a TPR perspective; and (4) a performance analysis of the use of each pattern matching tool. The remainder of this work is organized into six further sections. Sections 2 and 3 describe the preprocessing and processing work, respectively. The five statistical pattern matching tools are derived in Sect. 4. Similarity evaluation and recognition decisions are covered in Sect. 5. Implementation and results are described in Sect. 6, and conclusions and future work are given in Sect. 7.
Detection of the face image, normalization of the face size, elimination of the forehead portion and, conditionally, a histogram linearization technique (HLT) are carried out in this section. Localizing and cropping the exact face portion from the cluttered background is part of the preprocessing work. Since the detected faces are not all the same size, it is essential to bring all images to the same size (the normalization task) for uniformity. The Viola-Jones face detection algorithm is used for localizing and cropping the exact face area [26]. No information is available in the forehead portion of the binary face image, so the forehead is eliminated to speed up processing [3-6, 25]. The complete proposed pipeline is shown in Fig. 1. Face detection in an input image is done with the detector of Paul Viola and Michael Jones, the Viola-Jones face detector. It is based on four key concepts: rectangular (Haar-like) features, an integral image for fast feature evaluation, classifier training and feature selection using the AdaBoost machine-learning algorithm, and a cascade classifier that combines many features efficiently [26]. A Haar-like feature image is represented by two-, three- and four-rectangle features. A two-rectangle feature value is the difference between the pixel sums of two vertically or horizontally adjacent rectangular regions; a three-rectangle feature value is the sum within two outside rectangles subtracted from the sum in a center rectangle; and a four-rectangle feature value is the difference between diagonal pairs of rectangles. The integral image at pixel location p(x, y) is the sum of all the pixels above it and to its left and is calculated in one pass over the original image; a feature can therefore be evaluated rapidly, because the sum over each rectangle needs only four array references. AdaBoost is applied to construct a strong classifier as a weighted combination of weak classifiers. A series of AdaBoost classifiers forms a filter chain, where each filter is a separate AdaBoost classifier containing a fairly small number of weak classifiers. If any filter rejects an image region, the region is immediately classified as ''Not Face''; regions that pass through all filters in the chain are classified as ''Face''. This filtering chain is known as a cascade [26]. Finally, the detected face (of size W × H pixels) is cropped from the input image. The detected and cropped faces are shown in Fig. 3a, b, respectively. Because the detected faces differ in size, all images are converted to an equal size (normalized face size W1 × H1 = 128 × 128 pixels) for uniformity (see Fig. 3c). No information (i.e., no white pixels) is present in the forehead region after binary image conversion, so the forehead portion is discarded (face size without forehead region W2 × H2 = 0.75W1 × 0.60H1 = 96 × 76 pixels) to speed up processing (see Fig. 3d).
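The preprocessing chain above maps directly onto standard OpenCV calls. The following C++ sketch is illustrative only, not the authors' code: the cascade file name and the placement of the forehead crop (centered horizontally, anchored at the chin) are our assumptions, while the 128 × 128 and 96 × 76 sizes follow the text.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect, crop and normalize a face as described in Sect. 2 (illustrative sketch).
cv::Mat detectAndNormalizeFace(const cv::Mat& gray) {
    // Stock OpenCV Viola-Jones cascade (Haar features + AdaBoost + cascade).
    cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");
    std::vector<cv::Rect> faces;
    detector.detectMultiScale(gray, faces);
    if (faces.empty()) return cv::Mat();        // no face found: caller discards image

    cv::Mat face = gray(faces[0]).clone();      // crop the W x H detected region
    cv::resize(face, face, cv::Size(128, 128)); // normalize to W1 x H1 = 128 x 128

    // Drop the forehead: keep W2 x H2 = 96 x 76 pixels (crop placement assumed).
    const int W2 = 96, H2 = 76;
    cv::Rect withoutForehead((128 - W2) / 2, 128 - H2, W2, H2);
    return face(withoutForehead).clone();
}
```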
A face image with weak illumination can reduce recognition performance, so illumination adjustment is necessary in the preprocessing stage. HLT transforms the intensity values so that the histogram of the output image is approximately flat (uniform). We apply HLT for illumination uniformity to a grayscale face image if its average pixel intensity is less than 170 [27] (see Fig. 3e). Conversion to a binary face image using the cumulative probability distribution function (CPDF) with Otsu's optimal global threshold, cropping of the four BFCs (both eyes, nose and mouth components) and the four grayscale facial components (GFCs) (both eyes, nose and mouth components), and extraction of the nine FCPs are the principal tasks of this section. The detailed procedure is shown in Fig. 3, and the cropping of facial components is explained in detail in [5-7] (see Fig. 3f, h). The following mathematical concepts explain Otsu's adaptive thresholding, binary image conversion, cropping of the four BFCs and four GFCs, and the nine-FCP extraction technique [3-7, 28]. Otsu's thresholding is a nonlinear, nonparametric, unsupervised statistical discriminant-analysis image segmentation method, a clustering-based adaptive thresholding that transforms a grayscale image into a binary image. Its fundamental idea is to split the image's pixels into two classes and find the optimal threshold that maximizes the between-class (inter-class) variance, or equivalently minimizes the weighted sum of within-class (intra-class) variances. An image $I(x, y)$ is a 2D grayscale intensity function containing $N = W \times H$ pixels with gray levels from $0$ to $L-1$. The number of pixels with gray level $i$ is denoted $n_i$, giving the probability of gray level $i$ in the image
$p_i = n_i / N, \qquad \sum_{i=0}^{L-1} p_i = 1.$
In bi-level thresholding of $I(x, y)$, the pixels are divided into class $C_1$ with gray levels $[0, 1, \ldots, t]$ and class $C_2$ with gray levels $[t+1, t+2, \ldots, L-1]$ for a threshold value $s_{threshold}(t) = t$ with $0 \le t < L-1$. The probabilities $\omega_1(t)$ and $\omega_2(t)$ that a pixel is assigned to class $C_1$ or $C_2$ are given by the cumulative sums [28]
$\omega_1(t) = \sum_{i=0}^{t} p_i \quad \text{and} \quad \omega_2(t) = \sum_{i=t+1}^{L-1} p_i = 1 - \omega_1(t).$
The mean intensity values $\mu_1(t)$ and $\mu_2(t)$ of the pixels assigned to the two classes, obtained using Bayes' rule, are
$\mu_1(t) = \frac{1}{\omega_1(t)} \sum_{i=0}^{t} i\, p_i \quad \text{and} \quad \mu_2(t) = \frac{1}{\omega_2(t)} \sum_{i=t+1}^{L-1} i\, p_i.$
The cumulative mean (average intensity) $\mu_C(t)$ up to level $t$ and the global mean $\mu_G$ of the entire image up to level $L-1$ are
$\mu_C(t) = \sum_{i=0}^{t} i\, p_i \quad \text{and} \quad \mu_G = \sum_{i=0}^{L-1} i\, p_i,$
and the global mean can be written as the weighted sum of the two class means:
$\mu_G = \omega_1(t)\,\mu_1(t) + \omega_2(t)\,\mu_2(t).$
The global variance $\sigma_G^2$ (the total variance, i.e., the intensity variance of all pixels in the image) is the sum of the within-class variance $\sigma_W^2$ and the between-class variance $\sigma_B^2$:
$\sigma_G^2 = \sigma_W^2(t) + \sigma_B^2(t),$
which is constant and independent of the threshold value, where
$\sigma_W^2(t) = \omega_1(t)\,\sigma_1^2(t) + \omega_2(t)\,\sigma_2^2(t)$
denotes the weighted within-class variance of classes $C_1$ and $C_2$, and
$\sigma_B^2(t) = \omega_1(t)\,\omega_2(t)\,[\mu_1(t) - \mu_2(t)]^2 = \frac{[\mu_G\,\omega_1(t) - \mu_C(t)]^2}{\omega_1(t)\,[1 - \omega_1(t)]}$
denotes the between-class variance. Since the total (global) variance is constant and does not depend on the threshold, the algorithm must concentrate on minimizing $\sigma_W^2(t)$ or, equivalently, maximizing $\sigma_B^2(t)$. Computing the within-class variance of both classes for every possible threshold requires considerable computation, but maximizing $\sigma_B^2(t)$ is easy because $\sigma_B^2(t)$ depends only on the global mean, the cumulative mean and the probability of class $C_1$, all of which are simple to compute. The optimal threshold value $t^*$ that maximizes $\sigma_B^2(t)$ is given by
$t^* = \arg\max_{0 \le t \le L-1} \sigma_B^2(t),$
which yields the Otsu threshold value (see Fig. 2b, d).
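In OpenCV, the conditional HLT and the Otsu threshold reduce to two library calls; cv::threshold with THRESH_OTSU maximizes $\sigma_B^2(t)$ internally and returns $t^*$. A minimal sketch (our illustration, not the authors' code):

```cpp
#include <opencv2/opencv.hpp>

// Conditional histogram linearization followed by Otsu binarization (sketch).
cv::Mat binarizeFace(const cv::Mat& grayFace) {
    cv::Mat corrected;
    if (cv::mean(grayFace)[0] < 170.0)          // HLT only for weakly lit faces
        cv::equalizeHist(grayFace, corrected);
    else
        corrected = grayFace;

    cv::Mat binary;
    // THRESH_OTSU ignores the 0 below and computes t* = argmax sigma_B^2(t).
    double tStar = cv::threshold(corrected, binary, 0, 255,
                                 cv::THRESH_BINARY | cv::THRESH_OTSU);
    (void)tStar;                                // t* is available if needed downstream
    return binary;                              // white pixels = 255, black = 0
}
```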
The cumulative probability density function (CPDF) is applied to the cropped grayscale face image together with Otsu's threshold to obtain the desired binary face image. Finally, the four ROIs (regions of interest), namely the binary eyes, nose and mouth regions, are extracted using 2D geometric positions (a Cartesian coordinate system on the xy-plane) [3-7] (see Fig. 3g, h). If $I_{grayscale}(X, Y)$ is a cropped grayscale face image of size $W \times H$, its CPDF is
$P(Z) = \sum_{i=0}^{Z} p_i, \qquad 0 \le Z \le L-1,\; L = 256,\; 0 \le P(Z) \le 1.0,$
and the binary face image $I_{binary}(X, Y)$ is obtained by comparing the CPDF value of each pixel with the CPDF value at Otsu's threshold $t^*$:
$I_{binary}(X, Y) = \begin{cases} 1 & \text{if } P(I_{grayscale}(X, Y)) > P(t^*), \\ 0 & \text{otherwise.} \end{cases}$
Viewed in terms of the human frontal face structure, for both grayscale and binary images, the eyes, nose and mouth regions are situated in the upper, middle and lower portions of the face image, respectively. The upper portion is divided vertically into left and right segments to isolate the right and left eyes, respectively. To crop the nose region, the center portion is extracted from the middle band of the face, discarding equal-sized leftmost and rightmost regions; the mouth region is likewise extracted from the center of the lower band, discarding equal-sized leftmost and rightmost regions (see Fig. 3f, h). Details are explained in [5-7]. The face image size (without the forehead area) is $W2 \times H2$ pixels, and the four ROI sizes follow from this band split [5-7]. The four binary ROI (region of interest) images, the right-eye, left-eye, nose and mouth regions, are used to detect corner points automatically (see Fig. 3h). A simple linear search is applied to the right-eye, left-eye and mouth binary ROIs to detect the first white pixel locations, and a contour algorithm is applied to the binary nose ROI to detect the nostril locations; the nose tip is then calculated from the nostril locations. Details are explained in [5-7]. The positions of the nine facial corner points are summarized in Table 1, and the detected corner points are shown in Fig. 4.
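A sketch of the band split and the first-white-pixel search follows. The one-third band heights and the half-width center crops are our assumptions for illustration; the paper's exact offsets are given in [5-7].

```cpp
#include <opencv2/opencv.hpp>

struct FaceROIs { cv::Mat rightEye, leftEye, nose, mouth; };

// Split the binary face (forehead already removed) into the four component ROIs.
FaceROIs cropComponents(const cv::Mat& binFace) {
    const int W = binFace.cols, H = binFace.rows;
    FaceROIs r;
    r.rightEye = binFace(cv::Rect(0,     0,         W / 2, H / 3)); // upper-left
    r.leftEye  = binFace(cv::Rect(W / 2, 0,         W / 2, H / 3)); // upper-right
    r.nose     = binFace(cv::Rect(W / 4, H / 3,     W / 2, H / 3)); // middle center
    r.mouth    = binFace(cv::Rect(W / 4, 2 * H / 3, W / 2, H / 3)); // lower center
    return r;
}

// Linear search for the first white pixel, as used on the eye and mouth ROIs.
cv::Point firstWhitePixel(const cv::Mat& roi) {
    for (int y = 0; y < roi.rows; ++y)
        for (int x = 0; x < roi.cols; ++x)
            if (roi.at<uchar>(y, x) > 0)
                return cv::Point(x, y);
    return cv::Point(-1, -1);                   // ROI contained no white pixel
}
```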
The five statistical pattern matching tools, namely the standard deviation of five CSQ values, the standard deviation of fifteen HuMIs values, five AbsDifPWPs values between the binary test face and the binary reference database images, five PIVs (L2 norms) between the grayscale test face and the grayscale reference database images, and nine GDVs computed from the automatically extracted nine FCPs of the test face and the reference database images, are constructed mathematically to describe the six types of face recognition algorithms [3-8]. The Chi-square (CSQ) statistic measures the goodness-of-fit of data to a model: it is the sum over all categories of the squared difference between the observed (O) and expected (e) data (the deviation, d), divided by the sum of the observed and expected data. If the observed values in each of $b$ bins are $O_i$ and the expected values from the model are $e_i$, then $\chi^2 = \sum_{i=1}^{b} (O_i - e_i)^2 / (O_i + e_i)$, and for a binary facial component the CSQ statistic can be written, following Eqs. (19) and (20) [29, 30], as
$d = \chi^2_{CSQ} = \frac{(P_{Ref} - P_{Test})^2}{P_{Ref} + P_{Test}} + \frac{(Q_{Ref} - Q_{Test})^2}{Q_{Ref} + Q_{Test}},$
where $P$ is the probability of white pixels (PWPs), $O = P_{Ref}$, $e = P_{Test}$, $Q$ is the probability of black pixels (PBPs) $= 1 - P$, and $P_{Ref}$ and $P_{Test}$ are the PWPs of the reference and test images, respectively. A low value of $d = \chi^2_{CSQ}$ indicates a better match than a high one; an exact match gives 0 (zero), while a total mismatch is unbounded and depends on the bin size. The standard deviation of the five CSQ values for the five BFCs between the test face and a reference database image is given by Eq. (21) [3]:
$\sigma_{CSQ} = \sqrt{\frac{1}{5} \sum_{i=1}^{5} \left(d_{CSQ_i} - \bar{d}_{CSQ}\right)^2},$
where $\sigma_{CSQ}$ is the standard deviation of the CSQ values, $d_{CSQ_i}$ ($i = 1, 2, \ldots, 5$) are the CSQ values for the five BFCs and $\bar{d}_{CSQ}$ is their mean. The moment concept is mainly used as a shape descriptor of a probability distribution function and serves many real-world applications in computer vision, image processing and pattern recognition for object matching, recognition, classification and identification. Mathematically, moments are ''projections'' of a function onto a polynomial basis. Since shape features perform a vital role in image classification, identification and recognition, their effective and efficient extraction is the key element of image representation and comparison. Moment invariants are moments invariant to a certain class of image degradations, such as translation, rotation and scaling (TRS) or affine transforms. In this work, we use the moment invariant concept as a shape descriptor on five binary facial components for face recognition [31]. Hu [32] first introduced two-dimensional geometric moment invariants for shape recognition: a set of seven nonlinear moment functions derived from the second- and third-order moments that are invariant to translation, scale and rotation. For a digital image $f(x, y)$ of size $W \times H$, the 2D traditional geometric moment of order $(a + b)$ is expressed by
$K_{ab} = \sum_{x=0}^{W-1} \sum_{y=0}^{H-1} x^a y^b f(x, y),$
where $a, b = 0, 1, 2, \ldots$ are integers and the sums run over the entire image area including its boundary. In the image plane, the image centroid is used to define the central moments, which normalizes for translation. The central moment of order $(a + b)$ is defined as
$\mu_{ab} = \sum_{x=0}^{W-1} \sum_{y=0}^{H-1} (x - \bar{x})^a (y - \bar{y})^b f(x, y), \qquad \bar{x} = \frac{K_{10}}{K_{00}}, \quad \bar{y} = \frac{K_{01}}{K_{00}}.$
Since central moments are origin independent, they are translation invariant, but in their original form they are not invariant to scale or rotation. Applying a scaling normalization, the central moments become
$\eta_{ab} = \frac{\mu_{ab}}{\mu_{00}^{(a+b)/2 + 1}}.$
Hu's [32] seven scalar values, determined by normalizing the central moments through order three, are invariant to object scale, position and orientation. In terms of the normalized central moments, the seven moments of Eqs. (25)-(31) are
$\psi_1 = \eta_{20} + \eta_{02},$
$\psi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$
$\psi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,$
$\psi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,$
$\psi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2],$
$\psi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),$
$\psi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2].$
These seven invariant moments $\psi_z$, $1 \le z \le 7$ (Eqs. 25-31), are independent of scale, translation and rotation. We apply $\psi_z$ for shape matching to five binary images, the four binary ROIs (right eye, left eye, nostrils and mouth) and the binary face image without the forehead portion, for both the test and reference images. The three types of Hu invariant moment values $e_i$, $u_i$ and $f_i$, with $i = 1, 2, 3, 4, 5$, are computed for the five binary images using Eqs. (32)-(36) of [4], and the standard deviation $\sigma_{HuMIs}$ of the resulting fifteen values is used for matching. The absolute difference of probabilities of white pixels (AbsDifPWPs) between the binary test image and a binary reference database image is given by Eq. (38) [3, 4]:
$d_{ABS_i} = \left| P_{Test_i} - P_{Ref_i} \right|, \qquad i = 1, 2, \ldots, 5.$
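The two shape tools above are straightforward to compute. The sketch below (ours, with illustrative names) derives $\sigma_{CSQ}$ from the five binary component pairs and obtains $\psi_1, \ldots, \psi_7$ through OpenCV's cv::moments and cv::HuMoments; the exact $e_i$/$u_i$/$f_i$ combinations of Eqs. (32)-(36) are given in [4] and are not reproduced here.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// PWP of a binary component: white pixels / total pixels (Sect. 4).
double pwp(const cv::Mat& bin) {
    return static_cast<double>(cv::countNonZero(bin)) / (bin.rows * bin.cols);
}

// Two-bin chi-square over the white/black probabilities; 0 means an exact match.
double csq(double pTest, double pRef) {
    double qT = 1.0 - pTest, qR = 1.0 - pRef, d = 0.0;
    if (pRef + pTest > 0.0) d += (pRef - pTest) * (pRef - pTest) / (pRef + pTest);
    if (qR + qT > 0.0)      d += (qR - qT) * (qR - qT) / (qR + qT);
    return d;
}

// sigma_CSQ over the five binary facial components (Eq. 21).
double sigmaCSQ(const std::vector<cv::Mat>& test, const std::vector<cv::Mat>& ref) {
    std::vector<double> d(5);
    double mean = 0.0;
    for (int i = 0; i < 5; ++i) {
        d[i] = csq(pwp(test[i]), pwp(ref[i]));
        mean += d[i] / 5.0;
    }
    double var = 0.0;
    for (int i = 0; i < 5; ++i) var += (d[i] - mean) * (d[i] - mean) / 5.0;
    return std::sqrt(var);
}

// Hu's seven invariants psi_1..psi_7 of a binary component.
void huInvariants(const cv::Mat& bin, double psi[7]) {
    cv::Moments m = cv::moments(bin, /*binaryImage=*/true);
    cv::HuMoments(m, psi);
}
```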
Similarity (or closeness) is also calculated using a Euclidean distance formula (EDF). The square root of the sum of squared differences between the corresponding pixel values of two grayscale images $I_{Test_i}(x, y)$ and $J_{Ref_i}(x, y)$ of the same size $(W \times H)$ is known as the L2 norm. Equation (39) is [3, 4]
$d_{EDF_i} = \sqrt{\sum_{x=1}^{W} \sum_{y=1}^{H} \left[ I_{Test_i}(x, y) - J_{Ref_i}(x, y) \right]^2},$
where $i = 1, 2, \ldots, 5$ and $d_{EDF_i} \ge 0.0$, with $d_{EDF_i} = 0.0$ indicating a perfect match. The geometric distance value (GDV) between two Cartesian coordinate points on the xy-plane is defined by the geometric distance formula. The distance between two corresponding corner points $Q_{Test_r}(x_1, y_1)$ and $Q_{Ref_r}(x_2, y_2)$ of two face images $Img_{Test_r}(x, y)$ and $Img_{Ref_r}(x, y)$ of the same size (128 × 128) is (see Fig. 4) [5-8]
$GDVs_r = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2},$
where $r = 1, 2, \ldots, 9$ and $GDVs_r \ge 0.0$, with $GDVs_r = 0.0$ confirming a perfect match.
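Both distances are one-liners in practice; cv::norm computes the L2 distance of Eq. (39) directly. A short illustrative sketch:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// GDVs_r between the r-th corresponding corner points of test and reference faces.
double geometricDistance(const cv::Point& qTest, const cv::Point& qRef) {
    const double dx = qRef.x - qTest.x, dy = qRef.y - qTest.y;
    return std::sqrt(dx * dx + dy * dy);        // 0.0 confirms a perfect match
}

// d_EDF (Eq. 39): L2 norm between two same-sized grayscale images.
double pixelIntensityDistance(const cv::Mat& testGray, const cv::Mat& refGray) {
    CV_Assert(testGray.size() == refGray.size() && testGray.type() == refGray.type());
    return cv::norm(testGray, refGray, cv::NORM_L2);
}
```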
By applying combinations of the five pattern matching tools to build the six types of recognition algorithms, the results for each individual (person) are reported on the basis of the best true positive rate (TPR), with the average results in tabular form, the entire results in graphical form and some true output face images (true alarms) for method 6 (CSQ + HuMIs + FCPs) in pictorial form. All methods are compared on five basic performance measurement parameters derived from the confusion matrix [33]: the best true positive rate (TPR)/recall/sensitivity and the corresponding recognition rate (RR)/accuracy, precision, F-score and false positive rate (FPR)/1-specificity. Only the false positive rate (FPR) results are arranged in descending order; the other four performance measurement parameters (TPR, precision, F-score and RR) are arranged in ascending order (see Tables 3, 4 and 5 and Figs. 6, 7). The confusion matrix has four outcomes: true positives ($T_p$), true negatives ($T_n$), false positives ($F_p$) and false negatives ($F_n$), where $T_p$ is a correctly predicted authorized person (true alarm), $T_n$ a correctly predicted non-authorized person (true no-alarm), $F_p$ an incorrectly accepted non-authorized person (false alarm), $F_n$ an incorrectly rejected authorized person (missed alarm), and $N$ = total number of images of all persons in the database = 1306. The five basic performance measures from the confusion matrix, described in Table 2 [33], are:
1. Recognition Rate (RR)/Accuracy = $(T_p + T_n)/N$: the overall fraction of correct decisions; optimum RR = 1.0 (shown in Tables 3, 5, 6 and Fig. 7).
2. TPR/Recall/Sensitivity = $T_p/(T_p + F_n)$: the probability that a positive example yields a positive test result; TPR is independent of $F_p$; optimum TPR = 1.0 (shown in Tables 3, 5, 6 and Fig. 6a).
3. Precision = $T_p/(T_p + F_p)$: the probability that, given a positive test result, the sample is truly positive; precision is inversely related to $F_p$; optimum Precision = 1.0 (shown in Tables 4, 5, 6 and Fig. 6b).
4. F-Score = $2 \cdot \text{Precision} \cdot \text{Recall} / (\text{Precision} + \text{Recall})$: combines precision and recall into a single measure that conveys both properties; inversely related to $F_p$; optimum F-Score = 1.0 (shown in Tables 4, 5, 6 and Fig. 6b).
5. FPR/1-Specificity = $F_p/(F_p + T_n)$: the probability that a negative example yields a positive test result; FPR is directly related to $F_p$; optimum FPR = 0.0 (shown in Tables 5, 6 and Fig. 6a).
In this paper, cropping of the face area alone is done using the popular Viola-Jones face detector [26]. The BioID face database [34] is used for the experiments; it consists of 1521 grayscale images of 25 persons with diverse illuminations, expressions, pose variations and partial occlusions, with the face located at different positions against complex backgrounds at a resolution of 384 × 286 pixels. Only $N_{Total} = 1306$ faces are detected; the remaining images are discarded because the face detector returns a false (non-face) region. Each face image is treated as a test face and compared with all faces ($N_{Total} = 1306$) of the reference database during assessment. The proposed system is implemented and tested on the OpenCV platform in C/C++ with the GNU GCC compiler and Code::Blocks. Face detection and localization, cropping of the accurate face region, face size normalization, determination of Otsu's dynamic threshold and the shape matching tasks are implemented with OpenCV library functions [3, 4]. For the BioID face database, the five thresholds $T_{CSQ}$, $T_{HuMIs}$, $T_{ABS}$, $T_{PIVs}$ and $T_{GDVs}$ for the five pattern matching tools $\sigma_{CSQ}$, $\sigma_{HuMIs}$, $d_{ABS_i}$, $d_{EDF_i}$ and $GDVs_r$ are taken as 0.008, 0.08, 0.005, 1.0 and 3.25, respectively (see Table 5). Tables 3 and 4 show the performance evaluation parameters (TPR, RR, precision and F-score) on the basis of the best TPR values for the 25 individuals (persons) using the confusion matrix concept: for the six recognition algorithms, the TPR and recognition rate (RR) are shown in Table 3, and precision and F-score in Table 4. Table 5 summarizes the average performance evaluation parameters from Tables 3 and 4, and Table 6 summarizes the number of optimal TPR, precision and FPR values (TPR = Precision = 1.0 and FPR = 0.0 being the optima) over the entire database (see Fig. 6a, b). The best average TPR and the number of optimal TPR values for method 6 are 0.7859 and 95, respectively, better than the remaining five methods, which satisfies the aim of our proposed work. The different pattern matching algorithms recognized both the same images and different images of the same individual (person) over the entire face database; hence, the combination of multiple algorithms achieved the same TPR percentage (optimal number of TPR values = 95) for each individual, fulfilling our research objective. However, the combination of multiple algorithms also admits a large number of false alarms ($F_p$). The other performance evaluation parameters (RR, precision, F-score and FPR) deteriorate somewhat because of these false alarms, since all of them, unlike the TPR (recall), are related to $F_p$: TPR does not depend on $F_p$, and $(T_p + F_n)$ is constant for each individual (see Table 2, the second column of Tables 3 or 4, and Table 5). Thus, the system cannot raise $T_p$ (true alarms) without affecting the other performance evaluation parameters (RR, precision, F-score and FPR). Since every individual algorithm admits false alarms ($F_p$), the fusion algorithms also admit a large number of false alarms under the same threshold settings as the individual pattern matching algorithms. Figure 5 shows that the algorithm (method 6) is invariant to illumination, expression, pose variation and partial occlusion: the leftmost column shows the test face given as input, and the five columns to its right show recognized faces as output, supporting the invariance properties under multiple imaging conditions.
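A sketch of how the thresholds of Table 5 can gate a fused decision follows. The conjunction of the three tools is our reading of method 6 (CSQ + HuMIs + FCPs), and taking the maximum GDV over the nine corner points is an assumption; the paper's exact logical/conditional combination may differ.

```cpp
#include <algorithm>
#include <array>

// Fused accept/reject decision for method 6 (illustrative sketch).
struct MatchScores {
    double sigmaCSQ;                 // standard deviation of the five CSQ values
    double sigmaHuMIs;               // standard deviation of the fifteen HuMIs values
    std::array<double, 9> gdvs;      // GDVs_r for the nine facial corner points
};

bool method6Accept(const MatchScores& s) {
    const double T_CSQ = 0.008, T_HuMIs = 0.08, T_GDVs = 3.25;  // thresholds, Table 5
    const double worstGDV = *std::max_element(s.gdvs.begin(), s.gdvs.end());
    return s.sigmaCSQ < T_CSQ && s.sigmaHuMIs < T_HuMIs && worstGDV < T_GDVs;
}
```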
Figures 6 and 7 show the performance and accuracy curves for the six recognition algorithms; method 6 (CSQ + HuMIs + FCPs) achieves the best TPR (recall) performance, while its FPR, precision, F-score and RR curves lie slightly below those of the other five methods. The sorted TPR (recall), precision, F-score, FPR and RR values of the six algorithms follow the ordering $FPR_{CSQ} < FPR_{HuMIs} < FPR_{CSQ+FCPs} < FPR_{HuMIs+FCPs} < FPR_{CSQ+HuMIs} < FPR_{CSQ+HuMIs+FCPs}$ (according to Figs. 6a and 7). On the other hand, with only nine simple corner points, the geometric distance values (GDVs), even though they require the least computational cost and need only distance values, reduce the system performance because they admit false alarms ($F_p$); nevertheless, they support the invariance properties of shape and of different imaging conditions (see Fig. 5). Nine simple corner points alone are therefore not good enough, whereas the remaining four pattern matching tools, $\sigma_{CSQ}$, $\sigma_{HuMIs}$, $d_{ABS_i}$ and $d_{EDF_i}$, together with a larger number of corner points, are indispensable features for face recognition (see Tables 3, 4 and Figs. 6a, b and 7). Since a recognition algorithm is constructed from two or more pattern matching tools, it is easy to add or remove a tool to form a new algorithm with optimum performance. The threshold settings of each individual algorithm and of the combinations of multiple algorithms are the same; as a result, the TPR (recall) increases, but the other performance evaluation parameters (RR, precision, F-score and FPR) deteriorate because of false alarms ($F_p$). It is therefore necessary to reset (decrease) the threshold values to control the false alarms in the fusion algorithms. Some true recognition results for method 6 are shown pictorially in Fig. 8. The purpose of this research, as a facial component feature extraction analysis, is based on BFCs built from three independent concepts, PWPs with the CSQ formula, shape similarity with HuMIs, and GDVs in a Cartesian coordinate system, together with the texture concept, and employs five pattern matching tools evaluated through a performance analysis from a TPR perspective. Combining any two or three tools yields the six independent recognition algorithms, so any facial component feature can be added, removed or modified to reach the optimal outcome. The advantage of the component-based face recognition algorithm is that it is invariant to shape as well as to different imaging conditions such as illumination, expression, pose and partial occlusion, and only simple mathematical calculations are needed to construct the desired pattern matching tools. Since the combination of multiple algorithms admits a large number of false alarms, the threshold values must be reset to restrict the false alarms in the fusion algorithms. Nine FCPs are not enough to obtain a better outcome; the remaining four pattern matching tools, $\sigma_{CSQ}$, $\sigma_{HuMIs}$, $d_{ABS_i}$ and $d_{EDF_i}$, and a larger number of FCPs are the crucial features for face recognition. The main contributions of this work are achieving the same TPR percentage for each individual using fusion algorithms and a performance analysis of each crucial facial pattern matching tool on the basis of the confusion matrix.
Excluding GDVs, including local binary patterns or edge detection concepts as new features, and adjusting the threshold values to control the false alarms over multiple databases will be the focus of our future work.

References

[1] Face recognition: a literature survey
[2] CONSPEC and CONLERN: a two-process theory of infant face recognition
[3] Face recognition using eyes, nostrils and mouth features
[4] Component based face recognition using feature matching through Hu moment invariants
[5] Extraction of facial feature points using cumulative distribution function by varying single threshold group
[6] Extraction of facial feature points using cumulative histogram
[7] Automatic adaptive facial feature extraction using CDF analysis
[8] Face recognition using facial features
[9] A probabilistic fusion methodology for face recognition
[10] Eigenfaces for recognition
[11] Eigenfaces vs. Fisherfaces: recognition using class specific linear projection
[12] Independent component analysis: a new concept? Signal Processing
[13] Human face recognition using different moment invariants: a comparative study. Image and Signal Processing
[14] Local appearance based face recognition using discrete cosine transform
[15] A component-based framework for face detection and identification
[16] Component-based cascade linear discriminant analysis for face recognition
[17] Component-based face recognition with 3D morphable models
[18] A study on the effective approach to illumination-invariant face recognition based on a single image
[20] Component-based representation in automated face recognition
[21] Probabilistic elastic part model for real-world face recognition
[22] A novel incremental principal component analysis and its application for face recognition
[23] Face recognition using moments and wavelets
[24] A case for the average-half-face in 2D and 3D for face recognition
[25] Face recognition: features versus templates
[26] Robust real-time object detection
[27] Digital image processing
[28] A threshold selection method from gray-level histograms
[29] The chi-square test of goodness of fit
[30] Learning OpenCV (online book)
[31] Moment invariants in image analysis
[32] Visual pattern recognition by moment invariants
[33] An introduction to ROC analysis
[34] BioID Face Database