Online joint palmprint and palmvein verification

David Zhang*, Zhenhua Guo, Guangming Lu, Lei Zhang, Yahui Liu, Wangmeng Zuo
Biometrics Research Centre, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Expert Systems with Applications (2010), doi:10.1016/j.eswa.2010.08.052
* Corresponding author. E-mail address: csdzhang@comp.polyu.edu.hk (D. Zhang).

Keywords: Biometric authentication; Palmprint; Palmvein; Fusion; Liveness detection; Texture coding; Matched filter

Abstract

As a unique and reliable biometric characteristic, palmprint verification has achieved great success. However, palmprint alone may not be able to meet the increasing demand for highly accurate and robust biometric systems. Recently, palmvein, which refers to the palm features captured under the near-infrared spectrum, has been attracting much research interest. Since palmprint and palmvein can be captured simultaneously by specially designed devices, the joint use of palmprint and palmvein features can effectively increase the accuracy, robustness and anti-spoofing capability of palm-based biometric techniques. This paper presents an online personal verification system that fuses palmprint and palmvein information. Considering that palmvein image quality can vary considerably, a dynamic fusion scheme that adapts to image quality is developed. To increase the anti-spoofing capability of the system, a liveness detection method based on image properties is proposed. A comprehensive database of palmprint-palmvein images was established to verify the proposed system. The experimental results demonstrate that, since palmprint and palmvein contain complementary information, much higher accuracy can be achieved by fusing them than by using either alone. In addition, the whole verification procedure can be completed within 1.2 s, which means the system can work in real time.

© 2010 Published by Elsevier Ltd.

1. Introduction

Biometric characteristics, including fingerprint, facial features, iris, voice, signature, and palmprint (Jain, Bolle, & Pankanti, 1999), are now widely used in security applications. Each biometric characteristic has its own advantages and limitations. Among the various biometric techniques, palmprint recognition is becoming popular in personal authentication because it provides robust features from a large palm area and the palmprint image can be captured with a cost-effective device. In general, a typical palmprint acquisition device operates under visible light and can acquire three kinds of features: principal lines (usually the three dominant lines on the palm), wrinkles (weaker and more irregular lines) and ridges (patterns of raised skin similar to fingerprint patterns).
A resolution of about 100 dpi (dots per inch) (Han, Cheng, Lin, & Fan, 2003; Zhang, Kong, You, & Wong, 2003) is sufficient to acquire principal lines and wrinkles, while a higher resolution, usually 500 dpi, is required to acquire ridge features. However, such a high resolution significantly increases the computational cost of extracting ridge features because of the large image size of the palm, and hence prevents the system from running in real time. Therefore, most palmprint-based systems capture low-resolution palmprint images using CCD (charge-coupled device) cameras, and many algorithms have been proposed for feature extraction and matching (Connie, Jin, Ong, & Ling, 2005; Han et al., 2003; Hennings-Yeomans, Kumar, & Savvides, 2007; Hu, Feng, & Zhou, 2007; Kong & Zhang, 2004; Kumar & Zhang, 2005; Ribaric & Fratric, 2005; Su, 2009a, 2009b; Wu, Zhang, & Wang, 2003; Zhang et al., 2003).

Although palmprint recognition has achieved great success, it has some intrinsic weaknesses. For example, some people may have similar palm lines, especially principal lines (Zhang et al., 2003); also, it is not very difficult to forge a fake palmprint (Kong, Zhang, & Kamel, 2009). These problems can be addressed by multi-biometric systems, such as fusing facial and palmprint traits (Yao, Jing, & Wong, 2007) or fusing iris and palmprint traits (Wu, Zhang, Wang, & Qi, 2007). However, such systems are clumsy because they require two separate sensors to sense the two traits.

One way to improve the discriminative power and anti-spoofing capability of palmprint systems is to use more features from the palm, such as the veins of the palm. The veins of the palm mainly refer to the inner vessel structures beneath the skin, and palmvein images can be collected using both far-infrared (FIR) and near-infrared (NIR) light (Zharov et al., 2004). Obviously, palmvein is much harder to fake than palmprint. There is a long history of using NIR and FIR light to collect vein biometrics (Cross & Smith, 1995; Kono, Ueki, & Umemura, 2002; Lin & Fan, 2005; Macgregor & Welfold, 1991; Socolinsky, Wolff, Neuheisel, & Eveland, 2001; Wang, Leedham, & Cho, 2008; Wu & Ye, 2009), and recently a palmvein system has also been proposed (Watanabe, Endoh, Shiohara, & Sasaki, 2005). Intuitively, since both palmprint and palmvein come from the palm, it is possible to establish a convenient multi-biometric system that acquires and uses the two traits jointly, and the complementary information provided by the two traits will make the system more accurate in personal identification and more robust to spoof attacks.

Fig. 1. The prototype system.
A number of studies have been conducted on fusing palmprint and palmvein information for personal recognition. Wang, Yau, Suwandy, and Sung (2008) developed such a system and obtained good results, but their system uses two separate cameras and requires a time-consuming registration procedure, which makes it difficult to use in real time. Hao, Sun, and Tan (2007) evaluated various image-level fusion schemes for palmprint and palmvein images. However, they evaluated the methods on a very small database (only 84 images), making it hard to draw strong conclusions. Toh et al. (2006) captured the palm image under IR (infrared) lighting and then extracted the palmvein and palmprint features separately; however, since there was only one IR light source, some valuable palmprint information was lost.

In this paper, we design and construct a new device that can acquire palmprint and palmvein images simultaneously and in real time. The device involves one CCD camera and two light sources (one NIR light source for palmvein and one visible blue light source for palmprint). The light sources switch quickly, so the two images can be acquired within 1 s. More details of the device are provided in Section 2. With the captured palmprint and palmvein images, the personal verification framework has four main parts. (1) First, it performs liveness detection by analyzing the brightness and texture of the acquired image. (2) Second, it performs texture coding to extract palmprint features. (3) Third, it uses matched filters to extract the palmvein features. (4) Finally, a fused matching score is computed through a dynamic weighted sum that takes the palmvein image quality into account.

The rest of the paper is organized as follows. Section 2 describes the structure of the joint palmprint and palmvein system. Section 3 introduces the verification framework of the system. Section 4 reports experimental results on the established comprehensive database. Section 5 concludes the paper and suggests some directions for future work.

2. System description

The developed data acquisition device is made up of two light sources, a light controller (used to switch the lights on or off), a gray-level CCD camera, a lens, and an A/D (analogue-to-digital) converter connecting the CCD and the computer. The CCD is fixed at the bottom of the device. We use both visible light and NIR light as illumination sources. The palm lines can be clearly acquired under visible light, while under NIR light it is possible to acquire the vein structures beneath the palm skin but not the palm lines or wrinkles. Uniformly distributed 880 nm LEDs (light emitting diodes) are used for the NIR illumination. Light within the 700-1000 nm range can penetrate human skin to a depth of 1-3 mm, and it has been shown that 880-930 nm NIR light can provide relatively good contrast of subcutaneous veins (Zharov et al., 2004). To cover a wider spectral range and thus obtain more complementary information, uniformly distributed blue LEDs (peaking at 470 nm) are used as the visible light source.

To reduce the cost of the imaging system, a standard CCTV (closed-circuit television) camera, instead of a near-infrared-sensitive camera, is used in our system. Its sensitivity in the NIR range is not as strong as that of an NIR-sensitive camera, but it comes at a much lower price.
On the other hand, since this camera is also used to capture the palmprint image under visible light illumination, no NIR filter is used with the camera to cut out visible light. As a result, the quality of the NIR palmvein images captured by our device is not as good as that of images captured by NIR cameras and/or with an NIR filter. However, as we will see in this paper, the fusion of palmvein can still contribute much to improving the performance of palmprint recognition.

Fig. 1 shows the prototype of our system. The images can be captured in two different sizes, 352 × 288 and 704 × 576. Users are asked to put their palms on the platform, with several pegs serving as control points for the placement of the hand. The computer then collects one palmprint image and one palmvein image under the two different lighting conditions. The switching between the two types of light is very fast and allows us to capture the two images in a very short time (<1 s). As shown in Fig. 2, the translation and rotation between the two images are very small, so registration between them can be omitted.

Fig. 2. Sample images of (a) palmprint and (b) palmvein from the same palm.

Before feature extraction, it is necessary to extract from the original images a specific portion to work with. This is known as extraction of the region of interest (ROI). ROI extraction has two important advantages. First, it serves as a pre-processing step that removes the translation and rotation of the palmprint/palmvein images introduced in the data collection process. Second, it extracts the most informative area in the images, which greatly reduces the amount of data without losing much useful information and thus speeds up the subsequent feature extraction and matching. In this study, we set up the ROI coordinates on the palmprint image using the algorithm proposed in Zhang et al. (2003) and then use the coordinates to crop the ROI from both the palmprint and palmvein images. Fig. 3 shows some ROI samples.

Fig. 3. ROI sample images. The top row shows the palmprint ROIs and the second row shows the associated palmvein ROIs.

3. Joint palmprint and palmvein verification

The flowchart of the proposed online joint palmprint and palmvein verification system is illustrated in Fig. 4. It has four main stages. First the ROI is extracted; then a liveness detection algorithm based on image brightness and texture is applied; if the input images pass the liveness detection, palmprint features are extracted by texture coding and palmvein features are extracted by matched filters; finally, score-level fusion through a dynamic weighted sum is applied for decision making. The details of ROI extraction can be found in Zhang et al. (2003). In the following we discuss the other stages.

Fig. 4. Flowchart of online joint palmprint and palmvein verification.

3.1. Liveness detection

There are various liveness detection methods for biometric systems: for example, detecting perspiration in a sequence of fingerprint images (Parthasaradhi, Derakhshani, Hornak, & Schuckers, 2005),
using additional hardware to acquire life signs, or utilizing inherent liveness features such as facial thermograms (Schuckers, 2002). However, these methods can be time-consuming, may require additional hardware, and can be costly. In our system, low-cost NIR LEDs are used for illumination. It has been shown that 700-1000 nm NIR light can penetrate human skin to a depth of 1-3 mm and that blood absorbs more NIR energy than the surrounding tissues (e.g., fat or melanin), so the vein structures appear darker than other areas in a palmvein image (Zharov et al., 2004). However, since the skin of some people, especially females, is relatively thick (Lee & Hwang, 2002), their palmvein structures cannot be clearly captured (e.g., Fig. 3f). On the other hand, fake palms made of some materials can also produce dark lines under the NIR illumination of our system, e.g., Fig. 5a. Therefore, it is difficult to perform liveness detection by checking only for the existence of dark lines in the palmvein image.

As human skin has special reflectance and absorbance properties under the NIR spectrum, features associated with these properties can be extracted from the image brightness and texture to tell true palms from fake ones. Fig. 5 shows the palmvein images of several fake palms we made from different materials. After observing these images and palmvein images from true palms, we found that the image brightness and the gray level co-occurrence matrix (GLCM) (Haralick, Shanmugam, & Dinstein, 1973) entropy provide enough discriminative information to distinguish them. Thus, we propose a liveness detection algorithm that analyzes the brightness and texture features of the palmvein image.

The brightness feature is defined as the average intensity over the image:

$B = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} f(x, y)$,   (1)

where f(x, y) represents the gray value at pixel (x, y), and M and N are the numbers of rows and columns of the image.

The GLCM is a widely used texture operator in image processing and pattern recognition. For a given angle θ and distance d, the GLCM is defined as:

$p_{\theta,d}(i,j) = \frac{\#\{[(x_1,y_1),(x_1+\Delta x,\, y_1+\Delta y)] \in S \mid f(x_1,y_1) = i \;\&\; f(x_1+\Delta x,\, y_1+\Delta y) = j\}}{\#S}$,   (2)

where Δx = d cos θ and Δy = d sin θ, S is the set of pixel pairs in the image, "#" denotes the number of elements in a set, and (i, j) is a coordinate in the GLCM.

With the GLCM, several statistics can be computed, such as entropy, contrast, correlation, energy, and homogeneity. Among them, entropy is a popular feature for representing the uniformity of the image texture: the more uniformly the texture is distributed, the larger the entropy. The GLCM entropy is computed as

$E = \sum_{i=1}^{L} \sum_{j=1}^{L} p(i,j)\,(-\ln p(i,j))$,   (3)

where L is the number of quantization levels.
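To make the two features concrete, the following NumPy sketch computes B and the GLCM entropy for a gray-level image. It is a minimal illustration, not the authors' implementation: the quantization to 16 levels and the choice d = 1, θ = 0 are our own assumptions, since the paper does not report its exact GLCM settings.

```python
import numpy as np

def brightness(img):
    # Eq. (1): mean gray value over the M x N image.
    return img.mean()

def glcm_entropy(img, d=1, theta=0.0, levels=16):
    # Eqs. (2)-(3): build the normalized gray level co-occurrence matrix
    # for the displacement (dx, dy) = (d*cos(theta), d*sin(theta)), then
    # take its entropy. Quantization level and (d, theta) are assumed values.
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)
    dx = int(round(d * np.cos(theta)))
    dy = int(round(d * np.sin(theta)))
    p = np.zeros((levels, levels))
    rows, cols = q.shape
    for y in range(max(0, -dy), min(rows, rows - dy)):
        for x in range(max(0, -dx), min(cols, cols - dx)):
            p[q[y, x], q[y + dy, x + dx]] += 1
    p /= p.sum()
    nz = p[p > 0]                      # skip empty cells to avoid log(0)
    return float(-(nz * np.log(nz)).sum())
```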
Fig. 5. NIR palmvein images of fake palms made from (a) cardboard; (b) foam; (c) glove; (d) plaster; (e) plastic; (f) plasticine; (g) print paper; and (h) wax.

Table 1 shows the brightness and GLCM entropy of the palmvein images in Figs. 3 and 5. The brightness and GLCM entropy values of fake palms and true palms are clearly different. Therefore, given some training samples, a classifier can be learned to tell true palms from fake ones. In Section 4, we establish a training dataset and learn the classifier; a testing dataset is also built to evaluate the liveness detection method.

Table 1. The brightness (B) and GLCM entropy (E) of true (3b, 3d, 3f) and fake (5a-5h) palm samples.

        3b     3d     3f     5a     5b     5c     5d     5e     5f     5g     5h
B     105.9  104.6  107.1  110.2  108.0  115.5  126.7  113.9   71.1  127.6  114.4
E       5.5    5.1    5.1    6.6    7.5    7.2    6.3    6.5    6.8    6.6    6.5

3.2. Palmprint feature extraction and matching

In general, there are three kinds of palmprint feature extraction algorithms: subspace learning (Connie et al., 2005; Hu et al., 2007; Ribaric & Fratric, 2005; Wu et al., 2003), line detection (Han et al., 2003) and texture-based coding (Kong & Zhang, 2004; Zhang et al., 2003). Among them, orientation texture-based coding (Kong & Zhang, 2004) is preferred for an online system because it achieves high accuracy, is fast in matching, and can easily be implemented in real time.

The orientation of the palm lines is stable and can serve as a distinctive feature for personal identification. To extract the orientation features, six Gabor filters along different orientations (θ_j = jπ/6, where j = {0, 1, 2, 3, 4, 5}) are applied to the palmprint image. The real part of the Gabor filter is used; it is defined as:

$\psi(x, y, \omega, \theta) = \frac{\omega}{\sqrt{2\pi\kappa}}\, e^{-\frac{\omega^2}{8\kappa^2}(4x'^2 + y'^2)} \left(e^{i\omega x'} - e^{-\frac{\kappa^2}{2}}\right)$,   (4)

where x' = (x − x₀)cos θ + (y − y₀)sin θ, y' = −(x − x₀)sin θ + (y − y₀)cos θ, (x₀, y₀) is the center of the function, ω is the radial frequency in radians per unit length, and θ is the orientation of the Gabor function in radians. κ is defined by $\kappa = \sqrt{2\ln 2}\left(\frac{2^\delta + 1}{2^\delta - 1}\right)$, where δ is the half-amplitude bandwidth of the frequency response. To reduce the influence of illumination, the direct current (DC) component is removed from the filter.

By regarding the palm lines as negative lines, the orientation corresponding to the minimal Gabor filtering response (i.e., the negative response with the highest magnitude) is taken as the feature for each pixel (Kong & Zhang, 2004). Because the contour of the Gabor filter is similar to the cross-sectional profile of palm lines, the higher the magnitude of the response, the more likely there is a line. Since six filters are used to detect the orientation of each pixel, the detected orientations {0, π/6, π/3, π/2, 2π/3, 5π/6} can be coded using three bits, {000, 001, 011, 111, 110, 100} (Kong & Zhang, 2004). Fig. 6 shows an example of the extracted orientation feature map, where different gray levels represent different orientations.

Fig. 6. Orientation feature map of palmprint. (a) Original palmprint image; and (b) extracted feature map (different gray levels represent different orientations).

Based on the extracted 3-bit orientation feature maps, the Hamming distance between two maps can be calculated as:

$D(P, Q) = \frac{\sum_{y=0}^{M} \sum_{x=0}^{N} \sum_{i=1}^{3} \left(P_i^b(x,y) \otimes Q_i^b(x,y)\right)}{3 M N}$,   (5)

where P and Q are two feature maps, $P_i^b$ ($Q_i^b$) is the i-th bit plane of P (Q), and ⊗ is the bitwise exclusive OR. To further reduce the influence of imperfect ROI extraction, we translate one of the two feature maps vertically and horizontally from −4 to 4 pixels when matching it with the other feature map. The minimal distance obtained by translated matching is taken as the final distance.
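The following NumPy/SciPy sketch illustrates this competitive coding and matching scheme. The filter size and the ω and δ values are illustrative placeholders (the paper sets them empirically), and the matcher uses the fact that, with the 3-bit code above, the number of differing bits between orientation indices i and j equals min(|i − j|, 6 − |i − j|), so Eq. (5) can be evaluated directly on the orientation indices.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_bank(omega=0.5, delta=1.0, size=35):
    # Real part of Eq. (4) at the six orientations j*pi/6, DC removed.
    # omega, delta and size are assumed values, set empirically in the paper.
    kappa = np.sqrt(2 * np.log(2)) * (2**delta + 1) / (2**delta - 1)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    bank = []
    for j in range(6):
        th = j * np.pi / 6
        xp = x * np.cos(th) + y * np.sin(th)
        yp = -x * np.sin(th) + y * np.cos(th)
        g = (omega / np.sqrt(2 * np.pi * kappa)
             * np.exp(-(omega**2 / (8 * kappa**2)) * (4 * xp**2 + yp**2))
             * (np.cos(omega * xp) - np.exp(-kappa**2 / 2)))
        bank.append(g - g.mean())      # remove any residual DC component
    return bank

def competitive_code(img):
    # Winner-take-all rule: index of the minimal (most negative) response.
    resp = [fftconvolve(img.astype(np.float64), g, mode='same') for g in gabor_bank()]
    return np.argmin(np.stack(resp), axis=0)       # orientation index 0..5

def compcode_distance(p, q):
    # Eq. (5) on orientation indices; equals the 3-bit-plane Hamming distance.
    diff = np.abs(p - q)
    return np.minimum(diff, 6 - diff).sum() / (3.0 * p.size)

def translated_match(p, q, shift=4):
    # Minimal distance over all translations in [-4, 4]; borders are cropped.
    best = 1.0
    for dy in range(-shift, shift + 1):
        for dx in range(-shift, shift + 1):
            ps = p[max(dy, 0):p.shape[0] + min(dy, 0), max(dx, 0):p.shape[1] + min(dx, 0)]
            qs = q[max(-dy, 0):q.shape[0] + min(-dy, 0), max(-dx, 0):q.shape[1] + min(-dx, 0)]
            best = min(best, compcode_distance(ps, qs))
    return best
```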
3.3. Palmvein feature extraction and matching

It is observed that the cross-sections of palmveins are similar to Gaussian functions; Fig. 7 shows some examples. Based on this observation, matched filters (Hoover, Kouznetsova, & Goldbaum, 2000; Zhang, Li, You, & Bhattacharya, 2007), which are widely used in retinal vessel extraction, are a good technique for extracting these palmveins. The matched filters are Gaussian-shaped filters along an angle θ:

$g_\sigma^\theta(x, y) = -\exp\left(-\frac{x'^2}{\sigma^2}\right) - m, \quad \text{for } |x'| \le 3\sigma,\ |y'| \le L/2$,   (6)

where x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ, σ is the standard deviation of the Gaussian, m is the mean value of the filter, and L is the length of the filter in the y direction, which is set empirically. In order to suppress background pixels, the filter is designed to be zero-sum.

Fig. 7. (a)-(c) Some palmvein images and (d) the cross-sections of the vein structures.

For each scale σ, filters at four different angles (θ_j = jπ/4, where j = {0, 1, 2, 3}) are applied to each pixel, and the maximal response among the four directions is kept as the final response at the given scale (unlike the competitive code in Section 3.2, where the minimal response is used: here the shape of the matched filter is identical to the cross-section of a vein, so the maximal response is kept):

$R_F^\sigma = \max\left\{R_{\theta_j}^\sigma(x, y)\right\},\ j = \{0, 1, 2, 3\}; \qquad R_{\theta_j}^\sigma(x, y) = g_{\theta_j}^\sigma(x, y) * f(x, y)$,   (7)

where f(x, y) is the original image and * denotes the convolution operation.

As shown in Bao, Zhang, and Wu (2005) and Zhang et al. (2007), the multi-scale product of filtering responses is a good way to enhance edge structures and suppress noise. The product of the matched filter responses at two scales σ₁ and σ₂ is defined as:

$P(x, y) = R_F^{\sigma_1}(x, y) \cdot R_F^{\sigma_2}(x, y)$.   (8)

The scale parameters σ₁ and σ₂ are set empirically. After computing the multi-scale product, we binarize it using a threshold that is set empirically on a training dataset. A vein pixel, whose scale product response is greater than the threshold, is represented by "1", while a background pixel is represented by "0". Finally, some post-processing operations are performed to remove small regions. Fig. 8 illustrates the whole procedure of palmvein extraction.

Fig. 8. Palmvein feature extraction. (a) Original palmvein images; (b) matched filter response at scale σ₁; (c) matched filter response at scale σ₂; (d) response product of the two scales; (e) binary image after thresholding; and (f) final vein map after post-processing.
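A minimal sketch of this extraction pipeline is given below. The scales σ₁ and σ₂, the filter length L and the binarization threshold are placeholders for values the paper tunes empirically on a training set, and the small-region post-processing is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter(sigma, length, theta):
    # Eq. (6): zero-sum Gaussian-shaped filter on the support
    # |x'| <= 3*sigma, |y'| <= length/2, rotated by theta.
    half = int(np.ceil(np.hypot(3 * sigma, length / 2)))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    mask = (np.abs(xp) <= 3 * sigma) & (np.abs(yp) <= length / 2)
    g = -np.exp(-xp**2 / sigma**2) * mask
    g[mask] -= g[mask].mean()          # subtract m so the filter sums to zero
    return g

def scale_response(img, sigma, length=9):
    # Eq. (7): maximal response over the four orientations j*pi/4.
    resp = [convolve(img.astype(np.float64), matched_filter(sigma, length, j * np.pi / 4))
            for j in range(4)]
    return np.max(np.stack(resp), axis=0)

def vein_map(img, sigma1=1.8, sigma2=3.6, length=9, thr=0.0):
    # Eq. (8): multi-scale product of the two responses, then thresholding.
    # sigma1, sigma2, length and thr are assumed values.
    prod = scale_response(img, sigma1, length) * scale_response(img, sigma2, length)
    return prod > thr                  # boolean vein map ("1" = vein pixel)
```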
The extracted palmvein maps are binary images, and the distance between two palmvein maps is computed as:

$D(P, Q) = 1 - \frac{\sum_{y=0}^{M} \sum_{x=0}^{N} \left(P^b(x,y)\ \&\ Q^b(x,y)\right)}{\sum_{y=0}^{M} \sum_{x=0}^{N} \left(P^b(x,y)\ |\ Q^b(x,y)\right)}$,   (9)

where P and Q are two palmvein feature maps, "&" is the bitwise AND operator and "|" is the bitwise OR operator. The dissimilarity measure in (9) differs from the Hamming distance used in Zhang et al. (2007). This is because most pixels in a palmvein map are non-vein pixels; for example, in our database the average ratio of non-vein pixels is about 86%. Such an uneven distribution of vein and non-vein pixels makes the Hamming distance less discriminative (Daugman, 2003). As in palmprint feature map matching, we translate one of the palmvein feature maps vertically and horizontally from −4 to 4 pixels and match it with the other map; the minimal distance obtained by translated matching is taken as the final distance.
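In code, Eq. (9) is a Jaccard-style dissimilarity on boolean vein maps; a minimal sketch:

```python
import numpy as np

def vein_map_distance(p, q):
    # Eq. (9): one minus the ratio of overlapping vein pixels (AND)
    # to the union of vein pixels (OR), for boolean maps p and q.
    inter = np.logical_and(p, q).sum()
    union = np.logical_or(p, q).sum()
    return 1.0 - inter / union if union > 0 else 1.0
```

The same ±4 pixel translation search used for the palmprint maps can be wrapped around this distance.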
3.4. Palmprint and palmvein fusion

The information presented by multiple traits can be fused at various levels: image level, feature level, matching score level or decision level (Ross, Nandakumar, & Jain, 2006). Although image- and feature-level fusion can integrate the information provided by different biometrics, the required registration procedure is too time-consuming (Wang et al., 2008). As for matching score fusion and decision-level fusion, it has been found (Ross et al., 2006) that the former usually works better than the latter, because match scores contain more information about the input pattern and it is easy to access and fuse the scores generated by different matchers. For these reasons, matching-score-level fusion is the most commonly used approach in multimodal biometric systems. In this work, we test the sum and weighted sum rules on palmprint and palmvein matching score fusion:

$FD_{Sum} = D_{Palmprint} + D_{Palmvein}$,   (10)

$FD_{Weighted\ sum} = W_{Palmprint} D_{Palmprint} + W_{Palmvein} D_{Palmvein}$,   (11)

where D_Palmprint and D_Palmvein are the palmprint and palmvein matching scores obtained by Eqs. (5) and (9), respectively, and W_Palmprint and W_Palmvein are the weights of the palmprint and palmvein features in the fusion.

Considering that not all palmvein images have clear vein structures (see Fig. 3), it is intuitive that good-quality palmvein images should receive a higher weight in the fusion than poor-quality ones. Here, a dynamic weighted sum fusion scheme that incorporates palmvein image quality is proposed. We define an objective criterion (Daugman, 2003) to evaluate the palmvein image quality:

$d' = \frac{|\mu_{vein} - \mu_{non\text{-}vein}|}{\sqrt{\left(\sigma_{vein}^2 + \sigma_{non\text{-}vein}^2\right)/2}}$,   (12)

where μ_vein and μ_non-vein are the average intensity values of the vein and non-vein pixels extracted as in Section 3.3, and σ_vein and σ_non-vein are the corresponding standard deviations. For a clear palmvein image, the boundary between vein and non-vein structures is distinct, so a higher d' is obtained; for an unclear palmvein image the boundary is blurred and d' is smaller. For example, the d' values of Fig. 3b, d and f are 1.63, 1.11 and 0.16, respectively, which shows that such images can be well separated by the proposed criterion. Taking the palmvein image quality into consideration, the dynamic weighted sum scheme is defined as:

$FD_{Weighted\ sum} = W_{Palmprint} D_{Palmprint} + W_{Palmvein} D_{Palmvein} = \left(1 - \frac{\bar{d'}}{2}\right) D_{Palmprint} + \frac{\bar{d'}}{2} D_{Palmvein}$,   (13)

$\bar{d'} = \frac{d' - d'_{min}}{d'_{max} - d'_{min}}$,   (14)

where d'_min and d'_max are the minimal and maximal values of d' computed on a training dataset. As shown in (13), if the palmvein image quality is very good, the weights of palmprint and palmvein are both close to 0.5; if the quality is very poor, the system depends on the palmprint alone. The experiments in Section 4.5 validate the effectiveness of this dynamic fusion scheme.
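Eqs. (12)-(14) translate directly into code. In the sketch below, vmap is the boolean vein map from Section 3.3; the clipping of the normalized quality to [0, 1] is our own safeguard for test values of d' falling outside the training-set extremes.

```python
import numpy as np

def vein_quality(img, vmap):
    # Eq. (12): d-prime separability between vein and non-vein intensities.
    vein, non = img[vmap], img[~vmap]
    return abs(vein.mean() - non.mean()) / np.sqrt((vein.var() + non.var()) / 2)

def dynamic_fusion(d_palmprint, d_palmvein, d_prime, d_min, d_max):
    # Eqs. (13)-(14): the palmvein weight grows with image quality,
    # up to 0.5 for a perfectly clear vein image.
    d_bar = np.clip((d_prime - d_min) / (d_max - d_min), 0.0, 1.0)
    return (1 - d_bar / 2) * d_palmprint + (d_bar / 2) * d_palmvein
```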
4. Experimental results

4.1. Database establishment

Using the developed joint palmprint and palmvein device, we established a database of palmprint and palmvein images from 500 different palms. The subjects were mainly volunteers from the Shenzhen Graduate School of Harbin Institute of Technology and The Hong Kong Polytechnic University. In this database, 396 palms are from males, and the ages range from 20 to 60. The images were captured in two separate sessions, with an average interval of 9 days between sessions. In each session, the subject was asked to provide six samples of each palm. For each shot, the device collected one palmprint image and one palmvein image almost simultaneously (within 1 s). In total, the database contains 6000 palmprint and 6000 palmvein images at a resolution of 352 × 288.

4.2. Experimental results of liveness detection

We established a fake palm database, which includes 489 images from fake palms made of eight different materials: cardboard, foam, glove, plaster, plastic, plasticine, print paper, and wax. This dataset was randomly partitioned into two equal parts, a training set and a test set. The same division strategy was applied to the true palm database. The distributions of brightness (B, Eq. (1)) and entropy (E, Eq. (3)) of the training and test sets are plotted in Fig. 9. Because the skin reflectance and absorbance differ from those of the eight materials, there is a clear boundary between true and fake palms. In most cases, samples of different materials cluster into specific regions, as each material has specific properties under NIR illumination. Four thresholds (the boundaries of the rectangle in Fig. 9) are computed from the training samples:

$B_{min} = \min(B(TS)) - \sigma_B(TS), \quad B_{max} = \max(B(TS)) + \sigma_B(TS)$,
$E_{min} = \min(E(TS)) - \sigma_E(TS), \quad E_{max} = \max(E(TS)) + \sigma_E(TS)$,   (15)

where TS represents the whole training set, and σ_B(TS) and σ_E(TS) are the standard deviations of B and E over the training set, respectively.

With these four thresholds, all of the training samples are correctly classified, as shown in Fig. 9a. On the test set, only one genuine palmvein image is wrongly rejected, as shown in Fig. 9b. In practice, stricter thresholds can be used to reject impostor palms; if a genuine user is wrongly rejected, he/she can simply present the palm to the system one more time for verification. Currently the fake palm database is relatively small; setting up a larger fake palm database and investigating more advanced feature extraction methods will be our future focus.

Fig. 9. Brightness and GLCM entropy distributions of fake and true palms under NIR illumination. The rectangle is the boundary learned from the training set. (a) Distribution of the training set. (b) Distribution of the test set.
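The rectangle classifier of Eq. (15) is simple to implement; the sketch below assumes B_train and E_train are NumPy arrays holding the brightness and entropy features of the genuine training palms.

```python
import numpy as np

def learn_box(B_train, E_train):
    # Eq. (15): rectangle boundaries from the genuine training set,
    # widened by one standard deviation on each side.
    return (B_train.min() - B_train.std(), B_train.max() + B_train.std(),
            E_train.min() - E_train.std(), E_train.max() + E_train.std())

def is_live(b, e, box):
    # Accept as a live palm only if brightness and GLCM entropy both
    # fall inside the learned rectangle (Fig. 9).
    b_min, b_max, e_min, e_max = box
    return b_min <= b <= b_max and e_min <= e <= e_max
```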
4.3. Experimental results on palmprint verification

To obtain the verification accuracy of palmprint, each palmprint image was matched against all the other palmprint images. A match is counted as genuine if the two palmprint images are from the same palm; otherwise it is counted as an impostor match. The total number of palmprint matches is 6000 × 5999/2 = 17,997,000, among which there are 33,000 (12 × 11/2 × 500) genuine matches; the others are impostor matches. The equal error rate (EER), the point where the false accept rate (FAR) equals the false reject rate (FRR), is used to evaluate the performance. The distance distributions of genuine and impostor matches are shown in Fig. 10 and the receiver operating characteristic (ROC) curve is shown in Fig. 11. Using palmprint, we obtain FRR = 2.10% at FAR = 5.6e−6%, and the EER is about 0.0352%. This accuracy is comparable to the state of the art (EER 0.024%, multi-scale feature extraction (Zuo, Yue, Wang, & Zhang, 2008)) on the public palmprint database (PolyU Palmprint Database, 2006) collected under white illumination.

Fig. 10. Matching distance distribution of palmprint.
Fig. 11. ROC curve of palmprint.

4.4. Experimental results on palmvein verification

Using the same matching scheme as in Section 4.3, the distance distributions of genuine and impostor matches of palmvein images are illustrated in Fig. 12, and the ROC curve is displayed in Fig. 13. For comparison, the curve obtained with the Hamming distance as in Zhang et al. (2007) is also plotted.

Fig. 12. Matching distance distribution of palmvein.
Fig. 13. ROC curve of palmvein.

From Fig. 13, we can see that the proposed dissimilarity measure improves the verification accuracy significantly over the widely used Hamming distance. The EER of palmvein verification is about 0.3091%, which is not as accurate as palmprint verification but better than the result reported in Zhang et al. (2007) (98.8% GAR at FAR = 5.5%). This is largely due to the relatively low quality of our palmvein images: as discussed in Section 2, in order to capture the palmprint and palmvein images simultaneously, our device produces palmvein images whose quality is not as good as that of the palmprint images.

To better evaluate the effect of image quality on verification accuracy, we partitioned the palmvein images into three equal sets according to the proposed criterion d': good-, average- and poor-quality images. The ROC curves for the three sets are plotted in Fig. 14. The good-quality set gives much better results than the poor-quality set: the EER values for the good-, average- and poor-quality sets are 0.0898%, 0.1214% and 0.3199%, respectively.

Fig. 14. ROC curves of palmvein on different image qualities.

4.5. Experimental results of palmprint and palmvein fusion

The distance distributions of palmprint and palmvein are different, as can be seen in Figs. 10 and 12. For example, the impostor scores of palmprint concentrate around 0.45 with a standard deviation of 0.015, while those of palmvein concentrate around 0.89 with a standard deviation of 0.023. Thus, normalizing the matching scores before fusion is necessary. As the impostor distributions are roughly Gaussian in shape, z-normalization (Ross et al., 2006) is used here:

$D_N = \frac{D - \mu_{impostor}}{\sigma_{impostor}}$.   (16)
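A small sketch of this normalization; the two input distances below are invented example values, while the impostor statistics are those quoted above.

```python
def znorm(d, mu_impostor, sigma_impostor):
    # Eq. (16): shift by the impostor mean and scale by the impostor
    # standard deviation, both estimated on a training set.
    return (d - mu_impostor) / sigma_impostor

# Map both traits onto a comparable scale before fusing with Eq. (13):
d_pp = znorm(0.40, 0.45, 0.015)    # palmprint, about -3.3
d_pv = znorm(0.80, 0.89, 0.023)    # palmvein, about -3.9
```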
The ROC curves of palmprint and palmvein fusion are shown in Fig. 15. The EER values of the sum rule (Eq. (10)) and the dynamic weighted sum rule (Eq. (13)) are 0.0212% and 0.0158%, respectively. Because palmvein contains information complementary to palmprint, fusing the two improves the system accuracy significantly. Fig. 16 shows an example: the two palms have similar palmprint patterns and would be falsely accepted if only palmprint images were used as inputs. However, their palmvein patterns are very different; thus, by combining palmprint with palmvein, they can easily be separated. Moreover, since the proposed weighted sum fusion incorporates the palmvein image quality, it achieves better accuracy, as shown in Fig. 15: compared with the sum rule, the dynamic weighted sum reduces the EER by up to 25%.

Fig. 15. ROC curves of different fusion schemes.
Fig. 16. An example pair of palms with similar palmprints, which may be wrongly matched using palmprint alone but are well separated by their different palmvein features. (a)-(b) from one palm; (c)-(d) from another palm.

4.6. Speed

The system is implemented in Visual C++ 6.0 on a PC with Windows XP, a T6400 CPU (2.13 GHz) and 2 GB of RAM. The execution time of each step is listed in Table 2. The total execution time of a verification is less than 1.2 s, which is fast enough for real-time applications. As matching is very fast, the system can easily be extended to identification; for example, a 1-to-1000 identification takes only about 1.4 s in total, which also meets real-time requirements.

Table 2. Execution time.

Step                            Time (ms)
Image acquisition               <1000
ROI extraction                  63
Liveness detection              0.67
Palmprint feature extraction    36
Palmvein feature extraction     21
Palmprint matching              0.10
Palmvein matching               0.19

5. Conclusion

In this paper, we designed and developed an online palmprint verification system that fuses palmvein information. To improve the anti-spoofing ability of the system, a liveness detection algorithm based on the analysis of image brightness and texture was proposed, and the experimental results show the effectiveness of the proposed method. Because palmprint and palmvein carry very different information, even simple sum fusion performs significantly better than either trait alone, and further improvement is achieved by the proposed dynamic fusion scheme. Through experiments on a large database, the system is shown to verify a person within 1.2 s with an EER of only about 0.0158%.

In summary, we conclude that the fusion of palmprint and palmvein is a good way to build an accurate and robust personal verification system. For further improvement of the system we will focus on three directions: (1) investigating image- and feature-level fusion schemes; (2) improving the palmvein image quality; and (3) collecting more fake palm samples.

Acknowledgements

The work is partially supported by the GRF fund from the HKSAR Government (PolyU 5351/08E), the central fund from The Hong Kong Polytechnic University, the Key Laboratory of Network Oriented Intelligent Computation (Shenzhen), the Science Foundation of Shenzhen City (CXQ2008019), and the Natural Science Foundation of China (NSFC) (Nos. 60620160097 and 60803090).

References

Bao, P., Zhang, L., & Wu, X. L. (2005). Canny edge detection enhancement by scale multiplication. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1485–1490.
Connie, T., Jin, A., Ong, M., & Ling, D. (2005). An automated palmprint recognition system. Image and Vision Computing, 23, 501–515.
Cross, J. M., & Smith, C. L. (1995). Thermographic imaging of the subcutaneous vascular network of the back of the hand for biometric identification. In Proceedings of the IEEE 29th international Carnahan conference on security technology (pp. 20–35).
Daugman, J. (2003). The importance of being random: Statistical principles of iris recognition. Pattern Recognition, 36, 279–291.
Han, C., Cheng, H., Lin, C., & Fan, K. (2003). Personal authentication using palm-print features. Pattern Recognition, 36, 371–381.
Hao, Y., Sun, Z., & Tan, T. (2007). Comparative studies on multispectral palm image fusion for biometrics. In Asian conference on computer vision (pp. 12–21).
Haralick, R. M., Shanmugam, K., & Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, 610–621.
Hennings-Yeomans, P. H., Kumar, B. V. K., & Savvides, M. (2007). Palmprint classification using multiple advanced correlation filters and palm-specific segmentation. IEEE Transactions on Information Forensics and Security, 2, 613–622.
Hoover, A., Kouznetsova, V., & Goldbaum, M. (2000). Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging, 19, 203–210.
Hu, D., Feng, G., & Zhou, Z. (2007). Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition. Pattern Recognition, 40, 339–342.
Jain, A., Bolle, R., & Pankanti, S. (Eds.). (1999). Biometrics: Personal identification in networked society. Boston: Kluwer Academic.
Kong, A., & Zhang, D. (2004). Competitive coding scheme for palmprint verification. In International conference on pattern recognition (pp. 520–523).
Kong, A., Zhang, D., & Kamel, M. (2009). A survey of palmprint recognition. Pattern Recognition, 42, 1408–1418.
Kono, M., Ueki, H., & Umemura, S.-I. (2002). Near-infrared finger vein patterns for personal identification. Applied Optics, 41, 7429–7436.
Kumar, A., & Zhang, D. (2005). Personal authentication using multiple palmprint representation. Pattern Recognition, 38, 1695–1704.
Lee, Y., & Hwang, K. (2002). Skin thickness of Korean adults. Surgical and Radiologic Anatomy, 24, 183–189.
Lin, C. L., & Fan, K. C. (2005). Biometric verification using thermal images of palm-dorsa vein patterns. IEEE Transactions on Circuits and Systems for Video Technology, 14, 58–65.
Macgregor, P., & Welfold, R. (1991). Veincheck: Imaging for security and personnel identification. Advanced Imaging, 6, 52–56.
Parthasaradhi, S. T. V., Derakhshani, R., Hornak, L. A., & Schuckers, S. A. C. (2005). Time-series detection of perspiration as a liveness test in fingerprint devices. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 35, 335–343.
PolyU Palmprint Database (2006). <http://www.comp.polyu.edu.hk/~biometrics>.
Ribaric, S., & Fratric, I. (2005). A biometric identification system based on eigenpalm and eigenfinger features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1698–1709.
Ross, A., Nandakumar, K., & Jain, A. K. (2006). Handbook of multibiometrics. Springer.
Schuckers, S. A. C. (2002). Spoofing and anti-spoofing measures. Information Security Technical Report, 7, 56–62.
Socolinsky, D., Wolff, L., Neuheisel, J., & Eveland, C. (2001). Illumination invariant face recognition using thermal infrared imagery. In Proceedings of the IEEE computer society conference on computer vision and pattern recognition (pp. 527–534).
Su, C. (2009a). Palm extraction and identification. Expert Systems with Applications, 36, 1082–1091.
Su, C. (2009b). Palm-print recognition by matrix discriminator. Expert Systems with Applications, 36, 10259–10265.
Toh, K. A., Eng, H. L., Choo, Y. S., Cha, Y. L., Yau, W. Y., & Low, K. S. (2006). Identity verification through palm vein and crease texture. In International conference on biometrics (pp. 546–553).
Wang, L., Leedham, G., & Cho, D. S.-Y. (2008). Minutiae feature analysis for infrared hand vein pattern biometrics. Pattern Recognition, 41, 920–929.
Wang, J., Yau, W., Suwandy, A., & Sung, E. (2008). Person recognition by fusing palmprint and palm vein images based on "Laplacianpalm" representation. Pattern Recognition, 41, 1531–1544.
Watanabe, M., Endoh, T., Shiohara, M., & Sasaki, S. (2005). Palm vein authentication technology and its applications. In The biometric consortium conference. Available at <http://www.fujitsu.com/downloads/GLOBAL/labs/papers/palmvein.pdf>.
Wu, J., & Ye, S. (2009). Driver identification using finger-vein patterns with Radon transform and neural network. Expert Systems with Applications, 36, 5793–5799.
Wu, X., Zhang, D., & Wang, K. (2003). Fisherpalm based palmprint recognition. Pattern Recognition Letters, 24, 2829–2838.
Wu, X. Q., Zhang, D., Wang, K. Q., & Qi, N. (2007). Fusion of palmprint and iris for personal authentication. In Advanced data mining and applications (pp. 466–475).
Yao, Y. F., Jing, X. Y., & Wong, H. S. (2007). Face and palmprint feature level fusion for single sample biometrics recognition. Neurocomputing, 70, 1582–1586.
Zhang, Y. B., Li, Q., You, J., & Bhattacharya, P. (2007). Palm vein extraction and matching for personal authentication. In 9th international conference VISUAL (pp. 154–164).
Zhang, D., Kong, W., You, J., & Wong, M. (2003). Online palmprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25, 1041–1050.
Zharov, V. P., Ferguson, S., Eidt, J. F., Howard, P. C., Fink, L. M., & Waner, M. (2004). Infrared imaging of subcutaneous veins. Lasers in Surgery and Medicine, 34, 56–61.
Zuo, W., Yue, F., Wang, K., & Zhang, D. (2008). Multiscale competitive code for efficient palmprint recognition. In International conference on pattern recognition (pp. 1–4).