Multimodal biometric approaches are proposed that integrate two-dimensional face appearance with ear appearance, three-dimensional face shape, and the pattern of heat emitted from the face. A single-source biometric recognition method, such as face recognition, has been shown to achieve a higher identification rate when other biometric sources are incorporated. The investigation of multi-modal biometrics involves a variety of sensors, each capturing a different aspect of human facial features: appearance represents surface brightness under reflected illumination, shape data represent depth values defined at points on an object, and thermal imagery represents the pattern of heat emitted from an object. The results of the multi-biometric approach presented in this investigation support the conclusion that the path to higher accuracy and robustness in biometrics lies in combining multiple biometrics rather than in perfecting the sensor and algorithm for a single biometric.

A new evaluation scheme is designed to assess the improvement gained by multiple biometrics. Because multi-modal recognition employs multiple samples of facial data, the observed improvement might be attributable simply to using multiple samples rather than to the different modalities themselves. The evaluation scheme therefore separates the recognition accuracy gained by the multiple-modality approach from that gained by the multiple-sample approach.

In addition, a new algorithm for 3D face recognition is proposed to handle expression variation. It uses a surface-registration-based technique for 3D face recognition. We evaluate and compare the performance of 3D face recognition approaches based on principal component analysis (PCA) and on the iterative closest point (ICP) algorithm. The proposed 3D face recognition method is fully automatic, including the initialization of the 3D matching. The evaluation results show that the proposed algorithm substantially improves performance under varying facial expression.
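A common way to combine modalities of this kind is score-level fusion: each matcher produces a similarity score per gallery subject, the scores are normalized to a common range, and the normalized scores are averaged. The following is a minimal, hypothetical sketch of min-max normalization with sum-rule fusion; the matcher names and score values are illustrative and not drawn from the evaluation reported here.

```python
# Hedged sketch: score-level fusion across modalities (sum rule after
# min-max normalization). All names and numbers are illustrative.
import numpy as np

def minmax_normalize(scores):
    """Map raw matcher scores onto [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(modality_scores):
    """Sum-rule fusion: average the normalized scores per gallery subject."""
    normalized = [minmax_normalize(s) for s in modality_scores]
    return np.mean(normalized, axis=0)

# Similarity scores of one probe against a four-subject gallery,
# from three hypothetical matchers (2D face, 3D shape, infrared).
face_2d  = [0.62, 0.91, 0.40, 0.55]
shape_3d = [0.58, 0.85, 0.47, 0.60]
infrared = [0.50, 0.77, 0.35, 0.44]

fused = fuse_scores([face_2d, shape_3d, infrared])
best = int(np.argmax(fused))  # rank-1 identification decision
```

The sum rule is only one of several fusion strategies; it is popular because it is robust to noise in any single matcher's scores.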
This is the first study to compare the PCA and ICP approaches to 3D face recognition, and the first to propose a multiple-region approach to coping with expression variation in 3D face recognition. The proposed method outperforms 3D eigenfaces on 3D face scans acquired at different times, both with and without expression changes.
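The core of ICP-based matching is rigid registration: the probe point cloud is iteratively aligned to a gallery scan, and the residual distance after alignment serves as a dissimilarity score. Below is a generic, simplified ICP sketch, not the paper's implementation: it uses brute-force nearest neighbors and the Kabsch (SVD) solution for the rigid transform, with the final RMS point-to-point distance as the score.

```python
# Hedged sketch of ICP-based 3D face matching. A small generic
# illustration, not the algorithm evaluated in this study.
import numpy as np

def nearest_neighbors(pts, gallery):
    """Brute-force nearest gallery point for each probe point."""
    d2 = ((pts[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return np.sqrt(d2[np.arange(len(pts)), idx]), idx

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp_distance(probe, gallery, iterations=20):
    """Align probe to gallery with ICP; return the final RMS point distance."""
    pts = probe.copy()
    for _ in range(iterations):
        _, idx = nearest_neighbors(pts, gallery)
        R, t = best_rigid_transform(pts, gallery[idx])
        pts = pts @ R.T + t
    dists, _ = nearest_neighbors(pts, gallery)
    return float(np.sqrt((dists ** 2).mean()))
```

In a multiple-region variant, `icp_distance` would be computed separately on expression-insensitive subregions of the face (such as the nose area) and the per-region scores combined, which is what makes registration-based matching more tolerant of expression change.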