Face Recognition using Principal Component Analysis, Eigenface and Neural Network

Mayank Agarwal, Student Member IEEE, Jaypee Institute of Information Technology University, Noida, India, [email protected]
Nikunj Jain, Student, Jaypee Institute of Information Technology University, Noida, India, [email protected]
Mr. Manish Kumar, Sr. Lecturer (ECE), Jaypee Institute of Information Technology University, Noida, India, [email protected]
Himanshu Agrawal, Student Member IEEE, Jaypee Institute of Information Technology University, Noida, India, [email protected]

Abstract - A face is a complex, multidimensional visual model, and developing a computational model for face recognition is difficult. This paper presents a methodology for face recognition based on the information theory approach of coding and decoding the face image. The proposed methodology combines two stages: feature extraction using principal component analysis (PCA) and recognition using a feed-forward back-propagation neural network. The algorithm has been tested on 400 images (40 classes) from the Olivetti and Oracle Research Laboratory (ORL) face database, and the recognition score for the test set was calculated over nearly all variants of the feature extraction step. The tests gave a recognition rate of 97.018%.

Keywords: Face recognition, Principal component analysis (PCA), Artificial neural network (ANN), Eigenvector, Eigenface.

I. INTRODUCTION

The face is the primary focus of attention in society, playing a major role in conveying identity and emotion. Although the ability to infer intelligence or character from facial appearance is suspect, the human ability to recognize faces is remarkable. A human can recognize thousands of faces learned throughout a lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards or changes in hair style.

Face recognition has become an important issue in many applications such as security systems, credit card verification and criminal identification. Even the ability to merely detect faces, as opposed to recognizing them, can be important. Although it is clear that people are good at face recognition, it is not at all obvious how faces are encoded or decoded by the human brain. Human face recognition has been studied for more than twenty years. Developing a computational model of face recognition is difficult because faces are complex, multidimensional visual stimuli. Face recognition is therefore a high-level computer vision task in which many early vision techniques can be involved. The first step in face identification is the extraction of relevant features from facial images. A major challenge is how to quantize facial features so that a computer can recognize a face from a given set of features. Investigations by numerous researchers over the past several years indicate that certain facial characteristics are used by human beings to identify faces.
II. RELATED WORK
There are two basic approaches to face recognition. The first is based on extracting feature vectors from the basic parts of a face, such as the eyes, nose, mouth and chin, with the help of deformable templates and extensive mathematics. Key information from these basic parts is then gathered and converted into a feature vector. Yuille and Cohen [1] used deformable templates for contour extraction of face images.
The second approach is based on information theory concepts, namely the principal component analysis method. In this approach, the information that best describes a face is derived from the entire face image. Based on the Karhunen-Loeve expansion in pattern recognition, Kirby and Sirovich [5], [6] showed that any particular face can be represented in terms of a best coordinate system termed "eigenfaces". These are the eigenfunctions of the average covariance of the ensemble of faces. Later, Turk and Pentland [7] proposed a face recognition method based on the eigenfaces approach.

This paper proposes an unsupervised pattern recognition scheme that is independent of excessive geometry and computation. The recognition system is implemented using eigenfaces, PCA and an ANN. Principal component analysis for face recognition follows the information theory approach, in which the relevant information in a face image is extracted as efficiently as possible. An artificial neural network is then used for classification; the neural network is chosen for its ability to learn from observed data.

III. PROPOSED TECHNIQUE
The proposed technique is the coding and decoding of face images, emphasizing the significant local and global features. In the language of information theory, the relevant information in a face image is extracted, encoded and then compared with a database of models. The proposed method is independent of any judgment of features (open/closed eyes, different facial expressions, with and without glasses). The face recognition system is organized as follows.

A. Preprocessing and Face Library Formation

Image size normalization, histogram equalization and conversion to grayscale are used to preprocess the images. This module automatically reduces every face image to X x Y pixels (based on the user's request) and can redistribute the intensity of the face images (histogram equalization) in order to improve face recognition performance. Face images are stored in a face library in the system. Every action, such as training-set or eigenface formation, is performed on this face library. The face library is further divided into two sets: a training dataset (60% of each individual's images) and a testing dataset (the remaining 40%). The process is described in Fig. 1, and a minimal code sketch of this step follows the figure.

B. Get the Face Descriptor Using Eigenfaces

The face library entries are normalized. Eigenfaces are calculated from the training set and stored. An individual face can be represented exactly in terms of a linear combination of eigenfaces. The face can also be approximated using only the best M eigenfaces, i.e. those with the largest eigenvalues, which account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace, the "face space" of all possible images. The eigenfaces are calculated with the PCA algorithm [5], [8]. This involves computing the average face (φ) in the face space and then computing each face's difference from the average. These differences are used to compute a covariance matrix (C) for the dataset. The covariance between two sets of data reveals how much the sets correlate. Using PCA, the number of eigenvectors of the covariance matrix that must be considered can be reduced from N (the number of pixels in an image) to the number of images in the training dataset. Only the M eigenfaces (u_k) with the highest eigenvalues are actually needed to produce a complete basis for the face space. A new face image (Γ) is transformed into its eigenface components (projected onto the "face space") by a simple operation,
w_k = u_k^T (Γ − φ),    k = 1, 2, ..., M′

The weights w_k form a feature vector (face descriptor):

Ω^T = [w_1, w_2, ..., w_M′]
Ω^T describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The feature vector (face descriptor) is then used in a standard pattern recognition algorithm.
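To make this step concrete, the following is a minimal NumPy sketch of the eigenface computation and the projection above. It is not the authors' MATLAB implementation: it assumes the preprocessed training faces are already flattened into the rows of an array, uses the standard small-matrix trick (eigenvectors of the M x M matrix A A^T instead of the N x N pixel covariance), and names such as train_images and num_eigenfaces are illustrative.

import numpy as np

def compute_eigenfaces(train_images, num_eigenfaces):
    """train_images: (M, N) array, one flattened grayscale face per row.
    Returns the average face and the top `num_eigenfaces` eigenfaces (each of length N)."""
    mean_face = train_images.mean(axis=0)            # average face (phi)
    A = train_images - mean_face                     # difference images, shape (M, N)

    # Eigenvectors of the small M x M matrix A A^T instead of the huge N x N covariance.
    small_cov = A @ A.T                              # shape (M, M)
    eigvals, eigvecs = np.linalg.eigh(small_cov)     # eigenvalues in ascending order

    # Keep the eigenvectors with the largest eigenvalues and map them back to pixel space.
    order = np.argsort(eigvals)[::-1][:num_eigenfaces]
    eigenfaces = (A.T @ eigvecs[:, order]).T         # shape (num_eigenfaces, N)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def face_descriptor(image, mean_face, eigenfaces):
    """Project a flattened face onto the face space: w_k = u_k^T (Gamma - phi)."""
    return eigenfaces @ (image - mean_face)          # feature vector Omega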
Fig. 1 – Face Library Formation and getting face descriptor
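As a complement to Fig. 1, the preprocessing and face-library step of Section III-A might look roughly as follows. This is only a sketch of one possible pipeline, not the original code: it assumes OpenCV is available for grayscale reading, resizing and histogram equalization, that the ORL images sit in per-subject directories s1 ... s40, and the paths and function names are hypothetical.

import glob
import os

import cv2          # OpenCV, assumed available for grayscale I/O, resizing and equalization
import numpy as np

IMG_SIZE = (46, 56)  # (width, height) -> 56 x 46 pixels, as used in the experiments

def preprocess(path):
    """Read one face image, convert to grayscale, resize and equalize its histogram."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, IMG_SIZE)
    img = cv2.equalizeHist(img)
    return img.astype(np.float64).ravel()            # flatten to a 2576-dimensional vector

def build_face_library(root="orl_faces", train_per_subject=6):
    """Split each subject's images into training (first six) and test (remaining four) sets.
    The directory layout (s1 ... s40, ten .pgm files each) mirrors the ORL database."""
    train_x, train_y, test_x, test_y = [], [], [], []
    for label, subject_dir in enumerate(sorted(glob.glob(os.path.join(root, "s*")))):
        images = sorted(glob.glob(os.path.join(subject_dir, "*.pgm")))
        for i, path in enumerate(images):
            vec = preprocess(path)
            if i < train_per_subject:
                train_x.append(vec)
                train_y.append(label)
            else:
                test_x.append(vec)
                test_y.append(label)
    return (np.array(train_x), np.array(train_y),
            np.array(test_x), np.array(test_y))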
In the end, one can obtain a good reconstruction of the image using only a few eigenfaces (M).

C. Training of Neural Networks

One ANN is used for each person in the database, and the face descriptors are used as inputs to train the networks [3]. During training of the ANNs, the face descriptors that belong to the same person are used as positive examples for that person's network (so that the network outputs 1) and as negative examples for the other networks (so that they output 0). Fig. 2 shows the schematic diagram of network training.

Fig. 2 – Training of Neural Network

D. Simulation of ANN for Recognition

A new test image is taken for recognition (from the test dataset) and its face descriptor is calculated from the M eigenfaces found before. This new descriptor is given as an input to every network, and the networks are simulated. The simulated outputs are compared, and if the maximum output exceeds a predefined threshold level, the new face is confirmed to belong to the person whose network produced that maximum output (Fig. 3).

Fig. 3 – Testing of Neural Network

IV. EXPERIMENT

The proposed method is tested on the ORL face database. The database has more than one image of each individual's face, taken under different conditions (expression, illumination, etc.). There are ten different images of each of 40 distinct subjects. Each image has a size of 112 x 92 pixels with 256 grey levels. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). A preview image of the Database of Faces is available (Fig. 4). The original 112 x 92 pixel images have been resized to 56 x 46, so that the input space has dimension 2576.

Eigenfaces are calculated using the PCA algorithm, and the experiment is performed by varying the number of eigenfaces used in the face space to calculate the face descriptors of the images. The number of networks used equals the number of subjects in the database. The initial parameters of the neural network used in the experiment are given below:

- Type: feed-forward back-propagation network
- Number of layers: 3 (input, one hidden, output layer)
- Number of neurons in the input layer: number of eigenfaces used to describe the faces
- Number of neurons in the hidden layer: 10
- Number of neurons in the output layer: 1
- Transfer function of the i-th layer: tansig
- Training function: trainlm
- Number of epochs used in training: 100
- Back-propagation weight/bias learning function: learngdm
- Performance function: mse

Since the number of networks equals the number of people in the database, forty networks, one for each person, were created. Among the ten images per subject, the first six are used for training the neural networks; the networks are then tested and their properties updated. The trained networks are used later for recognition.
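A rough Python analogue of the per-person training scheme of Sections III-C and IV is sketched below. The paper itself uses MATLAB's Neural Network Toolbox (tansig hidden layer, trainlm, learngdm); here scikit-learn's MLPRegressor merely stands in for it, with one small network per subject (10 hidden units, a single output) trained on that subject's descriptors as positive examples (target 1) and all other descriptors as negative examples (target 0). The function names and the choice of solver are assumptions, not the original configuration.

import numpy as np
from sklearn.neural_network import MLPRegressor   # stand-in for the MATLAB toolbox network

def train_person_networks(descriptors, labels, num_subjects, hidden_units=10, epochs=100):
    """Train one small feed-forward network per subject.
    descriptors: (num_train, M) face descriptors from the eigenface projection.
    labels: subject index of each training descriptor."""
    networks = []
    for person in range(num_subjects):
        targets = (labels == person).astype(float)   # 1 for this person, 0 for everyone else
        net = MLPRegressor(hidden_layer_sizes=(hidden_units,),
                           activation="tanh",        # analogue of the tansig transfer function
                           solver="adam",            # trainlm (Levenberg-Marquardt) is not available here
                           max_iter=epochs)
        net.fit(descriptors, targets)
        networks.append(net)
    return networks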
For testing the whole database, the faces used in training, testing and recognition are varied, and the recognition performance is reported for the whole database.
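Recognition (Section III-D) and the estimation of the recognition rate can then be sketched as follows; this builds on the illustrative sketches above, and the threshold value is an assumption rather than a figure taken from the paper.

import numpy as np

def recognize(descriptor, networks, threshold=0.5):
    """Simulate all per-person networks and return the best-matching subject index,
    or -1 if no network's output exceeds the threshold."""
    outputs = np.array([net.predict(descriptor.reshape(1, -1))[0] for net in networks])
    best = int(np.argmax(outputs))
    return best if outputs[best] > threshold else -1

def recognition_rate(test_descriptors, test_labels, networks, threshold=0.5):
    """Fraction of test faces assigned to the correct subject."""
    predictions = [recognize(d, networks, threshold) for d in test_descriptors]
    return float(np.mean(np.array(predictions) == test_labels))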
V. ANALYSIS
The proposed technique is analyzed by varying the number of eigenfaces used for feature extraction. The recognition performance is shown in Table I.
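Tying the earlier sketches together, the analysis behind Table I corresponds roughly to a loop of the following shape; the helper functions refer to the illustrative sketches given earlier in this paper, so this fragment is only meaningful alongside them.

import numpy as np

# build_face_library, compute_eigenfaces, face_descriptor, train_person_networks and
# recognition_rate are the hypothetical helpers defined in the sketches above.

def analyze(eigenface_counts=(20, 30, 40, 50, 60, 70, 80, 90, 100), num_subjects=40):
    train_x, train_y, test_x, test_y = build_face_library()
    for m in eigenface_counts:
        mean_face, eigenfaces = compute_eigenfaces(train_x, m)
        train_d = np.array([face_descriptor(x, mean_face, eigenfaces) for x in train_x])
        test_d = np.array([face_descriptor(x, mean_face, eigenfaces) for x in test_x])
        networks = train_person_networks(train_d, train_y, num_subjects)
        rate = recognition_rate(test_d, test_y, networks)
        print(f"M = {m}: recognition rate = {rate:.3%}")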
The complete face recognition process is shown in Fig. 4.
The results of the proposed method are compared with two other techniques: (1) K-means [2] and (2) Fuzzy Ant with fuzzy C-means [2]. The comparison is tabulated in Table II.
Table I: Recognition score of face recognition using PCA and ANN (recognition rate in %).

No. of Eigenfaces | Result 1 | Result 2 | Result 3 | Average of Results 1-3
20  | 98.037 | 96.425 | 96.487 | 96.983
30  | 96.037 | 96.581 | 96.581 | 96.399
40  | 96.506 | 96.45  | 97.012 | 96.656
50  | 96.525 | 97.231 | 97.3   | 97.018
60  | 94.006 | 94.987 | 95.587 | 94.860
70  | 94.643 | 96.031 | 95.556 | 95.410
80  | 94.950 | 94.837 | 95.212 | 95
90  | 93.356 | 94.431 | 93.439 | 93.742
100 | 95.250 | 93.993 | 93.893 | 94.379

Table II: Comparison of results.

Method                       | Recognition Rate (%)
K-means                      | 86.75
Fuzzy Ant with fuzzy C-means | 94.82
Proposed                     | 97.018
VI. CONCLUSION

This paper presents a face recognition approach using PCA and neural network techniques. The results are compared with K-means and Fuzzy Ant with fuzzy C-means, and the proposed technique gives a better recognition rate than the other two. Table I shows the recognition rate obtained by varying the number of eigenfaces; the maximum recognition rate obtained for the whole dataset is 97.018%. Only the M eigenfaces with the highest eigenvalues are needed to produce a complete basis for the face space; as shown in Table I, the maximum recognition rate is obtained for M = 50.
Fig. 4 – A complete process of the PCA, eigenface and ANN based face recognition system
Table II shows the advantage of the proposed face recognition method over the K-means and Fuzzy Ant with fuzzy C-means based algorithms. The eigenface method is very sensitive to head orientation, and most mismatches occur for images with large head rotations. By choosing PCA as the feature selection technique (for the set of images from the ORL Database of Faces), the space dimension is reduced from 2576 to 50 (the number of selected eigenfaces with the highest eigenvalues).
REFERENCES

[1] Yuille, A. L., Cohen, D. S., and Hallinan, P. W., "Feature extraction from faces using deformable templates", Proc. of CVPR, 1989.
[2] S. Makdee, C. Kimpan, S. Pansang, "Invariant range image multi-pose face recognition using Fuzzy ant algorithm and membership matching score", Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, 2007, pp. 252-256.
[3] Victor-Emil and Iuliana-Florentina, "Face Recognition using a fuzzy-Gaussian ANN", IEEE 2002 Proceedings, Aug. 2002, pp. 361-368.
[4] Howard Demuth, Mark Beale, Martin Hagan, "Neural Network Toolbox".
[5] Kirby, M., and Sirovich, L., "Application of the Karhunen-Loeve procedure for the characterization of human faces", IEEE PAMI, Vol. 12, pp. 103-108, 1990.
[6] Sirovich, L., and Kirby, M., "Low-dimensional procedure for the characterization of human faces", J. Opt. Soc. Am. A, Vol. 4, No. 3, pp. 519-524, 1987.
[7] Turk, M., and Pentland, A., "Eigenfaces for recognition", Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86, 1991.
[8] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision, Imperial College Press, London, 2000.
[9] Manjunath, B. S., Chellappa, R., and Malsburg, C., "A feature based approach to face recognition", Trans. of IEEE, pp. 373-378, 1992.
[10] http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, for downloading the ORL database.