
Face Recognition Experiments with Random Projection

Navin Goel^a, George Bebis^a, and Ara Nefian^b

^a Computer Vision Laboratory, University of Nevada, Reno
^b Future Platforms Department, Intel Corporation, Santa Clara

ABSTRACT

There has been a strong trend lately in face processing research away from geometric models and towards appearance models. Appearance-based methods employ dimensionality reduction to represent faces more compactly in a low-dimensional subspace, which is found by optimizing certain criteria. The most popular appearance-based method is the method of eigenfaces, which uses Principal Component Analysis (PCA) to represent faces in a low-dimensional subspace spanned by the eigenvectors of the covariance matrix of the data corresponding to the largest eigenvalues (i.e., the directions of maximum variance). Recently, Random Projection (RP) has emerged as a powerful method for dimensionality reduction. It represents a computationally simple and efficient method that preserves the structure of the data without introducing significant distortion. Despite its simplicity, RP has promising theoretical properties that make it an attractive tool for dimensionality reduction. Our focus in this paper is on investigating the feasibility of RP for face recognition. In this context, we have performed a large number of experiments using three popular face databases and comparisons with PCA. Our experimental results illustrate that although RP represents faces in a random, low-dimensional subspace, its overall performance is comparable to that of PCA while having lower computational requirements and being data independent.

Keywords: Face Recognition, Random Projection, Principal Component Analysis

1. INTRODUCTION

Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance.1 These models represent faces as points in high-dimensional image spaces and employ dimensionality reduction to find a more meaningful representation, thereby addressing the issue of the "curse of dimensionality".2 The key observation is that although face images can be regarded as points in a high-dimensional space, they often lie on a manifold (i.e., subspace) of much lower dimensionality, embedded in the high-dimensional image space.3 The main issue is how to properly define and determine a low-dimensional subspace of face appearance within a high-dimensional image space.

Dimensionality reduction techniques using linear transformations have been very popular for determining the intrinsic dimensionality of the manifold as well as for extracting its principal directions (i.e., basis vectors). The most prominent method in this category is PCA.2 PCA determines the basis vectors by finding the directions of maximum variance in the data, and it is optimal in the sense that it minimizes the error between the original image and the one reconstructed from its low-dimensional representation. PCA has been very popular in face recognition, especially with the development of the method of eigenfaces.4 Its success has triggered significant research in the area of face recognition, and many powerful dimensionality reduction techniques (e.g., Probabilistic PCA, Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), Local Feature Analysis (LFA), Kernel PCA) have been proposed for finding appropriate low-dimensional face representations.1

Recently, RP has emerged as a powerful dimensionality reduction method.5 Its most important property is that it is a general data reduction method.
In RP, the original high-dimensional data is projected onto a low-dimensional subspace using a random matrix whose columns have unit length. In contrast to other methods, such as PCA, that compute a low-dimensional subspace by optimizing certain criteria (e.g., PCA finds a subspace that maximizes the variance in the data), RP does not use such criteria and is therefore data independent. Moreover, it represents a computationally simple and efficient method that preserves the structure of the data without introducing significant distortion. For example, there exist a number of theoretical results supporting that RP approximately preserves pairwise distances of points in Euclidean space,6 volumes and affine distances,7 and the structure of data (e.g., clustering).5 RP has been applied to various types of problems, yielding promising results (see Section 2.2 for a brief review).

Email: (goel,bebis)@cs.unr.edu, [email protected]

In this paper, our goal is to investigate the feasibility of RP for face recognition. Specifically, we have evaluated RP for face recognition under various conditions and assumptions, and have compared its performance to PCA. Our results indicate that RP compares quite favorably with PCA while, at the same time, being simpler, more computationally efficient, and data independent. Our results are consistent with previous studies comparing RP with PCA,5, 6, 8, 9 indicating that RP might be an attractive alternative for dimensionality reduction in certain face recognition applications.

The rest of the paper is organized as follows: in Section 2 we review RP and present a brief overview of its properties and applications. In Section 3, we discuss using RP for face recognition and present the main steps of such an approach. Section 4 presents our experiments and results. Finally, Section 5 contains our conclusions and directions for future research.
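For concreteness, the eigenface-style PCA projection discussed above can be sketched as follows. This is a minimal NumPy illustration with synthetic stand-in data, not the implementation evaluated in the paper; the dataset sizes and the number of components k are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a face dataset: M images of p pixels each.
M, p, k = 40, 256, 10
X = rng.normal(size=(M, p))

# Center the data, since PCA operates on deviations from the mean face.
mean_face = X.mean(axis=0)
Xc = X - mean_face

# The eigenvectors of the covariance matrix are the right singular
# vectors of the centered data; SVD avoids forming the p x p covariance
# matrix explicitly.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:k]              # k directions of maximum variance

# Low-dimensional representation and reconstruction.
codes = Xc @ eigenfaces.T        # shape (M, k)
recon = codes @ eigenfaces + mean_face

# Relative reconstruction error; PCA minimizes this among all
# k-dimensional linear projections, and it shrinks as k grows.
err = np.linalg.norm(X - recon) / np.linalg.norm(X)
```

Recognition with eigenfaces then amounts to comparing the k-dimensional codes (e.g., by nearest neighbor) rather than the raw pixel vectors.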

2. RANDOM PROJECTION

RP is a simple yet powerful dimensionality reduction technique that uses random projection matrices to project data into low-dimensional spaces. We summarize below the key points of RP and present a brief review of its applications.
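As a concrete illustration of such a projection, the sketch below builds a random matrix with unit-length columns and applies it to synthetic data. The sqrt(p/d) scaling is a common convention (an assumption here, not stated in this excerpt) under which Euclidean distances are preserved in expectation, in the spirit of the Johnson-Lindenstrauss lemma.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: M points in p dimensions, projected down to d << p.
M, p, d = 50, 1000, 100
X = rng.normal(size=(M, p))

# Random matrix with Gaussian entries; normalize each column to unit length.
R = rng.normal(size=(p, d))
R /= np.linalg.norm(R, axis=0)

# Project the data; the sqrt(p/d) factor rescales so that squared
# Euclidean distances are preserved in expectation.
Xp = np.sqrt(p / d) * (X @ R)

# Compare one pairwise distance before and after projection; for
# moderate d the ratio concentrates near 1.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Xp[0] - Xp[1])
ratio = d_proj / d_orig
```

Note that nothing about R depends on X: the same matrix works for any dataset of dimension p, which is the data-independence property discussed above.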

2.1. Background

Let us suppose that we are given a set of vectors Γi, i = 1, 2, ..., M, in a p-dimensional space. Then, RP transforms Γi to a lower dimension d, with d ≪ p