Face Recognition

Programming Project

Haofu Liao, BSEE
June 23, 2013

Department of Electrical and Computer Engineering, Northeastern University
1. How to build the PCA Mex Function

1.1 Basic Information

The table below summarizes how the mex function is built.

Operating System       OS X
IDE                    Xcode
Programming Language   C++
PCA package            ALGLIB
Source files           Mex_PCA.cpp, Mex_PCA.h
Object file            Mex_PCA.mexmaci64

The mex function is built on Mac OS X with Xcode. The C++ linear algebra package that implements PCA is ALGLIB, which can be downloaded at http://www.alglib.net/download.php. Mex_PCA.cpp and Mex_PCA.h are the source files used to build the object file; with these two files, Xcode builds the Mex_PCA.mexmaci64 object file automatically. Users can call Mex_PCA.mexmaci64 just like a .m function in MATLAB. For more details about how to use the Mex_PCA.mexmaci64 file in MATLAB, see ~local/PCA/Source Code/read me before use.txt.

1.2 ALGLIB package and PCA subpackage

ALGLIB is a cross-platform numerical analysis and data processing library. Its C++ user manual, at http://www.alglib.net/translator/man/manual.cpp.html, gives details on how to use the package. Users can access the PCA subpackage by including the DataAnalysis subpackage in their code. The function that actually performs the PCA algorithm is

void pcabuildbasis(const real_2d_array &x, const ae_int_t npoints, const ae_int_t nvars, ae_int_t &info, real_1d_array &s2, real_2d_array &v)

Here  const real_2d_array &x and  real_2d_array &v are  two   important  parameters.  X  is  the  input  matrix,  Rows  of  X  correspond  to   observations  and  columns  correspond  to  variables.  V  is  the  principal  component   coefficients  for  X.  For  more  information  about  other  parameters,  please  see  the   user  manual.   Notice  that  X  has  a  real_2d_array  type.  Which  means  user  need  to  build  an   interface  to  make  sure  all  the  data  read  from  MATLAB  can  be  used  in  the  PCA   package.  We  will  talk  more  about  the  interface  later.  

1.3 Workflow of the mex function

The workflow of the mex function is as follows:

1. Read data from MATLAB.
2. Convert MATLAB data types to PCA package data types (the MATLAB interface).
3. Do the principal component analysis with the ALGLIB PCA package.
4. Convert PCA package data types back to MATLAB data types (the PCA package interface).
5. Output the PCA results to MATLAB.

As these steps show, the mex function works as an interface that transfers data from MATLAB to the PCA package and then back to MATLAB.

1.4 MATLAB interface and PCA package interface

The MATLAB interface has the syntax below:

#include "mex.h"
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])

Here, mexFunction is the name of the gateway function in C/C++ that every MEX-file requires. When you invoke the PCA mex function, MATLAB automatically seeds nlhs, plhs, nrhs, and prhs with the caller's information: nlhs is the number of expected output mxArrays, plhs is the array of pointers to the expected output mxArrays, nrhs is the number of input mxArrays, and prhs is the array of pointers to the input mxArrays.

As mentioned in 1.2, the PCA package requires a real_2d_array matrix, which, according to the user manual, can only be initialized from char* data. Hence, we need to write an interface that converts between the two data types. The function that implements the conversion is

char* mat2char(const mwSize *x1, double *x2)

For more details about this function, see ~local/PCA/Source Code/Mex_PCA/Mex_PCA.
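To illustrate the idea, here is a minimal standalone analogue of that conversion. This is only a sketch: the real mat2char takes MATLAB's mwSize*/double* arguments, while this hypothetical matrixToAlglibString just shows how a column-major double array can be serialized into the "[[...],[...]]" string form that real_2d_array accepts.

```cpp
#include <cstdio>
#include <string>

// Hypothetical standalone analogue of the report's mat2char interface:
// serialize a column-major rows x cols double matrix into the
// "[[a,b],[c,d]]" string form used to initialize ALGLIB's real_2d_array.
std::string matrixToAlglibString(const double *data, int rows, int cols) {
    std::string out = "[";
    for (int r = 0; r < rows; ++r) {
        out += (r == 0) ? "[" : ",[";
        for (int c = 0; c < cols; ++c) {
            char buf[32];
            // MATLAB stores matrices column-major: element (r,c) is data[c*rows + r].
            std::snprintf(buf, sizeof(buf), "%g", data[c * rows + r]);
            if (c > 0) out += ",";
            out += buf;
        }
        out += "]";
    }
    out += "]";
    return out;
}
```

Because MATLAB stores matrices column-major, element (r,c) lives at data[c*rows + r]; getting this indexing wrong would silently transpose the input handed to the PCA package.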

2. Comparison between PCA functions

2.1 PCA functions

Name       Description                                                Performance
pca(svd)   The built-in MATLAB PCA function using singular value      0.07 sec
           decomposition.
pca(evd)   The built-in MATLAB PCA function using eigenvalue          388.7 sec
           decomposition.
Mex_PCA    The mex function using the ALGLIB PCA package; it also     17.9 sec
           uses evd to perform PCA.
fastPCA    A PCA function using the algorithm mentioned in [1].       0.03 sec

From the table above, we can see that four functions implement the PCA algorithm. The first two are built-in MATLAB functions; they implement PCA using svd and evd respectively. The last two functions were created by myself. Mex_PCA is built from C++ code and implements PCA through the ALGLIB PCA package. fastPCA implements an algorithm other than plain evd and svd, so in theory it should perform better.
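The speedup of fastPCA is consistent with the key trick of the algorithm in [1]: with only 10 training images, one never needs to diagonalize the full covariance matrix. A worked sketch of the argument:

```latex
% A is the 10304 x 10 matrix whose columns are the mean-subtracted images.
% The full covariance matrix C = A A^T is 10304 x 10304, yet it has at most
% 10 nonzero eigenvalues, so [1] diagonalizes the small 10 x 10 matrix L:
L = A^{\mathsf{T}} A \in \mathbb{R}^{10 \times 10},
\qquad L v_i = \mu_i v_i .
% Premultiplying by A shows that u_i = A v_i is an eigenvector of C with
% the same eigenvalue:
C (A v_i) = A A^{\mathsf{T}} A v_i = A (L v_i) = \mu_i (A v_i).
```

Diagonalizing a 10×10 matrix instead of a 10304×10304 one explains why an approach along these lines can beat both built-in variants.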

2.2 Performance

Next, we test the performance of the four functions. We use a 10-by-10304 matrix that contains 10 faces as the input to each function. Then we run the functions one by one and record the time each consumes.

From the table in 2.1, we can see that fastPCA runs faster than the other functions, which suggests that the fastPCA algorithm is a good choice for our face recognition project. We also find that Mex_PCA beats the pca(evd) function. Since both use eigenvalue decomposition to do PCA, this means the C++ ALGLIB PCA package has better performance than the built-in MATLAB function.

3. Face Recognition

3.1 Eigenfaces

Our training images are from ~local/PCA/Source Code/att_faces/s1. They are 10 images from one person, each a 92×112-pixel training image. First, we transform each image into a 10304-dimensional vector and concatenate the vectors from the 10 images into a 10×10304 matrix. Then we perform a principal component analysis on this matrix to find the vectors that best account for the distribution of face images within the entire image space. (Here we use the fastPCA function by default because it has the best performance; in fact, all four functions can do the PCA, and their results are exactly the same.) These vectors define the subspace of face images, which we call “face space.” Each vector is of length 10304, describes a 92×112 image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as “eigenfaces.” Below are the eigenfaces we got from folder s1.

[Figure: the eigenfaces obtained from folder s1]

3.2 Projection

Once we have the eigenfaces, we can project a new face image onto the space that the eigenfaces define. The equations we use to get the projection are

𝛺 = 𝑈ᵀ(𝛤 − Ψ)
𝛷 = 𝑈𝛺

Here, 𝑈 is a 10304×9 matrix whose columns are the eigenfaces from the figure in 3.1. 𝛤 is the new face image. Ψ is the average 10304×1 face of the training images. 𝛺 is a 9×1 weight vector that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. 𝛷 is the 10304×1 projection. Below, two input images and their projections are shown.
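The two projection equations can be sketched in a few lines of C++. This is a toy-sized sketch with made-up 3-dimensional vectors and a single eigenface u, instead of the report's 10304-dimensional images and 9 eigenfaces; with more eigenfaces, the same two loops run once per column of U.

```cpp
#include <vector>

// Project an image onto a one-eigenface "face space":
//   Omega = u . (Gamma - Psi)   -- weight of the eigenface
//   Phi   = u * Omega           -- projection onto the face space
// u: eigenface (assumed unit length), gamma: input image, psi: average face.
std::vector<double> projectOntoFaceSpace(const std::vector<double> &u,
                                         const std::vector<double> &gamma,
                                         const std::vector<double> &psi) {
    double omega = 0.0;                 // Omega = u^T (gamma - psi)
    for (std::size_t i = 0; i < u.size(); ++i)
        omega += u[i] * (gamma[i] - psi[i]);
    std::vector<double> phi(u.size());  // Phi = u * Omega
    for (std::size_t i = 0; i < u.size(); ++i)
        phi[i] = u[i] * omega;
    return phi;
}
```

Note that the projection only keeps the component of the mean-adjusted image that lies along the eigenfaces, which is why training images (linear combinations of the eigenfaces) reconstruct almost perfectly while unseen faces do not.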

[Figure: two input images (left) and their projections (right)]

The upper figure is an original image from the training set and its corresponding projection. The projection and the input image are nearly the same (the difference is caused by the subtraction of the average image). This is reasonable because the training images are linear combinations of the eigenfaces. The bottom figure is a new face image and its projection. These two do not look the same, but we can see the eigenfaces trying their best to match the input image.

3.3 Face Recognition

We can use the input image and its projection to do face recognition. The equation we use to evaluate an input image is

𝜖² = ‖𝛷 − 𝛷_f‖²

Here, 𝛷 is the mean-adjusted input image, 𝛷 = 𝛤 − Ψ; 𝛷_f is the projection of the input image; and 𝜖 is the distance between the image and the face space. Hence, we can use 𝜖 to decide whether an input image is a face image: if 𝜖 is larger than some threshold 𝜃, we say it is not a face image; otherwise we say it is.
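The decision rule can be sketched as follows (a sketch with hypothetical toy vectors; the threshold theta must be chosen empirically, as the report does not fix a value):

```cpp
#include <cmath>
#include <vector>

// Face/non-face decision from the distance to face space:
//   epsilon = || phi - phiF ||
// phi: mean-adjusted input image, phiF: its projection onto the face space.
// Returns true when the image is close enough to the face space.
bool isFace(const std::vector<double> &phi,
            const std::vector<double> &phiF, double theta) {
    double sq = 0.0;
    for (std::size_t i = 0; i < phi.size(); ++i) {
        double d = phi[i] - phiF[i];
        sq += d * d;
    }
    double epsilon = std::sqrt(sq);  // distance from face space
    return epsilon <= theta;         // small distance -> likely a face
}
```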

 

[Figure: three input images (A, B, C) and their projections onto the face space]

To get more precise results, we include more face images in our training set; it now has 100 images from 10 folders. To reduce the computational complexity, we do not use all 99 eigenfaces but only the first 50 of them. The figure above shows three images and their projections onto the new face space defined by these 50 eigenfaces. The relative measures of distance from the face space are (A) 28.1, (B) 52.7, and (C) 81.3. Image (A) is in the original training set. Image (B) is from att_faces but not in the original training set. We can see that the images that contain faces have a closer distance to the face space. This property helps us determine which images contain a human face.

References

[1] M. Turk and A. Pentland, “Eigenfaces for Recognition”, Journal of Cognitive Neuroscience, March 1991.
[2] M. Turk and A. Pentland, “Face recognition using eigenfaces”, Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 1991.