A Half-Eye Wavelet Based Method for Iris Recognition
A. Poursaberi and B. N. Araabi
Control and Intelligent Processing Center of Excellence, University of Tehran, Iran
a.poursaberi@ece.ut.ac.ir and araabi@ut.ac.ir

Abstract
Iris detection is a crucial part of an iris recognition system. One of the main issues in iris segmentation is coping with occlusions caused by eyelids and eyelashes. In this paper, only the lower part of the iris is utilized for recognition. Wavelet-based texture features along with a mixed Hamming/harmonic mean distance classifier are used for identification. It is observed that relying on a smaller but more reliable part of the iris, though reducing the net amount of information, improves the overall performance. Experimental results on the CASIA database show that the method has a promising performance, with an accuracy of more than 99%.
1. Introduction
Biometrics is the science of recognizing a person on the basis of physical or behavioral characteristics. Biometrics relies on who you are: on one of any number of unique characteristics that you cannot lose or forget. Most biometric systems can be set to varying degrees of security, which gives more flexibility in determining access levels. Increasing the security of a biometric system, however, makes it more restrictive, resulting in an increased false rejection rate. Iris recognition is based on the visible qualities of the human iris (see Figure 1), which include rings, furrows, freckles, and the iris corona. Importantly, the iris is protected by the body's own mechanisms and cannot be modified without risk. The iris is therefore reputed to be the most accurate and reliable biometric for person identification [1] and has received extensive attention over the last decade. However, the iris also has some disadvantages for identification. Some parts of the iris are usually occluded by the eyelid and eyelashes when the image is captured at a distance. The boundary of the pupil is not always in the shape of
a circle. When we approximate the iris boundary as a circle, some parts of the pupil will be included in the localized iris region. All these factors affect the subsequent processing, because an improperly represented iris pattern will inevitably result in poor recognition performance. To address this problem, a new model for iris segmentation is proposed in this paper.
John Daugman [2], [3] used multiscale quadrature wavelets to extract the texture phase structure of the iris, generating a 2,048-bit iris code, and compared pairs of iris representations by computing their Hamming distance. Ma et al. [4], [5] adopted a well-known texture analysis method (multichannel Gabor filtering) to capture both global and local details of the iris. Boles and Boashash [6] used zero-crossings of a 1D wavelet transform at various resolution levels to distinguish the texture of the iris. Wildes et al. [7] designed a system using a Laplacian pyramid constructed at four resolution levels and normalized correlation for matching. Lim et al. [8] used the 2D Haar wavelet, quantized the 4th-level high-frequency information to form an 87-bit code as the feature vector, and applied an LVQ neural network for classification. Tisse et al. [9] constructed the analytic image (a combination of the original image and its Hilbert transform) to demodulate the iris texture. A modified Haralick co-occurrence method with a multilayer perceptron has also been introduced for iris feature extraction and classification [10], [11].
Figure 1. Some iris samples
In this paper, we first, in Section 2, outline and
Proceedings of the 2005 5th International Conference on Intelligent Systems Design and Applications (ISDA'05) 0-7695-2286-06/05 $20.00 © 2005 IEEE
describe our proposed algorithm. The remainder of this paper is organized as follows: detailed descriptions of image preprocessing, feature extraction, and the matching process are given in Sections 3, 4, and 5, respectively; experimental results are reported in Section 6; and the paper is concluded in Section 7.
2. Outline of our approach
To implement an automatic iris recognition system (AIRS), we propose a new algorithm for both iris detection and feature extraction. The main aspects of the paper are the use of morphological operators for pupil detection and the selection of an appropriate radius around the pupil so that the chosen iris region contains the collarette, which appears as a zigzag pattern. This region provides rich texture for feature extraction. Selected coefficients of a 3-level Daubechies wavelet decomposition of the iris are used to generate a feature vector. To reduce storage space and the computational cost of manipulating the feature vector, each real-valued coefficient is quantized to a binary value according to its sign: positive values become 1 and negative values become 0. The details of each step are described below. A typical iris recognition system includes the major steps depicted in Figure 2. First, an imaging system captures a sequence of iris images from the subject in front of the camera; a comprehensive study is given in [12], [13]. After image capture, morphological image processing operators determine the edge of the pupil together with its center and radius.
The edge of the iris can also be obtained as described in our previous work [14]. The advantage of this kind of edge detection is its speed and good performance, since morphological processing operates on binary images, which is very fast. After pupil detection, we found by trial and error that when choosing a radius of r_i = 38 + r_p, where r_p is the radius of the pupil, the region selected by this
threshold usually contains the collarette structure well. Preprocessing of the selected iris region is the next step; it includes iris normalization, image enhancement, and de-noising. After that, depending on the selected Daubechies wavelet decomposition coefficients, the feature vector is generated. Matching is the last step: the input code is compared with the database, and the class with the minimum Hamming or harmonic mean distance from the input code is selected. Each step is described in the following sections.
3. Image preprocessing
This step consists of three sub-stages. A captured image contains not only the iris (the desired region) but also uninformative parts such as the eyelids, eyelashes, pupil, and sclera. The distance between the camera and the eye and the ambient lighting conditions (which dilate or contract the pupil) influence the size of the iris. Therefore, before the feature extraction step, the image must be preprocessed to overcome these problems. The sub-stages are as follows. In our system, we use 320 x 280 grayscale images.
3.1. Iris localization
The iris boundaries can be modeled as two non-concentric circles, so we must determine the inner and outer boundaries with their radii and centers. Several approaches to iris edge detection have been proposed. Our method localizes the iris using image morphological operators and a suitable threshold [14]. As mentioned earlier, after pupil edge detection (the inner boundary), the outer boundary is taken as a circle of radius r_i = 38 + r_p. The two boundaries for the image in Figure 3a are shown in Figure 3b.
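The pupil localization step can be sketched roughly as follows. The paper does not give the exact morphological recipe from [14], so the threshold value, the particular opening/closing sequence, and the function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def locate_pupil(eye, dark_thresh=70):
    """Rough pupil localization: threshold the dark pupil region, clean the
    binary mask with morphological opening/closing, then take the centroid
    and equivalent radius of the largest remaining blob.
    (dark_thresh is an assumed value, not from the paper.)"""
    binary = eye < dark_thresh                             # pupil pixels are darkest
    binary = ndimage.binary_opening(binary, iterations=3)  # drop eyelash specks
    binary = ndimage.binary_closing(binary, iterations=3)  # fill specular holes
    labels, n = ndimage.label(binary)
    if n == 0:
        raise ValueError("no pupil candidate found")
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    pupil = labels == (np.argmax(sizes) + 1)               # keep largest dark blob
    cy, cx = ndimage.center_of_mass(pupil)
    r_p = np.sqrt(pupil.sum() / np.pi)                     # radius of equal-area circle
    return (cx, cy), r_p
```

The outer boundary then follows the paper's rule r_i = 38 + r_p.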
3.2. Iris normalization
Figure 2. Flowchart of the process
Different image acquisition conditions disturb the process of identification. Once the iris region is successfully segmented from an eye image,
the next stage is to find a transformation that projects the iris region onto a fixed two-dimensional area, ready for the comparison process. The dimensional incongruities between eye images are mostly due to stretching of the iris caused by pupil expansion/contraction under varying illumination and other factors. Other circumstances include variation of the camera-to-eye distance and rotation of the camera or head. Hence, a solution must be contrived to remove these deformations. The normalization process projects the iris region onto a ribbon of constant dimensions, so that two images of the same iris under different conditions have their characteristic features at the same spatial locations. Daugman [2] suggested a Cartesian-to-polar transform that remaps each pixel in the iris area to a pair of polar coordinates (r, θ), where r and θ lie in the intervals [0, 1] and [0, 2π], respectively. The equations of the process are as follows:
I(x(r, θ), y(r, θ)) → I(r, θ)
x(r, θ) = (1 - r) x_p(θ) + r x_i(θ)
y(r, θ) = (1 - r) y_p(θ) + r y_i(θ)
where I(x, y) is the iris region image, (x, y) are Cartesian coordinates, (r, θ) the corresponding polar coordinates, and (x_p, y_p) and (x_i, y_i) the coordinates of the pupil and iris boundaries along the θ direction, respectively. This representation (the rubber sheet model) removes the above-mentioned deformations. We select 64 pixels along r and 512 pixels along θ, obtaining a 512 x 64 unwrapped strip. Because the pupil is not perfectly circular, because the outer boundary may overlap the sclera or eyelids in some cases, and because of the safety margin in the chosen radius around the pupil, we select pixels 3:50 of the 64 along r and 257:512 along θ in the unwrapped iris. Figure 3c shows the normalized iris region of Figure 3b. Normalization does not completely remove the distortion of the iris caused by pupil movement, but it simplifies subsequent processing. We use only the region to the right of the green line in the unwrapped iris.
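As a concrete illustration, the rubber-sheet remapping above can be implemented with nearest-pixel sampling. This sketch assumes concentric circular boundaries (the paper allows non-concentric ones); the function name and defaults are ours:

```python
import numpy as np

def unwrap_iris(eye, pupil_xy, r_pupil, r_iris, n_r=64, n_theta=512):
    """Daugman rubber-sheet model: sample the annulus between the pupil
    boundary and the outer radius onto a fixed n_r x n_theta strip.
    Row k corresponds to the normalized radial coordinate r = k/(n_r-1)."""
    cx, cy = pupil_xy
    r = np.linspace(0.0, 1.0, n_r)[:, None]                 # r in [0, 1]
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    # linear interpolation between inner and outer boundary radii
    radius = (1 - r) * r_pupil + r * r_iris
    x = (cx + radius * np.cos(theta)).astype(int).clip(0, eye.shape[1] - 1)
    y = (cy + radius * np.sin(theta)).astype(int).clip(0, eye.shape[0] - 1)
    return eye[y, x]                                        # shape (n_r, n_theta)
```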
3.3. Iris de-noising and enhancement
Owing to the imaging conditions and the position of the light sources, the normalized iris image does not have an
appropriate quality. These factors may affect the performance of feature extraction and matching. Hence, to obtain uniformly distributed illumination and better contrast in the iris image, we first equalize the intensity of the unwrapped iris image with a 5 x 5 mask and then filter it with an adaptive 2-D Wiener low-pass filter to remove high-frequency noise. Figure 3d shows the enhanced version of Figure 3c, and Figure 3e shows the region used in the feature extraction process.
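A possible reading of this enhancement stage uses SciPy's adaptive Wiener filter; the 5 x 5 local mean-removal used here for illumination correction is our stand-in for the paper's 5 x 5 equalization mask:

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import uniform_filter

def enhance(strip):
    """Approximate local illumination correction with a 5x5 mean-removal
    (an assumed substitute for the paper's 5x5 equalization mask), then
    suppress high-frequency noise with a 2-D adaptive Wiener filter."""
    strip = strip.astype(float)
    background = uniform_filter(strip, size=5)       # 5x5 local mean
    flattened = strip - background + strip.mean()    # remove uneven lighting
    return wiener(flattened, mysize=(5, 5))          # adaptive low-pass denoising
```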
[Figure 3 panels (b) and (c), annotated "Eyelid occlusion" and "Pupil asymmetry"]
Figure 3. (a) Original image. (b) Localized iris. (c) Normalized iris. (d) Enhanced iris. (e) Region of interest for feature extraction (star)
4. Feature extraction
The most important capability of an AIRS is extracting unique attributes from the iris that help generate a specific code for each individual. Gabor and wavelet transforms are typically used for analyzing human iris patterns and extracting features from them. In our earlier work [15], the Daubechies-2 wavelet was applied to the iris region. Here, with the new iris region detection described above, we apply the same wavelet. The results show that, because useless regions are excluded from the limited iris boundary, the identification rate improves considerably. The approximation coefficient matrix and the detail coefficient matrices (horizontal, vertical, and diagonal) are obtained by wavelet decomposition of the input image. We compute the 3-level wavelet decomposition detail and approximation coefficients of the projected iris image. After three decompositions, the size of the last subband is 8 x 68. We
arrange our feature vector by combining the 1088 (= 8 x 68 + 8 x 68) features in the LH (lowpass-highpass) and HL (highpass-lowpass) subbands of level 3. Then, based on the sign of each entry, we assign 1 to positive values and 0 to the others. Finally, we obtain a 1088-bit binary feature vector. A typical feature vector is shown in Figure 4.
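The sign-quantized wavelet code can be sketched as below. To keep the example dependency-free we implement a periodic-extension db2 transform directly, so for a 64 x 512 strip the coarsest detail bands come out as 8 x 64 (a 1024-bit code) rather than the paper's 8 x 68, which arises from a different region size and boundary handling; all helper names are ours:

```python
import numpy as np

# Daubechies-2 (db2) analysis filter pair
_S3 = np.sqrt(3.0)
LO = np.array([1 + _S3, 3 + _S3, 3 - _S3, 1 - _S3]) / (4 * np.sqrt(2.0))
HI = LO[::-1] * np.array([1, -1, 1, -1])        # quadrature mirror filter

def _analyze(x, f):
    """Filter each row of x with f (periodic extension), downsample by 2."""
    out = np.zeros((x.shape[0], x.shape[1] // 2))
    for k, c in enumerate(f):
        out += c * np.roll(x, -k, axis=1)[:, ::2]
    return out

def dwt2(x):
    """One separable 2-D DWT step: returns LL, LH, HL, HH subbands."""
    lo, hi = _analyze(x, LO), _analyze(x, HI)
    ll = _analyze(lo.T, LO).T
    lh = _analyze(lo.T, HI).T
    hl = _analyze(hi.T, LO).T
    hh = _analyze(hi.T, HI).T
    return ll, lh, hl, hh

def iris_code(strip, levels=3):
    """3-level db2 decomposition; the coarsest LH and HL detail bands are
    sign-quantized (positive -> 1, otherwise 0) into a binary vector."""
    ll = strip.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = dwt2(ll)
    feats = np.concatenate([lh.ravel(), hl.ravel()])
    return (feats > 0).astype(np.uint8)
```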
5. Classification
By comparing the similarity between the corresponding feature vectors of two irises, we can determine whether they belong to the same class. Since the feature vector is binary, the matching process is accordingly fast and simple. After evaluating various distance measures, we chose two classifiers:
Minimum Hamming Distance (MHD)
Harmonic Mean (HM)
The Hamming distance between two codes is
HD = (1/L) * sum_{i=1..L} XOR(CodeA(i), CodeB(i)), where L = length(code).
In the first classifier, the minimum HD between the input iris code and the codes of each class is computed as follows: 1. For each image of a class, the HDs between the input code and its eleven related (rotated) codes are computed, and the minimum is recorded. 2. If there are n images in each class, the minimum of these n HDs is assigned to the class. In the second classifier, the harmonic mean of the same n HDs is assigned to the class. When the results of the two classifiers are sorted in ascending order, each class is labeled with its relative distance; we call the sorted lists SHD and SHM, respectively. If either of the first two entries of SHD or SHM denotes the correct class, the goal is achieved. After coding, the input iris image is compared with all iris codes in the database. The flowchart of this classifier is depicted in Figure 5.
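The two class-level scores can be sketched as follows; `class_templates` (our name) holds, for each enrolled image of a class, its eleven rotation-shifted codes:

```python
import numpy as np

def hamming(a, b):
    """Normalized Hamming distance between two equal-length binary codes."""
    return np.count_nonzero(a != b) / a.size

def class_distances(query, class_templates):
    """Per-image score = min HD over that image's rotation templates; the
    class receives both the minimum (MHD) and the harmonic mean (HM) of
    the per-image scores, mirroring the paper's two classifiers."""
    per_image = [min(hamming(query, t) for t in templates)
                 for templates in class_templates]
    mhd = min(per_image)
    # guard against a zero distance when forming the harmonic mean
    hm = len(per_image) / sum(1.0 / max(d, 1e-12) for d in per_image)
    return mhd, hm
```

For identification, the classes are then sorted by either score in ascending order, giving the SHD and SHM lists described above.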
Figure 4. A sample code for an iris
It is desirable to obtain an iris representation invariant to translation, scale, and rotation. In our algorithm, translation and scale invariance are achieved by normalizing the original image in the preprocessing step. Most rotation-invariance methods suggested in related papers work by rotating the feature vector before matching or by registering the input image with a model before feature extraction. Since the features in our method are selected wavelet decomposition coefficients, there is no explicit relation between the features and the original image. Therefore, we obtain approximate rotation invariance by unwrapping the iris disk at different initial angles. Considering that eye rotation is not very large in practical applications, these initial angles range from -15 to 15 degrees in steps of 3. This means we define eleven templates, one per rotation angle, for each iris class in the database. When matching the input feature vector with the templates of an iris class, the minimum of the eleven scores is taken as the final matching distance. The Hamming distances (HD) between the input image and the images in each class are calculated, and then the two classifiers described above are applied.
6. Experimental results
To evaluate the performance of the proposed algorithm, we tested it on the CASIA ver. 1 database. Unlike fingerprints and faces, there is no reasonably sized public-domain iris database. The Chinese Academy of Sciences - Institute of Automation (CASIA) eye image database [16] contains 756 greyscale eye images from 108 unique eyes (classes), with 7 different images of each unique eye. Images from each class were taken in two sessions with a one-month interval between them. Owing to specialized imaging conditions using near-infrared light, features in the iris region are highly visible and there is good contrast between the pupil, iris, and sclera regions. For each iris class, we chose three samples taken in the first session for training, and all samples captured in the second session served as test samples. This is consistent with the widely accepted standard for biometric algorithm testing [17], [18]. We tested the proposed algorithm in two modes: 1) identification and 2) verification. In identification tests, an average correct classification rate of 99.31% is achieved. The verification results
are shown in Figure 6a, the ROC curve of the proposed method. This false non-match rate (FNMR) versus false match rate (FMR) curve measures the accuracy of the iris matching process and shows the overall performance of an algorithm. Points on this curve denote all possible operating states of the system under different tradeoffs. The EER is the point where the false match rate and the false non-match rate are equal; the smaller the EER, the better the algorithm. The EER of our method is only 0.2687. Figure 6b shows the distributions of intra-class and inter-class matching distances. We analysed the images that failed in the process and found that all of them were partially damaged by eyelid/eyelash occlusion. If such occluded images could be detected and discarded at the imaging step, the success rate would improve further; with improved iris imaging, such cases can be reduced. Three typical operating states of the proposed method are listed in Table 1.
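The EER defined above can be computed from the two matching-distance distributions by sweeping a decision threshold; this is a generic sketch (not from the paper) that accepts a comparison when the distance is at or below the threshold:

```python
import numpy as np

def eer(intra, inter):
    """Equal error rate: sweep a threshold t over all observed distances;
    FNMR = fraction of genuine (intra-class) distances above t,
    FMR = fraction of impostor (inter-class) distances at or below t.
    Returns (FNMR + FMR)/2 at the threshold where the rates are closest."""
    intra = np.asarray(intra, dtype=float)
    inter = np.asarray(inter, dtype=float)
    best_gap, best_eer = 2.0, 1.0
    for t in np.unique(np.concatenate([intra, inter])):
        fnmr = np.mean(intra > t)       # genuine pairs rejected
        fmr = np.mean(inter <= t)       # impostor pairs accepted
        if abs(fnmr - fmr) < best_gap:
            best_gap, best_eer = abs(fnmr - fmr), (fnmr + fmr) / 2
    return best_eer
```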
7. Conclusions
In this paper, we described a half-eye wavelet-based method for iris recognition. Our segmented iris region not only contains complex and abundant textural patterns of the iris but also avoids eyelid/eyelash occlusions. Bypassing full iris boundary detection, a time-consuming step in iris recognition, is another advantage of the proposed method. In accordance with the distinct distribution of iris characteristics, wavelet decomposition was used for feature extraction: the selected coefficients of a 3-level decomposition with the Daubechies-2 wavelet served as features. A mixed Hamming/harmonic mean distance classifier was used for classification. The experimental results demonstrate the effectiveness of the proposed method.
Acknowledgements
The CASIA iris image database was collected by the Institute of Automation, Chinese Academy of Sciences (CASIA Iris Image Database, http://www.sinobiometrics.com). This work was funded by research grants from the ITRC (Iran Telecommunication Research Center, Grant No. 50012939).
[2] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.
[3] J. Daugman, "Demodulation by complex-valued wavelets for stochastic pattern recognition", Int'l J. Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.
[4] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters", Proc. 16th Int'l Conf. Pattern Recognition, vol. II, pp. 414-417, 2002.
[5] L. Ma, Y. Wang, and T. Tan, "Personal identification based on iris texture analysis", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
[6] W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform", IEEE Trans. Signal Processing, vol. 46, no. 4, pp. 1085-1088, 1998.
[7] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, and S. McBride, "A machine-vision system for iris recognition", Machine Vision and Applications, vol. 9, pp. 1-8, 1996.
[8] S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier", ETRI Journal, vol. 23, no. 2, June 2001.
[9] C. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition", Proc. Vision Interface, pp. 294-299, 2002.
[10] P. Jaboski, R. Szewczyk, Z. Kulesza, et al., "Automatic people identification on the basis of iris pattern - image processing and preliminary analysis", Proc. Int. Conf. on Microelectronics (MIEL 2002), vol. 2, Yugoslavia, pp. 687-690, May 2002.
[11] R. Szewczyk, P. Jaboski, et al., "Automatic people identification on the basis of iris pattern - extraction features and classification", Proc. Int. Conf. on Microelectronics (MIEL 2002), vol. 2, Yugoslavia, pp. 691-694, May 2002.
[12] T. Camus, M. Salganicoff, A. Thomas, and K. Hanna, "Method and apparatus for removal of bright or dark spots by the fusion of multiple images", United States Patent no. 6088470, 1998.
[13] J. McHugh, J. Lee, and C. Kuhla, "Handheld iris imaging".
[14] A. Poursaberi and B. N. Araabi, "A Novel Iris Recognition System Using Morphological Edge Detector and Wavelet Phase Features", ICGST International Journal on Graphics, Vision and Image Processing, P1150517004, June 2005.
[15] A. Poursaberi and B. N. Araabi, "Locally Iris Texture Analysis around the Pupil Based on Wavelet Coefficients", PRIP 2005, Eighth International Conference on Pattern Recognition and Information Processing, Minsk, Belarus, May 2005.
[16] CASIA Iris Image Database, www.sinobiometrics.com
[17] T. Mansfield, G. Kelly, D. Chandler, and J. Kane, "Biometric product testing final report", issue 1.0, National Physical Laboratory of UK, 2001.
[18] A. Mansfield and J. Wayman, "Best practice standards for testing and reporting on biometric device performance", National Physical Laboratory of UK, 2002.
Table 1. Verification results (False Match Rate (%) / False Non-Match Rate (%))
8. References

[1] A. Jain, R. Bolle, and S. Pankanti, eds., Biometrics: Personal Identification in a Networked Society, Kluwer, 1999.