Efficient alignment of fingerprint images

H. Ramoser (1), B. Wachmann (2), H. Bischof (3)
(1) Advanced Computer Vision, Vienna, Austria, [email protected]
(2) Siemens AG Österreich, Programm- und Systementwicklung, Graz, Austria, [email protected]
(3) Institute for Computer Graphics and Vision, Univ. of Technology, Graz, Austria, [email protected]

Abstract

Fingerprint matching is a common technique for biometric authentication. Solid state sensors allow fingerprint recognition to be used in small embedded systems. The small size of these sensors makes it necessary to store several impressions of the same finger. In order to reduce memory requirements and matching time, all these images can be fused into one larger image. We present a RANSAC-based method to determine a rigid transformation which aligns two fingerprint images using solely minutiae coordinates and minutiae angles. The reliability of the method is demonstrated with experimental results.

1. Introduction

Many processes require a method of user authentication (e. g., logging into a computer system or withdrawing money from an ATM). Usually the user provides information about her identity (e. g., a username or a credit card) along with a verification of this identity (e. g., a password or a PIN). The advance of biometrics [5] makes it possible to replace the traditional methods of identification and verification with physiological or behavioural qualities of a person. The major advantage is that these properties cannot be stolen or shared. Fingerprints are one of the most commonly used biometric features. Recently developed solid state sensors make it possible to embed the capturing device into products like credit cards, mobile phones, computer mice, etc. The contact area of such a sensor is usually small (e. g., 11 mm x 14 mm, 224 x 288 pixels); see Figure 1 for a sample image. Hence, impressions of the same finger acquired at different instances may show only little overlap. In order to increase the overlap of the fingerprints it is common to store multiple impressions of the same finger in the reference database. This approach has obvious disadvantages: i) increased memory requirements, ii) increased matching time. These drawbacks can be avoided by fusing all acquired impressions into one (generally larger) image. This fusion can be performed using the gray value information of the fingerprint images; however, one advantage of solid state sensors is the possibility to use them in conjunction with embedded computers, which do not have sufficient working memory to store a gray value image.

Figure 1. Fingerprint image acquired with a solid state sensor. Minutiae positions are highlighted. The right images show the minutiae types: ridge ending (top) and ridge bifurcation (bottom).

1051-4651/02 $17.00 (c) 2002 IEEE

The goal of the work presented in this paper is to develop an algorithm which is able to align and fuse fingerprints using only the following information about each minutia: i) image coordinates, ii) angle (i. e., the angle of the orientation field at the location of the minutia). Because of the limited computing resources we consider only rigid transformations, i. e., the transformation between two fingerprints is restricted to rotation, scaling, and translation. The coordinates of all corresponding minutiae $M_{A,i}$ and $M_{B,i}$ of fingerprints A and B must fulfill

$$M_{A,i} = \begin{pmatrix} \cos\delta & \sin\delta \\ -\sin\delta & \cos\delta \end{pmatrix} \begin{pmatrix} S & 0 \\ 0 & S \end{pmatrix} M_{B,i} + T \qquad (1)$$

where $\delta$ is the rotation angle, $S$ is the scale, and $T$ is the translation between the two fingerprints. A number of properties affect the choice of algorithm. Most relevant are:

• There may be little overlap (< 50%) and considerable rotation (±30°) between corresponding images.
• The fingertip is pressed onto a flat sensor. This deforms the skin and changes the relative minutiae positions in corresponding fingerprints.
• Not every minutia is detected in every image. Furthermore, so-called false minutiae may be reported by the encoding algorithm.
• The number of minutiae in an image ranges from 10 to 40 (the average is about 20).

It is worth noting that a fingerprint alignment algorithm has an important advantage over a fingerprint matching algorithm: it is known that the images are from the same finger. Thus, it is sensible to introduce a bias towards fusion. The method described in this paper can therefore not be expected to give reliable results for fingerprint matching.

The problem of fingerprint alignment using minutiae coordinates can basically be viewed as a 2D point pattern matching problem. A large number of methods have been published on this topic. Most algorithms can be assigned to one of two groups:

• Global methods, which use global properties of the two point sets (e. g., given by an eigendecomposition) to determine an alignment [7, 8, 4]. Generally, these algorithms are fast but not very robust with respect to structural errors (e. g., missing points).
• Statistical and optimization methods, which use repeated sampling or gradient descent to find the most probable alignment of the two sets [2, 6, 3, 1]. These algorithms tend to be more robust but are computationally demanding.

In the remainder of the paper we first describe the algorithm developed for the alignment of two fingerprint images. The functionality of the method is then demonstrated with experimental results. The paper concludes with a discussion of the results and possible future improvements.

2. Methods

The goal of our work is to develop an alignment algorithm which is highly reliable and performs well on an embedded computer system. Preliminary experiments have shown that global methods based on singular value decomposition (SVD) or eigenvalues are not capable of aligning fingerprints with little overlap. The comparatively small number of minutiae in a fingerprint makes it possible to adopt the random sample consensus method (RANSAC, [2]) for the alignment task.
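The transformation model of Eq. (1) is simple to apply in code. The following sketch (Python with NumPy; the function and variable names are ours, not from the paper) maps the minutiae coordinates of fingerprint B into the frame of fingerprint A:

```python
import numpy as np

def transform_minutiae(minutiae_b, delta, scale, translation):
    """Apply the similarity transform of Eq. (1): rotate by delta using the
    matrix [[cos d, sin d], [-sin d, cos d]], scale by S, translate by T."""
    rot = np.array([[np.cos(delta),  np.sin(delta)],
                    [-np.sin(delta), np.cos(delta)]])
    return (scale * (rot @ minutiae_b.T)).T + translation

# Example: one minutia at (1, 0), rotated by 90 degrees under the Eq. (1)
# convention, scaled by 2 and shifted by (1, 1).
m_b = np.array([[1.0, 0.0]])
m_a = transform_minutiae(m_b, np.pi / 2, 2.0, np.array([1.0, 1.0]))
# m_a is approximately [[1., -1.]]
```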

Figure 2. Visualization of the fusion steps: i) rough alignment using two pairs of minutiae (thick lines); ii) determination of corresponding minutiae (thin lines).

The algorithm can be split into three parts. First, a RANSAC method is used to determine a crude approximation of the most likely transformation between the two fingerprints. Using this transformation, the corresponding minutiae are determined with a global alignment method, and the transformation is optimized over all corresponding minutiae. The final fusion step combines the two input fingerprints into a new fingerprint (represented by a set of minutiae coordinates and angles). Figure 2 shows the fusion steps. Note that the gray value information of the fingerprint images is not used by the alignment algorithm.

2.1. Rough alignment

The algorithm for determining a likely transformation between the two sets of minutiae coordinates and angles is based on the RANSAC method. Basically, a random pair of minutiae is chosen from fingerprint A and compared to all pairs of minutiae in fingerprint B. Each combination of


these pairs defines a transformation between the two fingerprints. The quality of each transformation is measured by the number of minutiae which are close in fingerprint A and in the transformed fingerprint B. This procedure is repeated several times and the most likely transformation is finally selected. The required number of iterations can be approximated as

$$m \ge \frac{\ln(1 - q)}{\ln(1 - p^2)} \qquad (2)$$

where $p$ is the probability that a minutia is present in both fingerprints and $q$ is the desired probability of finding at least one good transformation. For example, for $p = 0.5$ and $q = 0.99$ at least 17 iterations (i. e., pairs drawn from fingerprint A) are necessary. The performance of this procedure can be improved if only pairs from fingerprint B are considered which have approximately the same distance and minutiae angles as the pair selected from fingerprint A.
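The rough alignment loop can be sketched as follows. This is an illustrative sketch (Python with NumPy) under our own naming, not the authors' implementation; in particular, the distance- and angle-based pre-filtering of candidate pairs from fingerprint B is omitted for brevity:

```python
import math
import random
import numpy as np

def ransac_iterations(p, q):
    """Eq. (2): iterations m needed so that, with probability q, at least one
    drawn pair of minutiae is present in both fingerprints."""
    return math.ceil(math.log(1 - q) / math.log(1 - p ** 2))

def transform_from_pairs(a1, a2, b1, b2):
    """Similarity transform M_A = S * R @ M_B + T mapping b1 -> a1, b2 -> a2."""
    va, vb = a2 - a1, b2 - b1
    scale = np.linalg.norm(va) / np.linalg.norm(vb)
    theta = math.atan2(va[1], va[0]) - math.atan2(vb[1], vb[0])
    rot = np.array([[math.cos(theta), -math.sin(theta)],
                    [math.sin(theta),  math.cos(theta)]])
    return rot, scale, a1 - scale * rot @ b1

def rough_alignment(ma, mb, p=0.5, q=0.99, tol=8.0):
    """Draw a random pair from A, try all ordered pairs from B, and keep the
    transform under which the most minutiae of A have a close partner."""
    best, best_score = None, -1
    for _ in range(ransac_iterations(p, q)):
        i, j = random.sample(range(len(ma)), 2)
        for k in range(len(mb)):
            for l in range(len(mb)):
                if k == l:
                    continue
                rot, s, t = transform_from_pairs(ma[i], ma[j], mb[k], mb[l])
                mapped = (s * (rot @ mb.T)).T + t
                dists = np.linalg.norm(ma[:, None, :] - mapped[None, :, :], axis=2)
                score = int((dists.min(axis=1) < tol).sum())
                if score > best_score:
                    best, best_score = (rot, s, t), score
    return best, best_score

# Example: fingerprint B is fingerprint A rotated by 20 degrees and shifted.
ma = np.array([[0.0, 0.0], [11.0, 1.0], [2.0, 9.0], [7.0, 4.0]])
th = math.radians(20.0)
mb = ma @ np.array([[math.cos(th), -math.sin(th)],
                    [math.sin(th),  math.cos(th)]]).T + np.array([5.0, -3.0])
(rot, s, t), score = rough_alignment(ma, mb, tol=0.5)
```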

2.2. Optimization of the alignment

The result of the rough alignment procedure is a transformation which perfectly aligns two pairs of minutiae (the two pairs which gave the most likely transformation). The goal, however, is to find a transformation which minimizes the distance between all corresponding minutiae. First, the corresponding minutiae are determined using the SVD-based algorithm described in [7]. In order to achieve reliable results, the two fingerprints are aligned using the known transformation and only minutiae from the overlapping regions of the two fingerprints are considered. For all these minutiae a proximity matrix

$$G_{i,j} = e^{-\|M_{A,i} - M_{B,j}\|^2 / 2\sigma^2} \qquad (3)$$

is built, where $\sigma$ controls the interaction between the two sets of minutiae (this parameter is of little relevance since the two fingerprints are already pre-aligned). Using SVD, $G$ can be decomposed into $G = UVW^T$, where $U$ and $W$ are orthogonal matrices and $V$ is a diagonal matrix. When $V$ is converted into another diagonal matrix $X$ by replacing every diagonal element $V_{ii}$ with 1, it can be shown that the pairing matrix $P = UXW^T$ provides information about corresponding minutiae [7]: if $P_{i,j}$ is the largest element in both row $i$ and column $j$, the two minutiae $M_{A,i}$ and $M_{B,j}$ correspond. Once the corresponding minutiae have been found it is straightforward to determine an improved transformation. There are only four unknown parameters (rotation, scale, and translation), hence two pairs of minutiae are sufficient to determine these unknowns. Usually there are more corresponding minutiae and the transformation can be determined as a least-squares solution of the overdetermined system.
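Both steps of this section can be sketched as follows (Python with NumPy; the function names are ours, and the restriction to minutiae in the overlapping region is omitted):

```python
import numpy as np

def correspondences(ma, mb, sigma=10.0):
    """Scott & Longuet-Higgins pairing [7]: build the proximity matrix G of
    Eq. (3), replace its singular values by 1, and accept (i, j) when the
    entry of P = U X W^T is maximal in both its row and its column."""
    d2 = ((ma[:, None, :] - mb[None, :, :]) ** 2).sum(axis=2)
    g = np.exp(-d2 / (2.0 * sigma ** 2))
    u, _, wt = np.linalg.svd(g)
    r = min(g.shape)
    p = u[:, :r] @ wt[:r, :]  # equals U X W^T with all diagonal entries of X set to 1
    pairs = []
    for i in range(p.shape[0]):
        j = int(np.argmax(p[i]))
        if int(np.argmax(p[:, j])) == i:
            pairs.append((i, j))
    return pairs

def refine_transform(ma, mb, pairs):
    """Least-squares similarity transform M_A ~ S R M_B + T over all
    correspondences; the four unknowns are a = S cos(th), b = S sin(th), tx, ty."""
    rows, rhs = [], []
    for i, j in pairs:
        xb, yb = mb[j]
        rows.append([xb, -yb, 1.0, 0.0]); rhs.append(ma[i][0])
        rows.append([yb,  xb, 0.0, 1.0]); rhs.append(ma[i][1])
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                                         rcond=None)
    return np.array([[a, -b], [b, a]]), np.array([tx, ty])  # scale folded into R

# Example: two identical (i.e., perfectly pre-aligned) minutiae sets.
ma = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 7.0], [8.0, 5.0]])
pairs = correspondences(ma, ma)
```

Note the design choice of requiring the maximum in both row and column: this makes the pairing one-to-one even when the two minutiae sets have different sizes.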

Figure 3. Example of a fused fingerprint. Minutiae locations are highlighted. The fusion has been performed using the minutiae coordinates and angles of three impressions of the finger.

2.3. Fusion of the fingerprints

To fuse the two fingerprints, fingerprint B is transformed with the optimized transformation parameters. Corresponding minutiae are combined; the new coordinates and angle are obtained as the average of the respective values of the two contributing minutiae. All minutiae which are present in only one fingerprint are added to the new fingerprint. When more than two impressions of the same finger are available, the fusion process can be repeated until all impressions are fused.
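A minimal sketch of this fusion step (Python with NumPy; our own naming — note that the plain average of two minutiae angles is a simplification that ignores wrap-around at 360°):

```python
import numpy as np

def fuse(ma, ang_a, mb_aligned, ang_b, pairs):
    """Fuse fingerprint A with the already-transformed fingerprint B:
    corresponding minutiae are averaged, unmatched ones are carried over."""
    matched_a = {i for i, _ in pairs}
    matched_b = {j for _, j in pairs}
    coords, angles = [], []
    for i, j in pairs:                       # combine corresponding minutiae
        coords.append((ma[i] + mb_aligned[j]) / 2.0)
        angles.append((ang_a[i] + ang_b[j]) / 2.0)
    for i in range(len(ma)):                 # minutiae seen only in A
        if i not in matched_a:
            coords.append(ma[i]); angles.append(ang_a[i])
    for j in range(len(mb_aligned)):         # minutiae seen only in B
        if j not in matched_b:
            coords.append(mb_aligned[j]); angles.append(ang_b[j])
    return np.array(coords), np.array(angles)

# Example: two minutiae per fingerprint, one corresponding pair.
ma = np.array([[0.0, 0.0], [10.0, 0.0]])
ang_a = np.array([10.0, 50.0])
mb = np.array([[0.5, 0.5], [20.0, 5.0]])
ang_b = np.array([20.0, 90.0])
coords, angles = fuse(ma, ang_a, mb, ang_b, pairs=[(0, 0)])
```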

3. Results

The test data consisted of 30 fingerprint images (10 fingers with 3 impressions each). The minutiae coordinates and angles were determined manually. In order to reduce storage requirements as much as possible, all three impressions of a finger should be fused. A sample fusion result of all impressions of a finger is shown in Figure 3. Note that the fingerprint images are used for visualization purposes only, not during the fusion task. The proposed method is able to fuse all impressions of eight out of ten fingers and fails to merge one impression of each of the two remaining fingers. Visual inspection showed that


Table 1. Results of the fusion on a database of ten fingers with three impressions each.

                              Avg.   Min.   Max.
Input minutiae per finger     72.9    58     93
Output minutiae per finger    42.6    25     57
Minutiae per impression       24.3    15     35
Corresponding minutiae        16.8     9     27

all performed fusions are correct. The fusion statistics are shown in Table 1. The number of minutiae per fingerprint drops from 72.9 when stored as three impressions to 42.6 for the fused fingerprints (including impressions which could not be fused). The impressions where the algorithm fails have few corresponding minutiae (10 and 13) and considerable deformations. The Matlab implementation of the algorithm has an average calculation time of 1.35 sec for a single fusion step on a 700 MHz PC. Results on a database of approximately 2000 fingerprint images indicate that the algorithm performs comparably to a gray value based fusion strategy. It should be noted that on this database the algorithm performed no incorrect fusions but failed to fuse some of the fingerprints, mostly because of poor image quality.

4. Conclusions

We have described a new method to combine several impressions of one finger into one larger fingerprint image. The algorithm is composed of a rough alignment procedure based on RANSAC and an optimization step which determines corresponding minutiae and an improved alignment. The method uses only minutiae coordinates and angles and can thus be implemented on an embedded computer with extremely limited working memory. The reliability of the method is demonstrated on a small database of fingerprint images.

The proposed method relies heavily on the performance of the rough alignment (RANSAC) procedure. Due to its probabilistic nature this procedure does not always find a correct alignment of two fingerprints (there are also other causes which make an alignment impossible, such as heavy deformations or little overlap). But since the alignment method is used during the training phase of the recognition system, it is generally easy to acquire additional fingerprint images.

Currently, the algorithm does not remove any minutiae from the fused fingerprints. In low quality fingerprint images there are, however, minutiae which are seldom detected. In the near future we plan to evaluate the effect of removing these minutiae from the fused fingerprints. This removal should further reduce storage requirements with little effect on the matching performance.

5. Acknowledgement

This work has been carried out within the K plus Competence Center ADVANCED COMPUTER VISION and was funded by the K plus Program.

References

[1] M. Carcassoni and E. R. Hancock. Point pattern matching with robust spectral correspondence. In Proc. Conference on Computer Vision and Pattern Recognition, volume 1, pages 649–655, 2000.
[2] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[3] S. Gold, A. Rangarajan, C.-P. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching: pose estimation and correspondence. Pattern Recognition, 31(8):1019–1031, 1998.
[4] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing images using the Hausdorff distance. IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(9):850–863, 1993.
[5] A. Jain, R. Bolle, and S. Pankanti, editors. Biometrics: Personal Identification in a Networked Society. Kluwer Academic Publishers, 1999.
[6] C. F. Olson. Probabilistic indexing for object recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 17(5):518–522, 1995.
[7] G. L. Scott and H. C. Longuet-Higgins. An algorithm for associating the features of two images. Proc. Royal Society London B, 244:21–26, 1991.
[8] L. S. Shapiro and J. M. Brady. Feature-based correspondence: an eigenvector approach. Image and Vision Computing, 10(5):283–288, 1992.
