IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 7, JULY 2009


Reconstructing Orientation Field From Fingerprint Minutiae to Improve Minutiae-Matching Accuracy

Fanglin Chen, Jie Zhou, Senior Member, IEEE, and Chunyu Yang

Abstract—Minutiae are very important features for fingerprint representation, and most practical fingerprint recognition systems store only the minutiae template in the database for further usage. Conventional methods treat the minutiae as a point set and find the matched points between different minutiae sets. In this paper, we propose a novel algorithm to use minutiae for fingerprint recognition, in which the fingerprint's orientation field is reconstructed from the minutiae and further utilized in the matching stage to enhance the system's performance. First, we produce "virtual" minutiae by interpolation in the sparse areas, and then we use an orientation model to reconstruct the orientation field from all "real" and "virtual" minutiae. A decision fusion scheme is used to combine the reconstructed orientation field matching with conventional minutiae-based matching. Since the orientation field is an important global feature of fingerprints, the proposed method can obtain better results than conventional methods. Experimental results illustrate its effectiveness.

Fig. 1. Flowchart of the proposed algorithm.

Index Terms—Decision fusion, fingerprint recognition, interpolation, orientation field, polynomial model.

I. INTRODUCTION

Recently, biometric technologies have shown more and more importance in various applications. Among them, fingerprint recognition is considered one of the most reliable technologies and has been extensively used in personal identification. In recent years, this technology has received increasingly more attention [1].

Minutiae are ridge endings or bifurcations on a fingerprint. Together with their coordinates and directions, they are the most distinctive features for representing the fingerprint. Most fingerprint recognition systems [1] store the minutiae template (sometimes together with the singular points) in the database. This kind of minutiae-based fingerprint recognition system consists of two steps, i.e., minutiae extraction and minutiae matching. In the minutiae matching process, the minutiae feature of a given fingerprint is compared with the minutiae template, and the matched minutiae are found. If the matching score exceeds a predefined threshold, the two fingerprints can be regarded as belonging to the same finger. Such algorithms are representative ways to utilize minutiae information for fingerprint recognition. However, is this the best way? In [2], the authors showed that this kind of method cannot provide enough distinguishing ability for large-scale fingerprint identification tasks. Obviously, a better usage of minutiae is very important for fingerprint recognition systems.

In this paper, we propose a novel method to use minutiae information for fingerprint recognition. The main idea is to reconstruct the fingerprint's orientation field from the minutiae and further utilize it in the matching stage to enhance the system's performance. Its usage lies in the following aspects. 1) In order to reduce storage, many practical fingerprint recognition systems store only the minutiae feature in the database, and the original images are not saved. By using the proposed method, the performance of such systems can be improved without the original fingerprint images. 2) In some other practical systems, even though the original images are saved, it is still unsuitable to compute the orientation fields and save them into the database. For these systems, very large databases of minutiae templates have already been established, and a complete update of the database (e.g., adding additional orientation features) would be very costly. In this case, a better way is to compute the orientation field from the saved minutiae template and use it to improve the performance of the system.

As a global feature, the orientation field describes one of the basic structures of a fingerprint [3]–[6]. When it is complemented with the minutiae, a local feature, we obtain more information. Thus, a better performance can be obtained by fusing the results of orientation field matching with conventional minutiae-based matching. Some studies [5], [7] showed that incorporating local (minutiae) and global (orientation field) features can largely improve the performance. However, as stated above, in many practical fingerprint recognition systems the original images and orientation field images are not saved, so we cannot compute the orientation field directly. In some other systems, additional orientation features cannot easily be saved into the existing database, and we have to compute the orientation field using only the information in the minutiae template. Ross et al. [8] proposed an interpolation algorithm to estimate the orientation field from a minutiae template (they used it to predict the class of the fingerprint, not for fingerprint matching), in which the orientation at a given point was computed from its neighboring minutiae. To take the global information into account, we use the orientation model [3] to reconstruct the orientation field from the minutiae. First, we interpolate a few "virtual" minutiae in the sparse areas, and then we apply the model-based method to the mixed minutiae (including the "real" and "virtual" minutiae). After that, the reconstructed orientation field is used in the matching stage in combination with conventional minutiae-based matching. Fig. 1 shows the flowchart of the proposed matching algorithm.

Manuscript received October 30, 2007; revised February 23, 2009. First published May 02, 2009; current version published June 12, 2009. This work was supported in part by the National 863 Hi-Tech Development Program of China under Grant 2008AA01Z123, in part by the Natural Science Foundation of China under Grants 60205002 and 60875017, and in part by the Natural Science Foundation of Beijing under Grant 4042020. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Gabriel Marcu. The authors are with the Department of Automation, Tsinghua National Laboratory for Information Science and Technology (TNList), and the State Key Laboratory on Intelligent Technology and Systems, Tsinghua University, Beijing 100084, China (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIP.2009.2017995


Fig. 2. Illustration of effective region estimation.

Fig. 3. Computation of a pixel in a triangle.

The rest of the paper is organized as follows. Section II introduces the algorithm for reconstructing the orientation field. In Section III, an algorithm for fingerprint recognition that combines minutiae and the orientation field is described. The experimental results and the evaluation of the algorithm's performance are presented in Section IV. We finish with a conclusion and a discussion of applications of our approach in Section V.
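Before the individual stages are detailed, the following Python skeleton outlines the flow of Fig. 1. It is only a structural sketch under assumed interfaces, not the authors' implementation; the stage implementations are passed in as callables because none of them are defined at this point.

```python
from typing import Callable, Sequence

def verify_fingerprint(query_minutiae: Sequence,
                       template_minutiae: Sequence,
                       match_minutiae: Callable,      # conventional minutiae matcher -> score s1
                       reconstruct_field: Callable,   # minutiae -> reconstructed orientation field
                       match_fields: Callable,        # orientation-field matcher -> score s2
                       fuse: Callable,                # decision fusion of (s1, s2)
                       threshold: float) -> bool:
    """High-level flow of the proposed scheme (Fig. 1)."""
    s1 = match_minutiae(query_minutiae, template_minutiae)
    field_q = reconstruct_field(query_minutiae)
    field_t = reconstruct_field(template_minutiae)
    s2 = match_fields(field_q, field_t)
    return fuse(s1, s2) >= threshold
```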

II. RECONSTRUCTING ORIENTATION FIELD FROM MINUTIAE

A. Estimating the Effective Region

When only the minutiae feature is available, we have to estimate the effective region from the minutiae alone, without the ridges and valleys. In this situation, we extract the effective region by finding the smallest envelope that contains all the minutiae points. See Fig. 2 for an illustration; the original image is shown together only to give a visual sense.

B. Interpolation

The minutiae of a fingerprint are always distributed nonuniformly, which results in some sparse regions that contain few minutiae. If we use an approximation method to reconstruct the orientation field from these minutiae, it will result in poor performance in the sparse regions. In order to strengthen the constraints in the sparse areas, we produce "virtual" minutiae by interpolation [8] before modeling. Since the orientation field of a fingerprint always changes smoothly, it is possible to estimate the direction at a point by examining the directions of the minutiae in its local region. Therefore, by observing the directions of a group of neighboring minutiae, we can obtain the orientation field by interpolation. In order to interpolate "virtual" minutiae in the sparse areas, we choose three minutiae points to construct a triangle and estimate the orientation field inside the triangle from these three minutiae. The algorithm has the following two main steps.

1) Triangulation: We divide the fingerprint into many triangles. Consider a set of N (N ≥ 3) points in the plane; the simplest way to triangulate them is to add diagonals from the first point to all of the others. However, this has the tendency to create skinny triangles. In this study, we want to avoid skinny triangles or, equivalently, small angles in the triangulation. We therefore use the Delaunay triangulation, which maximizes the minimum angle over all possible triangulations (refer to [9]–[11] for details) and can be constructed in O(N log N) time. The triangles do not overlap, so each point is covered by only one triangle, as shown in Fig. 4(b).

2) Producing "virtual" minutiae using interpolation: Let P(x, y) denote a "virtual" minutia located inside a triangle with vertices M_1, M_2, and M_3, where x and y are the coordinates of the corresponding point. Let d_i, i = 1, 2, 3, be the Euclidean distance from this "virtual" minutia to the i-th vertex M_i, and let θ_i be the direction corresponding to the vertex M_i. Clearly, a vertex should affect the "virtual" minutia more if it is closer to P than the other vertices. Thus, the direction θ̂_P of the point P is estimated as in (1)–(4):

$$\theta'_1 = \theta_1 \tag{1}$$

$$\theta'_i = \begin{cases} \theta_i - \pi, & \text{if } \theta_i - \theta_1 \ge \pi/2 \\ \theta_i + \pi, & \text{if } \theta_1 - \theta_i \ge \pi/2 \\ \theta_i, & \text{otherwise,} \end{cases} \qquad i = 2, 3 \tag{2}$$

The ridge line orientation is defined in the range [0, π), and (2) takes into account the phase jumps that may appear in the estimation. For example, suppose θ_1 = 2°, θ_2 = 177°, and θ_3 = 177°. Then θ_2 − θ_1 ≥ π/2 and θ_3 − θ_1 ≥ π/2, so (2) replaces θ_2 and θ_3 with θ_2 − π and θ_3 − π. If we did not apply (2), the estimated θ̂_P would lie in the range [2°, 177°] according to (3) and (4); actually, it should lie in [177°, 180°) ∪ [0°, 2°].

$$\hat{\theta}'_P = \frac{d_2 d_3\,\theta'_1 + d_3 d_1\,\theta'_2 + d_1 d_2\,\theta'_3}{d_1 d_2 + d_2 d_3 + d_3 d_1} \tag{3}$$


Fig. 4. Interpolation step: (a) the minutiae image; (b) the triangulated image; (c) virtual minutiae by interpolation (the bigger red minutiae are “real,” while the smaller purple ones are “virtual”).

The final direction θ̂_P is calculated by mapping θ̂'_P back into [0, π):

$$\hat{\theta}_P = \begin{cases} \hat{\theta}'_P + \pi, & \text{if } \hat{\theta}'_P < 0 \\ \hat{\theta}'_P - \pi, & \text{if } \hat{\theta}'_P \ge \pi \\ \hat{\theta}'_P, & \text{otherwise.} \end{cases} \tag{4}$$
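As an illustration of Step 2, the following Python sketch interpolates "virtual" minutiae inside the Delaunay triangles using the distance weighting of (3) and phase handling in the spirit of (1), (2), and (4). It is a minimal sketch, not the authors' implementation: SciPy's Delaunay triangulation stands in for the routines of [9]–[11], and the choice of candidate grid points (e.g., sampled inside large, sparse triangles) is left to the caller.

```python
import numpy as np
from scipy.spatial import Delaunay

def interpolate_virtual_minutiae(points, thetas, grid_points):
    """Estimate ridge directions (in [0, pi)) at candidate locations that fall
    inside the Delaunay triangulation of the real minutiae."""
    points = np.asarray(points, dtype=float)          # (N, 2) minutiae coordinates
    thetas = np.asarray(thetas, dtype=float)          # (N,) directions in [0, pi)
    grid_points = np.asarray(grid_points, dtype=float)

    tri = Delaunay(points)                            # triangulation of Step 1
    simplex_ids = tri.find_simplex(grid_points)       # -1 => outside all triangles
    out_xy, out_theta = [], []
    for p, sid in zip(grid_points, simplex_ids):
        if sid == -1:
            continue                                  # outside the effective region
        idx = tri.simplices[sid]                      # indices of the three vertices
        t = thetas[idx].astype(float)
        # Phase handling as in (1)-(2): shift t[1], t[2] by pi toward t[0]
        # when they differ from it by at least pi/2.
        for k in (1, 2):
            if t[k] - t[0] >= np.pi / 2:
                t[k] -= np.pi
            elif t[0] - t[k] >= np.pi / 2:
                t[k] += np.pi
        d = np.linalg.norm(points[idx] - p, axis=1)            # distances d1, d2, d3
        w = np.array([d[1] * d[2], d[2] * d[0], d[0] * d[1]])  # weights of (3)
        theta_hat = np.dot(w, t) / w.sum()
        out_xy.append(p)
        out_theta.append(np.mod(theta_hat, np.pi))             # wrap back to [0, pi) as in (4)
    return np.array(out_xy), np.array(out_theta)
```

The convex hull of the same minutiae set (e.g., via scipy.spatial.ConvexHull) corresponds to the smallest envelope used as the effective region in Section II-A.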

In the decision-fusion stage, the minutiae-matching score s1 and the orientation-field-matching score s2 are combined, and the decision is controlled by a threshold τ chosen to minimize the FRR (false rejection rate) under a given false acceptance rate (FAR). The key point is therefore to estimate the probability density functions p(s1, s2 | ωG) and p(s1, s2 | ωI) of the genuine and imposter classes. We tackle this problem by using a Parzen density estimation method on a training set; in our study, we use a 0.02 × 0.02 Parzen window to estimate the probability density functions, which can be saved for global usage.
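A minimal sketch of such a Parzen (box-kernel) estimate and a thresholded decision is given below. The score pairs (s1, s2) are assumed to be normalized to [0, 1], and the likelihood-ratio form of the fused rule is an assumption here, since the full fusion section is not reproduced above.

```python
import numpy as np

def parzen_density_2d(train_scores, h=0.02):
    """Box-kernel Parzen estimate of a 2-D density from training score pairs
    (s1, s2); h is the window width in each dimension (0.02 in the paper)."""
    train = np.asarray(train_scores, dtype=float)        # shape (M, 2)

    def pdf(s):
        s = np.asarray(s, dtype=float)
        hits = np.all(np.abs(train - s) <= h / 2.0, axis=1)
        return hits.mean() / (h * h)                     # fraction of samples per unit area
    return pdf

def fused_decision(s, p_genuine, p_imposter, tau, eps=1e-12):
    """Hypothetical fused rule: accept when the estimated genuine-to-imposter
    likelihood ratio at (s1, s2) reaches the threshold tau."""
    return p_genuine(s) / (p_imposter(s) + eps) >= tau
```

In this setting, tau would be tuned on the training set so that the FRR is minimized at the required FAR.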

IV. EXPERIMENTAL RESULTS

Our experiments are conducted on three databases: two public collections, FVC2002 DB1 and DB2 [14], and the THU database established by our lab [4]. A total of 3200 (400 × 8) fingerprints are randomly chosen from the THU database to form the training set for the fusion scheme; the remaining fingerprints in the THU database, together with FVC2002 DB1 and DB2, are used as the testing set. For the THU database, the number of genuine-matching pairs is 11200 (C_8^2 × 400) for the training set and 11956 (C_8^2 × 427) for the testing set. Since matching all imposter pairs would require far more comparisons than the genuine matchings, our strategy is to randomly choose two of the eight fingerprints that come from the same finger to form a subset and then test every imposter pair within this subset. Thus, the number of imposter matchings in the training set and the testing set is 319200 (C_{400×2}^2 − C_2^2 × 400) and 363804 (C_{427×2}^2 − C_2^2 × 427), respectively. For DB1 or DB2, the number of genuine-matching pairs is 2800 (C_8^2 × 100) and that of imposter-matching pairs is 316800 (C_{100×8}^2 − C_8^2 × 100).

Since the final fusion result is related to the underlying minutiae extraction and verification algorithm, we carry out experiments using two algorithms: Algorithm I is the one used in [4], and Algorithm II is similar to that reported in [15]. For each of Algorithm I and Algorithm II, we compare two fingerprint recognition systems: one uses the conventional matching method and the other uses the proposed method. In these two systems the minutiae information is identical; the only difference is the reconstructed orientation field information added to the proposed scheme. Fig. 8 shows the receiver operating characteristic (ROC) curves, plotting FAR versus FRR, of the conventional minutiae-matching scheme (solid line) and the proposed scheme (dashed line): (a), (b), (c) are for Algorithm I and (d), (e), (f) for Algorithm II on FVC2002 DB1, DB2, and the THU testing database, respectively. Here FRR is the fraction of genuine pairs that are falsely rejected, while FAR is the fraction of imposter pairs that are falsely accepted. The results show that combining the reconstructed orientation field information with minutiae matching can largely improve the performance for both Algorithm I and Algorithm II; the FRR is reduced considerably by the fusion scheme compared with minutiae-based matching alone.

Our system is implemented in C on an AMD 2000-MHz PC. Compared with the minutiae-based matching scheme alone, the computational time of the fusion algorithm is a little longer: the minutiae-based matching time (one-to-one) is about 5 ms, the average additional cost for reconstructing the orientation field is about 15 ms, and the additional matching time (one-to-one) is less than 3 ms. This shows the feasibility of utilizing the reconstructed orientation field in real applications.
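For completeness, FAR and FRR at a single operating threshold can be computed from the genuine and imposter score lists as sketched below; this is an illustration under the convention that higher scores mean a better match, not the authors' evaluation code. Sweeping the threshold over the observed score range produces the ROC curves of Fig. 8.

```python
import numpy as np

def far_frr(genuine_scores, imposter_scores, threshold):
    """FRR: fraction of genuine pairs falsely rejected at the threshold;
    FAR: fraction of imposter pairs falsely accepted at the threshold."""
    genuine = np.asarray(genuine_scores, dtype=float)
    imposter = np.asarray(imposter_scores, dtype=float)
    frr = float(np.mean(genuine < threshold))
    far = float(np.mean(imposter >= threshold))
    return far, frr
```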

V. CONCLUSION

The orientation field is an important feature for fingerprint representation. In order to utilize the orientation information in automatic fingerprint recognition systems that store only the minutiae feature, we have proposed a novel method to utilize the minutiae for fingerprint recognition, in which the reconstructed orientation field is also utilized in the matching stage. The proposed algorithm combines an interpolation method and a model-based method to reconstruct the orientation field, and it reduces the effect of wrongly detected minutiae. The resulting orientation-field-based fingerprint matching is combined with conventional minutiae matching for real applications.

REFERENCES

[1] A. K. Jain, R. Bolle, and S. Pankanti, Eds., Biometrics: Personal Identification in Networked Society. New York: Kluwer, 1999.
[2] S. Pankanti, S. Prabhakar, and A. K. Jain, "On the individuality of fingerprints," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, pp. 1010–1025, 2002.
[3] J. Gu, J. Zhou, and D. Zhang, "A combination model for orientation field of fingerprints," Pattern Recognit., vol. 37, pp. 543–553, 2004.
[4] J. Zhou and J. Gu, "A model-based method for the computation of fingerprints orientation field," IEEE Trans. Image Process., vol. 13, pp. 821–835, 2004.
[5] J. Gu, J. Zhou, and C. Yang, "Fingerprint recognition by combining global structure and local cues," IEEE Trans. Image Process., vol. 15, pp. 1952–1964, 2006.
[6] A. Jain, S. Prabhakar, and L. Hong, "A multichannel approach to fingerprint classification," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, pp. 348–359, 1999.
[7] J. Qi, S. Yang, and Y. Wang, "Fingerprint matching combining the global orientation field with minutia," Pattern Recognit. Lett., vol. 26, no. 15, pp. 2424–2430, 2005.
[8] A. Ross, J. Shah, and A. K. Jain, "From template to image: Reconstructing fingerprints from minutiae points," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, pp. 544–560, 2007.
[9] S. Sloan, "A fast algorithm for constructing Delaunay triangulations in the plane," Adv. Eng. Softw., vol. 9, no. 1, pp. 34–55, 1987.
[10] L. Guibas, D. Knuth, and M. Sharir, "Randomized incremental construction of Delaunay and Voronoi diagrams," Algorithmica, vol. 7, no. 1, pp. 381–413, 1992.
[11] H. Edelsbrunner, "Incremental topological flipping works for regular triangulations," Algorithmica, vol. 15, no. 3, pp. 223–241, 1996.
[12] N. K. Ratha, K. Karu, S. Chen, and A. Jain, "A real-time matching system for large fingerprint database," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, pp. 799–813, 1996.
[13] S. Prabhakar and A. Jain, "Decision-level fusion in fingerprint verification," Pattern Recognit., vol. 35, pp. 861–874, 2002.
[14] D. Maio, D. Maltoni, R. Cappelli, J. Wayman, and A. Jain, "FVC2002: Second fingerprint verification competition," in Proc. Int. Conf. Pattern Recognition, 2002, vol. 16, pp. 811–814.
[15] A. Jain and L. Hong, "On-line fingerprint verification," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, pp. 302–314, 1997.