Robust Crease Detection in Fingerprint Images
Chenyu Wu, Jie Zhou, Zhao-qi Bian, Gang Rong
Department of Automation, Tsinghua University, Beijing 100084, P.R. China
[email protected] [email protected]

Figure 1. (a) Fingerprint image. (b) The red regions are called creases. (c) The red points denote the minutiae detected by a conventional feature detection algorithm. (d) The blue points denote the spurious minutiae caused by creases.
Abstract

In this paper, we study a novel pattern in fingerprints called the crease: a stripe that irregularly crosses the normal fingerprint patterns (ridges and valleys). Creases cause spurious minutiae when conventional feature detection algorithms are applied, and therefore decrease the recognition rate of fingerprint identification. By representing a crease as a parameterized rectangle, we design an optimal filter as a detector. We employ a multi-channel filtering framework to detect creases in different orientations. In each channel, PCA is used to extract the rectangles' parameters from the raw detection results. Our algorithm is demonstrated by experiments.
1 Introduction
With the increasing interest in automatic person identification, fingerprint-based identification is receiving considerable attention. Among the available biometric cues such as face, speech, iris and gesture, the fingerprint is the most reliable evidence for identification. A general fingerprint identification system consists of three fundamental stages [4]: data acquisition, feature extraction and matching. For matching, the fingerprint is mostly represented by two minute details, the ridge ending and the ridge bifurcation, owing to their robustness to various sources of fingerprint degradation. For instance, the ANSI-NIST standard fingerprint representation is built on minutiae [1]. Since minutiae are the key to fingerprint identification,
minutiae extraction has been extensively explored [3, 5, 7, 11, 12]. Many approaches have been proposed, and they work well on clean, high-quality images. However, the quality of fingerprint images may be affected by a number of factors, such as dryness of the skin, shallow or worn-out ridges (due to aging or genetics), skin disease, sweat, dirt and humidity in the air. Moreover, manual work, accidents or injuries to the finger may change its ridge structure either permanently or semi-permanently, and thus introduce additional spurious fingerprint features [4]. On such images, existing minutiae extraction algorithms are likely to detect spurious minutiae or miss genuine ones, and the recognition rate of fingerprint identification decreases as a result. Eliminating spurious minutiae and recovering missing ones is therefore a challenging problem. In this paper, we study a pattern in the fingerprint that may cause spurious minutiae. We call this pattern the crease (see Figure 1 (b)). A crease is a stripe that irregularly crosses the ridges and valleys of the fingerprint. Creases arise from aging, manual work, accidents, etc. Some are permanent, while others are temporary and exist only for a short term. Both permanent and temporary creases introduce spurious minutiae (see Figure 1 (d)) and thus degrade fingerprint identification performance. Although the orientation field, a popular preprocessing step before minutiae extraction [4], can reconnect some ridge breaks caused by narrow creases, breaks caused by broad creases may remain and form spurious minutiae. If
Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’03) 1063-6919/03 $17.00 © 2003 IEEE
we detect creases in advance, we can locate the spurious minutiae they cause and remove them. Creases can be regarded as white bars [8] on a textured area, bounded by illusory contours [9, 10]. However, because of the similarity between creases and valleys, extracting creases from fingerprint images is a nontrivial task; for the same reason, Hough transforms [15] do not work well. In this paper, we propose an algorithm to detect creases robustly. We use a parameterized rectangle to represent a crease. Based on this representation, we design an optimal filter as a detector. We employ a multi-channel filtering framework to detect creases in different orientations. In each channel, PCA is used to extract the rectangles' parameters from the raw detection results. The rest of the paper is organized as follows. In Section 2, we give a representation for creases. In Section 3, we design an optimal filter as a detector. In Section 4, we explain the detection framework. Experimental results and conclusions are provided in Sections 5 and 6.
2 Crease Definition
Because of the similarity between creases and valleys, we briefly describe ridges and valleys before defining the crease. In [2], Acton et al. proposed a model for ridges and valleys: along the direction of a ridge or valley, gray values vary little and smoothly. The direction of ridges and valleys at each point constitutes the orientation field of the whole fingerprint, which can be computed using the algorithm proposed in [3]. In this paper, we call this direction the texture direction. Orthogonal to the texture direction, there is a prominent periodic variation in gray level across the ridges and valleys. Compared with ridges and valleys, a crease appears as a stripe irregularly crossing them. A crease has a gray level similar to that of the neighboring valleys, but its direction differs strongly from theirs. Since most creases resemble straight stripes, we employ a parameterized rectangle to represent a crease. Most importantly, both the similarity and the difference between the crease and its neighboring valleys are formulated as constraints on the representation. Let L(Cx, Cy, l, w, θ) denote a crease, where l, w, θ and (Cx, Cy) are the length, width, direction and central point of L, respectively. The rectangle satisfies the constraints

max_{(x,y)∈L} ∇I(x, y) < Th1,   (1)

min_{(x,y)∈L_Neighbor} |α(x, y) − θ| > Th2,   (2)

Th3 < w < Th4,   (3)

l > Th5,   (4)

m{I(x, y)} > Th6,   (5)

where I(x, y) is the gray value at the point (x, y), α(x, y) is the texture direction at (x, y), m{I(x, y)} is the average gray level within L, and Th1, ..., Th6 are thresholds. Equation (1) states that the gray level should not vary much within the crease. Equation (2) states that the crease's direction must differ from the texture direction of its neighborhood by at least some margin. Equations (3)-(5) constrain the width, length and average gray level of the crease, respectively.
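As an illustration, the constraints (1)-(5) can be checked for a candidate rectangle as in the sketch below. The helper names, the neighborhood definition (a co-axial rectangle three times wider than L) and all threshold values are our own assumptions for the sketch, not values given in the paper.

```python
import numpy as np

def wrap(a):
    """Map an angle difference into [-pi/2, pi/2)."""
    return (a + np.pi / 2) % np.pi - np.pi / 2

def rect_mask(shape, rect):
    """Boolean mask of the points inside rectangle L(cx, cy, l, w, theta)."""
    cx, cy, l, w, theta = rect
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    u = (xs - cx) * np.cos(theta) + (ys - cy) * np.sin(theta)
    v = -(xs - cx) * np.sin(theta) + (ys - cy) * np.cos(theta)
    return (np.abs(u) <= l / 2) & (np.abs(v) <= w / 2)

def is_crease(gray, grad_mag, orientation, rect,
              th=(40.0, np.pi / 18, 2.0, 15.0, 25.0, 140.0)):
    """Check constraints (1)-(5); the threshold tuple is illustrative only."""
    th1, th2, th3, th4, th5, th6 = th
    cx, cy, l, w, theta = rect
    inside = rect_mask(gray.shape, rect)                       # points of L
    neigh = rect_mask(gray.shape, (cx, cy, l, 3 * w, theta)) & ~inside
    if grad_mag[inside].max() >= th1:                          # Eq. (1): smooth gray level
        return False
    if np.abs(wrap(orientation[neigh] - theta)).min() <= th2:  # Eq. (2): direction differs
        return False
    if not (th3 < w < th4):                                    # Eq. (3): width range
        return False
    if l <= th5:                                               # Eq. (4): minimal length
        return False
    return gray[inside].mean() > th6                           # Eq. (5): bright enough
```

On a synthetic image with a bright vertical stripe over a horizontal orientation field, a matching rectangle passes all five tests, while a rectangle aligned with the texture fails constraint (2).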
3 Designing the Optimal Filter
We regard a filter as a robust crease detector if it responds strongly to the crease region but weakly to the neighboring fingerprint texture. The optimal filter is therefore obtained by maximizing the difference between its responses to the crease and to the neighboring texture. To find such a filter, we first study fingerprint images containing creases. We model a ridge/valley as a rectangle and assume that ridges and valleys have equal width, denoted w0 (see Figure 2 (a)). In Figure 2 (b), an ideal model represents a fingerprint image containing a crease; w denotes the crease's width and ϕ the crossing angle between the crease and the fingerprint texture. In Figure 2 (b), the crease's direction is orthogonal to the x axis, so the optimal filter's direction is along the x axis, and the filter is a function f(x) of x only. Because the filter's direction is discretized while the crease's direction is continuous, the difference between the optimal filter and the crease in direction may not equal π/2; we denote this difference by β + π/2 (see Figure 2 (c)). When w, w0, ϕ or β changes, the projection of the image in Figure 2 (b) onto the x axis, i.e., the intensity histogram along x, changes accordingly. We accumulate this histogram while w, w0, ϕ and β vary over certain ranges (see Figure 2 (d)). The accumulated histogram, denoted h(x), can be regarded as an expected distribution of the signal (crease) and noise (fingerprint texture) along the x axis. Since h(x) represents the crease when |x| < w̄, we use a Gaussian function Gx(0, σ1²) to approximately represent the crease, where 3σ1 ≈ w̄/2. Let n(x) = h(x) − Gx(0, σ1²) denote the fingerprint texture. As shown in Figure 2 (d), we also use a Gaussian function Gx(0, σ2²) to approximate n(x), with σ2 ≫ w̄/2. Thus we have h(x) ≈ Gx(0, σ1²) + Gx(0, σ2²).
Therefore, we can obtain the optimal filter by maximizing the response to Gx(0, σ1²) while minimizing that to Gx(0, σ2²), or equivalently by maximizing the response to s(x) = Gx(0, σ1²) − Gx(0, σ2²). From signal processing theory [14], the maximal response to s(x) is achieved by a matched filter
Figure 2. (a) A rectangular representation of the fingerprint texture; w0 denotes the width of a ridge/valley. (b) An ideal model of a fingerprint image containing a crease; w is the width of the crease and ϕ is the crossing angle between the crease and the fingerprint texture. (c) An example in which the direction difference between the crease and the optimal filter equals π/2 + β. (d) The accumulated intensity histogram along the x axis; w̄ is the average width of creases. See text for details.
of the form s(u). Furthermore, it has been proven that the difference between two Gaussian functions with different variances can be approximated by a second-derivative Gaussian function when the variances are appropriately selected [6, 13]. Thus the optimal filter can be approximately formulated as the second-derivative Gaussian

f(x) = const · exp{−x²/(2T²)} (T² − x²),   (6)

where T is the scale (standard deviation) of the Gaussian. Since we need a 2-D filter, we use a Gaussian along the y axis, which gives the 2-D optimal filter

F(x, y) = const · exp{−(x² + ηy²)/(2T²)} (T² − x²),   (7)

where η is a parameter. To detect creases in arbitrary directions, we rotate F(x, y) by γ. Let u = x cos γ + y sin γ and v = −x sin γ + y cos γ; then

Fγ(u, v) = const · exp{−(u² + ηv²)/(2T²)} (T² − u²).   (8)
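A discrete version of the rotated filter in Equation (8) can be constructed as below. The kernel size and the zero-mean normalization are our own choices for the sketch; T = 30 and η = 4 follow the text.

```python
import numpy as np

def crease_filter(size=61, T=30.0, eta=4.0, gamma=0.0):
    """Sampled version of Eq. (8): a second-derivative Gaussian along u,
    a plain Gaussian along v, rotated by gamma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(gamma) + y * np.sin(gamma)    # coordinates rotated by gamma
    v = -x * np.sin(gamma) + y * np.cos(gamma)
    F = np.exp(-(u ** 2 + eta * v ** 2) / (2 * T ** 2)) * (T ** 2 - u ** 2)
    return F - F.mean()   # zero mean, so flat image regions give no response
```

Rotating by γ = π/2 simply transposes the γ = 0 kernel, which provides a quick correctness check of the rotation.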
We use the ideal model in Figure 2 to select an appropriate T. Denote by max RC(γ) and max RF(γ) the maximal responses of filters (with different T) to the crease and to the fingerprint texture, respectively. For the optimal filter, max RC(γ) is much larger than max RF(γ), so a threshold can be defined to discriminate most creases from the fingerprint texture. In Figure 2, the direction of the crease is θ = π/2 and the direction of the optimal filter is γ = 0. Clearly, max RC(γ) occurs when |γ − θ| is close to π/2, and max RF(γ) occurs when γ is orthogonal to the texture direction α. The best choice of T depends on w, w0, ϕ and β, where ϕ = |θ − α| and β = |γ − θ|. For most fingerprints, the average ridge/valley width w0 is about 5-7 pixels. Choosing the thresholds Th2, Th3 and Th4 (Equations (2) and (3)) as π/18, w0/2 and 2w0, respectively, we can compute the range of T: an appropriate T can be selected from the range 20 to 35 (in our experiments, we set T = 30 and η = 4).
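The difference-of-Gaussians approximation invoked above [6, 13] can be sanity-checked numerically. The sketch below, with an assumed σ1 = 5 and the classical ratio σ2/σ1 = 1.6 (both our choices, not the paper's), measures how well the best-matching second-derivative Gaussian of Equation (6) correlates with s(x):

```python
import numpy as np

def gauss(x, sigma):
    """Normalized 1-D Gaussian density."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def cosine(a, b):
    """Cosine similarity of two zero-mean signals (scale-invariant)."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.linspace(-100, 100, 2001)
s = gauss(x, 5.0) - gauss(x, 8.0)        # s(x) = Gx(0, s1^2) - Gx(0, s2^2)
# search over T for the closest second-derivative Gaussian f(x; T) of Eq. (6)
best = max(cosine(s, np.exp(-x ** 2 / (2 * T ** 2)) * (T ** 2 - x ** 2))
           for T in np.arange(3.0, 30.0, 0.25))
```

In this setting the best correlation exceeds 0.9, consistent with approximating s(x) by the filter form of Equation (6).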
4 Detection Framework
With the optimal filter, we devise a multi-channel framework to detect creases in arbitrary directions; the framework is shown in Figure 3. (1) We use a Gaussian filter to compute a mask image for the input fingerprint image; only the valid regions marked by the mask are processed further [3]. (2) We select 12 channels, i.e., directions, for Fγ(u, v), with γ equal to 0, π/12, ..., 11π/12, respectively. The response images of these filters are denoted I1, I2, ..., I12. (3) In each channel, based on the analysis in Section 3, we binarize the response image with a threshold of 200; regions with gray levels above the threshold are selected as crease candidates. (4) For each candidate region, principal component analysis (PCA) is used to estimate the parameters of the rectangle representing a crease (Section 2): (Cx, Cy) is the mean of the region; θ equals the direction of the axis with the larger eigenvalue; and l and w are computed as the average length and width of the region from the larger and smaller eigenvalues, respectively. We then remove the candidates that violate Equation (1), (4) or (5) and regard the others as valid creases. (5) We combine the creases extracted in each channel into one image and obtain the rectangle-based result ICrease (see Figure 3).
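Steps (2)-(4) can be sketched as below. SciPy's `ndimage.convolve` and `ndimage.label` stand in for the paper's implementation, the minimum region size is our own guess, and the masking step (1) and the constraint-based pruning are omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def pca_rectangle(ys, xs):
    """Step (4): fit (Cx, Cy, l, w, theta) to a candidate region via PCA."""
    pts = np.stack([xs, ys], axis=1).astype(float)
    c = pts.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((pts - c).T))  # eigenvalues ascending
    theta = np.arctan2(vecs[1, 1], vecs[0, 1])        # axis of larger eigenvalue
    # treat the region as a uniform rectangle: variance of a uniform strip
    # of side s is s**2 / 12, hence s = sqrt(12 * variance)
    l, w = np.sqrt(12 * vals[1]), np.sqrt(12 * vals[0])
    return c[0], c[1], l, w, theta

def detect_creases(img, filters, thresh=200.0, min_size=20):
    """Steps (2)-(3): filter bank, binarization, then rectangle fitting."""
    rects = []
    for F in filters:                                  # one channel per direction
        resp = ndimage.convolve(img.astype(float), F, mode="nearest")
        labels, n = ndimage.label(resp > thresh)       # connected candidates
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            if len(ys) >= min_size:                    # ignore tiny regions
                rects.append(pca_rectangle(ys, xs))
    return rects
```

On a synthetic image with a bright vertical stripe and a single averaging filter as the channel, the sketch recovers one rectangle whose direction is close to π/2 and whose length exceeds its width.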
5 Experiments and Discussions
We collected about 120 fingerprints containing creases: 40 from FVC2000 (448 × 478 pixels in size) and 80 from our lab (320 × 512 pixels), captured with a live scanner. The images come from 30 persons, each with 4 fingerprint images taken at different times. Before detection we apply preprocessing such as image enhancement and histogram equalization. Some detected creases are shown in Figure 4: in column 2, creases are represented by irregular regions, shown in blue; in column 3, creases are represented by parameterized rectangles.
[Figure 3 flowchart: the input image I is masked (IMask), filtered by the bank F0, Fπ/12, ..., F11π/12 to give responses I′k = I ∗ Fγ, binarized to I″k, passed through rectangle extraction (PCA), and combined into ICrease.]
Figure 3. The flowchart of crease detection.

The width of each rectangle is the average width of the corresponding crease. Since some creases are not straight, computing the average width by PCA may introduce some error. To verify the robustness of our algorithm to translation and rotation, we processed fingerprint images of the same finger collected at different times, as well as rotated versions of the same image; some results are shown in Figure 5. The experiments show that our algorithm resists noise, translation and rotation well. Most creases in the fingerprint images are extracted effectively, with a false alarm rate of about 2.5% and a missing rate below 10% by human inspection. A few creases are missed by our detection algorithm; for example, the red circles in Figure 4 (1)-(c) indicate missed creases. This happens when the angle between the crease and the fingerprint texture is very small, or when there is little gray-level difference between neighboring valleys and ridges. Moreover, under the model's straightness assumption, our method represents a markedly curved crease as several straight segments.
6 Conclusions
In this paper, we develop an algorithm to detect a fingerprint pattern called the crease, which may introduce spurious minutiae. We model a crease as a parameterized rectangle; based on this representation, we design an optimal filter and then develop a multi-channel filtering framework to detect creases in any orientation. Using PCA, we compute the parameters of each rectangle. Experiments demonstrate the effectiveness of our algorithm. In the future, we will use the detected creases to help remove spurious minutiae; the experimental results suggest that this is a promising way to improve fingerprint identification performance.
Acknowledgement

The authors wish to acknowledge support from the Natural Science Foundation of China under grant 60205002.
References
[1] American national standard for information systems. Data format for the interchange of fingerprint information, Doc. ANSI/NIST-CSL 1-1993, 1993.
[2] S. T. Acton, D. P. Mukherjee, J. P. Havlicek, and A. C. Bovik. Fingerprint classification using an AM-FM model. IEEE Trans. Image Processing, 10(6):951-954, 2001.
[3] A. Jain, L. Hong, S. Pankanti, and R. Bolle. An identity authentication system using fingerprints. Proc. of the IEEE, 85(9):1365-1388, 1997.
[4] A. Jain and S. Pankanti. Automated fingerprint identification and imaging systems. Technical report, IBM T. J. Watson Research Center, 2002.
[5] H. C. Lee and R. E. Gaensslen, editors. Advances in Fingerprint Technology. Elsevier, New York, 1991.
[6] S. D. Ma and B. C. Li. Derivative computation by multiscale filters. Europe-China Workshop on Geometric Modeling and Invariants, 1995.
[7] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2000: fingerprint verification competition. TPAMI, 24(3):402-412.
[8] B. S. Manjunath, C. Shekhar, and R. Chellappa. A new approach to image feature detection with applications. Pattern Recognition, 29(4):627-640, 1996.
[9] J. D. Mendola, A. M. Dale, B. Fischl, A. K. Liu, and R. B. H. Tootell. The representation of illusory and real contours in human cortical visual areas revealed by functional magnetic resonance imaging. Journal of Neuroscience, 19(19):8560-8572, 1999.
[10] H. Neumann and W. Sepp. Recurrent V1-V2 interaction for early visual information processing. Proc. of the European Symposium on Artificial Neural Networks, pages 165-170, 1999.
[11] S. Prabhakar, J. Wang, A. K. Jain, S. Pankanti, and R. Bolle. Minutiae verification and classification for fingerprint matching. ICPR, 1:25-29.
[12] N. Ratha, S. Chen, and A. K. Jain. Adaptive flow orientation based feature extraction in fingerprint images. Pattern Recognition, 28(11):1657-1672, 1995.
[13] S. Sarkar and K. L. Boyer. Optimal infinite impulse response zero crossing based edge detection. CVGIP: Image Understanding, 54(2):224-243, 1991.
[14] M. D. Srinath and P. K. Rajasekaran, editors. An Introduction to Statistical Signal Processing with Applications. John Wiley & Sons, New York, 1979.
[15] M. C. K. Yang, J.-S. Lee, C.-C. Lien, and C.-L. Huang. Hough transform modified by line connectivity and line thickness. TPAMI, 19(8):905-910, 1997.
Figure 4. Some crease detection results. Column 1 contains the input fingerprint images. Column 2 lists the extracted creases, shown as blue regions. Column 3 lists the extracted creases, shown as blue rectangles. The fingerprints in the first two rows are from FVC2000 and the other two are from our lab. Red circles denote missed creases. See text for explanations.
Figure 5. Robustness of our algorithm. Column 1 contains the input fingerprint images. Column 2 lists the extracted creases, shown as blue regions. Column 3 lists the extracted creases, shown as blue rectangles. The first three fingerprint images were collected from the same finger at different times. The last three are from the same fingerprint image, rotated by π/4, π/2 and 8π/9, respectively.