Kittiya Khongkraphan*

An Efficient Fingerprint Matching by Multiple Reference Points

Abstract: This paper introduces an efficient fingerprint matching method based on multiple reference minutiae points. First, we attempt to effectively align two fingerprints by employing multiple reference minutiae points. However, the corresponding minutiae points between two fingerprints are ambiguous, since a minutia of one fingerprint can match any minutia of the other fingerprint. Therefore, we introduce a novel method based on a linear classification concept to establish minutiae correspondences between two fingerprints. Each minutiae correspondence represents a possible alignment. For each possible alignment, a matching score is computed using minutiae and ridge orientation features, and the maximum score is then selected to represent the similarity of the two fingerprints. The proposed method is evaluated on the FVC2002 and FVC2004 fingerprint databases. In addition, we compare our approach with two existing methods and find that it outperforms them in terms of matching accuracy, especially for non-linearly distorted fingerprints. Furthermore, the experiments show that our method provides additional advantages for low-quality fingerprint images with inaccurate minutiae positions, missing minutiae, and spuriously extracted minutiae.

Keywords: Fingerprint Matching, Matching Score, Multiple Reference Points, Non-linear Distortion

1. Introduction

Biometrics is an authentication technique based on unique physical characteristics. The fingerprint is one of the biometric traits that has received a great deal of attention due to its uniqueness and immutability. Another reason is that a fingerprint biometric system has a lower cost than other biometric systems. Automated fingerprint identification systems (AFIS) are widely used not only in several personal identification systems but also in criminal investigations. Comparing two fingerprints is an important task in an AFIS. For this task, a matching score between the two fingerprints is computed. If the matching score is greater than a predefined threshold, the two fingerprints are reported as belonging to the same finger. Recently, numerous algorithms have been proposed or improved to deal with this task; however, their performance is degraded by the poor quality of acquired fingerprint images. One major cause affecting the quality of fingerprint images is non-linear distortion, which arises from the warping processes of different sensors, the non-orthogonal pressure of the finger against a flat sensor surface, skin moisture, and skin elasticity. Therefore, various methods have been proposed to alleviate the non-linear distortion problem in fingerprint matching applications. Some researchers attempted to detect distortion during the fingerprint image capturing process. In [1], the authors introduced special hardware to measure the amount of force applied to the scanner surface, and a good fingerprint image is obtained by examining a set of ground-truth data. Dorai et al. [2] detected and estimated distortion from fingerprint image sequences. They used flow-based structural analysis across frames to investigate the dynamic characteristics of fingerprints. However, neither method can deal with already-captured fingerprint images. Some researchers focused on estimating distortion models and removing the distortion before the matching stage. Si et al. [3] introduced a method to detect and rectify distorted fingerprints.
They used an SVM classifier trained during an offline stage to detect and rectify the distorted images. Cappelli et al. [4] introduced a mathematical model to describe plastic distortion. However, their model was unable to precisely approximate its parameters due to insufficient information. Senior and Bolle [5] presented a method to remove distortion from fingerprint images based on the assumption that inter-ridge spacing is constant. Unfortunately, their assumption is not always valid. Bazen and Gerez [6] described non-linear distortion when comparing two minutiae sets using a thin-plate spline (TPS) model. Their method was highly dependent on the initialization of the minutiae pairs and carried a heavy computational load. Ross et al. [7] constructed a deformation model of a fingerprint using a TPS model and several impressions of that finger. Their approach cannot be used online, since precise correspondences are needed in the training process. Moreover, several researchers introduced other interpolation techniques to replace the TPS model. Novikov and Ushmaev [8] used the Navier linear partial differential equation (PDE) to define the distortion model, while Liang and Asano [9] used radial basis function (RBF) interpolation with multiquadric basis functions. In addition, some approaches deal with distortion by aligning the fingerprints before the matching stage. The performance of these approaches depends on reliably detecting reference points between the two fingerprints. Several feature alignment methods have been proposed, and they can be divided into two main groups: local and global alignment-based groups. In the local alignment-based group, the alignment parameters are recovered from a local feature called a reference point [10-16]. These approaches properly align the regions near the reference point, while regions far from the reference point tend to be aligned incorrectly. In the global alignment-based group, a transformation mapping the minutiae of one set onto the other is estimated [17-19]. Hosseinbor et al. [20] proposed an iterative method to search for corresponding minutiae points in the registration stage. The global alignment-based methods are not biased toward a specific region. However, the essential step of detecting the reference points is not robust to errors. Some approaches focused on the matching process. Luo et al. [21] alleviated the effect of distortion using changeable tolerance boxes that were incrementally extended. However, enlarging the tolerance boxes also increases false matches between fingerprints from different fingers. Zsolt [22] applied a triangular matching algorithm with dynamic time warping for comparing two minutiae sets. Qi and Wang [12] and Feng et al. [23] used a triangulation-based concept in the minutiae matching stage with fit tolerance boxes. Lee et al. [11] employed local alignment and distance normalization in the minutiae matching step. Watson and Casasent [24] created distortion-tolerant filters for each fingerprint to perform correlation-type matching. Choi et al. [25] improved matching performance by using both ridge and minutiae features in computing the matching score. The drawback of their method is the time taken for ridge feature extraction. To solve this problem, Nguyen et al. [26] proposed a novel algorithm to reduce the feature extraction time. Bharkad and Kokare [27] reduced the dimension of a discrete cosine transform feature vector. Cao et al.
[28] employed ant colony optimization to find corresponding minutiae when computing a matching score.

Although several approaches have been proposed to alleviate the non-linear distortion problem, their performance has yet to meet expectations. In this paper, we propose a novel and efficient approach to matching non-linearly distorted fingerprints. We apply an alignment method to precisely recover the pose of the same finger. Fingerprint regions near a reference point are properly aligned. To increase the accuracy of fingerprint alignment, our approach is based on multiple reference points. We introduce a new substructure to represent a fingerprint and to find the corresponding minutiae of two fingerprints in the alignment stage. From the obtained corresponding minutiae, reference points are evenly distributed over various regions of the fingerprint. The distributed reference points help guarantee satisfactory alignment for all points in the fingerprint region. Our approach is robust to the loss of global alignment information because it needs only 10 corresponding points to align two fingerprints. Proper alignment not only increases matching performance but also alleviates the effects of inaccurate minutiae positions, missing minutiae, and spuriously extracted minutiae. Additionally, we incorporate minutiae features and ridge compatibility in the calculation of matching scores. Spurious minutiae, which can affect the performance of fingerprint matching, are also taken into account. The rest of the paper is organized as follows. Sections 2–4 describe our proposed method. In Section 5, we present some experimental results. Finally, conclusions are drawn in Section 6.

2. Our Proposed Method

The main concept of our approach is to align fingerprints using multiple reference points. We additionally apply ridge compatibility and minutiae information to measure the similarity of the two fingerprints. In this way, the method is not only robust to non-linear fingerprint distortion but also tolerant of inaccurate minutiae positions, missing minutiae, and spuriously extracted minutiae. Our proposed method can be roughly divided into four main steps. First, preprocessing and feature extraction are performed. Second, a set of initial substructure pairs is extracted; to prune spurious substructure pairs, we introduce a new method based on a data clustering concept to identify corresponding substructure pairs. The corresponding substructure pairs are then used to align the fingerprints in the third step. Finally, matching scores between the two aligned fingerprints are computed and the maximum score is selected to represent the similarity of the two fingerprints. A system overview of our proposed method is shown in Fig. 1.

2.1 Fingerprint Preprocessing and Feature Extraction

In the preprocessing step, we need to segment the fingerprint region from the background. First, the fingerprint image is equally divided into n×n pixel blocks, and the mean and variance of the intensities in each block are computed in order to segment the fingerprint region. Moreover, we apply an enhancement method [25] to improve the fingerprint image before generating a skeleton fingerprint image. Features are then extracted to represent the fingerprint. In our approach, we represent a fingerprint with a set of minutiae features, since it is widely believed that minutiae are highly discriminating features.
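As a concrete illustration of the block-wise segmentation just described, the following minimal sketch computes per-block statistics and keeps high-variance blocks as foreground. The block size, the variance threshold, and the NumPy-based representation are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np

def segment_fingerprint(image, n=16, var_threshold=100.0):
    """Return a boolean block mask: True where a block is taken as fingerprint region."""
    h, w = image.shape
    rows, cols = h // n, w // n
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = image[r * n:(r + 1) * n, c * n:(c + 1) * n].astype(float)
            # The paper computes both the mean and the variance of each block;
            # here a simple variance threshold stands in for the decision rule,
            # since ridge-covered blocks vary strongly while the background is flat.
            mask[r, c] = block.var() > var_threshold
    return mask
```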
A minutiae feature generally consists of a position, a type (ridge ending or bifurcation), and the ridge orientation at the minutiae point. However, we use only the position and ridge orientation information in our approach. We do not utilize the minutiae type, since minutiae types are easily misestimated in low-quality fingerprint images. We define the position of the minutiae point and the ridge orientation, which represents the directionality of the ridges around the minutiae point [29], as shown in Fig. 2. To prune spurious minutiae points, we apply existing post-processing algorithms [30,31]. Moreover, we propose a novel feature for each minutia called a neighbor. Finally, a fingerprint is described by a set of minutiae M = {mi | i ∈ [1, …, N]}, where N is the number of minutiae points and mi is the ith minutiae point, which consists of four components [xi yi θi ni], where (xi, yi) is the coordinate of the ith minutiae point and θi is the ridge orientation at the ith minutiae point. There is no difference between ridge orientation θ and θ+π; thus, we restrict the ridge orientation to [0, π]. ni is the neighbor feature of the ith minutiae point. We represent a neighbor feature by a sequence of minutiae points. The order of the minutiae in the sequence is based on a ridge-based coordinate system. To construct this coordinate system, the origin point O is first set at each minutiae point. The horizontal axis is drawn through the origin point along its ridge orientation. The vertical axis is then drawn through the origin point, orthogonal to the horizontal axis. The remaining minutiae are then projected onto the vertical axis, as shown by the dashed lines in Fig. 3. The distance between each projected point and the end point of the vertical axis is computed. These distances are then used to define the order of the minutiae in the minutiae sequence. In Fig. 3, the neighbor feature of minutia O is nO = [mb mo me ma md mc].

3. Fingerprint Alignment based on Multiple Reference Points

In this step, substructure pairs are generated and the correspondence of each substructure pair is then examined. If a substructure pair corresponds, it is used to construct a transformation matrix for fingerprint alignment. The details are described as follows.

3.1 Substructure

After performing the feature extraction step, we obtain the set of minutiae features. The corresponding minutiae of a test fingerprint and a template fingerprint are detected for fingerprint alignment. To find the corresponding minutiae, we match any minutiae pair in which one point is from the test fingerprint image and the other is from the template fingerprint image. The neighbor features of both images are used to find a sequence of corresponding minutiae. To assess corresponding minutiae, the difference between their ridge orientation values is computed. If this difference is less than a predefined threshold, the two minutiae are regarded as corresponding minutiae. The sequence of corresponding minutiae is called a substructure of the fingerprint. To check the validity of a substructure pair, we propose a novel method based on an unsupervised data clustering concept. A line is drawn through a pair of minutiae in each substructure, for both the test and template fingerprints, to separate the remaining minutiae into two subclasses; this line is called a classifier line. Two substructures indeed correspond if two corresponding classifier lines separate the minutiae of each substructure into the same corresponding subclasses. Minutiae in the same subclass will generally have the same sign.
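To make this check concrete, the sketch below assigns each minutia of a substructure to a subclass with respect to a classifier line and compares the resulting sign patterns of a test/template substructure pair. It is a minimal illustration under our reading of the method: the sign is taken here as the sign of the 2-D cross product of the line direction V and the projection residual K, consistent with the definitions and Eq. (1) given next; the array layout and helper names are assumptions.

```python
import numpy as np

def subclass_signs(substructure_xy, i, j):
    """Assign each minutia of one substructure to a subclass using the classifier
    line through the substructure's i-th and j-th minutiae."""
    pts = np.asarray(substructure_xy, dtype=float)   # shape (c, 2): minutiae coordinates
    p, q = pts[i], pts[j]
    v = (q - p) / np.linalg.norm(q - p)              # direction of the classifier line
    signs = []
    for m in pts:
        proj = p + np.dot(m - p, v) * v              # projection of m onto the line
        k = m - proj                                 # vector from projected point to minutia
        signs.append(int(np.sign(v[0] * k[1] - v[1] * k[0])))  # side of the line
    return signs

# The substructure pair is accepted when corresponding classifier lines produce the
# same sign pattern on both fingerprints (the two line-defining minutiae themselves
# give sign 0 and can simply be skipped in the comparison).
def same_partition(signs_test, signs_template):
    return signs_test == signs_template
```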
To define the sign of a minutia, the minutia is first projected onto the classifier line; then the vector pointing from the projected point to the minutia point and the vector pointing from the projected point along the direction of the classifier line are constructed, as shown in Fig. 4. The sign of a minutia is calculated as follows:

(1)[TeX:] $$s = \operatorname { sgn } ( \vec { V } \times \vec { K } )$$
where s represents the sign of the two vectors, [TeX:] $$\vec { K }$$ is the vector pointing from the projected point to the minutia point, and [TeX:] $$\vec { V }$$ is the vector pointing from the projected point along the direction of the classifier line. In Fig. 4, the substructure consists of 10 minutiae, marked with circles. The classifier line is drawn from the 3rd minutia to the 4th minutia. The sign of each minutia is calculated from [TeX:] $$\vec { V }$$ and [TeX:] $$\vec { K }$$ as in Eq. (1). With this linear classifier, the 1st, 6th, 7th, and 8th minutiae are separated into one subclass, while the others fall into the other subclass.

The overall procedure for finding multiple reference points for fingerprint alignment is as follows:

1. Extract the features of each fingerprint image, where the features consist of minutiae positions, ridge orientations, and minutiae neighbor features.
2. Match every minutiae pair from the test and template fingerprint images subject to the ridge orientation condition: if |θi − θj| < thθ, go to step 3, where θi is the ridge orientation of the ith minutia of the test fingerprint, θj is the ridge orientation of the jth minutia of the template fingerprint, and thθ is a predefined threshold.
3. Select some minutiae from the neighbor sequence to generate a substructure.
4. Check the correspondence of all substructure pairs. Each corresponding substructure pair is then used to construct a transformation matrix for fingerprint alignment.

3.2 Alignment of Multiple Reference Minutiae

Once a substructure pair is accepted, we obtain two corresponding minutiae subsequences, one from the test fingerprint and the other from the template fingerprint. Suppose {mi | 1 ≤ i ≤ c} and {mi* | 1 ≤ i ≤ c} are substructures of the test fingerprint and the template fingerprint, respectively, where c is the number of minutiae in each substructure. The correspondence between mi and mi* is defined by a transformation matrix T that can be expressed as

(2)[TeX:] $$m _ { i } ^ { * } = T \left( m _ { i } \right) , \quad i = 1 , \ldots , c$$
To align two fingerprints based on multiple reference points using our approach, we employ a polynomial transformation. In general, any polynomial transformation (T) of two variables can be expressed as follows:
(3)[TeX:] $$x _ { i } ^ { * } = \sum _ { k = 1 } ^ { d } \sum _ { l = 1 } ^ { d } a _ { k l } x _ { i } ^ { k } y _ { i } ^ { l } \\ y _ { i } ^ { * } = \sum _ { k = 1 } ^ { d } \sum _ { l = 1 } ^ { d } b _ { k l } x _ { i } ^ { k } y _ { i } ^ { l }$$

where (xi, yi) and (xi*, yi*) are the minutiae coordinates on the test and template fingerprint images, respectively, d is the order of the polynomial function, and a and b are the coefficients of the polynomial transformation function. To find the solution of this polynomial function, the least-squares method can be applied. The solutions are used to generate a transformation matrix for fingerprint alignment. In our implementation, we use the MATLAB function cp2tform to find this solution.
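A minimal NumPy sketch of the least-squares fit that cp2tform performs is given below. It is an illustration under stated assumptions rather than the paper's implementation: the design matrix here includes the constant and lower-order monomials (all terms with k + l ≤ d, as MATLAB's polynomial transform does), whereas Eq. (3) writes the sums from k, l = 1; the function names and the default order are assumptions.

```python
import numpy as np

def poly_terms(x, y, order=2):
    """Design matrix of monomials x**k * y**l with k + l <= order."""
    return np.array([x ** k * y ** l
                     for k in range(order + 1)
                     for l in range(order + 1 - k)]).T

def fit_polynomial_transform(src_xy, dst_xy, order=2):
    """Least-squares fit of coefficients a, b mapping test coordinates (src)
    onto template coordinates (dst), using the corresponding reference points."""
    src, dst = np.asarray(src_xy, float), np.asarray(dst_xy, float)
    A = poly_terms(src[:, 0], src[:, 1], order)          # one row per reference point
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b

def apply_transform(a, b, xy, order=2):
    """Map an arbitrary set of minutiae coordinates with the fitted transform."""
    xy = np.asarray(xy, float)
    A = poly_terms(xy[:, 0], xy[:, 1], order)
    return np.column_stack([A @ a, A @ b])
```

With the roughly 10 corresponding reference points of an accepted substructure pair as src_xy and dst_xy, a second-order fit is well determined (six coefficients per coordinate), and apply_transform maps every minutia of the test fingerprint into the template's coordinate frame.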
4. Matching Score Computation

After the alignment stage, matching scores are determined using both minutiae and ridge orientation features. The new minutiae set of the aligned test fingerprint and the minutiae set of the template fingerprint are used to compute the matching scores. A matching score is computed for each corresponding substructure pair, and the maximum value is selected to represent the similarity of the two fingerprints. If the selected matching score is higher than a predefined threshold, the two fingerprints are reported as originating from the same finger. To compute each matching score, we consider only the minutiae points located inside the intersection area of the aligned test and template fingerprint images. The matching score (S) can be calculated by

(4)[TeX:] $$S = w S _ { m } + ( 1 - w ) S _ { r }$$

where w (0 ≤ w ≤ 1) weights the matching score between the minutiae and ridge orientation features, and Sm and Sr are the matching scores computed from the minutiae and orientation features, respectively. Sm can be computed by
(5)[TeX:] $$S _ { m } = \frac { \sum _ { i = 1 } ^ { N _ { 1 } } \sum _ { j = 1 } ^ { N _ { 2 } } S \left( m _ { i } , m _ { j } \right) } { N _ { 1 } N _ { 2 } }$$

where mi is the ith minutia of the test fingerprint and mj is the jth minutia of the template fingerprint. N1 and N2 are the numbers of minutiae of the test fingerprint and the template fingerprint in the intersection area. S(mi, mj) is the similarity level between the two minutiae, which is given by
(6)[TeX:] $$S \left( m _ { i } , m _ { j } \right) = \left\{ \begin{array} { l } { 1 ; \operatorname { dist } \left( m _ { i } , m _ { j } \right) < t h _ { d } \text { and } \left| \theta _ { i } - \theta _ { j } \right| < t h _ { \theta } } \\ { 0 ; \text { otherwise } } \end{array} \right.$$

where dist(mi, mj) is the Euclidean distance between the coordinates of the ith minutia of the test fingerprint and the jth minutia of the template fingerprint, θi and θj are the ridge orientations at the ith minutia of the test fingerprint and the jth minutia of the template fingerprint, respectively, and thd and thθ are the predefined thresholds on the distance and on the angle difference, respectively. The angle difference is defined as follows:
(7)[TeX:] $$\left| \theta _ { i } - \theta _ { j } \right| = \left\{ \begin{array} { l l } { \left| \theta _ { i } - \theta _ { j } \right| } & { ; \left| \theta _ { i } - \theta _ { j } \right| < \pi } \\ { 2 \pi - \left| \theta _ { i } - \theta _ { j } \right| } & { ; \text { otherwise } } \end{array} \right.$$

For ridge compatibility, the region located inside the intersection area of the test and template fingerprints is equally divided into k×k pixel blocks, and the ridge orientations at the center of each block on the test and template fingerprints are then detected. The ridge compatibility is computed from the similarity of those ridge orientations as follows:
(8)[TeX:] $$S _ { r } = \frac { \sum _ { i = 1 } ^ { N _ { r } } S \left( \theta _ { \text {test} , i } , \theta _ { \text {template} , i } \right) } { N _ { r } }$$

where S(θtest,i, θtemplate,i) is the similarity level of the ridge orientations in the ith block of the test fingerprint and the template fingerprint. The similarity level is set to 1 if the angle difference between the ridge orientations is less than a predefined threshold, and to 0 otherwise. Nr is the number of blocks.
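A minimal sketch of the score computation in Eqs. (4)–(8) is given below, assuming the minutiae have already been aligned and restricted to the intersection area. The threshold values, the weight w, and the data layout are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def angle_diff(t1, t2):
    """Angle difference with the wrap-around of Eq. (7)."""
    d = abs(t1 - t2)
    return d if d < np.pi else 2 * np.pi - d

def minutiae_score(test_m, templ_m, th_d=10.0, th_theta=np.pi / 12):
    """Eqs. (5)-(6): test_m and templ_m hold rows (x, y, theta) of the minutiae
    inside the intersection area; both thresholds are illustrative values."""
    n1, n2 = len(test_m), len(templ_m)
    if n1 == 0 or n2 == 0:
        return 0.0
    hits = sum(1 for x1, y1, t1 in test_m for x2, y2, t2 in templ_m
               if np.hypot(x1 - x2, y1 - y2) < th_d and angle_diff(t1, t2) < th_theta)
    return hits / (n1 * n2)

def ridge_score(test_blocks, templ_blocks, th_theta=np.pi / 12):
    """Eq. (8): fraction of blocks whose centre ridge orientations agree."""
    pairs = list(zip(test_blocks, templ_blocks))
    return sum(angle_diff(a, b) < th_theta for a, b in pairs) / len(pairs) if pairs else 0.0

def matching_score(test_m, templ_m, test_blocks, templ_blocks, w=0.5):
    """Weighted combination of Eq. (4); w = 0.5 is an illustrative choice."""
    return w * minutiae_score(test_m, templ_m) + (1 - w) * ridge_score(test_blocks, templ_blocks)
```

This score is computed once per corresponding substructure pair (i.e., per candidate alignment), and the maximum over all pairs is compared against the decision threshold.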
5. Experiments

In order to evaluate the performance of our approach, we tested the proposed method on the FVC2002 and FVC2004 databases [32]. These fingerprint images are appropriate for measuring the robustness of our approach, since they include dry, wet, scratched, distorted, and rotated fingerprint images; in particular, the FVC2004 database has a greater number of non-linearly distorted fingerprint images than the other databases. Moreover, the FVC2004 database is used as a benchmark for measuring the performance of minutiae matching algorithms. Therefore, it is reasonable to analyze the performance of the proposed method using these databases. We first describe the databases used in the performance evaluation and then present the matching results.

5.1 Database

Both the FVC2002 and FVC2004 databases contain four distinct datasets: DB1, DB2, DB3, and DB4. Each dataset contains 800 fingerprint images from 8 impressions of 100 fingers. We perform both genuine and impostor match tests. For the genuine match tests, each impression of each finger is matched against the other impressions of the same finger, giving 2,800 genuine match tests for each dataset. For the impostor match tests, each impression is matched against the impressions of different fingers, giving 316,800 impostor match tests for each dataset.

5.2 Performance of Our Approach on FVC2002 and FVC2004

To show the accuracy of our matching system, we report the equal error rate (EER), the operating point at which the false match rate (FMR) equals the false non-match rate (FNMR). A smaller EER indicates better system performance. By changing the threshold value, we obtain receiver operating characteristic (ROC) curves showing the relation between FMR and FNMR. The ROC curves of our approach on the FVC2002 and FVC2004 databases are plotted on log-log scales in Figs. 5 and 6, respectively. Moreover, the EER of our proposed method on FVC2002 and FVC2004 is shown in Table 1. It can be seen that the EER of our approach on FVC2002 is better than that on FVC2004. This result may be due to the fact that the FVC2004 datasets contain more distorted fingerprints than FVC2002.

Table 1. EER of the proposed method on the FVC2002 and FVC2004 databases

To benchmark the overall performance of our proposed method, we compare our approach with the methods of Nguyen et al. [26] and Cao et al. [31]. In the experiments on FVC2002, we found that the average EERs of all methods are not significantly different. However, when all methods are tested on FVC2004, which has more non-linearly distorted fingerprint images than FVC2002, the EER of our proposed method is comparable to those of [26] and [31], as illustrated in Table 2. It can be seen that [26] gives a better result than the other approaches on DB4. However, our approach produces a significantly lower EER than [26] and [31] on DB1 and DB2 because it aligns fingerprint images based on multiple reference points. Because our approach distributes the reference points over various fingerprint regions, it generally guarantees the alignment of regions that are not far from some reference point. In this sense, our method is robust to non-linearly distorted fingerprint images. On average, our approach gives a lower EER than the other methods.

6. Conclusion

We propose a novel method for fingerprint matching, especially for non-linearly distorted fingerprints. Our method has four main phases: feature extraction, corresponding point extraction, alignment, and matching score computation. We propose a novel alignment scheme using multiple reference points. Following that, the matching scores of the two fingerprints are calculated using minutiae and ridge orientation features. The maximum matching score is selected to represent the similarity of the two fingerprints, which in turn is used to determine whether a match occurs. To evaluate the performance of our approach, we performed experiments on the FVC2002 and FVC2004 databases. We find that our approach outperforms the approaches of Nguyen et al. [26] and Cao et al. [31], especially on fingerprint images with strong non-linear distortion.

Biography

Kittiya Khongkraphan
https://orcid.org/0000-0002-1833-0863
She received the B.Sc. degree in Mathematics and the M.Sc. degree in Computer Science from Prince of Songkla University (PSU), Thailand, in 1991 and 2000, respectively. She received the Ph.D. degree in Electrical and Computer Engineering from King Mongkut's University of Technology Thonburi (KMUTT), Thailand, in 2010. She is currently a lecturer at the Faculty of Science and Technology, PSU. Her research interests include computer vision and image processing.

References