Image Analysis by Krawtchouk Moments

Pew-Thian Yap, Raveendran Paramesran, Senior Member, IEEE, and Seng-Huat Ong
Abstract—In this paper, a new set of orthogonal moments based on the discrete classical Krawtchouk polynomials is introduced. The Krawtchouk polynomials are scaled to ensure numerical stability, thus creating a set of weighted Krawtchouk polynomials. The set of proposed Krawtchouk moments is then derived from the weighted Krawtchouk polynomials. The orthogonality of the proposed moments ensures minimal information redundancy. No numerical approximation is involved in deriving the moments, since the weighted Krawtchouk polynomials are discrete. These properties make the Krawtchouk moments well suited as pattern features in the analysis of two-dimensional images. It is shown that the Krawtchouk moments can be employed to extract local features of an image, unlike other orthogonal moments, which generally capture global features. The computational aspects of the moments, using the recursive and symmetry properties, are discussed. The theoretical framework is validated by an experiment on image reconstruction using Krawtchouk moments, and the results are compared to those of Zernike, Pseudo-Zernike, Legendre, and Tchebichef moments. Krawtchouk moment invariants are constructed using a linear combination of geometric moment invariants, and an object recognition experiment shows that Krawtchouk moment invariants perform significantly better than Hu's moment invariants in both noise-free and noisy conditions.

Index Terms—Discrete orthogonal systems, Krawtchouk moments, Krawtchouk polynomials, local features, orthogonal moments, region-of-interest, weighted Krawtchouk polynomials.
I. INTRODUCTION
Moments, due to their ability to represent global features, have found extensive applications in the field of image processing [1]–[9]. In 1962, Hu [1] introduced moment invariants. Based on the theory of algebraic invariants, he derived a set of moment invariants which are position, size, and orientation independent. Dudani et al. [3] used Hu's moment invariants up to the third order in the recognition of images of aircraft. The same invariants were also used for the recognition of ships [4]. Markandey et al. [9] developed techniques for robot sensing based on high-dimensional moment invariants and tensors. However, regular moments are not orthogonal and, as a consequence, reconstructing the image from the moments is a difficult task. Teague [2], based on the theory of continuous orthogonal polynomials, has shown that the image can be reconstructed easily from a set of orthogonal moments, such as Zernike and
Legendre moments. Zernike moments are able to store image information with minimal information redundancy and have the property of being rotationally invariant. Khotanzad et al. [5] used Zernike moment invariants in the recognition and pose estimation of three-dimensional objects. Belkasim et al. [6], [7] did a comparative study of Zernike moment invariants and used them in shape recognition. Ghosal et al. [8] utilized Zernike moments in composite-edge detection of three-dimensional objects. Legendre moments are constructed using the Legendre polynomials. The image representation capability, information redundancy, noise sensitivity, and computational aspects of Legendre moments are studied in [2], [10]–[12]. Bailey et al. [13] used Legendre moments for the recognition of handwritten Arabic numerals. One common problem with the aforementioned moments is the discretization error, which accumulates as the order of the moments increases and hence limits the accuracy of the computed moments [14]–[16]. Liao and Pawlak [15] did a discretization error analysis of moments and proposed a variation of Simpson's rule to keep the approximation error below a certain level. Some studies concerning the discretization error in the case of geometric moments were also performed by Teh and Chin [17]. A general treatment of the quantization error can be found in [16]. Besides the discretization error, other problems associated with continuous orthogonal moments are large variations in the dynamic range of values and the need to transform coordinate spaces for Zernike and Legendre moments. Recently, to remedy these problems, a set of discrete orthogonal moment functions based on the discrete Tchebichef polynomials was introduced [18]. That study showed that the implementation of Tchebichef moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. This property makes Tchebichef moments superior to the conventional continuous orthogonal moments in terms of preserving the analytical properties needed to ensure minimal information redundancy in a moment set. In this paper, another new set of orthogonal moments is proposed. It is based on the discrete classical Krawtchouk polynomials [19]–[24]. Similar to Tchebichef moments, there is no need for spatial normalization; hence, the error in the computed Krawtchouk moments due to discretization is nonexistent. Unlike the above-mentioned moments, Krawtchouk moments have the ability to extract local features from any region-of-interest in an image. This can be accomplished by appropriately varying a parameter of the binomial distribution associated with the Krawtchouk polynomials. For a list of discrete polynomials, the reader is referred to [20], [21].
This paper also details the computational aspects of the Krawtchouk moments. The Krawtchouk polynomials are scaled to ensure that all the computed moments have equal weights. A three-term recurrence relation is used to avoid the direct computation of the hypergeometric and gamma functions, which tend to be numerically unstable as the order of the moments increases. The symmetry property of the Krawtchouk polynomials is utilized to reduce the number of computation points of the Krawtchouk polynomials by half and correspondingly reduce the time needed to compute the moments. For the purpose of object recognition, it is vital that Krawtchouk moments be independent of rotation, scaling, and translation of the image. It is shown that Krawtchouk moment invariants can be formed by a linear combination of geometric moment invariants. The recognition accuracy of Krawtchouk moment invariants is compared with that of Hu's moment invariants [1]. This paper is organized as follows. In Section II, the definitions of Krawtchouk polynomials, weighted Krawtchouk polynomials, Krawtchouk moments, and Krawtchouk moment invariants are presented. Section III details the computational aspects of Krawtchouk moments. It is shown how the recursive and symmetry properties of the weighted Krawtchouk polynomials are used to facilitate the computation of the moments. Section IV provides experimental validation of the theoretical framework developed in the previous sections. The first subsection examines how well an image can be reconstructed from Krawtchouk moments. The second subsection shows how Krawtchouk moments can be used to extract features of a selected region-of-interest of an image. A comparison of the recognition accuracy of Krawtchouk moment invariants with that of Hu's moment invariants in an object recognition task is provided in the last subsection. Section V concludes this paper.
II. KRAWTCHOUK MOMENTS

Krawtchouk moments are a set of moments formed by using Krawtchouk polynomials as the basis function set. Krawtchouk polynomials, introduced by Mikhail Krawtchouk [23], [24], are a set of polynomials associated with the binomial distribution. In this section, the definitions of the Krawtchouk and weighted Krawtchouk polynomials are given first, followed by those of the Krawtchouk moments and the Krawtchouk moment invariants.

A. Krawtchouk Polynomials

The n-th order classical Krawtchouk polynomial [21] is defined as

K_n(x; p, N) = \sum_{k=0}^{N} a_{k,n,p}\, x^k = {}_2F_1\!\left(-n, -x; -N; \frac{1}{p}\right)   (1)

where x, n = 0, 1, 2, \ldots, N, N > 0, p \in (0, 1), and {}_2F_1 is the hypergeometric function, defined as

{}_2F_1(a, b; c; z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}   (2)

where (a)_k is the Pochhammer symbol given by

(a)_k = a(a+1)\cdots(a+k-1) = \frac{\Gamma(a+k)}{\Gamma(a)}.   (3)

The set of Krawtchouk polynomials \{K_n(x; p, N)\} forms a complete set of discrete basis functions with weight function

w(x; p, N) = \binom{N}{x} p^x (1-p)^{N-x}   (4)

and satisfies the orthogonality condition

\sum_{x=0}^{N} w(x; p, N)\, K_n(x; p, N)\, K_m(x; p, N) = \rho(n; p, N)\, \delta_{nm}   (5)

where n, m = 0, 1, \ldots, N and

\rho(n; p, N) = (-1)^n \left(\frac{1-p}{p}\right)^{n} \frac{n!}{(-N)_n}.

Examples of Krawtchouk polynomials up to the second order are

K_0(x; p, N) = 1,
K_1(x; p, N) = 1 - \frac{x}{Np},
K_2(x; p, N) = 1 - \frac{2x}{Np} + \frac{x(x-1)}{N(N-1)p^2}.

B. Weighted Krawtchouk Polynomials
The conventional method of avoiding numerical fluctuations in moment computations is normalization by the norm. The Krawtchouk polynomials normalized with respect to the norm are defined as

\tilde{K}_n(x; p, N) = \frac{K_n(x; p, N)}{\sqrt{\rho(n; p, N)}}.   (6)

Fig. 1(a) shows the plots of the first few orders of the normalized Krawtchouk polynomials, and it is easily observed that the range of values of the polynomials expands rapidly with a slight increase of the order. Empirical results show that it is not unusual for the range of values to become extremely large for large values of N (e.g., N = 100). It is seen that normalization alone is inadequate to ensure numerical stability of the Krawtchouk polynomials. Therefore, to achieve numerical stability, a set of weighted Krawtchouk polynomials is introduced in this subsection: in addition to normalizing the polynomials by the norm, the square root of the weight is also introduced as a scaling factor. The set of weighted Krawtchouk polynomials \{\bar{K}_n(x; p, N)\} is defined by

\bar{K}_n(x; p, N) = K_n(x; p, N)\, \sqrt{\frac{w(x; p, N)}{\rho(n; p, N)}}   (7)

such that the orthogonality condition becomes

\sum_{x=0}^{N} \bar{K}_n(x; p, N)\, \bar{K}_m(x; p, N) = \delta_{nm}.   (8)

Note that the square norm of the weighted Krawtchouk polynomials is unity, and hence the values of the weighted Krawtchouk polynomials are confined within the range [-1, 1], as shown in Fig. 1(b).

Fig. 1. Comparison between normalized and weighted Krawtchouk polynomials for p = 0.5, N = 100.
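To make the definitions above concrete, the following is a minimal Python sketch (ours, not part of the original paper) that evaluates the weighted Krawtchouk polynomials directly from (1)-(7) and numerically checks the orthonormality condition (8). The function names (krawtchouk, weighted_krawtchouk, etc.) are illustrative. Direct evaluation of the hypergeometric sum is only practical for modest N; Section III describes the recurrences used in practice.

```python
# A minimal sketch of the weighted Krawtchouk polynomials of (7), built
# directly from the terminating hypergeometric series (1)-(3).
import math
import numpy as np

def krawtchouk(n, x, p, N):
    """K_n(x; p, N) via the terminating 2F1 series of (1)-(3)."""
    total = 0.0
    for k in range(n + 1):
        # Pochhammer symbols (-n)_k, (-x)_k, (-N)_k and k!
        num = math.prod(-n + i for i in range(k)) * math.prod(-x + i for i in range(k))
        den = math.prod(-N + i for i in range(k)) * math.factorial(k)
        total += num / den * (1.0 / p) ** k
    return total

def weight(x, p, N):
    """Binomial weight w(x; p, N) of (4)."""
    return math.comb(N, x) * p ** x * (1 - p) ** (N - x)

def rho(n, p, N):
    """Squared norm rho(n; p, N) of (5)."""
    return ((1 - p) / p) ** n / math.comb(N, n)

def weighted_krawtchouk(n, x, p, N):
    """Weighted polynomial of (7)."""
    return krawtchouk(n, x, p, N) * math.sqrt(weight(x, p, N) / rho(n, p, N))

# Orthonormality check of (8) for a small N.
N, p = 20, 0.5
K = np.array([[weighted_krawtchouk(n, x, p, N) for x in range(N + 1)]
              for n in range(4)])
print(np.allclose(K @ K.T, np.eye(4)))   # True up to round-off
```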
Fig. 2. Comparison between weighted Krawtchouk polynomials for p = 0.3 and p = 0.7, for N = 100.

Fig. 2 shows the plots of weighted Krawtchouk polynomials under different conditions. It can be observed that, as p deviates from the value of 0.5 by Δp, that is, if p = 0.5 + Δp, the weighted Krawtchouk polynomials are approximately shifted by NΔp. The direction of shifting is dependent on the sign of Δp, with the weighted Krawtchouk polynomials shifting in the +x direction when Δp is positive and vice versa. It shall be shown later that this property is crucial to the region-of-interest feature extraction capability of Krawtchouk moments.

C. Krawtchouk Moments

Krawtchouk moments have the interesting property of being able to extract local features of an image. The Krawtchouk moments of order (n + m), in terms of the weighted Krawtchouk polynomials, for an image with intensity function f(x, y), are defined as

Q_{nm} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \bar{K}_n(x; p_1, N-1)\, \bar{K}_m(y; p_2, M-1)\, f(x, y).   (9)

The parameters N and M of the weighted Krawtchouk polynomials are substituted with N-1 and M-1, respectively, to match the pixel points of an N × M image. The Krawtchouk moment Q_{00}, corresponding to \bar{K}_0 \bar{K}_0, is the weighted mass of the image, i.e.,

Q_{00} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \bar{K}_0(x; p_1, N-1)\, \bar{K}_0(y; p_2, M-1)\, f(x, y).   (10)

By solving (8) and (9) for f(x, y), the image intensity function can be written completely in terms of the Krawtchouk moments, i.e.,

f(x, y) = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} Q_{nm}\, \bar{K}_n(x; p_1, N-1)\, \bar{K}_m(y; p_2, M-1).   (11)

One way of interpreting the above equation is that the image intensity function is represented as a series of weighted Krawtchouk polynomials, weighted by the Krawtchouk moments. If the moments are limited to order (N_max, M_max), the series is truncated to

\hat{f}(x, y) = \sum_{n=0}^{N_{max}} \sum_{m=0}^{M_{max}} Q_{nm}\, \bar{K}_n(x; p_1, N-1)\, \bar{K}_m(y; p_2, M-1)   (12)

where \hat{f}(x, y) = f(x, y) if N_max = N - 1 and M_max = M - 1 (see (13) at the bottom of the page).

(13)

Observe from (9) that the Krawtchouk moments are in fact the inner product of f(x, y) and the weighted Krawtchouk polynomials. Therefore, the appropriate selection of p_1 and p_2 enables local features at different positions of the image to be extracted by the lower order Krawtchouk moments. Using Parseval's theorem it can be shown that

\sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x, y)^2 = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} Q_{nm}^2.   (14)

As the lower order Krawtchouk moments store information of a specific region-of-interest of an image, the higher order moments store information of the rest of the image. Therefore, by reconstructing the image from the lower order moments and discarding the higher order moments, a subimage can be extracted from the subject image. It is also evident that, for each additional moment used in reconstructing the image, the square error of the reconstructed image is reduced, that is

(15)

where ε is the square error and Q_{ij} the additional moment. It follows that, if all moments are used, the image is perfectly reconstructed. For the rest of the paper, we assume the case of N = M.

D. Krawtchouk Moment Invariants

If the geometric moments of an image with image intensity function f(x, y) are defined using the discrete sum approximation as

M_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} x^p y^q f(x, y)   (16)

then the standard set of geometric moment invariants, which are independent of rotation, scaling, and translation [1], can be written as

\nu_{pq} = M_{00}^{-\gamma} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \left[(x - \bar{x})\cos\theta + (y - \bar{y})\sin\theta\right]^p \left[(y - \bar{y})\cos\theta - (x - \bar{x})\sin\theta\right]^q f(x, y)   (17)

where

\gamma = \frac{p + q}{2} + 1   (18)

\bar{x} = \frac{M_{10}}{M_{00}}   (19)

\bar{y} = \frac{M_{01}}{M_{00}}   (20)

\theta = \frac{1}{2} \tan^{-1} \frac{2\mu_{11}}{\mu_{20} - \mu_{02}}   (21)

and \mu_{pq} are the central moments defined in [1] as

\mu_{pq} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} (x - \bar{x})^p (y - \bar{y})^q f(x, y).   (22)
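The invariants (16)-(22) can be computed directly from the image array. The short Python sketch below is illustrative only (the function and variable names are ours); it follows the standard Hu-style normalization described above.

```python
# Sketch of the rotation-, scale- and translation-invariant geometric
# moments of (16)-(22). Illustrative, not the authors' code.
import numpy as np

def geometric_moment_invariant(f, p, q):
    """nu_pq of (17) for a 2-D intensity array f."""
    N, M = f.shape
    x, y = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")

    m00 = f.sum()                       # M_00 of (16)
    xbar = (x * f).sum() / m00          # (19)
    ybar = (y * f).sum() / m00          # (20)

    def mu(r, s):                       # central moments, (22)
        return ((x - xbar) ** r * (y - ybar) ** s * f).sum()

    # theta of (21); atan2 is used here to avoid division by zero when
    # mu_20 = mu_02 (the text points to [2] for the full 0-360 degree range).
    theta = 0.5 * np.arctan2(2 * mu(1, 1), mu(2, 0) - mu(0, 2))

    # Rotated, centred coordinates; gamma of (18) gives scale invariance.
    u = (x - xbar) * np.cos(theta) + (y - ybar) * np.sin(theta)
    v = (y - ybar) * np.cos(theta) - (x - xbar) * np.sin(theta)
    gamma = (p + q) / 2 + 1
    return (u ** p * v ** q * f).sum() / m00 ** gamma

# Example usage on a simple binary rectangle.
img = np.zeros((30, 30))
img[5:20, 8:15] = 1.0
print(geometric_moment_invariant(img, 2, 0))
```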
The value of θ according to (21) is limited to the range of -45° to 45°. To obtain the exact angle in the range of 0° to 360°, modifications can be made as shown in [2]. The Krawtchouk moments of f(x, y) can be written in terms of the geometric moments M_{pq} as

(23)

where a_{i,n,p} are the coefficients determined by (1). Hence, Q_{nm} is a linear combination of the geometric moments M_{ij}, up to order i = N-1 and j = M-1, weighted by the coefficients a_{i,n,p_1} and a_{j,m,p_2}. Notice that (23) transforms the nonorthogonal geometric moments into the orthogonal Krawtchouk moments. Notice also that the normalized image according to (17) does not fall inside the domain of [0, N-1] × [0, M-1], as required by the Krawtchouk moments; therefore, it is modified to

(24)

which can be written in terms of the geometric moments as

(25)

where the centroid of the image is now shifted to the center of the plane. Since the image is scale-normalized, the combination in (25) involving odd orders does not vanish for symmetrical images; hence, (24) solves the symmetry problem addressed by Palaniappan et al. in [25], [26]. The new set of moments can be formed by replacing the regular geometric moments M_{pq} in (23) by their invariant counterparts ν_{pq}. From (23), we have

(26)

Note that the new set of moments is rotation, scale, and translation invariant. We shall designate this set of moments as the Krawtchouk moment invariants in the rest of this paper. Some examples of them are obtained by evaluating (26) at the lowest orders. Note that, in our case, we set the parameters to (p_1, p_2) = (0.5, 0.5), so that the emphasis of the moments will be at the center of the image. This is consistent with the fact that (24) normalizes the image and shifts the centroid to the center of the plane.

III. COMPUTATION ASPECTS OF KRAWTCHOUK MOMENTS

This section discusses the computational aspects of Krawtchouk moments. It is shown in the first subsection how the recurrence relations of the weighted Krawtchouk polynomials can be used to avoid numerical instability. In the next subsection, the utilization of the symmetry property (valid only for p_1 = p_2 = 0.5) is shown to save computation time and storage space. A concise representation of the Krawtchouk moment transform and its inverse counterpart in matrix form is provided in the third subsection.

A. Recurrence Relations

In order to make the computation of the moments less demanding on the processor, a recurrence relation can be used to avoid overflow in mathematical functions such as the hypergeometric and gamma functions. The three-term recurrence relation of the weighted Krawtchouk polynomials with respect to n is

(27)

with

\bar{K}_0(x; p, N) = \sqrt{w(x; p, N)}, \qquad \bar{K}_1(x; p, N) = \left(1 - \frac{x}{Np}\right)\sqrt{\frac{w(x; p, N)}{\rho(1; p, N)}}.
Similarly, the weight function in (4) can be calculated recursively using

w(x+1; p, N) = \frac{N - x}{x + 1} \cdot \frac{p}{1 - p}\, w(x; p, N)   (28)

with w(0; p, N) = (1 - p)^N.
B. Symmetry Property

The computation time of the Krawtchouk moments for the special case of p_1 = p_2 = 0.5 can be reduced considerably by using the symmetry property. The symmetry relation of the weighted Krawtchouk polynomials can be easily derived to be

\bar{K}_n(N - 1 - x; 0.5, N-1) = (-1)^n\, \bar{K}_n(x; 0.5, N-1).   (29)

For calculating the Krawtchouk moments of the 2-D image intensity function f(x, y), this means savings in terms of both computation time and storage space for the Krawtchouk polynomials. For the case where N is even, only the values of \bar{K}_n(x; 0.5, N-1) for x = 0, 1, \ldots, N/2 - 1 need to be calculated; the symmetry relation (29) can be used to determine the remaining values. If the calculated moments are limited to a maximum order, the storage space required for the Krawtchouk polynomials is halved. If the image is subdivided into four equal quadrants, only the polynomials in the first quadrant need to be calculated. If N is odd, the image can be zero-padded to achieve an even N. For a more detailed explanation, refer to [18].

C. Representation in Matrix Form

In matrix form, the Krawtchouk moment matrix Q is defined as

(30)

where the superscript T denotes the transpose of a matrix and the remaining matrices collect the image intensities and the weighted Krawtchouk polynomial values. The image can be reconstructed by

(31)

The matrix representation is very useful in software packages such as MATLAB.
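Since the exact notation of (30)-(31) is not reproduced above, the following Python sketch only relies on the separable structure of (9) and (11): with A1[x, n] = K̄_n(x; p1, N-1) and A2[y, m] = K̄_m(y; p2, M-1), the moment matrix is Q = A1ᵀ F A2 and the image is recovered as A1 Q A2ᵀ. The function names are ours; the polynomial matrices can be built, e.g., with the recurrence sketch in Section III-A.

```python
# Matrix formulation of the Krawtchouk moment transform and its inverse,
# assuming precomputed weighted-polynomial matrices A1 (N x N) and A2 (M x M).
import numpy as np

def krawtchouk_moments(F, A1, A2):
    """Moment matrix Q[n, m] of an N x M image F, cf. (9)."""
    return A1.T @ F @ A2

def reconstruct(Q, A1, A2, n_max=None, m_max=None):
    """Inverse transform (11); truncating Q as in (12) gives an approximation."""
    if n_max is not None and m_max is not None:
        Q = Q[: n_max + 1, : m_max + 1]
        A1, A2 = A1[:, : n_max + 1], A2[:, : m_max + 1]
    return A1 @ Q @ A2.T

# Quick self-check with random orthonormal matrices standing in for A1, A2:
# with a full (untruncated) orthonormal basis, reconstruction is exact.
rng = np.random.default_rng(0)
A1, _ = np.linalg.qr(rng.standard_normal((32, 32)))
A2, _ = np.linalg.qr(rng.standard_normal((32, 32)))
F = rng.random((32, 32))
print(np.allclose(reconstruct(krawtchouk_moments(F, A1, A2), A1, A2), F))
```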
Fig. 3. Set of uppercase and lowercase English letters used as test images; the size of each is 30 × 30.

IV. EXPERIMENTAL STUDY

In this section, experimental results are provided to validate the theoretical framework developed in the previous sections. This section is divided into three subsections. In the first subsection, the question of how well an image can be represented by Krawtchouk moments is addressed. The image reconstruction capability of Krawtchouk moments is compared with that of Zernike, Pseudo-Zernike, Legendre [11], [15] and Tchebichef moments [18]. In the second subsection, the effect of the parameters p_1 and p_2 of the Krawtchouk moments on region-of-interest feature extraction is discussed. Finally, the recognition accuracy of Krawtchouk moment invariants (refer to Section II-D) is tested and compared to that of Hu's moment invariants in an object recognition experiment.

A. Image Reconstruction From Krawtchouk Moments

In this subsection, the image representation capability of Krawtchouk moments is shown. The Krawtchouk moments of the image are first calculated and subsequently the image representation power is verified by reconstructing the image from the moments. An objective measure is used to characterize the error between the original image, f(x, y), and the reconstructed image, \hat{f}(x, y); it is defined as follows:

(32)

where \hat{g}(x, y) is obtained from \hat{f}(x, y) through the threshold operator

(33)
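The exact definitions of (32)-(33) are not spelled out above, so the following sketch only assumes a common instantiation for binary test images: the reconstruction is thresholded at 0.5 and the mean squared difference from the original is taken. The names and the threshold value are our assumptions, not the paper's.

```python
# Hedged sketch of a reconstruction-error measure in the spirit of (32)-(33).
import numpy as np

def reconstruction_error(f, f_hat, threshold=0.5):
    g_hat = (f_hat >= threshold).astype(float)   # assumed form of the threshold operator (33)
    return np.mean((f - g_hat) ** 2)             # assumed form of the error measure (32)
```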
The set of English letters, both uppercase and lowercase, as shown in Fig. 3, is used as test images. The size of each binary image is 30 × 30. The Krawtchouk moments are obtained from the images shown in Fig. 3, and the images are reconstructed using a maximum order of 20. The results are tabulated in Fig. 4. In the next experiment, the letter "E" is considered for both noiseless and noisy conditions, and the results are compared with those of Zernike, Pseudo-Zernike, Legendre, and Tchebichef moments. Note that in this experiment the condition p_1 = p_2 = 0.5 is used. This is done to place the region-of-interest at the center of the image. The more general case of p_1, p_2 ≠ 0.5 will be discussed in the next subsection. Fig. 5 shows a comparison of the reconstructed images and their respective errors for Krawtchouk, Zernike, Pseudo-Zernike, Legendre, and Tchebichef moments, for orders 9 to 16. As can be seen from the figure, the reconstructed images using Krawtchouk moments show more visual resemblance to the original image at the early orders. The edges of the reconstructed images are also better defined, with less jaggedness.
Fig. 6. Comparative study of reconstruction error of Krawtchouk, Zernike, Pseudo-Zernike, Legendre and Tchebichef moments of the image “E” without noise.
Fig. 4. Reconstruction using Krawtchouk moments up to the 20th order of both uppercase and lowercase English letters, each of size 30 × 30.
Fig. 7. Comparative study of reconstruction error of Krawtchouk, Zernike, Pseudo-Zernike, Legendre and Tchebichef moments of the image “E” with 10% salt-and-pepper noise.
Fig. 5. Image reconstruction of the alphabet "E" of size 30 × 30 from orders 9 to 16.
This observation is explained by examining Fig. 1. The lower order weighted Krawtchouk polynomials have relatively high spatial frequency components. This, together with the fact that the Krawtchouk polynomials are polynomials of a discrete variable, contributes to the ability of the Krawtchouk moments to represent edges (sharp changes of the image intensity values) more effectively. Fig. 6 shows a detailed plot of the reconstruction error for the various moments up to a maximum order of 20. As can be seen from the figure, Krawtchouk moments perform better than the other moments. In addition, the rate of convergence in terms of reconstruction error is faster in the case of Krawtchouk moments. To test the robustness of Krawtchouk moments in the presence of noise, the same image "E" is perturbed with 10% salt-and-pepper noise. The image reconstruction results are summarized in Fig. 7. It can be seen that the Krawtchouk moments produce less error at lower orders of moments.
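For readers who wish to reproduce this robustness test, the following is a small Python sketch (ours, not the authors' code) of adding salt-and-pepper noise at a given density, e.g., 0.10 for the 10% case above.

```python
# Salt-and-pepper corruption of an image at a given density.
import numpy as np

def salt_and_pepper(image, density, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(float).copy()
    hit = rng.random(image.shape) < density     # pixels to corrupt
    salt = rng.random(image.shape) < 0.5        # half salt, half pepper
    noisy[hit & salt] = image.max()
    noisy[hit & ~salt] = image.min()
    return noisy
```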
Fig. 8. Image reconstruction for gray-level images of size 96 × 96. Moments up to order 80 are used.

Fig. 9. Image reconstruction of the "ABCD" image (a) of size 128 × 128 with different p_1's and p_2's. (b) (p_1, p_2) = (0.1, 0.1), (c) (p_1, p_2) = (0.9, 0.1), (d) (p_1, p_2) = (0.1, 0.9), and (e) (p_1, p_2) = (0.9, 0.9). Krawtchouk moments up to the 20th order are used.

TABLE I. CLASSIFICATION RESULTS OF THE IMAGES WITHOUT TRANSFORMATION.

Fig. 8 gives some examples of image reconstruction using Krawtchouk moments for gray-level images. The test images used are the "Camera Man" image and the "Airplane" image, respectively.

B. Region-of-Interest Feature Extraction

This subsection discusses how the parameters p_1 and p_2 can be varied to extract local features of an image. The parameter p_1 is used to shift the region-of-interest horizontally: if p_1 < 0.5, the region-of-interest is shifted to the left, while p_1 > 0.5 shifts it to the right. The parameter p_2 is used to shift the region-of-interest vertically: when p_2 < 0.5, the region-of-interest is shifted to the top, while p_2 > 0.5 shifts it to the bottom. Fig. 9(b)–(e) shows the results of image reconstruction of the "ABCD" image of Fig. 9(a) using different values of p_1 and p_2. This behavior can be explained by referring to Fig. 1. Notice from the figure that the values of the lower order weighted Krawtchouk polynomials are significant only in a limited range of x, i.e., values outside of that range are near zero. Also observe from Fig. 2 that varying the value of p by Δp relative to 0.5 has the effect of shifting the weighted Krawtchouk polynomials by approximately NΔp. The direction of shifting is dependent on the sign of Δp, with the weighted Krawtchouk polynomials shifting in the +x direction when Δp is positive and vice versa.

C. Object Recognition

This subsection provides an experimental study of the recognition accuracy of the Krawtchouk moment invariants. For the recognition task, the feature vector

(34)

is used, where the components are the Krawtchouk moment invariants defined in Section II-D. The Euclidean distance is utilized as the classification measure

d(\mathbf{x}, \mathbf{v}_k) = \sqrt{\sum_{j=1}^{t} (x_j - v_{kj})^2}   (35)

where \mathbf{x} is the t-dimensional feature vector of the unknown sample, and \mathbf{v}_k the training vector of class k. The minimum distance classifier is used to classify the image. We define the recognition accuracy as

\eta = \frac{\text{number of correctly classified images}}{\text{total number of images used in the test}} \times 100\%.   (36)
The uppercase letters "B," "F," "I," "J," "L," "T," and "Z" from Fig. 3 are used as the training set. Two testing sets are used. The first testing set consists of salt-and-pepper noise degraded versions of the training set, at four different noise densities. Feature vectors based on Krawtchouk moment invariants are used to classify these images, and the recognition accuracy is compared with that of Hu's moment invariants [1]. Table I shows the results of the classification. It can be seen that Krawtchouk moment invariants show significant improvement over Hu's moment invariants in terms of noise robustness.

The second testing set is generated by rotating and scaling the training set with various rotation angles and scaling factors, forming a testing set of 168 images. This is followed by the addition of noise similar to that of the first testing set. Some samples of the transformed images used in the second testing set are shown in Fig. 10. The classification results are shown in Table II. The results show that Krawtchouk moment invariants are robust to image transformations under noisy conditions, and that their recognition accuracy is better than that of Hu's moment invariants.

Fig. 10. Samples of the transformed test images.

TABLE II. CLASSIFICATION RESULTS OF THE TRANSFORMED IMAGES.

V. CONCLUSION

In this paper, a new set of moments based on the discrete classical Krawtchouk polynomials is introduced. Before the formulation of the proposed moments, the Krawtchouk polynomials are scaled, and this results in a new set of weighted Krawtchouk polynomials. This limits the dynamic range of values of the polynomials and avoids overflow. The weighted Krawtchouk polynomials are polynomials of a discrete variable, and this property eliminates the need for spatial quantization; hence, no numerical approximation is involved in arriving at the proposed moments. This property makes Krawtchouk moments well suited for extracting the analytical properties of digitized images. The computational aspects of Krawtchouk moments are discussed, specifically how to reduce the computation time of the Krawtchouk moments. It is also shown that the Krawtchouk moments are capable of extracting the features of any selected region-of-interest of the subject image by varying the probability parameters of the associated binomial distribution.

Experimental results show the effectiveness of Krawtchouk moments as feature descriptors. The image representation capability of Krawtchouk moments for conditions with and without noise is examined by image reconstruction. The studies show that Krawtchouk moments perform consistently better in terms of reconstruction error when compared with Zernike, Pseudo-Zernike, Legendre, and Tchebichef moments. In the object recognition experiment, the results show that Krawtchouk moment invariants perform significantly better than Hu's moment invariants.

ACKNOWLEDGMENT

The authors wish to express their gratitude to the reviewers and the editor for sharing their views on the strengths and weaknesses of this manuscript.
REFERENCES

[1] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inform. Theory, vol. IT-8, pp. 179–187, Feb. 1962.
[2] M. R. Teague, "Image analysis via the general theory of moments," J. Opt. Soc. Amer., vol. 70, pp. 920–930, Aug. 1980.
[3] S. Dudani, K. Breeding, and R. McGhee, "Aircraft identification by moment invariants," IEEE Trans. Comput., vol. 26, pp. 39–45, Feb. 1977.
[4] D. Casasent and R. Cheatham, "Image segmentation and real image tests for an optical moment-based feature extractor," Opt. Commun., vol. 51, pp. 227–230, Sept. 1984.
[5] A. Khotanzad and J. J. H. Liou, "Recognition and pose estimation of unoccluded three-dimensional objects from a two-dimensional perspective view by banks of neural networks," IEEE Trans. Neural Networks, vol. 7, pp. 897–906, Sept. 1996.
[6] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: A comparative study and new results," Pattern Recognit., vol. 24, no. 12, pp. 1117–1138, 1991.
[7] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Shape recognition using Zernike moment invariants," in Proc. 23rd Annu. Asilomar Conf. Signals, Systems and Computers, Oct.–Nov. 1989, pp. 167–171.
[8] S. Ghosal and R. Mehrotra, "Orthogonal moment operators for subpixel edge detection," in Proc. 23rd Annu. Asilomar Conf. Signals, Systems and Computers, vol. 26, 1993, pp. 295–306.
[9] V. Markandey and R. J. P. Figueiredo, "Robot sensing techniques based on high-dimensional moment invariants and tensors," IEEE Trans. Robot. Automat., vol. 8, pp. 186–195, Feb. 1992.
[10] M. Pawlak, "On the reconstruction aspect of moment descriptors," IEEE Trans. Inform. Theory, vol. 38, Nov. 1992.
[11] C. H. Teh and R. T. Chin, "On image analysis by the method of moments," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, no. 4, pp. 496–513, 1988.
[12] R. Mukundan and K. R. Ramakrishnan, "Fast computation of Legendre and Zernike moments," Pattern Recognit., vol. 28, no. 9, pp. 1433–1442, 1995.
[13] R. R. Bailey and M. Srinath, "Orthogonal moment features for use with parametric and nonparametric classifiers," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 389–399, Apr. 1996.
[14] S. X. Liao and M. Pawlak, "On the accuracy of Zernike moments for image analysis," IEEE Trans. Pattern Anal. Machine Intell., vol. 20, Dec. 1998.
[15] S. X. Liao and M. Pawlak, "On image analysis by moments," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 254–266, Mar. 1996.
[16] B. Kamgar-Parsi and B. Kamgar-Parsi, "Evaluation of quantization error in computer vision," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 929–940, Sept. 1989.
[17] C. H. Teh and R. T. Chin, "On digital approximation of moment invariants," Comput. Vis., Graph., Image Process., vol. 33, pp. 318–326, 1986.
[18] R. Mukundan, S. H. Ong, and P. A. Lee, "Image analysis by Tchebichef moments," IEEE Trans. Image Processing, vol. 10, pp. 1357–1364, Sept. 2001.
[19] G. Szegö, Orthogonal Polynomials. New York: Amer. Math. Soc., 1939.
[20] A. Erdelyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher Transcendental Functions. New York: McGraw-Hill, 1953.
[21] R. Koekoek and R. F. Swarttouw, "The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue," Faculty of Technical Mathematics and Informatics, Delft Univ. Technology, Delft, The Netherlands, Tech. Rep. 98-17, 1998.
[22] P. T. Yap, P. Raveendran, and S. H. Ong, "Krawtchouk moments as a new set of moments for image reconstruction," in Proc. IJCNN'02, vol. 1, 2002, pp. 908–912.
[23] M. Krawtchouk, "On interpolation by means of orthogonal polynomials," Memoirs Agricultural Inst. Kyiv, vol. 4, pp. 21–28, 1929.
[24] M. Krawtchouk, "Sur une généralization des polynomes d'Hermite," Comptes Rendus de l'Académie des Sciences, vol. 189, pp. 620–622, 1929.
[25] R. Palaniappan, P. Raveendran, and S. Omatu, "Noise tolerant moments for neural network classification," in Proc. IJCNN'99, vol. 4, 1999, pp. 2802–2807.
[26] R. Palaniappan, "Regular moment analysis for pattern recognition," M.S. thesis, Univ. Malaya, Kuala Lumpur, Malaysia, Apr. 1999.
Pew-Thian Yap received the B.Eng. degree in electrical engineering, with First Class Honors, and the M.S. degree in engineering science from the University of Malaya, Malaysia, in 2001 and 2003, respectively. He is currently pursuing the Ph.D. degree in the Department of Electrical Engineering, University of Malaya. His research interests include image processing, pattern classification, orthogonal systems, and special functions.
Raveendran Paramesran (S'94–M'95–SM'01) received the B.Sc. and M.Sc. degrees in electrical engineering from South Dakota State University, Brookings, and the Dr.Eng. degree from the University of Tokushima, Japan, in 1984, 1985, and 1994, respectively. He was a System Designer with Daktronics before he joined the Department of Electrical Engineering, University of Malaya, Malaysia, in 1986, where he is now a Professor. His research areas are image analysis, image reconstruction, and brain-computer interface technology.
Seng-Huat Ong is a Professor with the Institute of Mathematical Sciences, University of Malaya, Malaysia. His current research interests are in probabilistic distribution theory, classification and statistical modeling, special functions and their applications, and image analysis.