Pattern Recognition Letters 29 (2008) 1295–1301
Multifocus image fusion by combining curvelet and wavelet transform

Shutao Li *, Bin Yang

College of Electrical and Information Engineering, Hunan University, Changsha 410082, China

Received 6 February 2007; received in revised form 22 December 2007; available online 14 February 2008
Communicated by M. Kamel

* Corresponding author. Tel.: +86 731 8672916; fax: +86 731 8822224. E-mail address: [email protected] (S. Li).
Abstract

When an image is captured by a CCD device, only the objects in the focal plane appear sharp. A practical way to obtain an image with all objects in focus is to fuse images of the scene acquired at different focus levels. In this paper, we propose a multifocus image fusion algorithm based on a combination of the wavelet and curvelet transforms. Although the fused results obtained by the wavelet or curvelet transform individually are encouraging, there is still considerable room for improvement, because wavelets do not represent long edges well while curvelets are challenged by small features. The proposed method therefore combines the two. Each of the registered images is first decomposed using the curvelet transform. The resulting coefficients are then fused using a wavelet-based image fusion method. Finally, the fused image is reconstructed by performing the inverse curvelet transform. Experimental results on several images show that the combined fusion algorithm has clear advantages over either individual transform alone.

© 2008 Elsevier B.V. All rights reserved.

Keywords: Multifocus; Image fusion; Curvelet transform; Wavelet transform; Sensor fusion
1. Introduction

An image that is in focus everywhere contains more information than one focused on a single object. Such images are useful in many fields, including digital imaging, microscopic imaging, remote sensing, computer vision and robotics. Unfortunately, optical lenses, particularly those with long focal lengths, suffer from a limited depth of field (Li et al., 2004; Seales and Dutta, 1996), so it is impossible to capture an image in which all objects appear sharp: objects in front of or behind the focal plane are blurred. A popular way to address this problem is image fusion, in which one acquires a series of pictures with different focus settings and fuses them to produce an image with an extended depth of field (Zhang and Blum, 1999). The simplest multifocus image fusion method takes the pixel-by-pixel average of the source images, which often leads to undesirable side effects such as reduced contrast.
So far, several algorithms based on multiscale transforms have been proposed. Commonly used multiresolution transforms include the Laplacian pyramid (Burt and Adelson, 1983), the gradient pyramid (Burt, 1992), the ratio-of-low-pass pyramid (Toet, 1989a), the morphological pyramid (Toet, 1989b), and the wavelet transform (Li et al., 1995; Zhang and Blum, 1999). The generic multiresolution image fusion scheme has three steps. First, the images to be fused are decomposed into sub-images by a multiscale transform. Then, the sub-images or coefficients are combined using a pixel- or window-based fusion rule. Finally, the fused image is constructed by applying the inverse transform. Research results show that wavelet schemes have several advantages over pyramid schemes, such as increased directional information, absence of the blocking artifacts that often occur in pyramid-fused images, better signal-to-noise ratio, and improved perceptual quality (Pajares and Cruz, 2004). In recent years, several fusion schemes based on improved wavelets have been developed (Rockinger, 1997; Núñez et al., 1999; Chibani and Houacine, 2003; Li and
Wang, 2000; De and Chanda, 2006; Hill et al., 2002). The shift-invariant extension of the discrete wavelet transform yields an overcomplete signal representation that is well suited to image fusion (Rockinger, 1997; Núñez et al., 1999). Multisensor image fusion using the discrete multiwavelet transform was proposed by Li and Wang (2000); multiwavelets extend scalar wavelets and combine symmetry, orthogonality, and short support, properties that scalar two-channel wavelet systems cannot achieve simultaneously. De and Chanda (2006) proposed a morphological wavelet decomposition scheme for multifocus image fusion; its simple arithmetic operations make it suitable for hardware implementation. Hill et al. (2002) introduced the shift-invariant and directionally selective dual-tree complex wavelet transform (DT-CWT) to image fusion; the DT-CWT is an overcomplete wavelet transform that provides both better shift invariance and better directional selectivity than the DWT.

Recent papers, however, have argued that wavelets and related classical multiresolution ideas work with a limited dictionary made up of roughly isotropic elements occurring at all scales and locations (Candès and Donoho, 1999; Starck et al., 2002). Despite the success of the classical wavelet viewpoint, some image features do not exhibit isotropic scaling and thus call for other kinds of multiscale representation. The curvelet transform (CT) and ridgelets are multiresolution analysis (MRA) methods developed by Candès and Donoho to overcome these problems by employing basis functions that carry structural information. Compared with conventional separable MRA, decimated or not, the CT represents image edges more efficiently, and its coefficients are less affected by noise. Choi et al. (2004) suggested a CT-based image fusion method; their experiments show that the fused image preserves edges almost as well as the original panchromatic image.

The curvelet transform is suitable for representing edges, while the wavelet transform is more useful for expressing image details (Starck et al., 2001). This is because each works with its own limited dictionary: the directional curvelet basis cannot capture fine details such as texture or corner points, whereas the wavelet basis is efficient in exactly this respect. In this paper, a multifocus image fusion method combining the curvelet and wavelet transforms is proposed. The source images are decomposed into sub-images (coefficients) in the curvelet transform domain. Traditionally, these sub-images would be combined by some criterion-based selection or by weighted averaging. Instead, we treat the sub-images from the source images as input images to be fused, and apply the wavelet transform to fuse them. In other words, the wavelet transform is employed to fuse the high- and low-frequency sub-images obtained by curvelet-based decomposition. In this way, both the structural and the detail features are considered in the fusion process. The experimental results show that the combined fusion
algorithm exhibits clear advantages over either individual transform alone.

This paper is organized as follows. Section 2 gives a brief introduction to the curvelet transform. The conventional image fusion method using the DWT or the CT is described in Section 3. In Section 4, the proposed combined algorithm is presented. Experimental results are given in Section 5, and the last section concludes the paper.

2. The curvelet transform

Candès and Donoho (2000) proposed the curvelet transform, whose idea is to represent a curve as a superposition of functions of various lengths and widths obeying the scaling law width ≈ length². Curvelets differ from wavelets and related systems in that their basis elements exhibit very high directional sensitivity and are highly anisotropic. In two dimensions, for instance, curvelets are better suited than wavelets to the analysis of image edges such as curve and line structures. Several implementations of the curvelet transform have been studied. The local ridgelet-based curvelet transform decomposes the image into a series of disjoint scales using the "à trous" wavelet transform, and each scale is then analyzed by a local ridgelet transform. In this section, we introduce the implementation of the second-generation curvelet transform, which is simpler, faster, and less redundant (Candès et al., 2006).

The two-dimensional continuous curvelet transform in R² can be defined as follows. Let x be a spatial variable, ω a frequency-domain variable, and r and θ polar coordinates in the frequency domain. Let W(r) and V(t) be a pair of smooth, nonnegative, real-valued "radial" and "angular" windows, supported on r ∈ [1/2, 2] and t ∈ [−1, 1], respectively, and obeying the admissibility conditions

$$\sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \quad r \in (3/4, 3/2), \tag{1}$$

$$\sum_{\ell=-\infty}^{\infty} V^2(t - \ell) = 1, \quad t \in (-1/2, 1/2). \tag{2}$$
For each j ≥ j₀, a frequency window U_j is defined in the Fourier domain from the radial and angular windows W and V, applied with scale-dependent window widths in each direction:

$$U_j(r, \theta) = 2^{-3j/4}\, W(2^{-j} r)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\,\theta}{2\pi}\right), \tag{3}$$

where ⌊j/2⌋ is the integer part of j/2. The support of U_j is a polar "wedge". The symmetrized version of (3), namely U_j(r, θ) + U_j(r, θ + π), is used in order to obtain real-valued curvelets. The waveform φ_j(x) is now defined by means of its Fourier transform, $\hat{\varphi}_j(\omega) = U_j(\omega)$. Like a "mother" wavelet, φ_j serves as a "mother" curvelet: all curvelets at scale 2^{−j} are obtained by rotations and translations of φ_j.
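To see the anisotropy concretely (an editorial gloss on (3), not part of the original text): the window supports stated above determine the shape of the curvelet. Since W(2^{−j}r) ≠ 0 only for r ∈ [2^{j−1}, 2^{j+1}] and V(2^{⌊j/2⌋}θ/2π) ≠ 0 only for |θ| ≤ 2π · 2^{−⌊j/2⌋}, the spatial waveform φ_j is concentrated on a ridge of

$$\text{length} \approx 2^{-j/2}, \qquad \text{width} \approx 2^{-j},$$

which is exactly the parabolic scaling law width ≈ length² quoted above.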
To define the curvelets, the symbols θ_ℓ and k are defined as follows: θ_ℓ = 2π · 2^{−⌊j/2⌋} · ℓ, with ℓ = 0, 1, 2, … such that 0 ≤ θ_ℓ < 2π, and k = (k₁, k₂) ∈ Z². Here θ_ℓ is an equispaced sequence of rotation angles and k the sequence of translation parameters. The curvelet at scale 2^{−j}, orientation θ_ℓ and position $x_k^{(j,\ell)} = R_{\theta_\ell}^{-1}(k_1 \cdot 2^{-j},\; k_2 \cdot 2^{-j/2})$ is then defined by

$$\varphi_{j,\ell,k}(x) = \varphi_j\big(R_{\theta_\ell}(x - x_k^{(j,\ell)})\big), \tag{4}$$

where R_θ is the rotation by θ radians and R_θ^{−1} its inverse (also its transpose):

$$R_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad R_\theta^{-1} = R_\theta^{T} = R_{-\theta}. \tag{5}$$

For a given function f ∈ L²(R²), the curvelet coefficients are defined by

$$c(j, \ell, k) := \langle f, \varphi_{j,\ell,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,\ell,k}(x)}\, dx. \tag{6}$$
Digital curvelet transforms can also be computed in the frequency domain. Applying Plancherel's theorem, the inner product becomes an integral over the frequency plane:

$$c(j, \ell, k) := \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, \overline{\hat{\varphi}_{j,\ell,k}(\omega)}\, d\omega = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, U_j(R_{\theta_\ell}\omega)\, e^{i \langle x_k^{(j,\ell)},\, \omega \rangle}\, d\omega. \tag{7}$$

The new fast discrete curvelet transform (FDCT) implemented via wrapping is simpler and faster still (Candès et al., 2006). Here we only briefly list the steps of the FDCT implementation; interested readers may consult Candès et al. (2006) for more details. The corresponding software package, CurveLab, is available at http://www.curvelet.org. The architecture of the FDCT via wrapping is as follows:

1. Apply the 2D FFT and obtain the Fourier coefficients $\hat{f}[n_1, n_2]$, $-n/2 \le n_1, n_2 < n/2$.
2. For each scale j and angle l, form the product $\tilde{U}_{j,l}[n_1, n_2]\, \hat{f}[n_1, n_2]$, where $\tilde{U}_{j,l}$ is a wedge-shaped window that isolates frequencies near the wedge $\{(\omega_1, \omega_2):\ 2^j \le \omega_1 \le 2^{j+1},\ -2^{-j/2} \le \omega_2/\omega_1 \le 2^{-j/2}\}$.
3. Wrap this product around the origin to obtain

$$\tilde{f}_{j,l}[n_1, n_2] = W(\tilde{U}_{j,l}\hat{f})[n_1, n_2], \tag{8}$$

where $Wd[n_1, n_2] = \sum_{m_1 \in \mathbb{Z}} \sum_{m_2 \in \mathbb{Z}} d[n_1 + m_1 L_{1,j},\; n_2 + m_2 L_{2,j}]$ (with $d[n_1, n_2] = \tilde{U}_{j,l}[n_1, n_2]\hat{f}[n_1, n_2]$) denotes the restriction of the periodized data to indices n₁, n₂ inside a rectangle with sides of length L_{1,j} × L_{2,j}, and L_{1,j}, L_{2,j} are two constants with L_{1,j} ≈ 2^j and L_{2,j} ≈ 2^{j/2}. The range is now 0 ≤ n₁ < L_{1,j} and 0 ≤ n₂ < L_{2,j} (for θ in the range (−π/4, π/4)).
4. Apply the inverse 2D FFT to each $\tilde{f}_{j,l}$, thereby collecting the discrete coefficients $c^D(j, l, k)$.
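To make the wrapping in step 3 concrete, here is a minimal numpy sketch of Eq. (8) (an editorial illustration, not CurveLab code): it accumulates all translates of the windowed frequency data by multiples of (L₁, L₂) into one L₁ × L₂ rectangle, which is exactly the sum defining Wd. Indexing from zero and the handling of the origin shift are simplifying assumptions.

```python
import numpy as np

def wrap_wedge(d, L1, L2):
    """Eq. (8): Wd[n1, n2] = sum over m1, m2 of d[n1 + m1*L1, n2 + m2*L2],
    restricted to 0 <= n1 < L1 and 0 <= n2 < L2.

    d      : windowed frequency data U_jl * f_hat on the full grid
             (nonzero only on one wedge), indexed from 0 for simplicity
    L1, L2 : rectangle sides, with L1 ~ 2^j and L2 ~ 2^(j/2)
    """
    wrapped = np.zeros((L1, L2), dtype=d.dtype)
    for i in range(d.shape[0]):
        for j in range(d.shape[1]):
            # Folding index (i, j) onto (i mod L1, j mod L2) accumulates
            # every translate by multiples of (L1, L2): the sum in Eq. (8).
            wrapped[i % L1, j % L2] += d[i, j]
    return wrapped
```

The inverse 2D FFT of each wrapped array (step 4) then yields the discrete coefficients $c^D(j, l, k)$ for that scale and angle.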
3. Common image fusion methods using DWT or CT

The DWT and the CT are both appropriate for multifocus image fusion. Theoretically, the decomposed coefficients of both multiresolution transforms reflect the focus measure in images: regions with large coefficients are generally clearer than those with small coefficients. Details of discrete wavelet transform-based image fusion methods are presented in Pajares and Cruz (2004). In this section, we describe multifocus image fusion based on the DWT or the CT individually in one general framework. Throughout this paper, we assume that the source images have already been registered. The fusion scheme, shown in Fig. 1, takes the following steps:

Step 1: Transform each of the registered input images I1, I2, …, In from image space into the multiresolution domain by applying the DWT or the CT. Denote the decomposed coefficients by C1, C2, …, Cn.
Step 2: Fuse the transform coefficients C1, C2, …, Cn using some fusion rule. An activity level is computed from the transform coefficients, either treating each coefficient separately or averaging over a small window around each coefficient location. A larger activity level indicates that the corresponding region is clearer. At each location, select from the transformed coefficients the one with the largest activity level as the coefficient of the fused image.
Step 3: Optionally, perform consistency verification, which ensures that a fused coefficient does not come from a different source image than most of its neighbors. Usually this is implemented with a small majority filter.
Step 4: Reconstruct the fused image by performing the inverse transform.

Fig. 1. Block diagram of a generic image fusion scheme: the source images I1, …, In are transformed (T) into coefficients C1, …, Cn, which are combined by a fusion rule into fused coefficients Cf; the inverse transform (T⁻¹) yields the fused image If.

A code sketch of Steps 1–4 for the DWT case is given below.
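The following Python sketch illustrates Steps 1–4 with PyWavelets. It is a simplified reading of the scheme, not the authors' exact implementation: the activity level of Step 2 is the per-coefficient absolute value, and the majority filter of Step 3 is approximated with a mean filter on the binary decision mask.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_dwt(img1, img2, wavelet="db1", level=3, win=5):
    """Generic DWT fusion (Steps 1-4): decompose, select the coefficient
    with the larger absolute value, verify consistency, reconstruct."""
    c1 = pywt.wavedec2(img1.astype(float), wavelet, level=level)
    c2 = pywt.wavedec2(img2.astype(float), wavelet, level=level)
    # Step 2 for the approximation band: maximum-absolute-value selection.
    fused = [np.where(np.abs(c1[0]) >= np.abs(c2[0]), c1[0], c2[0])]
    for d1, d2 in zip(c1[1:], c2[1:]):            # detail bands, coarse to fine
        bands = []
        for a, b in zip(d1, d2):                  # horizontal, vertical, diagonal
            from1 = np.abs(a) >= np.abs(b)        # activity level = |coefficient|
            # Step 3: keep a coefficient from image 1 only if most of its
            # win x win neighbours were also selected from image 1.
            from1 = uniform_filter(from1.astype(float), size=win) > 0.5
            bands.append(np.where(from1, a, b))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)          # Step 4: inverse DWT
```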
An example, shown in Fig. 2, is designed to explore the different behaviors of DWT- and CT-based image fusion. Two artificial multifocus images are first obtained by blurring different parts of an 'ideal' all-in-focus image with a low-pass Gaussian filter, as shown in Fig. 2a and b: in Fig. 2a the face of 'Lenna' is blurred, while in Fig. 2b the remaining regions are blurred. The two images are then fused with the DWT and with the CT individually, using the method described above with identical settings, as shown in Fig. 2c and d. Finally, the difference images between the fused images and the 'ideal' image, Fig. 2e and f, are obtained. Comparing Fig. 2e and f, we conclude that the DWT and the CT recover very different features of the source images: wavelets do not restore long edges well, while curvelets are challenged by small features such as Lenna's eyes, hair and mouth.

Fig. 2. Example 1: (a) and (b) are images to be fused; (c) fused image using CT; (d) fused image using DWT; (e) difference between (c) and the 'ideal' one; and (f) difference between (d) and the 'ideal' one.
4. Proposed image fusion method

The above analysis indicates that although the fused results obtained by the wavelet or curvelet transform individually are encouraging, there is certainly room for further improvement. Each transform has its own area of expertise, and this complementarity can be exploited for fusion. We therefore propose a combined method that draws on the advantages of both the wavelet transform and the curvelet transform. The source images are first transformed into curvelet coefficients, which carry most of the edge information of the source images: a coefficient is large when it corresponds to an edge, but remains small for fine details such as texture or corner points. Instead of fusing the curvelet coefficients directly, we fuse them further with the wavelet transform-based method, so that the fused coefficients carry not only edge information but also fine-detail information. The fusion process, shown in Fig. 3, consists of the following steps:

(1) The original multifocus images should be geometrically registered to each other. In this paper, the two multifocus images I1 and I2 are assumed to have been registered in preprocessing.
(2) Each source image is decomposed by the curvelet transform into curvelet coefficients C1 and C2. Let $C_1^{k,l}$ and $C_2^{k,l}$ denote the sub-images of C1 and C2 at scale k and orientation l, where k ∈ {1, 2}, with l = 1 when k = 1 and l ∈ {1, …, 8} when k = 2.
(3) Each pair of sub-images $C_1^{k,l}$ and $C_2^{k,l}$ is fused using the wavelet transform. In our implementation, $C_1^{k,l}$ and $C_2^{k,l}$ are decomposed into the wavelet domain with the 'db1' basis and a decomposition level of 3; the coefficients with the largest absolute values are selected as the fused wavelet coefficients, and the fused sub-image $C_f^{k,l}$ is obtained by the inverse wavelet transform.
(4) Apply the inverse curvelet transform to $C_f$ to obtain the fused image $I_f$.

Fig. 3. Schematic diagram of the proposed image fusion method: the source images I1 and I2 are decomposed by the CT into sub-images $C_1^{k,l}$ and $C_2^{k,l}$; each pair is fused by DWT-based fusion into $C_f^{k,l}$; the inverse CT (CT⁻¹) yields the fused image $I_f$.

A sketch of steps (2)–(4) is given below.
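The following is a minimal sketch of steps (2)–(4). It assumes a curvelet interface `fdct2`/`ifdct2` that maps an image to a nested list of real-valued sub-images and back; these names and the import are placeholders, not a real package, since concrete CurveLab bindings expose equivalent but differently named APIs. PyWavelets performs the inner wavelet-domain fusion of step (3).

```python
import numpy as np
import pywt
# Placeholder curvelet interface (hypothetical module name): fdct2(img) is
# assumed to return C[k][l], a list (scales) of lists (orientations) of 2D
# arrays, and ifdct2 to invert it; e.g. a CurveLab binding configured with
# two scales and eight orientations at the second scale (Section 5.2).
from my_curvelab_binding import fdct2, ifdct2

def fuse_pair_dwt(a, b, wavelet="db1", level=3):
    """Step (3): fuse one pair of curvelet sub-images in the wavelet domain
    by maximum-absolute-value selection, then apply the inverse DWT."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    out = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        out.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                         for x, y in zip(da, db)))
    rec = pywt.waverec2(out, wavelet)
    return rec[:a.shape[0], :a.shape[1]]   # crop padding for odd-sized bands

def fuse_curvelet_wavelet(img1, img2):
    """Steps (2)-(4): curvelet decomposition, per-sub-image DWT fusion,
    inverse curvelet transform."""
    C1, C2 = fdct2(img1), fdct2(img2)
    Cf = [[fuse_pair_dwt(a, b) for a, b in zip(s1, s2)]
          for s1, s2 in zip(C1, C2)]
    return ifdct2(Cf)
```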
5. Experimental results

In this section, experimental results of the proposed image fusion method are presented and evaluated against the results obtained with the wavelet transform and the curvelet transform alone.

5.1. Performance measures

We use the root mean squared error (RMSE), entropy (EN) and spatial frequency (SF) to evaluate the performance of the proposed fusion method. The RMSE measures the difference between the fusion result and the 'ideal' image:

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \big[R(m,n) - F(m,n)\big]^2}, \tag{9}$$
where R and F are the reference image and the fused image, respectively, both of size M × N. This measure requires an ideal fused image, which is hard to obtain in practice. Hence two further quantitative measures, the entropy (EN) and the spatial frequency (SF), are also used. The EN measures the overall image complexity:

$$\mathrm{EN} = -\sum p \log(p), \tag{10}$$

where p is the probability density function of the image. The SF is defined as

$$\mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2}, \tag{11}$$

where RF and CF are the row frequency

$$\mathrm{RF} = \sqrt{\frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=1}^{N-1} \big[F(m,n) - F(m,n-1)\big]^2}$$

and the column frequency

$$\mathrm{CF} = \sqrt{\frac{1}{MN} \sum_{m=1}^{M-1} \sum_{n=0}^{N-1} \big[F(m,n) - F(m-1,n)\big]^2},$$

respectively, and F is the fused image. A small implementation of these three measures follows.
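The following numpy sketch implements Eqs. (9)–(11). Two details are assumptions, since the original leaves them unstated: the logarithm base in (10) is taken as 2 and the histogram uses 256 grey-level bins.

```python
import numpy as np

def rmse(ref, fused):
    """Eq. (9): root mean squared error w.r.t. the 'ideal' reference image."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def en(img):
    """Eq. (10): entropy of the normalized grey-level histogram
    (256 bins and log base 2 are assumptions)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                 # treat 0*log(0) as 0
    return -np.sum(p * np.log2(p))

def sf(img):
    """Eq. (11): spatial frequency from row (RF) and column (CF) differences.
    np.mean divides by the number of difference terms; the paper normalizes
    by MN, a negligible difference for large images."""
    f = img.astype(float)
    rf2 = np.mean((f[:, 1:] - f[:, :-1]) ** 2)   # row frequency squared
    cf2 = np.mean((f[1:, :] - f[:-1, :]) ** 2)   # column frequency squared
    return np.sqrt(rf2 + cf2)
```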
5.2. Experimental setups

For the wavelet-based fusion method, the wavelet basis 'db1' and a decomposition level of 3 are used. The curvelet transform is implemented with two levels, the second of which contains eight orientations. The fusion rule is the maximum-selection scheme, and the verification rule is majority voting in a 5 × 5 window. For fair comparison, the same parameter settings are used in the proposed algorithm.

5.3. Fusion results

The first example, shown in Fig. 4, contains six images. Fig. 4a is the reference (everywhere-in-focus) image. Fig. 4b and c are two source images obtained by blurring the left and right halves of Fig. 4a, respectively. Fig. 4d is the fusion result using the CT method, Fig. 4e the result using the DWT method, and Fig. 4f the result of the proposed combined method. It is hard to find subjective differences between the result of the proposed scheme and those of the DWT- or CT-based schemes, so we use the RMSE to evaluate the performance of the different methods. As described in Section 5.1, the RMSE measures the difference between a fusion result and the 'ideal' everywhere-in-focus image, and the smaller the RMSE, the better the result. For Fig. 2, the 'ideal' image also exists. The RMSE values for Figs. 2 and 4 are listed in Table 1.
Fig. 4. Example 2: (a) ideal image; (b) image focused on the left; (c) image focused on the right; (d) fused image using CT; (e) fused image using DWT; and (f) fused image using the proposed algorithm.
Table 1
Performance of different fusion methods (RMSE)

RMSE      DWT      CT       DWT + CT
Fig. 2    2.707    2.110    2.002
Fig. 4    4.793    5.417    4.605
From Table 1, we can observe that the proposed scheme provides the best performance of the three methods.
For further comparison, the proposed method is tested on two more pairs of images. Figs. 5a, b and 6a, b show the two pairs of source multifocus images. Figs. 5c and 6c are the images fused with the curvelet transform alone, Figs. 5d and 6d the results of the wavelet transform, and Figs. 5e and 6e the results of the proposed method. Careful inspection of the fusion results shows that the CT-based method has the side effect of reducing the contrast of the fused image.
Fig. 5. Example 3: (a) image focused on the big clock; (b) image focused on the small clock; (c) fused image using CT; (d) fused image using DWT; and (e) fused image using the proposed method.
Fig. 6. Example 4: (a) image focused on the right book; (b) image focused on the left book; (c) fused image using CT; (d) fused image using DWT; and (e) fused image using the proposed method.
Table 2
Performance of different fusion methods (EN and SF)

          EN                             SF
Method:   DWT     CT      DWT + CT      DWT      CT       DWT + CT
Fig. 5    7.105   7.183   7.286         8.307    8.030    8.371
Fig. 6    7.091   7.078   7.104         24.818   23.507   25.196
There are also some artifacts, such as blocking effects, in the DWT-based fused images, whereas the results obtained by our method are quite good. We also use the two objective criteria EN and SF to evaluate the performance of the different fusion methods; the quantitative comparisons are shown in Table 2. From Table 2, we can see that the proposed method preserves more useful information than the DWT- and CT-based fusion methods.

6. Conclusions

In this paper, we first analyzed the differing results of the wavelet-based and the curvelet-based methods, and then proposed a combined multifocus image fusion method that exploits the complementary properties of the two multiresolution analysis methods. In our algorithm, each of the registered images is first decomposed using the curvelet transform, the coefficients are then fused using the wavelet transform, and the fused image is reconstructed by performing the inverse curvelet transform. Experimental results on several pairs of multifocus images showed that the proposed method performs better than either individual transform alone. A shortcoming of our method is that it takes more time than wavelet-based methods because it involves two different multiscale decompositions. More thorough analysis is still needed; we plan to study further combinations and to design more effective combination algorithms.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 60402024) and the Program for New Century Excellent Talents in University (NCET2005). The authors thank the anonymous reviewers for their constructive comments.
References

Burt, P.J., 1992. A gradient pyramid basis for pattern selective image fusion. In: Proc. of the Society for Information Display Conf., pp. 467–470.
Burt, P.J., Adelson, E.H., 1983. The Laplacian pyramid as a compact image code. IEEE Trans. Comm. 31 (4), 532–540.
Candès, E.J., Donoho, D.L., 1999. Ridgelets: The key to higher-dimensional intermittency. Phil. Trans. Math. Phys. Eng. Sci. 357 (1760), 2495–2509.
Candès, E.J., Donoho, D.L., 2000. Curvelets – A surprisingly effective nonadaptive representation for objects with edges. In: Rabut, C., Cohen, A., Schumaker, L.L. (Eds.), Curves and Surfaces. Vanderbilt University Press, Nashville, TN, pp. 105–120.
Candès, E.J., Demanet, L., Donoho, D.L., Ying, L., 2006. Fast discrete curvelet transforms. Multiscale Model. Simul. 5 (3), 861–899.
Chibani, Y., Houacine, A., 2003. Redundant versus orthogonal wavelet decomposition for multisensor image fusion. Pattern Recognition 36 (4), 1785–1794.
Choi, M., Kim, R.Y., Kim, M.G., 2004. The curvelet transform for image fusion. In: 20th Congress of the Internat. Society for Photogrammetry and Remote Sens., B8, pp. 59–64.
De, I., Chanda, B., 2006. A simple and efficient algorithm for multifocus image fusion using morphological wavelets. Signal Process. 86 (5), 924–936.
Hill, P., Canagarajah, N., Bull, D., 2002. Image fusion using complex wavelets. In: Proc. of the British Machine Vision Conf., pp. 487–496.
Li, S., Wang, Y., 2000. Multisensor image fusion using discrete multiwavelet transform. In: Proc. 3rd Internat. Conf. on Visual Computing, pp. 93–103.
Li, H., Manjunath, B.S., Mitra, S.K., 1995. Multisensor image fusion using the wavelet transform. Graphical Models Image Process. 57 (5), 235–245.
Li, S., Kwok, J., Tsang, I., Wang, Y., 2004. Fusing images with different focuses using support vector machines. IEEE Trans. Neural Networks 15 (6), 1555–1561.
Núñez, J., Otazu, X., Fors, O., Palà, V., Arbiol, R., 1999. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 37 (3), 1204–1211.
Pajares, G., Cruz, J.M., 2004. A wavelet-based image fusion tutorial. Pattern Recognition 37 (9), 1855–1872.
Rockinger, O., 1997. Image sequence fusion using a shift invariant wavelet transform. In: IEEE Internat. Conf. on Image Process., pp. 288–291.
Seales, W., Dutta, S., 1996. Everywhere-in-focus image fusion using controllable cameras. Proc. SPIE 2905, 227–234.
Starck, J.L., Donoho, D.L., Candès, E.J., 2001. Very high quality image restoration by combining wavelets and curvelets. In: SPIE Conf., vol. 4478, pp. 9–19.
Starck, J.L., Candès, E.J., Donoho, D.L., 2002. The curvelet transform for image denoising. IEEE Trans. Image Process. 11 (6), 670–684.
Toet, A., 1989a. Image fusion by a ratio of low-pass pyramid. Pattern Recognition Lett. 9 (4), 245–253.
Toet, A., 1989b. A morphological pyramidal image decomposition. Pattern Recognition Lett. 9 (3), 255–261.
Zhang, Z., Blum, R.S., 1999. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proc. IEEE 87 (8), 1315–1326.