Multifocus Image Fusion Algorithms using Dyadic Non-subsampled Contourlet Transform

LI Jin-jiang, AN Zhi-Yong, FAN Hui, LI Ye-wei
School of Computer Science and Technology, Shandong Institute of Economic & Technology, Yantai, 264005, China
[email protected], [email protected], [email protected], [email protected]

doi: 10.4156/jdcta.vol4.issue6.4
Abstract

The dyadic wavelet offers good multi-scale edge detection and strong inter-subband correlation, while the contourlet transform offers multi-directional analysis. Combining the two, we construct a new dyadic non-subsampled contourlet transform. First, the source images are decomposed at multiple scales by the dyadic contourlet transform to obtain high-frequency and low-frequency subbands. Then, the fused coefficients in the contourlet domain are obtained by applying different fusion rules according to the regional statistics of the high-frequency and low-frequency subbands. Finally, the inverse wavelet-based contourlet transform yields the fused image. The low-frequency subband coefficients are fused by selection or weighting according to a regional similarity measure, with the weight of the edge information determined by the edge-dependent fusion quality index. For the edge part of the high-frequency subbands, the fusion rule takes the coefficient of largest absolute value, while the non-edge part takes the subband coefficients of the clearer region. Experimental results show that the proposed method outperforms conventional wavelet-based methods: it extracts the useful information from the original images and improves fusion quality.
Keywords: Image Fusion, Multifocus Image, Contourlet Transform, Dyadic Wavelet

1. Introduction

Multifocus image fusion is a classical problem in image fusion. A multifocus image sequence is fused to obtain an image in which every target is in focus, which effectively improves the utilization of image information and the reliability of target detection and recognition. In an ideal optical imaging system, only objects on the plane conjugate to the image plane are imaged sharply; objects off that plane are blurred to varying degrees. Image fusion technology handles the differently focused images, extracting the sharp information from each to synthesize a single clear image.

Image fusion methods fall into two categories: spatial-domain methods and transform-domain methods. The current mainstream is still fusion in the wavelet transform domain. Spatial-domain methods are simple: without transforming or decomposing the source images, they form the fused image as a weighted combination of pixels, but such simple superposition lowers the signal-to-noise ratio. Wavelet-based methods forward-transform the differently focused images, decompose them into feature domains at different frequencies, and fuse in those domains: according to a fusion rule, suitable low-frequency and high-frequency wavelet coefficients are selected in each feature domain, and the inverse transform yields the clear fused image.

Wavelet theory rose to prominence because of its good time-frequency localization and near-optimal approximation properties, and its multi-resolution analysis has been widely used in digital signal processing and analysis, signal detection, and noise suppression. The wavelet transform represents one-dimensional signals well. However, because the two-dimensional wavelet is a tensor product of one-dimensional wavelets, it captures only the horizontal, vertical, and diagonal directions. The ordinary wavelet transform is therefore usually not optimal in higher dimensions, and other multi-scale geometric analysis methods have been proposed, including the Ridgelet, Curvelet, and Contourlet [1]. Each method is good at handling a particular type of feature but less effective on others: the two-dimensional wavelet represents point singularities and spots, and the Ridgelet represents linear singularities,
while the Curvelet represents curved edges in two-dimensional image data. Do and Vetterli proposed the Contourlet transform, a good mathematical tool for representing two-dimensional signals, in 2002. The Contourlet transform is superior to the wavelet transform in directionality and anisotropy, so fusion algorithms in the Contourlet domain can fuse source-image information more effectively and better preserve source-image features. Reference [2] uses the golden-section method to search for optimal low-frequency fusion weights and fuses the low-frequency subband coefficients adaptively, while the high-frequency subband coefficients are fused by the maximum rule. Reference [3] obtains the non-subsampled Contourlet coefficients of the fused image with fusion rules based on regional energy. Reference [4] uses different window functions to compute the regional energy of the low-frequency and high-frequency components, normalizes the regional energy, and weights each wavelet-Contourlet coefficient to obtain the fused wavelet-Contourlet coefficients. Reference [5] introduces Cycle Spinning to effectively eliminate the image distortion generated because the wavelet-Contourlet transform lacks translation invariance. Reference [6] analyzes the influence of the Contourlet low-pass filter on image fusion algorithms and discusses the relationship between the low-pass filter and the choice of decomposition levels. Reference [7] fuses multifocus images with the non-subsampled Contourlet transform, handling the low-frequency and high-frequency subbands by direction vector and standard deviation, respectively. Reference [8] proposes a multifocus fusion method in the sharp frequency-localized Contourlet domain based on the sum-modified-Laplacian, which overcomes the aliasing components generated by Contourlet fusion and suppresses the pseudo-Gibbs phenomenon. Reference [9] proposes a multifocus fusion algorithm based on directional window statistics in the non-subsampled Contourlet domain, applying a directional-region variance matching-degree rule to the low-frequency subband and an energy rule to the high-frequency subbands. Reference [10] introduces the concepts of local region visibility and local directional energy in the Contourlet domain and proposes a coefficient selection scheme based on them. Reference [11] combines the IHS transform with the non-subsampled Contourlet transform for multispectral images, achieving high spatial resolution while effectively preserving spectral features. Reference [12] introduces Pulse Coupled Neural Networks (PCNN) in the non-subsampled Contourlet domain, processing a clarity matrix with the PCNN to generate a clear fused image.

The Contourlet-domain fusion algorithms above use different strategies to extract the useful information of the source images and to suppress noise interference, thereby improving the fusion result. This paper proposes a multifocus image fusion algorithm using a dyadic non-subsampled Contourlet transform. The transform provides more directional subbands and uses a non-subsampled filter bank for the directional decomposition, so it is translation invariant, effectively eliminates image distortion, and reduces data redundancy. Under the same fusion rules, the fusion results of the proposed algorithm are superior to those of traditional Contourlet-domain fusion algorithms.
2. Constructing the Dyadic Non-subsampled Contourlet Transform

The discrete dyadic wavelet is a special case of wavelet frames. The wavelet function behaves as a narrow band-pass filter and conserves the energy of the transformed signal. The dyadic wavelet transform is continuous in the time and spatial domains: the scale is dyadically discretized, but the translation parameter remains continuous. It therefore shares the translation invariance of the continuous wavelet transform and can effectively detect, localize, and classify image edges.

Definition 1: A function $\psi(t) \in L^2(\mathbb{R})$ is a one-dimensional dyadic wavelet if there exist constants $0 < A \le B < \infty$ such that formula (1) holds:
$$A \le \sum_{j \in \mathbb{Z}} \left|\hat{\psi}(2^j \omega)\right|^2 \le B \qquad (1)$$
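As a concrete check of Definition 1, the frame bounds A and B can be estimated numerically. The sketch below is illustrative only, assuming the Mexican-hat wavelet, whose Fourier transform is known in closed form; it is not part of the proposed algorithm.

```python
import numpy as np

def psi_hat(w):
    """Fourier magnitude profile of the Mexican-hat wavelet (up to a constant)."""
    return w ** 2 * np.exp(-w ** 2 / 2.0)

# S(w) = sum_j |psi_hat(2^j w)|^2 repeats every octave, so sampling w in [1, 2)
# with enough scales j approximates the frame bounds A and B of formula (1).
w = np.linspace(1.0, 2.0, 2000, endpoint=False)
S = sum(np.abs(psi_hat(2.0 ** j * w)) ** 2 for j in range(-20, 21))
print(f"estimated A = {S.min():.4f}, estimated B = {S.max():.4f}")
# A > 0 and B finite confirm that the dilations of psi form a dyadic frame.
```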
Definition 2: Functions $\{\psi^1(x,y), \psi^2(x,y)\} \subset L^2(\mathbb{R}^2)$ are two-dimensional dyadic wavelets if there exist constants $0 < A \le B < \infty$ such that formula (2) holds:

$$\forall \omega = (\omega_x, \omega_y) \in \mathbb{R}^2 \setminus \{(0,0)\}, \quad A \le \sum_{j \in \mathbb{Z}} \sum_{k=1}^{2} \left|\hat{\psi}^k(2^j \omega_x, 2^j \omega_y)\right|^2 \le B \qquad (2)$$

For a local area w of the high-frequency coefficients, let $D_A^w$ denote the clarity of area w in image A. When the clarity comparison of corresponding areas favors A (e.g., $D_A^w > D_B^w$), the clear target is judged to lie in image A, otherwise in image B; this judgment is used later by the non-edge fusion rule.
4. Fusion Based on the Dyadic Contourlet

The dyadic Contourlet is introduced into multifocus image fusion because its excellent properties can be used to extract the geometric features of the original images and provide more information for the fused image. The dyadic Contourlet transform not only provides multiscale analysis but also possesses abundant directions and shapes, so it can effectively capture the smooth contours and geometric structure of images. Because the detail features of an image are usually expressed by multiple pixels of a local area, and the pixels of such an area are strongly correlated, the fusion rules also operate on window areas, as sketched below.
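Since the rules that follow operate on local windows, it may help to fix how such windows are obtained. A minimal NumPy sketch is given below; the 3x3 window size follows the experiments in Section 5, and the function name is ours.

```python
import numpy as np

def local_windows(subband, size=3):
    """All size-by-size local areas of a subband as a sliding-window view."""
    return np.lib.stride_tricks.sliding_window_view(subband, (size, size))

# Example: per-window standard deviation, as used by the rules in Section 4.2.
# sigmas = local_windows(subband).std(axis=(-2, -1))
```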
4.1. Fusion Steps

(1) The images to be fused are first converted to the IHS color space. Color is usually treated quantitatively in the RGB color space model, which is not well suited to image fusion: it is highly non-uniform perceptually, and its components express brightness as well as color and are mutually correlated, so processing the three components separately loses color information. Qualitatively, the IHS system describes color more intuitively. The IHS algorithm is the earliest in the development of image fusion technology and is a mature spatial transformation algorithm. Each IHS component clearly describes a color property: the three components, Intensity, Hue, and Saturation, are relatively independent, so they can be controlled separately and quantify color features accurately. The RGB color space is converted by the IHS transform as:
$$\begin{pmatrix} I \\ H \\ S \end{pmatrix} = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ -1/\sqrt{6} & -1/\sqrt{6} & 2/\sqrt{6} \\ 1/\sqrt{6} & -2/\sqrt{6} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (7)$$
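For concreteness, formula (7) can be implemented directly; the sketch below assumes float RGB arrays and simply inverts the same matrix for the final reconstruction step. The function names are illustrative.

```python
import numpy as np

# The transform matrix of formula (7); 'rgb' is an (H, W, 3) float array.
RGB2IHS = np.array([
    [1 / 3,           1 / 3,           1 / 3],
    [-1 / np.sqrt(6), -1 / np.sqrt(6), 2 / np.sqrt(6)],
    [1 / np.sqrt(6),  -2 / np.sqrt(6), 0.0],
])

def rgb_to_ihs(rgb):
    """Apply formula (7) pixel-wise: (I, H, S) = M (R, G, B)."""
    return rgb @ RGB2IHS.T

def ihs_to_rgb(ihs):
    """Inverse transform for step (7) below: the matrix of (7) is invertible."""
    return ihs @ np.linalg.inv(RGB2IHS).T
```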
The brightness component I contains most of the detail information, so it is the main object of the fusion processing.

(2) The brightness component I of each image to be fused is decomposed by an L-level dyadic Contourlet transform. The multifocus images A and B are first decomposed by the dyadic wavelet into a low-frequency subband $D_{H,H}$ and two high-frequency subbands $D_{G,H}$ and $D_{H,G}$. The two high-frequency subbands are then decomposed by the NSDFB into a low-frequency subband $DC_H$ and multiple wedge-shaped high-frequency directional subbands $\{DC_{l,i}(n,m),\ 0 \le l \le L-1,\ 1 \le i \le k_l\}$, where $k_l$ is the number of high-frequency directional subbands at scale $2^{-l}$ and $DC_{l,i}(n,m)$ is the $i$-th directional subband at scale $2^{-l}$.
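A rough sketch of step (2) follows. The stationary wavelet transform pywt.swt2 from PyWavelets stands in for the dyadic wavelet stage (both are undecimated and shift-invariant, though swt2 yields three detail subbands per level rather than the paper's two), and nsdfb_decompose is a hypothetical placeholder for the non-subsampled directional filter bank, which PyWavelets does not provide.

```python
import pywt  # PyWavelets provides the undecimated (stationary) wavelet stage

def nsdfb_decompose(subband, n_directions):
    """Hypothetical placeholder for the non-subsampled directional filter
    bank (NSDFB); a real implementation is outside this sketch."""
    raise NotImplementedError

def dyadic_contourlet_decompose(intensity, levels=4, directions=(16, 8, 4, 4)):
    """Step (2) sketch: undecimated wavelet stage followed by directional
    decomposition of each high-frequency subband."""
    # swt2 requires each image dimension to be divisible by 2**levels.
    coeffs = pywt.swt2(intensity, wavelet='db2', level=levels)
    pyramid = []
    for (cA, details), k in zip(coeffs, directions):
        # Each high-frequency subband is further split into k directions.
        pyramid.append({'low': cA,
                        'directional': [nsdfb_decompose(d, k) for d in details]})
    return pyramid
```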
(3) Low-frequency subband fusion.

(4) High-frequency directional subband fusion.

(5) The fused low-frequency and high-frequency coefficients of the brightness component are inverse-transformed by the dyadic Contourlet to generate the fused brightness component I′.

(6) The two color components H and S are fused directly by the mean-value method to obtain H′ and S′.

(7) I′, H′, and S′ are inverse-transformed by IHS to reconstruct the fused image.
Figure 7. Multifocus image fusion algorithms based on dyadic Contourlet transform
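Putting steps (1)-(7) together, the flow of Figure 7 can be outlined as below. All helper names (rgb_to_ihs, dyadic_contourlet_decompose, fuse_low_and_high, dyadic_contourlet_reconstruct, ihs_to_rgb) are illustrative, tying together the sketches given in this paper's sections rather than naming a published API.

```python
import numpy as np

def fuse_multifocus(rgb_a, rgb_b):
    """Steps (1)-(7) of Section 4.1; helpers are illustrative placeholders."""
    ihs_a, ihs_b = rgb_to_ihs(rgb_a), rgb_to_ihs(rgb_b)      # step (1)
    pyr_a = dyadic_contourlet_decompose(ihs_a[..., 0])       # step (2), image A
    pyr_b = dyadic_contourlet_decompose(ihs_b[..., 0])       # step (2), image B
    fused = fuse_low_and_high(pyr_a, pyr_b)                  # steps (3)-(4)
    i_f = dyadic_contourlet_reconstruct(fused)               # step (5)
    hs_f = 0.5 * (ihs_a[..., 1:] + ihs_b[..., 1:])           # step (6): mean H, S
    return ihs_to_rgb(np.dstack([i_f, hs_f]))                # step (7)
```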
4.2. Fusion Rules

The fusion rules are the core of image fusion, and their choice directly affects fusion quality. Pajares [15] discusses various fusion rules, covering essentially all existing fusion schemes. According to the characteristics of multifocus images, the proposed algorithm fuses the low-frequency and high-frequency decomposition coefficients of the transform domain with separate rules.

The standard deviation reflects the dispersion of the image gray levels around the gray mean and can be used to evaluate the image contrast: the larger the standard deviation, the higher the contrast and the more information the image carries.
$$\sigma = \sqrt{\frac{1}{M \cdot N}\sum_{i=1}^{M}\sum_{j=1}^{N} \left(F(x_i, y_j) - \mu\right)^2} \qquad (8)$$

$$\mu = \frac{1}{M \cdot N}\sum_{i=1}^{M}\sum_{j=1}^{N} F(x_i, y_j) \qquad (9)$$
where μ is the image gray mean. For any area $r \in R$, the similarity measure of two images A and B can be expressed as:
$$S_{AB}(r) = \frac{1}{3}\left(\frac{\sum_{(x,y)\in r}(A(x,y)-\mu_{A|r})(B(x,y)-\mu_{B|r})}{\sqrt{\sum_{(x,y)\in r}(A(x,y)-\mu_{A|r})^2 \sum_{(x,y)\in r}(B(x,y)-\mu_{B|r})^2}} + 2 - \frac{|\mu_{A|r}-\mu_{B|r}|}{\max(\mu_{A|r},\mu_{B|r})} - \frac{|\sigma_{A|r}-\sigma_{B|r}|}{\max(\sigma_{A|r},\sigma_{B|r})}\right) \qquad (10)$$
where $\mu_{A|r}$ and $\sigma_{A|r}$ are the mean and standard deviation of area r in image A, respectively.

(1) Low-frequency subband fusion rules

The low-frequency part of an image contains the smooth information, i.e., large-scale features such as object shape and position. The low-frequency part is calculated by formula (11):
$$F(x_i, y_j) = F'(x_i, y_j) - \beta \cdot \left|D_A(x_i, y_j) - D_B(x_i, y_j)\right| \qquad (11)$$
where $F'(x_i, y_j)$ determines the brightness of the fused image and affects its energy. β is a weight coefficient, and $\beta \cdot |D_A(x_i,y_j) - D_B(x_i,y_j)|$ is the weighted difference of the two images, which contains their blur information; the larger β is, the stronger the image edges. For any area $r \in R$, $F'(i,j)$ is obtained by coefficient choice or coefficient weighting according to the similarity measure $S_{AB}(r)$ of the area. If $S_{AB}(r) < T_S$, where $T_S$ is the similarity threshold, the coefficient choice method is used:
$$F'(x_i, y_j) = \begin{cases} D_A(x_i, y_j), & \sigma_{A|r} \ge \sigma_{B|r} \\ D_B(x_i, y_j), & \sigma_{A|r} < \sigma_{B|r} \end{cases} \qquad (12)$$
If $S_{AB}(r) \ge T_S$, the coefficient weighting method is used:
$$F'(i,j) = \begin{cases} (1-\alpha) \cdot D_A(i,j) + \alpha \cdot D_B(i,j), & \sigma_{A|r} \ge \sigma_{B|r} \\ \alpha \cdot D_A(i,j) + (1-\alpha) \cdot D_B(i,j), & \sigma_{A|r} < \sigma_{B|r} \end{cases} \qquad (13)$$
where $\alpha = \frac{1}{2}\left(1 - \frac{1 - S_{AB}(r)}{1 - T_s}\right)$. For example, with $T_s = 0.7$ and $S_{AB}(r) = 0.85$, $\alpha = \frac{1}{2}(1 - 0.15/0.3) = 0.25$, so the subband with the larger standard deviation receives weight 0.75.

The edge-dependent fusion quality index (EFQI) is a recent objective index for evaluating image fusion quality; it reflects how well the fused image preserves edges and their surroundings. The larger the EFQI, the higher the quality of the fused image. Its definition [16] is:

$$Q = \frac{1}{|W|}\sum_{w \in W} \left(\lambda_A(w)\, Q_0(D_A, F \mid w) + \lambda_B(w)\, Q_0(D_B, F \mid w)\right) \qquad (14)$$

where Q denotes the EFQI and F is the fused coefficient of source images A and B in the frequency domain. $Q_0(A,B) = \frac{\sigma_{AB}}{\sigma_A \sigma_B} \cdot \frac{2\mu_A \mu_B}{\mu_A^2 + \mu_B^2} \cdot \frac{2\sigma_A \sigma_B}{\sigma_A^2 + \sigma_B^2}$, where $\sigma_A$ and $\sigma_B$ are the variances of the subband coefficients $D_A$ and $D_B$, and $\sigma_{AB}$ is their covariance. $Q_0(A, B \mid w)$ is the edge fusion quality index in window w, with
$$\lambda_A(w) = \frac{\sigma_{A|w}}{\sigma_{A|w} + \sigma_{B|w}}, \qquad \lambda_B(w) = 1 - \lambda_A(w)$$
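A minimal sketch of the low-frequency rule on a single window r, combining formulas (10) to (13) with the Q0 index of formula (14), is given below. The window handling and the fixed β are our assumptions; in the paper β is tuned by maximizing the EFQI, and Ts = 0.7 follows Section 5.

```python
import numpy as np

def similarity(a, b):
    """Region similarity S_AB(r) of formula (10); windows assumed non-constant."""
    mu_a, mu_b = a.mean(), b.mean()
    sd_a, sd_b = a.std(), b.std()
    corr = ((a - mu_a) * (b - mu_b)).sum() / np.sqrt(
        ((a - mu_a) ** 2).sum() * ((b - mu_b) ** 2).sum())
    return (corr + 2.0
            - abs(mu_a - mu_b) / max(mu_a, mu_b)
            - abs(sd_a - sd_b) / max(sd_a, sd_b)) / 3.0

def q0(a, b):
    """Universal quality index Q0 used inside the EFQI of formula (14)."""
    mu_a, mu_b = a.mean(), b.mean()
    sa, sb = a.std(), b.std()
    sab = ((a - mu_a) * (b - mu_b)).mean()
    return ((sab / (sa * sb))
            * (2 * mu_a * mu_b / (mu_a ** 2 + mu_b ** 2))
            * (2 * sa * sb / (sa ** 2 + sb ** 2)))

def fuse_low_window(DA, DB, Ts=0.7, beta=0.5):
    """Fuse one low-frequency window by formulas (11)-(13)."""
    s = similarity(DA, DB)
    if s < Ts:
        # Coefficient choice, formula (12): keep the higher-contrast window.
        F1 = DA if DA.std() >= DB.std() else DB
    else:
        # Coefficient weighting, formula (13).
        alpha = 0.5 * (1.0 - (1.0 - s) / (1.0 - Ts))
        if DA.std() >= DB.std():
            F1 = (1.0 - alpha) * DA + alpha * DB
        else:
            F1 = alpha * DA + (1.0 - alpha) * DB
    # Formula (11): subtract the beta-weighted difference (blur) term.
    return F1 - beta * np.abs(DA - DB)
```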
The parameters are determined by maximizing formula (14), i.e., by maximizing the edge fusion quality index.

(2) High-frequency subband fusion rules

The high-frequency components of an image contain its important features and detail information, so the key to fusion is extracting detail information from the source images effectively. The high-frequency subband fusion rules are as follows:

① The edge information of the high-frequency component is extracted by the Canny algorithm, and the high-frequency subband is divided into an edge part and a non-edge part.

② To better protect the image edge information, the edge part is fused by the largest-absolute-value method:

$$F(i,j) = \begin{cases} D_A(i,j), & \text{if } |D_A(i,j)| \ge |D_B(i,j)| \\ D_B(i,j), & \text{otherwise} \end{cases} \qquad (15)$$
③ The non-edge part is divided into clear and blurred areas by the method of Section 3.2:

$$F(i,j) = \begin{cases} D_A(i,j), & \text{if } A \text{ is clear} \\ D_B(i,j), & \text{otherwise} \end{cases} \qquad (16)$$
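The high-frequency rule can be sketched as follows, assuming scikit-image's Canny detector for rule ① and precomputed per-pixel clarity maps for the Section 3.2 judgment; both assumptions are ours.

```python
import numpy as np
from skimage.feature import canny  # assumed stand-in for the Canny step

def fuse_high(DA, DB, clarity_a, clarity_b, sigma=1.0):
    """Fuse one high-frequency directional subband by formulas (15)-(16)."""
    # Rule 1: split into edge and non-edge parts via Canny on both subbands.
    edges = canny(DA, sigma=sigma) | canny(DB, sigma=sigma)
    # Rule 2 (edge part): largest absolute value, formula (15).
    edge_fused = np.where(np.abs(DA) >= np.abs(DB), DA, DB)
    # Rule 3 (non-edge part): coefficients of the clearer source, formula (16).
    nonedge_fused = np.where(clarity_a > clarity_b, DA, DB)
    return np.where(edges, edge_fused, nonedge_fused)
```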
5. Experimental Results

Three groups of differently focused images were used to test the proposed algorithm. The experimental environment is a computer with an Intel Pentium(R) 2.8 GHz CPU, 512 MB of memory, and the Windows XP operating system. The experiments use perfectly registered images, a 3×3 neighborhood window, and similarity threshold $T_S = 0.7$. The proposed Dyadic-Contourlet Transform (D-CT) is compared experimentally with the Laplacian Pyramid Transform (LPT), the Wavelet Transform (WT), and the Non-Subsampled Contourlet Transform (NSCT); every method uses a four-level decomposition. WT uses the 'db4' wavelet basis; NSCT uses the classic '9-7' pyramid decomposition and the 'c-d' directional filter bank (DFB), with 16, 8, 4, and 4 directional subbands from the fine scale to the coarse scale.

Evaluation of image fusion is divided into subjective and objective standards. Subjective evaluation is affected by the observer, the image type, and the environmental conditions. Therefore, to evaluate the fusion effect objectively and quantitatively, this paper uses Entropy, Average Gradient, Standard Deviation, and Spatial Frequency; for a multifocus image, the larger these values, the better the quality of the fused image. The experimental data of the different fusion algorithms in Table 1 show that the proposed method has clear advantages in these evaluation indices.

(1) Entropy: The information entropy is an important index of the richness of image information and expresses the detail-rendering ability of an image; the size of the entropy reflects how much information the image carries.
$$H = -\sum_{i=0}^{L-1} p_i \log p_i \qquad (17)$$
where H is the entropy of the image, L is the total number of gray levels, $p_i = N_i / N$, $N_i$ is the number of pixels with gray value i, and N is the total number of pixels.

(2) Average Gradient: The average gradient sensitively reflects the image's ability to express contrast in tiny details, and thus its clarity: the larger the value, the clearer the image.
$$Ag = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1}\sum_{j=1}^{N-1} \sqrt{\frac{1}{2}\left[\left(\frac{\partial f(x_i, y_j)}{\partial x_i}\right)^2 + \left(\frac{\partial f(x_i, y_j)}{\partial y_j}\right)^2\right]} \qquad (18)$$
(3) Spatial Frequency: The spatial frequency reflects the overall activity of an image in the spatial domain. It comprises the row frequency RF and the column frequency CF:
$$RF = \sqrt{\frac{1}{M \cdot N}\sum_{i=1}^{M}\sum_{j=2}^{N} \left(I(x_i, y_j) - I(x_i, y_{j-1})\right)^2} \qquad (19)$$

$$CF = \sqrt{\frac{1}{M \cdot N}\sum_{i=2}^{M}\sum_{j=1}^{N} \left(I(x_i, y_j) - I(x_{i-1}, y_j)\right)^2} \qquad (20)$$

The total spatial frequency is:

$$SF = \sqrt{RF^2 + CF^2} \qquad (21)$$
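The four objective metrics (formulas (8), (17)-(21)) are straightforward to implement; a NumPy sketch for an 8-bit grayscale image follows. A base-2 logarithm is assumed in formula (17), consistent with the entropy values near 7 reported in Table 1.

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy, formula (17), with log base 2 (an assumption)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img):
    """Standard deviation, formulas (8)-(9)."""
    return float(img.std())

def average_gradient(img):
    """Average gradient, formula (18), via forward differences."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]   # vertical neighbor difference
    dy = f[:-1, 1:] - f[:-1, :-1]   # horizontal neighbor difference
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())

def spatial_frequency(img):
    """Spatial frequency, formulas (19)-(21)."""
    f = img.astype(float)
    rf = np.sqrt(((f[:, 1:] - f[:, :-1]) ** 2).sum() / f.size)  # RF, (19)
    cf = np.sqrt(((f[1:, :] - f[:-1, :]) ** 2).sum() / f.size)  # CF, (20)
    return float(np.sqrt(rf ** 2 + cf ** 2))                    # SF, (21)
```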
In Figure 8, Clock_A is focused on the big clock on the right (big clock sharp, small clock blurred), and Clock_B is focused on the small clock on the left (small clock sharp, big clock blurred). In Figure 9, Pepsi_A is the left-focused image with the near scene sharp, and Pepsi_B is the right-focused image with the far scene sharp. The three groups of source images are 512×512. The low-frequency and high-frequency subband fusion rules of this paper are applied at every scale.
Figure 8. Clock image fusion (source images Clock_A and Clock_B; fusion results of WT, NSCT, LPT, and D-CT)

Figure 9. Pepsi image fusion (source images Pepsi_A and Pepsi_B; fusion results of WT, NSCT, LPT, and D-CT)
Visually, both NSCT and D-CT obtain clear fused images; however, the proposed algorithm obtains a clearer fused image than either and achieves a satisfactory effect. The D-CT transform is superior to the wavelet transform and NSCT in expressing edge features, and the edges of its fused image are smoother. Because the Laplacian pyramid and wavelet transforms cannot accurately express directional edge features, their fusion performance is low; NSCT and D-CT, by contrast, possess good time-frequency locality, directionality, and translation invariance, capture image edge information better, and therefore perform well. Compared with NSCT, D-CT effectively reduces matching errors in the fusion operation. At the same time, under the same configuration, a single level of D-CT decomposition
contains more subbands and thus richer image information, so the image fusion algorithm based on the D-CT transform performs better. In a practical image fusion system, the fusion effect of the proposed algorithm is good; however, because the D-CT transform produces more subbands, it increases the time complexity to a certain extent.
Table 1. Experimental results comparison

              Clock                                    Pepsi
Image         Ent      Ag       Std        Sf          Ent      Ag       Std        Sf
LPT (lc)      7.2277   3.7618   112.7214   10.7220     7.1164   4.0936   107.7549   13.7573
WT (Lp)       7.3687   3.7107   112.6599   10.4085     7.1157   4.0954   107.7551   13.7385
NSCT (w3)     7.3893   3.7986   112.8318   10.3744     7.1188   4.0983   107.8678   13.7753
D-CT (w4)     7.4405   3.8571   113.1377   10.5814     7.1232   4.1702   107.6700   13.8614
The experimental results in Table 1 show that the fusion effect of the proposed algorithm is significant for edge details and that the algorithm realizes multifocus image fusion simply and effectively. The information entropy of the fused images obtained by the other three methods is lower, and their image quality is relatively poor and blurrier than the result of the proposed method. The Average Gradient of the fused image obtained by D-CT is larger than that of the other methods, which demonstrates that the fused image is clearer, its details are richer, and it preserves more edge information of the original images.
6. Conclusion

Because the dyadic wavelet and the non-subsampled Contourlet are translation invariant, they effectively avoid distortion, and the Contourlet also effectively captures multi-scale, multi-directional information in images. This paper therefore constructs a multifocus image fusion algorithm using the dyadic non-subsampled Contourlet transform. The images to be fused are decomposed by the dyadic Contourlet; the high-frequency and low-frequency subband coefficients are fused with separate rules; and the fused coefficients are inverse-transformed by the dyadic Contourlet to reconstruct the fused image. Experimental results verify that the fused images obtained by the proposed method have clear texture, preserve more edge detail information, and improve the fusion results.
7. Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 60970105, the National 863 High-Tech Program of China (2009AA01Z304), and the National Research Foundation for the Doctoral Program of Higher Education of China (20070422098).
8. References

[1] Do M. N., Vetterli M., "The contourlet transform: an efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol.14, no.12, pp.2091-2106, 2005.
[2] Chang Xia, Jiao Licheng, Jia Jianhua, "Multisensor Image Adaptive Fusion Based on Nonsubsampled Contourlet", Chinese Journal of Computers, vol.32, no.11, pp.2229-2238, 2009.
[3] Ye Chuanqi, Miao Qiguang, Wang Baoshu, "Image Fusion Method Based on the Nonsubsampled Contourlet Transform", Journal of Computer-Aided Design & Computer Graphics, vol.19, no.10, pp.1274-1278, 2007.
[4] Song Yajun, Ni Guoqiang, Gao Kun, "Regional Energy Weighting Image Fusion Algorithm by Wavelet Based Contourlet Transform", Transactions of Beijing Institute of Technology, vol.28, no.2, pp.168-172, 2008.
[5] Liang Dong, Li Yao, Shen Min, et al., "An Algorithm for Multi-Focus Image Fusion Using Wavelet Based Contourlet Transform", Acta Electronica Sinica, vol.35, no.2, pp.320-322, 2007.
[6] Cai Xi, Zhao Wei, "Discussion upon Effects of Contourlet Lowpass Filter on Contourlet-based Image Fusion Algorithms", Acta Automatica Sinica, vol.35, no.3, pp.258-266, 2009.
[7] Qiang Zhang, Bao-long Guo, "Multifocus image fusion using the nonsubsampled contourlet transform", Signal Processing, vol.89, no.7, pp.1334-1346, 2009.
[8] Qu Xiaobo, Yan Jingwen, Yang Guide, "Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian", Optics and Precision Engineering, vol.17, no.5, pp.1203-1212, 2009.
[9] Sun Wei, Guo Baolong, Chen Long, "Multifocus image fusion algorithm based on directional window statistics in nonsubsampled contourlet domain", Journal of Jilin University (Engineering and Technology Edition), vol.39, no.5, pp.1384-1389, 2009.
[10] Zhang Qiang, Guo Baolong, "Fusion of Multifocus Images Based on the Nonsubsampled Contourlet Transform", Acta Photonica Sinica, vol.37, no.4, pp.838-843, 2008.
[11] Huang Haidong, Wang Bin, Zhang Liming, "A New Method for Remote Sensing Image Fusion Based on Nonsubsampled Contourlet Transform", Journal of Fudan University (Natural Science), vol.47, no.1, pp.124-134, 2008.
[12] Yang Shuyuan, Wang Min, Lu Yanxiong, et al., "Fusion of multiparametric SAR images based on SW-nonsubsampled contourlet and PCNN", Signal Processing, vol.89, no.12, pp.2596-2608, 2009.
[13] Cunha A. L., Zhou J., Do M. N., "The nonsubsampled contourlet transform: Theory, design and application", IEEE Transactions on Image Processing, vol.15, no.10, pp.3089-3101, 2006.
[14] Yang Xuan, Yang Wanhai, Pei Jihong, "Fusion multifocus images using wavelet decomposition", Acta Electronica Sinica, vol.29, no.6, pp.846-848, 2001.
[15] Pajares G., Manuel J. C., "A wavelet-based image fusion tutorial", Pattern Recognition, vol.37, no.9, pp.1855-1872, 2004.
[16] Piella G., "New quality measures for image fusion", In Proceedings of the 7th International Conference on Information Fusion (Fusion 2004), International Society of Information Fusion (ISIF), Stockholm, Sweden, pp.542-546, 2004.