A Multimodality Medical Image Fusion Algorithm Based on Wavelet Transform

Jionghua Teng, Xue Wang, Jingzhou Zhang, Suhuan Wang, and Pengfei Huo
College of Automation, Northwestern Polytechnical University, Xi'an 710072
Abstract. According to the characteristics of medical images, this paper presents a multimodality medical image fusion algorithm based on the wavelet transform. For the low-frequency coefficients of a medical image, the algorithm adopts the fusion rule of pixel absolute-value maximization; for the high-frequency coefficients, it uses a fusion rule that combines regional-information-entropy contrast-degree selection with weighted averaging. The fused medical image is then obtained by the inverse wavelet transform. We select two groups of CT/MRI and PET/MRI images to test our fusion algorithm and compare its results with those of a commonly used wavelet-transform fusion algorithm. The simulation results show that our algorithm not only preserves more information from the source medical images but also greatly enhances the characteristic and brightness information of the fused medical image, making it an effective and feasible medical image fusion algorithm.

Keywords: Medical image, Fusion algorithm, Wavelet transform, Regional information entropy.
1 Introduction

Medical image fusion refers to the matching and fusion of two or more images of the same lesion area acquired from different medical imaging equipment. Its purpose is to obtain complementary information, increase the amount of information, and make clinical diagnosis and treatment more accurate. The wavelet transform [1] can effectively separate the different frequency components of the source images, and specific fusion rules can be selected according to the characteristics of these components, yielding a fused image with a better visual effect. Because its good frequency characteristics, directionality, and layered structure coincide with human vision, the wavelet transform has been widely used in medical image fusion [2][3]. As a new field of information fusion technology, medical image fusion has become a focus of image research and processing. Medical images of different modalities provide complementary information about the human body [3]. For instance, computed tomography (CT) clearly expresses bone information, whereas magnetic resonance imaging (MRI) clearly expresses soft-tissue information. MRI displays the structure of a lesion clearly, while positron emission tomography (PET) can well reflect the function and metabolism diagnosis

Y. Tan, Y. Shi, and K.C. Tan (Eds.): ICSI 2010, Part II, LNCS 6146, pp. 627–633, 2010. © Springer-Verlag Berlin Heidelberg 2010
information of the brain. To obtain more comprehensive diagnostic information, we need to integrate the effective information of multimodality medical images. According to the characteristics of medical images, this paper presents a multimodality medical image fusion algorithm based on the wavelet transform. The algorithm adopts the fusion rule of pixel absolute-value maximization for the low-frequency coefficients and, for the high-frequency coefficients, a fusion rule that combines regional-information-entropy contrast-degree selection with weighted averaging. The simulation results show that the presented algorithm displays the fused image's detail and brightness information well and greatly enhances the fusion effect.
2 Fusion Algorithm Based on Wavelet Transform

The image fusion algorithm based on the wavelet transform can be described as follows: first, decompose each source image into a low-frequency component and horizontal, vertical, and diagonal high-frequency components via the wavelet transform; then fuse the low-frequency and high-frequency coefficients with different fusion rules; finally, obtain the fused image through the inverse wavelet transform.

2.1 Fusion Rule of Low-Frequency Coefficients

Low-frequency coefficients represent the approximate image information and reflect the overall outline. Currently, most fusion algorithms use a weighted-average fusion rule [4][5] for the low-frequency coefficients. However, medical images have unique properties and features compared with common images. For instance, although the CT and MRI images in this paper are both brain images, the outline of the MRI image is more complex, and the range of its low-frequency wavelet coefficients is wide, which means it contains more information; the outline of the CT image is simple, most of its low-frequency wavelet coefficients are zero, and the range of the remaining coefficients is narrow. Wavelet coefficients with larger absolute values correspond to stronger grayscale changes in the image, and human eyes are sensitive to these changes. So we adopt the fusion rule of absolute-value maximization for the low-frequency coefficients:
\[
F(i,j) =
\begin{cases}
A(i,j), & \text{if } |A(i,j)| > |B(i,j)| \\
B(i,j), & \text{otherwise}
\end{cases}
\qquad (1)
\]
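As an illustration, the selection rule of Eq. (1) can be sketched in a few lines of NumPy; the function name `fuse_low_freq` is our own, not part of the paper:

```python
import numpy as np

def fuse_low_freq(A, B):
    """Pixel-wise absolute-value maximization (Eq. 1):
    keep the coefficient with the larger magnitude."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.where(np.abs(A) > np.abs(B), A, B)

# Toy 2x2 low-frequency sub-bands
A = np.array([[3.0, -5.0], [0.0, 2.0]])
B = np.array([[-4.0, 1.0], [6.0, 2.0]])
print(fuse_low_freq(A, B))  # [[-4. -5.] [ 6.  2.]]
```

Note that on ties (equal magnitudes) the rule above keeps the coefficient from B; the paper does not specify the tie case.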
where F(i,j), A(i,j), and B(i,j) denote the low-frequency coefficients of the fused image F and the source images A and B at point (i,j), respectively.

2.2 Fusion Rule of High-Frequency Coefficients

High-frequency coefficients contain image detail information such as edges and texture; their processing directly affects the clearness and edge distortion of the fused image. Entropy represents the average information of an image, and it is defined as [6]:
A Multimodality Medical Image Fusion Algorithm Based on Wavelet Transform
629
\[
H = -\sum_{i=0}^{L-1} p_i \ln p_i \qquad (2)
\]
where $p_i$ is the probability of gray level $i$, and $i$ ranges over $[0, L-1]$. Entropy is an important measure of information abundance, so we can compare the richness of image details by comparing entropies: the bigger the entropy, the richer the details contained in the image. Because of the correlation between neighboring pixels, region-based image fusion reflects image characteristics and trends better than pixel-based fusion. We therefore adopt a fusion rule based on the regional information entropy contrast degree for the high-frequency coefficients. The specific fusion rules are as follows:

(1) Select a 3×3 region from source image A and compute its regional information entropy [7]:

\[
H_A^l = -\sum_{i=1}^{3}\sum_{j=1}^{3} p_{ij} \ln p_{ij} \qquad (3)
\]

\[
p_{ij} = f_A(i,j) \Big/ \sum_{i=1}^{3}\sum_{j=1}^{3} f_A(i,j) \qquad (4)
\]
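A minimal sketch of Eqs. (3)–(4) for one 3×3 window follows. We take coefficient magnitudes so the probabilities stay nonnegative (the paper writes $f_A(i,j)$ directly; this guard is our assumption), and `regional_entropy` is a name we introduce:

```python
import numpy as np

def regional_entropy(f, ci, cj):
    """Entropy of the 3x3 region of sub-band f centered at (ci, cj),
    per Eqs. (3)-(4): normalize the region to probabilities, then
    take -sum(p * ln p)."""
    # abs() is our guard so probabilities are nonnegative
    region = np.abs(f[ci - 1:ci + 2, cj - 1:cj + 2]).astype(float)
    total = region.sum()
    if total == 0:              # flat (all-zero) region: entropy 0
        return 0.0
    p = region / total
    p = p[p > 0]                # convention: 0 * ln 0 = 0
    return float(-(p * np.log(p)).sum())

# A uniform region has maximal entropy ln 9 ~= 2.197
f = np.ones((5, 5))
print(round(regional_entropy(f, 2, 2), 3))  # 2.197
```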
where $H_A^l$ is the regional information entropy of image A centered at point $(i,j)$ in direction $l$ ($l = H, V, D$, standing for the horizontal, vertical, and diagonal directions), $p_{ij}$ is the gray-value probability of point $(i,j)$ in the region, and $f_A(i,j)$ is the gray value of point $(i,j)$ in the region of image A. $H_B^l$ is obtained by applying the same calculation to image B.

(2) Compute the regional information entropy contrast degree:
\[
KH_A^l(i,j) = \frac{H_A^l(i,j)}{H_A^H(i,j) + H_A^V(i,j) + H_A^D(i,j)} \qquad (5)
\]
where $KH_A^l(i,j)$ is the regional information entropy contrast degree of image A; it represents the proportion of the high-frequency component in one direction (horizontal, vertical, or diagonal) among all the high-frequency components. $KH_B^l(i,j)$ is obtained in the same way.

(3) Compare the regional information entropy contrast degrees of the two images:
\[
\Delta K(i,j) = KH_A^l(i,j) - KH_B^l(i,j) \qquad (6)
\]
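Equations (5)–(6) amount to normalizing the three directional entropies at a point and differencing them across the two images; a small sketch (function and variable names are ours):

```python
import numpy as np

def contrast_degrees(H_h, H_v, H_d):
    """Eq. (5): each direction's regional entropy as a fraction of the
    sum over the three high-frequency directions at the same point."""
    s = np.asarray(H_h + H_v + H_d, dtype=float)
    s = np.where(s == 0, 1.0, s)      # guard: all three entropies zero
    return H_h / s, H_v / s, H_d / s

# Toy directional entropies for one region of images A and B
KA_h, _, _ = contrast_degrees(0.6, 0.3, 0.1)
KB_h, _, _ = contrast_degrees(0.2, 0.5, 0.3)
delta_h = KA_h - KB_h                 # Eq. (6), horizontal direction
print(round(float(delta_h), 3))       # 0.4
```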
\[
f^l(i,j) =
\begin{cases}
f_A^l(i,j), & \Delta K(i,j) \ge T \\
f_B^l(i,j), & \Delta K(i,j) \le -T \\
\alpha f_A^l(i,j) + (1-\alpha) f_B^l(i,j), & |\Delta K(i,j)| < T
\end{cases}
\qquad (7)
\]

where $\alpha = KH_A^l / (KH_A^l + KH_B^l)$.
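The case analysis in Eq. (7) can be vectorized over a whole sub-band. This sketch uses NumPy, with an illustrative default threshold T=0.1 (the paper only says T is preset) and names of our own choosing:

```python
import numpy as np

def fuse_high_freq(fA, fB, KA, KB, T=0.1):
    """Eq. (7): keep the coefficient whose region has the clearly
    higher contrast degree (|dK| >= T); otherwise blend the two with
    weight alpha = KA / (KA + KB)."""
    fA, fB = np.asarray(fA, float), np.asarray(fB, float)
    KA, KB = np.asarray(KA, float), np.asarray(KB, float)
    dK = KA - KB                              # Eq. (6)
    s = np.where(KA + KB == 0, 1.0, KA + KB)  # avoid divide-by-zero
    alpha = np.where(KA + KB == 0, 0.5, KA / s)
    blended = alpha * fA + (1 - alpha) * fB
    return np.where(dK >= T, fA, np.where(dK <= -T, fB, blended))

fA = np.array([1.0, 2.0, 3.0]); KA = np.array([0.5, 0.2, 0.3])
fB = np.array([10.0, 20.0, 30.0]); KB = np.array([0.1, 0.5, 0.3])
print(fuse_high_freq(fA, fB, KA, KB, T=0.2))  # [ 1.  20.  16.5]
```

The three cases are disjoint: the first two pick one source outright when the contrast gap is decisive, and the blend only applies in the ambiguous band $|\Delta K| < T$.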
where T is the preset threshold, 0