
Bacterial foraging optimization based brain magnetic resonance image segmentation

Abdul Kayom Md Khairuzzaman 1,*

1 Department of Electrical Engineering, National Institute of Technology, Silchar, India
* [email protected]

Abstract: Segmentation partitions an image into its constituent parts and is essentially the pre-processing stage of image analysis and computer vision. In this work, T1- and T2-weighted brain magnetic resonance images are segmented using multilevel thresholding and the bacterial foraging optimization (BFO) algorithm. The thresholds are obtained by maximizing the between-class variance (multilevel Otsu method) of the image, and the BFO algorithm is used to optimize the threshold search. The edges are then obtained from the thresholded image by comparing the intensity of each pixel with its eight-connected neighbourhood, and post-processing is performed to remove spurious responses in the segmented image. The proposed segmentation technique is evaluated using edge detector evaluation parameters such as the figure of merit, Rand index and variation of information. The proposed brain MR image segmentation technique outperforms traditional edge detectors such as Canny and Sobel.

1. Introduction

Segmentation partitions an image into regions of interest or clusters. It has numerous applications ranging from computer vision to target matching for military purposes. Another important application is in medical science: for example, image segmentation helps in diagnosing abnormalities of the brain or any other part of the body from MRI or PET scans. MRI is a powerful non-invasive technique for the diagnosis and treatment planning of various diseases such as multiple sclerosis (MS), Alzheimer's disease, Parkinson's disease, epilepsy, cerebral atrophy and the presence of lesions such as glioma [1]. Effective image segmentation helps in classifying and analysing these disorders.

Histogram-based thresholding is the most popular and simple technique for image segmentation. Otsu [2] proposed a method for automatic threshold selection that maximizes the between-class variance of a gray-level image. Kapur's method [3] maximizes the posterior entropy, which indicates the homogeneity of the segmented classes. In general, the Kapur and Otsu methods are known for their good shape and uniformity measures. These methods were originally developed for bi-level thresholding and later extended to multilevel thresholding. All of them share a common problem: the computational complexity increases exponentially when they are extended to multilevel thresholding, owing to the exhaustive search for the optimal thresholds, which limits their use in multilevel thresholding applications [4].

Nature has always been an inspiration for solving computationally complex problems. Particle swarm optimization (PSO) [5, 6], ant colony optimization (ACO) [7] and BFO [8] are inspired by the foraging behaviour of animals in nature, and the computational complexity of multilevel thresholding is greatly reduced by using these nature-inspired optimization algorithms. Genetic algorithms (GA) [9] and PSO have been successfully applied to multilevel thresholding [10, 11]. Sathya et al. [4] showed that the BFO algorithm performs better than PSO and GA for multilevel thresholding in terms of accuracy, speed and stability of the solution. The bacterial foraging algorithm proposed by Passino [8] mimics the foraging behaviour of the E. coli bacteria present in the human intestine. In foraging theory, it is assumed that the objective of the animal is to search for and obtain nutrients such that the energy intake per unit time is maximized [8]. Maitra et al. [1] applied the BFO algorithm with Kapur's method for multilevel thresholding of brain MR images and showed its superiority over PSO-based multilevel thresholding.

Bi-level thresholding assumes that an image contains only an object and the background. It eliminates a lot of information and hence fails in most cases. In medical image segmentation, one has to be very careful while eliminating any information, as it may affect the diagnosis. It is therefore necessary to perform multilevel thresholding, which retains most of the important information and supports a better diagnosis; however, multilevel thresholding without optimization is time consuming. MRI is a state-of-the-art technique for the diagnosis of various diseases, so effective segmentation of MR images is necessary for better diagnosis. Our main objective is to determine optimal thresholds so that the image can be subdivided into several classes with different gray levels for easier analysis and interpretation. For this, we propose a multilevel thresholding technique based on the multilevel Otsu method optimized with the BFO algorithm for brain MR image segmentation.

First, multilevel thresholding is performed on the brain MR image. The edges in the thresholded image are then detected by comparing the intensity of each pixel with its eight-connected neighbourhood. The proposed algorithm is evaluated objectively using the Rand index [12], the variation of information and Pratt's figure of merit [13]; for this evaluation, a reference image is created by manually segmenting the original image. The proposed brain MR image segmentation technique outperforms traditional edge detection techniques such as Canny and Sobel.

2. Multilevel thresholding

Thresholding is a popular image segmentation technique because of its simplicity and effectiveness. Normally, the image histogram is used to determine the thresholds, and a number of methods exist for threshold selection. Kapur et al. [3] and Otsu [2] are the most widely acknowledged histogram-based automatic threshold selection methods: Kapur's method maximizes the posterior entropy of the thresholded image, whereas Otsu's method maximizes the variance between the segmented classes. Here we briefly explain the Otsu method extended to multilevel thresholding [4, 2].

Let there be $L$ gray levels in a given image, i.e. $0, 1, 2, \ldots, L-1$, and define

$P_i = \dfrac{h(i)}{N}, \qquad 0 \le i \le L-1,$

where $h(i)$ is the number of pixels with gray level $i$ and $N = \sum_{i=0}^{L-1} h(i)$ is the total number of pixels in the image.

Now, the multilevel thresholding problem can be formulated as an $m$-dimensional optimization problem for the determination of $m$ optimal thresholds $[t_1, t_2, \ldots, t_m]$, which divide the original image into $m+1$ classes: $C_0$ for $[0, \ldots, t_1-1]$, $C_1$ for $[t_1, \ldots, t_2-1]$, ..., and $C_m$ for $[t_m, \ldots, L-1]$. The thresholds are obtained by maximizing the following objective function:

$J(t_1, t_2, \ldots, t_m) = \sigma_0^2 + \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_m^2$   (1)

where

$\sigma_0^2 = \omega_0(\mu_0 - \mu_T)^2,\; \sigma_1^2 = \omega_1(\mu_1 - \mu_T)^2,\; \sigma_2^2 = \omega_2(\mu_2 - \mu_T)^2,\; \ldots,\; \sigma_m^2 = \omega_m(\mu_m - \mu_T)^2$

are the variances of the segmented classes. The class probabilities are

$\omega_0 = \sum_{i=0}^{t_1-1} P_i,\quad \omega_1 = \sum_{i=t_1}^{t_2-1} P_i,\quad \omega_2 = \sum_{i=t_2}^{t_3-1} P_i,\quad \ldots,\quad \omega_m = \sum_{i=t_m}^{L-1} P_i,$

and the mean levels $\mu_0, \mu_1, \ldots, \mu_m$ for the classes $C_0, C_1, \ldots, C_m$ are

$\mu_0 = \dfrac{\sum_{i=0}^{t_1-1} i P_i}{\omega_0},\quad \mu_1 = \dfrac{\sum_{i=t_1}^{t_2-1} i P_i}{\omega_1},\quad \ldots,\quad \mu_m = \dfrac{\sum_{i=t_m}^{L-1} i P_i}{\omega_m}.$

Let $\mu_T$ be the mean intensity of the whole image. Then

$\omega_0 \mu_0 + \omega_1 \mu_1 + \omega_2 \mu_2 + \cdots + \omega_m \mu_m = \mu_T$ and $\omega_0 + \omega_1 + \omega_2 + \cdots + \omega_m = 1.$

3. Bacterial Foraging Optimization

Foraging strategies are methods of locating, handling and ingesting food. Natural selection eliminates animals with poor foraging strategies and thus facilitates the propagation of the genes of the most successful strategies. After many generations, poor foraging strategies are either eliminated or reshaped into better ones. A foraging animal seeks to maximize the energy intake per unit time spent on foraging within its environmental and physiological constraints. The E. coli bacteria present in the human intestine follow a foraging behaviour that consists of chemotaxis, swarming, reproduction, and elimination or dispersal. Passino [8] modelled this evolutionary process as an effective optimization tool.

3.1 Chemotaxis

The bacterial movement of swimming (in a predefined direction) and tumbling (in altogether different directions) in the presence of attractant and repellent chemicals from other bacteria is called chemotaxis. A chemotactic step is a tumble followed by another tumble or a run. To represent a tumble, a unit-length random direction $\phi(j)$ is generated, which is then used to model chemotaxis as follows:

$X^i(j+1, k, l) = X^i(j, k, l) + C(i)\,\phi(j)$   (2)

where $X^i(j, k, l)$ represents the $i$th bacterium at the $j$th chemotactic, $k$th reproductive and $l$th elimination-dispersal event, and $C(i)$ is the step size taken in the direction of movement specified by the tumble (the run length unit).
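A minimal Python/NumPy sketch of the tumble-and-move update of Eq. (2) is given below; the function and argument names are illustrative, and the bacterium position is assumed to be stored as a one-dimensional array (for the thresholding problem, the $m$ candidate thresholds).

```python
import numpy as np

def chemotactic_step(X_i, C_i, rng=None):
    """One chemotactic move, Eq. (2): X^i(j+1,k,l) = X^i(j,k,l) + C(i)*phi(j)."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = rng.uniform(-1.0, 1.0, size=X_i.shape)   # random direction Delta(i)
    phi = delta / np.sqrt(delta @ delta)             # unit-length tumble direction phi(j)
    return X_i + C_i * phi                           # step of size C(i) along phi(j)
```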

3.2 Swarming

A bacterium that reaches a good food source produces a chemical attractant to invite other bacteria to swarm together. While swarming, the bacteria maintain a minimum distance between any two of them by secreting a chemical repellent. Swarming is represented mathematically as

$J_{cc}(X, P(j,k,l)) = \sum_{i=1}^{S} J_{cc}(X, X^i(j,k,l)) = \sum_{i=1}^{S} \left[ -d_{attract} \exp\left( -w_{attract} \sum_{n=1}^{m} (X_n - X_n^i)^2 \right) \right] + \sum_{i=1}^{S} \left[ h_{repellant} \exp\left( -w_{repellant} \sum_{n=1}^{m} (X_n - X_n^i)^2 \right) \right]$   (3)

where $J_{cc}(X, P(j,k,l))$ is the cost to be added to the actual cost function being optimized in order to simulate the swarming behaviour, $S$ is the total number of bacteria, $m$ is the number of parameters to be optimized (the dimension of the optimization problem), and $d_{attract}$, $w_{attract}$, $w_{repellant}$ and $h_{repellant}$ are coefficients to be chosen properly.
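The interaction cost of Eq. (3) can be sketched in Python/NumPy as follows; the default coefficient values are placeholders commonly used in the BFO literature, not values specified in this paper.

```python
import numpy as np

def swarm_cost(X, positions, d_attract=0.1, w_attract=0.2,
               h_repellant=0.1, w_repellant=10.0):
    """Cell-to-cell cost J_cc of Eq. (3) exerted on a point X by the whole
    bacterial population 'positions' (an S x m array)."""
    sq_dist = np.sum((positions - X) ** 2, axis=1)        # sum_n (X_n - X_n^i)^2, one value per bacterium
    attract = -d_attract * np.exp(-w_attract * sq_dist)   # attraction terms
    repel = h_repellant * np.exp(-w_repellant * sq_dist)  # repulsion terms
    return float(np.sum(attract + repel))
```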

3.3 Reproduction

After the completion of $N_c$ chemotactic steps, a reproduction step follows. The health of the $i$th bacterium is determined as

$J^{i}_{health} = \sum_{j=1}^{N_c} J_{sw}(i, j, k, l)$   (4)

Then, the bacteria are sorted in descending order of their health. The least healthy bacteria die, and the healthier bacteria take part in reproduction: each healthy bacterium splits into two bacteria with parameters identical to those of the parent, keeping the population of bacteria constant.
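A sketch of this reproduction step in Python/NumPy is shown below; it assumes an even population size $S$ and treats larger health values as healthier, which matches the descending-order sorting described above (the appropriate sign convention depends on whether the underlying cost is being minimized or maximized).

```python
import numpy as np

def reproduction(positions, health):
    """Sort bacteria by health (Eq. (4)), discard the least healthy half,
    and split each survivor into two identical copies so that the
    population size S stays constant."""
    S = positions.shape[0]
    order = np.argsort(health)[::-1]          # descending order of health
    survivors = positions[order[: S // 2]]    # healthier half takes part in reproduction
    return np.vstack([survivors, survivors])  # each survivor splits into two identical bacteria
```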

3.4 Elimination and dispersal

The bacterial population in a habitat may change gradually due to a shortage of food, or suddenly due to environmental or other factors. All the bacteria in a region may be killed, or a group may be dispersed to a new location. Such an event may destroy the chemotactic progress, but it may also assist chemotaxis, since a dispersal event may place bacteria near good food sources [1].
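A possible sketch of the elimination-dispersal event in Python/NumPy is given below; re-initializing dispersed bacteria uniformly over the search range (for thresholding, the gray-level range) is our assumption, since the text does not specify how dispersed bacteria are re-placed.

```python
import numpy as np

def eliminate_disperse(positions, p_ed, lower, upper, rng=None):
    """With probability P_ed, disperse a bacterium to a random point of the
    search space [lower, upper]; all other bacteria keep their positions."""
    rng = rng if rng is not None else np.random.default_rng()
    S, m = positions.shape
    mask = rng.random(S) < p_ed                       # bacteria selected for dispersal
    new_pos = rng.uniform(lower, upper, size=(S, m))  # random new locations
    return np.where(mask[:, None], new_pos, positions)
```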

3.5 Bacterial foraging optimization algorithm

The original BFO algorithm given by Passino [8] was modified by Sathya et al. [4] to increase the convergence speed and the global searching ability of the algorithm. In the modified BFO algorithm, the global best bacterium is used in the calculation of the swarm attraction function, and instead of averaging all the objective function values, the best value for each bacterium is considered. The BFO algorithm is briefly explained below in a step-by-step manner.

Step 1 Initialize the number of variables to be optimized $p$, the number of E. coli bacteria $S$, the number of chemotactic steps $N_c$, the maximum swimming length $N_s$, the number of reproduction steps $N_{re}$, the number of elimination-dispersal events $N_{ed}$, the probability of elimination-dispersal $P_{ed}$, and the step sizes $C(i)$, $i = 1, 2, \ldots, S$.
Step 2 Elimination-dispersal loop: $ell = ell + 1$.
Step 3 Reproduction loop: $k = k + 1$.
Step 4 Chemotaxis loop: $j = j + 1$.

Step 4.1 For $i = 1, 2, \ldots, S$, take a chemotactic step for the $i$th bacterium as follows.
Step 4.2 Calculate the value of the objective function $J(i, j, k, ell)$.
Step 4.3 Find the global best bacterium $X_{gn}$ from all the objective function values evaluated up to this point.
Step 4.4 Calculate $J_{sw}$, i.e. the cost function value $J$ to which the swarm attractant cost $J_{cc}$ is added, so that $J_{sw}$ can be expressed by the following equation:

$J_{sw}(i, j, k, ell) = J(i, j, k, ell) + J_{cc}\left(X_{gn}(j, k, ell), X(j, k, ell)\right)$

Step 4.5 Let $J_{last} = J_{sw}(i, j, k, ell)$; save this value for finding a better cost via a run.
Step 4.6 Tumble: generate a random vector $\Delta(i)$ with each element $\Delta_n(i)$, $n = 1, 2, \ldots, m$, a random number in $[-1, 1]$.
Step 4.7 Move:

$X^i(j+1, k, ell) = X^i(j, k, ell) + C(i)\,\dfrac{\Delta(i)}{\sqrt{\Delta^T(i)\,\Delta(i)}}$

which results in a step of size $C(i)$ in the direction of the tumble for the $i$th bacterium.
Step 4.8 Calculate $J(i, j+1, k, ell)$ and let

$J_{sw}(i, j+1, k, ell) = J(i, j+1, k, ell) + J_{cc}\left(X^i(j+1, k, ell), P(j+1, k, ell)\right)$

Step 4.9 Swim: set $m = 0$ (counter for swim length). While $m$