PHASE-BASED SEGMENTATION OF CELLS FROM BRIGHTFIELD MICROSCOPY

Rehan Ali 1, Mark Gooding PhD 2, Martin Christlieb PhD 3, Michael Brady FRS FEng 1

1 Wolfson Medical Vision Labs, Dept of Engineering Science, University of Oxford
2 Nuffield Dept of Obstetrics and Gynaecology, University of Oxford
3 Gray Cancer Institute, University of Oxford

ABSTRACT
Segmentation of transparent cells in brightfield microscopy images could facilitate the quantitative analysis of corresponding fluorescence images. However, this presents a challenge due to irregular morphology and weak intensity variation, particularly in ultra-thin regions. A boundary detection technique is applied to a series of variable-focus images, whereby a level set contour is initialised on a defocused image with improved intensity contrast and subsequently evolved towards the correct boundary using images of improving focus. Local phase coherence is used to identify features within the images, driving contour evolution particularly in near-focus images which lack intensity contrast. Preliminary results demonstrate the effectiveness of this approach in segmenting the main cell body regions.

Index Terms — Biomedical image processing, Microscopy, Image segmentation

1. INTRODUCTION

Fluorescence microscopy is a powerful in vitro technique for studying cell-level biological processes. In cancer cell studies, quantification of the uptake and efflux of fluorescent drug molecules such as daunorubicin can assist in pharmacokinetic drug profiling and the study of anti-chemotherapeutic activity [1]. This information can be derived from fluorescence microscopy images if an accurate cell delineation is available within which to sample the intracellular fluorescence signal. Fluorescence images yield a high signal-to-noise ratio compared to brightfield imaging, but cannot be used for automated segmentation, since the fluorescence signal cannot be guaranteed to illuminate the entire cell. Brightfield imaging is used to locate cells prior to fluorescence imaging (to minimise degradation of the fluorescence signal by photobleaching); however, cells viewed this way without staining are hard to visualise, as they are transparent, irregularly shaped, and exhibit no discernible difference in intensity profile compared to the background.
In many cases, cell biology experts can disagree on the precise cell boundary location. Zernike showed that slightly defocusing the microscope improves the image contrast by modifying the optical paths of out-of-phase diffracted light rays, bringing them into view on the imaging plane [2]. Defocusing brightfield images to visualise cells is still a commonly used technique, but it is typically done at the expense of anatomical detail, which is lost through blurring.

Some attempts have been made to automatically locate cell boundaries in such images. Thresholding is the most common approach; Wu's highly cited method [3] used adaptive thresholding coupled with morphological operators. More recently, Tscherepanow et al. [4] applied constrained active snakes to rounded ovarian cells with some success. Earlier methods have operated on intensity data alone, and may also have used slightly out-of-focus images in which ultra-thin cell regions were not visible. Ultra-thin regions are caused by the cell periphery spreading out to maximise cell-surface interactions, and are around 100 nm thick, cf. 1 μm for the nucleus-containing region (Figure 2). To date, the segmentation problem has not been robustly solved for cells with complex morphology.

Our approach is different in that it complements intensity with edge information obtained in two ways. The first is phase-shift by defocusing, as described above. A level set function [5] is initialised on a highly defocused image, in which the cell is visible as a strong dark smear and extraneous details are smoothed out. The contour then follows the emerging cell boundary in images of improved focus, where the boundary signal becomes more localised but contrast is diminished. This is achieved using local phase coherence, a powerful mathematical measure of feature strength [6], to reinforce the boundary signal from the contrast-deficient near-focus images.

Fig. 1. Two segmentation results of single HeLa cells within clusters of cells observed by brightfield microscopy, using our intensity/phase-based level set.

1-4244-0672-2/07/$20.00 ©2007 IEEE. ISBI 2007.

Figure 1 shows typical segmentation results obtained with our intensity/phase-based level set. The next section outlines the methods used; Section 3 demonstrates some of the results obtained. Further developments and uses of the method are discussed in Section 4.

2. METHOD

2.1. Image Acquisition

Transillumination brightfield images were acquired for HeLa immortal cervical cancer cells using an inverted Nikon Eclipse TE2000E epifluorescence microscope at 40x magnification, illuminated with a 100 W mercury arc lamp without any frequency filters. Images were acquired for several values of Δf = f − f0, the difference between the current objective-lens-to-object distance f and the in-focus distance f0.
Fig. 2. (left) Cross-section of cell spread over coverslip. (right) A plot showing the relationship between cell profile (thick line) and image intensity curvature (thin and dashed lines). The thin line represents an out-of-focus image, where the saddle point A is shifted from the actual cell stationary point B. The signal is relatively noise-free but heavily Gaussian smoothed. The dashed line represents an in-focus image, which more accurately describes the cell boundary position but is masked by noise.
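The behaviour sketched in Figure 2 (contrast following the curvature of the cell thickness profile, blurred as defocus increases) can be simulated for a synthetic thickness map. The following is a minimal sketch under stated assumptions: a Gaussian PSF whose width grows with |Δf|, and the hypothetical constants alpha and blur_per_df, neither of which comes from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def defocus_contrast(h, delta_f, alpha=1.0, blur_per_df=2.0):
    """Simulated brightfield contrast for a thickness map h(x, y).

    For small defocus the contrast is proportional to the curvature
    (Laplacian) of h; larger defocus further blurs the signal, modelled
    here by a Gaussian of width proportional to |delta_f|.
    """
    C = alpha * delta_f * laplace(h)
    return gaussian_filter(C, sigma=blur_per_df * abs(delta_f))
```

A flat (constant-thickness) region produces zero contrast under this model, consistent with the invisibility of the ultra-thin regions in near-focus images.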
2.2. Defocusing and Initialisation

Brightfield microscopy images are formed by a convolution of amplitude and phase information. The phase component is zero for in-focus images, but defocusing of a thin, transparent phase object (one which shifts the phase of the wave incident on the object, i.e. acting as a secondary lens) results in an increase in image contrast as part of the light diffracted by the object becomes visible on the image plane. Agero et al. [7] showed that, for objects of thickness h(x, y), the contrast C(x, y) at any image point (x, y) is directly proportional to the object's curvature, ∇²h(x, y), for very small Δf. However, this direct spatial correspondence between the cell position and its appearance in the defocused images breaks down for large Δf due to blurring of the signal by the Point Spread Function (PSF). The regions of increased contrast are approximately Gaussian smoothed, and also shifted away from the actual boundary position (Figure 2). Thus, the high curvature points at the edge of the thick region of the cell are represented by bright and dark bands in the image. This results in a saddle in the curvature of the image intensity on the brightfield image between the thick and ultra-thin regions. The Shape Index [8], given by Equation 1, maps such image intensity curvature to a linear scale with topological descriptors (Figure 3). The term n denotes the vector normal to the surface described by the image intensities.
s = (2/π) arctan[ (∂n_x/∂x + ∂n_y/∂y) / sqrt( (∂n_x/∂x − ∂n_y/∂y)² + 4 (∂n_x/∂y)(∂n_y/∂x) ) ]    (1)

Fig. 3. Topology scale for s, the normalised Shape Index metric (from [8]).

We calculate the Shape Index at a scale, θ, appropriate to the feature size in each image (Figure 4). This scale is varied in proportion to Δf, the degree of defocusing. Detecting s = 0 in the brightfield images generates closed contours of intensity saddles which, at the coarsest scale, where defocusing has removed extraneous detail, correspond to the region between the ultra-thin regions and the thick part of the cell body (Figure 2). This contour is used to initialise a level set method [5], which is refined as described below.

Fig. 4. (left) Shape Index map from defocused image, using a high value of θ. (right) Shape Index derived boundary contour initialisation superimposed on focused HeLa cell image.
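In terms of the second derivatives of the Gaussian-smoothed intensity surface (the usual small-slope approximation of the normal-vector form in Equation 1), the Shape Index can be computed as follows. This is a sketch, not the authors' implementation; the Gaussian scale sigma plays the role of the scale θ above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shape_index(image, sigma):
    """Shape Index of the image intensity surface at scale sigma.

    s = (2/pi) * arctan( (Ixx + Iyy) / sqrt((Ixx - Iyy)^2 + 4*Ixy^2) ),
    so s lies in (-1, 1), and s = 0 marks intensity saddles, the
    features used here to initialise the level set contour.
    """
    # Second derivatives via Gaussian derivative filters at scale sigma
    # (axis order is (row, column) = (y, x)).
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    denom = np.sqrt((Ixx - Iyy) ** 2 + 4.0 * Ixy ** 2) + 1e-12
    return (2.0 / np.pi) * np.arctan2(Ixx + Iyy, denom)
```

Tracing the zero crossings of this map would then yield the closed initialisation contours described above.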
2.3. Refinement By Level Sets

A signed distance function φ is generated from the initial contour found using the Shape Index. This function is subsequently evolved using the level set evolution term

∂φ/∂t = δ(φ)(βκ − F)    (2)

where δ is the Dirac delta function, κ is the curvature term and β is a weighting factor. F is a speed field, which we define as:

F = ζ + λ arctan(k(I − I_T))/π    (3)
Equation 3 is composed of two distinct terms, each normalised to the range ±0.5, with zero defining potential boundary features. The first, ζ, drives the contour towards points of local phase coherence, representing contrast-invariant and robust signal feature detection. It is based on the finding by Morrone et al. [6], who demonstrated that features such as steps or peaks exhibit phase congruency. The monogenic signal enables local phase estimation in multidimensional space; a detailed explanation is beyond the scope of this paper, and the reader should consult Felsberg's seminal work on the area [9]. A MATLAB implementation by Mellor and Brady [10] is used here to generate local phase maps, using scale-invariant, DC-free filters.

The second term evolves the contour with respect to the image intensity. This is zero-centred at a threshold I_T, which is estimated from the initialisation result as the value covering 90% of the pixel intensities within the initial region. The arctan term maps intensities into the range ±π/2 (±0.5 after division by π), smoothly ramping intensity values close to the threshold, with the ramp gradient determined by the constant k. The intensity information is strong in the defocused images, allowing contour expansion into long tail regions. For in-focus images this information becomes weak, and even misleading; therefore λ is scaled down in proportion to Δf, reflecting the diminishing contribution of the intensity term. At the same time, β is increased with improving focus to ensure a smooth level set contour in images with potentially misleading edge information. Equation 2 is solved to convergence in each image using a level set implementation in ITK, an open-source C++ image processing toolkit [11].

3. RESULTS
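A minimal Python sketch of the speed field in Equation 3 and one explicit-Euler step of Equation 2 follows. This is not the authors' ITK implementation: the smoothed delta width eps, the time step dt, and the default parameter values are assumptions, and λ and β are left as plain parameters rather than the Δf-dependent schedules described above.

```python
import numpy as np

def speed_field(I, zeta, I_T, k=0.05, lam=1.0):
    """F = zeta + lam * arctan(k * (I - I_T)) / pi   (Eq. 3).

    Each term lies in (-0.5, 0.5); zero marks candidate boundaries.
    """
    return zeta + lam * np.arctan(k * (I - I_T)) / np.pi

def evolve_step(phi, F, beta=0.2, dt=0.5, eps=1.5):
    """One explicit-Euler step of dphi/dt = delta(phi)(beta*kappa - F)  (Eq. 2)."""
    # Smoothed Dirac delta: confines the update to a band around the contour.
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)
    # Curvature kappa = div(grad(phi) / |grad(phi)|).
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    kappa = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    return phi + dt * delta * (beta * kappa - F)
```

In the paper, λ shrinks and β grows as focus improves; with this sketch that would amount to recomputing F with a smaller lam, and calling evolve_step with a larger beta, for each successive image in the focus series.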
Fig. 5. (top left to bottom right) HeLa cells being segmented at different focal distances, without local phase information.

Figure 4 shows an example initialisation. The Shape Index initialisation, operating on a highly defocused image, provides a closed contour which acts as a good first approximation to the general cell shape, reducing the possibility of the level set falling into a local minimum. The Shape Index detection is non-specific and can highlight several potential regions; however, a simple user intervention step selects the correct object. This minimises the effect of the user interaction upon the accuracy of the final segmentation result.

The level set was tested without (Figure 5) and with (Figure 6) the local phase information term, to test its specific contribution to the boundary edge detection. Figure 5 shows that the level set evolves well in defocused images, where intensity contrast is strong, but evolves away from the boundary in focused images and towards patterned intracellular regions. By comparison, when local phase is included, Figure 6 shows the level set contour tracking the cell boundary accurately across the range of defocused and focused images.

We have evaluated the accuracy of the method against manual segmentations provided by three expert cell biologists. Using a sample of 22 cells, our algorithm correctly classified 75% of true cell pixels (±8%). It was noted that the experts intentionally oversegmented the cells to ensure the whole fluorescence signal was within the boundary. The inter-expert variability was 4%. We implemented the methods described in [3] and [4]; however, these could only operate on defocused images, and their initialisations were very sensitive to image artefacts (such as the dark circles in Figure 6). As a result, they significantly undersegmented the cells (28% and 51% respectively). In contrast, our Shape Index based initialisation provides greater robustness against these artefacts.

4. DISCUSSION
We have developed a level set method which makes use of the available information across several images acquired at varying focal distances. The method makes use of optical defocusing to generate a strong initialisation, which is refined with improving focus using local phase information to amplify boundary features. It is currently able to home in on most of the primary cell body along with some of the ultra-thin regions, even in the case of cells with complex morphology. The results have been quantitatively assessed against manual segmentations and other methods, and are so far encouraging, but they show that further work is needed in order to delineate the cell boundary in the same way as the experts.

Understanding the formation of the defocused brightfield image may enable us to maximise the available information content in order to improve the segmentation results further. We are currently implementing a physical model of the diffraction of light by the cell in order to investigate this. Extension of this work is also planned to consider multiple cells at once; this can be achieved within the level set framework [12].

An effective brightfield cell segmentation would open up several biological avenues of research. We aim to use it in a study of pharmacokinetics based on [1], by extracting the total intracellular drug fluorescence signal from a time series of images and fitting the results to a mathematical model of cellular drug resistance.

Fig. 6. (top left to bottom right) HeLa cells being segmented at different focal distances, with local phase information. Bottom left shows the summation of intermediate segmentation results, demonstrating level set evolution. Bottom right shows the final segmentation overlaid on a fluorescence image of LysoTracker® dye uptake (Invitrogen Corporation™).

5. REFERENCES

[1] C. Bour-Dill, M-P. Gramain, J-L. Merlin, S. Marchal, and F. Guillemin, "Determination of intracellular organelles implicated in daunorubicin cytoplasmic sequestration in multidrug-resistant MCF-7 cells using fluorescence microscopy image analysis," Cytometry, vol. 39, no. 1, pp. 16–25, 2000.

[2] F. Zernike, Phase Contrast: A New Method for the Microscopic Observation of Transparent Objects, The Hague, 1942.

[3] K. Wu, D. Gauthier, and M. Levine, "Live cell image segmentation," IEEE Trans Biomed Eng, vol. 42, no. 1, pp. 1–12, 1995.

[4] M. Tscherepanow, F. Zollner, and F. Kummert, "Automatic segmentation of unstained living cells in brightfield microscope images," MDA Workshop on Mass-Data Analysis of Images and Signals, pp. 86–95, 2006.

[5] S. Osher and R.P. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Springer, 2002.

[6] M. Morrone and R. Owens, "Feature detection from local energy," Patt Rec Lett, vol. 6, pp. 303–313, 1987.

[7] U. Agero, C.H. Monken, C. Ropert, R. Gazzinelli, and O. Mesquita, "Cell surface fluctuations studied with defocusing microscopy," Phys Rev E, vol. 67, pp. 1–9, 2003.

[8] J. Koenderink and A. van Doorn, "Surface shape and curvature scales," Image Vision Comput, vol. 10, no. 8, pp. 557–565, 1992.

[9] M. Felsberg and G. Sommer, "The monogenic signal," IEEE Trans Sig Proc, vol. 49, no. 12, pp. 3136–3144, 2001.

[10] M. Mellor and M. Brady, "Phase mutual information as a similarity measure for registration," Med Image Analysis, vol. 9, no. 4, pp. 330–343, 2005.

[11] T.S. Yoo, M.J. Ackerman, and W.E. Lorensen, "Engineering and algorithm design for an image processing API: A technical report on ITK, the Insight Toolkit," Proc. of Medicine Meets Virtual Reality, pp. 586–592, 2002.

[12] M. Gooding, S. Kennedy, and A. Noble, Volume Reconstruction from Sparse 3D Ultrasonography, Springer, 2003.