J Math Imaging Vis (2008) 30: 125–132 DOI 10.1007/s10851-007-0051-4

Variational Image Restoration and Decomposition with Curvelet Shrinkage

Lingling Jiang · Xiangchu Feng · Haiqing Yin

Published online: 22 November 2007 © Springer Science+Business Media, LLC 2007

Abstract The curvelet is better suited to image processing than the wavelet: it represents both the smooth parts and the edges of an image sparsely. Based on this, we present a new model for image restoration and decomposition via curvelet shrinkage. The new model can be seen as a modification of the Daubechies-Teschke model. By replacing the B^β_{p,q} term by a G^β_{p,q} term and writing the problem in a curvelet framework, we obtain elegant curvelet shrinkage schemes. Furthermore, the model allows us to incorporate general bounded linear blur operators into the problem. Various numerical results on denoising, deblurring and decomposition of images are presented, and they show that the model is valid.

Keywords Curvelets · Negative Hilbert-Sobolev space · Image decomposition · Image restoration · Image deblurring

1 Introduction

Image restoration and decomposition are of central interest in image processing. Many well-known models for image restoration work by decomposing the observed image f into a sum of two components, f = u + v, where u is the geometric (cartoon) component and v is the oscillatory component representing texture or noise. One classical example of such models was proposed by Rudin-Osher-Fatemi (ROF) [1].

L. Jiang · X. Feng · H. Yin
Department of Mathematics, Xidian University, Xi'an 710071, China
e-mail: [email protected]

The ROF model performs very well at removing noise while preserving edges. However, it fails to separate oscillatory components from high-frequency components well. To remedy this situation, Meyer [2] suggested that v should be modeled by a space of oscillating functions that is in some sense dual to the BV space. But Meyer's convex variational model cannot be solved directly. In [3], Vese-Osher (VO) proposed to model the oscillatory component v by first-order derivatives of vector fields g in L^p, 1 ≤ p < ∞, as an approximation to Meyer's model. In [4], the model from [3] was further analyzed and developed. For a bounded domain Ω, this is done by minimizing:

inf_{u,g1,g2} G(u, g1, g2) = ∫_Ω |∇u| + λ||f − (u + div g)||²_{L²(Ω)} + μ||g||_{L^p(Ω)}.   (1)

In the limiting case p = 2, Osher-Solé-Vese (OSV) [5] modified the minimization problem (1) to a new problem corresponding to the case λ = ∞ in (1). They proposed the following minimization problem:

inf_u G(u) = ∫_Ω |∇u| + λ ∫_Ω |∇(Δ^{−1}(f − u))|² dx.   (2)
L. Lieu and L. Vese [6] generalized the model (2) to H^{−s}, s > 0, for modeling the oscillatory component v:

inf_u G(u) = ∫_Ω |∇u| + λ||f − u||²_{H^{−s}(Ω)}   (3)

where ||v||²_{H^{−s}(Ω)} = ∫ (1 + |ξ|²)^{−s} |v̂(ξ)|² dξ and v̂ is the Fourier transform of v.
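As an aside, the weighted Fourier integral defining the H^{−s} norm is straightforward to approximate on a discrete grid. The sketch below is illustrative rather than from the paper; it assumes a periodic discretization and a unitary FFT normalization, and the function name is an invention for this example:

```python
import numpy as np

def h_minus_s_norm_sq(v, s):
    """Approximate ||v||^2_{H^{-s}} for a 2-D array v via the discrete
    Fourier transform: sum of (1 + |xi|^2)^(-s) |v_hat(xi)|^2."""
    n1, n2 = v.shape
    v_hat = np.fft.fft2(v) / np.sqrt(n1 * n2)   # unitary normalization
    xi1 = 2 * np.pi * np.fft.fftfreq(n1)
    xi2 = 2 * np.pi * np.fft.fftfreq(n2)
    X1, X2 = np.meshgrid(xi1, xi2, indexing="ij")
    weight = (1.0 + X1**2 + X2**2) ** (-s)       # damps high frequencies
    return float(np.sum(weight * np.abs(v_hat) ** 2))
```

For s = 0 this reduces to the L² norm (Parseval); the larger s is, the more strongly oscillatory content is discounted, which is why the H^{−s} penalty is a natural home for texture and noise.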


But it is a pity that the PDEs resulting from these variational problems are usually numerically intensive. Thus, in 2004, Daubechies and Teschke (DT) introduced a special variational model which was solved in the wavelet domain [7]:

inf_{u,v} F(u, v) = 2α|u|_{B^1_{1,1}(Ω)} + ||f − (u + v)||²_{L²(Ω)} + γ||v||²_{H^{−1}(Ω)}   (4)

where α, γ > 0 are tuning parameters. Despite the fact that wavelets have had a wide impact in image processing, they fail to efficiently represent objects with edges, for the simple reason that the wavelet transform does not take advantage of the geometry of the edge curve. To overcome this deficiency, researchers have recently considered multiscale and directional representations that can capture the intrinsic geometrical structures, such as smooth contours, in natural images. In particular, the first generation of curvelet transform, pioneered by Candès and Donoho [8], was shown to be optimal in a certain sense for functions in the continuous domain with curved singularities. In this paper, we apply the second generation of curvelets [9], which is considerably simpler to use than the first generation. It is known that suitable sparseness of a wavelet expansion is equivalent to a smoothness measure in a Besov space. However, curvelets correspond to a decomposition of the frequency plane that is significantly different from both the dyadic and the uniform decomposition, and as a consequence, sparseness for curvelet expansions cannot be described in terms of classical smoothness spaces. In [10], Borup-Nielsen defined curvelet-type tight frames and obtained a natural family of sparseness spaces G^β_{p,q} associated with the curvelet-type frames. The curvelet-type frames are quite similar to the second generation of curvelets, and the sparseness spaces associated with the two types of frames are the same. Therefore, the decomposition space norm can be completely characterized by a sparseness condition on the curvelet coefficients. Moreover, embedding results for curvelet-type decomposition spaces relative to Besov spaces were also given. Here, we are interested in B^1_{1,1}(Ω), H^{−s}(Ω) and the corresponding curvelet spaces G^{1/2}_{1,1}(Ω), G^{−s−1/4}_{2,2}(Ω).

In this paper, extending the range of applicability by incorporating a bounded linear blur operator K in the second term of (4) and generalizing the third term of (4) further to negative Hilbert-Sobolev spaces H^{−s}(Ω), s > 0, we have the following formulation:

inf_{u,v} F(u, v) = 2α|u|_{B^1_{1,1}(Ω)} + ||f − K(u + v)||²_{L²(Ω)} + γ||v||²_{H^{−s}(Ω)}.   (5)

By replacing B^1_{1,1}(Ω) and H^{−s}(Ω) by G^{1/2}_{1,1}(Ω) and G^{−s−1/4}_{2,2}(Ω) respectively, and writing the problem (5) in a curvelet framework, we end up with the following variational problem:

inf_{u,v} F(u, v) = 2α||u||_{G^{1/2}_{1,1}(Ω)} + ||f − K(u + v)||²_{L²(Ω)} + γ||v||²_{G^{−s−1/4}_{2,2}(Ω)}.   (6)

The rest of the paper is organized as follows. Section 2 introduces the second generation of curvelets and the decomposition spaces G^β_{p,q}. Section 3 is mainly concerned with minimizing the variational functional and with deriving somewhat advanced curvelet shrinkage schemes. In Sect. 4 we present results of image decomposition, denoising and deblurring via the curvelet-based model. Section 5 provides the concluding remarks.

2 Curvelets and Smoothness Spaces

2.1 The Second Generation of Curvelets

Assume v is an even C^∞(R) window supported on [−π, π] whose 2π-periodic extension obeys |v(θ)|² + |v(θ − π)|² = 1 for θ ∈ [0, 2π]. Define v_{j,l}(θ) = v(2^{j/2}θ − πl) for j ≥ 0 and l = 0, 1, ..., 2^{j/2} − 1. Assume that w is a smooth, compactly supported function that obeys

|w_0(t)|² + Σ_{j≥0} |w(2^{−j}t)|² = 1,  t ∈ R,

with w_0 a function supported in a neighborhood of the origin. For j ≥ 2 and l = 0, 1, ..., 2^{j/2} − 1, put:

k_{j,l}(ξ) = w(2^{−j}|ξ|)(v_{j,l}(θ) + v_{j,l}(θ + π)),  ξ = |ξ|e^{iθ}.   (7)
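A concrete admissible angular window can be built from the standard Meyer auxiliary polynomial. The sketch below is an assumption for illustration — the paper does not fix a particular v — but it does satisfy the required identity |v(θ)|² + |v(θ − π)|² = 1:

```python
import numpy as np

def meyer_step(t):
    """Smooth step with meyer_step(t) + meyer_step(1 - t) = 1 on [0, 1]."""
    t = np.clip(t, 0.0, 1.0)
    return t**4 * (35 - 84 * t + 70 * t**2 - 20 * t**3)

def angular_window(theta):
    """An even window v supported on [-pi, pi] whose 2*pi-periodic
    extension satisfies |v(theta)|^2 + |v(theta - pi)|^2 = 1."""
    theta = np.mod(theta + np.pi, 2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return np.cos(0.5 * np.pi * meyer_step(np.abs(theta) / np.pi))
```

The identity follows because cos²((π/2)ν(t)) + cos²((π/2)ν(1 − t)) = cos²((π/2)ν(t)) + sin²((π/2)ν(t)) = 1 whenever ν(t) + ν(1 − t) = 1, which the quartic Meyer polynomial satisfies.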

We let μ be the triple (j, l, k): here, j is a scale parameter, l is an orientation parameter, and k = (k1, k2), k1, k2 ∈ Z, is a translation parameter. We put J to be the pair of indices J = (j, l), and let θ_J = πl2^{−j/2}. The support of w(2^{−j}|ξ|)v(2^{j/2}θ) is contained in a rectangle R_j = I_{1j} × I_{2j} given by I_{1j} = {ξ1 : t_j ≤ ξ1 ≤ t_j + L_j}, I_{2j} = {ξ2 : 2|ξ2| ≤ l_j}, where L_j = δ1 · π2^j and l_j = δ2 · 2π2^{j/2}, with δ1 = 14/3 · (1 + O(2^{−j})) and δ2 = 10π/9. Now let Ĩ_{1j} = ±I_{1j} and define R̃_j = Ĩ_{1j} × I_{2j}. The system

u_{j,k}(ξ1, ξ2) = (2^{−3j/4} / (2π√(δ1δ2))) e^{i(k1+1/2)2^{−j}ξ1/δ1} e^{ik2 2^{−j/2}ξ2/δ2},  k1, k2 ∈ Z,

is then an orthonormal basis for L²(R̃_j). Finally, define

φ̂_μ(ξ) = k_J(ξ) u_{j,k}(R^T_{θ_J} ξ),  μ ∈ Λ,   (8)

where Λ := {λ = (j, l, k) : j ≥ 2, l = 0, 1, ..., 2^{j/2} − 1, k ∈ Z²}, R_{θ_J} is rotation by the angle θ_J, and let φ̂_{μ1}(ξ) = 2πk_1(ξ)u_k(ξ), where μ1 := {λ = (1, 0, k) : k ∈ Z²}, k_1(ξ)² = w_0(|ξ|)² + w(|ξ|)² + w(|ξ|/2)², and u_k(ξ) = (2πδ0)^{−1} e^{i(k1ξ1/δ0 + k2ξ2/δ0)} for a constant δ0 > 0. For any f ∈ L²(R²), the system {φ_μ}_μ obeys the relation

Σ_μ |⟨f, φ_μ⟩|² = ||f||²_{L²(R²)}.   (9)

Fig. 1 Sketch of a tiling of the frequency plane using the polar wedges

2.2 The Smoothness Spaces Adapted to Curvelets

In applied harmonic analysis, smoothness spaces are often designed following the principle that smoothness should be characterized by some decay or sparseness of an associated discrete expansion. Below we review the family of smoothness spaces in [10]. The family of spaces is based on structured decompositions of the frequency space R², so strictly speaking they are decomposition spaces on the Fourier side. By certain subband splittings of the frequency domain, a function f ∈ L²(R²) is decomposed into different subbands and the energies of these pieces are calculated; if, for a given measurement, these energies are finite, f belongs to the corresponding decomposition space. In particular, by splitting each dyadic band {x ∈ R² : 2^j ≤ |x| < 2^{j+1}} into approximately 2^{j/2} equally sized "wedges" (as shown in Fig. 1), we obtain the decomposition space G^β_{p,q} corresponding to the curvelet-type decomposition. In [10], Borup-Nielsen proved that the sparseness spaces associated with curvelet-type frames are the same as the sparseness spaces corresponding to the second generation of curvelets. Therefore, the decomposition space norm can be completely characterized by a sparseness condition on the curvelet coefficients:

||f||_{G^β_{p,q}} ≈ ( Σ_{(j,l)} 2^{jq(β + (3/2)(1/2 − 1/p))} ( Σ_k |⟨f, φ_{j,l,k}⟩|^p )^{q/p} )^{1/q}   (10)

where 0 < p ≤ ∞, 0 < q < ∞, β ∈ R. It is well known that Besov spaces can be defined using a partition of unity adapted to the dyadic frequency bands {x ∈ R² : 2^j ≤ |x| < 2^{j+1}}, so the curvelet splitting of the frequency plane can be considered a refinement of the Besov decomposition. By this observation, the embedding results between curvelet spaces and Besov spaces were obtained.
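Given curvelet coefficients grouped by wedge index (j, l), the quasi-norm (10) is a direct sum. A minimal sketch follows; the dictionary container `coeffs` and its layout are illustrative assumptions, not part of the paper:

```python
import numpy as np

def g_norm(coeffs, beta, p, q):
    """Evaluate the right-hand side of (10):
    ( sum_{(j,l)} 2^{j q (beta + (3/2)(1/2 - 1/p))}
        ( sum_k |c_{j,l,k}|^p )^{q/p} )^{1/q}.
    `coeffs` maps a wedge index (j, l) to an array of its coefficients."""
    total = 0.0
    for (j, l), c in coeffs.items():
        inner = np.sum(np.abs(c) ** p) ** (q / p)          # within-wedge l^p energy
        total += 2.0 ** (j * q * (beta + 1.5 * (0.5 - 1.0 / p))) * inner
    return total ** (1.0 / q)
```

Note that for p = q = 2 and β = 0 the scale weights collapse to 1 and the expression reduces to the plain l² norm of the coefficients, consistent with the tight-frame relation (9).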

Lemma 1 [10] For 0 < p ≤ ∞, 0 < q < ∞, and β ∈ R, we have

B^{β+s}_{p,q}(R²) → G^β_{p,q}(R²), where s = 1/(2q).

Likewise,

G^β_{p,q}(R²) → B^{β−s}_{p,q}(R²), where s = (max(1, 1/p) − min(1, 1/q))/2.

Remarks (1) When working with images, we are dealing with functions defined on a bounded domain Ω ⊂ R². However, Lemma 1 is stated on the whole space R². To resolve this, we extend a function given on the bounded domain Ω by zero to all of R².

(2) For B^1_{1,1}(R²) and H^{−s}(R²) = B^{−s}_{2,2}(R²) [6], Lemma 1 gives the embedding results

G^1_{1,1}(R²) → B^1_{1,1}(R²) → G^{1/2}_{1,1}(R²),
G^{1/4−s}_{2,2}(R²) → B^{−s}_{2,2}(R²) → G^{−s−1/4}_{2,2}(R²).

This shows that images have smaller norms in G^{1/2}_{1,1}(R²) and G^{−s−1/4}_{2,2}(R²) than in B^1_{1,1}(R²) and H^{−s}(R²) respectively. Therefore, we can pass to the variational problem (6).

(3) In [11], Mumford and Gidas pointed out that Gaussian white noise lives in ⋂_{ε>0} H^{−1−ε}_{loc}, where H^{−s}_{loc} (s ≥ 0) is the Hilbert-Sobolev space of negative degree of differentiability. On the other hand, by the embedding result H^{−s}(R²) → G^{−s−1/4}_{2,2}(R²), the space G^{−s−1/4}_{2,2}(R²) is larger than H^{−s}(R²). Therefore, we cannot conclude that Gaussian white noise does not live in G^{−s−1/4}_{2,2}(R²).
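The exponent bookkeeping behind Remark (2) can be made explicit; the following sketch simply applies Lemma 1 with (p, q) = (1, 1) and (2, 2):

```latex
% Case p = q = 1, target space G^{1/2}_{1,1}:
%   first embedding:  s = 1/(2q) = 1/2,
%   so  B^{1/2 + 1/2}_{1,1} = B^{1}_{1,1} \hookrightarrow G^{1/2}_{1,1};
%   second embedding: s = (\max(1,1) - \min(1,1))/2 = 0,
%   so  G^{1}_{1,1} \hookrightarrow B^{1}_{1,1}.
%
% Case p = q = 2, target space G^{-s-1/4}_{2,2} (s > 0 fixed):
%   first embedding:  s' = 1/(2q) = 1/4,
%   so  B^{-s}_{2,2} = B^{(-s-1/4) + 1/4}_{2,2} \hookrightarrow G^{-s-1/4}_{2,2};
%   second embedding: s' = (\max(1,1/2) - \min(1,1/2))/2 = 1/4,
%   so  G^{1/4-s}_{2,2} \hookrightarrow B^{-s}_{2,2}.
```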

Fig. 2 A part of Barbara image

3 Variational Problem and Minimization

In what follows, we solve the problem (6) in two cases: (1) K is the identity operator; (2) K is not the identity operator.

3.1 Variational Problem with K Being the Identity Operator

When K is the identity operator, the problem (6) simplifies to:

inf_{u,v} F(u, v) = 2α||u||_{G^{1/2}_{1,1}(Ω)} + ||f − (u + v)||²_{L²(Ω)} + γ||v||²_{G^{−s−1/4}_{2,2}(Ω)}.   (11)

Proposition 1 Let f ∈ L²(Ω) be a given function. The variational problem (11) is minimized by the parameterized class of functions ũ_{γ,α} and ṽ_{γ,α} given by the following curvelet series of f:

ũ_{γ,α} = Σ_{k∈Z²} ⟨f, φ_{μ1}⟩ φ_{μ1} + Σ_{λ∈Λ} S_T(f_λ) φ_μ,

ṽ_{γ,α} = Σ_{λ∈Λ} (1 + γ2^{|λ|(−2s−1/2)})^{−1} [f_λ − S_T(f_λ)] φ_μ,

where S_t denotes the soft-shrinkage operator with threshold t, and T = α2^{−|λ|/4}(1 + γ2^{|λ|(−2s−1/2)})/(γ2^{|λ|(−2s−1/2)}).

Proof Assuming f is fixed, we consider the family of variational problems (11), which naturally gives rise to a parameterized class of solutions: given the spaces G^{1/2}_{1,1}(Ω) and G^{−s−1/4}_{2,2}(Ω), find functions ũ_{γ,α} ∈ G^{1/2}_{1,1}(Ω) and ṽ_{γ,α} ∈ G^{−s−1/4}_{2,2}(Ω) minimizing the functional

F(u, v) = 2α||u||_{G^{1/2}_{1,1}(Ω)} + ||f − (u + v)||²_{L²(Ω)} + γ||v||²_{G^{−s−1/4}_{2,2}(Ω)}.   (12)

Applying (9) and (10), we have the following equivalent convex sequence-based functional:

S(u, v) = Σ_λ (2α2^{−|λ|/4}|u_λ| 1_{{λ∈Λ}} + |f_λ − (u_λ + v_λ)|² + γ2^{|λ|(−2s−1/2)}|v_λ|²).   (13)

Let [·]_λ denote the λth addend in (13); then the λth curvelet coefficient of the minimizer ṽ_{γ,α} has to satisfy:

(ṽ_{γ,α})_λ = (1 + γ2^{|λ|(−2s−1/2)})^{−1}(f_λ − u_λ).   (14)

Replacing v_λ by (ṽ_{γ,α})_λ in [S(u, v)]_λ yields:

[S(u, ṽ_{γ,α})]_λ = 2α2^{−|λ|/4}|u_λ| + γ2^{|λ|(−2s−1/2)} (1 + γ2^{|λ|(−2s−1/2)})^{−1}(f_λ − u_λ)².   (15)

Hence, the λth curvelet coefficient of the minimizer ũ_{γ,α} has to satisfy

(ũ_{γ,α})_λ = f_λ − T sign((ũ_{γ,α})_λ) = S_T(f_λ),   (16)

where S_T denotes the soft-shrinkage operator and T = α2^{−|λ|/4}(1 + γ2^{|λ|(−2s−1/2)})/(γ2^{|λ|(−2s−1/2)}). This results in a curvelet shrinkage scheme. Finally, replacing u_λ by (ũ_{γ,α})_λ in (14), we obtain: (ṽ_{γ,α})_λ = (1 + γ2^{|λ|(−2s−1/2)})^{−1}(f_λ − S_T(f_λ)). Reconstructing ũ_{γ,α} and ṽ_{γ,α} from these coefficients, we finish the proof.

3.2 Restoration in the Presence of Blur

If K is a linear operator modeling the blur, the variational functional (6) reads as

F(u, v) = 2α||u||_{G^{1/2}_{1,1}(Ω)} + ||f − K(u + v)||²_{L²(Ω)} + γ||v||²_{G^{−s−1/4}_{2,2}(Ω)}.   (17)

Minimizing (17) results in a coupled system of nonlinear equations for u_λ and v_λ. However, this can be circumvented by constructing a surrogate functional that removes the influence of K*K(u + v) [12], where K* is the adjoint operator of K. Since K can be renormalized, we restrict ourselves without loss of generality to the case ||K*K|| < 1. For a ∈ L²(Ω), the surrogate functional for (17) is of the form

F^{sur}(u, v, a) = F(u, v) + ||u + v − a||²_{L²(Ω)} − ||K(u + v − a)||²_{L²(Ω)}.   (18)

One can approach the minimizers of F(u, v) by an iterative process. One suitable iterative scheme can be formulated as follows:

a_0 = u_0 + v_0,  arbitrary;
(u_n, v_n) = arg min_{u,v} F^{sur}(u, v, a_{n−1});
a_n = u_n + v_n;  n = 1, 2, ....   (19)
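The identity-operator scheme of Proposition 1 acts independently on each curvelet coefficient. A coefficient-wise sketch follows; it assumes |λ| denotes the scale j of a coefficient, and the array names are chosen for illustration:

```python
import numpy as np

def soft_shrink(x, t):
    """Soft-shrinkage S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def shrink_coefficients(f_coeffs, scales, alpha, gamma, s):
    """Apply the curvelet shrinkage of Proposition 1 coefficient-wise.
    f_coeffs: fine-scale curvelet coefficients of f;
    scales:   scale j of each coefficient (standing in for |lambda|)."""
    w = 2.0 ** (scales * (-2.0 * s - 0.5))            # weight 2^{|lambda|(-2s-1/2)}
    T = alpha * 2.0 ** (-scales / 4.0) * (1.0 + gamma * w) / (gamma * w)
    u = soft_shrink(f_coeffs, T)                      # cartoon coefficients, (16)
    v = (f_coeffs - u) / (1.0 + gamma * w)            # texture coefficients, (14)
    return u, v
```

Note that u_λ + v_λ does not reproduce f_λ exactly: the residual (f_λ − u_λ) · γw/(1 + γw) is discarded, which is where the noise is absorbed.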


Fig. 3 Cartoon-texture decomposition by four models: figures (a), (b) and (c) are three cases of model (11)

Proposition 2 Let K be a bounded linear operator modeling the blur, with ||K*K|| < 1, and suppose the given function f ∈ L²(Ω). Then the nth iterate (u_{n,γ,α}, v_{n,γ,α}) of the scheme (19) is given by

u_{n,γ,α} = Σ_{k∈Z²} ⟨a_{n−1} + K*f − K*Ka_{n−1}, φ_{μ1}⟩ φ_{μ1} + Σ_{λ∈Λ} S_T((a_{n−1})_λ + (K*f)_λ − (K*Ka_{n−1})_λ) φ_μ,

v_{n,γ,α} = Σ_{λ∈Λ} (1 + γ2^{|λ|(−2s−1/2)})^{−1} [(a_{n−1})_λ + (K*f)_λ − (K*Ka_{n−1})_λ − S_T((a_{n−1})_λ + (K*f)_λ − (K*Ka_{n−1})_λ)] φ_μ,

where a_{n−1} = u_{n−1,γ,α} + v_{n−1,γ,α}, S_t denotes the soft-shrinkage operator with threshold t, and T = α2^{−|λ|/4}(1 + γ2^{|λ|(−2s−1/2)})/(γ2^{|λ|(−2s−1/2)}).
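The structure of the scheme (19) — a Landweber step a_{n−1} + K*(f − Ka_{n−1}) followed by shrinkage and damping — can be sketched as below. For brevity the sketch shrinks in the pixel basis with a single scalar weight w instead of transforming to curvelet coefficients, and implements K as a periodic Gaussian blur (self-adjoint, so K* = K); both are simplifying assumptions, not the paper's setup:

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Periodic Gaussian blur K applied in the Fourier domain.
    The normalized kernel makes K self-adjoint with K* = K."""
    n1, n2 = img.shape
    x1 = np.fft.fftfreq(n1) * n1
    x2 = np.fft.fftfreq(n2) * n2
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    kernel = np.exp(-(X1**2 + X2**2) / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def deblur_iterate(f, sigma, alpha, gamma, w, n_iter=20):
    """Iterative scheme (19): a_n = shrink(a_{n-1} + K*(f - K a_{n-1})).
    Shrinkage acts on pixels here for brevity; a real implementation
    would shrink curvelet coefficients as in Proposition 2."""
    K = lambda x: gaussian_blur_fft(x, sigma)
    T = alpha * (1.0 + gamma * w) / (gamma * w)        # threshold, cf. Prop. 2
    a = np.zeros_like(f)
    for _ in range(n_iter):
        r = a + K(f - K(a))                            # Landweber step (K* = K)
        u = np.sign(r) * np.maximum(np.abs(r) - T, 0)  # soft shrinkage -> cartoon
        v = (r - u) / (1.0 + gamma * w)                # damped residual -> texture
        a = u + v
    return u, v
```

With a small threshold the iteration behaves like plain Landweber deblurring, and the data residual ||f − K(u + v)|| decreases over the iterations; larger α suppresses more of the (amplified) noise at the cost of detail.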

4 Numerical Examples

In this section, we present numerical results obtained by applying our model (6) to image decomposition, denoising and deblurring.

Firstly, we use a part of the Barbara image for the decomposition experiments, as shown in Fig. 2. Figure 3 gives decomposition results for the structure u and the texture v using (11), the DT model (4), and the OSV and VO models. Figures 3(a), (b) and (c) are three cases of model (11). These results show that the new model separates the textured details v better from the non-textured image kept in u, while the DT model does not preserve edges well and results in edge blurring. In Fig. 3(e) and (f), some tablecloth and clothing textures remain in the cartoon part u, showing that the OSV and VO models give unsatisfactory results.

Secondly, we validate the denoising performance of the proposed model (11). In this experiment, we use the classic Lena image, corrupted by white Gaussian noise of σ = 20 (see Fig. 4). Figure 4(c) is one case of model (11). Table 1 gives comparative root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) values after image denoising using (11) and (4) for


Fig. 4 Denoising results on the Lena image: figure (c) is one case of model (11)

Table 1 RMSE and PSNR (dB) for the denoising results on the Lena image

Restoration model                        RMSE      PSNR
Noisy Lena (σ = 20)                      19.9881   22.1154
DT (α = 0.5; γ = 0.0001)                 8.6727    29.3678
New model, s = 1/2 (α = 5; γ = 0.5)      7.2433    30.9321
New model, s = 1/2 (α = 3; γ = 0.5)      8.1975    29.8572
New model, s = 1 (α = 0.5; γ = 0.5)      8.6482    29.3923
New model, s = 1 (α = 0.1; γ = 0.5)      7.6462    30.4618
New model, s = 2 (α = 0.001; γ = 0.5)    9.1216    28.9294
New model, s = 2 (α = 0.0005; γ = 0.5)   8.6383    29.4022

different parameter choices. These data indicate that the proposed new model denoises effectively and preserves edges better than the DT model.

Let ū and u be the exact image and the reconstructed image, respectively, of size M × M. For denoising, we measure the quality of the reconstructed image by the following quantities:

RMSE = sqrt( Σ_{i=1}^M Σ_{j=1}^M (ū_{i,j} − u_{i,j})² / M² );  PSNR = 10 log_{10}(255² / RMSE²).
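These two error measures translate directly into code; a small sketch for 8-bit images (function names are illustrative):

```python
import numpy as np

def rmse(u_exact, u_rec):
    """Root mean square error over an M x M image."""
    return float(np.sqrt(np.mean((u_exact - u_rec) ** 2)))

def psnr(u_exact, u_rec):
    """PSNR in dB for 8-bit images: 10 * log10(255^2 / RMSE^2)."""
    return float(10.0 * np.log10(255.0**2 / rmse(u_exact, u_rec) ** 2))
```

Lower RMSE and higher PSNR both indicate a reconstruction closer to the exact image; PSNR is just RMSE re-expressed on a logarithmic scale against the 8-bit dynamic range.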

In Figs. 5 and 6, we show deblurring results on a part of the Barbara image and on the Peppers image using the scheme (19). In our particular case K is a convolution operator with a Gaussian kernel. For the result in Fig. 5, the blurred image has RMSE = 29.0098; the improved image has RMSE = 6.1299 using the parameters s = 1; α = 0.5; γ = 0.5. For the result in Fig. 6, the blurred image has RMSE = 25.8758, and the improved image has RMSE = 7.2036 using the same parameters. Visually, we see a significant improvement in the recovered image as compared to the degraded one.

Our last experiment is recovery from a blurred and noisy image. In Fig. 7, we add white noise of standard deviation σ = 10 to the blurred image of Fig. 6. The degraded image has RMSE = 27.7081. The improved image is obtained with s = 1; α = 0.5; γ = 0.5, and has RMSE = 16.4683.


Fig. 5 Deblurring on a part of Barbara image using the scheme (19)

Fig. 6 Deblurring on Peppers image using the scheme (19)

Fig. 7 Denoising-deblurring result using the scheme (19)

5 Conclusions

The DT model gives explicit descriptions of v and u that are computed by wavelet shrinkage schemes. However, wavelet shrinkage schemes create artifacts in the form of undesirable oscillations, which manifest themselves as ringing and edge blurring. In this paper, we present a new method for image restoration and decomposition that can be seen as a modification of the DT model. It not only preserves edges well but also separates the texture part v from the cartoon part u well.

Acknowledgements The authors would like to thank the anonymous reviewers whose comments helped to improve the revised manuscript.

References

1. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)

2. Meyer, Y.: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. University Lecture Series, vol. 22. American Mathematical Society, Providence (2001)
3. Vese, L., Osher, S.: Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3) (2003)
4. Vese, L., Osher, S.: Image denoising and decomposition with total variation minimization and oscillatory functions. J. Math. Imaging Vis. (1–2), 7–18 (2004)
5. Osher, S., Solé, A., Vese, L.: Image decomposition and restoration using total variation minimization and the H^{−1} norm. SIAM J. Multiscale Model. Simul. 1(3), 349–370 (2003)
6. Lieu, L., Vese, L.: Image restoration and decomposition via bounded total variation and negative Hilbert-Sobolev spaces. Appl. Math. Optim. (2007, in press)
7. Daubechies, I., Teschke, G.: Wavelet based image decomposition by variational functionals. In: Proc. SPIE, Wavelet Applications in Industrial Processing, vol. 5266, pp. 94–105 (2004)
8. Candès, E.J., Donoho, D.L.: Curvelets—a surprisingly effective nonadaptive representation for objects with edges. In: Rabut, C., Cohen, A., Schumaker, L.L. (eds.) Curves and Surfaces, pp. 105–120. Vanderbilt University Press, Nashville (2000)
9. Candès, E.J., Donoho, D.L.: New tight frames of curvelets and optimal representations of objects with piecewise-C² singularities. Commun. Pure Appl. Math. 57(2), 219–266 (2004)
10. Borup, L., Nielsen, M.: Frame decomposition of decomposition spaces. J. Fourier Anal. Appl. 13(1), 39–70 (2007)
11. Mumford, D., Gidas, B.: Stochastic models for generic images. Q. Appl. Math. 59, 85–111 (2001)
12. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)

Lingling Jiang was born in Shandong, China, on August 15, 1981. She received the M.S. degree in computational mathematics from Xidian University, Xi'an, China, in 2005. She is currently pursuing the Ph.D. degree in applied mathematics at Xidian University. Her research interests include image processing, wavelets, ridgelets, curvelets, and partial differential equations.

Xiangchu Feng received the B.S. degree in computational mathematics from Xi'an Jiaotong University, Xi'an, China, in 1984, and the M.S. and Ph.D. degrees in applied mathematics from Xidian University in 1989 and 1999, respectively. He is currently a Professor of applied mathematics at Xidian University. His research interests include numerical analysis, wavelets, and partial differential equations for image processing.

Haiqing Yin was born in Shandong, China, on November 11, 1977. He is currently pursuing the Ph.D. degree in applied mathematics at Xidian University. His research interests include optimization problems, kernel-based learning and pattern recognition.

Xiangchu Feng received the B.S. degree in computational mathematics from the Xian JiaoTong University, Xi’an, China, 1984, and the M.S. and Ph.D degrees in applied mathematics from Xidian University I 1989 and 1999, respectively. He is currently a Professor of applied mathematics at Xidian University. His research interests include numerical analysis, wavelets, and partial differential equations for image processing Haiqing Yin was born in Shandong, China on November 11, 1977. He is currently pursuing the Ph.D. degree in applied mathematics at Xidian University. His research interests include optimization problems, kernel-based learning and pattern recognition.