
Journal of Electronic Imaging 23(1), 013005 (Jan–Mar 2014)

Parameterized modeling of spatially varying optical blur Jonathan Simpkins and Robert L. Stevenson

University of Notre Dame, 275 Fitzpatrick Hall, Notre Dame, Indiana 46556

Abstract. Optical blur due to lens aberrations and defocus has been demonstrated in some cases to be spatially varying across the image plane. However, existing models in the literature for the point-spread function (PSF) corresponding to this blur are either parameterized and spatially invariant, or spatially varying but ad-hoc and discretely defined. A parameterized model is developed and presented for a spatially varying PSF due to lens aberrations and defocus in an imaging system. The model is motivated from an established theoretical framework in physics and is demonstrated to be able to unify a set of hundreds of PSF observations across the image plane into a single 10-parameter model. The accuracy of the model is demonstrated with simulations and measurement data collected by two separate research groups. © 2014 SPIE and IS&T [DOI: 10.1117/1.JEI.23.1.013005]

Keywords: Point spread function; image blur; Seidel aberrations; Hu moments.

Paper 13525P received Jan. 13, 2013; revised manuscript received Feb. 15, 2013; accepted for publication Mar. 10, 2013; published online Jan. 13, 2014.

1 Introduction
The point-spread function (PSF) is a description of how a point of light is redistributed over a local area of the sensor in an imaging system. The formation of a blurred observation image O can be viewed as a linear function of the underlying sharp image I and the PSF P [Eq. (1)]. Given a blurred observation O, there are established methods for estimating the underlying sharp image I, but these methods hinge on an accurate estimate of the PSF, and inaccurate PSF estimates have been demonstrated to result in inadequate image reconstructions.1

O(x, y) = \sum_{i,j} I(x + i, y + j) \, P_{x,y}(i, j).   (1)
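The formation model of Eq. (1) can be sketched directly in code. The helper `psf_at`, which returns the kernel for a given image point, is a hypothetical stand-in for any spatially varying PSF model; it is not a function from the paper.

```python
import numpy as np

def apply_spatially_varying_blur(I, psf_at):
    """Form a blurred observation O from a sharp image I per Eq. (1):
    O(x, y) = sum_{i,j} I(x+i, y+j) * P_{x,y}(i, j).
    `psf_at(x, y)` returns an odd-sized 2-D kernel for image point (x, y)."""
    H, W = I.shape
    O = np.zeros_like(I, dtype=float)
    for y in range(H):
        for x in range(W):
            P = psf_at(x, y)
            r = P.shape[0] // 2
            acc = 0.0
            for j in range(-r, r + 1):
                for i in range(-r, r + 1):
                    yy, xx = y + j, x + i
                    if 0 <= yy < H and 0 <= xx < W:
                        acc += I[yy, xx] * P[j + r, i + r]
            O[y, x] = acc
    return O
```

With a delta kernel at every location, the "blurred" image reproduces the input, which is a convenient sanity check for the indexing convention.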

The point location (x, y) is denoted in the coordinate frame of the image, whereas (i, j) is in the coordinate frame of the PSF, and the two coordinate frames differ only by a translation of the origin. P is generally assumed to vary slowly as a function of the ideal image point (x, y), although in application, P is sometimes assumed to be spatially invariant. Blur kernel models in the literature tend to fall into one of two categories: parameterized and spatially invariant, or spatially varying and discretely defined.2 The downside of the spatially invariant model is that both simulation3 and experiments4,5 have demonstrated the need for P to have a spatial dependency. The downside of the discretely defined model is that it has many degrees of freedom for a PSF of reasonable size and requires a larger dataset for accurate kernel estimation. Additionally, there is no well-motivated mechanism to accurately interpolate the PSF for any particular point (x, y) in the image plane: other works6,7 on spatially varying blur have instead relied on ad-hoc interpolation of discretely defined PSFs corresponding to known points on the image plane. Image restoration and analysis approaches, such as deconvolution,8 super-resolution,9 and depth-from-defocus,10

Address all correspondence to: Jonathan Simpkins, E-mail: [email protected]


require an accurate estimate of the blur kernel across the image plane. If the blur is considered discretely defined over independent subwindows of the image, this overall estimation requires high-fidelity blur estimates at a dense spatial sampling of subwindows. A parameterized spatially varying model, however, can provide a regularization mechanism which constrains each individual blur estimate as well as the relationship between neighboring blur estimates; the application of these constraints will be shown in this article to allow for the accurate and complete specification of spatially varying blur, given even just a sparse sampling of reasonable-fidelity PSF observations.

In this article, a novel, parameterized model will be developed and presented for a spatially varying PSF due to Seidel aberrations and defocus in an imaging system. Section 2 will develop an imaging model which describes the path of a single ray of light through an imaging system affected by Seidel aberrations and defocus. Section 3 will build on this raytracing model to develop the proposed spatially varying PSF model. The results of applying the proposed model to data from two separate research groups will be presented in Sec. 4, and Sec. 5 will present conclusions and discuss directions for future work.

2 Imaging Model
This section will present two established models from the literature which describe the path of a single ray of light passing through a lens (the Seidel aberration model and the Gaussian thick-lens model). A derivation will then be presented of a composite model, which will be used as the proposed raytracing model to account for lens aberrations and defocus. The section will conclude with a discussion of the limitations and implicit assumptions of this imaging model.
2.1 Seidel Aberration Model
The Seidel aberration model11 describes where a ray leaving from a point P'_0 on the exit pupil should arrive on the focused-image plane (at aberrated arrival point P_1), given



Fig. 1 The Seidel-aberrated raytracing model (adapted from Born and Wolf11).

an ideal arrival point P_1^\star (Fig. 1). The model is based on a third-order Taylor series expansion of Snell's law and assumes a spherical lens surface which is symmetric about the optical axis. A complete presentation of the Seidel model is given by Born and Wolf.11 In the formulation of the model presented by Born and Wolf, the actual optical system is abstracted away, leaving the more generally defined planes of the object, the entrance and exit pupils, and the focused-image plane. The Seidel aberration model requires only five parameters, corresponding to the aberration coefficients representing spherical aberration (B), astigmatism (C), field curvature (D), radial distortion (E), and coma (F). The general effect of each of these aberrations on P_{x,y}(i, j) can be briefly summarized as:

1. Spherical aberration: P_{x,y}(i, j) is spread out radially in (i, j), and the effect does not depend on (x, y). This results in ideal defocus blur having a soft falloff more similar to a Gaussian blur kernel.

2. Astigmatism: P_{x,y}(i, j) is spread out in (i, j) either along a line from (x, y) to the origin or perpendicular to this line. This results in an ideal defocus blur disc appearing as an ellipse, and the effect varies in intensity as a function of distance from the image center.

3. Field curvature: P_{x,y}(i, j) is spread out radially in (i, j). Effectively, this aberration causes the focus distance for the system to vary as a function of the distance to the optical axis, and the amount of defocus can therefore vary as a function of distance from the image center.

4. Radial distortion: P_{x,y}(i, j) is shifted in (i, j), but is otherwise unaffected. This turns out to be equivalent to a third-order limitation of the Brown radial distortion model,12 which is popular in the computer vision literature.

5. Coma: P_{x,y}(i, j) is skewed along a line from (x, y) to the origin. This can result in a comet-shaped blur kernel which is oriented toward or away from the image center.
With the exception of spherical aberration, the effect of each of the above aberrations varies with respect to the radius of (x, y) from the origin, and therefore the presence of almost any of these aberrations will result in a blur kernel which varies across the image plane. Interested readers are referred

Fig. 2 The thick-lens raytracing model with defocus consideration. The principal planes of the lens are indicated with vertical dashed lines and separated by distance T_Lens.

to Ref. 13 for more information on the Seidel aberrations and the physical explanation for each.

2.2 Gaussian Thick-Lens Model with Offset Apertures
The Seidel model describes the arrival point of the aberrated ray on the focused-image plane (the conjugate plane of the object plane), which is not necessarily the plane of the image sensor. In order to take defocus into account, the Gaussian thick-lens geometric optics model with offset apertures14 can be used to follow the path of a ray traveling from the object at point P_0 to the image sensor at point P_Img (Fig. 2). In this model, the lens is represented by a pair of equivalent refractive planes, which are separated along the optical axis by a distance T_Lens. In the thick-lens model, there are three pairs of conjugate planes:

1. The focal and the image planes (at distances Z_Foc and Z_Img), which represent the focus distance of the camera and the location of the image sensor, respectively.

2. The object and the focused-image planes (at distances Z_Obj and Z_Foc,Img), which represent the object distance and the location where the image of the object would appear in focus in the absence of aberrations, respectively.

3. The entrance and exit pupils of the lens system (at distances Z_Enter and Z_Exit).

The distance T_Lens between the equivalent refractive surfaces of the model presents itself as an adjustment to the traditional thin-lens equation [Eqs. (2)-(4)]. If the focal length of the lens is given as f, then the pairs of planes are related as

f^{-1} = (Z_{Foc} - T_{Lens})^{-1} + Z_{Img}^{-1}   (2)

f^{-1} = (Z_{Obj} - T_{Lens})^{-1} + Z_{Foc,Img}^{-1}   (3)

f^{-1} = (Z_{Enter} - T_{Lens})^{-1} + Z_{Exit}^{-1}.   (4)
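Each of Eqs. (2)-(4) has the same form, so a single helper solves any of the three conjugate-plane relations; the function and argument names below are illustrative, not from the paper.

```python
def conjugate_distance(f, t_lens, z_front):
    """Solve f^{-1} = (z_front - T_Lens)^{-1} + z_back^{-1} for z_back,
    the thick-lens adjustment of Eqs. (2)-(4)."""
    return 1.0 / (1.0 / f - 1.0 / (z_front - t_lens))
```

For example, with f = 50, T_Lens = 10, and a front-side distance of 1010 (arbitrary units), the returned back-side distance satisfies the thick-lens equation by construction and is slightly longer than f, as expected for a finite conjugate.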

In the common case15 of a lens containing many optical elements, the thick-lens model with offset apertures is an approximation to the overall effect of the lens. Therefore


T_Lens, Z_Enter, and Z_Exit may be the effective thickness and offsets of the thick-lens approximation and not the actual lens thickness or locations of the physical lens pupils.

2.3 Proposed Raytracing Model
By incorporating the Seidel aberration model into the thick-lens model, the aberrated point P_1 can be obtained (the point at which the ray would intersect the focused-image plane), and it is therefore possible to compute the point P_Img at which the ray actually intersects the imaging sensor. Without the Seidel model, aberration would not be accounted for, and without the thick-lens model, defocus would not be accounted for. With the proposed combined model,13 the arrival point on the imaging plane at (x_Img, y_Img) of a ray arriving at the entrance pupil at (x'_0, y'_0) is given by Eqs. (5) and (6):

x_{Img} = x_1^\star \left( 1 - E \, \frac{Z_2 M^2 [(x_1^\star)^2 + (y_1^\star)^2]}{Z_1^3} \right) + M x_0' \left( 1 - \frac{Z_2}{Z_1} \right) + B \, \frac{Z_2 x_0' [(x_0')^2 + (y_0')^2]}{M Z_1} + D \, \frac{Z_2 M x_0' [(x_1^\star)^2 + (y_1^\star)^2]}{Z_1^2} + C \, \frac{2 Z_2 M x_1^\star (x_1^\star x_0' + y_1^\star y_0')}{Z_1^2} - F \, \frac{Z_2 [3 (x_0')^2 x_1^\star + (y_0')^2 x_1^\star + 2 x_0' y_0' y_1^\star]}{Z_1}   (5)

y_{Img} = y_1^\star \left( 1 - E \, \frac{Z_2 M^2 [(y_1^\star)^2 + (x_1^\star)^2]}{Z_1^3} \right) + M y_0' \left( 1 - \frac{Z_2}{Z_1} \right) + B \, \frac{Z_2 y_0' [(x_0')^2 + (y_0')^2]}{M Z_1} + D \, \frac{Z_2 M y_0' [(x_1^\star)^2 + (y_1^\star)^2]}{Z_1^2} + C \, \frac{2 Z_2 M y_1^\star (x_1^\star x_0' + y_1^\star y_0')}{Z_1^2} - F \, \frac{Z_2 [3 (y_0')^2 y_1^\star + (x_0')^2 y_1^\star + 2 x_0' y_0' x_1^\star]}{Z_1},   (6)

where M is the magnification from the entrance to the exit pupil, Z_1 and Z_2 are the distances of the focused-image and the image planes, respectively, to the exit pupil of the system, and P_1^\star is the nonaberrated arrival point of the ray on the focused-image plane. These simplifying parameters are computed from the thick-lens model as

M = \frac{Z_{Exit}}{T_{Lens} - Z_{Enter}}   (7)

Z_1 = Z_{Foc,Img} - Z_{Exit}   (8)

Z_2 = Z_{Img} - Z_{Exit}   (9)

P_1^\star = \left( 1 - \frac{Z_{Foc,Img}}{f} \right) P_0.   (10)

2.4 Imaging Model Limitations and Assumptions
The proposed imaging model assumes that the system is composed of elements which are rotationally symmetric about the optical axis. It is important to emphasize that the aberrations which are accurately represented by this model are those caused by an imperfect lens design using ideal radially symmetric elements, and not aberrations caused by imperfect fabrication of the lens system.

Additionally, it is important to point out that the proposed imaging model does not represent wave properties of light such as diffraction, which can be the dominant cause of blur in systems with small apertures. The proposed model is intended to represent lens systems which are not diffraction limited and where the effects of blur can be approximately represented by a purely geometric imaging model.

Finally, the proposed imaging model is assumed to operate on monochromatic light. Because the Seidel aberration model follows from a Taylor-series approximation to Snell's law, and because the index of refraction for a material can vary as a function of the wavelength of light, the corresponding Seidel aberration coefficients can also vary as a function of the wavelength of light. However, the PSF model proposed in the next section can still be practically implemented with color images, following the general approach proposed by Ref. 16. In this approach, the effect of blur is modeled independently for each of the three color channels in a color image (red, green, and blue). This approach has been demonstrated to be effective at representing and correcting chromatic aberrations present in spatially varying blur.2

3 PSF Model
The proposed spatially varying PSF model has two components, which function together to produce the surface of the PSF at any particular point across the image plane.

• The first component is a spatially varying model, which predicts the central Hu moments of the PSF as a function of the location (x, y) across the image plane.

• The second component is a spatially agnostic model, which, given a set of predicted Hu moments for the PSF at a particular location, generates the corresponding surface in (i, j) for the PSF at that location (which will be referred to as the "instantaneous" PSF).

This section will begin by reviewing the relevant background on Hu moments. The proposed model for the Hu moments of a spatially varying PSF will then be presented, and the section will conclude with the presentation of two candidate instantaneous PSF models.

3.1 Background on Hu Moments and Their Computation from Grayscale Images
The Hu moments were first proposed by their namesake17 in 1962, as a compact and robust representation of object shape.


Given an image ρ(x, y) (assumed continuous), the Hu moment m_pq is defined for p, q = 0, 1, 2, … as

m_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^p y^q \rho(x, y) \, dx \, dy.   (11)

The Hu moment has a clear analogy to the moments of a joint bivariate probability distribution function. Following this analogy, the Hu moment is extended to the concept of a normalized central moment \mu'_{pq}:

\bar{x} = m_{10} / m_{00}   (12)

\bar{y} = m_{01} / m_{00}   (13)

\mu'_{pq} = \frac{1}{m_{00}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \bar{x})^p (y - \bar{y})^q \rho(x, y) \, dx \, dy.   (14)

The Hu moments proved a valuable tool for object recognition and detection,18-22 because combinations of central Hu moments were proven to have certain properties that were invariant to different transformations: \mu'_{pq}, for example, is invariant to translation and to scaling of the image intensity. An approach proposed by Liao and Pawlak23 for computing the Hu moments from a digital image f(x, y) (which is defined only at integer values of x and y) is to treat f(x, y) as continuous but piecewise constant:

\hat{m}_{pq} = \sum_{x=-\infty}^{\infty} \sum_{y=-\infty}^{\infty} h_{pq}(x, y) f(x, y)   (15)

h_{pq}(x, y) = \int_{x-1/2}^{x+1/2} \int_{y-1/2}^{y+1/2} \tilde{x}^p \tilde{y}^q \, d\tilde{y} \, d\tilde{x}.   (16)

The results in this article follow Liao's approach for computing Hu moments from a digital image of a PSF observation, with a numerical approximation to the double integral in Eq. (16).

3.2 Proposed Model for Hu Moments of PSF Across Image Plane
To compute the theoretical Hu moments of a PSF under the raytracing model proposed in Sec. 2.3, the integral of Eq. (11) needs to be modified. Consider that for a raytracing model, the sum effect of the light passing through the optical system is the sum effect of the individual rays passing through the system. Rather than integrating with respect to coordinates (x, y) on the image plane, it is possible to integrate with respect to coordinates (x'_0, y'_0) on the entrance pupil of the system. By posing the integration in this way, it is possible to compute the Hu moments of a PSF even though the PSF itself does not have a closed-form expression ρ(x, y) under the proposed imaging model. If it is assumed that the entrance pupil is a disc centered on the optical axis with radius R_Ap, that the rays fall uniformly over the entrance pupil, and that the arrival point of any ray passing through the entrance pupil is given by Eqs. (5) and (6), then the Hu moment m_pq of the PSF for ideal image point (x_1^\star, y_1^\star) can be computed in closed form with Eq. (17):

m_{pq}(x_1^\star, y_1^\star) = \int_{-R_{Ap}}^{R_{Ap}} \int_{-\sqrt{R_{Ap}^2 - (y_0')^2}}^{\sqrt{R_{Ap}^2 - (y_0')^2}} [x_{Img}(x_1^\star, y_1^\star, x_0', y_0')]^p \, [y_{Img}(x_1^\star, y_1^\star, x_0', y_0')]^q \, dx_0' \, dy_0'.   (17)

Calculating this integral for p, q ∈ {0, 1, 2, 3}, the normalized second- and third-order central moments can be solved for as functions of the aberration coefficients, the aperture radius, the focus and object distances, and the location in the image plane [Eqs. (18)-(24)]:

\mu'_{2,0}(x_1^\star, y_1^\star) = S_1 + S_2 (x_1^\star)^2 + S_3 [(x_1^\star)^2 + (y_1^\star)^2] + S_4 [(x_1^\star)^4 + (x_1^\star)^2 (y_1^\star)^2] + S_5 [(x_1^\star)^2 + (y_1^\star)^2]^2   (18)

\mu'_{1,1}(x_1^\star, y_1^\star) = S_2 (x_1^\star y_1^\star) + S_4 [x_1^\star (y_1^\star)^3 + (x_1^\star)^3 y_1^\star]   (19)

\mu'_{0,2}(x_1^\star, y_1^\star) = S_1 + S_2 (y_1^\star)^2 + S_3 [(x_1^\star)^2 + (y_1^\star)^2] + S_4 [(y_1^\star)^4 + (x_1^\star)^2 (y_1^\star)^2] + S_5 [(x_1^\star)^2 + (y_1^\star)^2]^2   (20)

\mu'_{3,0}(x_1^\star, y_1^\star) = S_6 [3 (x_1^\star)^5 - 3 x_1^\star (y_1^\star)^4] + S_7 [3 (x_1^\star)^3 (y_1^\star)^2 + 3 x_1^\star (y_1^\star)^4] + S_8 (x_1^\star)^3 + S_9 [3 x_1^\star (y_1^\star)^2] + S_{10} (3 x_1^\star)   (21)

\mu'_{2,1}(x_1^\star, y_1^\star) = S_6 [5 (x_1^\star)^4 y_1^\star + 4 (x_1^\star)^2 (y_1^\star)^3 - (y_1^\star)^5] + S_7 [(y_1^\star)^5 - 2 (x_1^\star)^4 y_1^\star - (x_1^\star)^2 (y_1^\star)^3] + S_8 [(x_1^\star)^2 y_1^\star] + S_9 [(y_1^\star)^3 - 2 (x_1^\star)^2 y_1^\star] + S_{10} (y_1^\star)   (22)

\mu'_{1,2}(x_1^\star, y_1^\star) = S_6 [5 x_1^\star (y_1^\star)^4 + 4 (x_1^\star)^3 (y_1^\star)^2 - (x_1^\star)^5] + S_7 [(x_1^\star)^5 - 2 x_1^\star (y_1^\star)^4 - (x_1^\star)^3 (y_1^\star)^2] + S_8 [x_1^\star (y_1^\star)^2] + S_9 [(x_1^\star)^3 - 2 x_1^\star (y_1^\star)^2] + S_{10} (x_1^\star)   (23)

\mu'_{0,3}(x_1^\star, y_1^\star) = S_6 [3 (y_1^\star)^5 - 3 (x_1^\star)^4 y_1^\star] + S_7 [3 (x_1^\star)^2 (y_1^\star)^3 + 3 (x_1^\star)^4 y_1^\star] + S_8 (y_1^\star)^3 + S_9 [3 (x_1^\star)^2 y_1^\star] + S_{10} (3 y_1^\star),   (24)

where the constants S_1, …, S_{10} are introduced to make the central moments polynomial functions of the point (x_1^\star, y_1^\star) across the image plane, and the constants are determined from the optical parameters of the camera as

S_1 = \frac{(Z_1 - Z_2)^2 M^2 R_{Ap}^2}{4 Z_1^2} + \frac{B (Z_1 - Z_2) Z_2 R_{Ap}^4}{3 Z_1} + \frac{B^2 Z_2^2 R_{Ap}^6}{8 M^2}   (25)

S_2 = \left( \frac{F^2 + 2BC}{3} \right) \left( \frac{Z_2 R_{Ap}^2}{Z_1} \right)^2 + C \left( \frac{Z_2 (Z_1 - Z_2) M^2 R_{Ap}^2}{Z_1^3} \right)   (26)

S_3 = \left( \frac{F^2 + 2BD}{6} \right) \left( \frac{Z_2 R_{Ap}^2}{Z_1} \right)^2 + \frac{D}{2} \left( \frac{Z_2 (Z_1 - Z_2) M^2 R_{Ap}^2}{Z_1^3} \right)   (27)

S_4 = (C^2 + CD) M^2 \left( \frac{Z_2 R_{Ap}}{Z_1^2} \right)^2   (28)

S_5 = \left( \frac{D M}{2} \right)^2 \left( \frac{Z_2 R_{Ap}}{Z_1^2} \right)^2   (29)

S_6 = -\frac{(2C + D)^2 F M^2 R_{Ap}^4 Z_2^3}{6 Z_1^5}   (30)

S_7 = -\frac{(C + D)(2C + D) F M^2 R_{Ap}^4 Z_2^3}{3 Z_1^5}   (31)

S_8 = -\frac{(2C + D) F M^2 R_{Ap}^4 (Z_1 - Z_2) Z_2^2}{Z_1^4} - \frac{7 B (2C + D) F R_{Ap}^6 Z_2^3}{8 Z_1^3} - \frac{F^3 R_{Ap}^6 Z_2^3}{4 Z_1^3}   (32)

S_9 = -\frac{(C + D) F M^2 R_{Ap}^4 (Z_1 - Z_2) Z_2^2}{3 Z_1^4} - \frac{B (6C + 7D) F R_{Ap}^6 Z_2^3}{24 Z_1^3} - \frac{F^3 R_{Ap}^6 Z_2^3}{12 Z_1^3}   (33)

S_{10} = -\frac{F M^2 R_{Ap}^4 (Z_1 - Z_2)^2 Z_2}{6 Z_1^3} - \frac{7 B F R_{Ap}^6 (Z_1 - Z_2) Z_2^2}{24 Z_1^2} - \frac{B^2 F R_{Ap}^8 Z_2^3}{8 M^2 Z_1}.   (34)
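Given a fitted set of constants S_1, …, S_{10}, the moment model of Eqs. (18)-(24) is a direct polynomial evaluation at each image location. A sketch (the function name is illustrative; the polynomials follow the equations as reconstructed above):

```python
def predicted_moments(x, y, S):
    """Evaluate the spatially varying moment model of Eqs. (18)-(24) at ideal
    image point (x, y) = (x1*, y1*), given the 10 parameters S = [S1..S10]."""
    S1, S2, S3, S4, S5, S6, S7, S8, S9, S10 = S
    r2 = x * x + y * y
    mu20 = S1 + S2 * x**2 + S3 * r2 + S4 * (x**4 + x**2 * y**2) + S5 * r2**2
    mu11 = S2 * (x * y) + S4 * (x * y**3 + x**3 * y)
    mu02 = S1 + S2 * y**2 + S3 * r2 + S4 * (y**4 + x**2 * y**2) + S5 * r2**2
    mu30 = (S6 * (3 * x**5 - 3 * x * y**4)
            + S7 * (3 * x**3 * y**2 + 3 * x * y**4)
            + S8 * x**3 + S9 * (3 * x * y**2) + S10 * (3 * x))
    mu21 = (S6 * (5 * x**4 * y + 4 * x**2 * y**3 - y**5)
            + S7 * (y**5 - 2 * x**4 * y - x**2 * y**3)
            + S8 * (x**2 * y) + S9 * (y**3 - 2 * x**2 * y) + S10 * y)
    mu12 = (S6 * (5 * x * y**4 + 4 * x**3 * y**2 - x**5)
            + S7 * (x**5 - 2 * x * y**4 - x**3 * y**2)
            + S8 * (x * y**2) + S9 * (x**3 - 2 * x * y**2) + S10 * x)
    mu03 = (S6 * (3 * y**5 - 3 * x**4 * y)
            + S7 * (3 * x**2 * y**3 + 3 * x**4 * y)
            + S8 * y**3 + S9 * (3 * x**2 * y) + S10 * (3 * y))
    return mu20, mu11, mu02, mu30, mu21, mu12, mu03
```

At the image center all third-order terms vanish and the second-order moments reduce to S_1, and swapping x and y exchanges μ'_{2,0} with μ'_{0,2} and μ'_{3,0} with μ'_{0,3}, which are useful consistency checks on the polynomials.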

In application, the constants S_1, …, S_{10} can be treated as parameters of the system, which are fixed for a given lens, object distance, focus distance, and focal length. This approach is taken to produce the results in this article, but future work will seek to exploit the relationships among the constants, as well as the relationship between the constants at one object distance and those at a different object distance.

3.3 Instantaneous PSF Models
The second component of the proposed model takes a set of predicted Hu moments and produces the corresponding estimate of the surface of the instantaneous PSF which has those Hu moments. There are two candidate models for the instantaneous PSF that will be evaluated in this article: the elliptical Gaussian model and the bivariate skew-normal distribution. The elliptical Gaussian model is specified entirely by the second-order central moments:

P_{Gauss}(i, j) = \lambda_{Gauss} \exp\left( -\frac{1}{2} \begin{pmatrix} i & j \end{pmatrix} \begin{pmatrix} \mu'_{2,0} & \mu'_{1,1} \\ \mu'_{1,1} & \mu'_{0,2} \end{pmatrix}^{-1} \begin{pmatrix} i \\ j \end{pmatrix} \right),   (35)

where λ_Gauss is a normalizing constant, so that the sum of the PSF is unity over the support of (i, j). The Gaussian PSF model is a commonly assumed PSF model in the literature.2,24 Unfortunately, the Gaussian PSF model cannot represent skew (it requires all four of the third-order moments to be zero). This is a major shortcoming of the Gaussian model, because with a lens that exhibits coma (represented by Seidel aberration coefficient F), the resulting lens-effect PSF can display significant skew. Interestingly, Eqs. (21)-(24) and (30)-(34) show that if the coma coefficient F goes to zero, all third-order moments also go to zero, regardless of the values of the other four Seidel aberration coefficients. Examples of PSFs displaying significant skew can be seen in Sec. 4 and in Figs. 3 and 4.

To allow for nonzero skew, an alternative candidate is the skew-normal model. The model is formed as

P_{SN}(i, j) = \lambda_{SN} \exp\left( -\frac{\gamma_1^2 - 2 \omega \gamma_1 \gamma_2 + \gamma_2^2}{2 (1 - \omega^2)} \right) \int_{-\infty}^{\alpha_1 \gamma_1 + \alpha_2 \gamma_2} \exp\left( -\frac{t^2}{2} \right) dt,   (36)

where γ_1 = i/κ_1 and γ_2 = j/κ_2. The skew-normal model has five parameters (κ_1, κ_2, ω, α_1, and α_2), which affect the shape of the PSF surface, and a normalizing parameter λ_SN. If α_1 and α_2 are chosen to be zero, this model reduces to the Gaussian model. The skew-normal model has been championed in the field of statistics by Azzalini and Dalla Valle26 (whose work is the basis for the presentation of the model here), and it has elsewhere been applied to modeling nonisotropic PSFs by Hansen and Jensen.27

The mapping to the model parameters from the second- and third-order Hu moments is far less direct in this case than it was with the Gaussian model, and the mapping is not guaranteed to be exact (five model parameters must account for seven second- and third-order Hu moments). The proposed mapping (derived in the Appendix) forms the five model parameters as functions of two additional variables δ_1 and δ_2, which are each bounded on [−1, 1] and must satisfy the constraint of Eq. (42). The second-order moments are guaranteed to be matched for any choice of δ_1 and δ_2 that satisfies Eq. (42), and the "optimal" mapping is chosen as the one which minimizes the sum-squared error ε_SSE of the third-order moments [Eq. (43)]:

\kappa_1 = \sqrt{ \mu'_{2,0} \Big/ \left( 1 - \frac{2}{\pi} \delta_1^2 \right) }   (37)

\kappa_2 = \sqrt{ \mu'_{0,2} \Big/ \left( 1 - \frac{2}{\pi} \delta_2^2 \right) }   (38)

\omega = \frac{\mu'_{1,1}}{\kappa_1 \kappa_2} + \frac{2}{\pi} \delta_1 \delta_2   (39)
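The elliptical Gaussian instantaneous PSF of Eq. (35) can be constructed directly from the three second-order moments. A minimal sketch on a discrete (i, j) support (the function name and the choice of support radius are illustrative):

```python
import numpy as np

def gaussian_psf(mu20, mu11, mu02, radius):
    """Elliptical Gaussian instantaneous PSF of Eq. (35): a bivariate normal
    whose covariance matrix holds the second-order central moments."""
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    icov = np.linalg.inv(cov)
    # (i, j) coordinates of the discrete PSF support, centered on the origin
    j, i = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    quad = icov[0, 0] * i**2 + 2 * icov[0, 1] * i * j + icov[1, 1] * j**2
    psf = np.exp(-0.5 * quad)
    return psf / psf.sum()   # lambda_Gauss: normalize to unit sum
```

The normalization plays the role of λ_Gauss, so the kernel sums to one over its support regardless of the covariance.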


Fig. 3 Simulated PSFs across the image plane by Brauers et al.3

Fig. 4 Measured PSFs from Rangarajan et al.25

\alpha_1 = \frac{\delta_1 - \delta_2 \omega}{\sqrt{(1 - \omega^2)(1 - \omega^2 - \delta_1^2 - \delta_2^2 + 2 \delta_1 \delta_2 \omega)}}   (40)

\alpha_2 = \frac{\delta_2 - \delta_1 \omega}{\sqrt{(1 - \omega^2)(1 - \omega^2 - \delta_1^2 - \delta_2^2 + 2 \delta_1 \delta_2 \omega)}}   (41)

\delta_1 \delta_2 - \sqrt{(1 - \delta_1^2)(1 - \delta_2^2)} < \omega < \delta_1 \delta_2 + \sqrt{(1 - \delta_1^2)(1 - \delta_2^2)}   (42)

\varepsilon_{SSE} = \left( \mu'_{3,0} - \phi \, \delta_1^3 \kappa_1^3 \right)^2 + \left( \mu'_{2,1} - \phi \, \delta_1^2 \delta_2 \kappa_1^2 \kappa_2 \right)^2 + \left( \mu'_{1,2} - \phi \, \delta_1 \delta_2^2 \kappa_1 \kappa_2^2 \right)^2 + \left( \mu'_{0,3} - \phi \, \delta_2^3 \kappa_2^3 \right)^2,   (43)

where ϕ ≈ 0.218.
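A brute-force sketch of the mapping of Eqs. (37)-(43): grid search over (δ_1, δ_2), matching the second-order moments by construction and minimizing ε_SSE over the third-order moments. The moment expressions follow the reconstruction given above and should be checked against the original paper; the function name and grid resolution are illustrative.

```python
import numpy as np

PHI = 0.218  # phi in Eq. (43)

def fit_skew_normal(mu, n_grid=201):
    """Grid search over (delta_1, delta_2) for the skew-normal mapping:
    Eqs. (37)-(39) match the second-order moments exactly, Eq. (42) is the
    feasibility constraint, and Eq. (43) is minimized over the third order.
    `mu` maps (p, q) -> normalized central moment mu'_{p,q}."""
    best_sse, best = None, None
    for d1 in np.linspace(-0.99, 0.99, n_grid):
        for d2 in np.linspace(-0.99, 0.99, n_grid):
            k1 = np.sqrt(mu[(2, 0)] / (1 - (2 / np.pi) * d1**2))  # Eq. (37)
            k2 = np.sqrt(mu[(0, 2)] / (1 - (2 / np.pi) * d2**2))  # Eq. (38)
            w = mu[(1, 1)] / (k1 * k2) + (2 / np.pi) * d1 * d2    # Eq. (39)
            s = np.sqrt((1 - d1**2) * (1 - d2**2))
            if not (d1 * d2 - s < w < d1 * d2 + s):               # Eq. (42)
                continue
            den = (1 - w**2) * (1 - w**2 - d1**2 - d2**2 + 2 * d1 * d2 * w)
            if den <= 0:  # numerical guard; Eq. (42) implies den > 0
                continue
            sse = ((mu[(3, 0)] - PHI * d1**3 * k1**3) ** 2        # Eq. (43)
                   + (mu[(2, 1)] - PHI * d1**2 * d2 * k1**2 * k2) ** 2
                   + (mu[(1, 2)] - PHI * d1 * d2**2 * k1 * k2**2) ** 2
                   + (mu[(0, 3)] - PHI * d2**3 * k2**3) ** 2)
            if best_sse is None or sse < best_sse:
                best_sse = sse
                best = {"kappa1": k1, "kappa2": k2, "omega": w,
                        "alpha1": (d1 - d2 * w) / np.sqrt(den),   # Eq. (40)
                        "alpha2": (d2 - d1 * w) / np.sqrt(den)}   # Eq. (41)
    return best
```

For a symmetric set of moments with zero skew, the search should land near δ_1 = δ_2 = 0, recovering the Gaussian special case (α_1 = α_2 = 0).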

4 Results
This section will present the results of applying both components of the proposed model: Sec. 4.1 will discuss the results of applying the proposed spatially varying Hu moment model, and Sec. 4.2 will discuss the results of applying the candidate instantaneous PSF models. Section 4.3 will then discuss the results of applying the proposed model using a sparse set of PSF observations.

Two datasets from two separate research groups were used as input for these results. The first dataset is a set of 100 simulation results from Brauers et al.3 (Fig. 3). These PSFs were computed using ZEMAX professional raytracing software and an exact design specification of an actual zoom lens. No PSF model was imposed on the simulations, and the results are purely numeric results of tracing many rays through a specified geometric camera model. The second dataset is a collection of 420 PSF observations by Christensen et al.28 and Rangarajan25 (Fig. 4). These PSFs were measured using a point illuminant, a single-element lens, and a digital imaging sensor. Both datasets contain PSFs across the image plane at constant object and focus distances.

4.1 Results of Applying Spatially Varying Hu Moment Model
For each dataset, the normalized central Hu moments of second and third orders were computed numerically with Eq. (15), and the spatially varying model from Eqs. (18)-(24) was fit to the computed Hu moments in a least-squares sense. The observed and model-fit values are compared in Figs. 5-8. Qualitatively, the model appears to be an accurate description of the observations in both datasets. The second- and third-order moments in the Brauers dataset (700 total observations; Figs. 5 and 6) are well represented by the proposed spatially varying model with the 10 parameters S_1, …, S_{10}. Similarly, the second- and third-order moments in the Rangarajan dataset (2940 total observations; Figs. 7 and 8) are well represented by the 10-parameter model.
With both datasets, the observations of the third-order moments are noisier than those of the second-order moments, which follows Liao's claim23 that the numerical computation of moments becomes less robust as the order increases. With both datasets, the model fit is done without informing the model of the object distance, the focus distance, or any specification of the imaging system. To quantify the accuracy of the model fit, the metric M_VarFrac is considered [Eq. (44)]. M_VarFrac represents the fraction of observation variance accounted for by the model, where A = {a_1, a_2, …, a_N} is a set of N observation values and Â = {â_1, â_2, …, â_N} is a set of N corresponding model-fit values. A metric score of 100% would indicate that the model perfectly describes the observations, and a metric score of 0% would indicate that the mean-squared error of the model is of equal magnitude to the variance of the original observations. A negative metric score would indicate that the model would be better replaced by a constant value equal to the mean of the observations; this would be a severe result.

013005-6

Jan–Mar 2014



Vol. 23(1)

Simpkins and Stevenson: Parameterized modeling of spatially varying optical blur

Fig. 5 Observed (top row) versus model fit (bottom row) values of μ'_{2,0}, μ'_{1,1}, and μ'_{0,2} (from left to right) across the image plane from the Brauers dataset. All values are in units of pixels².

Fig. 6 Observed (top row) versus model fit (bottom row) values of μ'_{3,0}, μ'_{2,1}, μ'_{1,2}, and μ'_{0,3} (from left to right) across the image plane from the Brauers dataset. All values are in units of pixels³.

M_{VarFrac}(A, \hat{A}) = 1 - \frac{ \frac{1}{N} \sum_{i=1}^{N} (a_i - \hat{a}_i)^2 }{ \mathrm{Var}[A] }.   (44)
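The metric of Eq. (44) can be sketched directly (the helper name is illustrative); note that the denominator is the population variance of the observations.

```python
import numpy as np

def m_var_frac(a, a_hat):
    """Fraction of observation variance accounted for by the model, Eq. (44):
    1 - MSE(a, a_hat) / Var[a]."""
    a = np.asarray(a, dtype=float)
    a_hat = np.asarray(a_hat, dtype=float)
    mse = np.mean((a - a_hat) ** 2)
    return 1.0 - mse / np.var(a)
```

A perfect fit scores 1 (100%), and predicting the observation mean everywhere scores 0, matching the interpretation given in the text.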

For each of the seven Hu moments fit by the model, this metric is evaluated for both datasets (Table 1). To compensate for the outliers in the set of observed Hu moments (in particular, the comparatively noisy set of computed third-order moments), the metric is also evaluated over the middle 90% of observation errors (i.e., excluding the largest and the smallest 5% of discrepancies between the model-fit and observed values, without re-evaluating the model fit).
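The numerically computed moments used throughout this section can be approximated from a discrete PSF observation as in Eqs. (12)-(14). The sketch below treats each pixel as a point sample, a simplification of Liao's piecewise-constant scheme in Eqs. (15) and (16); the function name is illustrative.

```python
import numpy as np

def normalized_central_moments(psf, max_order=3):
    """Normalized central moments mu'_{p,q} (Eqs. 12-14) of a discrete PSF,
    treating each pixel as a point sample at integer coordinates."""
    psf = np.asarray(psf, dtype=float)
    ys, xs = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    m00 = psf.sum()
    xbar = (xs * psf).sum() / m00          # Eq. (12)
    ybar = (ys * psf).sum() / m00          # Eq. (13)
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:    # second- and third-order moments
                mu[(p, q)] = ((xs - xbar) ** p * (ys - ybar) ** q
                              * psf).sum() / m00   # Eq. (14)
    return mu
```

For a uniform 3 x 3 kernel the coordinate deviations are {-1, 0, 1}, so μ'_{2,0} = 2/3 and all odd moments vanish, which exercises the centering step.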


Fig. 7 Observed (top row) versus model fit (bottom row) values of μ'_{2,0}, μ'_{1,1}, and μ'_{0,2} (from left to right) across the image plane from the Rangarajan dataset. All values are in units of pixels².

Fig. 8 Observed (top row) versus model fit (bottom row) values of μ'_{3,0}, μ'_{2,1}, μ'_{1,2}, and μ'_{0,3} (from left to right) across the image plane from the Rangarajan dataset. All values are in units of pixels³.

These results indicate that the spatially varying model accurately represents the observed Hu moments of both datasets across the image plane.

4.2 Results of Applying the Candidate Instantaneous PSF Models
The two candidate instantaneous PSF models (Gaussian and skew-normal) were evaluated over the datasets using the Hu moments produced by the fits of the spatially varying model from the previous section. Several of the resulting PSF surfaces are shown in Figs. 9 and 10.

With both datasets, the skew-normal model produces superior results. Qualitatively, both models appear to oversmooth the PSF, but the outputs of the models still perform well quantitatively when used in application (Figs. 11 and 12). To quantify the accuracy with which a PSF observation P_{x,y}(i, j) is represented by a model approximation P̂_{x,y}(i, j), a metric M_FB is introduced, which is referred to here as the forward-blur SNR [Eq. (45)]. This metric represents the discrepancy between an image Z blurred with the ground-truth PSF and the same image blurred with the approximate PSF.

Table 1 Fraction of the observation variance accounted for by the proposed spatially varying model.

Dataset                                          μ'2,0 (%)  μ'1,1 (%)  μ'0,2 (%)  μ'3,0 (%)  μ'2,1 (%)  μ'1,2 (%)  μ'0,3 (%)
Brauers dataset                                    89.83      99.43      77.91      92.07      75.27      89.37      92.15
Brauers dataset (middle 90% of model error)        92.16      99.70      80.50      94.99      84.99      92.13      93.60
Rangarajan dataset                                 97.57      98.97      97.72      95.65      60.39      81.69      91.17
Rangarajan dataset (middle 90% of model error)     98.37      99.19      98.25      97.28      75.41      89.06      94.87


M_{FB}(P, \hat{P}) = 10 \log_{10} \left( \frac{ \mathrm{Var}[P * Z] }{ \left\| (P * Z) - (\hat{P} * Z) \right\|_2^2 } \right).   (45)
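A sketch of the forward-blur SNR of Eq. (45); for simplicity it takes the two pre-blurred images P * Z and P̂ * Z as inputs, and the helper name is illustrative.

```python
import numpy as np

def forward_blur_snr(blur_true, blur_approx):
    """Forward-blur SNR of Eq. (45): ratio (in dB) of the variance of the
    ground-truth-blurred image to the squared error of the model-blurred one."""
    blur_true = np.asarray(blur_true, dtype=float)
    blur_approx = np.asarray(blur_approx, dtype=float)
    err = np.sum((blur_true - blur_approx) ** 2)
    return 10.0 * np.log10(np.var(blur_true) / err)
```

The closer the approximate PSF reproduces the ground-truth blur, the smaller the squared-error denominator and the higher the score.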

The results presented in this article used the canonical "Lena" image for Z at a resolution of 512 × 512 pixels. Other images were used in preliminary tests and produced consistently similar results. For both models, the metric MFB was evaluated with the model fit using the raw, numerically computed Hu moments from the PSF observations, and with the model fit using the Hu moments produced by the spatially varying model component. Additionally, the metric was evaluated using the average PSF observation over each dataset, to approximate the case of assuming a spatially invariant PSF.

The statistical distribution of metric scores over the two datasets is shown in Figs. 11 and 12, and the average metric scores over the datasets are compared in Table 2. The average PSF did not perform well in either dataset, underscoring the need for a spatially varying model. The Gaussian model using the model-constrained Hu moments performed nearly as well as it did using the raw, numerically computed Hu moments, despite requiring only a fraction of the degrees of freedom (the model-constrained Gaussian requires only 5, whereas the raw-moment Gaussian requires 150 and 1260 for the Brauers and Rangarajan datasets, respectively).

The most encouraging result is that in both datasets, the skew-normal model fit with the model-constrained Hu moments gave the best performance. This is encouraging for two reasons. First, it demonstrates that as the instantaneous PSF model takes advantage of a larger set of Hu moments, the resulting PSF surface becomes a better approximation (as might be expected). Second, it demonstrates that the spatially varying model for the third-order Hu moments can help mitigate some of the numerical sensitivity in computing those higher-order moments; if this were not the case, the extra degrees of freedom of the raw moments would be expected to yield superior performance, which they do not on either dataset.

4.3 Results of Applying the Model with a Sparse Set of Observations

To test the proposed model's ability to represent the complete spatially varying blur across the image plane based only on a sparse set of observations, an approach similar to cross-validation was used: the total observations in a given dataset were partitioned into "training" and "validation" sets, the spatially varying model was trained using the numerically computed Hu moments from the training set, and the average MFB metric was then computed over the validation set by applying the spatially varying model with a particular choice of instantaneous PSF model. The selection of observations for the training set was random and nonstratified (i.e., the likelihood of choosing two adjacent PSF observations was identical to that of choosing two PSF observations which were not adjacent in the image), and every observation had mutually exclusive membership in either the training or validation set.
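The random, nonstratified partition described above can be sketched as follows; the helper name and seeding are illustrative choices, not the authors' implementation:

```python
import random

def partition_observations(num_obs, num_train, seed=0):
    """Split PSF-observation indices into training and validation sets.
    Nonstratified: every observation is equally likely to be chosen for
    training, regardless of its position in the image plane."""
    rng = random.Random(seed)
    indices = list(range(num_obs))
    rng.shuffle(indices)
    return sorted(indices[:num_train]), sorted(indices[num_train:])
```

Each observation ends up in exactly one of the two sets, so the validation score never reuses training data.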

Fig. 9 A selection of PSF surfaces across the width of the image plane resulting from the Brauers dataset. Top row is the observed PSF, middle row is the Gaussian fit, and bottom row is the skew-normal fit.



Fig. 10 A selection of PSF surfaces across the width of the image plane resulting from the Rangarajan dataset. Top row is the observed PSF, middle row is the Gaussian fit, and bottom row is the skew-normal fit.

Fig. 11 The statistical distribution of MFB scores for the different methods over the 100 PSF observations from the Brauers dataset.

The model was applied in this manner for a range of training-set sizes, from a training set containing only 2 PSF observations (and 14 corresponding Hu-moment values) up to a set containing nearly 100 observations (and 700 Hu-moment values). For each training-set size, multiple random partitions into training and validation sets were used, and Fig. 13 shows both the average performance (solid lines) and the 95% confidence intervals for the performance (dashed lines), based on these results for both datasets and both choices of instantaneous PSF model. These results indicate that for both instantaneous PSF models, the performance levels off once the training set contains 10 or more observations. This is true for both the Brauers dataset (100 total observations) and the Rangarajan dataset (420 total observations).
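The text does not state how the 95% confidence intervals of Fig. 13 were formed; one plausible reading, a normal approximation over the repeated random partitions, can be sketched as:

```python
import math

def mean_and_ci95(scores):
    """Average score plus a normal-approximation 95% confidence interval
    over repeated random training/validation partitions (an assumption:
    the paper does not specify its interval construction)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)
    return mean, mean - half, mean + half
```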

A distinction should be made between this point of diminishing returns and the number of degrees of freedom in the spatially varying model (which is also 10): every PSF observation contributes seven values to the model fit, in the form of the seven different second- and third-order Hu moments, so even a training set of only 2 PSF observations fits the 10-parameter model to a set of 14 different values.

5 Conclusions and Future Work

We conclude that the proposed spatially varying model, motivated from an established theoretical framework in physics, is able to accurately describe the second- and third-order Hu moments of a set of spatially varying PSF observations across the image plane. The model has been validated on the simulation results of Brauers et al. and



Fig. 12 The statistical distribution of MFB scores for the different methods over the 420 PSF observations from the Rangarajan dataset.

Table 2 Average MFB score for each approximation method over each dataset.

Method                                    Brauers dataset (dB)   Rangarajan dataset (dB)
Average PSF                                      21.60                  21.01
Gaussian, raw moments                            25.22                  28.18
Gaussian, model-constrained moments              24.77                  27.91
Skew-normal, raw moments                         27.19                  29.06
Skew-normal, model-constrained moments           28.60                  29.79

the PSF measurements of Rangarajan et al. Furthermore, the Hu moments produced by this model have been demonstrated to result in instantaneous PSF approximations which are comparable or superior in accuracy to the approximations made with the Hu moments calculated directly from observations.

Fig. 13 The average MFB scores as a function of the number of PSF observations used to train the spatially varying model. Dashed lines indicate the 95% confidence intervals, based on the performance over multiple random selections of training and validation sets.

The proposed model is constrained to optical systems which are symmetric about the optical axis (e.g., it would not be applicable to anamorphic lenses), but the model is otherwise general and does not require specification of the optical system design. Although it is specified only up to third-order moments here, the model is easily extended to higher-order moments with Eq. (17).

In future work, we hope to improve on the candidate instantaneous PSF models presented in Sec. 3.3, in order to better represent the PSF surface given the Hu moments from the spatially varying model. We also hope to exploit the connections between the model parameters S1, ..., S10 in Eqs. (25)–(34), and to extend the proposed model to be spatially varying not only across the image plane but also with respect to object depth from the camera.
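As a concrete companion to the fitting procedure detailed in the Appendix (Eqs. (37)–(39), followed by a constrained search over (δ1, δ2) against the third-order moments of Eqs. (50)–(53)), the sketch below matches the second-order moments exactly and grid-searches the third-order error. The grid resolution and function name are illustrative choices, not the authors' implementation:

```python
import numpy as np

C = np.sqrt(2.0 / np.pi)      # sqrt(2/pi); note C**2 == 2/pi
K3 = 2.0 * C**3 - C           # bracket shared by Eqs. (50)-(53)

def fit_skew_normal(m20, m11, m02, m30, m21, m12, m03, n=201):
    """Fit (d1, d2, w, k1, k2) so the model's second-order central moments
    match the observations exactly, minimizing the third-order error."""
    best, best_err = None, np.inf
    for d1 in np.linspace(-0.99, 0.99, n):
        for d2 in np.linspace(-0.99, 0.99, n):
            k1 = np.sqrt((1.0 - C**2 * d1**2) / m20)   # Eq. (37)
            k2 = np.sqrt((1.0 - C**2 * d2**2) / m02)   # Eq. (38)
            w = k1 * k2 * m11 + C**2 * d1 * d2         # Eq. (39)
            # Eq. (42): admissibility constraint on w
            bound = np.sqrt((1.0 - d1**2) * (1.0 - d2**2))
            if not (d1 * d2 - bound < w < d1 * d2 + bound):
                continue
            # sum-squared error against the third-order moments, Eqs. (50)-(53)
            err = ((K3 * d1**3 / k1**3 - m30) ** 2 +
                   (K3 * d1**2 * d2 / (k1**2 * k2) - m21) ** 2 +
                   (K3 * d1 * d2**2 / (k1 * k2**2) - m12) ** 2 +
                   (K3 * d2**3 / k2**3 - m03) ** 2)
            if err < best_err:
                best_err, best = err, (d1, d2, w, k1, k2)
    return best
```

Because the second-order moments are matched by construction at every grid point, only the third-order mismatch drives the search, mirroring the appendix's two-stage procedure.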

Appendix: Fitting a Skew-Normal Model from Second- and Third-Order Hu Moments

The moment-generating function of the bivariate skew-normal distribution is given by Azzalini and Dalla Valle26 as

$$ M(p,q) \;=\; 2\,\exp\!\left[\frac{1}{2}\left(\frac{p^{2}}{\kappa_1^{2}} + \frac{2\omega p q}{\kappa_1 \kappa_2} + \frac{q^{2}}{\kappa_2^{2}}\right)\right] \int_{-\infty}^{\delta_1 p/\kappa_1 + \delta_2 q/\kappa_2} \frac{1}{\sqrt{2\pi}}\, \exp(-\tau^{2}/2)\, d\tau, \qquad (46) $$

for skew-normal parameters δ1, δ2, ω, κ1, and κ2, where δ1 and δ2 must satisfy Eq. (42). This leads to a closed-form solution for the second- and third-order central Hu moments of the skew-normal distribution [Eqs. (47)–(53)]:

$$ \mu'_{2,0} = \frac{1}{\kappa_1^{2}}\left(1 - \frac{2}{\pi}\,\delta_1^{2}\right) \qquad (47) $$

$$ \mu'_{1,1} = \frac{1}{\kappa_1 \kappa_2}\left(\omega - \frac{2}{\pi}\,\delta_1 \delta_2\right) \qquad (48) $$

$$ \mu'_{0,2} = \frac{1}{\kappa_2^{2}}\left(1 - \frac{2}{\pi}\,\delta_2^{2}\right) \qquad (49) $$

$$ \mu'_{3,0} = \left[\,2\left(\sqrt{\tfrac{2}{\pi}}\right)^{3} - \sqrt{\tfrac{2}{\pi}}\,\right] \frac{\delta_1^{3}}{\kappa_1^{3}} \qquad (50) $$

$$ \mu'_{2,1} = \left[\,2\left(\sqrt{\tfrac{2}{\pi}}\right)^{3} - \sqrt{\tfrac{2}{\pi}}\,\right] \frac{\delta_1^{2}\delta_2}{\kappa_1^{2}\kappa_2} \qquad (51) $$

$$ \mu'_{1,2} = \left[\,2\left(\sqrt{\tfrac{2}{\pi}}\right)^{3} - \sqrt{\tfrac{2}{\pi}}\,\right] \frac{\delta_1 \delta_2^{2}}{\kappa_1 \kappa_2^{2}} \qquad (52) $$

$$ \mu'_{0,3} = \left[\,2\left(\sqrt{\tfrac{2}{\pi}}\right)^{3} - \sqrt{\tfrac{2}{\pi}}\,\right] \frac{\delta_2^{3}}{\kappa_2^{3}}. \qquad (53) $$

Because there are only five parameters in the model, there is no guarantee that parameters can be chosen in such a way that the second- and third-order central Hu moments of the model will exactly match the numerically computed Hu moments from the observations. The question then becomes how to choose a set of parameters that is optimal in some sense. Motivated by the claim by Liao and Pawlak23 that the numerical computation of moments is more robust at lower orders, we begin by fitting the skew-normal model in such a way that the second-order moments of the model exactly match the second-order moment observations:

$$ \kappa_1 = \sqrt{\frac{1}{\mu'_{2,0}}\left(1 - \frac{2}{\pi}\,\delta_1^{2}\right)} \qquad (37) $$

$$ \kappa_2 = \sqrt{\frac{1}{\mu'_{0,2}}\left(1 - \frac{2}{\pi}\,\delta_2^{2}\right)} \qquad (38) $$

$$ \omega = \kappa_1 \kappa_2\, \mu'_{1,1} + \frac{2}{\pi}\,\delta_1 \delta_2. \qquad (39) $$

The "optimal" selection of δ1 and δ2 is then chosen as the one which minimizes the sum-squared error between the set of observed third-order moments and the model third-order moments resulting from Eqs. (50)–(53). As specified by Azzalini, δ1 and δ2 are each constrained to [−1, 1] (this conveniently guarantees a real value for κ1 and κ2 in Eqs. (37) and (38)), and there is an additional constraint involving ω [Eq. (42), reprinted below]:

$$ \delta_1 \delta_2 - \sqrt{(1-\delta_1^{2})(1-\delta_2^{2})} \;<\; \omega \;<\; \delta_1 \delta_2 + \sqrt{(1-\delta_1^{2})(1-\delta_2^{2})}. \qquad (42) $$

Substituting Eqs. (37)–(39), this can alternatively be posed as

$$ \left|\, \mu'_{1,1} \sqrt{\frac{1}{\mu'_{2,0}}\left(1 - \frac{2}{\pi}\,\delta_1^{2}\right)} \sqrt{\frac{1}{\mu'_{0,2}}\left(1 - \frac{2}{\pi}\,\delta_2^{2}\right)} + \left(\frac{2}{\pi} - 1\right)\delta_1 \delta_2 \,\right| \;<\; \sqrt{(1-\delta_1^{2})(1-\delta_2^{2})}. \qquad (54) $$

The valid search space for (δ1, δ2) will therefore depend on the observed second-order moments μ′2,0, μ′1,1, and μ′0,2, but is guaranteed to be a subspace of [−1, 1] × [−1, 1].

References

1. J. Simpkins and R. Stevenson, "Robust grid registration for non-blind PSF estimation," Proc. SPIE 8305, 830501 (2012).
2. N. Joshi, R. Szeliski, and D. Kriegman, "PSF estimation using sharp edge prediction," in IEEE Conf. Computer Vision and Pattern Recognition, Anchorage, Alaska (2008).
3. J. Brauers, C. Seiler, and T. Aach, "Direct PSF estimation using a random noise target," Proc. SPIE 7537, 75370B (2010).
4. J. Wang et al., "Third-order aberration fields of pupil decentered optical systems," Opt. Express 20(11), 11652–11658 (2012).
5. K. Thompson, "Description of the third-order optical aberrations of near-circular pupil optical systems without symmetry," J. Opt. Soc. Am. 22(7), 1389–1401 (2005).
6. J. Nagy and D. O'Leary, "Restoring images degraded by spatially variant blur," SIAM J. Sci. Comput. 19(4), 1063–1082 (1998).
7. P. Escande, P. Weiss, and F. Malgouyres, "Image restoration using sparse approximations of spatially varying blur operators in the wavelet domain," arXiv e-prints 1302.6105 (2013).
8. E. Kee et al., "Modeling and removing spatially-varying optical blur," in IEEE Int. Conf. Computational Photography, Anchorage, Alaska (2011).


9. S. Farsiu, M. Elad, and P. Milanfar, "Multi-frame demosaicing and super-resolution of color images," IEEE Trans. Image Process. 15(1), 141–159 (2006).
10. A. Levin et al., "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph. 26(3), 1–9 (2007).
11. M. Born and E. Wolf, Principles of Optics, Cambridge University Press, New York (1999).
12. D. Brown, "Close-range camera calibration," Photogramm. Eng. 37(8), 855–866 (1971).
13. J. Simpkins, "Modeling and estimation of spatially-varying point-spread functions due to lens aberrations and defocus," Master's Thesis, University of Notre Dame (2011).
14. M. Aggarwal and N. Ahuja, "A new imaging model," in IEEE Int. Conf. Computer Vision, Vancouver, BC (2001).
15. B. Tordoff, "Active control of zoom for computer vision," Ph.D. Thesis, University of Oxford (2002).
16. S. Kang, "Automatic removal of chromatic aberration from a single image," in IEEE Conf. Computer Vision and Pattern Recognition, Minneapolis, Minnesota (2007).
17. M. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theor. 8(1), 179–187 (1962).
18. F. Alt, "Digital pattern recognition by moments," J. Assoc. Comput. Mach. 9(2), 240–258 (1962).
19. F. Smith and M. Wright, "Automatic ship photo interpretation by the method of moments," IEEE Trans. Comput. C-20(9), 1089–1095 (1971).
20. J. Flusser and T. Suk, "Pattern recognition by affine moment invariants," Pattern Recogn. 26(1), 167–174 (1993).
21. G. Cash and M. Hatamian, "Optical character recognition by the method of moments," Comput. Vis. Graph. Image Process. 39(3), 291–310 (1987).


22. R. Wong and E. Hall, "Scene matching with invariant moments," Comput. Graph. Image Process. 8(1), 16–24 (1978).
23. S. Liao and M. Pawlak, "On image analysis by moments," IEEE Trans. Pattern Anal. Mach. Intell. 18(3), 254–266 (1996).
24. S. Farsiu et al., "Fast and robust multi-frame super resolution," IEEE Trans. Image Process. 13(10), 1327–1344 (2004).
25. P. Rangarajan, Private communication (April 2013).
26. A. Azzalini and A. Dalla Valle, "The multivariate skew-normal distribution," Biometrika 83(4), 715–726 (1996).
27. P. Hansen and T. Jensen, "Noise propagation in regularizing iterations for image deblurring," Electron. Trans. Numer. Anal. 31(1) (2008).
28. M. Christensen et al., "Structured light optical super-resolution: encoding for limited optical bandwidth," in Applied Industrial Optics, Monterey, California (2012).

Jonathan Simpkins received a BS degree in engineering science from Trinity University, San Antonio, TX, in 2009, and an MS degree in electrical engineering from the University of Notre Dame, Notre Dame, IN, in 2012. He is currently a PhD candidate at the University of Notre Dame.

Robert L. Stevenson received the BEE degree summa cum laude from the University of Delaware, Newark, DE, in 1986, and the PhD degree in electrical engineering from Purdue University, West Lafayette, IN, in 1990. He joined the faculty of the Department of Electrical Engineering at the University of Notre Dame, Notre Dame, IN, in 1990, where he is currently a professor and associate chair.

