Direct Solution of Orientation-from-Color Problem using a Modification of Pentland's Light Source Direction Estimator

Mark S. Drew
School of Computing Science, Simon Fraser University
Vancouver, B.C., Canada V5A 1S6
(604) 291-4682, fax (604) 291-3045
e-mail [email protected]

Running head: Orientation-from-Color from Light Source Estimator


Abstract

If a uniformly colored Lambertian surface is illuminated by a collection of point or extended light sources or interreflections, with unknown directions and strengths, such that illumination varies spectrally with orientation from the surface, then surface normals can be recovered up to an orthogonal transformation using a robust regression on points in color space. Recently, it was shown that the unknown orthogonal transformation can be recovered by applying an integrability condition on the recovered normals. However, the integrability method results in an unavoidable convex/concave ambiguity additional to the usual one. Here a much simpler method is set out that avoids this ambiguity. Using Pentland's or a similar tilt estimator for each of the RGB channels in turn, in effect treating the combination of lights as three single sources, the robust color space regression leads to three constraints on the slants of the three sources. The result is accurate recovery of light source directions and hence of surface normals. A self-check mechanism for evaluating the algorithm's performance on real images is introduced.


1 Introduction

The orientation-from-color problem consists of recovering surface normals for a colored surface illuminated by light sources with unknown color, strength, and position which combine to produce a lighting environment that varies spectrally with orientation from the surface. As such, this problem forms an extension to the Photometric Stereo paradigm [1] for the case of unknown lights [2]. Because the lighting environment must vary with direction from the surface, the problem encompasses situations in which there are many point and extended lighting sources as well as interreflected light.

1.1 Linear color shading

For a single Lambertian surface illuminated by a single distant light and under conditions of distant viewing, the grayscale image intensity is

I(x) = α a^T n(x) ,   (1)

where α includes both the illuminant strength and the albedo, I is the intensity produced by lighting from (normalized) direction a, and n is the (normalized) surface normal, with position parameterized by two-dimensional retinal coordinates x. Here, ^T means transpose, so a^T n is the dot-product between the surface normal and the light direction. For a single colored light the RGB camera responses form a 3-vector ρ(x) in color space, which arises from the filtering effect of the three camera system sensor response functions Q(λ) on the spectral power distribution formed from the product of illumination E(λ) and surface spectral reflectance S(λ). Since a Lambertian model is assumed, the shading is still just a^T n for all three color channels and the color vector is

ρ = α (a^T n) ∫ E(λ) S(λ) Q(λ) dλ ,   (2)

integrating over the visible spectrum. The dependence of the color vector on x through n(x) is implied here and below.

For a collection of L discrete sources we must replace (2) by a sum:

ρ = Σ_{i=1}^{L} α_i (a_i^T n) ∫ E_i(λ) S(λ) Q(λ) dλ   (3)

if a surface point sees all lights. The color that each illuminant produces when reflected from the surface is

b_i ≡ α_i ∫ E_i(λ) S(λ) Q(λ) dλ ,   (4)

so that eqn.(3) can be rewritten [3]

ρ = B A n ,   (5)

where we stack all the directions a_i row-wise into an L × 3 matrix A and group the strength-direction color space vectors b_i column-wise into a 3 × L matrix B:

A = (a_1, a_2, ..., a_L)^T ,   B = (b_1, b_2, ..., b_L) .   (6)

Let F ≡ B A, so that (5) becomes simply

ρ = F n .   (7)

Thus a linear model relates color to surface normal. The model breaks down when not every illuminant can be seen from a particular surface patch, so that F is not the same there as at a patch illuminated by all lights. This is the case when the surface is in shadow or is self-shadowed, is beyond the light's horizon (its terminator), or has specularities or non-uniform color.
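To make the linear model concrete, the following minimal sketch (Python/NumPy) builds F = B A from eqn.(6) and predicts the color ρ = F n of eqn.(7) for one normal. It uses the light directions and color-strength matrix that will be quoted for the synthetic example of §4; the example normal and the helper name `direction` are illustrative only, and the sketch assumes the patch sees all lights.

```python
import numpy as np

def direction(slant_deg, tilt_deg):
    """Unit light direction from slant and tilt angles (degrees)."""
    s, t = np.deg2rad(slant_deg), np.deg2rad(tilt_deg)
    return np.array([np.sin(s) * np.cos(t), np.sin(s) * np.sin(t), np.cos(s)])

# Light directions and color-strength matrix quoted for the synthetic example of Sec. 4.
A = np.vstack([direction(60, 20), direction(40, 160), direction(50, -110)])  # L x 3, rows a_i
B = np.array([[59.0,  7.0, 32.0],      # 3 x L, columns are the b_i of eqn.(6)
              [13.0, 51.0, 43.0],
              [55.0, 49.0, 50.0]])

F = B @ A                              # eqns.(5)-(7): rho = B A n = F n

n = np.array([0.2, -0.1, 1.0])
n = n / np.linalg.norm(n)              # an arbitrary unit surface normal
print(F @ n)                           # RGB response predicted by the linear model
```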


1.2 Orientation-from-color

In [2] Woodham et al. studied the photometric stereo problem in the case of images formed under lights with unknown position and strength. In that work, the problem was to recover the surface normal n from three images taken under three lights turned on in turn to form three separate grayscale images of a single Lambertian surface. Woodham et al. showed that the photometric stereo problem could be solved for unknown lights using a least-squares solution to the equation equivalent to (7) for the triple of grayscale values. However, in that case there is no matrix B. In [3] Drew applied the statistical analysis of [2] to single RGB images formed according to the linear model of eqn.(7). Since not every pixel obeys eqn.(7), outlier detection is needed to determine those pixels illuminated by a subset of lights or by a combination of lights forming a different F than that recovered by a regression [3]. In [4], Drew replaced the least-squares (LS) estimator with a robust Least Median of Squares (LMS) regression [5, 6]. In [4], it was shown that the orientation n could be recovered even in the presence of specularities in the image. In that case specularities show up as another source of outliers in the robust regression, along with surface patches in shadow.¹

¹ Clearly, however, specularities cannot be construed as outliers if the image is preponderantly specular. E.g., over 50% of the pixels in Fig.2 of [7] are specular, and the method of [4] breaks down. In order for that robust method to work, 50% + 1 of the pixels must be fairly Lambertian.

Petrov [8, 9, 10, 11] first considered the linear model (7) for a collection of chromatic lights impinging on a Lambertian surface, using the well-known fact that Lambertian surfaces effectively sum up an extended light source into an equivalent point source (see [12], p. 237). In [13] Petrov and Kontsevich provide a complete classification of surface patches based on the rank of the illuminating lights and on the rank of the illuminated surface. In [14], Kontsevich et al. put forward a segmentation scheme based on Petrov's model. In this scheme, local regions are assumed to obey the linear model and model coefficients are found by solving the set of six linear equations resulting from ρ values at six contiguous pixels. The region determined this way is grown outward until a tolerance is reached for the current model parameters. No regression is used. Once the model coefficient matrix F is known, surface orientation is recoverable from color. However, the recovery is successful only up to an unknown overall rotation (this was also the case in [2]). This comes about because the regression actually solves for the six components of the matrix F F^T.

1.3 Integrability

Kontsevich et al. provided a solution to the problem of recovery of orientation up to an overall unknown rotation: once surface normals are recovered up to an arbitrary rotation, that rotation may be calculated by applying an integrability condition [14]. Recovery of the rotation is based on the fact that the integral of an integrability condition is minimized when the putative gradient of depth (derived from the recovered normals) is derived from normals rotated so as to align the meaning of partial derivatives with the coordinate system tied to the camera axes. However, the calculation of the correct rotation, based on integrability, is complex. In [15], Drew and Kontsevich set out a smoothness condition that allows one to quickly solve for the needed rotation. Nonetheless, as shown below, any scheme based on a minimization integral has an inherent weakness: since reflections are allowed as well as rotations, there is no way of ruling out a convex/concave inversion in the recovered surface.

Below, in §3, a direct method for recovery of matrix F is set out. This method avoids the possibility of a surface inversion and is also a good deal simpler than the method based on minimizing integrability. In §4 typical images are generated by shading a radar range image, and in §5 the method is applied to a real image. A new indicator assessing the performance of the algorithm on real images, for which the correct orientation vectors are unknown, is introduced. Section 6 concludes this study with some observations.

2 Recovering Orientation

For eqn.(7) to be solvable for n, the matrix F must be invertible and hence neither matrix B nor matrix A can have rank less than 3. Therefore all the light source directions in A must not be coplanar in space; and similarly the dimension of the set of reflected colors B must be 3 [3, 4].

Since eqn.(7) involves a 3 × 3 matrix multiplication, one can only expect to recover the matrix F and the vectors n up to an overall orthogonal transformation R, without further assumptions, since such a transformation could be inserted after F and the inverse R^T before n. To determine R, additional knowledge must be injected into the model. It is important to note that any element of the group O(3) can be used for R, including the reflections, and not just the rotation subgroup SO(3).

2.1 Color space quadratic form

Denoting the inverse of F by G, we have

n = G ρ .   (8)

Since n is unit length, ρ is constrained to lie on an ellipsoid centered on the origin in color space:

n^T n = ρ^T G^T G ρ ≡ ρ^T C ρ = 1 .   (9)

Since C is G^T G, it is 3 × 3 and symmetric positive definite with 6 independent elements. If we can find C we have determined the 9 elements of F up to the 3 degrees of freedom corresponding to an unknown orthogonal transformation. The quadratic form (9) can be written

c11 ρ1² + c22 ρ2² + c33 ρ3² + 2 c12 ρ1 ρ2 + 2 c13 ρ1 ρ3 + 2 c23 ρ2 ρ3 = 1   (10)

in terms of the values of ρ = (ρ1, ρ2, ρ3)^T measured for each pixel. Since we have N such equations, one for each pixel, a regression can be used to solve for the matrix entries c_ij. We collect observations into an N × 6 matrix M: for each pixel, let M have columns ρ1², ρ2², ρ3², 2ρ1ρ2, 2ρ1ρ3, 2ρ2ρ3. Denote by z the best approximation to the matrix elements c_ij, where z is a 6-component object z = (c11, c22, c33, c12, c13, c23)^T. Then eqn.(10) reads

M z ≃ 1 ,   (11)

with M of size N × 6, z of size 6 × 1, and 1 the N × 1 vector of ones.

We find the best hyperplane z by using the robust LMS regression. Once one has z and hence C, the matrix G is determined up to an orthogonal transformation. Any root of C will do for G, therefore, and here we use the eigenvector decomposition C = U Λ U^T, with Λ diagonal, to form G = √Λ U^T.
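A minimal sketch of this ellipsoid fit follows. For brevity it uses ordinary least squares where the paper uses the robust LMS regression, it assumes the RGB triples are already collected in an N × 3 array, and it assumes the fitted C is positive definite (true for valid, rank-3 data). The function name is illustrative.

```python
import numpy as np

def fit_ellipsoid(rho):
    """Fit the quadratic form of eqn.(10) to RGB triples rho (N x 3 array).

    Ordinary least squares is used here for brevity; the paper uses the
    robust Least Median of Squares regression instead.
    """
    r1, r2, r3 = rho[:, 0], rho[:, 1], rho[:, 2]
    # Design matrix M of eqn.(11): one row per pixel, six columns.
    M = np.column_stack([r1**2, r2**2, r3**2,
                         2 * r1 * r2, 2 * r1 * r3, 2 * r2 * r3])
    z, *_ = np.linalg.lstsq(M, np.ones(len(rho)), rcond=None)
    c11, c22, c33, c12, c13, c23 = z
    C = np.array([[c11, c12, c13],
                  [c12, c22, c23],
                  [c13, c23, c33]])
    # Any root of C will do for G; use the eigenvector decomposition
    # C = U Lambda U^T and take G = sqrt(Lambda) U^T, as in the text.
    lam, U = np.linalg.eigh(C)           # assumes C is positive definite
    G = np.diag(np.sqrt(lam)) @ U.T
    return C, G
```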

2.2 Outlier detection

The LMS regression for eqn.(11) produces a robust dispersion estimate

s0 = f ( 1 + 5/(N − 6) ) √( med_i r_i² ) ,   (12)

where r_i is the residual for the ith case, r = ρ^T C ρ − 1, and, in terms of the cumulative standard normal distribution Φ, f = 1/Φ⁻¹(0.75) ≃ 1.4826. Then an RGB point is accepted as corresponding to the model if

|r_i/s0| ≤ 2.5; else the point is an outlier and is rejected. Finally, a Reweighted Least Squares regression is carried out, using only the accepted points, allowing confidence limits to be established on coefficients using standard techniques.

The method works best when most points see most lights. This situation obtains when lights are not close to the horizon and the surface is fairly flat, so that there is less chance of a surface normal being perpendicular to a light. On the other hand, matrix A must be rank 3; therefore widely separated lights, not all close to the z-axis, work best.

As shown in [16], however, a straightforward identification of outliers using the LMS method is insufficient for correctly identifying all pixels that must be rejected. Instead, we must make further recourse to the physical model underlying the problem here, for the orientation-from-color model does not guarantee that orientation vectors recovered by the method are in fact normalized. Therefore one should further reject any normal vectors that have lengths too far from unity. To do so, one should carry out another LMS robust regression, this time identifying the location of the maximum-likelihood length of recovered orientation vectors (since these lengths may not be exactly 1). Here, again, the LMS procedure delivers outliers for this second regression "on a silver platter" [6]. The result is that another 5% of the image pixels need to be eliminated as outliers. These additional pixels mostly occur around the occluding boundary of the figure; they correspond to normals that are slightly too small in length to be accepted by the second regression. The reason they are not excluded by the first LMS regression is that the first regression makes the assumption that normal lengths are exactly unity, whereas the second regression estimates the center of the distribution of normal lengths as they are actually recovered, which is slightly larger than unity.
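The acceptance test can be sketched as follows; as a simplification, the median in eqn.(12) is taken over residuals computed from the fitted C rather than from the LMS search itself, and the function name is illustrative.

```python
import numpy as np

def reject_outliers(rho, C, cutoff=2.5):
    """Flag RGB triples that fit the ellipsoid rho^T C rho = 1.

    Implements the dispersion estimate of eqn.(12) and the 2.5*s0
    acceptance test of Sec. 2.2; rho is an N x 3 array of color triples.
    """
    N = len(rho)
    r = np.einsum('ij,jk,ik->i', rho, C, rho) - 1.0   # residuals rho^T C rho - 1
    f = 1.4826                                        # 1 / Phi^{-1}(0.75)
    s0 = f * (1.0 + 5.0 / (N - 6)) * np.sqrt(np.median(r**2))
    return np.abs(r / s0) <= cutoff                   # True for accepted points
```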

2.3 Orthogonal matrix: integrability condition

So far we have an estimate Ĝ of G = F⁻¹. However, because of the unknown rotation (or, more generally, orthogonal transformation) we recover rotated versions of the correct surface normals n. Call these rotated estimates N. Then if we actually knew F we could define the matrix R via

N = Ĝ ρ = Ĝ F n ≡ R n ,   (13)

and our estimate of the unrotated normal would be

n = R⁻¹ N = R^T N .   (14)

It is possible to recover the orthogonal matrix R by considering an integrability condition on the normals [14]. The two-dimensional partial derivatives p(x, y) and q(x, y) are related to n by p = −n1/n3, q = −n2/n3. In order for these derivatives to be derivable from the same depth z, they must satisfy p_y − q_x ≃ 0. In [14], rotations are chosen using a type of simulated annealing algorithm and the matrix R is selected that satisfies

min_{θ1, θ2, θ3} ∫∫ [ −(n1/n3)_y + (n2/n3)_x ]² dx dy ,   R = R_{θ1} R_{θ2} R_{θ3} ,   (15)

symbolizing a sum over samples by an integral and bearing in mind that the correct n is given in terms of the recovered N by eqn.(14). Here, subscripts x and y denote partial differentiation with respect to x and y. Finding the matrix R effectively aligns the object axes with the camera axes and thus determines the pose of the object.
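The integrand of eqn.(15) is easy to evaluate numerically; the sketch below scores a candidate orthogonal matrix R using finite differences (np.gradient), with the recovered normals stored as an H × W × 3 array and a boolean inlier mask. The function name and the small guard against division by zero are assumptions of this sketch.

```python
import numpy as np

def integrability_cost(N_rot, R, mask):
    """Sum of squared (p_y - q_x) residuals of eqn.(15).

    N_rot : H x W x 3 array of normals recovered up to an orthogonal transformation.
    R     : candidate 3 x 3 orthogonal matrix; n = R^T N per pixel (eqn.(14)).
    mask  : H x W boolean array of inlier pixels.
    """
    n = N_rot @ R                                # row-vector form of n = R^T N
    n3 = np.where(np.abs(n[..., 2]) > 1e-6, n[..., 2], 1e-6)  # avoid divide-by-zero
    p = -n[..., 0] / n3                          # p = -n1/n3
    q = -n[..., 1] / n3                          # q = -n2/n3
    py = np.gradient(p, axis=0)                  # finite-difference d/dy (rows)
    qx = np.gradient(q, axis=1)                  # finite-difference d/dx (columns)
    return np.sum(((py - qx) ** 2)[mask])
```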

Instead of minimizing (15) by random assignments of the orthogonal matrix, in [15] we find a closed-form solution. There we show how to replace (15) with the following smoothness condition, which may be used to form Euler equations that can be solved for R. Suppose we denote the components of the normal by l = n1, m = n2, o = n3. Then we minimize the smoothness criterion

min ∫∫ ( ||n_x||² + ||n_y||² ) dx dy = ∫∫ [ (l_x)² + (l_y)² + (m_x)² + (m_y)² + (o_x)² + (o_y)² ] dx dy .   (16)

We also need to append a Lagrange multiplier term to this optimization to ensure an orthogonal R. Euler equations for (16) yield an eigenvector equation for the rows r_i of the orthogonal matrix R such that n3 = r_3 · N is maximized. Once a best r_3 is established, the other vectors in the rotation matrix, r_1 and r_2, lie in the plane perpendicular to r_3. To find them, it is straightforward to substitute r_3 into the full p_y − q_x minimization (15), since only a single angle in the r_1, r_2 plane remains to be found [15].

2.4 Concave/convex ambiguity

The minimization (15) is over all orthogonal transformations taking the normals N, recovered using the LMS method, into estimates of correctly aligned normals n = R^T N. We wish to find an orthogonal matrix R with rows r_i such that the correct normal n = (l, m, o) is given in terms of the recovered one N by l = r_1 · N, m = r_2 · N, o = r_3 · N. However, if we simultaneously reverse the sign of n1 and n2 in eqn.(15), no effect is produced. Since such a sign reversal would come about by including two reflections in R, the reversal is undetectable by the method of minimizing integrability. The effect of a sign change in n1 and n2 is to reverse the convex/concave property of a surface patch. In §3 we show that by incorporating an illuminant direction algorithm this ambiguity can be resolved.

3 Direct Method For Recovery of Color-Orientation Matrix

3.1 Relationship to illuminant direction estimator

The color-orientation matrix F itself, not F F^T, is actually the quantity that needs to be recovered, and in this study we set out a proof-of-concept for a method that obtains F directly, with no auxiliary rotation step. So far, the LMS regression has produced robust measures that greatly constrain this matrix. For, rewriting eqn.(7) explicitly in terms of the rows f_i of F, we have

ρ = ( f_1 ; f_2 ; f_3 ) n   (17)

(with row vectors f_i). The three RGB image channels encode the shading field for the three color-strength-direction vectors f_i; it is well known that for a Lambertian surface multiple lights can be replaced by a composite light [17, 12], and here we have one "light" for each channel. Petrov first described this linear shading model relating RGB to surface normal.

From eqn.(9) defining the matrix C, once C is determined we also know the norms (strengths) of and the dot-products (angles) between all three f_i. For we have

C⁻¹ = (G^T G)⁻¹ = F F^T ,   (18)

with matrix elements (C⁻¹)_ij = f_i f_j^T. I.e., the diagonal elements give the norms of the vectors f_i and the off-diagonal elements give angles between them (since we know their norms). Since we can simply divide the red, green, and blue images by the respective norms of the f_i, and thus produce three shading images with values in [0..1], the problem of finding matrix F reduces to that of finding the illuminant direction from a grayscale image.
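In code, extracting the strengths and the normalized inner products from C via eqn.(18) is brief (a sketch, assuming C is the symmetric positive definite matrix found by the regression; the function name is illustrative):

```python
import numpy as np

def strengths_and_angles(C):
    """Recover ||f_i|| and h_ij = f~_i . f~_j from C, using eqn.(18)."""
    FFt = np.linalg.inv(C)               # C^{-1} = F F^T
    norms = np.sqrt(np.diag(FFt))        # ||f_i|| from the diagonal
    h = FFt / np.outer(norms, norms)     # normalized dot-products; h_ii = 1
    return norms, h
```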


Suppose the illuminant direction is specified by a unit vector

a = (sin σ cos τ, sin σ sin τ, cos σ)   (19)

with slant σ measured from the camera z-axis and tilt τ arbitrarily measured from the camera x-axis. Now, the vectors f_i can be normalized by forming f~_i = f_i/||f_i||, since we know ||f_i|| from C. Vectors f~_i are not actually "illuminants", but the same analysis that pertains to finding the illuminant direction can be applied to them.

Several methods for carrying out this task have been proposed since the pioneering effort of Pentland [18]. Of particular note is the method of Zheng and Chellappa [19], which takes into account the part of the surface in shadow, and the method of Chojnacki et al. [20], which uses a shadow-free circular subregion of the image. However, in a study by Gibbins et al. [21] several illuminant-finder methods are compared. While methods for recovering the tilt, including the original method of Pentland, are found to be quite useful, an ominous note is sounded in regard to finding the slant for non-symmetric surfaces (emphasis added):

  Here, the Gaussian and Crater [Gaussian with a large concavity] images were used, and all of the methods failed to produce remotely tolerable results (consequently no figures are presented).

Below, in §4, it is shown that for real or realistic images this warning is indeed borne out. However, so far not all the information in matrix C, found by a robust estimator, has been put to use. We have used the vector norms ||f_i|| but not the inner products f_i f_j^T.

3.2 Constrained slant estimator

It would appear [21] that several methods produce a good estimate of tilt in a grayscale image, even for complex surfaces. Therefore we may adopt one of the methods mentioned above to find the "illuminant" tilt for vector f~_i separately in each of the three channels. Below, we use Pentland's method. Pentland suggested making use of the gradient I_x, I_y of a grayscale image I by forming the directional derivative I_s in the x, y-plane direction s, for several directions s. In §4 the axes directions and the 45° directions are used. Forming averages of (finite-difference approximations to) directional derivative values, one approximates expectation values that should obey

⟨I_s⟩ = s_1 ⟨I_x⟩ + s_2 ⟨I_y⟩ .   (20)

Then the above equation is treated as a regression equation for the expectations of I_x and I_y. Finally, the tilt is estimated as

τ = arctan( ⟨I_y⟩ / ⟨I_x⟩ ) .   (21)

Pentland's slant estimator uses the above values to form

σ = arccos( [ 1 − ( ⟨I_x⟩² + ⟨I_y⟩² ) / K_s² ]^{1/2} ) ,   K_s = [ ⟨I_s²⟩ − ⟨I_s⟩² ]^{1/2} ,   (22)

where K_s is assumed to be independent of s. The last estimator, eqn.(22), has been criticized by Chojnacki et al. [20], but these authors have further established the validity of eqn.(21). The slant estimator of Lee and Rosenfeld is also derived from a statistical analysis [22], and both their method and Pentland's are based on the idea that the image has an approximately isotropic distribution of surface normals or gradient of surface normals, as for a spherical surface. The Lee-Rosenfeld tilt estimator is equivalent to Pentland's when only the axes are used for directions s. Their slant estimator is derived from expectations not involving derivatives:

⟨I⟩ = [ 4α / (3π (1 + cos σ)) ] [ (π − σ) cos σ + sin σ ] ,   ⟨I²⟩ = (α²/4) (1 + cos σ) ,   (23)

where α plays the role of the strength factor of eqn.(1).

All the above expectation values are taken over non-zero pixels. Since these and other slant estimators are known to be inaccurate, for the present orientation-from-color problem we would like to use the additional information contained in matrix C. And in fact, since we know the inner products f~_i^T f~_j, the fairly accurate estimates of tilt derived from Pentland's or another method of estimation can be used to provide three constraints on the slants for the vectors f~_i.
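A minimal sketch of the tilt estimate of eqn.(21) for one normalized channel follows; np.gradient supplies the finite-difference derivatives, only inlier pixels are averaged, and arctan2 encodes the sign convention (convexity assumption) discussed below. The function name is illustrative.

```python
import numpy as np

def pentland_tilt(I, mask):
    """Tilt estimate tau = arctan(<I_y>/<I_x>) of eqn.(21).

    I    : 2-D image of one (normalized) color channel.
    mask : boolean array of inlier pixels over which averages are taken.
    """
    Iy, Ix = np.gradient(I.astype(float))    # derivatives along rows (y) and columns (x)
    mIx = np.mean(Ix[mask])
    mIy = np.mean(Iy[mask])
    return np.arctan2(mIy, mIx)              # quadrant fixed by the signs of <I_x>, <I_y>
```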

Suppose for definiteness we have used Pentland's estimator (21) to determine the tilts τ_i for the vectors f~_i for each of the three channels. Now from C we also know f~_i^T f~_j ≡ h_ij. Therefore we have an equation relating σ_i to σ_j, because if f~_i = (sin σ_i cos τ_i, sin σ_i sin τ_i, cos σ_i) then

f~_i^T f~_j = k_ij sin σ_i sin σ_j + cos σ_i cos σ_j = h_ij ,   1 ≤ i < j ≤ 3 ,   (24)

with k_ij ≡ cos τ_i cos τ_j + sin τ_i sin τ_j. There are three such equations relating the three unknown slants σ_1, σ_2, σ_3 to the known values k_ij, h_ij. Of course, while the robust estimate of C should provide a reasonable estimate of the three values h_ij, this constrained slant estimator relies on a tilt estimator producing accurate values for the three tilts. We note below that fairly accurate estimates of the τ_i are indeed produced by Pentland's method. The present method effectively adopts the assumptions of whatever tilt estimation algorithm is employed.

Incorporation of a light source direction finder bypasses the extra concave/convex ambiguity of §2.4, and the ambiguity that still remains is resolved by assuming a convex surface: in eq.(21) it is assumed that a negative ⟨I_x⟩ corresponds to a negative cos τ and that a negative ⟨I_y⟩ corresponds to a negative sin τ; for a concave surface this assignment would be reversed. Naturally, the original `crater illusion' [17] remains, as it does for human vision, but now the algorithm produces a convex surface when presented with an image of one, and not a concave surface as the integrability method alone may produce.

3.3 Solution of constrained slant estimator

To solve eqn.(24) for σ_2 in terms of σ_1, we use the half-angle substitutions sin σ_2 = 2 tan(σ_2/2)/(1 + tan²(σ_2/2)) and cos σ_2 = (1 − tan²(σ_2/2))/(1 + tan²(σ_2/2)). The resulting quadratic equation has solution

tan(σ_2/2) = [ k_12 sin σ_1 + ε_1 ( k_12² sin² σ_1 + cos² σ_1 − h_12² )^{1/2} ] / ( cos σ_1 + h_12 ) ,   (25a)

where ε_1 = ±1. Similarly, there are two more equations, the first giving σ_3 in terms of σ_2, and the last giving σ_1 in terms of σ_3:

tan(σ_3/2) = [ k_23 sin σ_2 + ε_2 ( k_23² sin² σ_2 + cos² σ_2 − h_23² )^{1/2} ] / ( cos σ_2 + h_23 ) ,   (25b)

tan(σ_1/2) = [ k_31 sin σ_3 + ε_3 ( k_31² sin² σ_3 + cos² σ_3 − h_31² )^{1/2} ] / ( cos σ_3 + h_31 ) ,   (25c)

with ε_2, ε_3 = ±1. All three of eqns.(25) must be solved simultaneously for the set {σ_1, σ_2, σ_3}.
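Each of these roots is straightforward to evaluate. The sketch below implements the step of eqn.(25a), giving σ_j from σ_i for a chosen sign ε; it returns None when the square-root argument is negative (no real solution) or when the resulting slant leaves [0, π/2]. The function name is illustrative.

```python
import numpy as np

def next_slant(sigma_i, tau_i, tau_j, h_ij, eps=+1.0):
    """Solve eqn.(24) for sigma_j given sigma_i, via the root of eqn.(25a)."""
    k = np.cos(tau_i) * np.cos(tau_j) + np.sin(tau_i) * np.sin(tau_j)   # k_ij
    disc = k**2 * np.sin(sigma_i)**2 + np.cos(sigma_i)**2 - h_ij**2
    if disc < 0:
        return None                                   # no real solution
    t = (k * np.sin(sigma_i) + eps * np.sqrt(disc)) / (np.cos(sigma_i) + h_ij)
    sigma_j = 2.0 * np.arctan(t)
    return sigma_j if 0.0 <= sigma_j <= np.pi / 2 else None
```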

To understand eqn.(24), consider Fig.1. We know the three tilts τ_i: therefore the three vectors f~_i must each lie on a known plane through the z-axis cutting the x, y-plane at tilt τ_i. Fig.1(a) shows three vectors f~_1, f~_2, f~_3 with (σ, τ) = (40°, 20°), (30°, −120°), (5°, 0°), respectively. These vectors are shown dashed. If tilt were known but slant were unknown, we would nonetheless know that each vector must lie on the plane shown in Fig.1(b) by the dashed quarter great circle from the z-axis to the dashed τ_i tilt line on the x, y-plane. (Vector f~_3 has tilt 0, so the line on the x, y-plane for its tilt is just the x-axis.)

As well, if the angle between unit vectors f~_1 and f~_2 is known, from h_12, then we also have the information that f~_2 must lie somewhere on the dotted circle centered on f~_1 and at angle arccos(h_12) to it, and simultaneously on its own dashed great circle at tilt τ_2. Such small circles showing fixed angles around each vector are displayed in Fig.1(c). The angles between the vectors in Fig.1 are: 1,2: 65.3°; 2,3: 32.8°; 3,1: 35.3°. The three equations (24), then, correspond to the problem of positioning unit vectors on a sphere, each along the great circle corresponding to its tilt, while adjusting all three vectors simultaneously to have the correct angle between each pair of vectors.

Each of eqns.(25) represents two solutions of the above geometric problem: the arccos(h_12) circle around f~_1 may cut the tilt-plane great circle for vector f~_2 in two places. Thus there may in general be as many as eight possible combinations of solution sets {σ_1, σ_2, σ_3} to consider. Of course, if the values of k_ij and h_ij are inaccurate, there may be no solutions to eqns.(25). In that case one could fall back on other slant estimators, or else minimize (15) subject to having knowledge of the tilts.

Each σ_i is also constrained to lie in the interval [0..π/2] because of the meaning of f_i: each such vector is a weighted sum of the directions a_j, j = 1..L, with weights b_ji (i.e., the i-th color component of the vector b for light j). The coefficients b_ji are all positive numbers (since they are colors) and cos σ for each illuminant direction a_j is positive. Therefore cos σ_i for the composite "light" f_i is also positive and σ_i lies in [0..π/2]. However, there could still be multiple solutions for eqns.(25). Fortunately, in the orientation-from-color problem this does not present a problem, because the correct solution can be identified as that yielding the smallest value of the integrability integral, eqn.(15).

A quick method for finding solutions for eqns.(25) is to turn the set of equations into a fixed-point problem of the form f(x) = x. Starting with eqn.(25a), any initial value for σ_1 results in a guess for σ_2, then for σ_3 via eqn.(25b), and finally in an iterated value for σ_1 from eqn.(25c). Thus we have a function of σ_1 that must equal the initial value σ_1 itself for a correct solution. The well-known Contraction Mapping Theorem [23] is useful for establishing existence and uniqueness for a fixed-point problem, but only for monotonic functions; it does not apply here. Fig.2 shows the composite function for σ_1 as a function of the initial value σ_1 for the set of vectors illustrated in Fig.1. Here, the eight possible cases corresponding to ε_1, ε_2, ε_3 = ±1 are numbered such that case 1 corresponds to ε_1, ε_2, ε_3 = +1, +1, +1 and case 8 to ε_1, ε_2, ε_3 = −1, −1, −1. This figure shows that it is indeed possible to have multiple solutions for eqns.(25): points crossed by the f(σ) = σ line are solutions. In this figure, the values of k_ij and h_ij used are the correct ones for the vectors f_i. The correct value of σ_1 is shown by the point marked S: the method finds the correct value as one of its solutions, i.e., slants of 40°, 30°, 5°. The other solution has slants 64.6°, 0.9°, 32.3°. If we calculate normals n using the incorrect solution for F and eqn.(8), and form partial derivatives p(x, y) and q(x, y), then the integrability integral in (15) is greater than if we had used the correct solution; integrability establishes which solution is the correct one. Other initial choices for the f_i than those in Fig.1 change the curves in Fig.2 considerably: sometimes there are two solutions, and sometimes only one. Just because the f(σ) = σ line crosses a curve does not necessarily mean that we have a useful solution: the constraint that all slants must lie in [0..π/2] can rule out a solution. Note that the curves in Fig.2 do not have values for all values of σ_1 in [0..π/2]; this is because, for values of σ_1 not plotted, the argument of one of the square roots in eqns.(25) becomes negative. To derive a solution for the entire set {σ_1, σ_2, σ_3} from a solution for σ_1, we substitute the solution for σ_1 into eqns.(25a, 25c), making use of the appropriate triple (ε_1, ε_2, ε_3) for the curve yielding the solution for σ_1.
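The fixed-point scan can be sketched as follows; it simply tabulates the composite map over a grid of σ_1 values for each of the eight sign patterns and reports the grid cells where f(σ_1) − σ_1 changes sign. The inner root step repeats the formula of eqn.(25a); the integrability test of eqn.(15), which selects among multiple candidates, is left to the caller. Function and parameter names are illustrative.

```python
import numpy as np

def solve_slants(taus, h, n_grid=2000):
    """Scan the fixed-point map of Sec. 3.3 for candidate slant sets.

    taus : the three tilts tau_i (radians), e.g. from Pentland's estimator.
    h    : symmetric 3 x 3 matrix of inner products h_ij = f~_i . f~_j (from C).
    Returns a list of (sigma_1, sigma_2, sigma_3, signs) candidates.
    """
    def k(i, j):
        return np.cos(taus[i] - taus[j])                  # k_ij of eqn.(24)

    def step(sig, i, j, eps):                             # root of eqn.(25a) for sigma_j
        disc = k(i, j)**2 * np.sin(sig)**2 + np.cos(sig)**2 - h[i, j]**2
        if disc < 0:
            return None
        t = (k(i, j) * np.sin(sig) + eps * np.sqrt(disc)) / (np.cos(sig) + h[i, j])
        s = 2.0 * np.arctan(t)
        return s if 0.0 <= s <= np.pi / 2 else None

    sols = []
    for e1 in (+1, -1):
        for e2 in (+1, -1):
            for e3 in (+1, -1):
                prev = None
                for s1 in np.linspace(1e-3, np.pi / 2 - 1e-3, n_grid):
                    s2 = step(s1, 0, 1, e1)
                    s3 = step(s2, 1, 2, e2) if s2 is not None else None
                    s1b = step(s3, 2, 0, e3) if s3 is not None else None
                    cur = None if s1b is None else s1b - s1
                    if prev is not None and cur is not None and prev * cur <= 0:
                        sols.append((s1, s2, s3, (e1, e2, e3)))   # approximate crossing
                    prev = cur
    return sols
```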

3.4 Application to orientation-from-color problem

The algorithm for solution of the orientation-from-color problem is then as follows:

1. Carry out an LMS regression in color space to determine the matrix C, and identify outliers to be omitted from the analysis.

2. Calculate the norms ||f_i|| and the inner products h_ij from C.

3. Using only non-outlier pixels, apply Pentland's tilt estimator to each of the three color channels, yielding τ_1, τ_2, τ_3.

4. Calculate the quantities k_ij from the τ_i. Find all solutions for σ_1 from eqns.(25) and valid solutions for all three σ_i.

5. Compose vectors f_i from f~_i, given in terms of τ_i and σ_i, by multiplying by ||f_i||. These vectors make up the matrix F.

6. Calculate the matrix G = F⁻¹ (using the singular value decomposition method, in case the matrix is rank-reduced).

7. For each non-outlier pixel, obtain the normal vector n from eq.(8).

8. If more than one possible solution for the σ_i was obtained, choose the solution which yields the smallest value of the integrability integral (15) over all pixels.

In §4 below this algorithm is successfully applied to typical images derived from a radar range depth map by synthetically shading with colored lights, with noise added, and in §5 to a real image taken under spectrally varying illumination.

4 Orientation-from-Color for Synthetic Images

Consider Fig.3(a), which shows a laser range image of a plaster bust of Mozart.² From this depth map synthetic images can be generated, and all those studied were found to produce quite similar algorithm performance. Here we shall carry out the algorithm step by step for a particular example. Let us use the minimum number of lights, 3, to shade the depth map; however, in general any number of lights can be used [15]. Here we choose three lights with directions (σ, τ) = (60°, 20°), (40°, 160°), (50°, −110°), respectively, and color-strength matrix B = ((59, 13, 55)^T, (7, 51, 49)^T, (32, 43, 50)^T), as in eqn.(6); i.e., color b_1 equals column 1, etc. These colors happen to be the red and green color-opponent theory colors [24] and a neutral. The resulting image was produced by summing Lambertian shading for each light, with self-shadowing calculated using a ray-tracing algorithm. The image was scaled uniformly such that the maximum value of the modulus of ρ was 255 (a grayscale image has maximum 255). Additive Gaussian noise was then added with rms value 1/255 in order to ensure that results are not dependent on exact input values. The result is shown in Fig.3(b) (displayed in black and white).

Following the algorithm of §3.4, we begin by carrying out an LMS regression in color space. In this case the regression succeeds very well: the value of the robust version of the coefficient of determination R² is 0.968. Not many outliers are found in the non-background area of the image. Now normal vectors up to an arbitrary orthogonal transformation can be derived from a root of the matrix C. Not all such normals will in fact be normalized, since the regression accepts as non-outliers some pixels derived from shadowed points. Therefore we further carry out a robust estimate of the location of lengths of normal vectors, and reject further outliers for this second regression, as noted in §2.2. The mask for outliers determined by the LMS regression, plus allowance for incorrect lengths of recovered normal vectors, is shown in Fig.3(c).

To obtain tilts for the three vectors f_i we use Pentland's method. Table 1 shows the correct values for the three vectors, from the matrix F = B A, and from Pentland's estimator. Angles are shown in degrees.

² The laser range data for the bust of Mozart is due to Fridtjof Stein of the USC Institute for Robotics and Intelligent Systems.

                         f_1        f_2        f_3
Correct τ             -36.80    -149.91    -136.55
Pentland τ            -30.00    -154.60    -141.01
Correct σ              23.26      30.63      10.87
OFC σ                  23.15      24.69       6.33
OFC σ (correct τ)      22.43      28.36      10.18
Pentland σ             90.0       90.0       90.0
Lee-Rosenfeld σ         0.0       91.70       0.0
Correct ||f||         156.06     164.96     145.68
OFC ||f||             149.85     164.06     144.65
Channel maxima        156.50     165.21     144.26

Table 1: Algorithm results for synthetic image.

As well, the Lee and Rosenfeld estimators, eqn.(23), can be used to estimate slant by solving for σ from ⟨I²⟩^{1/2}/⟨I⟩ as an implicit equation. However, the smallest value of this ratio, for σ = 0, is 3/(2√2) = 1.0607. Unfortunately, this is larger than the ratio calculated for f_1 and f_3, and thus σ_1, σ_3 are set to 0°. For f_2 the ratio gives a wrong slant. Thus, calculating α from the slant and the estimator of ⟨I⟩ is also either undefined or wrong. Pentland's slant estimator gives slant estimates of 90° because the variance in the denominator of (22) is small (cf. [22]). In contrast, the orientation-from-color (OFC) method determines the equivalent of α from the estimates of ||f_i||. These are shown in Table 1 and are seen to be quite good. In fact, simply using the channel maxima gives the best result in this case. The discrepancy arises from the fact that the normal vectors recovered are not required to be normalized: the LMS regression gives the best fit for all data points, regardless of the channel maxima. We could, of course, simply set the ||f_i|| equal to the channel maxima and readjust F; however, while changing the lengths of the normal vectors recovered, this would not change the f~_i or the slants found by the constrained slant estimator.

To arrive at estimates for slants, we consider the curves for the fixed-point problem, eqns.(25). Fig.4(a) shows that there are two candidate solutions for σ_1. The correct value for σ_1 is shown by the mark S. Note that the correct value does fall on the intersection of the curve marked 2 and the f(σ_1) = σ_1 line, notwithstanding the fact that values in the equations have been distorted by image noise, regression error, and any error arising from the independent tilt estimator. The other intersection point gives negative values for σ_2 and σ_3 and is ruled out. Fig.4(b) shows that any other value of σ_1 would yield a larger value of the minimization integral (15). Here, logs of values of the integral are displayed for values of σ_1 along the curve marked 2 in Fig.4(a), with σ_2 and σ_3 determined by one iteration of eqns.(25) (further iterations do not result in simultaneous solutions except at the single solution point). Table 1 shows the slant values ("OFC σ") recovered from the intersection in Fig.4(a): they are reasonably accurate. Clearly, Table 1 is not meant to be a comprehensive comparison of all methods available; on the other hand, the methods compared have performance generally typical of previous methods for finding the slant [21]. Noting that the method is not tied to any particular tilt estimator, Table 1 also shows how the method would have performed given accurate measures of the tilts ("OFC σ (correct τ)"). In this circumstance the method performs very well.

Fig.5(a) is a grayscale image of how the original depth map would appear, shaded from direction (1, 1, 1). Here, the image has been masked by the outlier image, Fig.3(c); thus this is the best we can hope to do for a similar shaded image of the recovered normals. Fig.5(b) is a shaded image of the algorithm output. A depth map could be created from recovered normals by spanning the outlier gaps using a coupled depth/slope surface recovery algorithm [25], if a smooth recovered surface were acceptable. Fig.5(c) shows how well recovered normals match up with actual ones, for non-outlier pixels, by displaying the angular errors between them. The median error is 21.51°. In comparison, the best that could be done with complete knowledge of matrix F, e.g. using a rotation derived perfectly correctly from (15), is 16.80°, which is not much better. Therefore the algorithm performs adequately. Fig.5(d) shows the errors in the graylevels for Fig.5(b) in comparison with Fig.5(a).
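For reference, the kind of synthetic shading described at the start of this section can be sketched as follows. As an assumption of this sketch, self-shadowing is approximated simply by clamping a_i · n at zero, whereas the images above also include ray-traced cast shadows; the noise level is left as a parameter, and the function name is illustrative.

```python
import numpy as np

def shade(normals, A, B, noise_rms=1.0, rng=None):
    """Render an RGB image from a field of unit normals (H x W x 3).

    A : L x 3 matrix of unit light directions (rows a_i).
    B : 3 x L matrix of color-strength vectors (columns b_i).
    """
    rng = np.random.default_rng() if rng is None else rng
    shading = np.clip(normals @ A.T, 0.0, None)             # H x W x L: (a_i . n) clamped at 0
    rgb = shading @ B.T                                      # H x W x 3: eqn.(3) per pixel
    rgb *= 255.0 / np.max(np.linalg.norm(rgb, axis=-1))      # scale so max |rho| is 255
    rgb += rng.normal(scale=noise_rms, size=rgb.shape)       # additive Gaussian noise
    return rgb
```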

5 Real Image

Fig.6(a) shows a color image of an egg, illuminated by orange, green, and blue lights placed close to the imaging device, which was a commercial camcorder.³ The egg was very smooth and uniformly colored.

³ The egg image is due to Leonid Kontsevich of the Smith-Kettlewell Eye Research Institute.


                         f_1        f_2        f_3
Pentland τ             90.98     -12.71    -167.99
OFC σ                  18.71      10.81      19.98
Pentland σ             24.02      25.70      22.95
Lee-Rosenfeld σ         0.0        0.0        0.0
OFC ||f||             198.69     196.72     195.70
Channel maxima        206        206        202

Table 2: Algorithm results for real image.

Fig.6(b) shows the results in color space of an LMS regression using model equation (10). Again the fit is very good, yielding a robust coefficient of determination R² of 0.9950. This fit is considerably better than that for the synthetic image of a rough surface in §4. In the present case, as before, all ellipsoid coefficients are statistically significant. Now a root of C is extracted and normals are recovered for non-outlier pixels. Again, we find that it is necessary to modify the standard procedure for identifying outliers by adding additional pixels for which norms of recovered normals are too far from unity. To do so, we carry out a second LMS regression, a 1-dimensional estimate of the location of the correct length of norms. Here that estimate is 0.9883. When the additional rejected pixels are added to the outlier mask, the result is as in Fig.6(c). There is some "color bleeding" on the side and bottom of the image, due to the fact that the object is placed on a glossy white label, so that interreflection takes place, and the fact that the background is a white magazine. The median value of norms of non-outlier recovered normal vectors is 1.0090 and the range of values is 0.9152 to 1.0611. This narrow range shows that the algorithm succeeds very well in recovering orientation vectors.

Fig.6(d) displays the fixed-point problem in this case. As can be seen, there is only one σ_1 point to examine. Table 2 shows how the direct method for recovering the F matrix performs in this case. The normal vectors derived from the F matrix and the RGB values are displayed in the bottom row of images in Figs.7(a,b,c), shown as synthetic grayscale images shaded with a single light placed in the recovered f_i directions (images in Fig.7 have been histogram-equalized for display). The top row is the input R, G, and B images, multiplied by the inlier mask. If we have succeeded in recovering the correct orientation vectors, then we expect the bottom row to equal the top row within reasonable accuracy. The top row is the best we can hope to do, given the fact that pixels at outlier positions cannot be determined.

Qualitatively, the input (times mask) images and output images (bottom row of Fig.7) do agree, and agree with the OFC results in Table 2. As a simple test of the correspondence of the input and output images, we calculate the correlation between each of the two sets of images, defined as Σ{(I1 − Ī1)(I2 − Ī2)} / {[Σ(I1 − Ī1)²]^{1/2} [Σ(I2 − Ī2)²]^{1/2}}. For the three RGB channels the correlations are 0.99897, 0.99896, 0.99892.

We can, of course, ignore our absence of knowledge at outlier pixels and integrate recovered normals to produce a depth map by using an interpolation scheme or simply by setting p(x, y) and q(x, y) to zero at outlier pixels. Fig.8 shows the recovered depth map for the egg image using a simple Poisson equation solver.

Once we have recovered depth, another self-check is available: we can re-run Pentland's tilt finder on the shaded synthetic output images. If we have succeeded in determining the correct normals, then the output images should produce close to the same results as the input (as given for the egg example in Table 2). This is an indirect test that the output normals are similar to those that actually produced the input image, since even if the normals were incorrectly recovered and the lights were also incorrectly recovered, the recovered normals would produce a different pattern of self-shading than the correct normals, and the light-source-direction finder would produce different output values than those input. While Fig.7 is not very dramatic (since the input images do not show large changes in brightness), the last test is indeed positive here: the output images produce close to the same illuminant direction as the input images.

The correlation test not only shows that the method does well in recovering the correct normals, but also points out an interesting indicator of how well RGB values for an image obey the linear model. A matte surface departs from the Lambertian model for incident angles that are nearly grazing to the surface and for surface normals nearly orthogonal to the viewing direction [26]. If this is not the situation for most surface normals, then a surface illuminated with a set of many lights should produce RGB images resulting from the present analysis having high correlations with the input RGB channel images.


6 Conclusions

In this study we have shown that it is possible to recover orientation from color, in a spectrally varying environment with unknown lighting, using a direct method for finding equivalent-light directions. This comes about by using the dot-products determined by a robust regression in color space to constrain possible values of the `illuminant' slant angle. The direct method replaces a much more complex method based on minimizing an integrability condition, while also avoiding an ambiguity inherent in that method. The present method also makes available a new indicator of how Lambertian a surface is, based on the correlation of input and reconstructed RGB channel images.

Of course, the method is dependent on the illumination environment being varying, i.e., rank-3 (although as a special case the present method could always be applied to the usual, black and white, photometric stereo problem). In a closed environment the condition of rank-3 lighting is more likely to be met because of colored interreflected light. However, the important case of rank-2 light, intermediate between the present study and the usual, rank-1, shape-from-shading problem, will be pursued elsewhere [27], along with the question of how to knit together rank-1, -2, and -3 areas in an image.

7 Acknowledgements

The author is indebted to Tom Shermer and Peter Borwein for useful discussions, and to an anonymous referee for thoughtful comments.

8 Figure Captions

Figure 1. Vectors f_i must lie in the plane of recovered tilt (shown dashed in (b)), while also having the correct dot product with the other vectors f_j (dotted circles in (c)).

Figure 2. The geometric problem of Fig.1 can be turned into a fixed-point problem. Here, there are two possible solutions. The correct solution is marked `S'.

Figure 3. (a): Depth map of plaster bust. (b): Synthetic color image from shading with three colored lights. (c): Mask for inliers detected by robust regression.

Figure 4. (a): Fixed-point problem determines constrained σ_1. Solution with all of {σ_1, σ_2, σ_3} positive is marked `S'. (b): Fixed-point problem solution σ_1, marked `S', yields least value of integrability integral.

Figure 5. (a): Shaded (depth map) × (inlier mask), for light from (1, 1, 1). (b): Synthetic image of recovered orientation vectors. (c): Angular errors (degrees) for recovered inlier orientation vectors. (d): Graylevel errors between Figs. (a) and (b) for inliers.

Figure 6. (a): Input color image. (b): Robust regression in color space finds best ellipsoid. (c): Outliers detected. (d): Fixed-point problem determines constrained σ_1, marked `S'.

Figure 7. (a): R, (b): G, (c): B. Top row: (input image) × (inlier mask). Bottom row: output normals, shaded with a single light from the recovered f direction.

Figure 8. Depth reconstructed with Poisson equation, using all pixels including outliers.

References

[1] R. J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19:139-144, 1980.

[2] R. J. Woodham, Y. Iwahori, and R. A. Barman. Photometric stereo: Lambertian reflectance and light sources with unknown direction and strength. Technical Report TR 91-18, University of British Columbia Department of Computing Science, 1991.

[3] M. S. Drew. Shape from color. Technical Report CSS/LCCR TR 92-07, Simon Fraser University School of Computing Science, 1992. Available using ftp://fas.sfu.ca/pub/cs/techreports/1992/ as CSS-LCCR92-07.ps.Z.

[4] M. S. Drew. Robust specularity detection from a single multi-illuminant color image. CVGIP: Image Understanding, 59:320-327, 1994.

[5] P. J. Rousseeuw. Least median of squares regression. J. Amer. Stat. Assoc., 79:871-880, 1984.

[6] P. J. Rousseeuw and A. M. Leroy. Robust Regression and Outlier Detection. Wiley, 1987.

[7] R. J. Woodham. Gradient and curvature from the photometric-stereo method, including local confidence estimation. J. Opt. Soc. Am. A, 11:3050-3068, 1994.

[8] A. P. Petrov. Light, color, and shape. In E. P. Velikhov, editor, Cognitive Processes and Their Simulation, pages 350-358, 1987. In Russian.

[9] A. P. Petrov. Color and Grassmann-Cayley coordinates of shape. In B. E. Rogowitz, M. H. Brill, and J. P. Allebach, editors, Human Vision, Visual Processing and Digital Display II, volume 1453, pages 342-352. SPIE, 1991.

[10] A. P. Petrov. On obtaining shape from color shading. Color Research and Application, 18:375-379, 1993.

[11] A. P. Petrov. Surface color and color constancy. Color Research and Application, 18:236-240, 1993.

[12] B. K. P. Horn. Robot Vision. MIT Press, 1986.

[13] A. P. Petrov and L. L. Kontsevich. Properties of color images of surfaces under multiple illuminants. J. Opt. Soc. Am. A, 11:2745-2749, 1994.

[14] L. L. Kontsevich, A. P. Petrov, and I. S. Vergelskaya. Reconstruction of shape from shading in color images. J. Opt. Soc. Am. A, 11:1047-1052, 1994.

[15] M. S. Drew and L. L. Kontsevich. Closed-form attitude determination under spectrally varying illumination. In Proc. IEEE Comp. Soc. Conf. on Comp. Vis. and Patt. Rec., pages 985-990, 1994.

[16] M. S. Drew. Outlier detection and physical model in the orientation-from-color problem. In Peter Meer and Robert M. Haralick, editors, Workshop on Performance vs. Methodology in Computer Vision, Seattle, WA, June 24-25, pages 124-133. NSF/ARPA and IEEE Computer Society, 1994.

[17] A. P. Pentland. Local shading analysis. IEEE Trans. Patt. Anal. and Mach. Intell., 6:170-187, 1984. See also revised version in Shape from Shading, B. K. P. Horn and M. J. Brooks, eds., MIT Press, Cambridge, MA, 1989, pp. 443-487.

[18] A. P. Pentland. Finding the illuminant direction. J. Opt. Soc. Am., 72:448-455, 1982.

[19] Q. Zheng and R. Chellappa. Estimation of illuminant direction, albedo, and shape from shading. IEEE Trans. Patt. Anal. and Mach. Intell., 13:680-702, 1991.

[20] W. Chojnacki, M. J. Brooks, and D. Gibbins. Revisiting Pentland's estimator of light source direction. J. Opt. Soc. Am. A, 11:118-124, 1994.

[21] D. Gibbins, M. J. Brooks, and W. Chojnacki. Light source direction from a single image: a performance analysis. Australian Comput. J., 23:165-174, 1991.

[22] C.-H. Lee and A. Rosenfeld. Improved methods of estimating shape from shading using the light source coordinate system. Artificial Intelligence, 26:125-143, 1985. See also revised version in Shape from Shading, B. K. P. Horn and M. J. Brooks, eds., MIT Press, Cambridge, MA, 1989, pp. 323-347.

[23] R. L. Burden, J. D. Faires, and A. C. Reynolds. Numerical Analysis. Prindle, Weber & Schmidt, 2nd edition, 1981.

[24] D. Jameson and L. M. Hurvich. Some quantitative aspects of an opponent-colors theory. I. Chromatic responses and spectral saturation. J. Opt. Soc. Am., 45:546-552, 1955.

[25] J. G. Harris. A new approach to surface reconstruction: the coupled depth/slope model. In Proc. First Int. Conf. on Comp. Vision, pages 277-283, 1987.

[26] L. B. Wolff. Diffuse reflection. In Proc. IEEE Comp. Soc. Conf. on Comp. Vis. and Patt. Rec., pages 472-478, 1992.

[27] M. S. Drew. 2 goes into 3: Reduction of rank-reduced orientation-from-color problem with many unknown lights to two-image known-illuminant photometric stereo. Technical Report CSS/LCCR TR 95-08, Simon Fraser University School of Computing Science, 1995. Available using ftp://fas.sfu.ca/pub/cs/techreports/1995/ as CSS-LCCR95-08.ps.Z.

[Figures 1-8 (plots and images) appear here in the original; see the Figure Captions above. The recoverable panel titles and axis labels from the extracted text are: Figure 1, panels (a)-(c) with x, y, z axes; Figure 2, "Fixed point problem: (slant,tilt)=(40,20),(30,-120),(5,0)", slant_1 vs. slant_1_fnc, curves numbered by sign case, solution marked S; Figure 3(a), "Mozart: Depth"; Figure 4(a), "Mozart: Fixed point problem" (slant_1 vs. slant_1_fnc), and 4(b), "Minimization of integrability integral" (slant_1 vs. log_integrability_integral), solutions marked S; Figure 5, histograms "Mozart: angular errors" (degrees) and "Shaded recovered Mozart normals: absolute error" (shading in 0..255); Figure 6(b), "Egg: Ellipsoid in Color Space" (Red vs. Green), and 6(d), "Egg: Fixed point problem" (slant_1 vs. slant_1_fnc), solution marked S; Figure 8, "Reconstructed Depth".]