Radiometric Calibration of CCD Sensors: Dark Current and Fixed Pattern Noise Estimation

Alberto Ortiz and Gabriel Oliver
Department of Mathematics and Computer Science, University of the Balearic Islands
Email: [email protected], [email protected]

Abstract— The irradiance measurement performed by vision cameras is not noise-free, due both to processing errors during CCD fabrication and to the behaviour of the electronic device itself. A proper characterization of sensor performance, however, allows either removing the resulting noise from the image or accounting for it within image processing algorithms. This paper proposes new methods for estimating the distribution parameters of two such sources of noise: dark current and the so-called fixed pattern noise. Since both methods require knowledge about the scene illumination, an estimation method using a calibrating sphere is also presented. This method models illumination as the combination of directional and ambient lighting. Experimental results can be found at the end of the paper.
Keywords – robot vision; sensors; camera calibration

I. INTRODUCTION

As is well known, vision cameras measure the spatial distribution of light incident on a light-sensitive device and produce, accordingly, bidimensional descriptions of this distribution known as images. Since Charge-Coupled Devices (CCD) were proposed and experimentally verified in the 1970's as imaging sensors, they have become the most widespread imaging technology and are included in many current cameras. Among their many interesting features, CCD sensors are characterized by a high linearity which makes handling pixel values easy. Unfortunately, the measurement process is not noise-free, due both to processing errors during CCD fabrication and to the behaviour of the electronic device itself. A proper characterization of sensor performance, however, allows either removing the resulting noise from the image or accounting for it within image processing algorithms. On the basis of the camera noise model of [1], this paper proposes two new methods for computing the distribution parameters of, respectively, dark current and the fixed pattern noise. As both methods require knowledge about the scene illumination, an estimation method using a calibrating sphere is presented first. This method models illumination as the combination of directional and ambient lighting.

The rest of the paper is organized as follows: first, section II describes the model of image formation, from the interaction of light with matter to the operation of a CCD camera; secondly, section III addresses the problem of estimating the lighting parameters by means of a spherical calibration object under perspective projection; thirdly, sections IV and V present two methods for estimating the distribution parameters of,
respectively, dark current and the fixed pattern noise; previous work is reviewed in section VI; next, section VII presents some experimental results; finally, section VIII concludes the paper.

II. IMAGE FORMATION

A. Interaction of Light with Matter

It is generally accepted that object reflection is an additive composition of body or diffuse reflection and interface or specular reflection [2], [3]. Besides, this model is enhanced by a further term accounting for non-directional or ambient lighting, which interacts with the scene increasing object radiance irrespective of local surface geometry. All in all, the radiance at a scene point p and for a wavelength λ can be summarized as indicated in equation 1:

L(p; λ) = \underbrace{L_a(λ) ρ_a(p; λ)}_{L_a(p;λ)} + \underbrace{m_b(p) [L_d(λ) ρ_b(p; λ)]}_{L_b(p;λ)} + \underbrace{m_i(p) [L_d(λ) ρ_i(p; λ)]}_{L_i(p;λ)} ,   (1)
where: (i) L_a(λ) represents light coming from all directions in equal amounts, while L_d(λ) stands for directional lighting; (ii) ρ_a, ρ_b and ρ_i are surface material reflectances expressing the fraction of the incoming light which is conveyed by the corresponding reflection component, ρ_a being assumed a linear combination of the body and interface reflectances ρ_b and ρ_i, respectively; (iii) m_b is a term dependent on local surface geometry, most times given as m_b(p) = cos θ(p), where θ(p) is the angle between the unit direction towards the light source at p, \vec{s}(p), and the unit surface normal at p, \vec{n}(p), so that m_b(p) = \vec{s}(p) · \vec{n}(p); and, finally, (iv) m_i is a geometrical term for interface reflection (see [4], among others, for the details).

B. Camera Operation Model

Ideally, the number of electrons accumulated at a given collection site or image cell (i, j) for colour band c, I^c(i, j), can be expressed as:

I^c(i, j) = T \int_Λ \left( \int_x \int_y E(x, y; λ) S_r(x, y) η(λ) \, dx \, dy \right) τ^c(λ) \, dλ ,   (2)
where (x, y) are continuous coordinates on the sensor plane, Λ represents the set of wavelengths λ within the visible spectrum, T is the integration or exposure time,
E(x, y; λ) is the irradiance incident at point (x, y) over the collection site, S_r(x, y) is the spatial response of the collection site, η(λ) is the ratio of electrons collected per unit of incident light energy (a form of the so-called quantum efficiency), and τ^c(λ) is the filter transmittance for colour channel c. Assuming a non-attenuating propagation medium and ignoring the blurring and low-pass filtering effects of the point-spread function of the optics in a properly focused camera, the following relation can be established between E(x, y; λ) and the corresponding scene radiance L(p; λ) [5]:

E = \frac{π}{4} \left( \frac{d}{f} \right)^2 (\cos^4 ϕ) \, L ,   (3)

where d is the effective diameter of the lens (i.e. its aperture), f is the focal distance and ϕ is the angle between the optical axis and the straight line that, passing through the lens nodal point, connects (x, y) with p. The quantity f/d is the so-called F-number.

Several sources of noise can affect the performance of CCD-based imaging systems, preventing them from measuring actual irradiance values. According to [1], the digitized signal corresponding to pixel (i, j) can be stated as a random variable D^c(i, j) = μ^c(i, j) + N^c(i, j) as follows:

D^c(i, j) = \underbrace{(K(i, j) I^c(i, j) + E_{DC}) A^c}_{μ^c(i,j)} + \underbrace{\underbrace{N_S^c(i, j) A^c}_{N_e^c(i,j)} + \underbrace{(N_{DC}(i, j) + N_R(i, j)) A^c + N_Q(i, j)}_{N_f^c(i,j)}}_{N^c(i,j)} ,   (4)
where: (i) K represents a Gaussian random variable of mean 1 and standard deviation σ_K expressing the site-to-site nonuniformities among image cells due to processing errors during CCD fabrication, also called fixed pattern noise; (ii) E_{DC} is the expected dark current generated by thermal energy at every collection site, while N_{DC} is a zero-mean Gaussian noise of variance σ_{DC}^2 superimposed over E_{DC}; (iii) N_S^c is the so-called shot noise, representing the uncertainty in the number of electrons collected at a given image cell, which is, in turn, distributed as a Poisson random variable of variance K I^c + E_{DC}; (iv) N_R is the zero-mean Gaussian noise introduced by the charge-to-voltage output amplifier of the camera; (v) N_Q is a uniform random variable defined over the interval [−1/2, 1/2] accounting for the quantization noise; and (vi) A^c is the camera gain for colour channel c. In equation 4, N_e^c depends on the number of collected electrons while N_f^c does not.

III. ESTIMATION OF LIGHTING PARAMETERS

1) General Procedure: Using equation 3, equation 1 and equation 2 can be combined to produce equation 5, where the contribution of every reflection component to I^c of equation 2 is explicitly stated. In equation 5, (L_a ρ_a(i, j))^c, (L_d ρ_b(i, j))^c and (L_d ρ_i(i, j))^c represent the joint contribution of lighting and reflectance from every reflection component and, at the
same time, are assumed to have embedded, among others, the integration time T (equation 2) and the factor (π/4)(d/f)^2 (equation 3), so that variations in exposure time and lens aperture yield different values for those quantities.

I^c(i, j) = (L_a ρ_a(i, j))^c + m_b(i, j) (L_d ρ_b(i, j))^c + m_i(i, j) (L_d ρ_i(i, j))^c .   (5)

In the case of a uniformly coloured matte object (ρ_a(p; λ) = ρ_b(p; λ) = ρ_b(λ), ρ_i(p; λ) = 0, ∀p in the object), equation 6 results from the combination of equation 4 and equation 5:

D^c(i, j) = K(i, j) (L_a ρ_b)^c A^c + E_{DC} A^c + m_b(i, j) K(i, j) (L_d ρ_b)^c A^c + N^c(i, j) .   (6)
The relevance of equation 6 is that it reveals a (noisy) linear relationship, under the aforementioned circumstances, between D^c and m_b for all the object pixels. Therefore, if enough m_b values can be related to the corresponding D^c values, so that some pairs (m_b, D^c) can be obtained, those (noisy) pairs can be fitted by a straight line D^c = α^c + m_b β^c, which allows estimating α^c = (L_a ρ_b)^c A^c + E_{DC} A^c and β^c = (L_d ρ_b)^c A^c. The noise in the pairs (m_b, D^c), which comes from N^c and the spatial variation in K, can be significantly reduced if the D^c values for those pixels corresponding to the same m_b are averaged and this average D̄^c is used in the fitting. If the matte object under consideration is white (ρ_b(λ) = 1), once the straight line parameters α^c and β^c are known, β^c is an estimate of the strength of the directional lighting of the scene, L_d^c A^c, while, once E_{DC} A^c is known, α^c − E_{DC} A^c = L_a^c A^c is an estimate of the strength of the ambient illumination, both for colour channel c.

The knowledge about m_b(i, j) = \vec{s}(i, j) · \vec{n}(i, j) required by the previous procedure implies, as well, knowledge about \vec{n}(i, j) and \vec{s}(i, j) for the same set of pixels. In particular, \vec{n}(i, j) can be determined if the object shape is known beforehand, which implies the use of a calibration object. The following sections prove that both \vec{n} and \vec{s} can be easily determined from the projection of a calibration sphere.

2) Projection of the Sphere: Under perspective projection, the shape of the projection of the sphere results from the intersection of the image plane with the circular cone depicted in figure 1, whose vertex coincides with the center of projection. Therefore, the projection of the sphere over the image plane is a conic, since it is the intersection between a plane and a cone. Its exact parameters are derived next.
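The straight-line fit of the averaged pairs (m_b, D̄^c) described above can be sketched as follows. This is a minimal single-channel illustration with synthetic data, not the authors' implementation; the numeric values (intercept 20, slope 180) are arbitrary assumptions chosen only to exercise the fit:

```python
import numpy as np

def fit_lighting_line(mb, D_mean):
    """Least-squares fit of D = alpha + mb * beta, the (noisy) linear
    relation of equation 6 between the geometric term mb and the
    averaged digital values for one colour channel."""
    A = np.column_stack([np.ones_like(mb), mb])
    (alpha, beta), *_ = np.linalg.lstsq(A, D_mean, rcond=None)
    return alpha, beta

# Synthetic check: alpha = 20 (ambient + dark current term),
# beta = 180 (directional term), plus small Gaussian noise.
rng = np.random.default_rng(0)
mb = np.linspace(0.0, 1.0, 50)
D = 20.0 + 180.0 * mb + rng.normal(0.0, 0.5, mb.size)
alpha, beta = fit_lighting_line(mb, D)
print(alpha, beta)  # should recover approximately (20, 180)
```

With a white object, beta would estimate the directional lighting strength and, once the dark current term is subtracted, alpha the ambient strength, as explained above.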
On the one hand, all the points (x, y, z) over the sphere surface satisfy the equation (x − x_0)^2 + (y − y_0)^2 + (z − z_0)^2 = R^2, where (x_0, y_0, z_0) and R are, respectively, the center and the radius of the sphere. On the other hand, the straight line connecting any point over the sphere surface (x, y, z) and the center of projection (0, 0, 0) can be expressed, assuming a pin-hole camera model, as (x, y, z) = t(u, v, f), where (u, v) is the projection of (x, y, z) onto the image plane and f is the focal distance of the camera. Now, substituting (x, y, z) = t(u, v, f) into the equation of the sphere and reordering, equation 7 results:

\underbrace{(u^2 + v^2 + f^2)}_{a} t^2 \underbrace{− 2(u x_0 + v y_0 + f z_0)}_{b} t + \underbrace{x_0^2 + y_0^2 + z_0^2 − R^2}_{c} = 0 ,   (7)

from which t is given as (−b ± \sqrt{b^2 − 4ac})/(2a). The outer points of the sphere projection correspond to the points of the sphere surface for which the straight line (x, y, z) = t(u, v, f) intersects the sphere in just one point, i.e. b^2 − 4ac = 0. This fact leads to equation 8, which defines the wanted conic expressed in terms of u and v:

(R^2 − y_0^2 − z_0^2) u^2 + (R^2 − x_0^2 − z_0^2) v^2 + 2 x_0 y_0 uv + 2 f x_0 z_0 u + 2 f y_0 z_0 v + f^2 (R^2 − x_0^2 − y_0^2) = 0 .   (8)

[Fig. 1. Projection of a sphere over the image plane.]

[Fig. 2. Graph used to derive z_0/R.]

[Fig. 3. View of figure 2 after rotating the viewpoint so as to put it on top of the ellipse major axis.]

The eigenvalues of the quadratic form associated with this conic can be shown to be λ_1 = R^2 − z_0^2 and λ_2 = R^2 − x_0^2 − y_0^2 − z_0^2. Since z_0 is always larger than R (see figure 1), both eigenvalues are negative (i.e. they have the same sign) and therefore the conic is, in general, an ellipse.

3) Estimation of Surface Normal Vectors: The surface normal vector \vec{n}(x, y, z) for a sphere centered at (x_0, y_0, z_0) and having radius R is given by equation 9:

\vec{n}(x, y, z) = \left( \frac{x − x_0}{R}, \frac{y − y_0}{R}, −\sqrt{1 − \left(\frac{x − x_0}{R}\right)^2 − \left(\frac{y − y_0}{R}\right)^2} \right)^T .   (9)

Given (x, y) = \frac{z}{f}(u, v):

n_x = \frac{u}{f}\frac{z}{R} − \frac{x_0}{R} ,   n_y = \frac{v}{f}\frac{z}{R} − \frac{y_0}{R} .   (10)

In order for these equations to be useful, f, z/R, x_0/R and y_0/R should be known beforehand. The following method makes use of f and the ellipse fitting the contour of the sphere projection to determine the other values.
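Equation 8's conic can be assembled and classified numerically. The sketch below is an illustration only, with an arbitrarily chosen sphere pose; it builds the conic coefficients, forms the 2×2 quadratic-form matrix of the second-order terms, and confirms that both eigenvalues share a sign, i.e. the projected contour is an ellipse:

```python
import numpy as np

def sphere_projection_conic(x0, y0, z0, R, f):
    """Coefficients (A, B, C, D, E, F) of the conic of equation 8,
    written as A u^2 + B v^2 + 2C uv + 2D u + 2E v + F = 0."""
    A = R**2 - y0**2 - z0**2
    B = R**2 - x0**2 - z0**2
    C = x0 * y0
    D = f * x0 * z0
    E = f * y0 * z0
    F = f**2 * (R**2 - x0**2 - y0**2)
    return A, B, C, D, E, F

# Arbitrary example pose: sphere of radius 2 at (1, 0.5, 10), f = 1.
A, B, C, D, E, F = sphere_projection_conic(x0=1.0, y0=0.5, z0=10.0, R=2.0, f=1.0)
# Quadratic form of the second-order terms; its eigenvalues should be
# lambda1 = R^2 - z0^2 and lambda2 = R^2 - x0^2 - y0^2 - z0^2.
Q = np.array([[A, C], [C, B]])
lam = np.linalg.eigvalsh(Q)  # ascending order
print(lam)  # both negative since z0 > R, hence an ellipse
```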
On the one hand, z/R is a data structure of values the same size as the sphere projection, which can be determined using the equation of the sphere surface (x − x_0)^2 + (y − y_0)^2 + (z − z_0)^2 = R^2, substituting x = u z/f and y = v z/f and dividing the resulting equation by R^2, which yields equation 11:

\left( \frac{u}{f}\frac{z}{R} − \frac{x_0}{R} \right)^2 + \left( \frac{v}{f}\frac{z}{R} − \frac{y_0}{R} \right)^2 + \left( \frac{z}{R} − \frac{z_0}{R} \right)^2 = 1 ,   (11)

from which, after a reordering of terms, a second-order polynomial in z/R is obtained:

\left[ \left(\frac{u}{f}\right)^2 + \left(\frac{v}{f}\right)^2 + 1 \right] \left(\frac{z}{R}\right)^2 − 2 \left[ \frac{u}{f}\frac{x_0}{R} + \frac{v}{f}\frac{y_0}{R} + \frac{z_0}{R} \right] \frac{z}{R} + \left[ \left(\frac{x_0}{R}\right)^2 + \left(\frac{y_0}{R}\right)^2 + \left(\frac{z_0}{R}\right)^2 − 1 \right] = 0 .   (12)

On the other hand, if the focal distance f is available, either from the lens/camera manufacturer or from a previous geometric calibration of the camera, then the remaining variables x_0/R, y_0/R and z_0/R can be put as a function of f and some information coming from the ellipse fitting the contour of the sphere projection. First of all, given the relationship between the center of the sphere and the center of the ellipse, (x_0, y_0) = \frac{z_0}{f}(u_0, v_0), if both equations are divided by R, expressions for x_0/R and y_0/R are obtained in terms of z_0/R:

\frac{x_0}{R} = \frac{u_0}{f}\frac{z_0}{R} ,   \frac{y_0}{R} = \frac{v_0}{f}\frac{z_0}{R} .   (13)
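The per-pixel quadratic of equation 12 can be solved with the standard quadratic formula. The sketch below is illustrative only: the normalized sphere parameters are assumed given, and taking the smaller root as the visible surface point (the intersection nearer the camera, consistent with figure 1) is an assumption of this sketch rather than something spelled out in the paper:

```python
import math

def z_over_R(u, v, f, x0R, y0R, z0R):
    """Solve equation 12 for z/R at image position (u, v), given the
    normalized sphere parameters x0/R, y0/R, z0/R and the focal f."""
    a = (u / f) ** 2 + (v / f) ** 2 + 1.0
    b = -2.0 * ((u / f) * x0R + (v / f) * y0R + z0R)
    c = x0R**2 + y0R**2 + z0R**2 - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # (u, v) falls outside the sphere projection
    # Smaller root: the intersection nearer the camera, i.e. the
    # visible point of the sphere surface.
    return (-b - math.sqrt(disc)) / (2.0 * a)

# Sphere centered on the optical axis at z0/R = 5: at the image
# center the visible point is z = z0 - R, i.e. z/R = 4.
print(z_over_R(0.0, 0.0, 1.0, 0.0, 0.0, 5.0))  # → 4.0
```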
Secondly, z_0/R can be determined from some geometrical relationships between the sphere and its projection over the image plane. These relationships are established in figures 2 and 3, the plane depicted in the latter being the one containing the triangle (0, 0, 0)−(0, 0, f)−(u_0, v_0, f) of figure 2. Over this plane, the angle ψ satisfies:

\tan ψ = \frac{R}{\sqrt{x_0^2 + y_0^2 + z_0^2}} = \frac{r}{\sqrt{u_0^2 + v_0^2 + f^2}} ,   (14)

from which, substituting (x_0, y_0) by \frac{z_0}{f}(u_0, v_0), z_0/R = f/r results. Now, since r = a \cos χ (see figure 3) and \cos χ = f / \sqrt{u_0^2 + v_0^2 + f^2} (see figure 2), z_0/R is definitely given by:

\frac{z_0}{R} = \frac{\sqrt{u_0^2 + v_0^2 + f^2}}{a} .   (15)

4) Estimation of the Lighting Direction: If the light coming from the directional light source is assumed distant, \vec{s}(i, j) ≈ \vec{s} = (s_x, s_y, s_z) throughout the scene. In this way, using the expressions developed for n_x and n_y (equation 10), and defining p and q as p = \frac{u}{f}\frac{z}{R} − \frac{x_0}{R} and q = \frac{v}{f}\frac{z}{R} − \frac{y_0}{R}, \cos θ results to be:

\cos θ = p s_x + q s_y − \left( \sqrt{1 − p^2 − q^2} \right) s_z .   (16)
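Equations 13 and 15 translate directly into code. The sketch below is an illustration under the assumption that the ellipse center (u_0, v_0) and major semi-axis a come from a prior ellipse fit to the sphere contour (that fitting step is not shown here):

```python
import math

def sphere_pose_from_ellipse(u0, v0, a, f):
    """Recover (x0/R, y0/R, z0/R) from the fitted ellipse center
    (u0, v0), its major semi-axis a and the focal distance f,
    using equation 15 and then equation 13."""
    z0R = math.sqrt(u0**2 + v0**2 + f**2) / a   # equation 15
    x0R = (u0 / f) * z0R                        # equation 13
    y0R = (v0 / f) * z0R
    return x0R, y0R, z0R

# Illustrative case: projection centered on the optical axis with
# semi-axis a = 0.2 and f = 1 gives z0/R = 1/0.2 = 5.
x0R, y0R, z0R = sphere_pose_from_ellipse(u0=0.0, v0=0.0, a=0.2, f=1.0)
print(x0R, y0R, z0R)
```

Together with the previous quadratic in z/R, these values provide everything equation 10 needs to compute n_x and n_y per pixel.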
\vec{s} can now be determined from the intensity pattern of the sphere projection. In effect, assuming for the moment N^c(i, j) = 0 and K(i, j) = 1, the isophote curve of intensity L of the image (i.e. D^c = L), without considering the background, is given by:

L = α^c + \left[ p s_x + q s_y − \sqrt{1 − p^2 − q^2} \, s_z \right] β^c ,   (17)

where α^c = (L_a ρ_b)^c A^c + E_{DC} A^c and β^c = (L_d ρ_b)^c A^c. Equation 18 can now be obtained by reordering equation 17:

(s_x^2 + s_z^2) p^2 + (s_y^2 + s_z^2) q^2 + 2 s_x s_y pq − 2 C s_x p − 2 C s_y q + C^2 − s_z^2 = 0 ,   (18)

where C = (L − α^c)/β^c. Clearly, equation 18 defines a rotated conic in terms of the variables p and q. The eigenvalues of the corresponding quadratic form are λ_1 = s_x^2 + s_y^2 + s_z^2 = 1 and λ_2 = s_z^2. Since both are positive (i.e. they have the same sign), the conic turns out to be an ellipse. On the other hand, the coordinates of the center of the ellipse (p_0, q_0) are given by p_0 = C s_x and q_0 = C s_y, while the slopes m_1 and m_2 of the axes of the ellipse are m_1 = s_y/s_x, for the shortest axis (i.e. the one related to λ_1), and m_2 = −s_x/s_y, for the largest axis. Therefore, if (p, q) points corresponding to a given isophote curve are fitted by an ellipse, s_z = −\sqrt{λ_2/λ_1} (curve fitting methods typically introduce a scale factor into the curve parameters which also affects λ_1 and λ_2, so that s_z cannot be directly recovered from λ_2 alone), while the tilt of the illumination τ, with \tan τ = s_y/s_x, is given by τ = \tan^{−1} m_1, the precise angular quadrant being determined by the signs of (p_0, q_0) = (C s_x, C s_y), since C ≥ 0. Taking into account that the slant of the illumination is σ = \cos^{−1} s_z, the lighting orientation can be completely recovered as \vec{s} = (\sin σ \cos τ, \sin σ \sin τ, \cos σ). The accuracy of the estimation of \vec{s} can be enhanced if the isophote curves for several intensities L and the different colour channels c are considered and the corresponding estimates \vec{s}_{L,c} of \vec{s} are properly aggregated [6].

IV. ESTIMATION OF DARK CURRENT

This section describes a method for estimating the spatial distribution parameters of the expected value of the dark current of a CCD camera, E_{DC}. Turning again to equation 6, let us assume that several images of the calibration sphere with the same camera and lighting parameters are taken and averaged. In this way, the noise N^c vanishes in the average image D̄^c and equation 19 results for D̄^c/A^c:

\frac{D̄^c(i, j)}{A^c} = K(i, j) \left( m_b(i, j) + \frac{(L_a ρ_b)^c}{(L_d ρ_b)^c} \right) (L_d ρ_b)^c + E_{DC} .   (19)

Now, suppose the calibration object is imaged under different lens apertures (i.e. different values of the F-number). In this way, according to equation 3, a different value of (L_d ρ_b)^c A^c and (L_a ρ_b)^c A^c will result for every configuration and, consequently, a different value of D̄^c for every (i, j) across lens apertures. The resulting (L_d ρ_b)^c A^c and (L_a ρ_b)^c A^c values keep the quotient (L_a ρ_b)^c/(L_d ρ_b)^c constant among configurations, however, since the factor (π/4)(d/f)^2 in equation 3 changes both quantities equally. Therefore, for the same pixel (i, j), the pairs (D̄^c(i, j)/A^c, (L_d ρ_b)^c) define a straight line D̄^c(i, j)/A^c = δ + γ (L_d ρ_b)^c with δ = E_{DC} and γ = K(i, j) (m_b(i, j) + (L_a ρ_b)^c/(L_d ρ_b)^c). Notice that, in order to obtain the D̄^c(i, j)/A^c values, A^c should have previously been determined, for instance with the method described in [1]. On the other hand, note that, since this method estimates E_{DC} by fitting a straight line to the points (D̄^c(i, j)/A^c, (L_d ρ_b)^c), no knowledge is needed about the shape of the calibration object. Therefore, either the sphere used for estimating the lighting parameters or any other object can be utilized. Since equation 19 defines a straight line for every colour channel, several possibly non-coinciding estimates of E_{DC} will be obtained among the different colour bands. Although this can be acceptable for a 3-CCD colour camera, for 1-CCD colour cameras they should all take the same value. In order to enforce this fact, a multiple straight line fitting procedure through a common origin is suggested (see [6] for the details). Now, the site-to-site variation of E_{DC} can be measured by obtaining an estimate of E_{DC} for different image points (i, j) over the calibration object projection, using the procedure described above, and taking the mean of the estimates of E_{DC} and their standard deviation.
presented in section IV. The method relies on equation 6 again, but now making use of the knowledge acquired about dark current. Assuming that several images of the calibration sphere with the same camera and lighting parameters are taken and averaged as in section IV, the noise N^c vanishes in the average image D̄^c and equation 20 results for D̄^c − E_{DC} A^c:

D̄^c(i, j) − E_{DC} A^c = K(i, j) \left( (L_a ρ_b)^c A^c + m_b(i, j) (L_d ρ_b)^c A^c \right) .   (20)

Now, according to equation 20, if several images with different lens apertures are available as in section IV, then, for a given pixel (i, j), the different pairs (D̄^c(i, j) − E_{DC} A^c, (L_a ρ_b)^c A^c + m_b(i, j) (L_d ρ_b)^c A^c) lie on a straight line D̄^c(i, j) − E_{DC} A^c = κ ((L_a ρ_b)^c A^c + m_b(i, j) (L_d ρ_b)^c A^c) with slope κ = K(i, j), the same for all the colour bands. Notice that, unlike the method for estimating the dark current parameters, the previous method for estimating the fixed pattern does require knowledge about the shape of the calibration object, in the form of the geometrical factor m_b. As shown in section III, this knowledge can be obtained for the calibration sphere used for estimating the lighting parameters. Finally, as in section IV, the spatial variation of K around the CCD can be estimated by repeating the straight line fitting for several pixels of the calibration object.

VI. PREVIOUS WORK

In contrast with geometric calibration, for which a large number of algorithms have been published, the radiometric calibration of vision cameras has rarely been studied, the paper by Healey and Kondepudy [1] being one of the most detailed studies of the subject available. Their calibration methods are based on their camera noise model and make use of uniform reflectance calibration cards, from which the gain, the charge-independent noise, the camera dark current and the fixed pattern variation can be estimated [1]. After the paper by Healey and Kondepudy, Tarel [7], [8] proposed several experiments for estimating the charge-independent noise, the dark current (again using images taken under a dark environment) and the joint effect of the fixed pattern array and the shadowing introduced by the camera optics due to the effect known as vignetting [9]. In all cases, uniform reflectance calibration cards were also used. The calibration results were illustrated by executing two common tasks in computer vision, edge detection and image segmentation, over the original and the corrected images. Finally, Stokman [10] makes use of a camera noise model consisting of electronic gain, shot noise and dark current for error propagation inside an edge-detection framework.
The author estimates the gain and the dark current variance using a series of images of a white reference card, varying the lens aperture.

VII. EXPERIMENTAL RESULTS

A number of experiments have been performed in order to evaluate the methods described in this paper. In particular,
all the radiometric parameters of the camera noise model introduced in section II-B were estimated for a JAI CV-M70 progressive scan colour CCD camera with linear response and 8 bits per colour channel and pixel. The rest of the calibration setup consisted of a Quartz Colour Pulsar mod. 3130 650 W spotlight and a COMET Matrox frame grabber. As for the calibration object, a sphere made of white cork lying over a black background, to avoid shadows, was used (see [6] for the details).

The gain A^c and the standard deviation of the non-charge-dependent noise σ_f^c were first estimated using the procedure presented in section IV of the paper by Healey and Kondepudy [1]. As a result, A^c and σ_f^c for the red, green and blue channels were measured as, respectively, (5.65, 4.91, 6.18) × 10^−3 and (0.59, 0.51, 0.68). Dark current and fixed pattern variations were measured using 20 different lens apertures. For every case, 10 calibration images were captured and averaged, as suggested in sections IV and V, to diminish the effect of N^c. Applying the procedure described in section III, the lighting parameters were determined for the average images. Next, the dark current variation was estimated over 900 evenly distributed pixels within the sphere projection by means of 3-line common-origin straight line fittings, according to equation 19, and using A^c and the light strength estimates for the 20 different lens apertures. E_{DC} was finally estimated as 1621.66, while the spatial variation of E_{DC} turned out to be σ_{E_{DC}} = 137.77. In this way, E_{DC} A^c for the red, green and blue channels is (9.16, 7.96, 10.02), with a spatial standard deviation of (0.78, 0.68, 0.85), less than one digital intensity level in every colour band. By way of illustration, figure 4 shows a plot of one of the straight line fittings performed to estimate E_{DC} for one of the image points involved, together with the histogram of the 900 E_{DC} values collected.
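The per-pixel fitting behind these numbers can be sketched as follows. This is a minimal single-channel illustration with synthetic, exactly linear data (the paper fits the three colour channels jointly through a common origin, which is not reproduced here); the numeric values are arbitrary assumptions:

```python
import numpy as np

def estimate_dark_current(D_over_A, Ld_rho):
    """Fit D/A = delta + gamma * (Ld*rho_b) over lens apertures
    (equation 19); the intercept delta estimates E_DC."""
    A = np.column_stack([np.ones_like(Ld_rho), Ld_rho])
    (delta, gamma), *_ = np.linalg.lstsq(A, D_over_A, rcond=None)
    return delta, gamma

# Synthetic data for one pixel over 20 apertures: E_DC = 1600 and
# slope gamma = 0.9, standing in for K * (mb + (La rho_a)/(Ld rho_b)).
Ld_rho = np.linspace(100.0, 2000.0, 20)
D_over_A = 1600.0 + 0.9 * Ld_rho
delta, gamma = estimate_dark_current(D_over_A, Ld_rho)
print(delta, gamma)  # should recover (1600, 0.9)
```

Repeating this fit over many pixels and taking the mean and standard deviation of the intercepts gives the spatial distribution parameters of E_DC reported above.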
With the same set of images, the fixed pattern variation was estimated with the method outlined in section V, using the same 900 image points utilized for measuring the distribution parameters of the expected dark current. The result was K = 1.000691, with a spatial standard deviation σ_K = 0.024205. Figure 5 presents a plot of one of the straight line fittings performed for estimating the parameters of the distribution of K. A histogram of K values is also given in the same figure.

For comparison purposes, the methods by Healey and Kondepudy were also implemented (sections V.A and V.B of [1]). Regarding the estimation of dark current, Healey and Kondepudy's procedure, which is based on images taken in a dark environment, yields a null dark current, probably because the electronics of the camera under calibration compensates internally when a collection site stores a low number of electrons. The method based on the calibration sphere, however, is not affected by this behaviour. As for the fixed pattern variation, the method of Healey and Kondepudy was slightly modified to incorporate the estimate of dark current computed through the calibration sphere, in order not to use the dark images suggested by the authors, which turned out to be useless. The experiment yielded an estimate of K = 1.000000, with a spatial standard deviation σ_K = 0.006954. Both methods, thus, produce expected values for K close to 1, consistently with the model, although the method based on the calibration sphere tends to produce a value slightly above 1 together with a standard deviation one order of magnitude above. This is probably because of the (small) discrepancies between estimated and true m_b values.

[Fig. 4. Dark current estimation graphs.]

[Fig. 5. Fixed pattern estimation graphs.]
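The through-origin fit of section V that produces these K values can be sketched similarly. This is an illustrative single-pixel example with synthetic data; the slope 1.0007 is an arbitrary stand-in for a pixel whose gain sits slightly above 1:

```python
import numpy as np

def estimate_K(D_minus_dark, signal):
    """Fit D - E_DC*A = K * ((La rho_b)*A + mb*(Ld rho_b)*A) as a line
    through the origin (equation 20); the slope estimates K(i, j)."""
    # Closed-form least-squares slope for a line constrained to pass
    # through the origin.
    return float(np.dot(signal, D_minus_dark) / np.dot(signal, signal))

signal = np.linspace(50.0, 240.0, 20)   # varies with the lens aperture
D_minus_dark = 1.0007 * signal          # pixel slightly above K = 1
K = estimate_K(D_minus_dark, signal)
print(round(K, 4))  # → 1.0007
```

Repeating the fit over many pixels yields the mean and spatial standard deviation of K reported in this section.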
VIII. DISCUSSION AND CONCLUSIONS

Two procedures have been proposed for calculating the distribution parameters of two important factors related to the performance of a CCD camera: dark current and the fixed pattern noise. Both methods are based on the estimation of the illumination parameters of a model consisting of ambient and directional lighting. Images of a calibration sphere under different lens apertures have been used in all the experiments. The estimates obtained for dark current and the fixed pattern noise have turned out to be consistent with the noise model proposed in [1] and, in fact, have compared well with the methods proposed in the same paper.

The radiometric calibration of CCD sensors by means of this sort of method yields two important benefits: on the one hand, it allows accounting for the corresponding noise within image processing algorithms by means of uncertainties associated with every digital intensity level; on the other hand, synthetic images can be generated according to the estimated noise model parameters, the main interest of these images being that they allow testing machine vision algorithms under more realistic noisy conditions, rather than just using standard noise models (Gaussian, salt and pepper, etc.) that do not take into account the performance of the sensor producing the irradiance measurement. For more details and more results, especially referring to the lighting parameters estimation procedure and the above-mentioned applications of the radiometric calibration of CCD sensors, see [6].

ACKNOWLEDGMENT

This study has been partially supported by project CICYT-DPI2001-2311-C03-02 and FEDER funding.
REFERENCES

[1] G. Healey and R. Kondepudy, "Radiometric CCD camera calibration and noise estimation," PAMI, vol. 16, no. 3, pp. 267–276, 1994.
[2] G. Healey, "Using color for geometry-insensitive segmentation," JOSA A, vol. 6, no. 6, pp. 920–937, 1989.
[3] S. Shafer, "Using color to separate reflection components," COLOR Research and Application, vol. 10, no. 4, pp. 210–218, 1985.
[4] R. Zhang et al., "Analysis of shape from shading techniques," CS Dep. (Univ. of Central Florida), Tech. Rep., 1994.
[5] B. Horn and R. Sjoberg, "Calculating the reflectance map," Applied Optics, vol. 18, no. 11, pp. 1770–1779, 1979.
[6] A. Ortiz and G. Oliver, "Scene lighting parameters estimation and radiometric camera calibration," DMI, Univ. de les Illes Balears, Tech. Rep. A-2-2003, 2003.
[7] J.-P. Tarel, "Calibration radiométrique de caméra," INRIA, Tech. Rep. 2509, March 1995.
[8] J.-P. Tarel, "Une méthode de calibration radiométrique de caméra à focale variable," in Proceedings of 10ème Congrès AFCET, Reconnaissance des Formes et Intelligence Artificielle (RFIA), 1996.
[9] B. Horn, Robot Vision. MIT Press, 1986.
[10] H. Stokman, "Robust photometric invariance in machine colour vision," Ph.D. dissertation, ISIS Research Group, Fac. of Science, Univ. of Amsterdam, 2000.