A Method for Generating Simulated Nighttime Road Images Under Automotive Headlamp Lights

Cheol-Hee Lee, Kuk Sagong, and Yeong-Ho Ha, Senior Member, IEEE
Abstract—This study proposes a new calculation method for generating real nighttime lamp-lit images. In order to improve the color appearance in the prediction of a nighttime lamp-lit road scene, the lamp-lit image is synthesized based on spectral distribution using the estimated local spectral distribution of the headlamps and the surface reflectance of every object. The principal component analysis method is introduced to estimate the surface color of an object and the local spectral distribution of the headlamps is calculated based on the illuminance data and spectral distribution of the illuminating headlamps. High-intensity discharge and halogen lamps are utilized to create beam patterns and captured road scenes are used as background images to simulate actual headlamp-lit images on a monitor. As a result, the reproduced images presented a color appearance that was very close to a real nighttime road image illuminated by single and multiple headlamps.

Index Terms—Headlamps, principal component analysis, surface spectral reflectance, visibility.
I. INTRODUCTION
The visibility evaluation of headlamps is an important step in reducing the time and labor involved in the development of headlamps. A variety of graphic and real-image methods already exist for simulating nighttime road scenarios [1]–[5]. However, such graphic images are unable to produce the real impression of an actual lamp-illuminated scene, and simulations based on the RGB color space of a real image are unable to accurately reproduce the color of the original scene. Accordingly, this study proposes a new calculation method for a real nighttime lamp-lit image. In order to improve the color appearance in the prediction of a nighttime scene, the lamp-lit image is synthesized within the spectral domain. For such a spectral-based image synthesis, the spectral power distribution of the headlamps and the surface spectral reflectance of the nighttime road image need to be calculated for the input road image. The local spectral power distribution of the illuminating headlamps is predicted for every pixel using the light distributions, which are predicted in terms of the illuminance for every point of the input nighttime-captured image, and the spectral power distributions of the headlamps themselves.
Manuscript received June 19, 2001; revised February 28, 2003, October 26, 2003, and December 16, 2003. This work was supported by the National Research Laboratory Program, Korean Ministry of Science & Technology, under Grant M10203000102-02J0000-04810.
C.-H. Lee is with the Major of Computer Engineering, Andong National University, Andong, Gyeongbuk 760-749, Korea (e-mail: [email protected]).
K. Sagong is with the Samlip Industrial Company, Ltd., Gyeongsan, Gyeongbuk 1208-6, Korea (e-mail: [email protected]).
Y.-H. Ha is with the School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu 702-701, Korea (e-mail: yha@ee.knu.ac.kr).
Digital Object Identifier 10.1109/TVT.2004.825790
Next, the surface reflectance of every object is estimated using the principal component analysis method, which is widely applied in color engineering. After calculating the local spectral power distribution of the illuminating lamps and the surface reflectance of each object, the lamp-illuminated images are synthesized for each image pixel. Here, the illuminance distributions are converted into lightness distributions to correlate the calculated illuminance of the lamps with the perceived brightness on the cathode-ray tube (CRT) monitor. To evaluate the proposed method, a spectral comparison between the estimated spectral distribution and the real measured spectral distribution of the headlamps was performed. In addition, subjective experiments were carried out using real lamp-lit scenes and the proposed synthesized images reproduced on a CRT monitor under single and multiple headlamp illumination. As a result, the estimated spectral power distribution of the headlamps exhibited a high similarity to the measured light, and the predicted image produced a color appearance that was very close to an actual nighttime road image illuminated by single and multiple headlamps.

II. CALCULATION OF ILLUMINANCE ACCORDING TO DISTANCE FROM HEADLAMPS

In order to calculate the spectral distribution of headlamps on a road surface, the spectral distribution of the headlamp bulb and the illuminance on each surface are required. The spectral distribution of a headlamp bulb can be obtained by direct measurement using a spectroradiometer, whereas the illuminance calculation requires information on the distance between the surfaces and the headlamps along with luminous-intensity values for each surface. However, it is actually difficult to measure the corresponding distance in a real scene for all points of the image; thus, in this research, the distance information is estimated directly from the background road image.

A. Calculation of Distance Information Using a Spherical Coordinate System

Fig. 1 illustrates the image-capture process using a camera. In Fig. 1, the focal length of the camera and the distance from the camera lens to the image are indicated; the corresponding positions on the charge-coupled device (CCD) and in the image are expressed as distance coordinates in a spherical coordinate system, and the distance between the headlamps and each point of the real road scene is also shown. To obtain the illuminance, this distance has to be calculated for all pixels in the background road image. However, direct measurement of these distances is actually difficult, because it is complicated to relate each point in the captured road image to the corresponding position on a real road; moreover, such distances would have to be measured for every captured image.
Fig. 1. Image formation by camera.
Accordingly, in this study, these distances are estimated from the known focal length and the calculated image coordinates, together with a coordinate transformation from a spherical to a Cartesian coordinate system. In Fig. 1, the projection geometry gives (1). A proportional expression involving the image size and the CCD length in the horizontal direction gives (2); substituting it into (1) yields (3), with the corresponding scale factor defined in (4). Thus, (1) can be replaced by (5). Finally, the distance information for a projected image point on the CCD can be calculated using (6); in the same manner, the vertical component can be calculated by (7), where the vertical CCD length and the vertical image size replace their horizontal counterparts.

By substituting the given focal length and the CCD and image sizes into (6) and (7), the angular distance information can be calculated for each projected image point. Thereafter, if the distance from the image point to the camera lens is calculated, the actual distance coordinates of each pixel in the spherical coordinate system can be formulated. However, this distance from the lens remains a variable; therefore, to compute it, a coordinate transformation is applied. Even when this transformation is calculated correctly, there is a calculation error in the distance, as shown in Fig. 2, because the image plane is a two-dimensional (2-D) projection of a three-dimensional (3-D) scene. The distance error depends on the viewing angle of the camera; for example, it is approximately less than 15% for a camera with a 60° viewing angle. In addition, the distance calculation applies mostly to the ground in an image; the distance to standing objects in space basically cannot be estimated by this calculation method.

B. Coordinate Transformation From Spherical to a Cartesian Coordinate System

At the camera position, the distance coordinates represented in the Cartesian coordinate system are defined as described in Fig. 3. The distance information at the camera position can be formulated by (8)–(10).
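The symbols of (1)–(10) are not legible in this reproduction, but the geometry they describe — scaling a pixel offset to the CCD plane, converting it to viewing angles through the pinhole model, and expressing a point in Cartesian coordinates — can be illustrated with the following sketch. It is not the authors' implementation; the function names, the axis convention (x to the right, y upward, z along the optical axis), and the example numbers are assumptions.

```python
import numpy as np

def pixel_to_angles(px, py, image_size, ccd_size, focal_length_mm):
    """Map a pixel position to horizontal/vertical viewing angles (radians).

    The pixel offset from the image center is scaled to the CCD plane by the
    ratio of CCD size to image size (the proportional expression in the text),
    and the angle follows from the pinhole model: tan(angle) = offset / f.
    """
    w_img, h_img = image_size          # image size in pixels
    w_ccd, h_ccd = ccd_size            # CCD size in millimeters
    dx_ccd = (px - w_img / 2.0) * (w_ccd / w_img)
    dy_ccd = (py - h_img / 2.0) * (h_ccd / h_img)
    theta = np.arctan2(dx_ccd, focal_length_mm)   # horizontal viewing angle
    phi = np.arctan2(dy_ccd, focal_length_mm)     # vertical viewing angle
    return theta, phi

def angles_to_cartesian(theta, phi, r):
    """Convert (r, theta, phi) viewing-ray coordinates to Cartesian (x, y, z).

    Assumed convention: z along the optical axis, x to the right, y upward.
    """
    x = r * np.cos(phi) * np.sin(theta)
    y = r * np.sin(phi)
    z = r * np.cos(phi) * np.cos(theta)
    return x, y, z

# Example: a pixel offset 200 px horizontally and 150 px vertically from the
# center of a 1024 x 768 image, with a 6.4 mm x 4.8 mm CCD and an 8 mm lens.
theta, phi = pixel_to_angles(512 + 200, 384 + 150, (1024, 768), (6.4, 4.8), 8.0)
print(np.degrees(theta), np.degrees(phi))
print(angles_to_cartesian(theta, phi, r=12.0))
```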
Fig. 2. 2-D projection of a 3-D scene.
Fig. 3. Definition of each axis in camera-view coordinate system.
The distance information in the Cartesian system represents the actual distance from the camera to each pixel in the captured images when the camera position is taken as the origin of the Cartesian coordinate system. Fig. 4 defines the four different Cartesian coordinate systems according to the view position. The world-coordinate system is the standard coordinate system and its origin is located on the ground, at the vertical projection point of the driver's eye position. The eye- and camera-view coordinate systems share the same origin; however, their axes are tilted away from each other. Although the camera is placed at the position of the driver's eyes, there is a mismatch between the eye- and camera-view coordinate systems due to the camera tilt: the camera view tilts toward the center point of the captured image, whereas the eye view tilts toward the vanishing point, and the center and vanishing points of the image do not necessarily coincide. However, the current study assumed that the lamp- and eye-view coordinate systems share the same vanishing point, i.e., the driver's view and the headlamps are directed toward the same vanishing point.
Fig. 4. Definition of coordinate systems according to view.
Based on this assumption, a coordinate transformation from the camera- to the eye-view coordinate system is required to compensate for the mismatch between the center point of the image according to the camera view and the vanishing point of the image according to the eye view.
Fig. 5. Angle difference between the vanishing and center points of the road image.
C. Transformation to the Eye-View Coordinate System
Fig. 5 shows the mismatch between the center point of an image and the vanishing point. As mentioned above, this mismatch exists due to the camera tilt. For a distance calculation using the lamp-view coordinate system, a coordinate transformation from the camera- to the eye-view coordinate system is required to compensate for the mismatch between the center point of the image according to the camera view and the vanishing point of the image according to the eye view. The angle differences between the camera and eye views can be formulated by (11) and (12), which involve the center pixel of the image, the vanishing point, the CCD lengths in the horizontal and vertical directions, and the focal length of the camera. After calculating the angle differences, the axis transformation from the camera view to the eye view can be formulated using (13). By substituting the angle differences into (13), the distance information for the eye-view coordinate system can be represented by (14)–(16).

In this approach, the distance information is calculated from a 2-D image based on a one-point vanishing perspective projection. That is, the 2-D image is divided into two regions according to the vanishing point. First, the region below the vanishing point has a separate vanishing distance for each pixel. Second, the region above the vanishing point has a fixed-length vanishing distance. Thus, the distance information for the pixels located below the vanishing point can be calculated from the height of the driver's eyes measured from the ground: by substituting this height into (16), the unknown variable can be computed, and the distance components in (14) and (15) can then also be estimated from that value. For the pixels above the vanishing point, a fixed-length vanishing distance is applied; by substituting this predetermined vanishing distance into (14), the corresponding variable can be computed, and the remaining components can then be calculated by substituting this variable into (15) and (16).
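The one-point vanishing-perspective rule described above can be sketched as follows for the ground region below the vanishing point. This is an illustrative reconstruction under a flat-ground assumption, not the paper's own code; the parameter names and example values are hypothetical.

```python
import numpy as np

def ground_distance(py, vanish_py, image_height, ccd_height_mm,
                    focal_length_mm, eye_height_m, far_distance_m=100.0):
    """Distance from the viewpoint to the ground point seen at image row py.

    Rows below the vanishing point are intersected with a flat ground plane
    at the driver's eye height; rows at or above it get a fixed far distance,
    mirroring the two image regions described in the text.
    """
    # Angle below the horizon (vanishing point) for this image row.
    dy_ccd = (py - vanish_py) * (ccd_height_mm / image_height)
    angle = np.arctan2(dy_ccd, focal_length_mm)
    if angle <= 0.0:                      # at or above the vanishing point
        return far_distance_m
    return eye_height_m / np.tan(angle)   # flat-ground intersection

# Example: driver eye height 1.2 m, vanishing point at row 300 of a 768-row
# image, 4.8 mm CCD height, 8 mm lens (all values hypothetical).
for row in (310, 400, 700):
    print(row, round(ground_distance(row, 300, 768, 4.8, 8.0, 1.2), 2))
```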
D. Transformation to Lamp-View Coordinate System

Finally, the lamp-view coordinate system can be defined from the eye-view coordinate system via an origin movement in the Cartesian coordinate system, as in (17), where the translation terms are the distance components between the origins of the eye- and lamp-view coordinate systems, expressed in the lamp-view coordinate system, as illustrated in Fig. 6. The distance information obtained from (17) represents the actual distance components between the headlamps and each pixel of the road image. Therefore, the actual distance for each pixel of the road image can be formulated in the lamp-view coordinate system by (18), i.e., as the Euclidean norm of the three distance components.
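A minimal sketch of the origin shift in (17) and the per-pixel distance in (18), assuming the coordinates are expressed in meters and the lamp offset relative to the driver's eyes is known; the offset values used in the example are hypothetical.

```python
import numpy as np

def lamp_view_distance(eye_xyz, lamp_offset_xyz):
    """Per-pixel distance from the headlamp in the lamp-view frame.

    eye_xyz: (..., 3) array of point coordinates in the eye-view frame [m].
    lamp_offset_xyz: (3,) offset of the lamp origin from the eye origin [m].
    """
    lamp_xyz = np.asarray(eye_xyz) - np.asarray(lamp_offset_xyz)  # origin shift, (17)
    return np.linalg.norm(lamp_xyz, axis=-1)                      # distance, (18)

# Example: a ground point 10 m ahead and 1.2 m below the eyes, with the lamp
# offset 0.6 m laterally and 0.5 m vertically from the eyes (hypothetical).
print(lamp_view_distance([0.0, -1.2, 10.0], [-0.6, -0.5, 0.0]))
```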
Fig. 6. Distance between origins of eye- and lamp-view coordinate systems.
E. Computation of Luminous Intensity and Its Transformation Into Illuminance

After the distance calculation, the illuminance can be computed by dividing the luminous intensity by the square of the distance for each pixel, as in (19), E = I/d², where E denotes the illuminance at a pixel in the lamp-view coordinate system, I is the luminous intensity toward that pixel, and d is its distance from the headlamp. In order to calculate the luminous intensity in the lamp-view coordinate system, the spherical coordinates corresponding to each pixel in the lamp view have to be computed, as in (20)–(22). The luminous intensity for the corresponding spherical coordinates can then be linearly interpolated from a database of headlamp beams obtained from headlamp simulations or from actual headlamp measurements using a photometer, such as a goniophotometer. The angular spacing of the luminous-intensity data was a 0.25° step over a 90° width in the horizontal direction and a 0.25° step over a 30° width in the vertical direction.

Fig. 7. Spectral power distributions of headlamps.
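The interpolation of the beam database and the conversion to illuminance in (19)–(22) can be sketched as below. The grid extents follow the angular spacing stated above, but the intensity table itself is a random placeholder; a real goniophotometer or simulation table would be loaded instead.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder beam table I(theta, phi) in candela on a 0.25-degree grid
# (90-degree horizontal width, 30-degree vertical width), per the text.
h_deg = np.arange(-45.0, 45.0 + 0.25, 0.25)
v_deg = np.arange(-15.0, 15.0 + 0.25, 0.25)
intensity_cd = np.random.default_rng(0).uniform(5e2, 2e4,
                                                size=(h_deg.size, v_deg.size))
beam = RegularGridInterpolator((h_deg, v_deg), intensity_cd)  # linear lookup

def illuminance(theta_deg, phi_deg, distance_m):
    """Illuminance [lux] at a surface point: E = I(theta, phi) / d^2, as in (19)."""
    i_cd = beam(np.stack([theta_deg, phi_deg], axis=-1))
    return i_cd / np.square(distance_m)

print(illuminance(np.array([2.0]), np.array([-1.0]), np.array([12.5])))
```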
III. ESTIMATION OF LOCAL SPECTRAL POWER DISTRIBUTIONS OF HEADLAMPS

The local spectral power distribution of the illuminating headlamps is predicted using the light distributions, expressed in terms of the illuminance [lux], and the spectral power distributions of the headlamps. For the illuminance calculation, we computed the distance information based on the assumption that the captured image is a one-point vanishing perspective projection. Under this assumption, the beam reaches the ground below the vanishing point of the lamp-lit image and arrives on a vanishing face at a finite distance above the vanishing point. However, this assumption divides the image into only two sections based on the vanishing point, and the 3-D information of standing objects, such as traffic signals, trees, etc., is lost. Accordingly, an image-segmentation method must be applied to separate standing objects from the background nighttime image. The illuminance data are then recomputed for these selected regions based on the distance from the ground point of the standing object to the headlamps and on the width and height of the standing object.

After the illuminance calculation, the local spectral power distributions of the headlamps are determined. In this study, a high-intensity discharge (HID) lamp and two halogen lamps are used for this purpose. When multiple lamps illuminate an object, the sum of the spectral power distributions of the lights at each position can be defined as in (23), S(λ) = Σ_k E_k S_k(λ), k = 1, …, N, where N is the number of illuminated headlamps, S_k(λ) denotes the scaled spectral power distribution of each headlamp, as shown in Fig. 7, and E_k represents the calculated illuminance of each headlamp. As shown in Fig. 7, the spectrum of each headlamp is represented by spectral data over the wavelength range from 400 to 700 nm at 10-nm intervals and thus contains 31 components.
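A short sketch of the summation in (23), assuming each lamp's spectrum is sampled at the 31 wavelengths stated above and normalized to unit peak before being weighted by its illuminance; the two lamp spectra in the example are synthetic placeholders, not measured HID or halogen data.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)                  # 31 spectral samples

def local_spd(spd_list, illuminance_list):
    """Combine per-lamp SPDs into one local spectrum for a pixel, as in (23)."""
    spd = np.zeros(wavelengths.size)
    for s, e in zip(spd_list, illuminance_list):
        s = np.asarray(s, dtype=float)
        spd += e * (s / s.max())                       # normalize, then scale
    return spd

# Example with two hypothetical lamps and illuminances of 30 and 12 lux.
hid = np.exp(-((wavelengths - 480) / 60.0) ** 2)       # placeholder HID-like SPD
halogen = np.exp(-((wavelengths - 620) / 90.0) ** 2)   # placeholder halogen SPD
print(local_spd([hid, halogen], [30.0, 12.0]).round(2))
```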
Fig. 8. Measured spectral distribution and calculated spectral power distribution when HID, H1, and H3 lamps are illuminated on a white plate.
For magnitude compensation between the headlamp spectra, the spectra are normalized before the summation in (23). Thereafter, the calculated local spectral distribution of the headlamps is normalized again by the corresponding value of D65 at the wavelength of maximum magnitude of the Commission Internationale de l'Eclairage (CIE) spectral luminous efficiency function, as in (24). Therefore, the local spectra of the headlamps determine only the relative spectral power distribution for each pixel of an input nighttime image when the headlamps are illuminated.

For an accuracy evaluation of the calculated spectral distribution of the headlamps, an experiment with HID (low beam), H1 (high beam), and H3 (fog beam) lamps was carried out in a darkroom, where the illumination level was about 0.1 lux. The three lamps were directed at a white calibration plate and the reflected light was measured with a Minolta CS-1000 spectroradiometer in order to compare the measured result with the estimated result. Fig. 8 shows the result. The mean-square error (mse) and the color difference between the calculated and measured spectral distributions were 2.3408 and 2.1474, respectively.

IV. ESTIMATION OF SURFACE REFLECTANCE OF A NIGHTTIME ROAD IMAGE

A. Capture of a Background Image to Generate a Nighttime Road Image

A real nighttime road image was captured by a digital camera without streetlamps. However, since the typical illumination level at night is below 0.1 lux when streetlamps are not present, this level is below the luminance tolerance of a digital camera, and the resulting captured images included much color noise.
Fig. 9. Relative spectral power distribution of the sky. (a) Night time. (b) Day time.
Accordingly, as an alternative, a nighttime background image was generated from a daylight image of the same view, as in (25): the tristimulus values of each pixel are formed by integrating, over wavelength, the product of the daylight spectral distribution, the surface reflectance of the daylight image, and the color-matching functions of the primaries, and then scaling by an attenuation coefficient. The spectral distribution of the daylight illumination is measured by the CS-1000 spectroradiometer, and the attenuation coefficient, which generates the illumination level at night, is determined empirically.
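The conversion of a daylight pixel to its nighttime counterpart in (25) can be sketched as follows. The color-matching-function and daylight-spectrum arrays are random placeholders standing in for the CIE tables and the measured CS-1000 data, and the attenuation coefficient k is an arbitrary example value.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)                        # 31 samples
rng = np.random.default_rng(1)
cmf = rng.uniform(0.0, 1.8, size=(3, wavelengths.size))      # placeholder CMFs
daylight_spd = rng.uniform(0.5, 1.0, size=wavelengths.size)  # placeholder daylight

def nighttime_xyz(reflectance, k=1e-4):
    """Attenuated tristimulus values of one pixel, in the spirit of (25).

    The daylight spectrum is weighted by the surface reflectance, projected
    onto the color-matching functions, and scaled by the empirical factor k.
    """
    radiance = daylight_spd * np.asarray(reflectance)
    return k * cmf @ radiance                                 # [X, Y, Z]

# Example: a neutral 20%-reflectance surface.
print(nighttime_xyz(np.full(wavelengths.size, 0.2)))
```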
Fig. 10. Three principal components from 1269 spectra of the Munsell system.
However, this attenuated daylight distribution is most likely not identical to the actual nighttime sky distribution. Fig. 9(a) shows the power distribution of the nighttime sky, measured with a Minolta CS-1000 on a night with a full moon. As shown in this figure, the power distribution of the nighttime sky is very similar to that of dark gray, and its magnitude is very low. Fig. 9(b) represents the power distribution of daylight, the magnitude of which is normalized to the maximum of the distribution in Fig. 9(a). As shown by the two figures, there is a slight mismatch between the power distributions. However, the distribution in Fig. 9(a), measured on a road, can also change slightly according to the weather conditions, as can that in Fig. 9(b); it was therefore assumed that the nighttime sky power distribution is an attenuated daylight power distribution, permitting a mismatch between the two distributions.

B. Calculation of Surface Reflectance Using Principal Component Analysis

The surface reflectance of an RGB-format image can be calculated as a linear combination of principal components and eigenvalues, as shown in (26), where the averaged tristimulus values and the tristimulus values corresponding to the three principal components of 1269 Munsell color chips [6] determine the expansion weights for each pixel. Based on the assumption that all possible colors in a general road image can be represented by a large set of color spectra, the spectral reflectance database of the Munsell color chips is utilized. This spectral database is used to derive a few principal components from the Karhunen–Loeve expansion based on the subspace method. The tristimulus values of each pixel are obtained by mapping the device RGB values to XYZ with a 3 × 3 linear matrix. Fig. 10 shows the three principal components used in this study.

Fig. 11. Luminous intensity and illuminance. (a) Luminous-intensity graph. (b) Illuminance image.
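The principal-component reconstruction of (26) can be sketched as below: three basis spectra are extracted from a reflectance database by a Karhunen–Loeve (singular-value) decomposition, and the three weights for a pixel are solved from its tristimulus values. The random reflectance database and color-matching functions are placeholders for the 1269 Munsell spectra and the CIE tables, and the exact form of (26) in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.arange(400, 701, 10)
database = rng.uniform(0.05, 0.9, size=(1269, wavelengths.size))  # placeholder Munsell set
cmf = rng.uniform(0.0, 1.8, size=(3, wavelengths.size))           # placeholder CMFs

mean_refl = database.mean(axis=0)
# Karhunen-Loeve expansion: principal directions of the centered database.
_, _, vt = np.linalg.svd(database - mean_refl, full_matrices=False)
basis = vt[:3]                                   # three principal components

xyz_mean = cmf @ mean_refl                       # averaged tristimulus values
m = cmf @ basis.T                                # tristimulus of each component (3x3)

def estimate_reflectance(xyz_pixel):
    """Reconstruct a 31-sample reflectance from one pixel's XYZ values."""
    weights = np.linalg.solve(m, np.asarray(xyz_pixel) - xyz_mean)
    return mean_refl + weights @ basis

print(estimate_reflectance([20.0, 25.0, 15.0]).round(3))
```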
V. CALCULATION OF THE LAMP-LIT ROAD IMAGE USING LOCAL SPECTRAL DISTRIBUTION OF HEADLAMPS AND SURFACE SPECTRAL REFLECTANCE OF THE ROAD IMAGE

A road image under headlamp lighting is calculated using the estimated surface spectral reflectance of the nighttime road image, the calculated spectral distributions of the headlamps, and lightness data, as in (27), where the lightness term is introduced to correlate the calculated illuminance of the lamps with its perceived brightness on the monitor. Therefore, the illuminance data, obtained from the luminous intensity using the distance information, are converted into lightness values using (28), which is taken from the CIELAB appearance model [7]:

L* = 116 (Y/Y_n)^(1/3) − 16,  if Y/Y_n > 0.008856;  L* = 903.3 (Y/Y_n),  otherwise   (28)

where Y denotes the calculated illuminance data and Y_n is the standard illuminance.
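A compact sketch combining the synthesis step of (27) with the lightness conversion of (28), under the assumption that the lightness simply scales the spectrally integrated tristimulus values; the spectra in the example are placeholders and the paper's exact weighting may differ.

```python
import numpy as np

def cielab_lightness(y, y_n):
    """CIELAB lightness function, as used in (28)."""
    t = y / y_n
    return np.where(t > 0.008856, 116.0 * np.cbrt(t) - 16.0, 903.3 * t)

def lamp_lit_xyz(local_spd, reflectance, cmf, illum, ref_illum):
    """Tristimulus values of one pixel under the local headlamp spectrum.

    The local spectrum times the surface reflectance is projected onto the
    color-matching functions and scaled by the normalized lightness.
    """
    lightness = cielab_lightness(illum, ref_illum) / 100.0
    return lightness * (cmf @ (np.asarray(local_spd) * np.asarray(reflectance)))

# Example with placeholder 31-sample spectra (400-700 nm, 10-nm steps).
rng = np.random.default_rng(3)
n = 31
xyz = lamp_lit_xyz(rng.uniform(0.2, 1.0, n), rng.uniform(0.05, 0.9, n),
                   rng.uniform(0.0, 1.8, (3, n)), illum=40.0, ref_illum=100.0)
print(xyz.round(3))
```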
VI. EXPERIMENTS

In order to evaluate the proposed headlamp visibility test method, we estimated the lamp-lit image for a test screen and for real road images under HID (low beam), H1 (high beam), and H3 (fog beam) headlamps. Fig. 11(a) illustrates the measured luminous-intensity graph for a test screen and an HID lamp, and Fig. 11(b) shows the illuminance as a gray-level image.

Fig. 12 shows the results for a test screen at 7.5 m. Fig. 12(a) was captured under a fluorescent lamp in the darkroom environment. The lamp illumination was then removed to create a nighttime environment. Fig. 12(b) shows the predicted image for the test-screen image when the HID lamp is illuminated; the distance between the HID lamp and the screen is 7.5 m. Fig. 12(c) is the predicted result when the HID and H1 lamps are simultaneously illuminated on the screen.

Fig. 13 shows the results for a real captured road image. By using (25) and measurement data for the ambient illumination, Fig. 13(a) was converted into a nighttime image that was then utilized as the background image for Fig. 13(b)–(e). Under HID headlamps, the predicted lamp-lit image is shown in Fig. 13(b). Fig. 13(c) is another prediction, in which the HID and H1 lamps were shone onto the road. Fig. 13(d) and (e) are introduced to compare the results when the image-segmentation process was and was not applied, respectively. The predicted image in Fig. 13(e) is slightly darker than that in Fig. 13(d), which can be explained in two ways. First, when the segmentation process was not applied, the standing objects were assigned to the ground or to space according to the vanishing point of the image; therefore, the distance information for the standing objects was not correct. Second, the surface spectral reflectances of all the objects were modeled by three principal components generated from the Munsell color chips. However, the spectral reflectances of the Munsell color chips represent the reflectances of the pigments used in the manufacture of the chips; therefore, the surface spectral reflectance of standing objects with a high reflection could not be well estimated. Accordingly, the resulting images without segmentation of standing objects, such as traffic signals, had a darker appearance.
Fig. 12. Predicted lamp-lit images for a test screen. (a) Test-screen image. (b) Resultant predicted image under HID headlamp. (c) Resultant predicted image under HID and H1 headlamps.
VII. CONCLUSION

This paper introduced a method of calculating nighttime lamp-lit images from captured nighttime images or daylight images based on the spectral reflectance of objects and the spectral distribution of light.
Fig. 13. Road image captured in daylight and predicted nighttime road images with the same view. (a) Road image captured in daylight. (b) Predicted nighttime road image under HID headlamp. (c) Predicted nighttime road image under HID and H1 headlamps. (d) Predicted nighttime road image under H1 headlamp. (e) Predicted nighttime road image under H1 headlamp without image segmentation for standing objects.
The principal component analysis method was used to estimate the surface spectral reflectance of the objects, whereas the spectral power distribution of the headlamps was calculated based on a linear model, under the assumption that changes in the light spectra are approximately linear in the luminous intensity. As a result, the proposed method produced images that were very close to the real lamp-lit images under single and multiple headlamps and thus offers a new process for evaluating headlamp visibility in the development of automotive headlamps.
However, further study is still required for the reproduction of real nighttime road images that include objects with a high reflection; in addition, a more detailed estimation is needed to model the changes in the spectral distribution of headlamps relative to the magnitude of the luminous intensity.
Kuk Sagong received the B.S. and M.S. degrees in mechanical engineering from Yeungnam University, Gyeongsan, Gyeongbuk, Korea, in 1991 and 1993, respectively. In January 1993, he joined the Samlip Industrial Company, Ltd., Gyeongsan, Gyeongbuk, Korea, where he now is a Senior Engineer in the software development team. His main research interests include software development and computer graphics.
REFERENCES

[1] M. Kitagawa and T. Abe, Color Appearance of a Scene Under Automotive Headlamp Light. New York: McGraw-Hill, 1999, SAE Tech. Paper Ser. 1999-01-0707, pp. 83–89.
[2] J. Damasky, A New Software Tool for Performance Simulation of Headlamp Pattern. New York: McGraw-Hill, 1999, SAE Tech. Paper Ser. 1999-01-1215, pp. 141–144.
[3] F. J. Kalze, Light Distribution Editor Virtual Light World in Helios. New York: McGraw-Hill, 1997, SAE Tech. Paper Ser. 970907.
[4] J. W. Mazur and R. Bosk, "Automotive headlamps: Calculation of light distribution," Lighting Res. Technol., vol. 27, no. 2, pp. 65–74, 1995.
[5] M. Perel, "Evaluation of headlamp beam patterns using the Ford CHESS program," Progress Technol., vol. 60, pp. 153–158, 1996.
[6] Y. Miyake and Y. Yokoyama, "Obtaining and reproduction of accurate color images based on human perception," in Proc. SPIE, vol. 3300, 1998, pp. 190–197.
[7] M. D. Fairchild, Color Appearance Models. Reading, MA: Addison-Wesley, 1997.
Cheol-Hee Lee received the B.S., M.S., and Ph.D. degrees in electronic engineering from Kyungpook National University, Daegu, Korea, in 1995, 1997, and 2000, respectively. In September 2003, he joined the Department of Computer Science, Andong National University, Gyeongbuk, Korea, as an Assistant Professor. He currently is General Manager of the Korea Society for Imaging Science and Technology. His main research interests include color image processing, color printing, and computer graphics. Dr. Lee is a Member of the Society for IS&T.
Yeong-Ho Ha (SM'00) received the B.S. and M.S. degrees in electronic engineering from Kyungpook National University, Taegu, Korea, in 1976 and 1978, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Texas at Austin, Austin, in 1985. In March 1986, he joined the Department of Electronic Engineering, Kyungpook National University, as an Assistant Professor and is currently a Professor. His main research interests include color image processing, computer vision, and digital signal and image processing. Dr. Ha served as TPC Chair, Member, and Organizing Committee Chair of several IEEE, SPIE, and IS&T conferences, including the IEEE International Conference on Intelligent Signal Processing and Communication Systems (1994) and the IEEE International Conference on Multimedia and Expo (ICME; 2000). He is Chairman of the IEEE Taegu Section, Vice President of the Institute of Electronics Engineers of Korea (IEEK), and President of the Korea Society for Imaging Science and Technology (KSIST). He is a Member of the Pattern Recognition Society and of the Society for IS&T.