Photometric Image-Based Rendering for Virtual Lighting Image Synthesis

Yasuhiro MUKAIGAWA
Sadahiko MIHASHI
Takeshi SHAKUNAGA
Department of Information Technology, Okayama University Tsushima naka 3-1-1, Okayama, 700-8530, Japan
[email protected]

Abstract
A concept named Photometric Image-Based Rendering (PIBR) is introduced for seamless augmented reality. PIBR is defined as Image-Based Rendering that covers the appearance changes caused by lighting condition changes, while Geometric Image-Based Rendering (GIBR) is defined as Image-Based Rendering that covers the appearance changes caused by view point changes. PIBR can be applied to image synthesis to keep photometric consistency between virtual objects and real scenes under an arbitrary lighting condition. We analyze conventional IBR algorithms and formalize PIBR within the whole IBR framework. A specific algorithm is also presented for realizing PIBR. Photometric linearization provides a controllable framework for PIBR, which consists of four processes: (1) separation of environmental illumination effects, (2) estimation of lighting directions, (3) separation of specular reflections and cast-shadows, and (4) linearization of self-shadows. After the photometric linearization of the input images, we can synthesize realistic images which include not only diffuse reflections but also self-shadows, cast-shadows and specular reflections. Experimental results show that realistic images can be successfully synthesized while keeping photometric consistency.

1. Introduction

Considerable work has been done on augmented reality, which mixes a synthesized virtual object with a real scene. In order to realize seamless augmented reality, the synthesized image should be very realistic because the synthesized objects and the real scene are directly compared in the same image. Model-Based Rendering (MBR) has been investigated for image synthesis, and it works well when a scene model is available that consists of a set of 3D shape models and reflection properties as well as the lighting conditions of the scene. In order to synthesize a realistic image, however, the MBR approach requires accurate models. In some applications the accurate models can be prepared by means of CAD technology; otherwise, image-based modeling [1] is necessary to build the model. It is, however, often unstable to reconstruct models from a collection of images, and accurate models cannot easily be obtained without special devices such as a high-precision range finder. To overcome this problem with the MBR approach, Image-Based Rendering (IBR) has been widely used. IBR does not require accurate object models: a new image is directly synthesized from a collection of input images without the unstable reconstruction of models. Since IBR makes full use of real images, realistic images can be synthesized. Although many IBR algorithms have been proposed, most of them deal only with the geometric appearance changes caused by view point changes [2][3][4][5]. They can synthesize a new image, as it would appear from a particular view position, from input images taken from discrete view positions. The appearance change, however, is subject not only to geometric factors but also to photometric factors, and among the photometric factors the change caused by the lighting condition is one of the most important. Unfortunately, the conventional IBR algorithms cannot cover photometric appearance changes. It is necessary to control the photometric factors within the IBR framework in order to realize seamless augmented reality, which mixes real and synthesized images. To solve this problem, we propose a new IBR called Photometric Image-Based Rendering (PIBR). Using PIBR, a new image, as it would appear under a particular lighting condition, is directly synthesized from a collection of input images taken under a variety of lighting conditions. In this paper, we show how to mix the synthesized images with real scenes while keeping photometric consistency.

This paper is organized as follows: Section 2 analyzes the IBR framework and shows that it decomposes into two aspects, named Geometric Image-Based Rendering (GIBR) and PIBR. Section 3 presents the formulation of PIBR, that is, the relation between the appearance changes and the lighting condition changes. Section 4 describes how images are synthesized in PIBR. Some experimental results are shown in Section 5. Finally, conclusions are presented in Section 6.
2. Two Aspects of IBR

In the IBR approach, a collection of real images is used instead of 3D object models. The input images are analyzed, and a different view is directly synthesized from them. In other words, IBR is based on a conversion from images to images. The IBR approach has some advantages over MBR. Realistic images can be synthesized if dense input images are registered in advance, and IBR does not require unstable processes such as the reconstruction of a scene model from images, which are inevitable in the MBR approach. On the other hand, IBR has to handle a wide variety of appearance changes for practical use. The appearance changes depend on both the geometric and the photometric properties of the scene as well as the lighting condition. However, the changes are mainly dominated by the relative view position and the lighting condition. Thus, they can be decomposed into two orthogonal aspects: the geometric and the photometric aspect. Geometric Image-Based Rendering is defined as Image-Based Rendering which covers the appearance changes caused by view point changes. Photometric Image-Based Rendering is defined as Image-Based Rendering which covers the appearance changes caused by lighting condition changes. We assume that the two aspects are almost independent and that they work cooperatively in the whole IBR framework. We describe the two aspects in detail in the following two sections.

2.1. Geometric Image-Based Rendering

A lot of algorithms have been developed to synthesize a new image, as it would be seen from a particular view point, from a set of input images taken from different view points. These algorithms are classified as Geometric Image-Based Rendering (GIBR), and most of them are based on interpolation techniques [2][3][4][5]. That is, an intermediate view is synthesized from two or more input images by linear interpolation of corresponding points. The essential task is to find the corresponding points between input images and to predict the locations of these points for a specified view point. No 3D information, such as the 3D shape model of the scene or the view point, is necessary. The GIBR framework can cover the geometric appearance changes caused by view point changes or by rotation of the object. However, the photometric appearance changes cannot be controlled in GIBR because the lighting condition is assumed to be fixed.

2.2. Photometric Image-Based Rendering

The photometric appearance changes are the other dominant factor of the image making process. When the lighting condition changes, the appearance also changes even if the view point and the object pose are fixed. If the concept of IBR can be applied to the photometric appearance changes, realistic images under any lighting condition can also be synthesized in a framework similar to GIBR. Let us call this framework Photometric Image-Based Rendering, or PIBR for short. In the PIBR framework, a new image under a virtual lighting condition is synthesized from a set of input images taken under a variety of lighting conditions. Since the view point is fixed in PIBR, it is not necessary to find corresponding points between input images. Thus, the essential task is reduced to predicting the intensity of each point on the surface without using any model of either the reflection properties or the lighting properties. Some algorithms have already been proposed for changing lighting conditions in the PIBR framework. Shashua[6] showed that an image under any light direction can be synthesized by a linear combination of three base images. This method, however, assumes the Lambertian reflection model. Unfortunately, real scenes include more complex factors such as specular reflections and shadows, so the method cannot be applied to real scenes directly. In order to deal with real scenes, Zhang[7] used principal component analysis and showed that an image under any light direction can be synthesized by a linear combination of the principal component images. If dense input images are prepared, realistic images can be synthesized by this method. However, it requires a large number of base images if the scene is complex. Furthermore, the coefficients of the linear combination cannot be analytically calculated because the method is based on brute-force principal component analysis of the complex photometric effects. To address this problem, we propose a new PIBR scheme which decomposes the input images into linear and nonlinear factors. Since the linear factors, which cover diffuse reflections, obey the Lambertian reflection model, an image with any light direction can be synthesized by a linear combination of three images, and the coefficients are uniquely calculated. The nonlinear factors, which consist of specular reflections and shadows, are processed separately.
Figure 1. Main photometric factors included in the real scene: two kinds of reflections (diffuse and specular) and two kinds of shadows (self-shadow and cast-shadow) are treated in our framework.

3. Formulation of the PIBR

3.1. Factors of Appearance Changes
The major factors of the appearance changes due to the lighting condition are reflections and shadows. According to the dichromatic reflection model[8], reflection is classified into a diffuse (body) reflection and a specular (surface) reflection as shown in Fig.1. The two kinds of reflections have quite different properties. The diffuse reflection does not depend on the viewing direction and is observed equally from every direction. On the other hand, the specular reflection is intensely observed from the mirror direction of the incident direction. Shadows are also classified into self-shadows (attached shadows) and cast-shadows. The two kinds of shadows also have quite different properties. The self-shadow depends on the relation between the surface normal and the lighting direction, and it is observed where the surface does not face the lighting direction. On the other hand, the cast-shadow depends on the whole 3D shape of the scene, and it is observed where the light is occluded by other objects.
3.2. The Case of Lambertian Reflection Model

The basic reflection model, which includes only the diffuse reflection, is called the Lambertian reflection model. In this model, the intensity on the surface is simply formulated as

    i = (l s) · (r n),    (1)

where l is the lighting power, s is a unit vector of the lighting direction, r is the diffuse reflectance, and n is a unit vector of the surface normal. Here, let S denote the lighting property vector (l s) and N the surface property vector (r n). Using these notations, Eq.(1) can be simplified to

    i = S · N.    (2)
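As a small illustration of Eqs.(1) and (2), the following snippet (ours, not from the paper; the numeric values are arbitrary assumptions) computes the intensity of one Lambertian surface point:

```python
import numpy as np

# Lighting power l, unit lighting direction s, diffuse reflectance r, unit normal n.
l, s = 1.5, np.array([0.0, 0.0, 1.0])
r, n = 0.8, np.array([0.0, 0.6, 0.8])

S = l * s                 # lighting property vector S = l*s
N = r * n                 # surface property vector N = r*n
i = S @ N                 # Eq.(2): pixel intensity i = S . N
print(i)                  # approximately 0.96
```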
3.3. The Case of Real Scenes

In the above section, we assumed the Lambertian reflection model. In order to deal with a real scene, however, specular reflections should also be considered. The light also includes environmental illumination, which evenly illuminates the whole scene, and real scenes include shadow regions, where no light arrives. Taking these factors into account, we formulate the pixel value as

    i = α (iD + iS) + iE,   where α = 0 if the light is occluded and α = 1 otherwise,    (3)

where iD = S · N is the diffuse reflection factor, iS the specular reflection factor, and iE the environmental illumination factor. In this paper, the pixel value on the surface is formulated as the sum of these three factors.¹

¹ Real scenes also include inter-reflections. We do not treat them in this paper because their effect seems relatively small.

4. Image Decomposition and Synthesis

4.1. Image Synthesis by Linear Combination

If the Lambertian reflection model is assumed, a simple algorithm works for image synthesis. The surface property vector N does not change at each point on the surface if the target objects and the view position are fixed. Thus, the observed image depends only on the lighting direction.
Figure 2. An example of simple linear combination: (a) and (b) are input images, and (c) is the image synthesized by their linear combination. Image (c) does not show the scene illuminated from the top direction; instead it looks like an image illuminated by two light sources.

Shashua[6] showed that, if a single point light source at infinity is assumed, the image with any lighting direction (Î) can be synthesized by a linear combination of three base images (I1, I2 and I3) taken under different lighting directions:

    Î = a1 I1 + a2 I2 + a3 I3.    (4)
The following relation holds between the coefficients a1, a2 and a3 of the linear combination and the lighting property vectors S1, S2, S3 and Ŝ, which correspond to I1, I2, I3 and Î, respectively:

    Ŝ = a1 S1 + a2 S2 + a3 S3.    (5)
According to this relation, if the lighting property vectors are known, a1, a2 and a3 are uniquely determined for a synthesized image with a specified lighting direction. The essential tasks are therefore to select suitable base images and to determine the three coefficients of the linear combination.
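The following sketch shows how Eqs.(4) and (5) can be turned into code; it is our illustration rather than the authors' implementation, and the function and array names are assumptions. The coefficients are obtained by solving the 3x3 linear system of Eq.(5), and the new image follows from Eq.(4).

```python
import numpy as np

def synthesize_lambertian(I1, I2, I3, S1, S2, S3, S_new):
    """Synthesize the image for a new lighting property vector S_new.

    I1, I2, I3        : (H, W) base images taken under S1, S2, S3.
    S1, S2, S3, S_new : length-3 lighting property vectors.
    """
    B = np.stack([S1, S2, S3], axis=1)           # columns are the base lighting vectors
    a = np.linalg.solve(B, S_new)                # Eq.(5): S_new = a1*S1 + a2*S2 + a3*S3
    return a[0] * I1 + a[1] * I2 + a[2] * I3     # Eq.(4)
```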
4.2. Decomposition of Real Images

If the input images are taken in a real scene, a new image under a different lighting condition cannot be completely synthesized in this way, since the intensity on the surface is the sum of several factors, as shown in Eq.(3). For example, Fig.2 (a) and (b) are two input images illuminated from the left and the right direction respectively, and Fig.2 (c) is the image synthesized by the linear combination of (a) and (b). The synthesized image does not show the scene illuminated from the top direction; it instead corresponds to an image illuminated by two light sources. This example indicates that shadows and specular reflections cannot be treated by the simple linear combination. To cope with such complex scenes, principal component analysis has been used[7]: an image under an arbitrary lighting condition can be approximated as a linear combination of base images. However, the linear combination cannot completely express the nonlinear factors, because they essentially require geometric deformations in the image plane. As a result, a large number of base images must be registered in order to reduce the errors of the synthesized image. Moreover, a weak point of this method is that the coefficients cannot be analytically calculated, because there is no controllable relation between the coefficients and the synthesized image. Therefore, the coefficients must be determined through trial and error, or made by interpolation of registered coefficients in a large database. These efforts are quite exhausting, especially for complex scenes. To solve this problem, we decompose an image into two factors. The linear factor corresponds to diffuse reflections, which fully obey the Lambertian reflection model. The nonlinear factor consists of specular reflections and cast-shadows, which cannot be expressed as a linear combination. First, the nonlinear factors are separated from the input images as shown in Fig.3. Next, the self-shadow regions are linearized so as to satisfy Eq.(2). Finally, the nonlinear factors are interpolated and mixed into the synthesized image. Since our approach treats linear and nonlinear factors separately, a new image can be synthesized under an arbitrary lighting direction by the linear combination of only three base images, even if it includes the nonlinear factors. It should be noted that the coefficients can be analytically calculated in our method.
4.3. Photometric Linearization

The photometric linearization converts a real image, which satisfies Eq.(3), into an imaginary image which satisfies Eq.(2). In this section, we show how the linearization is realized by the following processes: (1) separation of the environmental illumination effect, (2) estimation of the lighting directions, (3) separation of specular reflections and cast-shadows, and (4) linearization of self-shadows.
Figure 3. The flow of the process. The input images are decomposed into linear factors and nonlinear factors. The linear factors are linearized (self-shadow linearization) and recombined by the linear combination of three base images, while the nonlinear factors are interpolated; the two results are finally composed into the synthesized image.

4.3.1 Separation of Environmental Illumination Effect

In our method, the effect of the environmental illumination is eliminated first. Since the environmental illumination does not depend on the light source, its effect is considered constant. An image taken in advance without any light source is regarded as the background image, and the environmental illumination is eliminated by subtracting this background image from each input image.
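A minimal sketch of this subtraction step, assuming the images are stored as floating-point numpy arrays (the function and argument names are ours):

```python
import numpy as np

def remove_environmental_illumination(images, background):
    """Cancel the environmental illumination term iE of Eq.(3).

    images     : (K, H, W) input images taken under K lighting conditions.
    background : (H, W) image taken without any controlled light source.
    """
    # Clip at zero so that sensor noise cannot produce negative intensities.
    return np.clip(images - background[None, :, :], 0.0, None)
```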
4.3.2 Estimation of Lighting Direction in Non-Orthonormal Space
The lighting property vector is necessary for linearizing the input images. The simplest way to obtain the vector is to directly measure the lighting direction and the lighting power when the images are taken. It is, however, very difficult to measure them precisely, and the direct measurement often contains errors. Note that our purpose is not the precise estimation of the lighting direction and the lighting power, but the calculation of the lighting property vectors needed in Eq.(2). In other words, the vectors do not have to represent the actual light source; they only have to be correct in the context of Eq.(2). Assuming a Lambertian surface, we show how to calculate the lighting property vectors directly from the input images. First, three base images (I1, I2 and I3) are selected from the input images; their lighting property vectors are S1, S2 and S3, respectively. The actual values of these vectors might not be known. Here, we assume the 3D coordinate system whose base vectors correspond to S1, S2 and S3. Although this coordinate system is generally not orthonormal, Eq.(2) still holds in this space, and therefore Eqs.(4) and (5) hold as well. For each input image Î which is not selected as a base image, the coefficients a1, a2 and a3 are uniquely determined from Eq.(4). These coefficients give the lighting property vector Ŝ expressed in the space defined by S1, S2 and S3. Since the lighting property vectors are not defined in a Euclidean coordinate system, they cannot be used for reconstruction of the surface normal; however, they are sufficient for the purpose of image decomposition. In real images, the lighting property vector cannot be calculated stably, since Eq.(4) does not always hold because of specular reflections or shadows. The equation does hold in most parts of the real images, though, so the random sampling method is effective for stably calculating the lighting property vector in this situation. Three points are randomly selected from an input image, and the lighting property vector is calculated from these points. After iterating the random sampling, the appropriate lighting property vector is selected.
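The paper leaves the selection rule for the "appropriate" vector unspecified; the sketch below reads it as a RANSAC-style search that keeps the hypothesis consistent with the most pixels. The function name, the inlier threshold tol and the iteration count are our assumptions.

```python
import numpy as np

def estimate_lighting_coefficients(I_hat, I1, I2, I3, n_iter=500, tol=5.0, rng=None):
    """Estimate (a1, a2, a3) of Eq.(4) for an input image I_hat by random sampling."""
    rng = np.random.default_rng() if rng is None else rng
    y = I_hat.ravel()
    X = np.stack([I1.ravel(), I2.ravel(), I3.ravel()], axis=1)   # (pixels, 3)
    best_a, best_support = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(y), size=3, replace=False)          # three random pixels
        A, b = X[idx], y[idx]
        if abs(np.linalg.det(A)) < 1e-6:                         # degenerate sample, skip
            continue
        a = np.linalg.solve(A, b)                                # hypothesis for (a1, a2, a3)
        support = np.sum(np.abs(X @ a - y) < tol)                # pixels that obey Eq.(4)
        if support > best_support:
            best_a, best_support = a, support
    return best_a
```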
4.3.3 Separation of Nonlinear Factors

After the elimination of the environmental illumination effect, specular reflections and shadows are still included in the input images. In our method, many input images can be taken under various lighting conditions. Therefore, the nonlinear factors can be separated based on Shashua's method[6] without any limitation on the object color. The diffuse reflections are observed everywhere except in shadow regions, and their intensity is expressed as in Eq.(2).
Figure 4. Comparison of the diffuse reflection factor and the pixel values in the real image, plotted against the angle between the lighting direction and the surface normal. The nonlinear factors are distinguished from the diffuse reflection factor.
Figure 5. The linearization of input images. The nonlinear factors are separated, and the input images satisfy the Lambertian reflection model.
The specular reflections and the shadows, on the other hand, are observed depending on the lighting direction. By comparing the actual pixel value with the value predicted by Eq.(2), the specular reflections and the shadows are detected as shown in Fig.4. In the figure, the lighting power is assumed to be constant for simplicity. If the lighting power is constant, Eq.(2) describes a cosine curve characterized by the angle between the lighting direction and the surface normal. In Fig.4, the curve indicates the power of the diffuse reflection calculated from Eq.(2), and each point indicates an actual pixel value. All the points in the diffuse reflection regions should lie on the curve. If the actual pixel value is greater than the curve, the pixel is regarded as a specular reflection. If the actual pixel value is less than the curve and close to zero, it is regarded as a cast-shadow. If Eq.(2) gives a negative value, the pixel is regarded as a self-shadow. The specular reflections and cast-shadows are separated so that the image satisfies Eq.(2), as shown in Fig.5. For this linearization, the surface property vector is necessary. The diffuse reflectance and the surface normal are estimated by photometric stereo. Since the lighting direction is estimated in the non-orthonormal coordinate system, as mentioned in 4.3.2, the estimated surface normal does not express the actual Euclidean shape either. However, it satisfies Eq.(2), and the images are correctly linearized in the non-orthonormal coordinate system. Since the input images include specular reflections and shadows which do not satisfy Eq.(2), it is difficult
to stably estimate the surface property vectors. For a robust estimation, the random sampling method is effective again. Three images are randomly selected from the set of input images, and the surface property vector is calculated from them. After iterating the random sampling, the appropriate surface property vector is selected.
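The two steps of this subsection can be sketched as follows. The per-pixel photometric stereo by random sampling mirrors the lighting-vector estimation above, and the classification rule follows Fig.4; the thresholds and names are our assumptions, not values given in the paper.

```python
import numpy as np

def estimate_surface_vector(intensities, S_all, n_iter=200, tol=5.0, rng=None):
    """Photometric stereo by random sampling for one pixel.

    intensities : length-K pixel values over the K input images.
    S_all       : (K, 3) lighting property vectors in the non-orthonormal frame.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_N, best_support = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(intensities), size=3, replace=False)
        A = S_all[idx]
        if abs(np.linalg.det(A)) < 1e-6:
            continue
        N = np.linalg.solve(A, intensities[idx])            # candidate surface property vector
        support = np.sum(np.abs(S_all @ N - intensities) < tol)
        if support > best_support:
            best_N, best_support = N, support
    return best_N

def classify_pixel(i_observed, i_predicted, shadow_level=10.0):
    """Label one pixel by comparing the observation with the prediction S.N (Fig.4)."""
    if i_predicted < 0.0:
        return "self_shadow"        # Eq.(2) negative: surface faces away from the light
    if i_observed > i_predicted + shadow_level:
        return "specular"           # brighter than the Lambertian prediction
    if i_observed < i_predicted - shadow_level and i_observed < shadow_level:
        return "cast_shadow"        # darker than predicted and close to zero
    return "diffuse"
```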
4.3.4 Self-Shadow Linearization
After the separation of specular reflections and cast-shadows, each image includes only diffuse reflections and self-shadows. If the angle between the lighting direction and the surface normal is not more than π/2, the pixel value exactly obeys Eq.(2). However, if the angle is more than π/2, Eq.(2) is not satisfied: where Eq.(2) gives a negative value, the actual pixel value is supposed to be zero because the point lies in self-shadow. Thus, the pixel value of the surface satisfies

    i = α (S · N),   where α = 0 in self-shadow regions and α = 1 in diffuse reflection regions.    (6)

Eq.(6) shows that the pixel values can be expressed by Eq.(2) in the diffuse reflection regions. In the self-shadow regions, on the other hand, Eq.(2) gives negative values. These facts suggest that we can use Eq.(2) instead of Eq.(6): diffuse reflection regions and self-shadow regions can be distinguished by the sign of Eq.(2). Consequently, these two kinds of regions can be treated correctly by the exact linear combination.
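In code, this forced linearization of Fig.6 simply overwrites the zero-valued self-shadow pixels with the (negative) value of Eq.(2), so that the whole image obeys the Lambertian model. This is our sketch; the array layouts are assumptions.

```python
import numpy as np

def linearize_self_shadows(image, S, N_map):
    """Replace self-shadow pixels with the negative value of Eq.(2).

    image : (H, W) image after specular reflections and cast-shadows are separated.
    S     : length-3 lighting property vector of this image.
    N_map : (H, W, 3) per-pixel surface property vectors.
    """
    predicted = N_map @ S                       # S.N at every pixel
    linearized = image.copy()
    self_shadow = predicted < 0.0               # the surface faces away from the light
    linearized[self_shadow] = predicted[self_shadow]
    return linearized
```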
Figure 6. The self-shadow linearization. The pixel value in the self-shadow region, where the angle θ between the lighting property vector S and the surface property vector N exceeds π/2, is replaced with the negative value of S · N.
4.4. Image Synthesis

4.4.1 Linear Combination of Base Images
Once all input images are linearized, an image with any lighting direction can be synthesized by the linear combination of three base images. In our method, the coefficients of the linear combination can be easily determined because the three coefficients (a1, a2, a3) correspond to the lighting property vector Ŝ that we newly specify. For the image synthesis, any three linearly independent images can be used as the base. The optimal base images, however, should be selected through principal component analysis for stable image synthesis; that is, the first three principal components are used as the base images of the linear combination. Since these base images are linearly independent, stable image synthesis can be accomplished. When an image is synthesized under an arbitrary lighting condition by the linear combination, some pixels often have negative values. These pixel values should be set to zero because they belong to self-shadows. As mentioned above, we can thus synthesize a new image including both diffuse reflections and self-shadows by the linear combination with this simple revision.
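A minimal sketch of this step, assuming the linearized images are stacked in a (K, H, W) numpy array. The change of basis between a lighting property vector Ŝ and the coefficients of the principal-component bases is omitted, and the use of an uncentered SVD as the principal component analysis is our assumption.

```python
import numpy as np

def pca_base_images(linearized, n_base=3):
    """Return the first n_base principal component images of the linearized inputs."""
    K, H, W = linearized.shape
    X = linearized.reshape(K, -1)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are component images
    return Vt[:n_base].reshape(n_base, H, W)

def synthesize_linear_part(bases, coeffs):
    """Linear combination of the base images; negative pixels become self-shadows (zero)."""
    image = np.tensordot(coeffs, bases, axes=1)
    return np.clip(image, 0.0, None)
```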
4.4.2 Interpolation of Nonlinear Factors
The nonlinear factors, such as the specular reflections and the cast-shadows, are regarded as geometric deformations in the image plane. It is difficult to predict their patterns if the whole 3D shape of the scene and the precise surface normals are unknown. However, their rough locations and shapes can be estimated by interpolation. For each input image, the lighting direction is estimated and the patterns of the nonlinear factors are separated. If we want to synthesize a new image with a specified lighting direction, suitable patterns of the nonlinear factors are generated by interpolating the separated nonlinear patterns. If the lighting directions of the set of input images are dense, a nearest-neighbor method can replace the interpolation: the nonlinear factors are taken from the registered image which has the closest lighting direction. Since this is a kind of approximation, it cannot produce the smooth animation that subtle lighting position changes require; however, it is applicable to still images with appropriate surface properties.
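A sketch of the nearest-neighbor variant, with the closeness of two lighting directions measured by the cosine between the normalized lighting property vectors (the distance measure and the array layouts are our assumptions):

```python
import numpy as np

def nearest_nonlinear_factors(S_new, S_inputs, nonlinear_factors):
    """Pick the separated nonlinear pattern of the closest registered lighting direction.

    S_new             : length-3 lighting property vector of the image to synthesize.
    S_inputs          : (K, 3) lighting property vectors of the K input images.
    nonlinear_factors : (K, H, W) separated specular/cast-shadow patterns.
    """
    d_new = S_new / np.linalg.norm(S_new)
    d_in = S_inputs / np.linalg.norm(S_inputs, axis=1, keepdims=True)
    k = int(np.argmax(d_in @ d_new))         # largest cosine = closest lighting direction
    return nonlinear_factors[k]
```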
5. Experimental Results

First, we show image syntheses of a glossy ceramic pot. Keeping a halogen light at a long distance from the pot, we took 27 images while changing the position of the light source. The input images include specular reflections and shadows, as shown in Fig.7. Figure 8 shows the results of the linearization. In these images some discontinuities are found; they are caused by errors in the recovered lighting property vectors and surface property vectors. Next, principal component analysis was applied to make three optimal base images. Figure 9 shows the first three principal component images; the discontinuities are not included in these base images. For comparison, the eigenvalues of the original images and of the linearized images are shown in Tables 1 and 2, respectively. If the input images are used directly, the sum of the first ten eigenvalues is still only 98% of the total sum of eigenvalues, so many base images are required to suppress the errors. If we use the linearized images, the sum of the first three eigenvalues is more than 99.6% of the total sum. We can see that our method can reduce the errors of the synthesized image using only three base images. Several lighting directions corresponding to the coefficients were specified, and virtual images were synthesized as shown in Fig.10; both diffuse reflections and self-shadows are correctly synthesized. Figure 11 shows synthesized images with nonlinear factors, which were selected by the nearest-neighbor method. We can see that realistic images with the appropriate surface properties can be synthesized by the proposed PIBR method. Finally, we show some results of mixing virtual objects and real scenes while keeping photometric consistency.

Figure 7. Examples of input images. These images include both specular reflections and shadows.
Figure 8. Examples of linearized images. The nonlinear factors are almost separated, and self-shadows are correctly linearized. These images are completely subject to the Lambertian reflection model.
Figure 9. The principal component images: (a) the 1st, (b) the 2nd and (c) the 3rd principal component. These three images are used as the base images.
Figure 12 shows three base images of the virtual object, which were made from 29 images. Figure 13 shows some real scenes in which the virtual object is to be mixed. The lighting direction of each real scene was estimated in the same way as described in 4.3.2. Since we used the common base vectors (S1, S2 and S3) to estimate the lighting directions of both the real scenes and the virtual object, the estimated lighting direction vector of a real scene directly gives the coefficients of the linear combination for synthesizing the virtual object image. Figure 14 shows the synthesized virtual object images; each image in Fig.14 has the same lighting direction as the corresponding real scene in Fig.13. Figure 15 shows the result of mixing Fig.13 and Fig.14. In the mixed images, the mouse is a virtual object while the cup and the book are real. Since the photometric property is consistent between the real scene and the virtual object, the mixed images look realistic.
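The compositing step just described can be sketched as follows; the least-squares estimate of the real scene's lighting coefficients is a simplification of the random-sampling estimation of 4.3.2, and all names and array layouts are our assumptions.

```python
import numpy as np

def mix_virtual_object(real_scene, scene_bases, object_bases, object_mask):
    """Relight the virtual object with the real scene's lighting and paste it in.

    real_scene   : (H, W) image of the real scene.
    scene_bases  : (3, H, W) base images of the scene in the common lighting frame.
    object_bases : (3, H, W) base images of the virtual object in the same frame.
    object_mask  : (H, W) boolean mask of the virtual object's pixels.
    """
    A = scene_bases.reshape(3, -1).T                       # (pixels, 3)
    coeffs, *_ = np.linalg.lstsq(A, real_scene.ravel(), rcond=None)
    relit = np.clip(np.tensordot(coeffs, object_bases, axes=1), 0.0, None)
    return np.where(object_mask, relit, real_scene)
```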
6. Conclusions
In this paper, we introduced the concept of Photometric Image-Based Rendering and provided a scheme to realize seamless augmented reality. In order to deal with specular reflections and shadows, we showed how to separate the nonlinear factors from the input images and how to extract three optimal base images for stable image synthesis. Since our method can synthesize realistic images, it is applicable not only to augmented reality but also to many other applications. In future work, we plan to interpolate the nonlinear factors and to generate smooth animation with lighting position changes. We also plan to treat not only lighting position changes but also view point changes by combining GIBR and PIBR.
Figure 10. Synthesized images without nonlinear factors
Figure 11. Synthesized images with nonlinear factors
Table 1. The eigenvalues without linearization

 rank   eigenvalue (×10^8)   relative rate (%)   cumulative rate (%)
   1        1.2336               45.075               45.075
   2        0.9956               36.380               81.456
   3        0.2500                9.136               90.592
   4        0.0722                2.636               93.229
   5        0.0494                1.805               95.034
   6        0.0274                0.999               96.034
   7        0.0216                0.787               96.821
   8        0.0131                0.478               97.299
   9        0.0094                0.377               97.677
  10        0.0084                0.343               98.019

Table 2. The eigenvalues with linearization

 rank   eigenvalue (×10^8)   relative rate (%)   cumulative rate (%)
   1        5.4611               67.587               67.587
   2        2.1176               26.207               93.795
   3        0.4700                5.816               99.612
   4        0.0127                0.156               99.769
   5        0.0031                0.038               99.807
   6        0.0019                0.024               99.831
   7        0.0015                0.019               99.851
   8        0.0013                0.016               99.867
   9        0.0012                0.015               99.883
  10        0.0009                0.011               99.895

References

[1] Sato, Y., Wheeler, M. D. and Ikeuchi, K., "Object Shape and Reflectance Modeling from Observation", Proc. SIGGRAPH'97, pp. 379-387, 1997.
[2] Skerjanc, R. and Liu, J., "Computation of intermediate views for 3DTV", Proc. 5th Workshop on Theoretical Foundations of Computer Vision, pp. 190-201, 1992.
[3] Chen, S. E. and Williams, L., "View Interpolation for Image Synthesis", Proc. SIGGRAPH'93, pp. 279-288, 1993.
[4] Werner, T., Hersch, R. D. and Hlavac, V., "Rendering Real-World Objects Using View Interpolation", Proc. ICCV'95, pp. 957-962, 1995.
[5] Seitz, S. M. and Dyer, C. R., "View Morphing", Proc. SIGGRAPH'96, pp. 21-30, 1996.
[6] Shashua, A., "Geometry and Photometry in 3D Visual Recognition", Ph.D. thesis, Dept. of Brain and Cognitive Science, MIT, 1992.
[7] Zhang, Z., "Modeling Geometric Structure and Illumination Variation of a Scene from Real Images", Proc. ICCV'98, pp. 1041-1046, 1998.
[8] Klinker, G. J., Shafer, S. A. and Kanade, T., "A Physical Approach to Color Image Understanding", Int. J. Computer Vision, 4, 1, pp. 7-38, 1990.
Figure 12. Base images of the virtual object.
Figure 13. Real scenes. Each image is illuminated from a different direction.
Figure 14. Synthesized virtual object. Each image has the same lighting direction as the corresponding real scene.
Figure 15. Result of mixing a real scene and a virtual object. The photometric property is consistent.