Radiometric Compensation of Global Illumination Effects with Projector-Camera Systems

Gordon Wetzstein∗, Bauhaus-University Weimar
Oliver Bimber†, Bauhaus-University Weimar
Abstract

Conventional radiometric compensation approaches such as [Bimber et al. 2005] rely on a direct mapping between projector and camera pixels. Global illumination effects such as reflections, refractions, scattering, dispersion, and diffraction are completely ignored, since only local illumination is taken into account. We propose a novel approach that applies the light transport matrix to perform an image-based radiometric compensation that accounts for all possible lighting effects.

Introduction

Inverse light transport for scenes with unknown geometry and BRDFs has been explored by [Seitz et al. 2005]. An Impulse Scatter Function (ISF) matrix of a scene is estimated using a laser pointer and applied to remove indirect illumination from photographs. This matrix contains only the outgoing light field. The full light transport between a projector and a camera can be acquired efficiently using the technique proposed by [Sen et al. 2005]. It is represented as a matrix T and estimated hierarchically: starting with a floodlit image, the projector space is recursively subdivided, and the set of influencing projector blocks is computed for each camera pixel. This allows the conflict-free projector blocks of the next hierarchy level to be determined. Note that T differs from the ISF matrix used by Seitz et al. in that it takes not only the outgoing but also the incoming light field into account.

Algorithm

A weighted bipartite graph representation allows clusters of mutually influencing camera and projector pixels to be computed within the lowest hierarchy level. A radiometric compensation can be performed separately for each cluster, based on the fundamental relation of the forward light transport: c = Tp + e_m, where c and p are the camera and projector images as column vectors of sizes mn and pq respectively, T is the light transport matrix of size mn x pq, and e_m is the environment light contribution (including the projector's black level).
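The per-cluster compensation that follows from this relation can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; all function and variable names are illustrative, and the toy matrix stands in for one acquired cluster of T.

```python
import numpy as np

def compensate_cluster(T, e_m, c_desired):
    """Solve T p = c_desired - e_m for the projector pixels p of one cluster.

    T         : (mn, pq) light transport sub-matrix of the cluster
    e_m       : (mn,)    environment light (incl. projector black level)
    c_desired : (mn,)    desired camera intensities for the cluster
    """
    # Least-squares solve; the per-cluster system is generally not square.
    p, *_ = np.linalg.lstsq(T, c_desired - e_m, rcond=None)
    # A physical projector can only emit intensities in [0, 1].
    return np.clip(p, 0.0, 1.0)

# Toy example: 3 camera pixels influenced by 2 projector pixels.
T = np.array([[0.8, 0.1],
              [0.2, 0.7],
              [0.1, 0.1]])
e_m = np.full(3, 0.05)
c_desired = np.array([0.5, 0.6, 0.1])
p = compensate_cluster(T, e_m, c_desired)
```

Clipping reflects the physical limits of the projector: desired intensities that would require negative or larger-than-maximum projector output cannot be reproduced exactly.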
∗ e-mail: [email protected]
† e-mail: [email protected]

Figure 1: Sample scene including interreflections (a) and color-coded clusters (b). A compensation image (d) that corrects for a desired image (c) is projected onto the scene and captured by a camera (e+f).

If the camera image is replaced by a desired image of the same resolution, a linear equation system can be solved for p. Since T includes all global illumination effects, they are all taken into account by the radiometric compensation. Neighboring projector pixels often overlap in the camera image due to projector or camera defocus, blooming, or lens imperfections. This results in large clusters that contain both local and global illumination effects. To allow efficient processing, these clusters have to be decomposed in a way that preserves global effects while discarding the local effects that result from overlaps. Therefore, all connected projector pixels of the same camera pixel are grouped into spatially neighboring blocks. Lower-intensity connections in each block are removed from the graph depending on a predefined threshold. Individual projector pixels must be preserved by reinserting connections according to a second threshold. The thresholds and the area of the neighborhood affect the size of the decomposed clusters. Figure 1 (b) shows a majority of small clusters that originate from local light interaction and several larger clusters that are due to global illumination effects.

A possible approach towards interactive compensation rates is to precompute T's pseudo-inverse and solve the equation systems as p = T+(c − e_m). This is numerically less stable than solving the set of linear equations explicitly. However, it allows the computation to be separated into an off-line step (computing T+) and an on-line matrix-vector multiplication. The latter can be implemented efficiently on programmable graphics hardware as a dot product of a single matrix row with c for each projector pixel. For each projector pixel, a vector of all non-negative pseudo-inverse entries and their camera-space indices can be stored in a compressed look-up table. About 6 fps could be achieved for the sample scene (GeForce 7900 GTX); the matrix inversion took approx. 15 minutes.

Conclusion

The light transport matrix of a projector-camera configuration allows all possible local and global illumination effects to be compensated. Applying T's pseudo-inverse enables the compensation to be performed at interactive framerates on the GPU; these framerates depend mainly on the amount of global illumination effects within the scene. Visual differences between both methods are hardly perceivable.
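The off-line/on-line split described above can be sketched as follows. This is a hedged NumPy illustration, not the paper's GPU implementation: the per-pixel dot product emulates what the shader would compute, and all names are illustrative. Keeping only the non-negative pseudo-inverse entries, as the text describes, is what makes the per-row look-up table compressible.

```python
import numpy as np

def build_lut(T):
    """Off-line step: pseudo-inverse of T plus a compressed per-row table.

    For each projector pixel (row of T+), keep only the non-negative
    entries together with their camera-space indices.
    """
    T_pinv = np.linalg.pinv(T)                # shape (pq, mn)
    lut = []
    for row in T_pinv:
        idx = np.nonzero(row >= 0.0)[0]       # camera-space indices to keep
        lut.append((idx, row[idx]))
    return lut

def compensate(lut, c, e_m):
    """On-line step: p = T+ (c - e_m), one dot product per projector pixel."""
    b = c - e_m
    return np.array([np.dot(vals, b[idx]) for idx, vals in lut])

# Toy usage with a well-conditioned (diagonal) transport matrix, so the
# pseudo-inverse has no negative entries and the result is exact.
T = np.array([[0.8, 0.0],
              [0.0, 0.5]])
e_m = np.array([0.05, 0.05])
p_true = np.array([0.4, 0.6])
c = T @ p_true + e_m                          # simulated camera image
lut = build_lut(T)
p = compensate(lut, c, e_m)
```

For a general T with negative pseudo-inverse entries, discarding them is an approximation; the trade-off against explicitly solving the equation systems is exactly the stability-versus-speed trade-off discussed above.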
References

Bimber, O., Emmerling, A., and Klemmer, T. 2005. Embedded Entertainment with Smart Projectors. IEEE Computer, 56–63.

Seitz, S. M., Matsushita, Y., and Kutulakos, K. N. 2005. A Theory of Inverse Light Transport. In ICCV, 1440–1447.

Sen, P., Chen, B., Garg, G., Marschner, S. R., Horowitz, M., Levoy, M., and Lensch, H. P. A. 2005. Dual Photography. ACM Trans. Graph. 24, 3, 745–755.
Color Plate
Figure 1: The composition (a) and dual (c) images of the scene, synthesized using the acquired light transport matrix. T was acquired using a projector-camera system (b).
Figure 2: Clusters of mutually influencing projector and camera pixels in camera (a) and dual (b) space. The majority of small clusters originate from local light interaction, while several larger clusters are due to global illumination effects.
Figure 3: A picture showing a forest (a) and its compensation image (b), which is projected onto the scene (c). This is captured by the camera (e) and from a second perspective (d). The projection of a compensation image that was computed on the GPU using T's pseudo-inverse (f) shows only slight variations in color and geometry compared to explicitly solving the equation systems.
Figure 4: A screenshot from the movie "9" (courtesy: Shane Acker) (a) and its compensation image (b). A calibrated projector displays the compensation image onto the scene (c). The scene is captured from the camera's point of view, showing the result of explicitly solving the equation systems (d) and of applying T's pseudo-inverse (e). A sheet of paper is held in front of the projection (f).