Selective Culling of Discontinuity Lines
David Hedley*, Adam Worrall, Derek Paddon
Department of Computer Science, University of Bristol, UK
Abstract: In recent years discontinuity meshing has become an important part of mesh-based solutions to the global illumination problem. Application of this technique accurately locates all radiance function discontinuities in the scene and is essential for limiting visual artifacts. In an environment containing m edges there are O(m^2) D1 and D2 discontinuities. With a typical scene this can result in many thousands of discontinuity lines being processed. We review existing methods for reducing these lines and introduce an improved, perception-based metric for determining which discontinuities are important and which can be safely ignored. Our results show that a 50% or more reduction in the number of discontinuity lines can be achieved, with a corresponding reduction in general mesh complexity and little or no perceptible change to the rendered result.
1 Introduction

One of the fundamental aims of global illumination is the representation of surface radiance functions. The most popular approach used is a mesh of elements, typically utilising linear- or constant-basis functions to reconstruct illumination information [3], although higher order basis functions are also possible [2, 14] for non-interactive viewing. Mesh-based solutions to the global illumination problem rely heavily on the accuracy of the underlying mesh to model the radiance function across each surface. Each surface is typically given an initially coarse mesh which is then refined during the illumination calculations to yield finer mesh elements where the existing mesh is deemed to be at significant variance to the radiance function being modelled [4]. In general, adaptive subdivision can significantly enhance the resulting solution. However, areas of high contrast such as shadow boundaries can often either be missed (due to bad placing of initial sample points), or cause mesh subdivision to proceed indefinitely, or at least down to a minimum element size, which can leave noticeable jagged shadows (e.g. [1]). Discontinuity meshing [10, 12, 16] was proposed to rectify these problems by explicitly representing discontinuities in the radiance function. Discontinuities are located by tracing out critical surfaces formed by all combinations of source and occluder edges and vertices, and the resulting constraint edges are inserted into a constrained Delaunay triangulation. Discontinuities are classified according to their order, where a Dn discontinuity means the radiance function is C^(n-1) continuous but not C^n at that point. A discontinuity is also classified according
[email protected] to the type of visual event which generated it | either Edge-Vertex or EdgeEdge-Edge for a polygonal environment. The methods we discuss are equally applicable to both, although our implementation only considers EV events. Within a polygonal scene with n edges, there are O(n2 ) such surfaces, although we only consider discontinuities emanating from primary light sources as those from secondary light sources are harder to categorise and have a generally weaker eect [9]. Even if we only consider primary light sources and trace only silhouette edges, a large number of discontinuity lines are placed in the receiving meshes (see section 4). Empirical evidence suggests that there is an approximate 1 : 2 : 3 relation for the numbers of vertices, triangles and edges respectively in a typical Delaunay triangulation. Hence, a large increase in the number of constraint edges in a mesh generates a correspondingly large increase in the number of triangles and vertices, all of which serve to impair the performance of mesh oriented operations such as radiance function reconstruction and rendering. Also within the emerging eld of dynamic mesh updating [21], performance is directly related to the complexity of the meshes which have to be updated. Note that in the following paper we deal exclusively with D1 and D2 discontinuities. We do not include D0 discontinuities in our discussion as in general they can be determined without tracing out critical surfaces by using a simple geometry preprocess (e.g. [1]) .
2 Previous work

2.1 Discontinuity ranking and culling
Tampieri [16] first suggested the idea that many discontinuities have little or no visual effect and should be dropped. The need for a perceptual measure was recognised, but only a simple, radiometric measure was given:

    \max_{x \in c} \frac{\tilde{L}_{ij}(x)}{\tilde{L}^{old}_{i}(x) + \tilde{L}_{ij}(x)}    (1)

where c is the discontinuity, \tilde{L}_{ij}(x) is the radiance contribution at point x from the light source sj that caused the discontinuity on the receiving surface si, and \tilde{L}^{old}_{i}(x) is the existing radiance value at point x on surface si. A few samples are taken along the discontinuity and if the strength exceeds a specified threshold L then the line is included in the mesh. An alternative approach was adopted by Hardt and Teller in [8]. The emphasis here was on rendering a radiosity solution at interactive rates, and hence keeping the triangle count to an absolute minimum is vital. Their method performs hierarchical radiosity on the environment and places an upper bound on the number of discontinuities present on each quadtree leaf. All the relevant discontinuities are scored using a heuristic `weight' w (defined in equation 2) and those exceeding a certain threshold wmin are sorted, with only the k worst being added to the mesh.

    w = \max\left(\frac{d_{sb}}{d_{sr}}\right) B_{src}    (2)
where d_sb is the distance from the source to the blocker, d_sr is the distance from the source to the receiver, and B_src is the radiosity of the source in W/m^2.
d_sb and d_sr are maximised over the length of the discontinuity. The expression max(d_sb/d_sr) is a geometric factor which is proportional to the rate at which the source becomes visible as seen by an observer crossing the discontinuity. This is scaled by the radiosity of the source so as to give larger weights to brighter sources, which tend to have more pronounced discontinuities. In [8] some results are shown with varying k and wmin, although it is difficult to judge the eventual visual quality from them. The current authors have found that this metric does not extend well to ranking lines on a per-surface basis, and so we have omitted results for it. In [5] an algorithm is presented which allows a more compact representation of radiance due to multiple emitters and allows the cost of discontinuity meshing to be deferred until required. A list of regions marked as potentially requiring meshing is stored. If a subsequent light source eliminates the need for the meshing then the corresponding region is deleted from the list of regions needing meshing. Once all light sources have been visited there will be a residual list of regions for which complete discontinuity meshing is required. Sturzlinger [15] proposed a technique which combines traditional adaptive mesh refinement with a demand-driven discontinuity meshing algorithm, where discontinuities are traced and incorporated into the mesh on a per-element basis if and only if there is sufficient variance in the radiance values at the vertices of that element. This technique can suffer from the same aliasing problems that are inherent with any traditional adaptive subdivision, where shadows can be missed if the initial sampling density is too low. In all of the above approaches, the strength of a discontinuity is gauged purely in radiometric or geometric terms. However, one of the principal effects of discontinuity meshing is to increase visual accuracy, and thus a metric based on the perceivable difference a discontinuity makes to a mesh should be used.
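As a concrete reading of equation 2, the Hardt-Teller weight can be sketched as follows. The function and argument names are our own illustrative assumptions, not the implementation of [8]; each sample is a (source point, blocker point, receiver point) triple taken along the discontinuity:

```python
from math import dist

# Sketch of the Hardt-Teller weight w = max(d_sb / d_sr) * B_src (equation 2).
# `samples` is a hypothetical list of (source, blocker, receiver) point triples.
def discontinuity_weight(samples, b_src):
    best_ratio = 0.0
    for src, blk, rcv in samples:
        d_sb = dist(src, blk)   # source-to-blocker distance
        d_sr = dist(src, rcv)   # source-to-receiver distance
        best_ratio = max(best_ratio, d_sb / d_sr)
    return best_ratio * b_src   # scale by source radiosity (W/m^2)
```

The ratio approaches 1 as the blocker nears the receiver, matching the intuition that close blockers cast sharper, more visible shadow boundaries.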
In order to judge such perceptual differences we need a model which mirrors, as accurately as possible, the way the eye and brain perceive radiance.

2.2 Tone reproduction
In order to estimate the effect of a discontinuity when the surface it resides upon is finally rendered, we must map the radiance values down to a value which fits within the range of the desired display device. Mapping these radiometric values is non-trivial because, in the radiosity simulation as in real life, these values can range over a scale which may reach ten orders of magnitude, while the display device output range is typically only one or two orders of magnitude. Given that a simple linear mapping is not possible, we must do the next best thing and attempt to recreate the same perceptual response in the viewer when they look at the rendered image as they would have if they were looking at the actual scene. This is possible as the eye is more sensitive to relative changes in luminance than to changes in absolute luminance levels [18, 19]. This problem has been studied at length in photography and television and is known as tone reproduction [17, 20]. There are a number of steps needed to calculate n, the display input, from Lw, a real-world luminance value. The tone reproduction operator must try to ensure that a luminance Ld, output from the display device and calculated from Lw, evokes the same perceptual response from an observer Xd viewing it that a
real-world observer Xw would have when viewing luminance Lw. The different parts of the tone reproduction operator are fully described in [17]. Many experiments have been performed to determine the effect of light energy on the eye (e.g. [11, 13]). Several tone reproduction operators have been suggested, two of the most popular being a linear mapping proposed by Ward [20] and a more complex non-linear mapping from Tumblin [17]. The two operators take slightly different approaches and it is not clear which of the two is better. Our implementation considers them both, although we only present results for a metric based on Tumblin's model, as there is little difference in the overall results between the two. Tumblin's model aims to match a perceived quality known as brightness, measured in brils, so that the real-world brightness and display brightness can be equated. The complete tone reproduction equation (along with its derivation) is given in [17] and reproduced here:
    n = \left[ \frac{L_{rw}^{\alpha_{rw}/\alpha_d}}{L_{dmax}} \, 10^{(\beta_{rw} - \beta_d)/\alpha_d} - \frac{1}{C_{max}} \right]^{1/\gamma_d}    (3)
where Ldmax = the maximum luminance the display can output; Lrw = the luminance we are trying to convert; Cmax = the contrast ratio for the display;
γ_d = the gamma correction for the display. All luminances are in lamberts. α and β are defined as follows:

    α = 0.4 log10(La) + 2.92
    β = -0.4 (log10(La))^2 - 2.584 log10(La) + 2.0208

where La is the luminance the eye has adapted to. To calculate α_rw and β_rw, replace La with a luminance value obtained from the scene luminances. If no view direction is known, this value will have to be an average over the scene (so-called static adaption); otherwise the adaption level can be calculated based on the luminances of the viewable surfaces (dynamic adaption). Similarly for α_d and β_d, replace La with the screen adaption level (i.e. the luminance level to which the eye has adapted as a result of viewing the display device); Ward [20] suggests a value of half the maximum display output luminance. Ward's mapping function attempts to match the just noticeable difference on the display device and in the real world. Using the model, a linear scaling factor m is calculated between Lw and Ld such that Ld = mLw. If we assume there is a linear mapping between display inputs and display luminances, we can simply divide Ld by Ldmax to find n. m is calculated as follows:

    m = \left( \frac{1.219 + L_{da}^{0.4}}{1.219 + L_{wa}^{0.4}} \right)^{2.5}    (4)

where Lda is the display adaption level and Lwa is the world adaption level. The display input value n is clipped to lie in the range 0 ≤ n ≤ 1. The resultant function, termed the tone reproduction operator, becomes f(L, La) → [0, 1]. In addition to being useful for rendering calculated luminances to the screen, this is also useful for giving a measure of the perceptive difference between two luminances at a given level of adaption. This function can then be used to guide algorithms, such as discontinuity meshing, where there is a need to determine whether some process would be noticeable to the end user.
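Read literally, equations 3 and 4 can be sketched in Python as below. Treat this as our interpretation under the stated units (lamberts); the clamping of intermediate values is an assumption we add so the result stays in the displayable range:

```python
import math

def alpha(l_a):
    """Alpha term for adaption luminance l_a (lamberts)."""
    return 0.4 * math.log10(l_a) + 2.92

def beta(l_a):
    """Beta term for adaption luminance l_a (lamberts)."""
    x = math.log10(l_a)
    return -0.4 * x * x - 2.584 * x + 2.0208

def tumblin_display_input(l_rw, la_rw, la_d, l_dmax, c_max, gamma_d):
    """Equation 3: map real-world luminance l_rw to a display input in [0, 1]."""
    a_rw, b_rw = alpha(la_rw), beta(la_rw)
    a_d, b_d = alpha(la_d), beta(la_d)
    inner = (l_rw ** (a_rw / a_d) / l_dmax) * 10.0 ** ((b_rw - b_d) / a_d) \
            - 1.0 / c_max
    n = max(inner, 0.0) ** (1.0 / gamma_d)   # clamp before the root (assumption)
    return min(n, 1.0)

def ward_scale(la_d, la_rw):
    """Equation 4: linear factor m with L_d = m * L_w."""
    return ((1.219 + la_d ** 0.4) / (1.219 + la_rw ** 0.4)) ** 2.5
```

Note that `ward_scale` is 1 when the display and world adaption levels coincide, and falls below 1 for bright real-world scenes, compressing them into the display range.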
The methods described thus far deal solely with single luminance values, i.e. grey-scale. The extension to colour mapping is an active area of research, as in addition to adapting to luminance levels, the eye also incorporates colour correction. However, for our purposes it is sufficient to convert each element of an XYZ colour triple into a luminance value, sum them, and process the result. We do not process each colour channel separately as this leads to unsightly colour shifting, whereby each colour channel is scaled independently, resulting in a change of chromaticity. While this method for dealing with colours works well for generating a pixel colour, it is not sufficient for comparing two colours. The reason for this failure is that the XYZ colour space is not perceptually uniform; that is to say, three colours numerically equidistant from each other will not necessarily appear to be perceptually equidistant. We must therefore transform the XYZ triple into a more perceptually uniform space such as L*u*v*, L*a*b* or Farnsworth's non-linear transformation [6], all of which were designed to be perceptually uniform. In our implementation we use L*u*v* with the CIE standard illuminant `C' as the reference white, although any standard illuminant would suffice. In a perceptually uniform colour space, the perceptive difference between two colours can be calculated by taking the Euclidean distance between the two colour triples. In [7], tone reproduction was combined with adaptive subdivision in an attempt to selectively cull unwanted triangles. The work reports successful applications of the method, but does not include a representation of discontinuities.
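The XYZ-to-L*u*v* conversion and Euclidean colour difference just described can be sketched as follows. The illuminant `C' white point values below are approximations we assume, and the function names are ours:

```python
import math

# Approximate CIE illuminant `C' white point (an assumption).
WHITE_C = (98.074, 100.0, 118.232)

def xyz_to_luv(X, Y, Z, white=WHITE_C):
    """Convert an XYZ triple to CIE L*u*v* relative to the given white point."""
    Xn, Yn, Zn = white

    def uv(x, y, z):
        d = x + 15.0 * y + 3.0 * z
        return (4.0 * x / d, 9.0 * y / d)

    u, v = uv(X, Y, Z)
    un, vn = uv(Xn, Yn, Zn)
    yr = Y / Yn
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr
    return (L, 13.0 * L * (u - un), 13.0 * L * (v - vn))

def colour_difference(luv_a, luv_b):
    """Euclidean distance in the (approximately uniform) L*u*v* space."""
    return math.dist(luv_a, luv_b)
```

By construction, the reference white maps to L* = 100 with u* = v* = 0, and the difference between two colours reduces to a single scalar threshold test.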
3 Combining tone reproduction and discontinuity ranking

In order to accept or reject a discontinuity line, we have to ask the question: would the inclusion of this line make a perceptible difference to the resulting mesh? The discontinuity line itself may not necessarily be visible, but may exert a perceptible influence on the mesh by improving the general triangulation, stopping artifacts such as shadow/light leaks, etc. Before we decide on the line's importance we must therefore accrue as much information as possible about the radiance function surrounding the line and how the eye would perceive it. The only method for correctly determining a discontinuity line's effect on the final mesh would be to first generate a complete reference solution for the environment and analyse the radiance values surrounding each line, assessing its perceptual importance. A second pass would then build up new meshes containing only those discontinuities which pass some threshold test. This approach wastes computation and is unfeasible in dynamic environments, where mesh changes have to be made on-the-fly and casting wedges is prohibitively expensive. An alternative to computing a complete reference solution is to incrementally process each light source in turn, first casting out the discontinuity surfaces and then shooting radiosity from that source. Later sources can then use the information generated by earlier sources to determine whether their discontinuities would make a perceivable difference to the current mesh. Although this approach implies that the first few light sources will contribute more discontinuity lines to the receiving meshes than later sources, this is not a problem as each discontinuity is judged solely on its own merits: only if it is deemed to make a perceptible difference to the mesh is it included.
Our method of weighting discontinuities applies equally well to both the above methods, although our implementation only considers the latter. For each discontinuity line, illumination information is gathered in the same manner as Tampieri [16], by sampling along the line at regular intervals. However, in addition to sampling exactly on the discontinuity, we gain information on the behaviour of the radiance function either side of the discontinuity by sampling a small distance to either side of each sample point, perpendicular to the line direction. At these points we take `before' and `after' values, that is, the radiance values before and after the current light source has contributed energy. If, at any one point along the line, the perceptual difference is greater than some threshold T, then we surmise that some portion of the discontinuity line must make a perceptible difference to the mesh and therefore should be included in it. Pseudocode for the method is as follows:

Boolean CullLine(SourcePatch sp, Discontinuity D, Real tolerance)
    /* Returns TRUE if the given line should be culled */
    XYZSample before, after
    LuvColour beforeLuv, afterLuv
    Vector v
    Point p, pSamp, Points[]

    Points = set of evenly spaced points along D
    v = perpendicular vector to the discontinuity
    for every point p of Points
        for i = -1 to 1
            pSamp = i * SampDist * v + p
            before = radiance value at pSamp
            shoot contribution from sp to pSamp
            after = new radiance value at pSamp
            beforeLuv = radiance2LuvColour(before)
            afterLuv = radiance2LuvColour(after)
            if (ColourDifference(beforeLuv, afterLuv) > tolerance)
                return FALSE
    return TRUE
SampDist is some globally defined distance, 0.5 mm in our implementation. radiance2LuvColour converts an XYZ radiance sample to an L*u*v* colour triple by first extracting the chromaticity, converting the XYZ value to a single luminance value, passing it through the tone reproduction operator and then recombining the result with the stored chromaticity value, before converting it to L*u*v* using the standard formulae. For the tone reproduction operator to work, it requires an adaption level for the eye. In our implementation this can either be hand-specified or calculated a priori based on the luminosity of the primary light sources and the average reflectances within the scene². ColourDifference is a function which returns the Euclidean distance between two L*u*v* colour triples. The results in section 4 show a marked improvement in both numeric and visual accuracy over existing methods when using this metric.
² For non-closed environments the energy lost during radiosity will result in an overestimate of the adaption level and hence it must be specified manually.
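A minimal sketch of the luminance-only mapping step inside radiance2LuvColour, assuming Y is the luminance channel of the XYZ sample; the function name and interface are hypothetical:

```python
def remap_xyz_luminance(x_val, y_val, z_val, tone_op):
    """Hold the chromaticity (x, y) of an XYZ sample fixed, replace its
    luminance Y with the tone-reproduced value, then rebuild XYZ."""
    s = x_val + y_val + z_val
    x, y = x_val / s, y_val / s        # chromaticity coordinates (preserved)
    y_display = tone_op(y_val)         # tone-reproduce the luminance only
    return (x * y_display / y,
            y_display,
            (1.0 - x - y) * y_display / y)
```

Because only Y is rescaled and (x, y) is held fixed, the sample's hue and saturation are unchanged, which is exactly what avoids the colour-shifting problem described in section 2.2.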
4 Results

4.1 Error analysis
Given that one of the main strengths of discontinuity meshing is the reduction of visual artifacts, and that the culling methods described thus far try to remove discontinuities without introducing more artifacts, we should judge their effectiveness by estimating how free from artifacts the results are. What constitutes an artifact, however, is subjective and very difficult to quantify. To give quantitative results, we give three error estimates for each threshold level, in addition to triangle, discontinuity line and vertex counts. All three errors are calculated using the 2-norms metric, which is defined for an n by m image as follows:
    \| A \|_2 = \left( \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} (A_{ij})^2 \right)^{1/2}    (5)
A relative error RelE can then be defined as:

    RelE = \frac{\| A - B \|_2}{\| A \|_2}    (6)
and an absolute error AbsE can be defined as:

    AbsE = \| A - B \|_2    (7)

The error RelEi in the results gives the relative error between two rendered images of specific surfaces. This error analysis is calculated in RGB `screen space', i.e. after the conversion from radiometric energies to screen pixel values via the tone reproduction operator, and hence gives a better idea of how perceptively different the two surfaces are. RelEr and AbsEr are area-weighted average relative and absolute errors respectively of the radiance values over all the surfaces in the scene, compared with the reference scene. They were generated by sampling the relevant surfaces of the reference solution and the solution to be compared at a high density.

4.2 Test results and discussion
To compare and contrast the three different culling techniques, three test scenes were used. In all scenes an appropriate level of initial subdivision was used and vertices were introduced at regular points along long discontinuities to ensure well-shaped elements in the triangulation. For all the scenes, results are given for the fully discontinuity meshed reference scene against which every other scene is compared. For comparison, results are also given for a non-discontinuity meshed scene (i.e. just initial subdivision), and a traditional adaptively subdivided scene, where the adaptive tolerance was adjusted so as to result in a similar number of triangles as obtained in the reference scene. For all the methods, a range of thresholds was considered and the results show there is a direct relationship between the thresholds chosen, the number of lines culled, and the error in the resultant solution. The first scene is a somewhat pathological test case, containing nine light sources in close proximity and eight narrow pillars casting complex shadows on the floor. There are a total of 42 polygons in the scene, shown in figure 7, with the discontinuity lines on the floor shown in figure 1. This scene illustrates that due to the vastly different illumination across the floor, the shadows due
Fig. 1. The fully discontinuity meshed floor of scene 1
Fig. 2. The floor after perception based culling with T = 5.0
Fig. 3. Test scene 2 (100 polygons)
Fig. 4. After culling with Tampieri's metric with L = 0.1. Note the missing shadow on the seat
to two of the pillars are hardly noticeable and yet cause the same number of discontinuities in the mesh (72 discontinuities per pillar in this case). Figure 3 shows scene 2, which is a simple but more typical example. Here the illumination is a 100 W light bulb modelled as a cube, and a `standard' chair is the occluder. The discontinuities on the floor of the scene are shown in figure 5, and the result after culling is shown in figure 6. The third test scene (figure 9) is a much more complex scene illuminated by a 100 W bulb inside a lamp shade in the ceiling, and a 60 W bulb in the angle-poise lamp on the desk. The results for scenes 1, 2 and 3 are shown in tables 1, 2 and 3 respectively. The relative error RelEi shown in the tables was calculated from the floor in scenes 1 and 2, and the desktop in scene 3. For scene 1, both methods work well and it is not until over 60% of the lines have been culled that artifacts become noticeable. Figure 2 shows the floor after over half the lines have been culled and figure 8 shows a rendered image after this culling. Notice the perception metric has culled virtually all of
Fig. 5. The fully discontinuity meshed floor of scene 2 (1788 discontinuity lines)
Fig. 6. The floor after perception based culling with T = 3.0 (768 discontinuity lines)
Threshold/Metric      Triangles  D Lines  Vertices  RelEi(%)  RelEr(%)  AbsEr(W/m²)
Reference                 10858      576      5759     0         0        0
No DMeshing                 836        0       460     6.310     4.8489   0.0104751
Adaptive Subdivision      12117        0      6111     2.362     1.9959   0.0043117
Tampieri's Metric
  L = 0.1                  5783      355      3111     0.737     0.420    0.0009067
  L = 0.2                  4840      282      2603     1.214     0.459    0.0015965
  L = 0.3                  4043      229      2178     1.589     0.950    0.0020531
  L = 0.4                  3529      195      1904     2.001     1.295    0.0027979
Perception Metric
  T = 1.0                  6967      429      3740     0.513     0.430    0.0009278
  T = 2.0                  6161      379      3310     0.567     0.458    0.0009884
  T = 5.0                  4456      240      2390     0.995     0.640    0.0013820
  T = 7.0                  3731      191      2001     1.434     0.902    0.0019479
Table 1. Test scene 1 results. Average scene radiance: 0.2298 Wm^-2 sr^-1

Threshold/Metric      Triangles  D Lines  Vertices  RelEi(%)  RelEr(%)  AbsEr(W/m²)
Reference                  4109     2638      2519    0         0        0
No DMeshing                 494        0       591   15.178     1.996    0.0008119
Adaptive Subdivision       5077        0      1988    5.718     0.822    0.0003107
Tampieri's Metric
  L = 0.1                  2587     1319      1717    0.387     0.199    0.0000581
  L = 0.3                  2346     1169      1592    1.156     0.273    0.0000870
  L = 0.5                  2052      928      1427    1.851     0.407    0.0001441
Perception Metric
  T = 1.0                  2708     1447      1790    0.426     0.093    0.0000271
  T = 3.0                  2356     1146      1593    0.451     0.141    0.0000425
  T = 5.0                  2066      960      1413    1.627     0.353    0.0001224
Table 2. Test scene 2 results. Average scene radiance: 0.8645 Wm^-2 sr^-1
Threshold/Metric      Triangles  D Lines  Vertices  RelEi(%)  RelEr(%)  AbsEr(W/m²)
Reference                 60411    26652     35655    0         0        0
No DMeshing               14611        0     11180   16.650     0.1684   0.0002060
Adaptive Subdivision      84617        0     31512    6.743     0.0893   0.0001906
Tampieri's Metric
  L = 0.1                 39495    14264     24578    2.565     0.0579   0.0000622
  L = 0.3                 35725    11925     22605    3.037     0.0648   0.0000635
  L = 0.5                 30903     9134     20070    3.115     0.0718   0.0000679
Perception Metric
  T = 1.0                 46843    20059     28490    0.999     0.0398   0.0000580
  T = 3.0                 41054    16383     25440    1.649     0.0461   0.0000606
  T = 5.0                 37310    13857     23474    1.878     0.0516   0.0000618
  T = 7.0                 34346    11767     21901    2.058     0.0557   0.0000627
Table 3. Test scene 3 results. Average scene radiance: 0.0898 Wm^-2 sr^-1

the lines from the areas in low illumination whilst leaving most of the lines in the brighter areas to capture the more complex shadows. With a similar overall triangle count, Tampieri's purely radiometric measure results in a large number of lines remaining around the two dark pillars. This is a result of using a relative metric, where a small amount of energy contributed to a dark area can have a large relative effect, even though the resulting energy level is perceived as black by the eye. In scene 2 we see the problems that culling the wrong lines can have on the eventual triangulation. Figure 4 shows the results after culling with Tampieri's metric. Even with a low threshold, a shadow is missing from the bottom seat and an artifact has been introduced on the back support. By sampling only along the discontinuity, Tampieri's metric cannot easily judge the effect a discontinuity has on the surrounding area of the mesh. With L = 0.3, several of the discontinuities on the floor generated by the light bulb and the back of the chair have been culled. These discontinuities are not in themselves necessarily strong, but they are important for preventing light leaks into the shadow. The perception metric can better determine this when it samples either side of the line, resulting in a much more accurate (numerically and visually) solution for a similar number of triangles with T = 3.0. Scene 3 is a real-world test and contains a large number of discontinuities, many of which are packed very close together (e.g. the shadow round the book and cheque on the desk). Here it is very important to select the correct lines to cull, to prevent shadow and light leaks. Colour figure 10 shows a close-up of the chair seat after culling by the two metrics. Tampieri's method, despite culling fewer lines, has almost completely eliminated the shadow from the seat.
This and other artifacts are not present with the perception metric.
5 Conclusion and future work

From the results presented we can see that discontinuity line culling can achieve good results with around a 50% reduction in the triangle count of resultant meshes. Reducing the triangle count beyond this is possible, although this requires altering normally fixed parameters to avoid noticeable artifacts appearing due to shadow and light leaks arising from bad triangulations. In a real radiosity simulation where adaptive subdivision and discontinuity meshing are combined, these artifacts would be reduced, although this rather defeats the object of discontinuity meshing. An alternative would be to use quadratic elements, which can eliminate many of the artifacts due to linear reconstruction. Of the two methods presented, the perception metric is much more selective about which discontinuities to drop and hence results in far fewer visual artifacts than a simple radiometric test. We conclude that with a value of T = 3.0, between 40% and 60% of all discontinuities can be dropped with no visual effect on the output.

5.1 Future work
One of the biggest problems the authors faced was finding a method of judging the end results. While it is easy to look at an image and point out shadow and light leaks, it is often difficult to look at a rendered shadow and know if it is correct, particularly if the shadows are very complex, as in scene 1. It is even harder to ascribe a general mathematical metric to recognising and weighting visual artifacts. We resorted, as did Tampieri, to the simple 2-norms metric and associated relative error, which gives a reasonable idea as to the number and strength of visual artifacts, but it is far from ideal. A more rigorous approach needs to be examined here. In terms of the actual culling metric used, the addition of a more structured approach similar to [5] may well pay dividends, e.g. not tracing critical surfaces which fall within the penumbra, which may vastly reduce the number of discontinuities required to represent hard-edged shadows. This would be particularly effective where the light sources are small (as in our test scenes) and the discontinuities generated are almost D0.
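For reference, the 2-norms measures of equations 5-7 can be sketched as follows; this is a straightforward reading, assuming A and B are same-sized images given as nested lists:

```python
import math

def norm2(image):
    """Equation 5: RMS 2-norm of an n-by-m image."""
    flat = [v for row in image for v in row]
    return math.sqrt(sum(v * v for v in flat) / len(flat))

def rel_error(a, b):
    """Equation 6: relative error between images a and b."""
    diff = [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
    return norm2(diff) / norm2(a)

def abs_error(a, b):
    """Equation 7: absolute error between images a and b."""
    diff = [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
    return norm2(diff)
```

As noted above, these measures capture the overall magnitude of per-pixel differences but say nothing about where the errors lie, which is why they only loosely track the visibility of artifacts.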
6 Acknowledgements

We acknowledge the generous donation of equipment by Sun Microsystems Inc. Thanks are also due to Chris Hearle at BBC Bristol for the loan of photometric equipment. We would also like to acknowledge Jonathan Shewchuk for his triangulation package. We also acknowledge the reviewers of this paper for their many helpful and instructive comments.
References

1. Daniel R. Baum, Stephen Mann, Kevin P. Smith, and James M. Winget. Making Radiosity Usable: Automatic Preprocessing and Meshing Techniques for the Generation of Accurate Radiosity Solutions. Computer Graphics (ACM SIGGRAPH '91 Proceedings), 25(4):51-60, July 1991.
2. Zoltan J. Cendes and Steven H. Wong. C1 Quadratic Interpolation Over Arbitrary Point Sets. IEEE Computer Graphics and Applications, 7(11):8-16, 1987.
3. Michael Cohen and Donald P. Greenberg. The Hemi-Cube: A Radiosity Solution for Complex Environments. Computer Graphics (ACM SIGGRAPH '85 Proceedings), 19(3):31-40, August 1985.
4. Michael Cohen, Donald P. Greenberg, Dave S. Immel, and Philip J. Brock. An Efficient Radiosity Approach for Realistic Image Synthesis. IEEE Computer Graphics and Applications, 6(3):26-35, March 1986.
5. George Drettakis. Simplifying the Representation of Radiance from Multiple Emitters. Fifth Eurographics Workshop on Rendering, pages 259-272, June 1994.
6. D. Farnsworth. A temporal factor in colour discrimination. In Visual Problems in Color, II (National Phys. Lab. Symposium), number 8, page 429. Her Majesty's Stationery Office, London, 1958.
7. S. Gibson and R.J. Hubbold. Perceptually-driven radiosity. Computer Graphics Forum, to appear.
8. Stephen Hardt and Seth Teller. High-Fidelity Radiosity Rendering at Interactive Rates. Rendering Techniques '96 (Proceedings of the Seventh Eurographics Workshop on Rendering), pages 71-80, 1996.
9. Paul Heckbert. Simulating Global Illumination Using Adaptive Meshing. Ph.D. thesis, Technical Report, June 1991.
10. Paul Heckbert. Discontinuity Meshing for Radiosity. Third Eurographics Workshop on Rendering, pages 203-226, May 1992.
11. L.L. Holladay. The Fundamentals of Glare and Visibility. J. Optical Society of America, 12(4):271-317, 1926.
12. Daniel Lischinski, Filippo Tampieri, and Donald P. Greenberg. Discontinuity Meshing for Accurate Radiosity. IEEE Computer Graphics and Applications, 12(6):25-39, November 1992.
13. P. Moon and D.E. Spencer. The Specification of Foveal Adaption. J. Optical Society of America, 33(8):444-456, August 1943.
14. Mark C. Reichert. A Two-Pass Radiosity Method Driven by Lights and Viewer Position. M.Sc. thesis, Program of Computer Graphics, Cornell University, Ithaca, NY, January 1992.
15. W. Sturzlinger. Adaptive Mesh Refinement with Discontinuities for the Radiosity Method. Fifth Eurographics Workshop on Rendering, pages 239-248, June 1994.
16. Filippo Tampieri. Discontinuity Meshing for Radiosity Image Synthesis. Ph.D. thesis, Technical Report, Ithaca, NY, 1993.
17. Jack Tumblin and Holly E. Rushmeier. Tone Reproduction for Realistic Images. IEEE Computer Graphics and Applications, 13(6):42-48, November 1993.
18. H. Wallach. Brightness Constancy and the Nature of Achromatic Colors. J. Experimental Psychology, 38:310-324, 1948.
19. H. Wallach. The Perception of Neutral Colors. Scientific American, 208(1):107-116, January 1963.
20. Gregory J. Ward. A Contrast-Based Scalefactor for Luminance Display. In Paul S. Heckbert, editor, Graphics Gems IV, pages 415-421. Academic Press Professional, Boston, MA, 1994.
21. Adam Worrall, Claire Willis, and Derek Paddon. Dynamic Discontinuities for Radiosity. Edugraphics + Compugraphics Proceedings, pages 367-375, December 12 1995.
Fig. 7. Test scene 1 (42 polygons)
Fig. 8. Test scene 1 after culling
Fig. 9. Test scene 3 (726 polygons)
Fig. 10. Closer view of the chair. (Above) Tampieri's metric. (Below) Perception metric