Visual Comput (2005) 21: 811–820 DOI 10.1007/s00371-005-0342-y

Eric B. Lum · Kwan-Liu Ma

Published online: 1 September 2005 © Springer-Verlag 2005

E.B. Lum (✉) · K.-L. Ma
Department of Computer Science, University of California, Davis, 1 Shields Ave., Davis, CA 95616, USA
[email protected]

ORIGINAL ARTICLE

Expressive line selection by example

Abstract  An important problem in computer-generated line drawing is determining which set of lines produces a representation that is in agreement with a user's communication goals. We describe a method that enables a user to intuitively specify which types of lines should appear in rendered images. Our method employs conventional silhouette-edge and other feature-line extraction algorithms to derive a set of candidate lines, and integrates machine learning into a user-directed line removal process using a sketching metaphor. The method features a simple and intuitive user interface that provides interactive control over the resulting line selection criteria and can be easily adapted to work in conjunction with existing line detection and rendering algorithms. Much of the method's power comes from its ability to learn the relationships between numerous geometric attributes that define a line style. Once learned, a user's style and intent can be passed from object to object as well as from view to view.

Keywords  Non-photorealistic rendering · Silhouettes · Example-based rendering · Machine learning

1 Introduction

Line drawings are widely used in traditional artistic media to produce simple but expressive visual representations. A properly selected collection of lines is able to communicate an artist's vision while removing ambiguity and adding emphasis to structures of interest. For this reason, there has been a great deal of interest in the computer graphics community in developing methods for generating line drawings automatically. This paper describes a method that selectively chooses which lines to omit from a rendering to create visual depictions that are more in line with a user's vision. The method is very flexible and can be applied in combination with any of a number of previously developed line detection methods to selectively prune the line set for a more concise representation.

One of the more significant attributes of non-photorealistic rendering is that there is no "correct" or optimal visual representation. For this reason, there has been an increasing amount of research devoted toward developing algorithms that can produce rendering styles consistent with those desired by the user. Of particular relevance are example-based rendering techniques in which the rendering style is acquired from examples that are input by the user using more traditional artistic methods. In the area of line-drawing representations, these methods have dealt with the example-based acquisition of the style in which the lines are rendered. Our work, in contrast, uses an example-based method not for determining the appearance of a line primitive, but rather for the selection of the primitive itself.

The main idea behind our work is that the line extraction task can be reduced to a classification problem: given some set of candidate line attributes, assign each line to one of two classes. The first class contains lines that are consistent with the desired line style and should be drawn, while the second class consists of lines that are not part of the desired style.


The task of separating data into different classes based on examples of known classifications has been well studied by the machine learning community. Thus, by treating line extraction as a classification problem we are able to take advantage of the results of their studies. Specifically, we have experimented with two of the more popular machine learning methods: neural networks and support vector machines.

With this machine learning framework, our method works as follows. Using existing feature-edge extraction algorithms, a dense set of edges is presented to the user. The user draws over and erases a few representative lines to indicate the desired line selection style. This input is used to train a neural network or create a support vector machine that learns the attributes of the lines desired by the user. The trained classifier can then be applied for rendering from arbitrary viewpoints, or can be saved and reused to render entirely different objects in the same scene or animation sequence.

There are three main contributions of our work. First, we introduce an intuitive example-based method for specifying the style of silhouette edge extraction. Unlike previous research in this area, our work deals with the style of line selection and not the stylized rendering of those lines. Second, we combine numerous silhouette extraction methods into a single framework that gives the user control over how these methods are used. Finally, the machine-learning-based framework that we describe is very flexible and can easily be adapted to incorporate a wide range of attributes in determining whether an edge should be rendered. In fact, although our method was designed for the rendering of polygon models, we also demonstrate that it can easily be applied to the rendering of lines from color photographs as well.

2 Related work

Due to the increasing popularity of non-photorealistic rendering (NPR), considerable work has been done in finding and using silhouette edges and other feature lines for generating line drawings. These lines can be detected in image space [12, 21], object space [2, 9, 14, 19, 22], or both [10, 16, 20]. A comprehensive survey of silhouette edge-extraction algorithms is given by Isenberg et al. [15]. In our work we do not develop a new method for detecting feature lines. Rather, we introduce an intelligent interface for specifying a set of preferred lines from the set of edges extracted using a combination of existing feature line detection algorithms.

Significant work has been done on the specification of style in NPR. Our work does not address the appearance of individual lines, which has been studied by others [6, 8, 14, 18, 20]. Instead, in our work style is a result of the decision of which line primitives are shown, rather than how those lines are rendered.

In addition, unlike the work of Sousa and Prusinkiewicz [22], which automatically creates and renders compact stylized line representations, our method is much more user-directed in nature.

One of the easiest ways to specify a rendering style is by example [13, 17]. Likewise, our system takes an example provided by the user in the form of line placement for selected areas of the 3D object. It then learns to apply the aesthetic preference suggested by the example to the whole object. Unlike WYSIWYG NPR, which allows the user to annotate a 3D model with strokes [18] and appropriately adapts the given strokes for new views, our system begins with a set of silhouette lines that captures the intrinsic geometric shape of the 3D model and, through an iterative process, removes unwanted lines according to the hints provided by the user.

Chen et al. [4, 5] have developed example-based methods for generating facial sketches from color photographs. The methods take frontal-view photographs and matched sketch drawings as training input and can synthesize new line drawings with facial features illustrated in the same line style. Unlike these methods, our approach is designed primarily for polygon rendering and is not focused on creating illustrations of the face. Specifically, our method does not rely on a priori known facial features such as the eyes, nose, and mouth.

3 Overall approach and interface

One of the more fundamental problems in line rendering is defining the characteristics of a "good" line. The definitions given for a well-chosen set of lines vary widely in traditional artistic media, which may explain the wide range of techniques that have been developed for automated line extraction. The goal of our work is to incorporate the various edge-extraction methods that have been developed and produce a single set of lines consistent with the style desired by the user. In addition, we would like to incorporate as many geometric properties as possible in determining whether a line should be drawn, taking into account that there might be a highly complex relationship between those attributes.

The most obvious way to combine the various feature-line extraction techniques is to simply add their contributions. This is trivial to implement and is well suited for producing a large set of lines for depicting an object's shape. Most edge-detection methods rely on thresholds that influence the density of the resulting lines. The user is given a degree of control over the number of lines produced by adjusting each of these thresholds independently.


This can be expressed as

$$s(a_1, a_2, \ldots, a_n) = \mathrm{threshold}(a_1(x, y), t_1) + \mathrm{threshold}(a_2(x, y), t_2) + \cdots + \mathrm{threshold}(a_n(x, y), t_n),$$

where $s$ is the combined silhouette-edge-determining function over a set of attributes $a_1$ through $a_n$. The function threshold yields a value of one if the attribute $a_i$ is larger than the threshold $t_i$, and zero otherwise.

There are two significant limitations to combining edge-detection techniques in this way. First, the user must deal with an interface that consists of tuning the set of thresholds $(t_1, \ldots, t_n)$, rather than working with the image directly as he or she would with traditional media. This becomes increasingly difficult as more attributes are taken into consideration. Second, the set of independent thresholds gives limited control over the selection of the rendered edges. As a simple example, consider the case where one might want to show edges where there is a high variation in depth or where there is a moderate variation in both depth and curvature. In this case, rather than treating each feature-line extraction method independently, the methods need to be combined using a more general two-dimensional function that takes both attributes into account simultaneously in making the silhouette-edge determination. As more geometric attributes are used in making the feature-line determination, it becomes increasingly difficult to specify a fully general function that takes all of this information into account.

We accomplish this in our work by using a machine learning classifier that determines whether a line should be drawn. As will be described in the following sections, the use of machine learning allows for the flexible application of classification functions that take into account a large number of attributes in determining which edges to render. The classifier is created using training data provided by the user through a traditional-media metaphor, abstracting away the difficult task of specifying a high-dimensional edge-pruning function that takes all of these attributes as input.

A block diagram illustrating our technique is shown in Fig. 1. The system starts with a polygon mesh from which a dense set of candidate lines is extracted using conventional silhouette edge-extraction methods. These edges are displayed to the user, as shown in Fig. 2(a), who can then selectively draw over the edges of interest and erase the undesired edges, as illustrated in Fig. 2(b). The result of the user's interaction is used for training the classifier to implement the desired edge-removal function. The classifier can then take as input the dense set of edges, along with other geometric attributes of the original polygon mesh, to produce a more compact line representation, as seen in Fig. 2(c). The user's interaction is an iterative process, in which training progresses as the user interacts with the system.
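For concreteness, the additive threshold combination above could be implemented per pixel roughly as follows; this is a minimal sketch, and the buffer names, array shapes, and threshold values are illustrative rather than taken from the paper.

```python
import numpy as np

def combined_silhouette(attribute_buffers, thresholds):
    """Additive combination of independently thresholded attribute buffers.

    attribute_buffers: list of H x W float arrays, e.g. the depth-gradient
    magnitude and normal-gradient magnitude buffers (the a_i(x, y) above).
    thresholds: one scalar t_i per buffer.
    Returns s(x, y); a pixel is drawn wherever s > 0, i.e. wherever at
    least one attribute exceeds its threshold.
    """
    s = np.zeros_like(attribute_buffers[0], dtype=float)
    for a_i, t_i in zip(attribute_buffers, thresholds):
        s += (a_i > t_i).astype(float)  # threshold(a_i(x, y), t_i)
    return s
```

Each threshold acts independently here, which is exactly the limitation discussed above.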

Fig. 1. Block diagram illustrating the iterative machine learning classifier training process

Fig. 2a–d. The user is presented with a set of candidate edges as shown in a, and then sketches over those edges they would like to keep in brown and erases edges they do not like by applying green as illustrated in b. A machine learning classifier is then trained to produce the image shown in c. The user can see the resulting edges superimposed on the set of candidate edges as shown in d, to further revise their sketching until the classifier converges to implement the desired function

The user can also see the rendered edges composited over the candidate edges, as shown in Fig. 2(d), to see which areas the classifier does not handle properly. Additional input can then be applied to those areas, producing more training data for those cases, until the classifier converges toward implementing the function that removes the types of edges not desired by the user.

4 Machine learning classifiers

A widely studied application of machine learning is the task of classification, which consists of constructing a function to assign objects to different categories based on their attributes.


In the case of supervised learning methods, given a set of $n$-dimensional training data $\mathbf{x}_i \in \mathbb{R}^n$ and the corresponding class labels $y_i \in \{+1, -1\}$, we would like to estimate a classification function $f(\mathbf{x}) = y$ that is appropriate for unseen data assumed to have a probability distribution similar to that of the training set.

In our work, the vector $\mathbf{x}$ contains a line's attributes that will be used to determine whether that line should be displayed. The type of data used relates to the geometric properties of a pixel or edge and varies depending on whether our technique is applied in object space or image space. A detailed discussion of the attributes used follows in later sections. The output class $y$ is the decision of whether a particular pixel or edge should be displayed. The user's painting interaction generates training data $(\mathbf{x}, y)$ used to construct the classification function $f$ that is applied to select lines for rendering.

A number of different methods have been developed for constructing classification functions, and our method can easily be adapted to use different machine learning algorithms to perform classification. In our work, we have tried two different types of classifiers: the well-established neural network (NN) and the increasingly popular support vector machine (SVM).

4.1 Neural networks

Artificial neural networks are inspired by the idea of modeling the interaction of low-level neural pathways in a brain. They are relatively simple and have been popular for their ability to learn both linear and non-linear relationships between inputs. Excellent introductory texts on neural networks have been written by Hertz et al. [11] and Bishop [1]. Each connection between neurons has a weight, with the weights modulating the value passed across that connection. Training is the process of modifying the weights until the network implements a desired function. Once training has occurred, the network can be applied to data that was not part of the training set.

We use a feed-forward, back-propagation neural network [23] that consists of three layers of neurons: an input layer, a hidden layer, and an output layer. Neurons are connected in a feed-forward fashion: every neuron in the input layer is connected to all neurons in the hidden layer, and the hidden nodes are in turn fully connected to the nodes in the output layer.
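The paper gives no code, but a minimal sketch of the kind of three-layer, back-propagation network described above might look as follows; the attribute dimension, hidden-layer size, learning rate, and squared-error update rule are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_line_classifier(X, y, n_hidden=8, lr=0.5, epochs=2000, seed=0):
    """One-hidden-layer feed-forward network trained by backpropagation.

    X: (n_samples, n_attributes) attribute vectors of candidate lines/pixels.
    y: (n_samples,) labels, 1 = keep (drawn over), 0 = discard (erased).
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, 1));          b2 = np.zeros(1)
    t = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        h   = sigmoid(X @ W1 + b1)                # hidden-layer activations
        out = sigmoid(h @ W2 + b2)                # predicted keep-probability
        d_out = (out - t) * out * (1.0 - out)     # squared-error gradient
        d_h   = (d_out @ W2.T) * h * (1.0 - h)    # back-propagated error
        W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def keep_line(params, x):
    """Classify a single attribute vector with the trained network."""
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)[0] > 0.5
```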

4.2 Support vector machines

Much of the more recent research involving machine learning classifiers has focused on support vector machines. Unlike other machine learning techniques, SVMs separate data into classes with a maximal margin of separation. This makes SVMs better suited to generalize their classifications to data outside the training set. An in-depth tutorial on support vector machines has been written by Burges [3].

Given a set of $n$-dimensional training data $\mathbf{x}_i \in \mathbb{R}^n$ and the corresponding class labels $y_i \in \{+1, -1\}$, we can use a simple linear classifier

$$S = \{\, \mathbf{x} \mid \langle \mathbf{w}, \mathbf{x} \rangle + b = 0 \,\}$$

to separate the two classes. The decision function then becomes

$$f(\mathbf{x}_{\mathrm{new}}) = \operatorname{sign}\bigl(\langle \mathbf{w}, \mathbf{x}_{\mathrm{new}} \rangle + b\bigr).$$

Maximizing the margin between the different classes is a constrained optimization problem, which is solved with the Lagrange method. The training examples $\mathbf{x}_i$ whose Lagrange multipliers satisfy $\alpha_i > 0$ become the support vectors that determine the separating hyperplane. Written in terms of the support vectors, the decision function becomes

$$f(\mathbf{x}_{\mathrm{new}}) = \operatorname{sign}\left(\sum_{i=1}^{\#SV} \alpha_i y_i \langle \mathbf{x}_i, \mathbf{x}_{\mathrm{new}} \rangle + b\right).$$

In order to classify data that is not linearly separable, a kernel function is used to map the data from the data space to a higher-dimensional feature space. The four most common kernels are the linear, polynomial, radial basis function, and sigmoid kernels. With a kernel function, the decision function becomes

$$f(\mathbf{x}_{\mathrm{new}}) = \operatorname{sign}\left(\sum_{i=1}^{\#SV} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}_{\mathrm{new}}) + b\right),$$

with the kernel used in our work being the radial basis function, which is widely used for general-purpose classification and can be expressed as

$$K(\mathbf{x}_1, \mathbf{x}_2) = \exp\bigl(-\gamma \, \|\mathbf{x}_1 - \mathbf{x}_2\|^2\bigr).$$
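As a rough sketch of how such a classifier could be built with an off-the-shelf SVM library (scikit-learn is our illustrative choice; the paper does not say which SVM implementation was used):

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: each row is a candidate line's attribute vector;
# labels are +1 (keep, drawn over) and -1 (discard, erased).
X = np.array([[0.9, 0.2, 0.7],
              [0.1, 0.8, 0.3],
              [0.8, 0.3, 0.6],
              [0.2, 0.7, 0.2]])
y = np.array([+1, -1, +1, -1])

# RBF kernel K(x1, x2) = exp(-gamma * ||x1 - x2||^2), as in the text.
clf = SVC(kernel="rbf", gamma=2.0, C=1.0)
clf.fit(X, y)

# Apply the learned style to candidate lines seen from a new viewpoint.
print(clf.predict([[0.85, 0.25, 0.65]]))  # expected: [1]
```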

5 Image space method

When our technique is applied to image-space feature-line extraction, the machine learning classifier essentially works as an intelligent, non-linear filter, which is applied on a per-pixel basis to all possible feature-line pixels. The input vector to the classifier contains the geometric attributes associated with a pixel, and the output is a value between zero and one, indicating whether the pixel should be rendered as an edge. The most obvious types of inputs are the same as those used for traditional edge-detection methods.


We thus use the gradient magnitude of the depth buffer, the gradient magnitude of the normal buffer, an edge buffer marking edges that neighbor front- and back-facing pixels, and a buffer of lines extracted using the suggestive contour algorithm proposed by DeCarlo et al. [7]. These four buffers yield a classification feature vector x with four entries.

By incorporating additional attributes, the classifier can take into account the relationship between the different spatial properties when making the edge determination. For example, by using the original depth buffer as input, the classifier can learn to discard edges selectively based on distance from the viewpoint. The use of the normal direction buffer as input permits the classifier to discard edges based on surface orientation; the classifier might learn, for example, that the user has a preference for edges that have a vertical surface orientation. The unprojected three-dimensional position associated with a pixel can be used as input to allow the classifier to learn to discard edges based on the real-world position of that edge. Several of these inputs, such as gradient direction and position, are vectors of length three, adding considerably to the size of the classifier's input vector x. By incorporating all of the attributes described above, the input vector has a total size of eleven. In other words, the classifier makes the determination of whether an edge should be drawn using a function that takes into account the values of eleven different attributes. The classifiers used in our work are well suited for handling even larger feature vectors.

Often silhouette edges are extracted using properties derived from neighboring pixels, such as the depth and normal gradient magnitudes. By incorporating the attributes of a neighborhood of pixels, the classifier can learn to directly incorporate the characteristics of these neighboring pixels when making the line selection determination. With this information a classifier can be trained to have greater immunity to noise in the various buffers, or can be trained to avoid edges that are only a couple of pixels in size.


By using the values of the pixels above, below, and to the left and right, the input feature vector grows to fifty-five entries. Much of the power of machine learning classifiers comes from the fact that they can easily incorporate a wide variety of inputs. Thus, attributes in addition to those already discussed could easily be incorporated if desired. Furthermore, the classifier can automatically learn to discard those attributes that are not important in making the edge determination.

With slight modification, our method can also be applied to non-synthetic color photographs. In this case, the only data associated with each pixel is an RGB color, without any depth or normal direction information. A set of candidate lines is generated from the gradient magnitude of the color image, and the machine learning classifier uses as input a neighborhood of pixel colors as well as the result of the edge-detection filter. The use of RGB colors as input makes it possible for the classifier to learn to differentiate edges depending on the colors at their transitions. For example, it might be taught to discard edges between red and yellow regions while keeping edges between red and blue regions.
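A sketch of how the per-pixel input vector described above (eleven scalars, expanded to fifty-five with a four-pixel neighborhood) might be assembled; the buffer names and dictionary layout are hypothetical.

```python
import numpy as np

def pixel_features(buffers, x, y, use_neighbors=True):
    """Assemble the classifier input for one candidate-edge pixel.

    buffers: dict of H x W (or H x W x 3) arrays: the depth-gradient and
    normal-gradient magnitudes, the front/back-facing edge buffer, the
    suggestive-contour buffer, plus depth, normal (3) and unprojected
    position (3) -- eleven scalars per pixel in total.
    Border handling is omitted for brevity.
    """
    per_pixel = ["depth_grad", "normal_grad", "edge", "contour",
                 "depth", "normal", "position"]

    def at(px, py):
        vals = []
        for name in per_pixel:
            vals.extend(np.atleast_1d(buffers[name][py, px]))  # vectors add 3
        return vals                                            # 11 scalars

    feats = at(x, y)
    if use_neighbors:  # 4-neighborhood: 11 * 5 = 55 entries
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            feats.extend(at(x + dx, y + dy))
    return np.asarray(feats, dtype=np.float32)
```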

6 Object space algorithm

Our method can also be used for object-space determination of which polygon edges should be rendered. In this case, the classifier is applied on a per-edge basis using attributes related to that edge as input. Each edge is associated with two neighboring faces and two vertices. The attributes we use as input to the classifier are the normal directions of an edge's two adjacent faces, as well as an edge's two vertex positions, depths, curvature magnitudes, and normal directions computed from the average of a vertex's neighboring polygons.

The user's sketching and erasing interaction with the system produces an edge list that stores the attributes of each selected edge as well as whether that edge should be drawn.

Fig. 3a–d. With our technique, the user is able to easily control the line selection criteria to produce a wide variety of looks


Fig. 4. The line style learned by the machine learning classifier can be applied to a mesh for varying viewpoints as shown in these renderings of the Max Planck bust. Left column: the style is relatively dense in the head region. Right column: line style is sparse and does not include some of the features on the face. Notice how the style stays consistent across the various viewpoints

Since there are two vertices and two faces associated with each edge, all inputs to the classifier exist in pairs. It is desirable to have a classifier that produces similar output regardless of the ordering of each pair of inputs. For example, given an edge with two neighboring vertices a and b, the network should produce almost the same result whether they are treated as inputs (a, b) or (b, a). The classifier is therefore trained using both orderings of the input.
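One simple way to realize this, assuming each edge's attributes are laid out as two equal halves, one per vertex/face pair (a hypothetical layout), is to duplicate every training example with the halves swapped:

```python
import numpy as np

def augment_with_swapped_pairs(edge_attrs, labels):
    """Duplicate each training edge with its paired attributes swapped.

    edge_attrs: (n, 2*k) array; the first k columns hold the attributes of
    vertex/face 'a', the last k those of vertex/face 'b'. Training on both
    orderings pushes the classifier toward giving nearly the same answer
    regardless of which endpoint is listed first.
    """
    k = edge_attrs.shape[1] // 2
    swapped = np.concatenate([edge_attrs[:, k:], edge_attrs[:, :k]], axis=1)
    return (np.concatenate([edge_attrs, swapped]),
            np.concatenate([labels, labels]))
```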

7 Results We applied our technique to a number of different meshes using both neural network and support vector machine classifiers. As will be discussed later, the NN classifier ran significantly faster and produced similar results to the SVM classifier, and was therefore used to produce all images except where stated otherwise. A number of different line styles can be produced with our technique. Figure 3(a) shows the relatively dense set of candidate lines extracted using conventional techniques.

Fig. 5. The machine learning edge classifier can be trained to vary line density based on the scale of an object with respect to the view frustum. On the left a uniform threshold is used for controlling the line density, while on the right, a neural network that adapts line density based on the projected size is used

Notice the particularly dense set of edges on the face between and above the eyes. Figure 3(b) shows the result of the image-space application of our technique, with a neural network trained to remove some lines on the nose and above the eyes. Next, Fig. 3(c) shows an even more sparse depiction, where most of the edges from the more subtle features have been removed. Figure 3(d) shows a line style that depends heavily on surface orientation. By drawing over edges that are oriented toward the lower right, and erasing some of the edges oriented toward the upper left, the neural network produces a set of lines that gives the illusion of lighting. Once a style is learned by the classifier, it can be applied from arbitrary viewpoints, as illustrated in the renderings of the Max Planck bust in Fig. 4. The left and right columns illustrate two different line styles. Notice the consistent application of this style between viewpoints.


Fig. 6. A line style learned for one object can be applied to different types of objects as illustrated in this figure. For each column, the user specified a line style for the pig that is successfully transferred to the other two objects to the right

It is often desirable to have line density remain relatively uniform in screen space and not become extremely dense for objects that project to a small region of the display. The relative scaling between the world space and screen space of a projected pixel can be used as input to the classifier to provide control based on the projected size of that pixel, as shown in the right set of images in Fig. 5. The left set of images uses a uniform feature-line threshold, which yields very dense line sets at smaller scales.

The classifiers created with our method can also be reused for different polygon meshes, as illustrated in the image sequences in Fig. 6. In each case, the user specified a style using the pig model, and that style was then applied to the triceratops and cow models. The top row shows the set of candidate silhouette edges extracted using conventional techniques. In the middle row, detail is kept in the face region but removed from the back of the pig. In the bottom row, the user specifies a style that removes significant detail from the face and nose of the pig and discards a few edges from the rear. In all cases, the other two models maintain the same style as the pig.

We prototyped our technique using machine learning classifiers implemented in software, with attribute buffers generated using graphics hardware. For our neural network implementation, we found that we obtained the best results when the number of hidden nodes was between five and fifteen; most of the examples shown using neural networks were generated with eight hidden nodes. When using the neural network classifier with eight hidden nodes, rendering to a 512 × 512 window runs at approximately 1.5 frames per second on an AMD Athlon 64 FX 2.2 GHz processor and an NVIDIA GeForce FX 5900 graphics card.

Fig. 7. Training data consisting of drawing and erasing from one viewpoint as shown on the top. An SVM was trained using that data resulting in the images shown in the middle row. An NN trained with the same data resulted in the images on the bottom row

This is sufficient to provide interactive feedback during the sketching and erasing process. Since training is performed iteratively, users can immediately see the effect their interaction has on the neural network, and can watch the network converge to a solution in a matter of seconds, or add further input before convergence is reached.

The amount of time required for both training and classification is significantly longer when using support vector machines, so the training process is less iterative in nature. The amount of time required for training depends on the amount of training data. After sketching and erasing about fifteen strokes from several viewpoints, training required approximately ten seconds, providing a limited amount of feedback for the user to decide where to provide additional training strokes.

We expected the SVM's maximum-margin classifiers to provide superior classifications to the NN classifiers. In order to test the two classifiers' ability to learn a style and apply it to new viewpoints, we tested them under the condition where sketching and erasing input is provided for only one viewpoint, shown at the top of Fig. 7, resulting in the SVM and NN classifications for the same viewpoint in the middle and bottom of the left column, respectively. The viewpoint was then changed, resulting in the classifications shown in the right column.


Fig. 8. With slight modification our technique can be applied to RGB bitmap images. The first column shows two examples of possible user input, where purple is applied to edges of interest and green to unwanted edges. The neural network is trained, resulting in the set of edges shown in black in the second column, while the excluded edges are shown in blue. In the third column, the extracted edges have been composited over a lightened, blurred version of the original image

The results are similar, with the SVM classifier not showing lines on the fur of the bunny, which indicates that it has better learned the style of the training input, which omits this fur. In the more common case where training input is specified from multiple views, however, the images produced using NN and SVM classifiers become comparable, with neither method showing significantly better results given the same amount of training data.

An example of the application of our technique to a color photograph is shown in Fig. 8. In Fig. 8(a), the user sketched over the edges of the center flower and erased edges found on the leaves. The network learned to keep the edges found on the flower, as illustrated in black in Fig. 8(b), and removed the edges found on the leaves (shown in blue).

Notice that although the user only specified input for the center tulip, the network has generalized that style to the other tulips. Figures 8(d) and (e) illustrate the result when the user erased the edges found inside the flower but kept the edges surrounding the leaves. The result of compositing the extracted edges over a blurred, lightened version of the original image is shown in Figs. 8(c) and 8(f). Notice how the line style on the top adds emphasis to the flowers, while the line style on the bottom adds emphasis to the leaves.

The previous examples were generated using the image-space version of our technique. Figure 9 illustrates a result generated using the object-space variation of our method. Figure 9(a) shows the original mesh, while Fig. 9(b) shows the dense set of candidate edges.

Fig. 9a–d. Our technique can be applied in object space as well as in image space. From the subdivided cow mesh in a, a set of candidate edges is extracted and shown in b. Based on the properties of these edges, two different neural networks produced the two different styles shown in c and d


Fig. 10. In this example, two support vector machine classifiers were used to extract white and black lines. The white lines occur on features that are oriented downward, giving the impression of illumination, while a sparse set of black lines are used for the remainder of the head

The right two images illustrate two different styles of line removal that selectively emphasize different aspects of the model. Figure 9(c) maintains detail near the face of the cow with significantly reduced detail for the rest of the model. In Fig. 9(d), more emphasis is given to the back of the cow. Notice that both styles are significantly more sparse than the original.

8 Conclusions and future work

In this paper, we present a method that allows for the intuitive specification of style with respect to line selection, which can be contrasted with previous work that has dealt with the appearance of the line itself.


Line rendering style, however, is a very important part of non-photorealistic rendering, and we believe it could be addressed using a variation of the method we have described. In particular, machine learning could be used for determining not only which lines should be rendered but also their appearance. The classification functions used in our work have a single output that indicates whether a line should be drawn; additional output values could be used for the network to learn attributes related to line style, such as line thickness and waviness. To illustrate this type of capability, Fig. 10 shows an example where two support vector machine classifiers were used, one for extracting a white set of lines and the other for black. The white-line classifier is trained to extract those lines that are in the face region with a downward orientation, giving the impression of illumination. The black-line classifier extracts a very sparse set of lines where the surface normal does not point downward.

This paper describes a method that facilitates the selective rendering of key lines for the creation of compact line depictions. Much of the utility of non-photorealistic rendering is its ability to provide users with a means of producing imagery that realizes their vision. We believe the technique we describe takes a step toward achieving that utility.

Acknowledgements This work has been sponsored in part by the U.S. National Science Foundation under contracts ACI 9983641 (PECASE), ACI 0222991, and ACI 0325934 (ITR), and the U.S. Department of Energy under Lawrence Livermore National Laboratory Agreement No. B537770, No. 548210, and No. 550194. Data sets are from Cyberware, Viewpoint, the Stanford 3D Scanning Repository, the Max Planck Institute, and the Digital Michelangelo Project.

References

1. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford, UK (1996)
2. Buchanan, J.W., Sousa, M.C.: The edge buffer: a data structure for easy silhouette rendering. In: Proceedings of the First International Symposium on Non-Photorealistic Animation and Rendering (NPAR 2000) (2000)
3. Burges, C.J.C.: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2(2), 121–167 (1998)
4. Chen, H., Liu, Z., Rose, C., Xu, Y., Shum, H.Y., Salesin, D.: Example-based composite sketching of human portraits. In: NPAR '04: Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, pp. 95–153 (2004)
5. Chen, H., Xu, Y.Q., Shum, H.Y., Zhu, S.C., Zheng, N.N.: Example-based facial sketch generation with non-parametric sampling. In: IEEE International Conference on Computer Vision, pp. 433–438 (2001)
6. Curtis, C.: Loose and sketchy animation. In: SIGGRAPH '98 Conference Abstracts and Applications, p. 317 (1998)
7. DeCarlo, D., Finkelstein, A., Rusinkiewicz, S., Santella, A.: Suggestive contours for conveying shape. ACM Transactions on Graphics 22(3), 848–855 (2003)
8. Freeman, W.T., Tenenbaum, J.B., Pasztor, E.: An example-based approach to style translation for line drawings. Tech. Rep. TR99-11, MERL (1999)
9. Gooch, B., Sloan, P., Gooch, A., Shirley, P., Riesenfeld, R.: Interactive technical illustration. In: 1999 Symposium on Interactive 3D Graphics, pp. 31–38 (1999)
10. Grabli, S., Turquin, E., Durand, F., Sillion, F.: Programmable style for NPR line drawing. In: Rendering Techniques 2004 (Eurographics Symposium on Rendering) (2004)
11. Hertz, J., Krogh, A., Palmer, R.G.: Introduction to the Theory of Neural Computation. Addison-Wesley Longman Publishing Co., Inc. (1991)
12. Hertzmann, A.: Introduction to 3D non-photorealistic rendering: silhouettes and outlines. In: Green, S. (ed.) SIGGRAPH '99 Course Notes (1999)
13. Hertzmann, A., Jacobs, C., Oliver, N., Curless, B., Salesin, D.: Image analogies. In: Proceedings of SIGGRAPH 2001, pp. 327–340 (2001)
14. Hertzmann, A., Zorin, D.: Illustrating smooth surfaces. In: Proceedings of SIGGRAPH 2000, pp. 517–526 (2000)
15. Isenberg, T., Freudenberg, B., Halper, N., Schlechtweg, S., Strothotte, T.: A developer's guide to silhouette algorithms for polygonal models. IEEE Computer Graphics and Applications 23(4), 28–37 (2003)
16. Isenberg, T., Halper, N., Strothotte, T.: Stylizing silhouettes at interactive rates: from silhouette edges to silhouette strokes. Computer Graphics Forum 21(3) (2002)
17. Jodoin, P.M., Epstein, E., Granger-Piché, M., Ostromoukhov, V.: Hatching by example: a statistical approach. In: NPAR '02: Proceedings of the 2nd International Symposium on Non-Photorealistic Animation and Rendering, pp. 29–36 (2002)
18. Kalnins, R., Markosian, L., Meier, B., Kowalski, M., Lee, J., Davidson, P., Webb, M., Hughes, J., Finkelstein, A.: WYSIWYG NPR: drawing strokes directly on 3D models. ACM Transactions on Graphics 21(3), 755–766 (2002)
19. Markosian, L., Kowalski, M.A., Trychin, S.J., Bourdev, L.D., Goldstein, D., Hughes, J.F.: Real-time nonphotorealistic rendering. In: SIGGRAPH '97 Conference Proceedings, pp. 415–420 (1997)
20. Northrup, J., Markosian, L.: Artistic silhouettes: a hybrid approach. In: Proceedings of NPAR 2000, pp. 31–38 (2000)
21. Saito, T., Takahashi, T.: Comprehensible rendering of 3-D shapes. In: Proceedings of SIGGRAPH 1990, pp. 197–206 (1990)
22. Sousa, M., Prusinkiewicz, P.: A few good lines: suggestive drawing of 3D models. Computer Graphics Forum (Proc. of Eurographics '03) 22(3) (2003)
23. Werbos, P.: Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Department of Applied Mathematics, Harvard University (1974)

ERIC B. LUM received his Ph.D. degree in Computer Science from the University of California at Davis in 2004, and B.S. and M.S. degrees in Electrical Engineering from UCLA in 1997 and 1999, respectively. He is currently a postdoctoral researcher in computer science at UC Davis, where he is investigating rendering and user interaction methods that facilitate the expressive visual representation of data.

KWAN-LIU MA received the Ph.D. degree in computer science from the University of Utah in 1993. He is a professor of computer science at the University of California (UC), Davis. His research spans the fields of visualization, computer graphics, and high-performance computing. During 1993–1999, he was with ICASE/NASA LaRC as a research scientist. In 1999, he joined UC Davis; the following year, he received the Presidential Early Career Award for Scientists and Engineers (PECASE). Presently, he is directing research projects on parallel visualization, volume modeling and visualization, artistically inspired illustrations, visual interface designs, and information visualization. He is the editor of the VisFiles column of ACM SIGGRAPH's Computer Graphics quarterly. Information about Professor Ma's publications and research projects can be found at http://www.cs.ucdavis.edu/~ma.