Towards Image-Based Beard Modeling

Tomas Lay (University of Bonn) · Arno Zinke (University of Bonn) · Andreas Weber∗ (University of Bonn) · Thomas Vetter† (University of Basel)

∗ e-mail: {tomas,zinke,weber}@cs.uni-bonn.de
† e-mail: [email protected]

1 Introduction

Modeling of plausible hair is essential for creating believable virtual characters. However, due to the inherent complexity of hair geometry, only very costly approaches, which either rely on manual modeling or require highly specialized acquisition hardware, have previously been presented in the literature [Paris et al. 2008]. In contrast to prior publications, which focused on modeling hairstyles, the ultimate goal of this work is to develop a practical technique for generating realistic facial hair, such as beards, with only minimal user interaction. Facial hair is naturally much more constrained than scalp hair, allowing the use of dedicated heuristics to analyze and synthesize hair strands. By taking into account specific characteristics of beards, such as the beard region, the direction in which strands grow, and the hair density, we generate a realistic hair geometry using registered texture images and 3D head models obtained with a photometric stereo approach. Once a beard geometry has been extracted from a head (SOURCE), either the full model or only certain of its properties (e.g., only the beard region) can be transferred to other (shaved) head models (TARGET). Hence, the technique allows us to interchange beard geometries between different head models (see Fig. 1). The transfer involves feature analysis on both SOURCE and TARGET. After all relevant features have been extracted, the beard is synthesized.

2 Feature Analysis

The first and most challenging stage of the analysis step is to recognize the beard region in the texture image of the SOURCE. For simplicity we assume that every pixel in the image represents either hair or skin. Based on this assumption we iteratively learn a statistical classification model that maximizes a posterior probability for the two regions. More precisely, we randomly select two equally-sized rectangular "learning regions" (one for skin and one for hair) from the SOURCE texture and maximize $\sum_i \max(p_i^s, p_i^h)$ over the remaining pixels, where $p_i^s$ and $p_i^h$ denote the probabilities of the $i$-th pixel belonging to skin and hair, respectively. These probabilities, which are central to segmentation, are estimated using five different features accounting for the distinctive properties of skin and hair, such as color and structural variation, namely: luminance, red-green color channel difference, red-blue color channel difference, image frequency content, and strength of texture space orientation [Freeman and Adelson 1991]. First, these features are combined by a Bayesian histogram-based approach taking into account expectation and variance, giving independent probability estimates for hair ($\mu_i^h$) and skin ($\mu_i^s$) for each pixel. Then, for the actual probabilities $p_i^{s,h}$, these estimates are combined as follows: $p_i^{h,s} = \sqrt{\mu_i^{h,s} \cdot (1 - \mu_i^{s,h})}$. Finally, applying a graph cut operation [Boykov et al. 2001] on $p_i^h$, a homogeneous beard region is extracted.

Another important feature, the hair density, is estimated from the distribution of hair pores in the texture of the shaved TARGET. Hair pores are identified by blob detection with Laplacian-of-Gaussian kernel functions, and their density is used as a pdf for the hair root density.
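To make the segmentation pipeline concrete, the following Python sketch shows how the color-based subset of the five features and the final probability combination could look. It is a minimal illustration under stated assumptions, not the authors' implementation: the histogram-based estimate is reduced to a plain normalized-histogram lookup (the Bayesian variant with expectation and variance is not specified in detail above), and all function names are hypothetical.

```python
import numpy as np

def channel_features(rgb):
    """Three of the five per-pixel features used for hair/skin
    segmentation: luminance and the two color channel differences.
    (Frequency content and steerable-filter orientation strength
    are omitted for brevity.)"""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return np.stack([luminance, r - g, r - b], axis=-1)

def histogram_estimate(feature, region_values, bins=64):
    """Per-pixel class estimate from a user-selected learning region:
    a normalized histogram lookup of each pixel's feature value."""
    lo, hi = feature.min(), feature.max()
    hist, edges = np.histogram(region_values, bins=bins, range=(lo, hi))
    hist = hist / max(hist.max(), 1)              # scale to [0, 1]
    idx = np.clip(np.digitize(feature, edges) - 1, 0, bins - 1)
    return hist[idx]

def combine(mu_class, mu_other):
    """Combine the independent class estimates into the probability
    used for segmentation: p_i = sqrt(mu_class * (1 - mu_other))."""
    return np.sqrt(mu_class * (1.0 - mu_other))
```

A graph cut implementation (e.g., a library such as PyMaxflow) could then be run on the resulting $p_i^h$ map to extract the homogeneous beard region described above.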

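The pore-based density estimate can likewise be sketched with off-the-shelf Laplacian-of-Gaussian blob detection; the snippet below uses scikit-image's blob_log and bins the detected pore positions into a discrete pdf. The parameter values and the function name pore_density_pdf are illustrative assumptions.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import blob_log

def pore_density_pdf(texture_rgb, grid=(64, 64)):
    """Estimate a discrete 2D hair-root density from pore positions
    in the shaved TARGET texture. blob_log responds to bright blobs,
    so the grayscale image is inverted to make dark pores stand out."""
    gray = rgb2gray(texture_rgb)
    blobs = blob_log(1.0 - gray, min_sigma=1, max_sigma=4,
                     num_sigma=4, threshold=0.05)  # rows: (y, x, sigma)
    h, w = gray.shape
    counts, _, _ = np.histogram2d(blobs[:, 0], blobs[:, 1],
                                  bins=grid, range=[[0, h], [0, w]])
    return counts / counts.sum()                   # normalize to a pdf
```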

Figure 1: Left: Workflow. Right: Automated beard transfer from SOURCE to TARGET using TARGET’s own hair (pore) density. The proposed technique took less than two minutes per transfer.

3 Beard Synthesis

After analysis, the hair geometry is synthesized on the TARGET. Since both SOURCE and TARGET use properly registered texture images, the corresponding 3D shapes are related through texture space. Hence, by transferring texture features estimated during the analysis phase, we are able to apply certain characteristics of the original beard to the shaved TARGET. However, not all properties of a beard, in particular volumetric features, can be extracted using our simple texture-based approach, and the remaining gaps need to be filled by user-defined parameters or statistical knowledge. The 3D locations of the hair roots are computed according to the 2D pore density pdf of the TARGET. The initial 3D direction of the strands at the roots is obtained from the 2D texture orientation of the SOURCE, already estimated during the analysis step. Hair strands are generated using particle shooting with gravity and the 3D orientations as "external forces", and a grid-based collision detection method to avoid intersections. Since there is no information about the length of a filament, either a priori knowledge or heuristics have to be applied. A simple but effective approach is to use $p_i^h \cdot l$ with a user-defined maximum length $l$. Future work will focus on long and complex beards, where missing features will be inferred using statistical database knowledge.
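As an illustration of the synthesis step, the sketch below samples hair roots from the discrete pore pdf and grows each strand by simple particle shooting, blending gravity into the current direction at every step. It is a simplified reading of the method described above: the grid-based collision detection is omitted, and the gravity_weight and steps parameters are assumptions.

```python
import numpy as np

GRAVITY = np.array([0.0, -1.0, 0.0])  # assumed "down" in model space

def sample_roots(pdf, n_roots, rng=None):
    """Draw texture-space root positions from the discrete pore pdf:
    pick a cell proportional to its mass, then jitter within it."""
    rng = rng or np.random.default_rng()
    cells = rng.choice(pdf.size, size=n_roots, p=pdf.ravel())
    rows, cols = np.unravel_index(cells, pdf.shape)
    return np.stack([rows, cols], axis=-1) + rng.random((n_roots, 2))

def grow_strand(root, direction, length, steps=20, gravity_weight=0.15):
    """Particle shooting without collision handling: march along the
    current direction while gravity gradually bends the strand down."""
    pos = np.asarray(root, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    points = [pos.copy()]
    for _ in range(steps):
        d = d + gravity_weight * GRAVITY  # orientation + "external force"
        d = d / np.linalg.norm(d)
        pos = pos + (length / steps) * d
        points.append(pos.copy())
    return np.asarray(points)
```

A strand's length would then follow the heuristic above, e.g. length = p_hair_at_root * max_length.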

References

Boykov, Y., Veksler, O., and Zabih, R. 2001. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23, 11, 1222–1239.

Freeman, W., and Adelson, E. 1991. The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. 13, 9, 891–906.

Paris, S., Chang, W., Kozhushnyan, O. I., Jarosz, W., Matusik, W., Zwicker, M., and Durand, F. 2008. Hair photobooth: Geometric and photometric acquisition of real hairstyles. ACM Trans. Graph. 27, 3.