Single-View 3D Hair Modeling with Clumping Optimization

Zhejiang University, Faceunity Inc.

Given an input image, we obtain an initial hairstyle using existing methods and then transform it into our parametric hair representation, which combines guide strands with a clumping modifier. We optimize the guide strands and clumping parameters using a differentiable renderer so that the resulting hairstyle's contour, growth direction, and clumping characteristics match the input image. From left to right: the input image, the initial hairstyle, the optimized hairstyle, and close-ups and projections of both hairstyles.
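As a rough illustration of this optimization loop (not the paper's renderer or parameterization), the sketch below splats projected strand samples as Gaussians to form a differentiable soft coverage map, then fits guide-strand offsets and a scalar clump strength to a target silhouette. The names `soft_rasterize`, `offsets`, and `clump` are hypothetical stand-ins.

```python
import torch

def soft_rasterize(points2d, res=64, sigma=1.5):
    """Toy differentiable rasterizer: splat 2D strand samples as
    Gaussians and composite them into a soft coverage map."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                            torch.linspace(0, 1, res), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                        # (res, res, 2)
    d2 = ((grid[None] - points2d[:, None, None]) ** 2).sum(-1)  # (P, res, res)
    splats = torch.exp(-d2 / (2.0 * (sigma / res) ** 2))
    return 1.0 - torch.prod(1.0 - splats, dim=0)                # soft union

# Toy setup: G guide strands with N samples each (2D for brevity),
# and a target silhouette rendered from a shifted reference.
G, N = 8, 16
base = torch.rand(G, N, 2) * 0.5 + 0.25
target = soft_rasterize((base + 0.05).reshape(-1, 2)).detach()

offsets = torch.zeros(G, N, 2, requires_grad=True)  # guide-strand updates
clump = torch.tensor(0.0, requires_grad=True)       # raw clump strength
opt = torch.optim.Adam([offsets, clump], lr=1e-2)

for step in range(200):
    strands = base + offsets
    # Pull strands toward their mean by a root-to-tip ramp: a crude
    # stand-in for the clumping modifier described in the caption.
    t = torch.linspace(0, 1, N)[None, :, None]
    strands = strands + torch.sigmoid(clump) * t * (
        strands.mean(0, keepdim=True) - strands)
    loss = (soft_rasterize(strands.reshape(-1, 2)) - target).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```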

Abstract

Advances in deep learning have enabled the generation of visually plausible hair geometry from a single image, but the results still fall short of the realism required for downstream applications such as high-quality hair rendering and simulation. One essential element missing from existing single-view hair reconstruction methods is the clumping effect of hair, which arises from scalp secretions and oils and is a key ingredient of high-quality hair rendering and simulation.

Observing industrial production tools such as XGen, which simulate realistic hair clumping by letting artists adjust clumping parameters, we aim to integrate these clumping effects into single-view hair reconstruction. We introduce a novel hair representation that exploits guide strands and clumping modifiers to transform the output of existing methods into models that account for hair clumping. We develop a neural model, clumpNet, trained with contrastive learning to evaluate the multimodal similarity between the geometric features of 3D hair and the input image. Furthermore, we introduce a differentiable framework that combines line-based soft rasterization with clumpNet to optimize the hair parameters. By adjusting guide-strand positions and clumping parameters to match the hair's natural growth and clumping in the image, we significantly enhance realism in rendering and simulation. Our method demonstrates superior performance both qualitatively and quantitatively compared to state-of-the-art techniques.
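To make the guide-strand-plus-clumping representation concrete, here is a minimal numpy sketch of an XGen-style clumping modifier, assuming each child strand is assigned to a guide and the clump strength ramps from root to tip. The names `apply_clumping`, `clump_scale`, and `clump_shape` are hypothetical, not the paper's API.

```python
import numpy as np

def apply_clumping(children, guides, guide_ids, clump_scale=0.8, clump_shape=2.0):
    """Pull each child strand toward its guide strand.

    children:    (C, N, 3) child strand vertices
    guides:      (G, N, 3) guide strand vertices
    guide_ids:   (C,) index of the guide each child follows
    clump_scale: overall clumping strength in [0, 1]
    clump_shape: > 0, how quickly clumping ramps up from root to tip
    """
    num_pts = children.shape[1]
    t = np.linspace(0.0, 1.0, num_pts)            # 0 at root, 1 at tip
    profile = clump_scale * t ** clump_shape      # roots stay put, tips clump
    target = guides[guide_ids]                    # (C, N, 3) matched guide points
    return children + profile[None, :, None] * (target - children)

# Usage with synthetic strands:
guides = np.random.rand(10, 32, 3)        # 10 guide strands, 32 points each
children = np.random.rand(500, 32, 3)     # 500 interpolated child strands
ids = np.random.randint(0, 10, size=500)  # guide assignment per child
clumped = apply_clumping(children, guides, ids, clump_scale=0.7, clump_shape=1.5)
```

Root points stay anchored to the scalp (the profile is zero at t = 0) while tips are pulled toward the guide, and the two scalar parameters mirror the kind of artist-facing clump controls the abstract attributes to XGen.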

Video