Deep learning advancements have enabled the generation of visually plausible hair geometry from a single image, but the resulting geometry still falls short of the realism required for downstream applications (e.g., high-quality hair rendering and simulation).
One essential element missing from traditional single-view hair reconstruction methods is the clumping effect of hair, which arises from scalp secretions and oils and is a key ingredient in high-quality hair rendering and simulation.
Observing practices in industrial production tools such as XGen, which simulate realistic hair clumping by allowing artists to adjust clumping parameters, we aim to integrate these clumping effects into single-view hair reconstruction. We introduce a novel hair representation that exploits guide strands and clumping modifiers to transform the output of existing methods into models that account for hair clumping.
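To illustrate what a guide-strand-plus-clumping-modifier representation looks like in practice, the following is a minimal sketch. The function name, array shapes, and the linear root-to-tip clump profile are illustrative assumptions, not the paper's actual representation:

```python
import numpy as np

def apply_clumping(child_strands, guide_strand, clump_scale, clump_profile):
    """Blend child strands toward their guide strand.

    child_strands: (N, P, 3) array of N child strands with P points each.
    guide_strand:  (P, 3) array, the guide the children are parented to.
    clump_scale:   scalar in [0, 1], overall clumping strength.
    clump_profile: (P,) per-point weight along the strand
                   (e.g. 0 at the root, rising toward the tip).
    """
    # Per-point pull toward the guide, broadcast over strands and xyz.
    w = (clump_scale * clump_profile)[None, :, None]      # (1, P, 1)
    # Linear blend between each child point and the matching guide point.
    return (1.0 - w) * child_strands + w * guide_strand[None, :, :]

# Example: 10 children of 32 points each, clumping increases toward the tip.
guide = np.linspace(0.0, 1.0, 32)[:, None] * np.array([0.0, -1.0, 0.0])
children = guide[None] + np.random.randn(10, 32, 3) * 0.02
profile = np.linspace(0.0, 1.0, 32)
clumped = apply_clumping(children, guide, clump_scale=0.8, clump_profile=profile)
```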
We develop a neural model, clumpNet, trained with contrastive learning to evaluate the multimodal similarity between the geometric features of 3D hair and the input image.
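A contrastive objective for such hair-image pairs typically takes the form of a symmetric InfoNCE loss over paired embeddings. The sketch below is a generic PyTorch-style version under that assumption; the embedding dimensions and temperature are illustrative, and this is not clumpNet's actual architecture or loss:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(hair_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (3D hair, image) embeddings.

    hair_emb:  (B, D) embeddings of 3D hair geometric features.
    image_emb: (B, D) embeddings of the corresponding input images.
    """
    # Normalize so the dot product is a cosine similarity.
    hair_emb = F.normalize(hair_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)

    # (B, B) similarity matrix; diagonal entries are the matching pairs.
    logits = hair_emb @ image_emb.t() / temperature
    targets = torch.arange(hair_emb.shape[0], device=hair_emb.device)

    # Pull matching pairs together and push mismatched pairs apart, both directions.
    loss_h2i = F.cross_entropy(logits, targets)
    loss_i2h = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_h2i + loss_i2h)
```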
Furthermore, we introduce a differentiable framework that combines line-based soft rasterization with clumpNet to optimize hair parameters. By adjusting guide strand positions and clumping parameters to match the natural growth and clumping of the hair in the image, we significantly enhance realism in rendering and simulation.
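To make the optimization step concrete, here is a minimal PyTorch-style sketch of such a refinement loop. The callables soft_rasterize, clumpnet_similarity, and expand_strands are hypothetical placeholders for the differentiable line rasterizer, the clumpNet similarity score, and the guide-to-strand clumping expansion; the loss terms and weights are illustrative, not the paper's actual objective:

```python
import torch
import torch.nn.functional as F

def optimize_hair(guide_strands, clump_params, target_image,
                  soft_rasterize, clumpnet_similarity, expand_strands,
                  steps=500, lr=1e-2, w_sim=0.1):
    """Gradient-based refinement of guide strand positions and clumping parameters.

    soft_rasterize(strands) -> differentiable rendered image tensor (placeholder).
    clumpnet_similarity(strands, image) -> scalar similarity in [0, 1] (placeholder).
    expand_strands(guides, params) -> full strands under clumping (placeholder).
    """
    guide_strands = guide_strands.clone().requires_grad_(True)
    clump_params = clump_params.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([guide_strands, clump_params], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Expand guides into full strands under the current clumping parameters.
        strands = expand_strands(guide_strands, clump_params)
        rendered = soft_rasterize(strands)

        # Image term: match the soft-rasterized strands to the photograph.
        image_loss = F.l1_loss(rendered, target_image)
        # Similarity term: favor geometry that clumpNet rates as consistent with the photo.
        sim_loss = 1.0 - clumpnet_similarity(strands, target_image)

        loss = image_loss + w_sim * sim_loss
        loss.backward()
        optimizer.step()

    return guide_strands.detach(), clump_params.detach()
```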
Our method demonstrates superior performance both qualitatively and quantitatively compared to
state-of-the-art techniques.