Meta’s ‘Codec Avatars’ Now Feature Customizable Hairstyles

Summary
– Meta’s Codec Avatars are photorealistic digital humans driven by VR headset tracking, aiming to create a true sense of social presence in virtual interactions.
– The avatars now support customizable hairstyles through a new “HairCUP” system, separating head and hair modeling for flexibility and improved realism.
– Avatar creation, which originally required complex multi-camera scans, has been simplified to smartphone scans, with Gaussian splatting enabling real-time rendering.
– Current challenges include the lack of eye and face tracking in Meta’s latest headsets and the high cost of rendering, though Apple Vision Pro shows that on-device rendering is feasible.
– Meta may debut a basic version of Codec Avatars for video calls before achieving full VR integration, with updates expected at Meta Connect 2025.
Meta’s groundbreaking Codec Avatars have taken another leap forward with the introduction of customizable hairstyles, bringing virtual interactions closer to real-life social experiences. The company’s decade-long research into photorealistic digital humans now allows separate modeling of facial features and hair, marking significant progress toward overcoming the uncanny valley effect in virtual reality.
These avatars aim to create genuine social presence, the psychological sensation of being physically near someone despite geographical separation. Current video call technology falls far short of this immersive experience. While early demonstrations required high-end PCs and extensive multi-camera scans, Meta has streamlined the process, now enabling avatar creation through simple smartphone scans.
Originally, generating a Codec Avatar demanded an elaborate setup with over 100 cameras and specialized lighting. Now, Meta uses this system primarily to train a universal model, allowing new avatars to be created from a short selfie video. However, achieving the highest quality still requires server-side processing, taking roughly an hour on powerful GPUs.
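To make that two-stage pipeline concrete, here is a minimal sketch of how a universal prior plus per-user fitting could look: a decoder pretrained on studio captures is frozen, and only a small identity code is optimized against frames from a phone scan. All names, shapes, and training details here are hypothetical illustrations, not Meta's actual code.

```python
# Hypothetical sketch of a two-stage avatar pipeline: a universal
# model is pretrained once on multi-camera studio captures, then a
# new user is enrolled by fitting a small identity code to selfie
# frames. Names and shapes are illustrative, not Meta's real API.
import torch
import torch.nn as nn

class UniversalAvatarDecoder(nn.Module):
    """Maps (identity code, expression code) -> rendered face image."""
    def __init__(self, id_dim=256, expr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(id_dim + expr_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64),  # toy 64x64 RGB output
        )

    def forward(self, identity, expression):
        x = torch.cat([identity, expression], dim=-1)
        return self.net(x).view(-1, 3, 64, 64)

# Stage 1 (offline, done once): pretrain on 100+ camera rig captures.
decoder = UniversalAvatarDecoder()
decoder.requires_grad_(False)  # freeze the universal prior

# Stage 2 (per user, server-side): fit an identity code to phone frames.
selfie_frames = torch.rand(32, 3, 64, 64)           # stand-in for a scan
expressions = torch.zeros(32, 64)                   # assume neutral face
identity = torch.zeros(1, 256, requires_grad=True)  # the only free variable
opt = torch.optim.Adam([identity], lr=1e-2)

for step in range(200):  # real fitting reportedly takes ~1 hour on GPUs
    pred = decoder(identity.expand(32, -1), expressions)
    loss = nn.functional.mse_loss(pred, selfie_frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point the sketch captures is why a selfie video suffices: the expensive capture rigs are only needed to train the shared prior, while each new user contributes just enough data to locate themselves within it.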
Last year, Meta adopted Gaussian splatting, a technique revolutionizing volumetric rendering much like large language models transformed chatbots. This advancement made high-fidelity avatars more accessible, with applications already appearing in products like Varjo Teleport and Niantic’s Scaniverse. Apple has also embraced the technology for its visionOS Personas, though Meta’s research remains at the forefront of realism.
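For readers unfamiliar with the technique: Gaussian splatting represents a scene as many small, semi-transparent 3D Gaussians that are projected to the screen, sorted by depth, and alpha-composited per pixel. The sketch below illustrates that per-pixel compositing step in simplified form; it follows the spirit of the published method (Kerbl et al., 2023), not any particular product's renderer.

```python
# Minimal sketch of the core of Gaussian splatting: each 3D Gaussian
# is projected to a 2D Gaussian on screen, then splats are sorted by
# depth and alpha-composited front-to-back per pixel. Real renderers
# do this in screen tiles on the GPU; this is a toy CPU version.
import numpy as np

def gaussian_weight(pixel, mean2d, cov2d):
    """Evaluate an (unnormalized) 2D Gaussian at a pixel position."""
    d = pixel - mean2d
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d))

def composite_pixel(pixel, gaussians):
    """Front-to-back alpha compositing of depth-sorted splats."""
    color = np.zeros(3)
    transmittance = 1.0  # how much light still passes through
    for g in sorted(gaussians, key=lambda g: g["depth"]):
        alpha = g["opacity"] * gaussian_weight(pixel, g["mean2d"], g["cov2d"])
        color += transmittance * alpha * g["rgb"]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return color

splats = [
    {"mean2d": np.array([5.0, 5.0]), "cov2d": np.eye(2) * 4.0,
     "rgb": np.array([1.0, 0.2, 0.2]), "opacity": 0.8, "depth": 1.0},
    {"mean2d": np.array([6.0, 5.0]), "cov2d": np.eye(2) * 9.0,
     "rgb": np.array([0.2, 0.2, 1.0]), "opacity": 0.6, "depth": 2.0},
]
print(composite_pixel(np.array([5.5, 5.0]), splats))
```

Because each splat is a cheap, differentiable primitive, scenes can both render in real time and be optimized directly from photos, which is what made the technique spread so quickly.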
The latest innovation, detailed in the “HairCUP” research paper, introduces a split between head and hair modeling. This breakthrough lets users swap hairstyles from a preset library or previous scans without redoing facial captures. The method also refines transitions between hair and skin, improving details like bangs and potentially supporting accessories like hats in future updates.
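Conceptually, the separation means an avatar carries distinct latent representations for face and hair, so a hairstyle swap is a recombination of codes rather than a new capture. The toy sketch below illustrates that idea; the class and function names are assumptions for illustration, not the HairCUP architecture.

```python
# Hypothetical sketch of the head/hair split: if face and hair live
# in separate latent codes feeding one decoder, swapping a hairstyle
# is just recombining codes -- no new face capture needed. All names
# are illustrative assumptions, not the HairCUP codebase.
from dataclasses import dataclass

@dataclass
class Avatar:
    face_code: list[float]  # identity: skin, geometry, features
    hair_code: list[float]  # hairstyle: geometry, color, hairline

def swap_hair(avatar: Avatar, new_hair: list[float]) -> Avatar:
    """Keep the captured face, replace only the hair latent."""
    return Avatar(face_code=avatar.face_code, hair_code=new_hair)

# A preset library of hairstyle codes (stand-in values), as the
# article describes: presets or hair from the user's previous scans.
HAIR_LIBRARY = {
    "bob": [0.1, 0.9, 0.3],
    "buzz_cut": [0.7, 0.1, 0.2],
}

me = Avatar(face_code=[0.4, 0.2, 0.8], hair_code=[0.5, 0.5, 0.5])
me_with_bob = swap_hair(me, HAIR_LIBRARY["bob"])
# A decoder (not shown) would render face_code and hair_code together,
# blending the hairline region so bangs and skin transitions look right.
```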
Despite these advancements, challenges remain before Codec Avatars become mainstream. Current Quest headsets lack the necessary eye and face tracking, and rendering high-quality avatars still relies on PC hardware. While Apple Vision Pro shows that on-device rendering of Gaussian avatars is possible, Meta's headsets don't yet match that tight hardware-software integration.
An interim solution could be launching simplified 2D versions for video calls on platforms like WhatsApp and Messenger. With Meta Connect 2025 approaching in September, further updates on the technology may emerge soon.
(Source: UploadVR)