The future is now thanks to the metaverse, and real-time facial animation for Roblox avatars is an exciting prospect, but the technical challenges the company faces mean it’s not a feature that can arrive overnight. However, that doesn’t mean the US-based games platform isn’t dedicated to the cause.
Thanks to a recent Roblox blog post, we know that the company wants to take this step forward with real-time facial animation for avatars, and it’s currently exploring options to make this happen. As you can imagine, despite the vast leaps technology makes year on year, there are still monumental challenges to overcome.
Nonetheless, there are avenues to explore, and in order to execute its vision for the future, Roblox is all too happy to walk this path. Currently, the company is taking a deep learning approach “for regressing facial animation controls from a video that both addresses these challenges and opens us up to a number of future opportunities.”
To get a better idea of what Roblox intends to do, you need to understand the basic framework for facial animation and the company’s approach to it.
“There are various options to control and animate a 3D face-rig. The one we use is called the Facial Action Coding System or FACS, which defines a set of controls (based on facial muscle placement) to deform the 3D face mesh,” the blog post reads. “Despite being over 40 years old, FACS is still the de facto standard due to the FACS controls being intuitive and easily transferable between rigs.”
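To make the idea concrete, here’s a minimal sketch of how FACS-style control weights can deform a face mesh. The control names and the simple linear blendshape model are illustrative assumptions, not Roblox’s actual rig.

```python
# Minimal sketch: FACS-style controls deforming a mesh via linear blendshapes.
# Control names ("jawOpen", "lipCornerUp") and the model are illustrative only.

def apply_facs_weights(neutral, deltas, weights):
    """Deform a neutral mesh by a weighted sum of per-control vertex offsets.

    neutral: list of (x, y, z) vertex positions
    deltas:  dict mapping a control name to a list of per-vertex offsets
    weights: dict mapping a control name to an activation in [0, 1]
    """
    mesh = [list(v) for v in neutral]
    for control, w in weights.items():
        for i, (dx, dy, dz) in enumerate(deltas[control]):
            mesh[i][0] += w * dx
            mesh[i][1] += w * dy
            mesh[i][2] += w * dz
    return [tuple(v) for v in mesh]

# Toy two-vertex mesh with two hypothetical controls.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = {
    "jawOpen":     [(0.0, -0.5, 0.0), (0.0, -0.5, 0.0)],
    "lipCornerUp": [(0.0, 0.0, 0.0), (0.0, 0.2, 0.0)],
}
posed = apply_facs_weights(neutral, deltas, {"jawOpen": 1.0, "lipCornerUp": 0.5})
# posed[1] is approximately (1.0, -0.4, 0.0): jaw fully open, half a smile
```

The appeal the blog post mentions shows up even here: the same weight dictionary could drive any rig that exposes the same control names.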
Going back to the company’s deep learning approach, it intends to “take a video as input and output a set of FACS for each frame.” In order to achieve such a feat, it uses face detection and FACS regression.
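The per-frame pipeline described above can be sketched as follows. The detector and regressor here are stand-in stubs with hypothetical outputs, not Roblox’s models:

```python
# Illustrative sketch of the per-frame pipeline: detect the face in each
# video frame, crop it, then regress FACS weights from the crop.
# detect_face and regress_facs are stand-in stubs, not real models.

def detect_face(frame):
    """Stand-in detector: return a bounding box (x, y, w, h) or None."""
    return (10, 10, 64, 64) if frame.get("has_face") else None

def crop(frame, box):
    x, y, w, h = box
    return {"pixels": f"crop({x},{y},{w},{h})"}

def regress_facs(face_crop):
    """Stand-in regressor: return a dict of FACS control weights."""
    return {"jawOpen": 0.3, "lipCornerUp": 0.7}

def animate(video_frames):
    """Yield one FACS weight set per frame."""
    for frame in video_frames:
        box = detect_face(frame)
        if box is None:
            yield {}  # no face detected: emit a neutral (empty) control set
        else:
            yield regress_facs(crop(frame, box))

frames = [{"has_face": True}, {"has_face": False}]
controls = list(animate(frames))
# controls[0] carries weights; controls[1] falls back to neutral
```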
Of course, Rome wasn’t built in a day, and there are steps to take when it comes to moving forward with Roblox avatar real-time facial animation – “We initially train the model for only landmark regression using both real and synthetic images. After a certain number of steps, we start adding synthetic sequences to learn the weights for the temporal FACS regression subnetwork.”
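The two-stage curriculum in that quote can be sketched as a simple schedule. The step threshold and the data-source names are illustrative assumptions, not Roblox’s actual values:

```python
# Hedged sketch of the training curriculum: landmark regression on real and
# synthetic images first, then synthetic sequences are added to train the
# temporal FACS regression subnetwork. Threshold and names are assumptions.

LANDMARK_ONLY_STEPS = 10_000  # assumed warm-up length, not Roblox's value

def batch_plan(step):
    """Return which data sources and loss heads are active at a training step."""
    if step < LANDMARK_ONLY_STEPS:
        return {"data": ["real_images", "synthetic_images"],
                "losses": ["landmark"]}
    return {"data": ["real_images", "synthetic_images", "synthetic_sequences"],
            "losses": ["landmark", "temporal_facs"]}

assert batch_plan(0)["losses"] == ["landmark"]
assert "synthetic_sequences" in batch_plan(20_000)["data"]
```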
Going deeper into the training process, Roblox explains that “The synthetic animation sequences were created by our interdisciplinary team of artists and engineers. A normalised rig used for all the different identities (face meshes) was set up by our artist, which was exercised and rendered automatically using animation files containing FACS weights. These animation files were generated using classic computer vision algorithms running on face-calisthenics video sequences and supplemented with hand-animated sequences for extreme facial expressions that were missing from the calisthenics videos.”
Furthermore, Roblox implements several loss terms to train its deep learning network: positional loss, temporal loss, and consistency loss.
To say the future is bright is an understatement. Roblox already has a clear picture of where it’s heading with real-time facial animation for avatars. Through its research and the implementation of these methods, its trained model keeps improving, and the company hopes to keep that momentum going.
If you want to give yourself an easier time, check out our guide to Roblox face codes – there’s a face for every occasion.