Star Citizen developer wants to use your face for in-game chat

Soon, you won’t have to rely on yelling or typing in all-caps to express yourself in-game. Faceware Technologies is teaming up with Cloud Imperium Games to bring facial animation to Star Citizen. The new Face Over Internet Protocol (FOIP) feature uses webcams to detect players’ expressions and project them onto their avatars. Faceware will also release a facial motion sensor for greater precision.
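Neither company has published FOIP’s internals, but the basic shape of such a pipeline is straightforward: estimate expression weights from webcam frames and stream only those weights to peers, not the video. Below is a minimal Python sketch under that assumption. The function estimate_expression_weights is a hypothetical stand-in for a real tracker such as Faceware’s, and the UDP transport and expression names are illustrative, not anything Cloud Imperium has described.

```python
# Minimal sketch of a FOIP-style loop: webcam frame -> expression
# weights -> small network packet. estimate_expression_weights is a
# hypothetical stub standing in for a real face tracker.
import json
import socket

import cv2  # OpenCV, assumed available for webcam capture

EXPRESSIONS = ["jaw_open", "smile", "brow_raise", "blink_left", "blink_right"]


def estimate_expression_weights(frame):
    """Hypothetical stub: a real tracker would fit facial landmarks to
    the frame and convert them into 0..1 expression weights."""
    return {name: 0.0 for name in EXPRESSIONS}


def run_foip_client(peer_addr=("127.0.0.1", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    camera = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            weights = estimate_expression_weights(frame)
            # Send only the tiny weight vector, never the video itself,
            # so bandwidth stays close to that of a voice stream.
            sock.sendto(json.dumps(weights).encode(), peer_addr)
    finally:
        camera.release()
        sock.close()


if __name__ == "__main__":
    run_foip_client()
```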

Star Citizen is a science-fiction massively multiplayer online game that’s still in development. Developer Cloud Imperium has raised over $34 million from fans on crowdfunding platforms such as Kickstarter. It has compared the title to CCP’s Eve Online, saying that it wants the game to be heavily shaped by players’ actions and interactions. Peter Busch, Faceware’s vice president of business development, says that facial animation will help Star Citizen players connect more closely with their characters and express their emotions.

“Since the beginning of multiplayer gaming, there has been an evolution of player-to-player communication,” said Busch in an email. “It all started with simple text-based chat windows, then came the improvement via audio-based chat services like TeamSpeak or Discord. But there has always been a disconnect visually with what was being spoken in the audio—players’ faces in-game were typically lifeless or driven by procedural audio-driven animation, which was nothing better than simple jaw-flap.”

Cloud Imperium demoed the FOIP feature at Gamescom (Europe’s largest game industry event); the feature will arrive in Star Citizen in an update after version 3.0. Faceware built it with its LiveSDK, which provides real-time motion capture and facial animation. In 2012, Faceware spun off from Image Metrics, the company that developed the original tech used to create realistic facial animation in games like Assassin’s Creed II.

Realistic facial animation is Faceware’s specialty. The facial motion sensor, which it hasn’t released yet, can capture minute changes in expression across different lighting conditions. Busch says that avoiding the uncanny valley, the off-putting effect of a character that looks almost human but not quite, comes down to paying attention to the details.

“The motion of the eyes can instantly connect an audience to a character because it creates empathy, the first true emotion of believability,” said Busch. “Another important element to motion is a strong character rig, or digital skeleton, to ensure the character moves like a human. Lastly, all of the details must look real — skin should look like skin, hair should look like hair, etc.”
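Busch’s “character rig” point maps onto a standard technique: linear blendshapes, in which each expression is stored as a vertex offset from the neutral mesh, and tracked weights blend those offsets together. The sketch below shows that textbook evaluation; it illustrates the general method, not Faceware’s published implementation.

```python
# Standard linear blendshape evaluation: final vertices are the
# neutral mesh plus a weighted sum of per-expression offsets.
import numpy as np


def apply_blendshapes(neutral, shapes, weights):
    """neutral: (V, 3) neutral-pose vertices.
    shapes: dict name -> (V, 3) vertices at full expression strength.
    weights: dict name -> float in [0, 1], e.g. from a webcam tracker."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (shapes[name] - neutral)  # add the scaled offset
    return result


# Toy usage: a 3-vertex "mesh" whose jaw_open shape lowers one vertex.
neutral = np.zeros((3, 3))
shapes = {"jaw_open": np.array([[0, 0, 0], [0, -1, 0], [0, 0, 0]], float)}
print(apply_blendshapes(neutral, shapes, {"jaw_open": 0.5}))
```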

Faceware says that players are just now seeing what real-time motion capture can bring to the table. Busch calls out Ninja Theory’s haunting action-adventure Hellblade: Senua’s Sacrifice as an example of strikingly effective facial animation.

“I think we will all be surprised at how many ways this affects interaction in-game,” said Busch. “I believe that the experience is so natural and easy, it’s going to be the spontaneous and comical interactions that will get the most attention—the points when the players actually start to embody their own personalities. When you see ‘you’ as the character, that’s when this is really going to get fun and revolutionize multiplayer communication.”