Ninja Theory, an independent video game developer, recently showcased its cutting-edge motion capture technology through a live performance of a scene from its upcoming independent AAA title, Hellblade: Senua’s Sacrifice. Ninja Theory is known as one of the first game developers to rely heavily on motion capture, in titles such as 2013’s DmC: Devil May Cry, even though the technology has been present in video game design and movie making since the early 90s.
The demonstration took place yesterday as part of the Epic Games keynote at Game Developers Conference 2016. The audience watched in awe as, for the first time, this new type of motion capture technology carried an actor’s performance directly into the game world in real time. Departing from the way motion capture used to work, where an actor would perform lines and gestures and developers would later animate the in-game character from the recordings, Ninja Theory showcased a method that transposes the actor into the virtual scene instantly.
While the actor is wearing the motion capture suit, everything she does instantly comes to life through her pre-designed in-game character. If a scene doesn’t come out the way the producers intended, they can simply do another take, just as movie directors do. This makes game development more efficient, as developers no longer have to go back and re-film actors after rendering their characters in-game and deciding the result wasn’t what they needed.
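To picture what is happening under the hood, here is a minimal sketch of real-time retargeting: each frame, the joint rotations captured from the actor’s suit are copied straight onto the matching bones of the in-game rig, with no offline processing step in between. The code is purely illustrative; none of these names come from Ninja Theory’s actual pipeline.

```python
# Hypothetical real-time mocap retargeting loop (illustrative only).
from dataclasses import dataclass, field


@dataclass
class Bone:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles in degrees


@dataclass
class CharacterRig:
    bones: dict = field(default_factory=dict)

    def apply_pose(self, pose: dict) -> None:
        # Copy each captured joint rotation onto the matching rig bone.
        for joint_name, rotation in pose.items():
            if joint_name in self.bones:
                self.bones[joint_name].rotation = rotation


def live_retarget(mocap_stream, rig: CharacterRig, render_frame) -> None:
    """Drive the in-game character from the suit, frame by frame."""
    for pose in mocap_stream:      # one pose dict per captured frame
        rig.apply_pose(pose)       # the actor's motion lands on the character
        render_frame(rig)          # the engine draws the updated skeleton


# Usage with a fake two-frame stream:
rig = CharacterRig(bones={"spine": Bone("spine"), "head": Bone("head")})
fake_stream = [{"spine": (0, 5, 0), "head": (10, 0, 0)},
               {"spine": (0, 7, 0), "head": (12, 0, 0)}]
live_retarget(fake_stream, rig,
              render_frame=lambda r: print(r.bones["head"].rotation))
```

Because nothing is baked offline, a bad take costs only the time it takes to perform it again.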
As for how Hellblade: Senua’s Sacrifice was built from start to finish, we are being welcomed into a new age of… virtual reality, really. Everything was done in steps. First, the developers used a professional face camera to record over 120 different expressions from the actor who was to become the game’s main character. Then they photographed the actor under several different lighting setups in order to create a realistic skin texture that animates the way real skin does, but in a virtual scene. The resulting skin shader reacts to light and shows wrinkles, blood flow and the effects of facial expressions.
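The article doesn’t specify how those 120 recorded expressions drive the digital face, but a common technique in the industry is blendshape interpolation: each captured expression is stored as per-vertex offsets from a neutral mesh, and the live face camera supplies per-frame mixing weights. The sketch below illustrates that generic idea; it is an assumption, not Ninja Theory’s code.

```python
# Generic blendshape facial animation (assumed technique, illustrative only).
import numpy as np


def blend_face(neutral: np.ndarray, deltas: np.ndarray,
               weights: np.ndarray) -> np.ndarray:
    """Combine a neutral face mesh with weighted expression offsets.

    neutral: (V, 3) vertex positions of the resting face
    deltas:  (E, V, 3) per-vertex offsets for each of E captured expressions
    weights: (E,) per-frame weights, typically in [0, 1], from the face cam
    """
    # Weighted sum of expression offsets, added on top of the neutral face.
    return neutral + np.tensordot(weights, deltas, axes=1)


# Toy example: a 4-vertex mesh with 120 expression shapes, all weights zero
# except a light smile (expression 0 at 30% intensity).
neutral = np.zeros((4, 3))
deltas = np.random.default_rng(0).normal(size=(120, 4, 3)) * 0.01
weights = np.zeros(120)
weights[0] = 0.3
frame_mesh = blend_face(neutral, deltas, weights)
```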
The actor’s body was also photographed and turned into an identical digital replica, which was then studied through its full range of motion. Artists created fitted clothing for the in-game character, which was then simulated along with the motions recorded by the mo-cap so that it obeys the rules of physics and looks realistic.
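One standard way such cloth simulation works, and again this is a general technique rather than the studio’s actual solver, is Verlet integration with distance constraints: cloth particles fall under gravity while “threads” between them pull neighbors back toward their rest length, and pinned points attach the garment to the moving body.

```python
# Toy Verlet cloth step (a generic technique, not Ninja Theory's solver).
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])
DT = 1.0 / 60.0  # one 60 fps frame


def verlet_step(pos, prev_pos, pinned):
    """Advance cloth particles one frame under gravity."""
    velocity = pos - prev_pos                    # implicit velocity
    new_pos = pos + velocity + GRAVITY * DT ** 2
    new_pos[pinned] = pos[pinned]                # attachment points stay put
    return new_pos, pos                          # (in a real solver they would
                                                 #  track the animated body)


def satisfy_constraint(pos, i, j, rest_length):
    """Pull two particles back toward their rest distance (a cloth thread)."""
    delta = pos[j] - pos[i]
    dist = np.linalg.norm(delta)
    if dist > 1e-9:
        correction = delta * (dist - rest_length) / dist * 0.5
        pos[i] += correction
        pos[j] -= correction


# Two-particle thread: the top is pinned, the bottom hangs under gravity.
pos = np.array([[0.0, 1.0, 0.0], [0.0, 0.5, 0.0]])
prev = pos.copy()
for _ in range(60):                              # one second of simulation
    pos, prev = verlet_step(pos, prev, pinned=[0])
    satisfy_constraint(pos, 0, 1, rest_length=0.5)
```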
Once everything was completed, the actor, while wearing the motion capture suit and face camera, could act out entire video game cutscenes on the spot, with little to no post-processing required from the developers afterwards.