Disney Research has a very different idea for stage makeup. Last month the research arm of the global entertainment company presented a system that projects effects onto actors’ faces during live performances.
The software and hardware system, called Makeup Lamps, tracks an actor’s movements and expressions without the use of facial markers. In this way, effects can be applied to faces with light rather than physical makeup, and facial appearance can change during a performance without the actor leaving the stage.
By adjusting illumination, the system can display any colour or texture an artist wants to achieve. Actors might appear older, for example, or face paint can be added live during a performance.
Markus Gross, vice president at Disney Research, said: "We've seen astounding advances in recent years in capturing facial performances of actors and transferring those expressions to virtual characters.
“Leveraging these technologies to augment the appearance of live actors is the next step and could result in amazing transformations before our eyes of stage actors in theatres or other venues."
The concept isn’t new. Nobumichi Asai, creative director of Japanese visual studio WOW, recently deployed a similar technique during Lady Gaga’s Super Bowl appearance and also worked with a Tokyo dance duo to produce this stunning video. InAVate first reported on Asai’s work in 2014 when he unveiled Omote, a facial projection-mapping project.
Anselm Grundhöfer, principal research engineer at Disney, said that, in contrast to previous work that projected augmentations onto static objects or rigidly constrained facial movements, the Disney team showed it can track facial movements and dynamically adjust the projections to a wide variety of arbitrary movements.
"The key challenge of live augmentation is latency - the time between generating an image that matches the actor's pose and when the image is displayed," Grundhöfer said. "The larger the latency, the more the actor's pose will have changed and the greater the potential misalignment between the augmentation and the face."
Latency has been reduced at every possible step, from capturing the facial pose, through processing, to projection. To reach this goal, the complexity of the algorithms used has been limited. In addition, the team used a coaxial camera-projector setup, in which the camera that detects facial movements shares the same optical axis as the projector that illuminates the face. This enabled the team to process images in two dimensions, rather than three, and still provide consistent augmentation of the face.
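To illustrate why a shared optical axis helps, here is a minimal sketch (not the published Disney pipeline) of how a coaxial rig lets camera pixels be mapped to projector pixels with a single 2D homography, independent of the actor’s distance from the lens. The calibration point coordinates and image sizes below are hypothetical placeholders.

```python
import numpy as np
import cv2

# One-time calibration: project a few known dots and locate them in the camera
# image. With a coaxial camera-projector setup the mapping is depth-independent,
# so one homography relates camera pixels to projector pixels for the whole show.
proj_pts = np.float32([[100, 100], [1180, 100], [1180, 620], [100, 620]])  # dots sent to projector (hypothetical)
cam_pts  = np.float32([[96, 104], [948, 101], [951, 538], [94, 541]])      # where the camera saw them (hypothetical)
H, _ = cv2.findHomography(cam_pts, proj_pts)

# Per frame: the makeup layer is rendered over the tracked face in camera
# coordinates, then warped into projector coordinates before display.
makeup_cam = np.zeros((720, 960, 3), np.uint8)          # augmentation drawn in camera space
makeup_proj = cv2.warpPerspective(makeup_cam, H, (1280, 720))
```

Because everything stays in image space, no 3D face reconstruction is needed per frame, which is one way the processing step can be kept fast.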
The team conceded that some latency is unavoidable and used a method called Kalman filtering, which uses measurements over time to make predictions, enabling slight adjustments that better align the augmentations with the facial performance.
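As a rough sketch of the idea (assuming a simple constant-velocity motion model, not Disney’s actual filter), a Kalman filter can track a facial landmark and extrapolate it forward by the system latency, so the projected image lands where the face will be rather than where it was.

```python
import numpy as np

# Constant-velocity Kalman filter for one tracked landmark.
# State: [x, y, vx, vy]; measurement: [x, y].
class LandmarkKalman:
    def __init__(self, dt):
        self.x = np.zeros(4)                # state estimate
        self.P = np.eye(4) * 100.0          # state covariance
        self.F = np.eye(4)                  # state transition (constant velocity)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))           # measurement model: we only observe position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 1e-2           # process noise (hypothetical tuning)
        self.R = np.eye(2) * 1.0            # measurement noise (hypothetical tuning)

    def update(self, z):
        # Predict one frame ahead, then correct with the new measurement z = (x, y).
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        innov = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_ahead(self, latency):
        # Extrapolate to where the landmark will be when the frame is displayed.
        Fa = np.eye(4)
        Fa[0, 2] = Fa[1, 3] = latency
        return (Fa @ self.x)[:2]

# Usage: feed each tracked position, then render at the predicted position.
kf = LandmarkKalman(dt=1 / 60)               # 60 fps capture (assumed)
kf.update((320.0, 240.0))
kf.update((323.0, 241.5))
target = kf.predict_ahead(latency=0.010)     # compensate ~10 ms capture-to-display lag (assumed)
```

The filter smooths tracking noise while the extrapolation step compensates for the residual delay between capture and projection.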
Makeup Lamps is able to simulate different lighting conditions and expression-specific facial effects, such as wrinkles, while allowing the performance to change over time, according to Grundhöfer.
The researchers presented Makeup Lamps on April 24 at the European Association for Computer Graphics conference, Eurographics 2017, in Lyon, France.