3D characters walk out of 2D photos thanks to clever algorithm

University of Washington researchers have developed a method to make people and characters walk out of still photos or paintings.

The computer scientists developed an algorithm called Photo Wake-Up, which animates the person in a still image so that they appear to walk, jump or run out of the photo. The animated character can then be viewed in 3D using augmented reality (AR) devices.

In a post on the University of Washington website, Brian Curless, a professor in the Allen School, said: “There is some previous work that tries to create a 3D character using multiple viewpoints.

“But you still couldn’t bring someone to life and have them run out of a scene, and you couldn’t bring AR into it. It was really surprising that we could get some compelling results with using just one photo.”

The system could make paintings in a museum interactive and animated, or could let people create lifelike avatars of themselves.

Animated Stephen Curry
Photo Wake-Up works in several stages. First, it identifies a person in the image and creates a mask outlining the body. It then fits a 3D body template to the person’s pose. Finally, to make the template match the person in the photo, the algorithm projects it back into 2D and warps it to fit the mask.
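The projection step in that last stage is ordinary pinhole-camera geometry. The sketch below is a minimal, generic illustration of mapping 3D template vertices onto 2D pixel coordinates; the function name and camera parameters are placeholder values for illustration, not taken from the researchers’ code.

    import numpy as np

    def project_to_2d(points_3d, focal_length=1000.0, cx=512.0, cy=384.0):
        # Pinhole-camera projection of N x 3 template vertices onto the image
        # plane. The camera parameters are made-up illustration values.
        x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
        u = focal_length * x / z + cx  # horizontal pixel coordinate
        v = focal_length * y / z + cy  # vertical pixel coordinate
        return np.stack([u, v], axis=1)

    # Three template vertices roughly 2-3 metres in front of the camera.
    vertices = np.array([[0.0, 0.0, 2.5],
                         [0.1, -0.4, 2.6],
                         [-0.2, 0.3, 3.0]])
    print(project_to_2d(vertices))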

In the University of Washington article, Chung-Yi Weng, a doctoral student in the Allen School, said: “It’s very hard to manipulate in 3D precisely. Maybe you can do it roughly, but any error will be obvious when you animate the character. So we have to find a way to handle things perfectly, and it’s easier to do this in 2D.”

The researchers go on to explain that Photo Wake-Up stores 3D information for each pixel, and adds detail such as texture and colour. Cleverly, it also generates the unseen back of the person, so the character can turn around and be viewed from behind.
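As a rough illustration of what storing 3D information per pixel can look like, the sketch below keeps a depth value for every pixel alongside the colour image, and fakes a back view by mirroring the visible front texture. The mirroring trick and all names here are assumptions for illustration only, not a description of the researchers’ actual method.

    import numpy as np

    def build_back_view(front_rgb, front_depth, person_mask):
        # Illustrative approximation: mirror the visible front texture and
        # depth left-to-right to stand in for the unseen back of the person.
        back_rgb = np.flip(front_rgb, axis=1).copy()
        back_depth = np.flip(front_depth, axis=1).copy()
        back_mask = np.flip(person_mask, axis=1).copy()
        back_rgb[~back_mask] = 0          # keep colour only on body pixels
        back_depth[~back_mask] = np.inf   # no surface outside the body
        return back_rgb, back_depth, back_mask

    # Toy 4x4 "photo": per-pixel colour, per-pixel depth and a body mask.
    rgb = np.zeros((4, 4, 3), dtype=np.uint8)
    depth = np.full((4, 4), np.inf)
    mask = np.zeros((4, 4), dtype=bool)
    rgb[1:3, 0:2] = 200      # a small 2x2 "person" patch
    depth[1:3, 0:2] = 2.5    # 2.5 metres from the camera
    mask[1:3, 0:2] = True
    back_rgb, back_depth, back_mask = build_back_view(rgb, depth, mask)
    print(back_mask)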

The background behind the figure must also be filled in, so that the character doesn’t leave a blank hole when it steps out of the frame. The algorithm borrows information from other parts of the image to do this.
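The researchers have their own way of filling this hole, but off-the-shelf image inpainting does the same kind of job and gives a feel for the idea. The snippet below uses OpenCV’s Telea inpainting with hypothetical file names: the mask is white wherever the person stood, and those pixels are filled from the surrounding background.

    import cv2

    # Hypothetical input files: the original photo and a mask that is white
    # wherever the person stood in the frame.
    photo = cv2.imread("photo.jpg")
    person_mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)

    # Telea inpainting propagates colour and texture inward from the edges of
    # the masked region (arguments: image, mask, radius, method).
    background = cv2.inpaint(photo, person_mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("background_filled.png", background)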


This research was funded by the National Science Foundation, UW Animation Research, UW Reality Lab, Facebook, Huawei and Google.

[Via University of Washington]