Image: Animated diagram of an external combustion engine.

New tool streamlines the creation of moving pictures

By Molly Sharlach

It’s often easy to imagine balloons soaring or butterflies fluttering across a still image, but realizing this vision through computer animation is easier said than done. Now, a team of researchers has developed a new tool that makes animating such images much simpler.

The tool is designed to animate similar elements within an image, such as balloons or raindrops, said Nora Willett, a graduate student in Princeton’s Department of Computer Science and the lead author of a paper presenting the research. To do so, the user manually selects a subset of the repeating objects, then draws motion lines and specifies the frequency and velocity at which the objects should move. The tool’s algorithm extracts similar objects in the image and separates them into their own layer for animation.
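As a rough illustration of that workflow, the Python sketch below is a minimal, hypothetical stand-in rather than the researchers' actual system: it assumes the "find similar objects" step can be approximated with plain OpenCV template matching, and the helper names (find_similar_objects, animate), the synthetic balloon image, and the parameters are all invented for the example.

```python
# Hypothetical sketch of the workflow described above:
# 1) the user selects one example object (a template) and draws a motion path,
# 2) similar objects are found automatically (here via simple template matching,
#    a stand-in for the paper's similarity algorithm),
# 3) each match is moved along the path at a user-chosen velocity to build frames.
import numpy as np
import cv2

def find_similar_objects(image, template, threshold=0.8):
    """Return (x, y) top-left corners of regions resembling the selected template."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)   # no non-maximum suppression; duplicates possible
    return list(zip(xs, ys))

def animate(image, template, matches, path, velocity=2, num_frames=30):
    """Translate each matched object along the drawn path to produce frames."""
    h, w = template.shape[:2]
    frames = []
    for t in range(num_frames):
        frame = image.copy()
        # Sample a point along the user-drawn path, advancing by `velocity` per frame.
        idx = min(int(t * velocity), len(path) - 1)
        dx, dy = path[idx]
        for (x, y) in matches:
            patch = image[y:y + h, x:x + w].copy()
            frame[y:y + h, x:x + w] = 255     # crude background fill for the vacated spot
            nx, ny = x + dx, y + dy
            if 0 <= ny and ny + h <= frame.shape[0] and 0 <= nx and nx + w <= frame.shape[1]:
                frame[ny:ny + h, nx:nx + w] = patch
        frames.append(frame)
    return frames

# Tiny synthetic example: a white canvas with a few dark "balloons".
canvas = np.full((200, 200), 255, dtype=np.uint8)
for cx, cy in [(40, 150), (100, 160), (160, 140)]:
    cv2.circle(canvas, (cx, cy), 10, 0, -1)
balloon = canvas[140:160, 30:50].copy()            # user-selected example object
motion_path = [(0, -step) for step in range(100)]  # drawn motion line: drift upward
matches = find_similar_objects(canvas, balloon)
frames = animate(canvas, balloon, matches, motion_path)
```

In the research system, the extracted objects are separated into their own layer for animation rather than crudely filled over as in this toy version.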

“The main challenge in this system was to design an interface that allows the person and the computer to work together to create a plausible animation,” said co-author Adam Finkelstein, a Princeton professor of computer science. “The person provides clues about what aspects of the scene they would like to animate, and the computer removes much of the difficulty and tedium that would be required to create the animation completely by hand.”

The new tool builds on the existing capabilities of the Autodesk SketchBook Motion animation app. To animate a still image with the app, a user must either produce the image completely from scratch, or work with an existing image using a program such as Adobe Photoshop to select different objects and separate them into layers before generating the animation.

Developing an algorithm that could successfully identify repeating objects was surprisingly difficult, said Willett. While machine learning methods can reliably do this with photographs, training computers to recognize elements of drawings or paintings is less straightforward. “There’s such a wide range of drawing styles, and humans can create such fantastical things, that there’s just not enough data to train a machine to recognize every single fantastical drawing,” she said.

To improve the user interface, the researchers worked with six users representing a range of experience levels with digital animation. Two users chose to animate their own artworks: One created a slowly swinging light within a photograph, while another animated a ring of avocado pieces circling around other food in a drawing.

Willett’s other projects at Princeton have focused on methods for enhancing live animation of characters by adding secondary motion, such as movements of hair or clothing, and for quickly swapping parts of a live animated character to change hand gestures or accessories. She discussed her background and demonstrated these methods during a 2017 Facebook Live event for Princeton Engineering.

Willett presented the team’s results on October 16 at the Association for Computing Machinery’s Symposium on User Interface Software and Technology in Berlin. She began working on the tool during an internship at Autodesk Research in Toronto. In addition to Finkelstein, other co-authors were Rubaiat Kazi, Michael Chen and George Fitzmaurice of Autodesk Research; and Tovi Grossman of Autodesk Research and the University of Toronto.
