Creating Realistic Animations Using Video

Abstract

Generating realistic animations of passive dynamic systems such as rigid bodies, cloth and fluids is an important problem in computer graphics. Although several techniques for animating these phenomena have been developed, achieving the level of realism exhibited by objects in the real world has proven remarkably difficult. We argue, therefore, that an effective way to increase the realism of these types of computer animations is to infer the behavior of real phenomena from video. This thesis presents two techniques for using video to create realistic animations of phenomena such as rigid bodies, cloth, waterfalls, streams, smoke and fire.

The first technique, inverse simulation, estimates the parameters of physical simulations from video. Physical simulation techniques are widely used in computer graphics to generate animations of passive phenomena using physical laws and numerical integration techniques. The behavior of a physical simulator is governed by a set of parameters, typically specified by the animator. However, directly tuning the physical parameters of complex simulations such as rigid bodies or cloth to achieve a desired motion is often cumbersome and nonintuitive. The inverse simulation framework instead uses optimization to automatically estimate the simulation parameters from video sequences obtained from simple calibration experiments (e.g., throwing a rigid body, waving a swatch of fabric). This framework has three key components: (1) developing a physical model that accurately captures the dynamics of the phenomenon, (2) developing a metric that compares the simulated motion with the video, and (3) applying optimization to find simulation parameters that minimize the chosen metric. To demonstrate the power of this approach, we apply the framework to estimate parameters for tumbling rigid bodies and for four different fabrics.
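The three components above can be illustrated with a minimal toy sketch (not the thesis's actual rigid-body or cloth models): a forward simulator with one unknown parameter, a sum-of-squared-distances metric against tracked positions, and a one-dimensional optimizer. All names here (`simulate`, `metric`, the drag parameter) are illustrative assumptions; the real system tracks features in calibration video rather than using a synthetic trajectory.

```python
import numpy as np

def simulate(drag, steps=100, dt=0.01):
    # Component 1: a toy physical model -- forward-Euler integration of a
    # tossed body under gravity with linear drag (the unknown parameter).
    pos, vel = np.zeros(2), np.array([3.0, 4.0])
    g = np.array([0.0, -9.8])
    traj = []
    for _ in range(steps):
        vel = vel + dt * (g - drag * vel)
        pos = pos + dt * vel
        traj.append(pos.copy())
    return np.array(traj)

def metric(sim_traj, video_traj):
    # Component 2: compare simulated motion with positions tracked in video.
    return float(np.sum((sim_traj - video_traj) ** 2))

# Stand-in for video observations: a synthetic trajectory with drag = 0.5.
observed = simulate(0.5)

# Component 3: optimization -- golden-section search over the one parameter.
lo, hi = 0.0, 2.0
phi = (np.sqrt(5) - 1) / 2
for _ in range(60):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if metric(simulate(a), observed) < metric(simulate(b), observed):
        hi = b
    else:
        lo = a
estimated_drag = 0.5 * (lo + hi)
print(round(estimated_drag, 3))  # recovers a value near the true drag 0.5
```

Real cloth and rigid-body simulators have many coupled parameters, so the thesis uses more capable optimizers than this one-dimensional search, but the structure of model, metric, and minimization is the same.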

The second technique is a video editing framework for creating photorealistic animations of natural phenomena such as waterfalls by directly editing reference video footage. Our algorithm analyzes the dynamics and appearance of textured particles in the input video along user-specified flow lines, and synthesizes seamless, infinite video by continuously generating particles along those flow lines. The user can then edit the video by manipulating flow lines from the original footage. The algorithm is simple to implement and use. To demonstrate the editing capability of our approach, we applied this technique to perform significant edits, such as changing the terrain of a waterfall, adding obstacles along the flow, and adding wind to smoke and flames.
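The core idea of continuously generating particles along a flow line can be sketched as a tiny 1D model. This is an illustrative assumption of the data structures involved (the class and method names are invented for this sketch, and real particles carry texture sprites sampled from the input video, not just positions):

```python
import math

class FlowLine:
    """Illustrative 1D particle model: each particle is a position
    measured by arc length along a user-drawn polyline."""

    def __init__(self, points, speed):
        self.points = points      # polyline vertices of the flow line
        self.speed = speed        # advection speed along the line
        self.length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
        self.particles = []       # arc-length position of each live particle

    def step(self, dt):
        # Advect particles downstream; retire those past the end of the line.
        self.particles = [s + self.speed * dt for s in self.particles
                          if s + self.speed * dt < self.length]
        # Continuously emit a new particle so the synthesized video is endless.
        self.particles.append(0.0)

# Editing amounts to redrawing the flow line's geometry: the same particle
# stream then follows the new terrain.
line = FlowLine([(0, 0), (3, 4), (6, 8)], speed=2.0)
for _ in range(50):
    line.step(dt=0.1)
print(len(line.particles))  # 50 particles, all still on the 10-unit line
```

Because every particle lives on a 1D flow line, edits such as moving the line around an obstacle or bending it with wind only change the geometry the particles are mapped through, not the particle dynamics themselves.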

The results from these two techniques demonstrate the effectiveness of using video to improve the realism of computer animations. Our experience applying inverse simulation to cloth led to improvements in existing cloth models and produced animations that matched the realism of real fabrics. Our research on flow-based modeling produced simple 1D particle models that enable animating a wide variety of natural phenomena. We hope that the methods developed in this thesis provide useful insights for designing realistic animations in other domains.

Thesis Document

Download the thesis document:

Links to Project Pages

Inverse Simulation:
Flow-based Video Editing: