Performance capture is the simultaneous recording of a person’s voice, body movements, and facial expressions as data that animators, visual effects (VFX) artists, and game developers map onto realistic yet larger-than-life CGI characters.
Performance capture employs a variety of hardware systems to translate a person’s entire performance into data that animators and VFX artists can use to make enchanting digital characters. These include optical, inertial, and markerless performance capture.
Optical performance capture uses reflective or LED markers attached to a motion capture suit and to the actor’s face, with multiple cameras recording the performance. Marker data from the recordings is translated into moving digital skeletons, which are then applied to rigged 3D character models. This common performance capture method produces accurate, realistic digital characters.
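As a rough illustration of the solving step, the sketch below (in Python, with hypothetical marker names and a fabricated frame of data, not any vendor’s actual file format) approximates each joint’s position as the centroid of the optical markers attached to that body segment.

```python
import numpy as np

# Hypothetical marker layout: each joint is tracked by a small cluster of
# optical markers; the joint position is approximated by the cluster centroid.
MARKER_CLUSTERS = {
    "head":       ["HEAD_L", "HEAD_R", "HEAD_TOP"],
    "left_elbow": ["LELB_OUT", "LELB_IN"],
}

def solve_skeleton(frame_markers: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Turn one frame of labeled 3D marker positions into joint positions.

    frame_markers maps a marker name to its (x, y, z) position in metres.
    Occluded (missing) markers are simply skipped for that frame.
    """
    joints = {}
    for joint, marker_names in MARKER_CLUSTERS.items():
        points = [frame_markers[m] for m in marker_names if m in frame_markers]
        if points:
            joints[joint] = np.mean(points, axis=0)
    return joints

# One fabricated frame of labeled marker data, standing in for what a pipeline
# might read from an exported capture take.
frame = {
    "HEAD_L":   np.array([0.05, 1.70, 0.00]),
    "HEAD_R":   np.array([-0.05, 1.70, 0.00]),
    "HEAD_TOP": np.array([0.00, 1.80, 0.00]),
    "LELB_OUT": np.array([0.35, 1.20, 0.02]),
    "LELB_IN":  np.array([0.30, 1.20, 0.05]),
}
print(solve_skeleton(frame))
```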
Inertial motion capture (IMC) gathers data from sensors placed on an actor’s body, such as accelerometers, gyroscopes, and magnetometers, rather than from markers. IMC data can be processed in real time. Compared to optical performance capture, IMC is simpler to implement because it doesn’t require the same elaborate camera setups.
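To make the sensor-fusion idea concrete, here is a minimal sketch (Python; the sample rate, readings, and blend factor are all assumptions) of a complementary filter, one of the simplest ways an inertial system can combine a drifting gyroscope with a noisy accelerometer to estimate a joint angle in real time.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a single pitch angle.

    The gyroscope integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free. Blending the two is the simplest form of the
    per-sensor fusion an inertial mocap suit performs in real time.
    """
    gyro_estimate = pitch_prev + gyro_rate * dt        # integrate angular velocity
    return alpha * gyro_estimate + (1 - alpha) * accel_pitch

# Fabricated sensor stream: a slow 30 deg/s rotation with a small accelerometer offset.
pitch = 0.0
dt = 1 / 120  # assumed 120 Hz sample rate
for step in range(120):
    gyro_rate = math.radians(30)                        # gyroscope reading, rad/s
    accel_pitch = math.radians(30) * step * dt + 0.01   # noisy absolute reading, rad
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)
print(f"estimated pitch after 1 s: {math.degrees(pitch):.1f} deg")
```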
Markerless performance capture does not require markers, suits, or body-worn sensors. Instead, computer vision and other algorithmic techniques in performance capture software analyze video frames, much like a 3D scanner, to track movement and convert the image into manipulable 3D data. This method allows actors to perform as normal, free of any unnatural apparatus on their person. Markerless performance capture may require multi-view camera setups, like those Martin Scorsese used in 2019’s The Irishman to create younger-looking versions of Robert De Niro and Joe Pesci.
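As an accessible stand-in for this kind of video analysis (the article doesn’t name a specific tool), the sketch below uses the open-source MediaPipe Pose library to estimate body landmarks from each frame of an ordinary video clip; the clip name is hypothetical, and production systems use far more sophisticated multi-view solvers.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("performance_take_01.mp4")  # hypothetical clip of an actor

# Markerless tracking: a pose-estimation model infers 3D body landmarks
# directly from ordinary video frames, with no suit or markers involved.
with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            wrist = results.pose_world_landmarks.landmark[mp_pose.PoseLandmark.LEFT_WRIST]
            print(f"frame {frame_index}: left wrist at "
                  f"({wrist.x:.2f}, {wrist.y:.2f}, {wrist.z:.2f}) m")
        frame_index += 1
cap.release()
```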
In its infancy, markerless performance capture was considered less accurate than other methods. But as performance capture software improves—in part by harnessing more artificial intelligence (AI) technology—markerless performance capture could become accessible from any smartphone, allowing anyone to create a cartoon avatar of themselves with an app.
Regardless of the visual medium or data capture method used, performance capture software is the crucial common denominator in the workflow a media post-production team uses to embody a digital character with the “soul” of a human performer. This software will also play a part in making performance capture an aspect of extended reality (XR) media—virtual, augmented, and mixed reality—as well as in live entertainment through real-time performance capture.
Motion capture (mocap) and performance capture (Pcap) are intimately related processes, and the two terms are often used synonymously. However, performance capture is more precisely described as a continuation of the earlier motion capture innovation.
Whereas motion capture records the major body movements of a stunt performer, dancer, or actor as data through special mocap suits and motion capture software, performance capture comprises the simultaneous capture of a performer’s voice, major body movements, and subtle movements such as gestures and facial expressions through performance capture software and other technology detailed above.
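One way to picture what “simultaneous capture” means in data terms is a single time-coded record that carries body, face, and voice together. The sketch below is illustrative only; the field names and blendshape labels are assumptions, not any specific tool’s schema.

```python
from dataclasses import dataclass

@dataclass
class PerformanceFrame:
    """One time-coded sample of a captured performance.

    Real pipelines typically key everything to a shared timecode so that
    body, face, and voice stay in sync; the names below are illustrative.
    """
    timecode: float                       # seconds since the start of the take
    joint_rotations: dict[str, tuple]     # body: joint name -> quaternion (w, x, y, z)
    blendshape_weights: dict[str, float]  # face: e.g. {"jawOpen": 0.4}
    audio_samples: list[float]            # voice: PCM samples for this frame window

frame = PerformanceFrame(
    timecode=3.125,
    joint_rotations={"spine_01": (0.99, 0.05, 0.0, 0.0)},
    blendshape_weights={"jawOpen": 0.42, "mouthSmile_L": 0.18},
    audio_samples=[0.01, 0.03, -0.02],
)
print(frame.timecode, frame.blendshape_weights["jawOpen"])
```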
Early motion capture was used in the 1990s in video games such as Virtua Fighter 1 & 2 and in movies like Batman Forever and Star Wars: Episode I—The Phantom Menace, with its character Jar Jar Binks. A famous bridge between mocap and Pcap came in the early 2000s with Andy Serkis’ portrayal of Gollum in The Lord of the Rings: The Two Towers and The Return of the King. While that role utilized a new real-time motion capture system (and is sometimes called performance capture), the producers captured his performance in several stages. Since then, performance capture technology has evolved to capture an entire performance in real time, while also being less physically encumbering for actors.
There are many well-known examples of Pcap, as performance capture software and technology continue to become less expensive and more powerful. Serkis has popularized many of them, including Caesar in the Planet of the Apes reboot trilogy (2011–2017) and Supreme Leader Snoke in Star Wars: Episodes VII & VIII (2015–2017). Josh Brolin’s Thanos in Avengers: Infinity War (2018) and Avengers: Endgame (2019) was created with performance capture, as are Zoe Saldaña’s Neytiri and other characters in the ongoing Avatar movies.
Performance capture technology adds a luster to CGI productions that’s difficult, if not impossible, to create without it.
Performance capture animation enables the height of visual artistic style to combine with the minute human nuances of extraordinary natural acting, adding up to memorable spectacles that audiences and gamers love.
Performance capture software and techniques accurately record the most subtle movements and expressions in an actor’s performance as data, making more emotionally engaging CGI character performances possible for games, TV, film, and other media.
Over the last few years, the efficiency and realism of real-time performance capture have gone up while the total costs have gone down, allowing more bootstrapped productions to take advantage of performance capture software that humanizes CGI characters.
WARNER BROS GAMES AVALANCHE
For the bestselling and acclaimed game Hogwarts Legacy, Avalanche’s virtual production team used real-time cinematics and the latest performance capture technology in Autodesk Maya and Autodesk MotionBuilder to graft actors’ movements onto rigged 3D characters, bringing the well-known fantasy world to life.
Image courtesy of Warner Bros. Games Avalanche
MPC/DISNEY
In Disney’s live-action The Jungle Book remake, the boy Mowgli’s co-stars are a bunch of animals. To achieve director Jon Favreau’s goal of endowing those talking animals with both realism and personality, creative studio MPC (The Mill) used performance capture with Autodesk Maya for rigging and animation.
Image courtesy of MPC ©2016 Disney Enterprises, Inc.
LUMINOUS PRODUCTIONS
Take an inside look at the performance capture, standard character rigging layouts, and animation in Autodesk Maya and MotionBuilder needed to bring Forspoken’s many game cutscenes to life.
Image courtesy of © 2023 Luminous Productions Co, Ltd.
Check out what motion capture can do for your productions, and how sophisticated performance capture software from Autodesk can map actors’ movements and facial expressions onto 3D-modeled characters.
Get up to speed on the overall picture of visual effects (VFX), which includes CGI, motion capture/performance capture technology, green screen filming, and compositing.
Performance capture and motion capture are important techniques within the larger concept of virtual production, which combines game technology with live-action filmmaking to place live actors within digitally created environments.
Read about Autodesk’s initiatives and partnerships that will make cloud-based virtual production more accessible to lower-budget projects as performance capture technology becomes more affordable, and real-time game engine image quality improves.
Learn the technologies associated with recording performance-capture data and then how to import and work with that data in Autodesk 3ds Max to animate a biped character.
Performance capture is the evolution of motion-capture technology, where an actor’s entire performance—body movements, gestures, facial expressions, and voice—is captured as data simultaneously through the use of sensors, visual markers, and/or advanced camera techniques.
Then, through performance-capture software, animators and game producers can map the actor’s performance onto 3D-modeled characters, resulting in the highest level of realism from CGI characters.
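As a toy illustration of that mapping step (often called retargeting), the sketch below copies an actor skeleton’s joint rotations onto a character rig with different bone names and scales the root motion by the height ratio. The joint names and numbers are hypothetical, and real performance capture software handles much more, such as bone offsets, IK, and filtering.

```python
# Hypothetical mapping from actor skeleton joints to character rig bones.
JOINT_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftArm": "upperarm_l",
}

def retarget(actor_pose: dict, actor_height: float, char_height: float) -> dict:
    """Map one frame of an actor's pose onto a differently proportioned rig."""
    scale = char_height / actor_height
    char_pose = {}
    for actor_joint, rotation in actor_pose["rotations"].items():
        if actor_joint in JOINT_MAP:
            char_pose[JOINT_MAP[actor_joint]] = rotation  # rotations copy directly
    # Root translation is scaled so a shorter actor can drive a taller character.
    char_pose["pelvis_translation"] = tuple(
        v * scale for v in actor_pose["root_translation"]
    )
    return char_pose

actor_pose = {
    "rotations": {"Hips": (1.0, 0.0, 0.0, 0.0), "LeftArm": (0.97, 0.0, 0.24, 0.0)},
    "root_translation": (0.0, 0.92, 0.10),
}
print(retarget(actor_pose, actor_height=1.75, char_height=2.30))
```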
The difference between motion capture (mocap) and performance capture (Pcap) is the extra detail—especially in facial expressions—that Pcap records.
Motion capture records the major body movements of an actor or stunt performer through mocap suits or special camera techniques. That data is transferred to a rigged 3D character model.
In addition to body movements, Pcap also captures an actor’s much more data-dense facial expressions and subtle gestures. This is accomplished through marker tracking on the actor’s face and body or through newer 3D scanning methods that translate every frame of a camera recording into performance data that can be transferred onto 3D character models—resulting in highly realistic and genuine CGI performances.
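On the facial side, solvers commonly convert tracked face points into blendshape weights that drive the character’s face rig. The sketch below is a simplified, assumed example: it turns a lip-gap measurement into a 0–1 “jawOpen” weight; the point names, normalization, and tuning constant are illustrative only.

```python
import numpy as np

def jaw_open_weight(upper_lip: np.ndarray, lower_lip: np.ndarray,
                    face_height: float, max_open_ratio: float = 0.08) -> float:
    """Convert a tracked lip gap into a 0-1 'jawOpen' blendshape weight.

    upper_lip / lower_lip are 3D positions of two tracked face points (from
    facial markers or a markerless face solver); face_height normalizes for
    how close the actor is to the camera. max_open_ratio is a tuning constant.
    """
    gap = np.linalg.norm(lower_lip - upper_lip) / face_height
    return float(np.clip(gap / max_open_ratio, 0.0, 1.0))

# Fabricated tracking sample: a half-open mouth.
weight = jaw_open_weight(np.array([0.0, 0.020, 0.0]),
                         np.array([0.0, -0.004, 0.0]),
                         face_height=0.60)
print(f"jawOpen = {weight:.2f}")  # this weight drives the rigged face's jawOpen shape
```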
Performance capture (Pcap) is used because animators and visual effects (VFX) producers have always sought methods for making their work more life-like and believable. Pcap is currently the best way to transfer a human actor’s authentic, full performance onto a 3D-modeled character.
Early motion capture and Pcap transcended the technique of rotoscoping, where animators traced over filmed footage. In recent years, Pcap has evolved to become more effective at a lower overall expense than ever before.
Performance capture technology allows media producers to combine the power and subtlety of actors’ performances with the unique artistic vision of each production’s CGI animation and VFX.