behind real steel
Tuesday, 08 November 2011 07:57

Real Steel: A New Virtual Production Paradigm

Bill Desowitz finds out how Digital Domain and Giant Studios took virtual production to the next level.
By Bill Desowitz | Friday, October 14, 2011 at 5:27 pm
Posted In | Site Categories: CG, Films, Visual Effects
Atom on the set and in the movie. All images courtesy of Digital Domain.

Real Steel, the new boxing robot movie, takes the Simulcam developed for Avatar and puts it into a real-world setting for the next advancement in virtual production. Giant Studios, under the leadership of Matt Madden, the virtual production technical supervisor, came up with a new system for a new paradigm.

"It really worked beautifully for us with production and with Digital Domain," Madden suggests. "We spent the time upfront figuring out how the pieces would fit together and how we would communicate. It's a model we're going to be referring to time and time again moving forward."

Unlike previs, Giant knew they would ultimately be delivering a form of the movie back to DD at game-level render quality. The action itself, though, was going to be fairly close to final, with the exception of the additional animation layer and effects that DD would put on top of it: the electronics, liquids and ripping metal.

 

Before/after: The virtual production pipeline allowed the CG robots to be handled closer to a live-action shoot.

 

One of the main principles of this virtual production pipeline was that Giant stayed in sync with the visual effects and art departments on the look and structure of the assets being created. There was an approval process, so Giant always knew the status of an asset, whether it came from the art department or from VFX.

"Our responsibility is to get them ready for this interactive world of virtual production so we can play them live, we can record changes, we can add new versions of prop elements, if we need to change a lighting setup, we can do that, and all those things can be recorded and referenced in a data base so the visual effects department can access that information intuitively," Maden continues.

What was helpful with Real Steel, however, was that director Shawn Levy completely bought into the process. "The whole MO is to make it more like traditional filmmaking and make it interactive like live action," Madden emphasizes. "Only we come in with a real-time display of the CG elements.

 

Before/after: Director Shawn Levy could direct his virtual boxers.

 

"He was able to direct his fighters, which were ultimately the robots, prior to location in Detroit. And then, once he reviewed the cut from our renders at the virtual production level, he could then request changes to speed or the blocking or the timing of a punch, and we made those changes and submitted updated renders back to him to lay into the cut. He and the editor [Dean Zimmerman] and the producers were happy with the general action and timing of the fight prior to going to location. And, consequently, we were able to get through those fight scenes in record time because we were armed with that prior to photography."

Everyone involved in the physical setup got to review the process as well, not just as boards but as an actual cut. So they understood upcoming beats, they understood the coverage, and they understood where the camera was and what was in the background, and what wasn't. Madden says it helped across the board.

Giant also took the Simulcam process of simultaneous CG display to the next level. It wasn't just cranes and dollies; there was extensive use of Steadicam, which required a system robust enough to record that fast-moving, dynamic camera action.


Before/after: Unlike Avatar, the synthetic characters could be on set.

According to Digital Domain's Erik Nash, the production VFX supervisor, previs was achieved completely through real-time interactive means: Levy was in the ring with the boxing performers, directing them as he would human boxers, and was then able to come up with his camera moves in a very hands-on way.

"So heading to Detroit we brought the motion capture technology with us, but, unlike Avatar, we were putting our synthetic characters into the real world," Nash explains. "We were able to make the boxing robots visible to the camera operator and to Sean on his monitor. We now have plates that are photographed as if the robots are there.

"So the efficiency is huge, but, to me, the reason for taking this technology and pushing it to the next level was to attain a grittier and more visceral experience.

Before/after: The mo-cap was only a foundation for the overall CG performances.

But the motion capture was only a foundation for the performance. Because of the two-foot scale difference between the real actors and the CG robots, all of the data was slowed down 10% prior to the virtual camera and Simulcam process in Detroit. "We did that to help sell the weight, size and mass of the robots," Nash offers. Once that data was turned over to the animators at DD, the process had several phases. To attain the robotic nature of the characters, they addressed the fidelity with which motion capture records all the subtle nuances of human motion by developing tools to filter the MoCap data. Then there was a lot of keyframing to heighten the action and make some of the movement less fluid. There's always inaccuracy when two CG characters make contact with each other, and the MoCap actors didn't actually hit each other as hard as the CG robots needed to, so the animators sped up punches, hardened the punch impacts and exaggerated the reactions.
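As a back-of-the-envelope illustration of those two steps, the retime and the nuance-stripping filter, here is a minimal, hypothetical C++ sketch. It is not Digital Domain's pipeline or tooling; the channel data, function names and filter window are invented purely to show the idea of slowing sampled motion data by 10% and low-pass filtering away the subtle human detail.

// Minimal sketch, not Digital Domain's actual tools: retime one sampled
// animation channel to play 10% slower, then low-pass filter it so the
// subtle human nuance drops out and the motion reads as more mechanical.
#include <algorithm>
#include <cstdio>
#include <vector>

// Stretch the clip by 'scale' (1.1 = 10% slower) with linear interpolation.
std::vector<float> retime(const std::vector<float>& src, float scale) {
    std::vector<float> dst;
    std::size_t outFrames = static_cast<std::size_t>(src.size() * scale);
    for (std::size_t i = 0; i < outFrames; ++i) {
        float t = i / scale;                                // position in source frames
        std::size_t f0 = static_cast<std::size_t>(t);
        std::size_t f1 = std::min(f0 + 1, src.size() - 1);
        float a = t - static_cast<float>(f0);
        dst.push_back(src[f0] * (1.0f - a) + src[f1] * a);
    }
    return dst;
}

// Moving-average filter: wider windows strip more of the small tremors
// and micro-adjustments that make captured motion feel human.
std::vector<float> smooth(const std::vector<float>& src, int radius) {
    std::vector<float> dst(src.size());
    for (long i = 0; i < static_cast<long>(src.size()); ++i) {
        float sum = 0.0f;
        int n = 0;
        for (long j = i - radius; j <= i + radius; ++j) {
            if (j >= 0 && j < static_cast<long>(src.size())) { sum += src[j]; ++n; }
        }
        dst[i] = sum / n;
    }
    return dst;
}

int main() {
    // Toy elbow-rotation channel in degrees; a real clip has many joint channels.
    std::vector<float> channel = {0, 5, 12, 30, 55, 80, 90, 85, 60, 30, 10, 0};
    std::vector<float> slower  = retime(channel, 1.1f);     // sell weight and mass
    std::vector<float> robotic = smooth(slower, 2);          // less fluid, more robot
    for (float v : robotic) std::printf("%.1f\n", v);
    return 0;
}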

"One of the biggest challenges was a result of the fact that three of the hero robots had practical onset animatronic versions built by Legacy," Nash explains. "That was great to have something physical for our robots to be intercut with."

Digital Domain used V-Ray as the renderer in conjunction with its lighting pipeline, creating more than a half-dozen prime robots. The toughest was Zeus, the villain who fights the hero, Atom, because he was all black and didn't have an animatronic counterpart.

"But our job wasn't done until you couldn't tell them apart," Nash concludes.

 
the debate rages
Saturday, 15 October 2011 09:20

Demystifying Motion Capture Technique

By Mike Seymour


The VES recently held a motion capture event where a range of motion capture experts presented a snapshot of the state of the art in the industry. David Stripinis reports with a personal perspective, as someone who works with the technology daily.

Motion capture is one of the more polarizing technologies used in the world of visual effects. Some people love it, some people fear it. Personally, as an artist who has worked with motion capture for over ten years, I see it as just another tool. Perhaps out of ignorance, experience with unskilled artists, or impossible promises made to them in the past, many producers and directors have an inaccurate view of the technology. They either believe it a cure-all that will let them get rid of all those pesky and expensive animators, or see it as an evil shortcut that will leave them with a movie full of zombie-eyed people.

Of course, neither is entirely accurate. To help educate producers, directors and the general visual effects community, the VES, in co-operation with the Motion Capture Society, hosted an event at Sony Pictures Imageworks in Culver City, California. Entitled “Demystifying Motion Capture Techniques”, it was a panel discussion with representatives from ten different companies, representing a good cross section of technologies, techniques and business models.

The first of the night's two major themes was an attempt to rebrand motion capture as performance capture. Though the two terms mean the same thing, performance capture has a less clinical-sounding name, which appeals to more artistic filmmakers. The second overarching theme was an emphasis on realtime production, either in the actual capture itself or in being extremely low impact on the production, allowing filmmakers to more or less ignore the technology.

Each speaker was given approximately ten minutes, and a question and answer period followed.

Eyetronics

First up was Nick Tesi from Eyetronics, talking about their inertial-based capture system. The unique benefits they touted for such a system were unlimited volume and, since optics are not involved, freedom from occlusion. They also showed off a facial capture system based on structured-light projection that was used on “The League of Extraordinary Gentlemen”.

Motion Analysis

Next was industry veteran Dave Blackburn from Motion Analysis. Dave used his time to talk about using motion capture not only for the exact recreation of motion, but also for more artistic applications. To this end, he showed a feminine hygiene commercial from the UK market featuring a motion-captured dancer that left many in the audience nervously laughing.

Henson Studios

The only female panelist followed: Kerry Shea from Henson Digital Puppetry Studios, part of the Jim Henson Company. Kerry discussed the decades-long legacy of Jim Henson and the company's dedication to the sanctity of the puppeteer's performance. The video she brought showed the development of the Waldo and its evolution into the Henson Digital Puppet System, followed by the first public exhibition of their PBS show “Sid the Science Kid”. It showed capture performers in costumes providing the proportions of the characters, with giant screens displaying the capture results in realtime. Each performer was paired with a puppeteer who performed both the facial animation, via hand controls, and the voice. Because it all happens in realtime, they can hold dailies, much like a regular show.

Vicon / House of Moves

Brian Rausch from Vicon / House of Moves followed. He showed a variety of work from film, television and video games, and used his time mainly to discuss Naughty Dog’s “Uncharted Territory”. He stressed that technology should never get in the way of performance. He went off on a slight tangent about animation and motion capture working together and burned up the remainder of his time, to the point where panel moderator Demian Gordon kept flashing a light at him until he quieted down. It was all in good fun, as the two are good friends.

Sony Pictures Imageworks

John Meehan from Sony Pictures Imageworks spoke about his experiences as motion capture supervisor on “I Am Legend”. John is a good friend of mine, and both his humor and intelligence were on display. He spoke of the challenges of capturing 150 moves for their Massive motion tree in two days, and how those moves were enhanced by animation, either by adding moves that couldn't be captured or by enhancing the ones that were. He stressed the partnership between animation and motion capture at Sony, and on “I Am Legend” in particular.

Giant Studios

Kevin Cushing from Giant Studios filled in for an ill Ryan Champney, who had to go to the hospital an hour before the presentation began. Kevin honored all the hard work Ryan had put in on the presentation. Giant's presentation stressed their ability to do complex capture, including retargeting, all in realtime. They showed footage from The Incredible Hulk, and of Jon Favreau in a mocap suit for Iron Man. They also showed their ability to capture live on set, even during principal photography. The video they showed was actually filmed on our stages during Avatar, and I was in the background of some of the HD video shots. Ryan is fine now, by the way.

ILM

Mike Sanders from Industrial Light & Magic spoke about both their proprietary iMocap system and the motion capture stage at their Presidio facility. One of the most interesting things about ILM's methodology is that they never use motion capture as a 100% solution, simply as a starting point for animation. Their iMocap system, which is rooted in matchmoving, is very low impact on set, relying completely on a post solution. Mike also mentioned that they keep the motion capture stage at ILM live all the time, so if an animator wants motion capture reference for a shot, they go down, put on a suit and perform it. Because the system relies on completely automated tracking, not needing the fidelity of hand trackers for a 100% solve, the captured motion is usually waiting for the animator by the time they are back at their desk.

ICT

Research legend Paul Debevec wowed the crowd with his latest work on facial capture. Furthering the research that resulted in his Light Stage technology, he showed an amazing method for extracting complex deformations of the face based on image analysis of specular highlights as light is cast from multiple angles. A calibration pass has to be performed first, but the results were quite impressive. With only ten minutes, he obviously had to race through the material (see fxguidetv for a more in-depth look at this work).

Image Metrics

Wrapping up the evening was Patrick Davenport from Image Metrics. Image Metrics' technology is dedicated to analyzing markerless facial performances and generating data from them. They showed some impressive duplication of an actress brought in specifically for data acquisition. Even more impressive was an analysis and digital replacement of Marilyn Monroe, done in partnership with Double Negative. The results were seamless, producing an audible gasp from the audience.

Gordon opened the session to questions, starting it off with a simple “What has been your favorite project?” I’m proud to say Kevin Cushing named our work on “Avatar” as his, garnering more than a few laughs from the producer- and director-heavy crowd when he mentioned the difficulty of pleasing the director. Very quickly, the discussion turned to the rights of actors and the data created from their performances. It’s an interesting philosophical and legal debate, and one that is ongoing. The panelists did their best to answer the questions raised, but there wasn't enough time allotted to fully cover the topic.

All in all, it was an amazing evening that gathered many of the best mocap artists in LA and hopefully proved educational for the DGA and PGA members in attendance. Hopefully more discussions and presentations in the same vein will continue to teach those in a position to make decisions that motion capture is neither an instant solution to a problem nor something evil to be afraid of.

 
realtime face replacement
Monday, 10 October 2011 08:37

Some day in the not-too-distant future, you may be on a service like Chatroulette, and suddenly find yourself matched up with a person who looks exactly like Angelina Jolie. Well, chances are it won't really be her. Instead, it will likely be someone using the descendant of a system put together by Arturo Castro. Using a combination of existing software, the Barcelona digital artist has demonstrated how a variety of famous faces can be mapped onto his own, moving with it in real time. While Castro's system isn't likely to fool anyone - in its present version - it's an unsettling indication of what could be possible with just a little more finessing.

 

WATCH THE VIDEO HERE

Castro's application was created using openFrameworks, an open source framework for creative coding. It was combined with FaceTracker, which produces a virtual mesh that matches a human subject's facial features. The colors of the famous faces were blended with those of Castro's own using image-cloning code developed by artist Kevin Atkinson. Finally, the FaceTracker meshes were wrapped around his face using the ofxFaceTracker add-on for openFrameworks.
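To make the moving parts a little more concrete, here is a rough, hypothetical openFrameworks sketch in the spirit of the setup described above. It is not Castro's actual code: the class name is invented, the clone/blend stage is only indicated by a comment, and the ofxFaceTracker calls reflect the add-on's commonly used methods, which may differ between versions.

// Hypothetical sketch, not Castro's app: track the webcam face with
// ofxFaceTracker and draw the tracked mesh, which is where a celebrity
// texture and the color-clone blend would be applied in the real project.
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class FaceSwapSketch : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;   // wraps Jason Saragih's FaceTracker library

    void setup() override {
        cam.initGrabber(640, 480);   // live webcam input
        tracker.setup();             // load the deformable face model
    }

    void update() override {
        cam.update();
        if (cam.isFrameNew()) {
            // Fit the face model to the new camera frame.
            tracker.update(ofxCv::toCv(cam));
        }
    }

    void draw() override {
        cam.draw(0, 0);
        if (tracker.getFound()) {
            // The mesh follows the performer's expressions in real time.
            // Castro's version texture-maps a celebrity face onto a mesh with
            // matching topology and blends its colors into the camera frame.
            ofMesh faceMesh = tracker.getImageMesh();
            faceMesh.drawWireframe();
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new FaceSwapSketch());
}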

The resulting video, which can be seen below, alternates between being funny and just plain creepy, with Castro taking on the identities of celebrities such as Marilyn Monroe, Michael Jackson and Paris Hilton.

His collaborator Kyle McDonald, who developed ofxFaceTracker, utilized a different blending algorithm for more lifelike results.

It's not hard to imagine the shenanigans that could result, should more advanced forms of this technology be used for the wrong purposes - is that really your best friend on Skype, asking you for that money? Is that really Mick Jagger telling us how white our sheets can be?
 
Interview with a Motion Capture Actor
Thursday, 15 September 2011 03:56
If you've played Guitar Hero, you've seen Adam Jennings's work. He is the facial capture talent behind the game's facial performances, having worked on every title in the video game franchise since Legends of Rock. He has also worked on such games as Guitar Hero: Metallica, Guitar Hero: World Tour, Guitar Hero: Aerosmith, Guitar Hero III: Legends of Rock, Tony Hawk's Proving Ground, Tony Hawk's Project 8 and Tony Hawk's American Wasteland.
Adam made his big-screen debut as a motion capture actor in Mars Needs Moms (2011), in which he played multiple aliens. He has logged more facial motion capture hours than any other actor in the world.
Recently, Adam sat down to answer some questions about his craft and what it means to be a motion capture actor.

Adam answers the following questions:

Jennings's entire interview can be found here: http://www.youtube.com/watch?v=qKdsFEKy3c4&feature=player_embedded



 
Man of Steel
Saturday, 10 September 2011 21:00

The new General Zod explains his Man of Steel costume.

Marc Forster explains how World War Z will preserve the key themes of Max Brooks' book. Lost's Michael Emerson gives the lowdown on his new show Person of Interest. Plus a Dark Knight Rises set video might reveal Bane's hideout!

Man of Steel

Michael Shannon, who is playing General Zod in the new Superman movie, confirms that he is the one who has been seen around the Plano, Illinois set in a skintight motion-capture suit. Here, Shannon explains that he didn't exactly expect to end up in such a suit, but he hopes the end result is worth it:

Yeah, it's one of the most humiliating garments that exists in the known universe, yes. It's very tight. It has a variety of different colors and shapes on it and it makes you feel like you're the court jester. And it's funny because when I met with Zack we were talking about it before it started and he mentioned that there was going to be a lot of CGI, or whatever. I said, 'Just don't make me wear one of those silly suits.' He said, 'Oh, yeah, don't worry, I know exactly what you're talking about.' I was like, 'It's going to be really hard for me to be intimidating if I have to wear one of those silly suits.' He said, 'I totally understand.'

Then I showed up and he's like, 'Dude, I swear to God, it's going to be so bad ass when we're done. Trust me, it's going to be wicked.' And, you know, people understand and you get used to it. The first day, you feel like you're getting rushed by a fraternity... and then it wears off the next day. Because I'm not the only one wearing one – there are other people wearing them, too.

 
ROTPOTA
Tuesday, 12 July 2011 08:17

Director Rupert Wyatt and Andy Serkis Talk About the Motion Capture Technology Used in RISE OF THE PLANET OF THE APES

by Rob Vaux




Primatologists, professors and filmmakers gathered at Caltech in Pasadena this past Thursday to discuss 20th Century Fox’s new film Rise of the Planet of the Apes. Apes director Rupert Wyatt headlined a panel that included visual effects supervisor Joe Letteri, Dian Fossey Foundation representative Clare Richardson and Caltech professor of philosophy Steve R. Quartz, as well as actor Andy Serkis via Skype from London. Their discussion centered on Fox’s re-imagining of its venerable Planet of the Apes franchise, the role of motion capture technology in the film, and its implications for the status of the great apes in our world today.

Hit the jump for a recap of the panel that includes quotes from both Wyatt and Serkis. Starring James Franco, Tom Felton, Freida Pinto, Brian Cox, John Lithgow, and the aforementioned Andy Serkis, Rise of the Planet of the Apes hits theaters on August 5th.

For me, the most surprising revelation was that the film itself involved no live animals at all. The apes – particularly the hyper-intelligent Caesar, who leads a simian revolt against humanity – are all rendered by performance capture. The filmmakers touted the advances of the technology in the years since it was first introduced, as well as its ability to help them avoid any moral compromises on the film. As Wyatt explained:

“We had a choice: we could use live apes or we could use performance capture technology. There was no way we could put actors in… simian suits and pull it off. We immediately set about exploring both options, and we very quickly put to bed the idea of using live apes for all sorts of reasons. I personally think it would have been a bit of an irony to be telling the story of our most exploited and closest cousins, and use live apes to tell that story. I think it would have been a cruel twist.”

With performance capture technology as the only viable option, the filmmakers turned to Weta Digital in New Zealand to see it through. Their challenge was to render the ape characters with total realism, often appearing side by side with live actors in the same shot. Serkis – who has become the go-to performer for motion-capture roles – explained that such a notion wouldn't have been possible just a few years ago:

“When we first started working on Gollum, it was a sort of freakish activity that people didn’t really understand… I would actually shoot the scenes on set, and my performances were shot on 35mm. We’d always shoot a blank plate, so I’d play the scene with other actors: with Sean Astin and Elijah Wood… Then I would go back months later and – by myself in a motion capture studio – work in isolation with the plates we had shot. Using tennis balls and a stick, I’d then have to act with a pretend Elijah Wood and a pretend Sean Astin.

Here, this is the first film that uses performance capture on a live-action set. For the entire shoot, we were fully integrated into the live-action shooting… with head-mounted camera and so on. We get the entire performance in one hit: emotionally connecting with the other actors, not having to repeat everything, every single decision made with the director… It’s come to the point where it really isn’t any different from live-action acting.”

Wyatt also showed a number of clips from the film which depicted Serkis’s Caesar being raised in secret by a benevolent scientist, only to be shown first-hand the cruelty that humanity is capable of. Feeling abandoned by his “parent,” he begins to realize his own worth as he rallies his fellow apes against those he believes are oppressing them. For a summer film, it looks extremely intense, with scenes of animal entrapment and torture in science labs, and a deep connection – ultimately betrayed – between Caesar and his human family. Serkis’s performance shines through in each of the clips. If the film ultimately works, it will be because the technology captures his expressions and body movements so perfectly. The actor explained his level of comfort with the technology, saying:

“A motion capture costume is actually a very liberating costume. The alternative is to wear a suit with fur and have layers of prosthetics on your face, like the actors did in the original Apes movies. For me, I find that much more restrictive… performance capture allows you to just play intention without being encumbered in any way.”

Rise of the Planet of the Apes opens in theaters on August 5th.


 