Pixar had a problem. It had a great new idea for a movie—Elemental, built around characters dreamed up by The Good Dinosaur director Peter Sohn—but actually animating the film’s titular elements was proving difficult. After all, it’s one thing to draw a crumbling mound of sentient dirt, but how do you capture the ethereal nature of fire onscreen, and how would a corporeal body made of water even work? Can you see through it? Do the eyes just float around?
While some of those questions could be answered with good old-fashioned suspension of disbelief, Pixar’s animators thought the fire issue was a real conundrum, especially since one of the movie’s leads, Ember, was supposed to be made of the stuff. They had tools for flame effects from years of previous films, but when they tried to shape fire into a character, the results were pretty terrifying: a cross between Studio Ghibli’s Calcifer and Nicolas Cage’s Ghost Rider, but somehow harsher.
“Our fire fluid simulations are very naturalistic and they're designed to mimic reality,” says VFX supervisor Sanjay Bakshi. With a character like Ember, Bakshi says, “it's really important to concentrate on the performance of the face,” but the studio was having trouble balancing the dynamism of the fire with the character’s shape and sensibilities. Paul Kanyuk, a crowds technical supervisor at Pixar, says that at first crack, Ember looked like a ghost or even a demon. “It can look horrifying if it's too realistic, like you actually have a human figure made of real pyro,” he explains.
Even if you can get the scary tamped down, Sohn says, you still have to craft something that’s recognizably fiery. “Fire naturally is so busy, but if you slow it down, it can turn into something that looks like a plasma,” he explains. “It was interesting to compare it to other anthropomorphized characters, because they’re all very fantastical and you can do anything with them. If you’re drawing an emotion, there is no one-to-one, but everyone knows what fire looks like.”
Basically, Sohn explains, to make Ember, every single shot of Elemental would need an effects pass, something that’s not only incredibly time-consuming, but also very expensive.
Fortunately, Kanyuk had an idea. He’d been working on crowd animation at Pixar since 2005, starting with Ratatouille, and had always struggled to make the clothes on big groups of people look right. While trying to solve that problem, he’d gotten involved with the Association for Computing Machinery’s Siggraph, a community organization devoted to the advancement of computer graphics. Around 2016, he found some of the group’s research on using machine learning to hone cloth simulations and has been trying to master the approach ever since.
Elemental gave him an opportunity to apply what he learned.
Around 2019, Kanyuk came across a paper out of Siggraph Asia about using neural style transfer (NST)—the type of artificial intelligence used to make a photo look like a Van Gogh or a Picasso—to move voxels (basically 3D pixels, with volume) around in animation, all with the goal of giving a character a certain look. Kanyuk thought NST could help Pixar solve its flame problem, though he told Sohn (who’d also signed on to direct the film) that, like much of machine learning, there was only about a 50 percent chance it would work. “I said, ‘I’m going to give you five ideas, and maybe two of them will work.’ But he said, ‘Let’s do all of them,’” Kanyuk says.
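To get a feel for what that means in practice, here is a minimal sketch of classic image-based neural style transfer in PyTorch: the same family of technique, applied to ordinary 2D images rather than Pixar's voxel data. Everything here is illustrative; the filenames, layer choices, iteration count, and loss weight are placeholder assumptions, and it presumes a recent torchvision.

```python
# Minimal Gatys-style neural style transfer sketch (assumes torch, torchvision, Pillow).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("photo.jpg")        # placeholder: the image whose structure we keep
style = load("starry_night.jpg")   # placeholder: the image whose look we borrow

# Frozen VGG-19 feature extractor (ImageNet normalization omitted to keep this short).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Which feature layers stand in for "style" (textures, shapes) vs. "content" (structure).
layers = {1: "style", 6: "style", 11: "style", 20: "style", 22: "content"}

def features(x):
    feats = {"style": [], "content": []}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[layers[i]].append(x)
        if i >= max(layers):
            break
    return feats

def gram(f):
    # Channel-correlation (Gram) matrix: a rough statistical fingerprint of "style".
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

target_content = features(content)["content"][0].detach()
target_style = [gram(f).detach() for f in features(style)["style"]]

out = content.clone().requires_grad_(True)  # start from the content image and nudge it
opt = torch.optim.Adam([out], lr=0.02)

for _ in range(300):  # iteration count is arbitrary for this sketch
    opt.zero_grad()
    feats = features(out)
    content_loss = F.mse_loss(feats["content"][0], target_content)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats["style"], target_style))
    (content_loss + 1e4 * style_loss).backward()  # 1e4 is a placeholder style weight
    opt.step()
# `out` now roughly keeps the photo's layout while taking on the painting's look.
```

The core idea carries over to volumes: keep the "content" (the simulation's overall structure and motion) while pushing the statistics of its appearance toward a chosen style exemplar.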
Kanyuk enlisted the help of Disney Research Studios, which Pixar had worked with once before, on Toy Story 4. The lab, based in Zurich, specializes in researching how AI and machine learning can do things like make actors appear older or younger, or faithfully recreate the quality of someone’s skin. “Many of us didn't do machine learning until it started becoming prevalent recently, so we've kind of learned on the job,” says Kanyuk, “whereas the research coming out of the Disney lab—they live and breathe this stuff.”
He started meeting regularly with the Research Studios team, and eventually they cracked the issue: a Pixar artist named Jonathan Hoffman was recruited to draw a set of swirly, pointy, and almost cartoonish flames the team dubbed the “fleur-de-lis.” The NST could combine those shapes with the blobbier fire from the original simulation and—bam—you got the movement and intensity of fire tempered with just a bit of Pixar’s control and style.
“Once you apply a style transfer to naturalistic fire, you can actually start to direct its style and start to bring the artist’s hand into something that is otherwise not touchable,” says co-character supervisor Jeremie Talbot. “It was a real breakthrough to be able to say, ‘I see the size of her features and the shapes here, and I want to complement those shapes with my own style.’ It harmonized the look of Ember in a really unique way.”
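Structurally, that amounts to running a stylization pass over every rendered fire frame, with the artist's drawings as the style exemplar. The sketch below is purely hypothetical: the paths, the alpha-blend stand-in, and the stylize() helper are assumptions rather than Pixar's tooling, and in a real pass stylize() would be the optimization loop from the earlier sketch.

```python
# Hypothetical per-shot stylization pass; nothing here is Pixar's actual pipeline.
from pathlib import Path
from PIL import Image

def stylize(content_path, style_path):
    # Stand-in for the neural style transfer step sketched earlier. Here it just
    # alpha-blends the exemplar over the frame so the script runs end to end;
    # the real pass would push the frame toward the exemplar's feature statistics.
    content = Image.open(content_path).convert("RGB")
    style = Image.open(style_path).convert("RGB").resize(content.size)
    return Image.blend(content, style, alpha=0.3)

STYLE_EXEMPLAR = Path("exemplars/fleur_de_lis.png")              # artist-drawn flame shapes (placeholder path)
FRAMES = sorted(Path("renders/ember_shot").glob("frame_*.png"))  # one shot's simulated fire frames (placeholder path)
OUT_DIR = Path("stylized/ember_shot")
OUT_DIR.mkdir(parents=True, exist_ok=True)

for frame in FRAMES:
    stylize(content_path=frame, style_path=STYLE_EXEMPLAR).save(OUT_DIR / frame.name)
```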
The only drawback, of course, was that this kind of machine learning took a whole lot of computing power. After all, doing a full pass on all 1,600 shots in Elemental would be a monumental task, one that demanded a serious amount of GPU time. “Originally, we didn't have the resources, so we told [Sohn] we could probably only do Ember in close-ups,” says Bakshi. Then, Kanyuk says, the animators realized that if they were using the technology on Ember, they had to use it on the other fire characters too, lest they stand out like a blobby (and fiery) sore thumb.
“The requirements went up and up,” says Kanyuk, “so we ended up getting a 20X speed-up from when we started to when we ended up deploying it by tapping into the GPUs everyone at Pixar has on their computers. We figured out a way to virtualize the GPU and take half of it to use overnight, making the time to render a frame go from about five minutes to one second.”
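The article doesn't detail how Pixar's GPU virtualization actually worked, but as a rough illustration of the "take half of it overnight" idea, here is one way a batch job could bound itself to half a card in PyTorch. The memory-fraction call is a real PyTorch API; everything else here is an assumption about how such a job might be set up, not a description of Pixar's setup.

```python
import torch

def reserve_half_gpu(device_index: int = 0) -> None:
    # Cap this process at roughly half of one GPU's memory, leaving the other
    # half free. This is just one way to bound an overnight batch job; it is
    # not a description of Pixar's actual virtualization scheme.
    if torch.cuda.is_available():
        torch.cuda.set_per_process_memory_fraction(0.5, device=device_index)

reserve_half_gpu()
# ...then run the per-frame stylization loop from the earlier sketch on whatever
# workstation GPUs sit idle after hours.
```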
It worked. Ultimately, Kanyuk and everyone else involved with Elemental were able to render the shots they needed. Pixar is still “scratching the surface” of what NST can do, he says, “but I'm very excited that we found a use case on Elemental that elevated the kind of imagery that we can create.”
For Sohn, it was an opportunity to make the movie look the way he wanted, while also making something that looked like nothing audiences had ever seen before. It symbolized, he says, one of the things he loves about Pixar: the meeting of art and technology, where the latter is a big part of the process, but only one element.
“It’s this coming together of left brain and right brain, and using technology as a tool to help express emotions,” Sohn says, “and in turn we can connect to tech, versus it feeling like some cold new thing.”