
From Thanos to a Digital You

Edward_Dixon

After creating the virtual villain in the highest-grossing movie of all time, you might wonder what’s next for a visual effects studio. For Digital Domain, it means expanding into somewhat uncharted territory: virtual humans powered by artificial intelligence (AI) systems that can respond to and interact with people in real time. In a recent episode of the Intel on AI podcast, hosts Intel AI Tech Evangelist Abigail Hing Wen and Vice President of Intel Capital Amir Khosrowshahi talk with Doug Roble, Senior Director of Software Research and Development at Digital Domain, about the brave new world the company is creating.


Digital Domain might not be a household name like, say, Marvel Comics, but the global visual effects studio is responsible for some of the biggest entertainment moments of the last thirty years. Founded by director James Cameron in the early 1990s, Digital Domain produced effects for films such as Apollo 13, Titanic, and The Fifth Element before developing digital human characters for Brad Pitt in The Curious Case of Benjamin Button. Given the studio’s longstanding relationship with Marvel Studios, Digital Domain was tapped to translate Josh Brolin’s highly emotional performance into the 8-foot purple villain Thanos in Avengers: Endgame.




“In 2014, I read this really cool paper about using machine learning to accelerate fluid simulation, and I saw the power of machine learning and what it could do for just about anything.”


-Doug Roble



Creating Thanos


In the podcast, Doug goes into detail about how Digital Domain took Josh Brolin’s nuanced on-set performance and transferred it to the digitally created character of Thanos. Doug explains that the team decided there wasn’t an acceptable way to preserve the actor’s exact expressions with the typical digital transfer process. They could, however, learn how Josh Brolin’s face moves and then teach the digital system to mimic the actor’s expressions on Thanos’s face.


As with many machine learning problems, the team first faced questions about what kind of data would be needed and how to structure it. Motion capture is also very sensitive to light; deep shadows and bright highlights can confuse the system, and output quality can degrade dramatically. On set, Josh Brolin wore a traditional motion capture suit for body movement, and for facial expressions he wore what is essentially a bicycle helmet with a camera on a stick pointing at his face, which was covered with 150 dots to give the system landmarks for how his face was moving, even under harsh, changing lighting conditions. From those tracked points, the digital character was generated: a polygonal mesh with over 60,000 triangles bringing a realistic level of detail to Thanos’s face.
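
To make the landmark-to-mesh step concrete, here is a minimal PyTorch sketch of the general idea: a small network that regresses the vertex positions of a dense face mesh from one frame of tracked 2D marker positions. The 150-marker count comes from the article; the vertex count, architecture, and layer sizes are illustrative assumptions, not Digital Domain’s actual model.

```python
# Minimal sketch: regress a dense face mesh from tracked 2D marker positions.
# The 150 markers follow the article; the vertex count (~30,000 vertices for a
# ~60,000-triangle mesh) and the architecture are illustrative assumptions.
import torch
import torch.nn as nn

NUM_MARKERS = 150       # 2D dots tracked on the actor's face
NUM_VERTICES = 30_000   # assumed vertex count of the high-resolution mesh

class MarkerToMesh(nn.Module):
    """Maps one frame of 2D marker positions to 3D mesh vertex positions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_MARKERS * 2, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_VERTICES * 3),
        )

    def forward(self, markers: torch.Tensor) -> torch.Tensor:
        # markers: (batch, NUM_MARKERS, 2) -> vertices: (batch, NUM_VERTICES, 3)
        batch = markers.shape[0]
        return self.net(markers.reshape(batch, -1)).reshape(batch, NUM_VERTICES, 3)

model = MarkerToMesh()
frame = torch.rand(1, NUM_MARKERS, 2)   # one frame of tracked dot positions
mesh = model(frame)                     # predicted face mesh for that frame
print(mesh.shape)                       # torch.Size([1, 30000, 3])
```

In practice, a model like this would be trained on many frames of marker data paired with high-quality scans or artist-approved meshes; the sketch only shows the shape of the mapping.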


Then came the hard part: figuring out how to take the data captured from Josh Brolin’s face movements and map it onto Thanos’s features. This is a tough challenge because we humans spend our lives “reading” each other’s faces, and missing realism by inches can be more jarring to an audience than missing it by a mile, an effect termed the “uncanny valley” (android and zombie faces can be very disquieting, yet we accept cartoon animals quite readily). Anything short of perfection could break the audience’s immersion in the story. Faces are complicated structures that change shape at different scales. Our expressions have forty-two muscles to play with, and these must bunch up as they contract, here swelling and stretching the skin, there crumpling it into wrinkles. That complexity is extremely challenging to manage with a conventional digital animation rig, but, given sufficient data, machines can now learn to capture it.
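
One common way to frame this kind of transfer, sketched below purely as an illustration, is to predict weights for a set of expression “blendshapes” on the creature’s rig from the actor’s captured expression data, then blend the creature’s mesh accordingly. The blendshape count, input features, and network here are hypothetical; the podcast does not spell out Digital Domain’s exact formulation.

```python
# Hedged sketch of expression retargeting via blendshapes: predict per-frame
# blendshape weights from the actor's captured expression and apply them to
# the creature's blendshape basis. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

ACTOR_FEATURES = 150 * 2    # flattened 2D marker positions (assumed input)
NUM_BLENDSHAPES = 100       # expression basis shapes on the creature rig (assumed)
CREATURE_VERTS = 30_000     # assumed creature face mesh resolution

class WeightPredictor(nn.Module):
    """Maps one frame of actor expression data to creature blendshape weights."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACTOR_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, NUM_BLENDSHAPES), nn.Sigmoid(),  # keep weights in [0, 1]
        )

    def forward(self, actor_frame: torch.Tensor) -> torch.Tensor:
        return self.net(actor_frame)

# Stand-in creature rig: a neutral mesh plus per-blendshape vertex offsets.
neutral = torch.zeros(CREATURE_VERTS, 3)
offsets = torch.rand(NUM_BLENDSHAPES, CREATURE_VERTS, 3) * 0.01

predictor = WeightPredictor()
actor_frame = torch.rand(1, ACTOR_FEATURES)             # one captured frame
weights = predictor(actor_frame)                        # (1, NUM_BLENDSHAPES)
creature_mesh = neutral + torch.einsum("bn,nvc->bvc", weights, offsets)
print(creature_mesh.shape)                              # torch.Size([1, 30000, 3])
```

The appeal of this framing is that the creature’s rig stays under artist control: the network only decides how much of each expression shape to dial in for a given frame of the performance.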



Digital Douglas


For over a decade, Digital Domain has created digital humans for projects beyond film, including a virtual likeness of the late Tupac Shakur for the Coachella festival in 2012. Together with rapid advances in AI, those projects have opened up a new opportunity for expansion. Today, the company is developing “Douglas,” an ambitious undertaking to create a believable, autonomous digital human.



A team of about twenty people started working on “Douglas” in early 2019 and unveiled it to the public this fall. The avatar is built in PyTorch, with separate GPUs handling speech recognition and the conversational AI, while the visuals are rendered in Unreal Engine. To portray the real-life Doug, “Douglas” required a huge amount of data: high-resolution scans of Doug’s face and mannerisms, plus hours of audio recordings. Interest in digital doubles such as deepfakes has grown rapidly, and the amount of data needed to bring them to life keeps shrinking; as exotic and ambitious as this project may sound, digital humans are a matter of “when” rather than “if.”
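
The division of labor described above can be pictured as a small pipeline. The sketch below uses placeholder PyTorch modules (not the actual “Douglas” models) to show speech recognition pinned to one GPU, the conversational model to a second, and the resulting animation data handed off to a renderer such as Unreal Engine; it assumes a machine with at least two GPUs.

```python
# Hedged sketch of the pipeline: speech recognition on one GPU, conversational
# response generation on another, with animation data handed to the renderer.
# The module classes are placeholders, not the models actually used for Douglas.
import torch
import torch.nn as nn

class SpeechRecognizer(nn.Module):
    """Placeholder: audio features in, recognized text (token ids) out."""
    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        return torch.zeros(1, 16, dtype=torch.long, device=audio_features.device)

class ConversationModel(nn.Module):
    """Placeholder: user tokens in, reply tokens plus facial-animation curves out."""
    def forward(self, tokens: torch.Tensor):
        reply = torch.zeros(1, 16, dtype=torch.long, device=tokens.device)
        face_curves = torch.zeros(1, 60, 100, device=tokens.device)  # 60 frames x 100 controls
        return reply, face_curves

dev_asr, dev_chat = "cuda:0", "cuda:1"      # assumes at least two GPUs
asr = SpeechRecognizer().to(dev_asr)        # speech recognition on its own GPU
chat = ConversationModel().to(dev_chat)     # conversational AI on a second GPU

audio = torch.rand(1, 80, 300, device=dev_asr)       # e.g. a mel-spectrogram chunk
with torch.no_grad():
    tokens = asr(audio)
    reply, face_curves = chat(tokens.to(dev_chat))   # hop devices between stages

# In a full system the reply would drive text-to-speech, and the animation
# curves would be streamed to the Unreal Engine renderer every frame.
```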


Futuristic rendering of a public train station featuring a video kiosk with a Digital Douglas acting as an AI guide.

Coming to a Screen Near You


Doug (the real one, we think) is excited about the future of digital, real-time AI characters. In the podcast, he notes that Facebook has been developing conversational AI assistants for its Oculus series. (For further reading about how Facebook is using AI, read my recap blog about the podcast with Jerome Pesenti.) Doug is also excited about the applications in video games, like the ones Epic Games is developing. Someday, Doug predicts, players will be able to walk up to almost any character in an open-world game like Red Dead Redemption and, instead of finding a bot with a limited repertoire of heavily scripted behaviors, have a richer interaction with an entirely plausible fictional person with their own backstory, personality, and agency. Shades of the “Turing Test” and Westworld!


Image of Thanos from Avengers: Infinity War

Open Source Rendering Libraries


Intel, you may be surprised, is no stranger to Hollywood either. Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel’s Advanced Rendering and Visualization team, is a member of the governing board of the Academy Software Foundation. Jim leads the design and development of the open source rendering library family known as the Intel® oneAPI Rendering Toolkit, elements of which have been used in the production of films such as Disney’s Moana and Avengers: Infinity War. Creators, scientists, and engineers can push the boundaries of visualization by using the toolkit to develop amazing studio animation and visual effects or to create scientific and industrial visualizations.


If you’re at all interested in digital effects, I highly recommend you listen to the full podcast to hear from those working behind the scenes. The Intel on AI series has several other guests worth hearing, including Rana el Kaliouby’s insights into how emotional intelligence (EQ) will be incorporated into AI. Find all the episodes at: intel.com/aipodcast


For more about Intel’s work in AI, visit: https://intel.com/ai








The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.