Exascale Computing Will Redefine Content Creation

I couldn’t be more excited for SIGGRAPH 2019, and as the conference kicks into high gear, I want to share some updates on the latest software and hardware innovations Intel is bringing to the content creation community. Raja Koduri, Intel’s Chief Architect and Senior Vice President of the Architecture, Software and Graphics group, has challenged my team and others across Intel to deliver a 1000x workflow improvement for creators over the next three years. At Intel’s first CREATE event on July 30, Raja, Jim Keller (the General Manager of Intel’s Silicon Engineering Group) and I laid out our plan to deliver this ambitious improvement to the creator community.

Why 1000x? Raja did not just pull that figure from a proverbial hat. That goal is closely related to many of the audacious initiatives Intel is pursuing that will drive exponential increases in computing capabilities well into the future. Perhaps the most relevant of those initiatives is our work to make exascale computing a reality. As you might know, we’re working closely with the U.S. Department of Energy to build the world’s first exascale supercomputer for Argonne National Laboratory. That supercomputer, called Aurora, will be capable of an exaflop – that’s a quintillion (or a billion billion) floating point calculations per second!

But advances in raw compute capability won’t be enough to solve the problems that exascale computers are designed to tackle. Big – no, huge! – data will demand that exascale systems be equipped with much greater memory capacity, fast storage methods, and innovative intra-system interconnects to keep these powerful compute engines fed.

You might ask: “What does exascale computing have to do with professional content creation?” As it turns out, today’s workflows for animated films, visual effects, digital product design and architectural engineering all face challenges much like those that must be overcome to make exascale computing a reality.

A typical animation studio’s “render farm” is very similar in both structure and application to a supercomputer that’s used for more typical HPC tasks. Animation studios generally perform their rendering using a technique known as “ray tracing” to deliver the highest fidelity images. Ray tracing works by essentially calculating the physics of light transport. Interestingly enough, most of the compute capacity of today’s supercomputers is devoted to physics processing of all kinds, from weather forecasting to fluid dynamics simulations. The similarities between supercomputers and render farms aren’t so surprising given that context.
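The light-transport idea can be made concrete with a toy example. The sketch below is plain Python for illustration only – it is not how Embree or any production renderer is implemented. It traces a single ray against a sphere by solving the ray/sphere intersection quadratic, the kind of innermost physics calculation a ray tracer performs billions of times per frame:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None  # ignore hits behind the ray origin

# A ray fired down the z-axis at a unit sphere centered 5 units away:
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)  # hits the near surface at t = 4.0
```

A full renderer repeats this kind of test for every pixel, every bounce, and every light sample – which is why the workload looks so much like the physics simulations that dominate supercomputer time.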

Memory is critical in the exascale era

Memory has always been at the heart of content creation, as increases in visual fidelity require corresponding growth in dataset sizes. Today, the CPU, with its scalar, vector and matrix capabilities, is the only compute engine that can effectively address the entire memory stack. Intel is also innovating in memory itself, with technologies such as Intel® Optane™ DC Persistent Memory that deliver significant capacity increases and enable new use cases built on persistence.

Intel works tirelessly to invent new technologies for high performance computing because speeding up those calculations can result in truly world-changing science and products. What if we could leverage some of those groundbreaking innovations to give motion picture artists the tools to bring their imaginations to life, free of the compromises that they today must make due to a lack of computing power, memory capacity, and so on?

The truth is that we can, and that’s what the 1000x goal is all about. Now, this is far from a trivial task, even with the power of exascale technologies. To take full advantage of a computer capable of delivering an exaflop, software systems, applications and the overall flow of data need to be designed in concert with the underlying hardware in a process referred to as “co-design.” For us to achieve our 1000x goal in content creation capability, it is imperative for the industry to embrace a similar co-design philosophy.

To highlight this codependency, Intel has created its six pillars of innovation. I liken this to an orchestra composed of world-class musicians (represented by the five inner rings), with the group mutually complementing and feeding off each other. The conductor – in this case, represented by software – sits at the top, bringing the musicians together to produce beautiful music.

You might not think of Intel as a software company, but did you know that Intel is one of the largest and most prolific software companies in the world? Intel is the largest contributor to the Linux* kernel, for example. To get the most from its six pillars of innovation, Intel has introduced a concept and a project called “oneAPI”. At a high level, oneAPI is a set of software capabilities that will enable a “solutions and workflow focused” software development paradigm instead of programming for individual “components”.

Our goal is to enable developers to apply the same methods and programming models irrespective of the underlying hardware. I call this a holistic platform view of software and application development. oneAPI will intelligently ensure that the right tool is used for each job, delivering the best possible performance and power efficiency. At an architectural level, we characterize today’s most important compute engines as being scalar, vector, matrix or spatial.
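As a rough illustration of the scalar-versus-vector distinction (plain NumPy here, not oneAPI itself), the same dot product can be written as scalar code – one multiply-add per loop iteration – or as a single whole-array operation that a vector or matrix engine can execute in parallel. The promise of oneAPI is that the developer expresses the computation once and the software maps it to the best engine:

```python
import numpy as np

a = np.arange(8, dtype=np.float64)  # [0, 1, ..., 7]
b = np.full(8, 2.0)

# Scalar formulation: an explicit element-by-element loop.
scalar_result = 0.0
for x, y in zip(a, b):
    scalar_result += x * y

# Vector formulation: one whole-array operation that a SIMD unit,
# GPU, or matrix engine can parallelize.
vector_result = float(a @ b)

print(scalar_result, vector_result)  # 56.0 56.0
```

Both formulations compute the same answer; which hardware executes it fastest depends on the data size and the engines available – exactly the mapping decision oneAPI aims to handle for the developer.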

With that background on Intel’s direction, I am thrilled to announce that we are renaming the Intel Rendering Framework family of open source libraries to the Intel oneAPI Rendering Toolkit. We are doing this to acknowledge that over time, our rendering solutions will apply the oneAPI paradigm of using the best computational tools to maximize rendering performance, including our future accelerators based on the Xe architecture.

The libraries comprising the Intel oneAPI Rendering Toolkit are constantly being enhanced to meet the needs of professional rendering, scientific visualization, virtual reality, and design applications. I’m pleased to announce that this week we are releasing revision 3.6 of our popular Intel Embree ray tracing library, which adds multi-level instancing to deliver dramatic memory footprint savings, and point query capabilities that enable sophisticated rendering control for scenes. We are also thrilled to announce the release of Open Image Denoise 1.0 and its integration into Maxon’s popular Cinema4D* application. Open Image Denoise leverages the power of artificial intelligence to offer the highest fidelity image denoising available today.
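To see why instancing saves memory, consider this toy sketch – hypothetical Python, not the Embree API. Instead of duplicating a mesh’s vertex data for every copy in the scene, each instance stores only a reference to the shared mesh plus a small transform, applied at render time; multi-level instancing extends the idea by letting instances contain other instances:

```python
# Toy sketch of instancing (hypothetical names, not the Embree API):
# one shared mesh referenced through many lightweight transforms,
# instead of N duplicated copies of the vertex data.

base_mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # shared vertices

def make_instance(mesh, offset):
    """An instance stores only a reference to the mesh plus its transform."""
    return {"mesh": mesh, "offset": offset}

# 1000 trees in a forest: 1000 small transform records, one copy of the geometry.
instances = [make_instance(base_mesh, (float(i), 0.0, 0.0)) for i in range(1000)]

# Every instance aliases the same vertex buffer -- that's the memory savings.
assert all(inst["mesh"] is base_mesh for inst in instances)

def world_vertices(inst):
    """Apply the instance transform lazily, at traversal/render time."""
    ox, oy, oz = inst["offset"]
    return [(x + ox, y + oy, z + oz) for x, y, z in inst["mesh"]]

print(world_vertices(instances[2])[0])  # (2.0, 0.0, 0.0)
```

The same principle is what lets production scenes with millions of repeated assets – foliage, crowds, debris – fit in memory at all.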

At FMX, I highlighted the in-progress development of the Open Volume Kernel Library (OpenVKL). Today, I am pleased to report that we have made great progress on its design and implementation. The first release of OpenVKL is imminent: a preview will be available in the next few weeks, with a more formal introduction in Q4 of this year. We also showed OpenVKL in action, handling volumetric path tracing of a large-scale 1.5 TB time-series visualization of stellar radiation data from Argonne National Laboratory, running in real time with smooth interactivity.

We are also very excited to update the community on OSPRay 2.0, which is under development. Today we showed it rendering Disney’s Moana Island Scene on a compact nine-node server platform. We did not stop there: using our integration of OpenVKL into OSPRay 2.0, we added the “Disney Cloud” to the Moana Island Scene. This updated, more flexible, and feature-rich version will be available no later than October. For those of you who can’t wait to get your hands on it, expect “pre-release” open source versions in the next two weeks.

Exascale for everyone

Exascale computing coupled with the right software capabilities will enable the world’s best content creators to keep pushing the limits of state-of-the-art rendering quality, delivering breathtaking visuals in your favorite movies.

But there’s more to it. The exascale era and the cloud will also bring these capabilities closer to every human being on earth, empowering them to share their creativity with the world. What if exaflops and exabytes were less than 10 milliseconds away from you? What would you create?


About Jim Jeffers

Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel’s Advanced Rendering and Visualization team, leads the design and development of the open source rendering library family known as the Intel® Rendering Framework. Intel RF is used for generating animated movies, special effects, automobile design and scientific visualization. Jim joined Intel in 2008 participating in the development of manycore parallel computing and the Intel® Xeon Phi™ product family, including co-authoring 4 books on manycore parallel programming. Jim's experience includes software design and technical leadership in high performance computing, graphics, digital television, and data communications. Jim's notable work prior to Intel includes development for the Tech Emmy winning virtual 'First Down Line' technology seen on live American football TV broadcasts.