Solving Science and Engineering Problems with Supercomputers and AI

Machine learning and deep learning represent new approaches scientists can use to interrogate data, develop hypotheses, and make predictions, particularly in areas where no overarching theory exists.

Traditional applications on high-performance computing (HPC) start from “first principles” — typically mathematical formulas representing the physics of a natural system — and then transform them into a problem that can be solved by distributing the calculations to many processors.

By contrast, machine learning and deep learning are two subsets of the field of artificial intelligence that take advantage of the availability of powerful computers and very large datasets to find subtle correlations in data and rapidly simulate, test and optimize solutions. These capabilities enable scientists to derive the governing models (or workable analogs) for complex systems that cannot be modeled from first principles.

Machine learning uses a variety of algorithms that “learn” from analyzing data and improve their performance based on real-world experience. Deep learning, a branch of machine learning, relies on large datasets to iteratively “train” many-layered neural networks inspired by the human brain. These trained neural networks are then used to “infer” the meaning of new data.

Training can be a complex and time-consuming activity, but once a model has been trained, it can interpret each new piece of data quickly and cheaply: recognizing cancerous versus healthy brain tissue, for example, or enabling a self-driving vehicle to identify a pedestrian crossing a street.
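To make the train-then-infer distinction concrete, here is a minimal sketch using scikit-learn on synthetic data. It is purely illustrative and far simpler than the image and clinical pipelines described below.

```python
# A minimal train-then-infer sketch (illustrative only; real image
# classifiers use deep convolutional networks and far larger datasets).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a labeled dataset (e.g., "healthy" vs. "cancerous").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)

# Training: slow, iterative optimization of the network's weights.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Inference: a single fast forward pass per new sample.
predictions = model.predict(X_new)
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2f}")
```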

In Search of Deep Learning Trainers: Heavy Computation Required

Just like traditional HPC, training a convolutional neural network or running a machine learning algorithm requires extremely large numbers of computations (quintillions!) – theoretically making them a good fit for supercomputers and their large numbers of parallel processors.

Training an image classifier requires roughly 10^18 single-precision operations (an exaflop of computation). Stampede2 — a Dell/Intel system at the Texas Advanced Computing Center (TACC) that is one of the world’s fastest supercomputers and the fastest at any U.S. university — can perform ~2×10^16 double-precision operations per second.
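Dividing one number by the other gives a striking theoretical lower bound, ignoring communication, I/O, and the single- versus double-precision mismatch:

```python
# Back-of-envelope lower bound on training time on Stampede2.
total_ops = 1e18      # ~operations to train an image classifier (one exaflop)
machine_rate = 2e16   # ~double-precision operations per second on Stampede2
print(f"Theoretical minimum: {total_ops / machine_rate:.0f} seconds")  # ~50 s
```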

Logically, supercomputers should be able to train deep neural networks (DNNs) rapidly. But in the past, DNN training has required hours, days, or even months to complete (as was the case with Google’s AlphaGo).

With frameworks optimized for modern CPUs, however, experts have recently been able to train DNN models in minutes. For instance, researchers from TACC and UC Berkeley used 512 Intel® Xeon Phi™ processors to finish a 100-epoch ImageNet training with AlexNet in 24 minutes, and later used 1024 Intel® Xeon® Scalable processors to complete the same task in 11 minutes, the fastest such training ever reported. Furthermore, they were able to scale to 1600 Intel Xeon Scalable processors and finish the 90-epoch ImageNet training with ResNet-50 in 31 minutes without losing accuracy. High-speed, high-accuracy image classification can be useful in characterizing satellite imagery for environmental monitoring or labeling nanoscience images obtained by scanning electron microscopy.
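Runs like these rest on synchronous data parallelism: each processor computes gradients on its own slice of a large mini-batch, the gradients are averaged across all processors, and every copy of the model takes the same update. The sketch below illustrates the idea with mpi4py and a toy least-squares model; the record runs themselves used Intel-optimized Caffe with highly tuned communication, not code like this.

```python
# Synchronous data-parallel SGD sketch (toy least-squares model).
# Run with, e.g.: mpiexec -n 4 python data_parallel_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)     # each rank draws its own data shard
w = np.zeros(10)                           # model weights, identical on every rank

for step in range(500):
    X = rng.standard_normal((32, 10))      # local mini-batch: 32 samples per rank
    y = X @ np.arange(10.0)                # synthetic targets from known weights
    grad = 2 * X.T @ (X @ w - y) / len(y)  # local gradient of the squared error

    avg = np.empty_like(grad)
    comm.Allreduce(grad, avg, op=MPI.SUM)  # sum gradients across all ranks...
    avg /= nprocs                          # ...and average: global batch = 32 * nprocs
    w -= 0.01 * avg                        # the same update on every rank

if rank == 0:
    print("Recovered weights:", np.round(w, 2))
```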

This fast training will impact the speed of science, as well as the kind of science that researchers can explore with these new methods.

Overcoming Bottlenecks in Neural Networks

Efforts at TACC and elsewhere show that bottlenecks in fast DNN training can be overcome on high-performance computing systems by using well-optimized kernels and libraries, employing hyper-threading, and properly sizing the batches of training data.
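Batch sizing is the subtlest of the three. A common heuristic, sketched below, is to scale the learning rate linearly with the global batch size; the record-setting runs reportedly used a more sophisticated layer-wise adaptive scheme. The specific values here are hypothetical.

```python
# Bookkeeping behind large-batch training (linear scaling rule; values
# are hypothetical and would be tuned for a real run).
base_batch, base_lr = 256, 0.1   # hyperparameters validated on one node
workers = 512                    # number of processors in the job
per_worker_batch = 64            # sized to keep each processor busy

global_batch = workers * per_worker_batch        # 32,768 samples per step
scaled_lr = base_lr * global_batch / base_batch  # keep update size comparable
print(f"global batch {global_batch}, scaled learning rate {scaled_lr:g}")
```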

Reporting at the Intel Xeon Phi User Group Meeting in September, TACC’s experts showed how they scaled Caffe*, a leading deep learning framework, to 512 Intel Xeon Phi processors, running more than 400 times faster than a single Intel Xeon Phi processor (~80% efficiency). This kind of scaling efficiency requires a high-speed fabric and shows the capability of Intel® Omni-Path Architecture to enable DNN jobs across hundreds and even thousands of nodes.
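The quoted efficiency follows directly from the reported numbers:

```python
# Parallel efficiency of the reported Caffe scaling run.
workers, speedup = 512, 400                    # >400x over one Xeon Phi processor
print(f"Efficiency: {speedup / workers:.0%}")  # ~78%, the "~80%" quoted above
```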

The recent results on ImageNet training confirm the applicability of these methods to real-world problems. In addition to Caffe, TACC also supports other popular CPU- and GPU-optimized deep learning frameworks, such as MXNet* and TensorFlow*.

Successes in Critical Applications

Meanwhile, researchers have been using TACC supercomputers to apply machine learning and deep learning to science and engineering problems ranging from healthcare to transportation.

Image: Artificial intelligence has been used to discover the exact interventions needed to obtain a specific, brand-new result in a living organism: pigment cells over a tadpole's left eye became cancer-like, while those over the right eye remained normal. Credit: Patrick Collins, Tufts University

For instance, researchers from Tufts University and the University of Maryland, Baltimore County, used Stampede1 to reverse engineer the cell signaling network that determines tadpole coloration. The research helped identify the various genes and feedback mechanisms that control this aspect of pigmentation (which is related to melanoma in humans) and created never-before-seen mixed coloration in the animals. They are exploring the possibility of using this method to uncover the cell signaling that underlies various forms of cancer, so that new therapies can be developed.

Image: Vehicles and pedestrians detected automatically at critical intersections in Austin using machine learning and video image analysis. Credit: Weijia Xu, TACC

In another impressive project, deep learning experts at TACC collaborated with researchers at the University of Texas Center for Transportation Research and the City of Austin to automatically detect vehicles and pedestrians at critical intersections throughout the city using machine learning and video image analysis. The work will help officials analyze traffic patterns to understand infrastructure needs and increase safety and efficiency in the city. (Results of the large-scale traffic analyses will be presented at IEEE Big Data in December 2017 and the Transportation Research Board Annual Meeting in January 2018.)
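As a flavor of what video-based detection involves, the sketch below runs OpenCV's classical HOG+SVM people detector over a video file. It is a simple baseline, not the deep learning pipeline the TACC team built, and "intersection.mp4" is a hypothetical input file.

```python
# Count pedestrians per frame with OpenCV's built-in HOG people detector
# (a classical baseline, not the TACC deep learning pipeline).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("intersection.mp4")  # hypothetical traffic video
counts = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Returns bounding boxes and confidence weights for detected people.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    counts.append(len(boxes))
cap.release()

print(f"Processed {len(counts)} frames; pedestrians per frame: {counts[:10]}")
```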

Image: A method developed on supercomputers at the Texas Advanced Computing Center automatically identifies and classifies brain tumors, as well as different types of cancerous regions, using biophysical models of tumor growth and machine learning algorithms. Credit: George Biros, The University of Texas at Austin

Most recently, George Biros, a mechanical engineering professor at The University of Texas at Austin, used Stampede2 to train a system that identifies brain tumors and classifies different types of cancerous regions with greater than 90 percent accuracy, roughly equivalent to an experienced radiologist. The image analysis framework will be deployed at the University of Pennsylvania and used for various clinical studies of gliomas.

We have learned through our research and research-enabling efforts that HPC architectures are well suited to machine learning and deep learning frameworks and algorithms. Using these approaches in diverse fields, scientists are beginning to develop solutions that will have near-term impacts on health and safety, not to mention materials science, synthetic biology, and basic physics.

We invite and encourage our university users and industry partners to try out deep learning and machine learning frameworks on Stampede2 and possibly incubate research projects that leverage these powerful, emerging techniques.

To learn more about Stampede2 and the Texas Advanced Computing Center, please visit the Texas Advanced Computing Center (TACC) website. At SC17, Dr. Gaffney presented on TACC’s machine learning efforts at the Dell EMC booth, and Dr. Weijia Xu, group lead for TACC’s Data Mining & Statistics group, gave a talk on TACC’s deep learning work in booth #1343.


About Niall Gaffney

Niall Gaffney’s background largely revolves around the management and utilization of large, inhomogeneous scientific datasets. Niall, who earned his B.A., M.A., and Ph.D. degrees in astronomy from The University of Texas at Austin, joined TACC in May 2013. Prior to that, he worked for 13 years as a designer and developer of the archives housed at the Space Telescope Science Institute (STScI), which hold the data from the Hubble Space Telescope, Kepler, and James Webb Space Telescope missions. He was also a leader in the development of the Hubble Legacy Archive, a project that harvested more than 20 years of Hubble Space Telescope data to create some of the most sensitive astronomical data products available for open research. Prior to his work at STScI, Niall worked as “the friend of the telescope” for the Hobby-Eberly Telescope (HET) project at the McDonald Observatory in West Texas, where he created systems to acquire, store, and distribute the data the HET produced.