For decades, scientific computing has focused on continually improving the performance of known algorithms and equations to solve ever-larger problems, using ever-more-powerful compute clusters. These efforts have brought big gains in our understanding of the problems that scientists investigate, yet any scientist would tell you that we still have a long way to go.
In the world of scientific research, some subjects are so large and so complex that they appear all but beyond our practical ability to simulate, let alone understand in full. We’re talking about problems on the scale of mapping our galaxy or enabling real-time weather forecasting. Problems like these are nearly impossible to describe, much less simulate, with equations.
The functioning of the human brain is one such problem. While we learn more about its functions every day, the brain remains largely a mystery. Historically, we have had neither the algorithms nor the computing horsepower to understand the intricacies of the brain’s circuitry, the behavior of its billions of neurons, and how everything works together to create an amazing operating system that controls our thoughts, feelings, and behavior.
For neuroscientists to reach this understanding in a timely manner, they need extremely powerful compute clusters and modern algorithms that can take full advantage of the latest technologies the computing community has to offer. They also need capabilities like large-scale image processing and machine learning to drive new discoveries and insights. This is where the story gets exciting, because today all of these things are coming together.
Thanks to dramatic advances in algorithms and the systems they run on, along with rich collaborative efforts among researchers in the neuroscience and computational communities, we are now on the path to greatly expanding our understanding of the brain. While there are many such efforts under way, I’m particularly excited about an ongoing collaboration among research scientists from Princeton Neuroscience Institute (PNI) and computer scientists from Intel Labs. This collaboration will be the subject of a keynote presentation this week at the 2016 Intel HPC Developer Conference.
Known informally as the Mind’s Eye Collaboration, this effort is focused on developing a system for decoding neural representations of cognitive states in individuals in real time. The researchers involved are working toward real-time processing and analysis of brain images. Machine learning techniques are a vital part of the process: the algorithms learn in real time from the data, and the results of this learning can be used to alter the course of the experiment.
In more specific terms, the team’s models learn by processing the enormous amounts of data generated by functional magnetic resonance imaging (fMRI). With the systems of the past, these analyses would have taken years to complete. In recent years, researchers have found ways to cut the computational time dramatically, down to minutes or even seconds. Faster computation enables a closed-loop process: training makes the model smarter, the smarter model is used to alter experimental conditions, and the altered conditions change future inputs to the model, making it smarter still. This has the potential to let scientists learn more about the causal aspects of human cognition.
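As a rough sketch of the closed-loop idea, and not the team’s actual pipeline: an incrementally trained classifier can update itself as each new brain volume arrives, and its current prediction can drive the choice of the next stimulus. The acquisition function, the condition-selection rule, and all the data below are invented stand-ins for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI acquisition: each "volume" is a flattened
# vector of voxel activations carrying a small condition-specific signal.
def acquire_volume(condition, n_voxels=500):
    signal = np.zeros(n_voxels)
    signal[:50] = 1.0 if condition == 1 else -1.0
    return signal + rng.normal(scale=2.0, size=n_voxels)

# An incremental linear classifier: partial_fit updates the model one
# volume at a time, which is what a closed-loop design requires.
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

condition = 0
for t in range(200):
    x = acquire_volume(condition).reshape(1, -1)
    model.partial_fit(x, [condition], classes=classes)
    # Closed loop: the model's current decoding drives the next stimulus.
    decoded = int(model.predict(x)[0])
    condition = 1 - decoded  # toy rule; real designs choose more deliberately

# Check decoding accuracy on fresh, unseen volumes.
test_labels = np.array([0, 1] * 50)
test_data = np.stack([acquire_volume(c) for c in test_labels])
accuracy = model.score(test_data, test_labels)
print(f"decoding accuracy on fresh volumes: {accuracy:.2f}")
```

In the actual project the model is fed by the scanner as data arrives; here everything is simulated, so the loop runs anywhere scikit-learn is installed.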
The results have been truly remarkable. They include four major breakthroughs in performance that will lead to better understanding of how the brain works.
One of these breakthroughs is real-time fMRI decoding of full-brain data. An increasingly valuable tool for cognitive neuroscience, real-time fMRI analysis leverages the ability of fMRI to identify mental representations, and does so online, with high accuracy, while the experiment is running. Traditionally, this analysis has been limited to small, preselected regions of the brain because of the intense computation required. Real-time analysis of full-brain data instead rapidly processes the entire incoming data stream on an HPC cluster in order to mine complex interactions across a multitude of distant brain regions. This lets machine learning algorithms discover key interactions that were previously indistinguishable from noise, and it puts real-time fMRI on the critical path for the goals of the research project.
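To make the cost of mining interactions across the whole brain concrete, here is a minimal, hypothetical sketch (not the project’s code) of a full correlation analysis on synthetic data: every region’s time series is correlated against every other’s, and the strongest long-range coupling is recovered. The quadratic growth of this matrix is what pushes real full-brain analyses onto HPC clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for full-brain data: time series for many regions.
n_timepoints, n_regions = 120, 400
data = rng.normal(size=(n_timepoints, n_regions))
# Inject a correlated pair of distant regions (an "interaction").
data[:, 300] = data[:, 10] + 0.3 * rng.normal(size=n_timepoints)

# Full correlation matrix: every region against every other region.
# With tens of thousands of voxels this O(n^2) step is the expensive part.
z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = (z.T @ z) / n_timepoints

# Mask the diagonal and find the strongest pairwise interaction.
np.fill_diagonal(corr, 0.0)
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest interaction: regions {i} and {j}, r = {corr[i, j]:.2f}")
```

The planted pair stands out because its correlation far exceeds anything chance produces among the remaining region pairs; in real data the interesting couplings are weaker, which is why so much computation is needed to separate them from noise.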
Another big breakthrough brought dramatic advances in an algorithm for neural data, known as Hierarchical Topographic Factor Analysis (HTFA). At the start of the project, the usefulness of this algorithm was severely constrained by limitations that permitted its use with only small, sub-sampled datasets. Over the course of a year, the team sped up the algorithm by more than 50x. This made it possible to fit HTFA to large-scale fMRI datasets in a matter of hours.
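HTFA itself is considerably richer than anything shown here, but the underlying idea can be illustrated with a simplified sketch: model each brain image as a weighted sum of a few spatial “hub” factors, reduced here to fixed 1-D Gaussians so that the fit becomes ordinary least squares. The hub locations, widths, and weights below are all invented for illustration; real HTFA also infers the hubs themselves and shares structure across subjects.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D stand-in for voxel coordinates, and two assumed Gaussian "hubs".
positions = np.linspace(0.0, 1.0, 200)
centers, width = np.array([0.25, 0.75]), 0.1
factors = np.exp(-((positions[:, None] - centers[None, :]) ** 2) / width**2)

# Synthesize images from known hub weights plus noise, then recover them.
true_weights = np.array([[2.0, -1.0],   # image 1: strong hub A, negative hub B
                         [0.5,  1.5]])  # image 2: weak hub A, strong hub B
images = true_weights @ factors.T + 0.05 * rng.normal(size=(2, 200))

# With the hubs fixed, fitting the weights is a linear least-squares solve.
weights, *_ = np.linalg.lstsq(factors, images.T, rcond=None)
print(np.round(weights.T, 1))
```

Because each image is summarized by a handful of hub weights rather than hundreds of thousands of voxel values, factor models like this make large fMRI datasets far cheaper to analyze, which is what the 50x speedup made practical at scale.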
Truthfully, we’re just scratching the surface here. If you’re in Salt Lake City this week, you can take a much deeper look at this project in a keynote presentation at the Intel HPC Developer Conference by the principal investigators on the Princeton project. For now, let’s close with a bigger-picture view. The breakthroughs brought by the Mind’s Eye Collaboration are a great illustration of what is possible when scientific researchers and computer scientists work together to find novel ways to capitalize on machine learning techniques, optimized HPC algorithms and the latest processing architectures.
When that type of collaboration happens, great things follow. What was seemingly impossible becomes possible, what was previously impractical becomes practical, and we all benefit from the ensuing breakthroughs.