
How Google DeepMind Turns AI Research into Application

Edward_Dixon

Founded just over a decade ago (making it one of the oldest artificial intelligence firms), DeepMind cut its teeth teaching machines to master early video games: not so machines could displace humans in the arcades, but because the games offered ready-made testbeds for applying artificial intelligence to the problem of learning policies.


While we often talk about training models on classification tasks (“there is a tumour in this X-ray”), these are relatively “easy” tasks in that, when we have labelled data, we already know the correct answer, at least for the training set. However, the real world is full of open-ended tasks where we know the goal but not how to achieve it (e.g. win a chess game). In AI parlance, the “how” of getting to a solution is called a policy. Getting really good at policy training is likely a prerequisite for making machines useful in the real world. (For more on that subject, you may want to listen to the Intel on AI podcast interview with Pieter Abbeel, founder of robotics firm Covariant.)
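
To make the policy idea concrete, here is a minimal sketch in Python: a REINFORCE-style update on a toy two-armed bandit. This is purely illustrative (not DeepMind’s code); the point is that, unlike classification, the program is never told which action is correct, only how much reward it earned.

```python
# Minimal policy learning via a REINFORCE-style update on a two-armed
# bandit (illustrative sketch only, not DeepMind's method).
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                  # one logit per action
true_payout = np.array([0.3, 0.7])   # hidden reward probabilities

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)
    reward = float(rng.random() < true_payout[action])
    # Nudge the log-probability of the chosen action in
    # proportion to the reward it earned.
    grad = -probs
    grad[action] += 1.0
    theta += 0.1 * reward * grad

print(softmax(theta))  # the learned policy now strongly favours arm 1
```

No label ever said “arm 1 is better”; the policy discovered that through trial, error, and reward, which is the essence of learning a policy rather than a classifier.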


DeepMind is focused on the big prize: creating systems that will reach artificial general intelligence (AGI). In a recent episode of the Intel on AI podcast, Colin Murdoch, Senior Business Director at DeepMind, talks with host and New York Times best-selling author Abigail Hing Wen about some of the incredible advancements the team at DeepMind has made along the journey toward AGI.
Exactly what “general” intelligence means remains a matter of debate within the AI community. ACM Turing Award Laureate Yann LeCun says that all intelligence is specialized, including human intelligence. DeepMind’s working definition is robustly practical: general intelligence means AI that can be used for almost anything, and it sits on a continuum.




“When I think about artificial general intelligence, for me it's, I guess, just a kind of growing cone of capability. I don't imagine one day we won't have it, and the next day we will.”


- Colin Murdoch



From Products to Deep Science


Colin sees two major phases in the move towards AGI. The first is the realm of using AI technology to further digitize the global economy by incorporating research breakthroughs into new products and services. This future is here! Companies like Hugging Face take mere weeks to turn new research results into product enhancements that their customers can adopt just as rapidly. Although the future is here, Colin points out that this particular future isn’t evenly distributed yet; we’ve barely scratched the surface of possible applications. In the second phase, Colin envisages AI systems that create themselves and can be applied to deep science problems (more on this topic below).


For now, his job at DeepMind is to keep growing the application pipeline, matching research teams who have developed cutting-edge technology with product teams searching for solutions to specific problems. This is a hard problem! Conventional wisdom is to start with a customer pain point and work backwards, which works well for incremental progress, but not necessarily for new classes of products. (Back when the microprocessor was invented, Intel had exactly this problem—convincing customers who built electronics that they needed to do a bit less soldering and a bit more coding).


Colin jokes that it sometimes feels like running a dating service. Not every date goes well, but Cupid’s arrow seems to have hit the mark on many occasions. In the podcast, Colin and Abigail talk at length about just a few of DeepMind’s projects and how they’ve been further developed by Google.



AlphaFold


Perhaps the most exciting breakthrough DeepMind has been working on addresses one of the grand research problems in biology: protein folding, the question of how an amino acid sequence determines the three-dimensional structure of a protein, which in turn governs the protein's ability to perform its functions. DeepMind originally trained a neural network on a dataset of 30,000 known protein structures to develop AlphaFold. In a December 2018 competition, AlphaFold placed first among 98 entrants, predicting the most accurate structure for 25 of 43 proteins; the second-place team managed only three.
Two years later, in 2020, AlphaFold 2 achieved a score of 92.4 GDT, nearly double the 2018 accuracy and considered on par with results from the months-long experimental methods used in laboratories. Colin is particularly excited to watch how AlphaFold 2 will help accelerate future drug discovery and believes the research will be applicable across a wide range of diseases.
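
For the curious, GDT (global distance test) is roughly the percentage of a protein’s residues predicted close to their experimentally determined positions, averaged over several distance cutoffs. Here is a simplified sketch of the GDT_TS variant, assuming the predicted and experimental structures are already superimposed (the real metric also searches over superpositions):

```python
# Simplified GDT_TS: mean fraction of C-alpha atoms within 1, 2, 4, and
# 8 angstroms of their experimental positions, expressed as a percentage.
import numpy as np

def gdt_ts(pred_ca, true_ca):
    """pred_ca, true_ca: (N, 3) arrays of C-alpha coordinates in angstroms."""
    dist = np.linalg.norm(pred_ca - true_ca, axis=1)
    return 100 * np.mean([(dist <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)])

coords = np.random.rand(50, 3) * 10
print(gdt_ts(coords, coords))  # a perfect prediction scores 100.0
```

On this scale, AlphaFold 2’s 92.4 means the overwhelming majority of residues were placed very close to their true positions.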


We don’t have to wait for the future to see AlphaFold’s benefits, though. Using the latest version of the AlphaFold system, the team at DeepMind openly released structure predictions for several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19.


Even this isn’t close to being the most exciting use of AlphaFold, however. The ability to predict structure from sequence could be combined with policy learning to design proteins. In this scenario, one would begin with a therapeutic target (some receptor on the surface of a cell, let’s say), design a protein to bind it, and then output the DNA sequence necessary to produce that protein. Automated design and validation in silico (as opposed to in vitro or in vivo) has the potential to transform the drug development pipeline. As 2020 reminded us, compressing development times from the scale of decades to weeks has enormous social and commercial value.
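
The very last step of that hypothetical pipeline, going from a designed protein back to DNA, is conceptually simple, because the genetic code maps each amino acid to a codon. A toy sketch (real pipelines optimize codon choice for the expression host; the one-codon-per-residue table below is a simplification):

```python
# Reverse-translating a designed amino-acid sequence into DNA using one
# common codon per residue (simplified; real designs optimize codon usage).
CODON = {
    "A": "GCG", "R": "CGT", "N": "AAC", "D": "GAT", "C": "TGC",
    "Q": "CAG", "E": "GAA", "G": "GGC", "H": "CAT", "I": "ATT",
    "L": "CTG", "K": "AAA", "M": "ATG", "F": "TTT", "P": "CCG",
    "S": "AGC", "T": "ACC", "W": "TGG", "Y": "TAT", "V": "GTG",
}

def reverse_translate(protein: str) -> str:
    return "".join(CODON[aa] for aa in protein) + "TAA"  # append a stop codon

print(reverse_translate("MKT"))  # ATGAAAACCTAA
```

The hard part is the earlier step: proposing a sequence whose folded structure actually binds the target, which is exactly where structure prediction plus policy learning could shine.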



Graph Nets


In 2018, a team from DeepMind, MIT, and the University of Edinburgh published the position paper “Relational inductive biases, deep learning, and graph networks,” arguing that graph neural networks (Graph Nets) can support combinatorial generalization: the ability to construct new predictions from known building blocks, which can lay the foundation for more sophisticated patterns of reasoning. Using Graph Nets, the team at Google Maps was later able to conduct spatiotemporal reasoning, incorporating relational inductive biases to model the connectivity structure of real-world road networks.
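
To give a flavour of the idea (an illustration only, not Google’s production model), here is one form message passing over a toy road graph can take: each segment’s estimated travel time is updated from its neighbours’, so information about congestion propagates through the network.

```python
# Toy message passing over a 4-segment road graph (illustrative sketch).
import numpy as np

# Segments 0-1, 1-2, and 2-3 are connected.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
travel_time = np.array([5.0, 9.0, 6.0, 4.0])  # minutes per segment

deg = A.sum(axis=1, keepdims=True)  # neighbour counts
w_self, w_nbr = 0.7, 0.3            # stand-ins for learned weights
for _ in range(2):                  # two rounds of message passing
    nbr_mean = (A @ travel_time[:, None] / deg).ravel()
    travel_time = w_self * travel_time + w_nbr * nbr_mean

print(travel_time)  # segment estimates now reflect their neighbourhoods
```

A real graph network learns those weights (and much richer update functions) from data; the relational inductive bias is that updates flow only along actual road connections.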


Colin estimates that Google Maps users travel over a billion kilometers per day. Using that data, the two teams at Google were able to improve the accuracy of real-time ETAs in cities like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington, D.C. This is a great example of how, once AI makes the transition from research to engineering, it becomes just as invisible as any other piece of software. (Andrew Ng makes this point in his podcast interview earlier in our series.) When you run a search query or ask your phone a question, you may not think of it as an AI application, but it is.



WaveNet


WaveNet is another great example of how cutting-edge AI is now seen as a relatively standard service. In 2016, DeepMind released WaveNet, a deep generative model of raw audio waveforms that can generate speech mimicking a human voice. Today the system is used in almost every Google service across multiple languages. Anytime you hear Google Maps telling you that “your destination is on the right,” you’re listening to a voice created by DeepMind. I still remember the first time I listened to WaveNet sample outputs: the voice included hesitations, and you could just catch “breath” sounds between some words. My first thought was, “she sounds more human than I do!”
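
One concrete trick from the WaveNet paper is worth a sketch: rather than predicting a raw 16-bit sample (65,536 possible values), the model compresses each sample to 256 levels with mu-law companding and predicts over a 256-way softmax. A minimal encode/decode pair:

```python
# Mu-law companding as used by WaveNet to quantize audio to 256 classes.
import numpy as np

MU = 255.0

def mu_law_encode(x):
    """Map samples in [-1, 1] to integer classes in [0, 255]."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return ((y + 1) / 2 * MU + 0.5).astype(np.int32)

def mu_law_decode(c):
    """Map integer classes back to approximate samples in [-1, 1]."""
    y = 2 * (c.astype(np.float64) / MU) - 1
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

samples = np.array([-0.5, 0.0, 0.01, 0.5])
print(mu_law_decode(mu_law_encode(samples)))  # close to the originals
```

The logarithmic spacing spends most of the 256 levels on quiet sounds, where human hearing is most sensitive, which is part of why the generated speech sounds so natural.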


Being able to create a realistic automated voice is powerful! In episode ten of the podcast, Lama Nachman, Director of Intel’s Anticipatory Computing Lab, discussed the AI systems her team has used to help roboticist Peter Scott-Morgan and the late Stephen Hawking. How these systems interact with humans on an emotional level can have far-reaching implications, as Rana el Kaliouby discussed in episode twelve. My personal favourite is the story of how Apple’s Siri helped a boy with autism, an entirely accidental “collateral good” that is hard to imagine without the emotional power of the spoken word.


In the podcast, Colin alludes to future opportunities using WaveNet for video content and other translation services. As a European, I’m keenly aware of the advantage my languages give me in terms of access to information. As a child, I benefitted from a home with two full encyclopedia sets. As an adult, my kids get huge value from Wikipedia at a marginal cost of $0, given access to a phone and Wi-Fi. But this access is conditional on language: my English-speaking kids are working their way through 6 million English-language entries, but even a major African language like Kiswahili (with about 100 million speakers) has only about 68,000 Wikipedia articles. Machine translation is transforming access to information in a way that was quite literally science fiction when I was a child. (Remember the babel fish, anyone?)



Solving Big Problems


Like Colin, I want to see AI solve really big problems, like energy consumption and production. Data centers use a massive amount of energy, including a large portion to run complex cooling systems. (Intel’s Rebecca Weekly has written about how we’re working with the Open Compute Project to try to meet standards for a carbon-neutral data center.) In 2016, DeepMind was able to achieve a 40 percent reduction in the amount of energy used for cooling Google data centers by training an ensemble of deep neural networks on historical data collected by thousands of power and other sensors in the facilities.
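
As a rough illustration of the shape of that approach (DeepMind’s actual system is far more sophisticated, and the sensor features below are invented for the example), one can train an ensemble of small networks on historical sensor readings to predict PUE (power usage effectiveness), then use the predictions to evaluate candidate cooling setpoints:

```python
# Toy ensemble predicting data-center PUE from (fake) sensor features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Invented historical data: [outside temp C, server load, pump speed] -> PUE.
X = rng.uniform([10, 0.2, 0.3], [35, 1.0, 1.0], size=(5000, 3))
pue = 1.1 + 0.01 * X[:, 0] * X[:, 1] / X[:, 2] + rng.normal(0, 0.02, 5000)

# Several networks with different seeds; averaging smooths individual errors.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=s).fit(X, pue)
    for s in range(5)
]

def predict_pue(x):
    return np.mean([m.predict(x) for m in ensemble], axis=0)

print(predict_pue(np.array([[25.0, 0.8, 0.6]])))  # predicted PUE at one setpoint
```

Once such a model is trusted, the control problem becomes a search over setpoints for the lowest predicted PUE, subject to safety constraints.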


Similarly, in 2018 DeepMind worked with the team at Android to extend the battery life of smartphones. That is perhaps not as impactful as greatly reducing data center energy consumption, but with an estimated 130 million Android users in the U.S. alone, those everyday wall charges add up.


Chemical production is another sector that requires huge amounts of energy, in the form of heat, to make compounds. In the podcast, Colin speculates that through AlphaFold scientists might be able to develop new enzymes or other catalysts to reduce the energy requirements of those processes. I have written elsewhere about this, but to illustrate the potential: the industrial process for fixing nitrogen (the basis of essential fertilizers) uses 1-2% of global energy output due to the very high temperatures and pressures required. Yet in my garden, soil bacteria work with plants to pull off the same trick at ambient temperature and pressure. We’ve got a long way to go to reach their level of energy efficiency, but AI can get us there faster, and I’m extremely interested in what DeepMind will do next.


We can’t get to a better world only by saving energy, though. We also need more and better energy supplies. Here in Ireland, my laptop is charging from an ultra-reliable grid, drawing much of its power cleanly from the blustery southwesterlies that keep this island so green. But in much of the world, electricity is unreliable and/or expensive, with businesses and households relying on tiny diesel or petrol generators, which are noisy, dirty, expensive, and hard to scale.


In the short term, companies like Google and Innowatts are using AI to match variable energy sources, like wind and solar, with demand from consumers and businesses, increasing the value and utilization of renewables. In the longer term, we’ll see organizations like DeepMind use AI to improve the design of everything from the semiconductors in solar panels to the control algorithms that may one day stabilize the plasma in commercial-scale fusion reactors.



The Future of AI in Society


Like previous podcast guest Ed Hsu of the World Bank, Colin believes AI has extraordinary benefits for humanity; similarly, he sees how AI could have negative impacts unless we carefully consider how these systems are built and used. DeepMind has a technical safety team that works closely with researchers at OpenAI, the Alan Turing Institute, and other leading labs to understand algorithmic technical safety. DeepMind also has an ethics team, working with not-for-profits, academics, and other companies to consider the potential impact such AI systems might have on society.


Colin stresses that developers need to think about how their data sets are managed and how their training is set up in order to create the right objectives for their models, something Alice Xiang discussed at length in a previous podcast episode around the topic of algorithmic fairness. If you haven’t had a model “go rogue” yet through test set leakage or a misaligned loss function, then you just haven’t been in AI long enough! AI is software, and as one of my key mentors taught me, “Anything you haven’t tested is broken.”
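
For anyone who hasn’t yet met test set leakage in the wild, here is one of the classic ways it sneaks in: fitting a preprocessing step on all of the data before splitting, so statistics from the test rows quietly inform training and inflate your scores. A minimal scikit-learn illustration:

```python
# Test-set leakage via preprocessing: the wrong way and the right way.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = np.random.default_rng(0).normal(size=(100, 5))

# Leaky (wrong): the scaler's statistics are computed over the test rows too.
X_leaky = StandardScaler().fit_transform(X)
X_train_bad, X_test_bad = train_test_split(X_leaky, random_state=0)

# Correct: split first, then fit the scaler on the training split only.
X_train, X_test = train_test_split(X, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
```

The same discipline applies to feature selection, imputation, and any other step that learns from data: fit it inside the training fold, never on the full dataset.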


Like all of the Intel on AI podcast episodes, this one is worth listening to in full. You can find it along with the others on your favorite streaming platform by going to: intel.com/aipodcast


The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.