High Performance Computing Opens Possibilities for Smart Cities

High Performance Computing (HPC): it’s no longer just for government labs and academic institutions.

In my conversations with members of the HPC community worldwide, I’m constantly inspired by novel HPC applications, many of which depend on Artificial Intelligence (AI) and high-performance data analytics to run the most complex workloads and process massive amounts of data. These examples reemphasize the need for serious computing capabilities so that modern and future AI/HPC applications can harness this wealth of data to address societal issues. This is particularly clear in the case of Internet of Things (IoT) enabled smart cities, which could derive major benefits from HPC and HPC-like capabilities.

Data Opportunities and Challenges

I recently spoke with my colleague Sameer Sharma, Intel’s Global General Manager for IoT (New Markets, Smart Cities, and Intelligent Transportation), about his organization’s efforts to enable the future of smart cities: cities that incorporate insights derived from IoT devices and other sensors into the governance of shared spaces such as roads, waterways, airports, seaports, sporting venues, and universities. To guide their work, Sameer’s team took a step back and talked to people around the globe about the challenges of urban life. Three key areas emerged:

  • Safety—People want to be safe and also feel safe.
  • Mobility—People need to get from point A to point B expediently and efficiently.
  • Sustainability—People want to manage the city’s environmental impact.

Sameer’s team worked with Harbor Research, a well-known research firm in the smart cities space, to better understand the data generation aspects of smart cities: what data is generated and how. The results were staggering. The analysis found that smart cities will produce approximately 16.5 zettabytes of data in 2020 alone. To put this in context, a zettabyte is one trillion gigabytes; if each gigabyte were an 11-ounce cup of coffee, a zettabyte would roughly fill the Great Wall of China.
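As a quick sanity check on that scale, here is a back-of-the-envelope calculation based only on the 16.5-zettabyte figure above; the daily rate it prints is simple illustrative arithmetic, not a Harbor Research estimate.

```python
# Back-of-the-envelope arithmetic for the 16.5 ZB/year figure cited above.
ZB_TO_GB = 1e12                 # 1 zettabyte = one trillion gigabytes

annual_zb = 16.5                # projected smart-city data for 2020 (from the text)
annual_gb = annual_zb * ZB_TO_GB
per_day_gb = annual_gb / 365

print(f"{annual_gb:.3g} GB per year")   # ~1.65e+13 GB
print(f"{per_day_gb:.3g} GB per day")   # ~4.52e+10 GB, roughly 45 exabytes every day
```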

While each individual city’s data footprint depends on its unique mix of smart applications, it is safe to say that every smart city will have an enormous amount of data to contend with. In the future, many cities may turn to AI algorithms that tap into this data to manage city operations. However, the scale of the data and the need for rapid insights will make HPC-level computing resources a requirement for making the most of this opportunity.

Federated Data Platforms for Urban Mobility and Safety

Projects throughout the world are revealing the potential of integrating highly capable compute into urban spaces. In one example, the city of Bangkok, Thailand, installed smart cameras at three traffic intersections. Real-time tracking algorithms fed by these cameras and running on Intel® Core™ processor-based systems optimized traffic signal timing to improve traffic flow. The solution reduced queue length at these intersections by 30.5%, saving more than 50,000 vehicle commuter hours.
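The details of Bangkok’s tracking algorithms aren’t covered here, so the following is only a minimal sketch of edge-side queue estimation, using OpenCV background subtraction as a stand-in for a production vehicle tracker; the camera URL, lane region, and thresholds are illustrative assumptions.

```python
# Minimal sketch of a per-intersection vehicle-count signal at the edge.
# This is an illustrative stand-in, not the Bangkok deployment's software;
# the camera URL, lane region, and thresholds are assumptions.
import cv2

LANE_REGION = (0, 300, 640, 180)   # x, y, w, h of the approach lane (assumed)
MIN_VEHICLE_AREA = 1500            # minimum blob size in pixels (assumed)

def count_vehicles(frame, subtractor):
    """Rough count of vehicle-sized foreground blobs in the approach lane."""
    x, y, w, h = LANE_REGION
    lane = frame[y:y + h, x:x + w]
    mask = subtractor.apply(lane)            # foreground = recently moved objects
    mask = cv2.medianBlur(mask, 5)           # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > MIN_VEHICLE_AREA)

def main():
    cap = cv2.VideoCapture("rtsp://camera.example/intersection-1")   # placeholder URL
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # A signal-timing controller could consume this value to lengthen or
        # shorten green phases for the congested approach.
        print("approx vehicles in lane:", count_vehicles(frame, subtractor))

if __name__ == "__main__":
    main()
```

A production system would rely on trained detection and tracking models rather than background subtraction, but even this crude signal shows how compact the per-intersection output can be compared with the raw video it is derived from.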

However, the value of the data from cameras like these shouldn’t end at the individual intersection. Results from the edge could be used to optimize traffic at the macro level as well. City-level traffic analysis via deep learning algorithms would require a more capable converged AI/HPC system in a data center, but could have big benefits to commuter safety and mobility, as well as city planning. The more cameras and sensors involved, the greater the potential, but also the greater the need for HPC resources for analysis.
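To make that edge-to-data-center flow concrete, here is a toy sketch of how hourly per-intersection summaries might be federated for city-level hotspot analysis; the report fields, threshold, and sample values are illustrative assumptions, not the schema of any deployed system.

```python
# Toy sketch of federating per-intersection summaries for city-level analysis.
# Field names, the congestion threshold, and the sample values are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EdgeReport:
    intersection_id: str
    hour: int              # hour of day the summary covers
    avg_queue_len: float   # average vehicles queued during that hour
    throughput: int        # vehicles that cleared the intersection

def city_hotspots(reports, queue_threshold=12.0):
    """Group edge reports by hour and list the congested intersections."""
    by_hour = defaultdict(list)
    for r in reports:
        by_hour[r.hour].append(r)
    return {
        hour: [r.intersection_id for r in rs if r.avg_queue_len > queue_threshold]
        for hour, rs in sorted(by_hour.items())
    }

reports = [
    EdgeReport("int-001", 8, 18.2, 950),
    EdgeReport("int-002", 8, 6.4, 1210),
    EdgeReport("int-001", 9, 9.1, 1030),
]
print(city_hotspots(reports))   # {8: ['int-001'], 9: []}
```

City-scale deep learning over the camera data itself, rather than hand-built summaries like this one, is where the converged AI/HPC systems described above come in.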

In another example, the city of Rio de Janeiro, Brazil, deployed 1,800 HD video cameras to help ensure the safety of the estimated 500,000 people visiting for the 2016 Summer Olympic Games. The solution used Intel Atom® and Intel Core processors for analysis at the edge and a higher-performance Intel® Xeon® processor-based system for further analysis in the data center.

This system processed about 1.5 million pieces of video data each day to help staff detect and respond to abandoned objects and prevent unauthorized access to off-limits areas. Sameer noted in our conversation that similar systems deployed today can use energy-efficient, purpose-built AI accelerators like the Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU) for analysis at the edge that just a few years ago would have required a traditional data center. Additionally, today’s HPC systems benefit from 2nd Generation Intel Xeon Scalable processors with built-in AI acceleration (Intel® Deep Learning Boost). The HPC capabilities of Intel architecture become all the more important as the number of IoT sensors scales up and the volume of data to analyze grows with it. Future HPC deployments will enable new opportunities for converged AI/HPC applications to derive maximum value from data like that generated by Rio de Janeiro’s solution.
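The Rio deployment’s software isn’t detailed here, but the abandoned-object rule it implies can be sketched simply: flag any tracked object that stays nearly stationary past a dwell-time threshold. The sketch below abstracts detection and tracking behind a per-frame input (in practice that stage would run on accelerators like the VPUs and Xeon processors mentioned above); all names and thresholds are illustrative assumptions.

```python
# Minimal sketch of an abandoned-object rule at the edge. Detection/tracking is
# assumed to happen upstream and is abstracted behind `detections_per_frame`;
# dwell time and drift thresholds are illustrative assumptions.
import math

ABANDON_SECONDS = 60      # assumed dwell time before raising an alert
MAX_DRIFT_PIXELS = 20     # how far an object may move and still count as stationary

def find_abandoned(detections_per_frame, fps=15):
    """detections_per_frame: list of frames; each frame is a dict
    {track_id: (x, y)} of tracked-object centroids."""
    first_seen = {}   # track_id -> (frame_index, (x, y)) where the object settled
    alerts = set()
    for i, frame in enumerate(detections_per_frame):
        for track_id, (x, y) in frame.items():
            if track_id not in first_seen:
                first_seen[track_id] = (i, (x, y))
                continue
            start_i, (x0, y0) = first_seen[track_id]
            drift = math.hypot(x - x0, y - y0)
            if drift > MAX_DRIFT_PIXELS:
                first_seen[track_id] = (i, (x, y))   # object moved; restart the timer
            elif (i - start_i) / fps >= ABANDON_SECONDS and track_id not in alerts:
                alerts.add(track_id)
                print(f"alert: object {track_id} stationary for {ABANDON_SECONDS}s")
    return alerts
```

Pushing even a simple rule like this to the edge means that only alerts, not raw video, need to travel to the data center, leaving the HPC-class systems there for heavier cross-camera analysis.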

A Future of Widespread Smart Cities and Transportation

The market and demand for HPC capabilities continue to expand as more organizations turn to converged AI/HPC solutions to transform data into social benefit. With more than 1,100 cities worldwide with populations greater than 500,000 in 2018, and many thousands more with populations greater than 100,000, IoT-enabled smart cities could be a big new user group for HPC in the near future.

IoT’s drive for HPC compute doesn’t stop there. Industrial and healthcare IoT systems, to name just two other verticals, will also generate huge amounts of data and thus demand for the compute to analyze it. It is an exciting time for those of us at Intel working to enable a diversifying customer base.

Thanks to Sameer and his team for their contributions to this blog. Please visit Intel’s pages on our HPC and IoT product portfolios for more information on how Intel technologies are enabling innovation in both traditional and unexpected ways.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com

Intel, the Intel logo, Xeon, Movidius, and Myriad are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

© Intel Corporation


About Trish Damkroger

Patricia (Trish) A. Damkroger is vice president and general manager of the High Performance Computing organization in the Data Platforms Group at Intel Corporation. She leads Intel’s global technical and high-performance computing (HPC) business and is responsible for developing and executing strategy, building customer relationships and defining a leading product portfolio for technical computing workloads, including emerging areas such as high-performance data analytics, HPC in the cloud and artificial intelligence. An expert in the HPC field, Damkroger has more than 27 years of technical and managerial expertise both in the private and public sectors. Prior to joining Intel in 2016, she was the associate director of computation at the U.S. Department of Energy’s Lawrence Livermore National Laboratory where she led a 1,000-member group comprised of world-leading supercomputing and scientific experts. Since 2006, Damkroger has been a leader of the annual Supercomputing Conference (SC) series, the premier international meeting for high performance computing. She served as general chair of the SC’s international conference in 2014 and has held many other committee positions within industry organizations. Damkroger holds a bachelor’s degree in electrical engineering from California Polytechnic State University, San Luis Obispo, and a master’s degree in electrical engineering from Stanford University. She was recognized on HPC Wire’s “People to Watch” list in 2014 and 2018.