Intel Was Doing Software-Defined Infrastructure Before It Was Cool

Following Intel’s lead – decoupling software from hardware and automating IT and business processes – can help IT departments do more with less.

When I think back to all the strategic decisions that Intel IT has made over the last two decades, one stands out as setting the stage for all the rest: our move in 1999 from RISC-based computing systems to industry-standard Intel® architecture and Linux for our silicon design workloads. That transition, which took place over a five-year period, helped us more than double our performance while eliminating approximately $1.4 billion in IT costs.

While this may seem like old news, it was really the first step in developing a software-defined infrastructure (SDI) at Intel, before it was known as such. We solidified our compute platform with the right mix of software on the best hardware to get our products out on time.

Today, SDI has become a data center buzzword and is considered one of the critical next steps for the IT industry as a whole.

[Figure: Intel IT storage capacity growth]


Why is SDI (compute, storage, and network) so important?

SDI is what will enable enterprise data centers to meet spending constraints, maximize infrastructure utilization, and keep pace with demand that grows dramatically every year.

Here at Intel, compute demand is growing at around 30 percent year-over-year. And as you can see from the graphic, our storage demand is also growing at a phenomenal rate.

But our budget remains flat or has even decreased in some cases.

Somehow, we have to deliver ever-increasing services without increasing cost.
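
To see why flat budgets force an efficiency strategy rather than a spending one, here is a minimal back-of-the-envelope sketch. Only the roughly 30 percent year-over-year growth figure comes from the numbers above; the normalization and five-year horizon are illustrative.

```python
# Back-of-the-envelope: compounding demand against a flat budget.
# Only the ~30% year-over-year growth rate comes from the text above;
# the normalized starting values and horizon are illustrative.

GROWTH_RATE = 0.30   # ~30% year-over-year compute demand growth
YEARS = 5

demand = 1.0         # normalized demand in year 0
budget = 1.0         # normalized budget, held flat

for year in range(1, YEARS + 1):
    demand *= 1 + GROWTH_RATE
    # How much cheaper each unit of delivered service must become to stay on budget.
    required_unit_cost_drop = 1 - budget / demand
    print(f"Year {year}: demand {demand:.2f}x, unit cost must fall {required_unit_cost_drop:.0%}")
```

By year five, demand is roughly 3.7 times the baseline, so each unit of service has to cost about a quarter of what it did if spending stays flat.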


What’s the key?

Success lies in decoupling hardware and software.

As I mentioned, Intel decoupled hardware and software in our compute environment nearly 16 years ago, replacing costly proprietary solutions that tightly coupled hardware and software with industry-standard x86 servers and the open source Linux operating system. We first deployed performance-optimized Intel® Xeon® processor-based servers to deliver throughput computing, then added higher-clock, higher-density Intel Xeon processor-based servers to accelerate silicon design time to market (TTM) while significantly reducing EDA (Electronic Design Automation) application license cost. The result was software-defined compute capability that was powerful but affordable.
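
To make the license-cost argument concrete, here is a small, hypothetical sketch. It assumes per-concurrent-job licensing (a common EDA model) and uses invented job counts and runtimes; only the general claim that faster servers reduce license cost comes from the experience described above.

```python
# Why higher-clock servers can cut EDA license spend: if licenses are priced
# per concurrent job, faster cores finish each job sooner, so fewer licenses
# are needed for the same throughput. All numbers below are illustrative.

jobs_per_day = 1000          # design/verification jobs to complete daily
job_hours_baseline = 6.0     # runtime per job on the baseline servers
speedup = 1.3                # e.g. 30% faster on higher-clock servers

def concurrent_licenses(jobs: int, hours_per_job: float) -> float:
    """Average concurrent licenses needed to finish `jobs` within a 24-hour day."""
    return jobs * hours_per_job / 24

baseline = concurrent_licenses(jobs_per_day, job_hours_baseline)
faster = concurrent_licenses(jobs_per_day, job_hours_baseline / speedup)
print(f"Baseline licenses: {baseline:.0f}, with faster servers: {faster:.0f}")
# -> roughly 250 vs. 192: the same workload needs about 23% fewer concurrent licenses
```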

Technology has continued to evolve, enabling us to bring a similar level of performance, availability, scalability, and functionality to our storage and network environments using open source, software-based solutions on x86 hardware.

As we describe in a new white paper, Intel IT is steadily transforming Intel’s storage and network environments from proprietary, fixed-function solutions to standard, agile, and cost-effective systems.

We are currently piloting software-defined storage and identifying quality gaps so we can harden the capability for end-to-end deployment in business-critical use.

We transitioned our network from proprietary to commodity hardware, resulting in a cost reduction of more than 50 percent. We are also working with the industry to adopt and certify an open-source-based network software solution that we anticipate will drive down per-port cost by an additional 50 percent. For now, our software-defined network deployment is limited to a narrow virtualized environment within our Office and Enterprise private cloud.
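
Those two steps compound. A quick, illustrative calculation (the baseline dollar figure is hypothetical; only the two 50 percent reductions come from the text above):

```python
# Compounding effect of two successive ~50% per-port cost reductions.
# The starting cost is purely illustrative.
original_per_port = 100.0                     # hypothetical baseline cost per port
after_commodity_hw = original_per_port * 0.5  # >50% cut from commodity hardware
after_open_sw = after_commodity_hw * 0.5      # anticipated further ~50% cut from open software
print(f"Remaining per-port cost: {after_open_sw / original_per_port:.0%} of baseline")
# -> roughly 25% of the original per-port cost
```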


But that’s not enough…

Although decoupling hardware and software is a key aspect of building SDI, we must do more. Our SDI vision, which began many years ago, includes automated orchestration of data center infrastructure resources. We have already automated resource management and federation at the global data center level. Our goal is total automation of IT and business processes to support on-demand, self-service provisioning, monitoring, and management of the entire compute, network, and storage infrastructure. Automation will ensure that when workload demand arrives, it lands on right-sized compute and storage, so the application delivers the required quality of service without wasting resources.
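
As a rough illustration of the placement decision such automation makes, here is a minimal, hypothetical sketch. The pool names, sizes, and the place_workload helper are invented for this example; they are not Intel IT’s orchestration code.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    cpus: int          # free CPU cores
    storage_tb: int    # free storage capacity in TB
    qos_tier: str      # e.g. "best-effort", "business-critical"

@dataclass
class Workload:
    cpus: int
    storage_tb: int
    qos_tier: str

def place_workload(workload: Workload, pools: list[Pool]) -> Pool | None:
    """Pick the smallest pool that meets the workload's needs (right-sizing),
    so the application gets its required QoS without stranding capacity."""
    candidates = [
        p for p in pools
        if p.cpus >= workload.cpus
        and p.storage_tb >= workload.storage_tb
        and p.qos_tier == workload.qos_tier
    ]
    # Prefer the tightest fit to avoid wasting resources.
    return min(candidates, key=lambda p: (p.cpus, p.storage_tb), default=None)

# Example: a business-critical request lands on the smallest suitable pool.
pools = [
    Pool("hpc-large", cpus=512, storage_tb=200, qos_tier="business-critical"),
    Pool("gp-small", cpus=64, storage_tb=20, qos_tier="business-critical"),
    Pool("batch", cpus=256, storage_tb=100, qos_tier="best-effort"),
]
print(place_workload(Workload(cpus=32, storage_tb=5, qos_tier="business-critical"), pools).name)
```

The tightest-fit rule here stands in for whatever scoring a real orchestrator would use; the point is that the placement logic lives in software, not in the fixed configuration of any one box.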


Lower cost, greater relevancy

Public clouds have achieved enormous economies of scale by adopting open-standard-based hardware, operating systems, and resource provisioning and orchestration software, which lets them deliver cost-effective capabilities to the consumers of IT. If enterprise IT wants to stay relevant, we need to compete at a similar price point and with similar agility. SDI lets IT compete while maintaining a focus on our clients’ business needs.

As Intel IT continues its journey toward end-to-end SDI, we will share our innovations and learnings with the rest of the IT industry — and we want to hear about yours, too! Together, we can not only stay relevant to our individual institutions, but also contribute to the maturity of the data center industry.


About Shesha Krishnapura

Shesha Krishnapura is an Intel Fellow and chief technology officer in the Information Technology organization at Intel Corporation. He is responsible for advancing Intel data centers for energy and rack space efficiency, high-performance computing (HPC) for electronic design automation (EDA), and optimized platforms for enterprise computing. He is also responsible for fostering unified technical governance across IT, leading consolidated IT strategic research and pathfinding efforts, and advancing the talent pool within the IT technical community to help shape the future of Intel. Shesha has led the introduction and optimization of Intel® architecture compute platforms in the EDA industry since 2001. He and his team have delivered five generations of HPC clusters and four supercomputers for Intel silicon design and device physics computation. Earlier in his Intel career, as director of software in the Intel Communications Group, he delivered the driver and protocol software stack for Intel’s Ethernet switch products. As an engineering manager in the Intel® Itanium® processor validation group, he led the development of commercial validation content that produced standardized workload and e-commerce scenarios for successful product launches. He joined Intel in 1991 and spent the early years of his Intel career with the Design Technology group. A three-time recipient of the Intel Achievement Award, Shesha was appointed an Intel Fellow in 2016. His external honors include an InformationWeek Elite 100 award, an InfoWorld Green 15 award and recognition by the U.S. Department of Energy for industry leadership in energy efficiency. He has been granted several patents and has published more than 75 technical articles. Shesha holds a bachelor’s degree in electronics and communications engineering from University Visvesvaraya College of Engineering in Bangalore, India, and a master’s degree in computer science from Oregon State University. He is the founding chair of the EDA computing board of advisers that influences computer platform standards among EDA application vendors. He has also represented Intel as a voting member of the Open Compute Project incubation committee since its inception.