Intel’s Disaggregated Server Complements Intel Rack Scale Design

Advancing data center modernization to increase energy efficiency, performance scale, and infrastructure reliability is paramount to building and operating world-class data centers. I believe IT departments have a crucial role in transforming their companies through innovation, solving critical business challenges by leveraging breakthrough technologies, solutions, and processes.

One of the challenges Intel IT faces is making a compelling ROI case for refreshing the compute technology in our data centers fast enough. We want to use the latest generation of Intel® Xeon® processors to deliver higher compute performance and throughput for Intel’s silicon design teams, but we face the same budgetary pressures as any IT shop. We currently refresh our data center servers every four years to gain more cores, better performance per core, and more DRAM per core, but I felt we could do better.

As I evaluated our refresh cycle and processes, I was struck by the fact that, although many times all we want to refresh is the CPU/DRAM module, we have to refresh the entire server, including chassis, cabling, fans, power supplies, drives, and so on—even though most of these components have many years of useful service left. It’s sort of like a homeowner replacing an entire light fixture and the wiring when all he or she wants is a more powerful, more efficient light bulb. And so, the disaggregated server was born—the first major breakthrough in server design since blade servers hit the market more than a decade ago.

When I first presented my idea of a server where the CPU/DRAM and NIC/Drives modules were entirely independent of each other, my colleagues greeted the idea with skepticism. But that’s part of the innovation process—you have to prove your idea is worth considering. My colleagues did a thorough technical assessment and confirmed that the idea was a good one, and the next phase of Intel’s data center transformation was “off to the races.”

From an initial conversation with our supplier, using a whiteboard illustration, to the delivery of an optimally tuned, high-quality product with full supply chain and large-scale delivery support took only five weeks. Within a few more weeks, several thousand of the new servers were installed and running Intel® silicon design jobs.

In our internal tests, we have determined that the disaggregated server design saves at least 44% over a full acquisition (rip-and-replace) refresh, and can cut provisioning time (IT technician labor) by as much as 77%. In addition, the disaggregated server is environmentally responsible, because you are not throwing away very good, stable technology components such as power supplies, cables, network switches, chassis controllers, or SAS drives that all have many years of life left. So far, Intel has deployed more than 40,000 of the disaggregated servers across several data centers.
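To make the economics concrete, here is a minimal sketch of the refresh arithmetic. The only figure taken from our results is the roughly 44% savings; every dollar amount below is a hypothetical placeholder, not Intel’s actual cost model. The point is structural: when the CPU/DRAM module is the only component that needs refreshing, the long-lived components drop out of the refresh bill entirely.

```python
# Illustrative sketch (hypothetical costs, not Intel's actual cost model):
# compare a full rip-and-replace refresh against a disaggregated,
# CPU/DRAM-module-only refresh.

FULL_SERVER = {
    "cpu_dram_module": 4200,   # the part that actually needs refreshing
    "chassis_fans_psu": 1500,  # long-lived components (10+ year life)
    "drives": 1200,
    "nic_cabling": 600,
}

def refresh_cost(components, refreshed):
    """Cost of a refresh that replaces only the named components."""
    return sum(cost for name, cost in components.items() if name in refreshed)

full = refresh_cost(FULL_SERVER, FULL_SERVER.keys())      # rip-and-replace
modular = refresh_cost(FULL_SERVER, {"cpu_dram_module"})  # disaggregated

savings = 1 - modular / full
print(f"full refresh: ${full}, modular refresh: ${modular}, "
      f"savings: {savings:.0%}")
```

With these placeholder numbers the modular refresh comes out 44% cheaper, matching the magnitude we measured; the actual split depends on your component costs and how long the non-compute hardware really lasts.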

The development of the disaggregated server, conceived as part of the Intel® Rack Scale Design (Intel® RSD) architecture, will bring huge advantages to the industry. These disaggregated servers can be installed in any existing data center—there’s no need to rebuild the data center to accommodate them—and they can provide significant cost and material savings as well as environmental benefits through reduced shipping costs, supply chain efficiencies, and so forth.

With 280 Intel® Xeon® processor-based server blades packed into a nine-foot rack, this high-density, high-efficiency disaggregated architecture is a game changer. For the first time, you can refresh server compute modules independently, without touching fans, drives, power supplies, or network switches, most of which typically last for more than 10 years. The disaggregated server enables data center managers to invest in emerging technology without wasting money on replacing technology that is not evolving. So many things are possible with this new server design. I anticipate that it will unleash a new wave of disaggregated hardware architectures, and everyone, from server vendors and suppliers to the end customer, will benefit.

Read the IT@Intel White Paper, “Disaggregated Servers Drive Data Center Efficiency and Innovation,” or listen to the podcast, “Inside IT: The Disaggregated Server as Part of Intel’s Rack Scale Design,” to learn more about how disaggregated servers, along with Intel Rack Scale Design, can transform your data center.

Shesha Krishnapura

About Shesha Krishnapura

Shesha Krishnapura is an Intel Fellow and chief technology officer in the Information Technology organization at Intel Corporation. He is responsible for advancing Intel data centers for energy and rack space efficiency, high-performance computing (HPC) for electronic design automation (EDA), and optimized platforms for enterprise computing. He is also responsible for fostering unified technical governance across IT, leading consolidated IT strategic research and pathfinding efforts, and advancing the talent pool within the IT technical community to help shape the future of Intel.

Shesha has led the introduction and optimization of Intel® architecture compute platforms in the EDA industry since 2001. He and his team have delivered five generations of HPC clusters and four supercomputers for Intel silicon design and device physics computation. Earlier in his Intel career, as director of software in the Intel Communications Group, he delivered the driver and protocol software stack for Intel’s Ethernet switch products. As an engineering manager in the Intel® Itanium® processor validation group, he led the development of commercial validation content that produced standardized workload and e-commerce scenarios for successful product launches. He joined Intel in 1991 and spent the early years of his Intel career with the Design Technology group.

A three-time recipient of the Intel Achievement Award, Shesha was appointed an Intel Fellow in 2016. His external honors include an InformationWeek Elite 100 award, an InfoWorld Green 15 award, and recognition by the U.S. Department of Energy for industry leadership in energy efficiency. He has been granted several patents and has published more than 75 technical articles. Shesha holds a bachelor’s degree in electronics and communications engineering from University Visvesvaraya College of Engineering in Bangalore, India, and a master’s degree in computer science from Oregon State University.
He is the founding chair of the EDA computing board of advisers that influences computer platform standards among EDA application vendors. He has also represented Intel as a voting member of the Open Compute Project incubation committee since its inception.