Intel Takes On E-Waste With Disaggregated Servers

The server industry traditionally associates environmental stewardship with energy efficiency and water conservation. However, no conversation about green computing is complete without addressing the other elephant in the room: e-waste. Intel is taking on the challenge of e-waste with disaggregated server design, an innovation rooted in merging two concepts: total cost of ownership (TCO) and total cost to the environment (TCE).

Most people in data center management incorporate TCO into their growth strategy. But few understand or even consider TCE, particularly when it comes to updating data center infrastructure.

Historically, the server industry has focused on power usage effectiveness (PUE) and reducing water usage since both directly impact a company’s bottom line while also reducing environmental impact. At Intel, we’ve also done those things. Our innovative cooling design and high rack densities have helped us to reach unprecedentedly low PUEs and reduced our data center space footprint by 26 percent. But our commitment to TCE, as well as TCO, has forced us to look at e-waste.

Intel IT’s Lightbulb Moment

Intel IT refreshes data center servers every four years to take advantage of improvements in the Intel® Xeon® processor, every generation of which yields better performance per core and more memory per socket. While this intense schedule enables us to meet the current double-digit growth in demand, it also creates significant e-waste, since selective replacement of processors alone has not historically been an option. As a result, perfectly functioning chassis, cables, power supplies, network switches, fans, SSDs, and SAS drives are ripped out, even though many years of useful life may remain for these components.

To mitigate the costs and environmental impact of server refresh, we had to rethink our approach to server design, leading to Intel IT’s own “lightbulb moment.”

Several years back, the lighting industry experienced a number of closely timed efficiency innovations – taking consumers from traditional incandescent light bulbs to more efficient compact fluorescent lamps (CFLs), and then to the highly efficient and long-lasting light-emitting diode (LED) bulbs. The transition to more efficient light bulbs was not always straightforward. Manufacturers had to adjust lighting designs to make this transition simple and less costly for consumers. Ultimately, switching to more efficient lighting became as simple as screwing in a new lightbulb.

We decided to be equally innovative. We created the world’s first disaggregated server architecture, making it possible to independently refresh the CPU/DRAM module without involving adjacent components. After all, why discard perfectly good server components when they themselves do not change from one processor generation to the next? We simply decoupled the CPU/DRAM module from the NIC/drives module on the motherboard. Now, instead of spending hours on refresh, we just remove a few screws, slide out the old CPU/DRAM module, and install the new one.

Positive Outcomes From Disaggregating Servers

Since introducing the first disaggregated server design in 2016, Intel IT has deployed more than 220,000 disaggregated servers, using 13 different motherboard designs. The benefits include:

  • No need to replace perfectly good components.
  • No need to reinstall the OS.
  • At least a 44 percent reduction in refresh costs.
  • A 77 percent reduction in technician time spent on refresh.
  • An 82 percent reduction in refresh materials’ shipping weight.
  • An estimated reduction in e-waste of more than 50 percent.
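
To get a feel for what those percentages mean at fleet scale, here is a rough back-of-envelope sketch. Only the percentage reductions and the 220,000-server figure come from the results above; the per-server baseline numbers (cost, technician time, shipping weight for a traditional full refresh) are invented placeholders for illustration.

```python
# Back-of-envelope illustration of the refresh savings cited above.
# The reduction percentages and server count come from the post;
# the per-server BASELINE values are hypothetical placeholders.

SERVERS = 220_000  # disaggregated servers deployed since 2016

# Hypothetical per-server baseline for a traditional full refresh.
BASELINE = {
    "cost_usd": 1_000,   # refresh cost per server (placeholder)
    "tech_hours": 2.0,   # technician time per server (placeholder)
    "ship_kg": 20.0,     # shipping weight of materials (placeholder)
}

# Reductions reported for CPU/DRAM-module-only (disaggregated) refresh.
REDUCTION = {"cost_usd": 0.44, "tech_hours": 0.77, "ship_kg": 0.82}

def fleet_savings(servers, baseline, reduction):
    """Total fleet-wide savings per metric: servers * baseline * reduction."""
    return {k: servers * baseline[k] * reduction[k] for k in baseline}

savings = fleet_savings(SERVERS, BASELINE, REDUCTION)
for metric, value in savings.items():
    print(f"{metric}: {value:,.0f} saved across the fleet")
```

Even with conservative placeholder baselines, multiplying the published reduction percentages across a fleet this size shows why selective module replacement pays off in both dollars and shipped (and eventually discarded) material.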

The implications for rapidly growing data centers are undeniable. Disaggregated servers offer enormous value to expanding data centers by allowing them to quickly and efficiently upgrade performance, cores, and memory without leaving literally tons of e-waste in their wake.

Selective replacement of components is an environmentally conscious decision that can lower TCE without harming Intel’s TCO, proving that environmental and profit initiatives can not only coexist within the server industry, but can even be mutually beneficial.

Read the IT@Intel White Paper, “Green Computing at Scale,” to learn more about how disaggregated servers, along with other data center design innovations, can transform your data center.

Shesha Krishnapura

About Shesha Krishnapura

Shesha Krishnapura is an Intel Fellow and chief technology officer in the Information Technology organization at Intel Corporation. He is responsible for advancing Intel data centers for energy and rack space efficiency, high-performance computing (HPC) for electronic design automation (EDA), and optimized platforms for enterprise computing. He is also responsible for fostering unified technical governance across IT, leading consolidated IT strategic research and pathfinding efforts, and advancing the talent pool within the IT technical community to help shape the future of Intel.

Shesha has led the introduction and optimization of Intel® architecture compute platforms in the EDA industry since 2001. He and his team have delivered five generations of HPC clusters and four supercomputers for Intel silicon design and device physics computation. Earlier in his Intel career, as director of software in the Intel Communications Group, he delivered the driver and protocol software stack for Intel’s Ethernet switch products. As an engineering manager in the Intel® Itanium® processor validation group, he led the development of commercial validation content that produced standardized workload and e-commerce scenarios for successful product launches. He joined Intel in 1991 and spent the early years of his Intel career with the Design Technology group.

A three-time recipient of the Intel Achievement Award, Shesha was appointed an Intel Fellow in 2016. His external honors include an InformationWeek Elite 100 award, an InfoWorld Green 15 award, and recognition by the U.S. Department of Energy for industry leadership in energy efficiency. He has been granted several patents and has published more than 75 technical articles. Shesha holds a bachelor’s degree in electronics and communications engineering from University Visvesvaraya College of Engineering in Bangalore, India, and a master’s degree in computer science from Oregon State University.

He is the founding chair of the EDA computing board of advisers that influences computer platform standards among EDA application vendors. He has also represented Intel as a voting member of the Open Compute Project incubation committee since its inception.