Sometimes more is more. Extending Linux Server Memory with Intel® Optane™ Technology

Sometimes more is more. Why make tradeoffs when we don’t have to? For candy lovers, the classic example is a peanut butter cup. Why choose between savory peanut butter and sweet chocolate? Have both. Similarly, you can have a notebook PC that is both thin and powerful. And, for you basketball fans, why choose between ball handling and low-post presence? Many players can do both.

Same for memory. Why choose between memory size and memory affordability? You might be thinking to yourself “because you have to”. Memory is expensive, and bigger memory is even more expensive. And yes, that has historically been the case. Until now.

With the launch of Intel® Optane™ DC Persistent Memory, combined with the already available Intel® Optane™ DC SSD with Intel® Memory Drive Technology, you have a portfolio that renders the classic memory tradeoff moot. Bigger memory? With Intel Optane persistent memory supporting up to 6TB of memory in a 2S configuration, and Intel Optane SSDs with Intel Memory Drive Technology supporting up to 24TB in a 2S system, you’ve got it. Affordability? This range of solutions delivers affordability not possible with DRAM. And in classic advertising parlance, “but wait, there’s more”. Persistence? With Intel Optane persistent memory, you’ve got it.

So what does all of this mean for the memory footprint on your Linux* server? If you are focused on cost savings, you might build a low-latency cache in an in-memory key-value data store, such as Redis or Memcached. Historically, these stores lived only in DRAM, but today, thanks to innovative software optimizations, they offer new configuration options to offload data from DRAM.
Read on for a quick fly-by, with links to learn more, on how Intel Optane technology renders your classic memory tradeoffs of size, affordability, and persistence (yes, persistence) moot.

Extending Memory with Intel® Optane™ DC SSDs
Today, you can deploy Intel® Optane™ DC SSDs with Intel® Memory Drive technology to extend your DRAM with a software-defined memory pool on Linux. With this combination, you can add up to 8x your Linux server’s installed DRAM! This configuration performs at around 10 microseconds of latency, which is slower than DRAM but with a much more affordable price tag than the same amount of DRAM.
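Because Intel Memory Drive Technology presents the SSD-backed pool as ordinary system memory, standard Linux tools can confirm the extended capacity once it is active. A quick sanity check (purely illustrative, nothing here is configuration-specific):

```shell
# Total memory as Linux sees it -- with Intel Memory Drive Technology
# active, this includes the software-defined, SSD-backed pool, not
# just the physical DRAM.
free -h

# The kernel's own accounting of total memory, in kilobytes.
grep MemTotal /proc/meminfo
```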

You can also take advantage of memory extension through Memcached Extstore* or with Redis on Flash, both of which allow users to offload cache data onto fast storage. The endurance and latency advantage of Intel® Optane™ SSDs over standard NAND SSDs enables you to write and read from fast storage for years to come, and to meet your responsiveness SLAs to keep end users happy. Whatever leading software solution you choose to leverage, you can save money with less DRAM and achieve denser nodes for edge caching or offloading that expensive SQL environment with just a few server nodes.
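As a concrete sketch of the Extstore path: recent Memcached versions accept an `ext_path` option pointing at a file on fast storage, so hot items stay in DRAM while larger, colder values spill to the SSD. The mount point and sizes below are illustrative assumptions, not recommendations:

```shell
# Keep 4GB of item memory in DRAM, and let cold values spill to a
# 64GB Extstore file on an Intel Optane SSD (mounted here at
# /mnt/optane -- adjust the path and sizes for your system).
memcached -m 4096 -o ext_path=/mnt/optane/extstore:64G
```

Redis on Flash takes a similar DRAM-plus-flash approach, but is configured through Redis Enterprise rather than a launch flag.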

Extending Memory with Intel® Optane™ DC Persistent Memory

Today, you can use Intel Optane DC persistent memory in volatile memory mode. With this choice, you can combine the 128GB, 256GB, or 512GB Intel Optane DC persistent memory DIMMs with system DRAM to enable up to 6TB of system memory. This configuration will enable higher performance as compared to the Intel Optane DC SSD option, but you will need to consider which is right for your needs.
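On a system with the DIMMs installed, volatile Memory Mode is typically provisioned with the `ipmctl` utility; after a reboot, the Optane modules serve as system memory and the DRAM acts as a cache in front of them. A minimal sketch, assuming `ipmctl` is installed and run as root:

```shell
# Provision 100% of the Intel Optane DC persistent memory capacity
# as volatile system memory (Memory Mode). Takes effect on reboot.
ipmctl create -goal MemoryMode=100

# After rebooting, confirm how the capacity was allocated.
ipmctl show -memoryresources
```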

If you need latency performance lower than 1 microsecond, Intel Optane DC persistent memory may be the best fit. This solution can be used with Memcached and Redis today, as well as Aerospike, a high-performance database solution, among others.

Let me share some recent benchmark numbers on open source Redis 4.0* that we ran comparing DRAM, Intel Optane DC persistent memory, and the Intel Optane SSD DC P4800X with Intel Memory Drive Technology. This is apples to apples: the exact same test, the same system, the same working set. We simply swapped out the memory subsystem in all three cases. You can see persistent memory getting amazingly close to DRAM (99%). Even our top engineers on the program were impressed. All virtual machines tested stayed under 1 millisecond for their application P99 latency. (See Footnote 1 for test and system configuration details.)
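For reference, a workload with the same shape as the one in Footnote 1 can be generated with memtier_benchmark: a 90/10 get/set ratio at 1024-byte values against a non-clustered Redis instance. The host, port, and duration below are placeholders, not the values from our test:

```shell
# Approximate the footnote's workload shape: 90% GET / 10% SET
# (memtier expresses this as a SET:GET ratio of 1:9) with
# 1024-byte values and random key access.
memtier_benchmark --server=redis-host --port=6379 \
  --protocol=redis --ratio=1:9 --data-size=1024 \
  --key-pattern=R:R --test-time=300
```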

For more detailed information, see the references below, and talk with your Intel sales representative or preferred technology provider. Starting today, the choice is yours: keep your less-used data in DRAM (and pay more), or do it differently!

See other ways in which Intel® Optane™ DC SSDs are making an impact in the data center.


1Tested on March 12, 2019 on an Intel S2600WP with Intel® Xeon® Gold 6254 processors (72 hardware threads), DDR4-2666, running CentOS 7.6 with a 4.18 kernel and standard Redis 4.0 in non-clustering mode over KVM virtual machines running CentOS 7.0 (kernel 3.10). The client system also ran 72 Intel Xeon hardware threads over an 80Gbit direct-attached network, using memtier_benchmark with a Redis data size of 1024 bytes. All records were 1024 bytes, running a standard 90/10 (get/set, or read/write) in-memory test. No storage was used; the workload was 100% “in memory” to compare DRAM to Optane media.

Intel® Optane™ DC Persistent Memory

Product information:

Aerospike with Intel® Optane™ DC Persistent Memory:

Try Intel® Optane™ DC Persistent Memory on Google Cloud Platform:

Intel® Optane™ DC SSD

Extending Redis onto storage:

Intel® Optane™ SSD DC P4800X with Intel® Memory Drive Technology:

Extending Memcached onto storage with Extstore*:

Try Intel® Optane™ DC SSDs on Packet:

Intel Optane DC P4800X Evaluation Guide:

Intel® Optane™ SSD DC P4800X with Intel® Memory Drive Technology Setup and Configuration Guide:

*Other names and brands may be claimed as the property of others.
©Intel Corporation. All rights reserved.

Frank Ober

About Frank Ober

Frank Ober, Data Center Solutions Architect, focuses on innovation related to next-generation non-volatile memory based on Optane media and NAND flash. Frank has over 30 years of experience, spanning real-world enterprise and cloud deployments and delivering innovative solutions from Intel’s advanced NVM labs in California. He lives out Intel’s mission of turning the world’s most advanced and innovative storage building blocks into deliverable solutions for end users across the world, particularly in the cloud segment.