Meet the Intel Intelligent Storage Acceleration Library

By Leah Schoeb, Intel

As data centers manage growing volumes of data while maintaining SLAs, storage acceleration and optimization become all the more important. To help enterprises keep pace with data growth, storage developers and OEMs need technologies that accelerate storage performance and throughput while making optimal use of available capacity.

These goals are at the heart of the Intel® Intelligent Storage Acceleration Library (Intel® ISA-L), a set of building blocks designed to help storage developers and OEMs maximize performance, throughput, security, and resilience in their storage solutions while minimizing capacity usage. The acceleration comes from highly optimized assembly code, built with deep insight into Intel® architecture processors.

Intel® ISA-L is an algorithmic library that enables users to obtain more performance from Intel® CPUs and reduce their investment in developing their own optimizations. The library uses dynamic linking to allow the same code to run optimally across Intel's line of processors, from Atom to Xeon. The same technique ensures forward and backward compatibility, making it ideally suited for both software-defined storage and OEM or “known hardware” usage. Ultimately, the library helps end-user customers accelerate service deployment, improve interoperability, and reduce TCO by supporting storage solutions that make data centers more efficient.

This downloadable library is composed of optimized algorithms in five main areas: data protection, data integrity, compression, cryptography, and hashing. For instance, Intel® ISA-L delivers up to 7x bandwidth improvement for hash functions compared to OpenSSL algorithms. In addition, it delivers up to 4x bandwidth improvement on compression compared to the zlib compression library, and it lets users get to market faster and with fewer resources than they would need if they had to develop (and maintain!) their own optimizations.
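As a rough illustration of the compression building blocks, the sketch below compresses a buffer with the library's isal_deflate streaming interface, which follows a fill-in-a-stream-struct pattern familiar from zlib. The input data, buffer sizes, and include path are illustrative assumptions; production code would size buffers carefully and handle partial output.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <isa-l/igzip_lib.h>   /* header path may differ depending on how ISA-L is installed */

int main(void)
{
    /* Illustrative input; a storage stack would feed real block or object data here. */
    const char *input = "example payload example payload example payload example payload";
    uint32_t in_len = (uint32_t)strlen(input);
    uint32_t out_cap = in_len + 1024;           /* generous output buffer for this sketch */
    uint8_t *out_buf = malloc(out_cap);
    if (out_buf == NULL)
        return 1;

    struct isal_zstream stream;
    isal_deflate_init(&stream);                 /* set up a DEFLATE stream with default settings */

    stream.next_in = (uint8_t *)input;
    stream.avail_in = in_len;
    stream.next_out = out_buf;
    stream.avail_out = out_cap;
    stream.end_of_stream = 1;                   /* the whole input is supplied in one call */
    stream.flush = NO_FLUSH;

    if (isal_deflate(&stream) != COMP_OK) {
        fprintf(stderr, "compression failed\n");
        free(out_buf);
        return 1;
    }

    printf("compressed %u bytes to %u bytes\n", in_len, stream.total_out);
    free(out_buf);
    return 0;
}
```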

One way Intel® ISA-L can help accelerate storage performance cost-effectively is by accelerating data deduplication algorithms using chunking and hashing functions. If you develop storage solutions, you know how data deduplication improves capacity optimization by eliminating redundant copies of data. During deduplication, a hashing function can be applied to each data chunk to generate a fingerprint. Once each chunk has a fingerprint, incoming data can be compared against a stored database of existing fingerprints; when a match is found, the duplicate data does not need to be written to disk again.
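To make the fingerprinting step concrete, here is a minimal sketch assuming the multi-buffer SHA-256 interface (sha256_ctx_mgr_submit / sha256_ctx_mgr_flush) from the library's hashing functions. The fixed chunk size, the in-memory “fingerprint database,” and the duplicate-detection loop are illustrative assumptions standing in for a real deduplication engine, not part of Intel® ISA-L itself.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sha256_mb.h"   /* multi-buffer SHA-256 interface; header name assumed from the library's examples */

#define NUM_CHUNKS   4
#define CHUNK_SIZE   4096
#define DIGEST_BYTES 32          /* SHA-256 digest length */

int main(void)
{
    static uint8_t chunks[NUM_CHUNKS][CHUNK_SIZE];     /* stand-ins for fixed-size data chunks */
    uint8_t fingerprints[NUM_CHUNKS][DIGEST_BYTES];

    /* Make two chunks identical so the duplicate check below fires. */
    memset(chunks[0], 0xAB, CHUNK_SIZE);
    memset(chunks[1], 0xCD, CHUNK_SIZE);
    memset(chunks[2], 0xAB, CHUNK_SIZE);
    memset(chunks[3], 0xEF, CHUNK_SIZE);

    /* The context manager schedules several hash jobs across SIMD lanes in parallel. */
    SHA256_HASH_CTX_MGR *mgr = NULL;
    SHA256_HASH_CTX ctxpool[NUM_CHUNKS];
    if (posix_memalign((void **)&mgr, 16, sizeof(*mgr)) != 0)
        return 1;
    sha256_ctx_mgr_init(mgr);

    /* Submit every chunk, then drain all completed jobs with flush. */
    for (int i = 0; i < NUM_CHUNKS; i++) {
        hash_ctx_init(&ctxpool[i]);
        sha256_ctx_mgr_submit(mgr, &ctxpool[i], chunks[i], CHUNK_SIZE, HASH_ENTIRE);
    }
    while (sha256_ctx_mgr_flush(mgr) != NULL)
        ;

    /* The digest of each completed job becomes the chunk's fingerprint. */
    for (int i = 0; i < NUM_CHUNKS; i++)
        memcpy(fingerprints[i], ctxpool[i].job.result_digest, DIGEST_BYTES);

    /* Toy "fingerprint database": only chunks with a previously unseen fingerprint get written. */
    for (int i = 0; i < NUM_CHUNKS; i++) {
        int duplicate = 0;
        for (int j = 0; j < i && !duplicate; j++)
            duplicate = (memcmp(fingerprints[i], fingerprints[j], DIGEST_BYTES) == 0);
        printf("chunk %d: %s\n", i, duplicate ? "duplicate, skip write" : "new, write to disk");
    }

    free(mgr);
    return 0;
}
```

Submitting many chunks before flushing is what lets the multi-buffer implementation fill its parallel lanes, which is where much of the hashing speedup comes from.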

Data deduplication algorithms can be very CPU-intensive, leaving little processor headroom for other tasks; Intel® ISA-L removes this barrier. The combination of Intel® processors and Intel® ISA-L provides the tools to help accelerate everything from small office NAS appliances up to enterprise storage systems.

The Intel® ISA-L toolkit is free to download, and parts of it are available as open source software. The open source version contains the data protection, data integrity, and compression algorithms, while the full licensed version also includes the cryptographic and hashing functions. In both cases, the code is provided free of charge.

Our investments in Intel® ISA-L reflect our commitment to helping our industry partners bring new, faster, and more efficient storage solutions to market. This is the same goal that underlies the new Storage Performance Development Kit (SPDK), launched this week at the Storage Developer Conference (SDC) in Santa Clara. This open source initiative, spearheaded by Intel, leverages an end-to-end user-level storage reference architecture, spanning from NIC to disk, to achieve performance that is both highly scalable and highly efficient.

For a deeper dive, visit the Intel Intelligent Storage Acceleration Library site. Or for a high-level overview, check out this quick Intel ISA-L video presentation from my colleague Nathan Marushak.