IDH Integration With SLURM

IDH integration with SLURM for HPC

As discussed in a previous blog, the big data community in the scientific and High Performance Computing (HPC) domain is looking to harness the power of the Hadoop MapReduce framework to tackle some of its domain-specific analytics challenges. In this blog we discuss in more detail the effort to integrate IDH with existing HPC frameworks.

A Tale of Two Communities

To set some context, the requirement for high-performance computing came out of a growing need for computational speed to support scientific research. Problems in HPC are characterized by relatively small quantities of input data that generate large volumes of output data. Over the years, the specialized equipment and technology that evolved to support this paradigm consisted of scale-up system architectures: a multitude of CPU cores connected via high-speed, expensive network fabrics, backed by large parallel file systems, and using diskless compute nodes for reliability and security. The languages and operating environments of the scientific community that these technological advances catered to were predominantly C and FORTRAN.

Big data technologies like Hadoop, in contrast, arose in the mid-2000s out of a need to process large amounts of data, much of it streaming continuously to back-end systems from a connected world. Platforms such as Hadoop handled the deluge by breaking the data into smaller, actionable blocks that could be worked on in parallel across a distributed cluster. Since the motivation for this technology was primarily internet commerce, the hardware typically consisted of low-speed networks connecting disjoint commodity server nodes with cheap local storage, glued together by a software framework. The languages of the internet community were mainly Java and C++, along with a host of newer scripting and programming frameworks.

As we have previously touched upon, the Intel Distribution of Hadoop (IDH) is evolving to bridge the gap between these two contrasting communities, which are trying to tackle a very similar big data deluge problem. More specifically, the Intel Hadoop team is working on a version of IDH for HPC environments to address their big data problems.

Integrating with a Resource Manager

The HPC domain has many resource managers that have been developed over time to meet the specific needs of managing and sharing cluster hardware across various data platforms and applications. Some popular examples are SLURM, Torque, LSF, Moab, and SGE. All of these provide a wide range of services for high availability, scalable operation, and flexible scheduling. The Intel Distribution of Hadoop (IDH) has chosen the Simple Linux Utility for Resource Management (SLURM) for its initial reference implementation of Hadoop support on HPC clusters.

SLURM has a general-purpose plugin mechanism that makes it easy to support various infrastructures, and a number of plugins have been developed to emulate the Hadoop runtime in a SLURM-controlled environment. SLURM itself will require no changes, in configuration or otherwise, to accommodate IDH's MapReduce framework. The SLURM administrator may choose to create and designate a specific SLURM queue (partition) for Hadoop jobs, allowing separate accounting as well as separate priorities for Hadoop applications.
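
As a rough sketch, such a queue could be declared as a partition in slurm.conf. The partition name, node range, and limits below are illustrative assumptions that a site administrator would choose; they are not part of the IDH integration itself.

    # slurm.conf excerpt (illustrative sketch only): a dedicated partition for Hadoop jobs.
    # The partition name "hadoop", node range, and time limit are site-specific assumptions.
    PartitionName=hadoop Nodes=node[001-016] Default=NO MaxTime=24:00:00 State=UP

Jobs routed to this partition can then be accounted for separately, for example with sacct --partition=hadoop, and given their own scheduling priority relative to other work on the cluster.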

IDH 3.x already includes its own resource manager, YARN (Yet Another Resource Negotiator); with SLURM managing cluster resources, the integrated IDH-for-HPC framework has no need for YARN. To begin with, therefore, IDH only needs to integrate SLURM with Hadoop's MapReduce framework. The various components of the Hadoop ecosystem can be integrated as the need arises from the HPC community.

Integrating with Lustre FS

The HPC community has evolved sophisticated parallel distributed file systems to cater to the specific needs of scientific and technical computing. The Lustre file system is the most popular HPC file system, and IDH 3.x already provides support to integrate with Lustre out of the box. This renders HDFS, the default Hadoop file system, redundant, as explained below.

  • Even though it may appear counterintuitive to go against the Hadoop philosophy of moving computation to the data, removing HDFS eliminates the shuffle phase (a highly expensive operation); combined with the high-speed network interconnects behind HPC file systems such as Lustre, this more than compensates for the loss of data locality.
  • Replacing HDFS with Lustre eliminates the need to move data back and forth between Lustre and HDFS just to use the MapReduce framework.
  • Large-scale operations can be supported on the Lustre file system by appropriately placing processes on the nodes serving Lustre data, depending upon resource limits.
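
As a rough illustration of what the file-system side of this could look like, the Hadoop core-site.xml below points the default file system at a POSIX-mounted Lustre directory instead of HDFS. The mount point /mnt/lustre is a hypothetical example, and IDH's Lustre adapter may instead register its own URI scheme and FileSystem class.

    <!-- core-site.xml (illustrative sketch): use a shared Lustre mount in place of HDFS.
         The mount point /mnt/lustre is an assumption for this example. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>file:///</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/mnt/lustre/hadoop/tmp/${user.name}</value>
      </property>
    </configuration>

Because every compute node sees the same Lustre namespace, map and reduce tasks can read their inputs and write their outputs directly on the shared mount, with no ingest into HDFS.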

Framework Changes in IDH

By integrating SLURM with the MapReduce framework in IDH, we eliminate the need for many Hadoop daemons, such as the HDFS NameNode and DataNodes. All data now appears local to the processing nodes.

By eliminating YARN, we no longer need the YARN configuration files, with the exception of mapred-site.xml. Since there are no YARN connections between the nodes of the cluster, there is no need to provide interface-specific host names in the /etc/hosts file; SLURM, as the resource manager, is able to discover the nodes and their interfaces outside of the Hadoop operating environment. This is a huge benefit for an HPC cluster, as the Hadoop application can now be one among many workloads running on the same set of nodes with the same OS image.
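
To make this concrete, a MapReduce job under such an integration could be submitted like any other SLURM batch job. The script below is a hypothetical sketch: the "hadoop" partition, node count, example jar, and Lustre paths are assumptions, and the actual IDH/SLURM launch mechanism may differ.

    #!/bin/bash
    # Illustrative sketch only: run a MapReduce job as an ordinary SLURM batch job.
    # Partition, node count, jar, and paths are assumptions, not the final IDH interface.
    #SBATCH --job-name=wordcount
    #SBATCH --partition=hadoop
    #SBATCH --nodes=8
    #SBATCH --time=01:00:00

    # Input and output live on the shared Lustre mount, visible to every node.
    hadoop jar hadoop-mapreduce-examples.jar wordcount \
        /mnt/lustre/data/input /mnt/lustre/data/output

Submitted with sbatch, such a job would be scheduled, prioritized, and accounted for by SLURM alongside the cluster's other HPC workloads.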

IDH will need to make suitable changes to support the Intel Manager on HPC clusters. This will be tackled beyond the first phase of integration, as Intel Manager features such as resource monitoring, job-specific profiling, and cluster administration may be beneficial to HPC cluster users as well.

In summary, for the first phase of this integration IDH will attempt to support the MapReduce framework together with SLURM and Lustre. Not all components of the IDH ecosystem will be made available initially; however, with sufficient time, and depending upon adoption by the HPC community, other components will be added.

Contributions to this article have been made by:

Dr. Ralph Castain currently leads the Intel effort to port Hadoop to the HPC environment. Over his 30-year career, he has worked in areas ranging from resilient runtimes to artificial intelligence and proliferation detection. He has served as a lead developer in the Open MPI community for the last decade.

Raghu Sakleshpur is an engineering manager at Intel who works on Hadoop deployments and big data technologies with partners, ISVs, and customers. He is a technologist to the core and loves to share his experiences with big data and Hadoop technologies whenever the opportunity presents itself. In his spare time, he loves pursuing his other passions: running, hiking, biking, and traveling.