From some of my previous posts on the impact of analytics and BI at Intel, the evolution of Intel IT’s use of Big Data, and the migration to Cloudera from another Hadoop distribution, you might get the impression that Hadoop and its native MapReduce processing model are all there is to Big Data. In this presentation from the 2015 Hadoop Summit in San Jose, Intel IT’s Seshu Edala and Joydeep Ghosh look at which Big Data use cases do not work well with MapReduce. They describe their investigation of up-and-coming technologies that might handle these use cases better.
How can MapReduce be problematic? Intermediate results need to be written to storage. While this may not be a problem for many batch processing jobs, for use cases that iteratively process data, such as analysis of continuously streamed log data, these intermediate writes to storage such as disk can drastically slow processing. As a data stream is split and sent through a number of analysis functions, the processing can be modeled as a Directed Acyclic Graph (DAG). MapReduce is not particularly efficient at handling this kind of graph processing. We show storage writes in MapReduce vs. a generic DAG problem in the diagram to the right.
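To make the cost concrete, here is a toy illustration (not real Hadoop/MapReduce code) of a two-stage log-processing pipeline run two ways: a MapReduce-style version that materializes each stage's output to storage before the next stage reads it back, and a DAG-style version that chains the stages in memory. The file name and stage functions here are hypothetical, chosen only for this example.

```python
import json
import os
import tempfile

def stage_parse(lines):
    # Stage 1: parse raw log lines into records.
    return [line.split(",") for line in lines]

def stage_filter(records):
    # Stage 2: keep only the error records.
    return [r for r in records if r[1] == "ERROR"]

logs = ["a,ERROR", "b,INFO", "c,ERROR"]

# MapReduce-style: write stage 1 output to storage, read it back for stage 2.
tmpdir = tempfile.mkdtemp()
intermediate = os.path.join(tmpdir, "stage1.json")
with open(intermediate, "w") as f:
    json.dump(stage_parse(logs), f)         # extra serialization + disk write
with open(intermediate) as f:
    errors_mr = stage_filter(json.load(f))  # extra disk read + deserialization

# DAG-style: the second stage consumes the first stage's output directly.
errors_dag = stage_filter(stage_parse(logs))

assert errors_mr == errors_dag  # same answer; the in-memory path skips the round trip
print(errors_dag)
```

With two stages the round trip is a minor cost; in an iterative job that repeats such stages many times over a large data set, the repeated serialization and storage I/O is what drags MapReduce down.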
Much of Edala and Ghosh’s presentation is a look at the technologies that would be efficient and effective at handling DAG-type problems. You can look at their presentation for their conclusions, but one of the more promising technologies is called Spark. Spark was developed in UC Berkeley’s AMPLab, commercialized by the company Databricks, and is supported in Cloudera’s Hadoop distribution. Another consequence of looking at post-MapReduce technologies is that we have to rethink how Hadoop will fit with other Big Data technologies as those technologies evolve. The diagram below shows how the original Hadoop/MapReduce combination (with green fill) will evolve over time.
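The core idea that makes Spark a good fit for DAG-type problems can be sketched in a few lines of plain Python. This is an illustration of the concept, not Spark's actual API: transformations build up a lineage of steps lazily, and nothing runs or touches storage until an action pulls the results through the whole chain in memory. The `LazyDataset` class and its methods are hypothetical names for this sketch.

```python
# A minimal sketch of lazy, chained DAG evaluation in the style of Spark's
# RDD model (not the real Spark API). Transformations record work to do;
# the collect() action evaluates the whole chain in memory at once.
class LazyDataset:
    def __init__(self, data):
        self._iter = lambda: iter(data)

    def map(self, fn):
        parent = self._iter
        out = LazyDataset([])
        out._iter = lambda: (fn(x) for x in parent())  # record the step, don't run it
        return out

    def filter(self, pred):
        parent = self._iter
        out = LazyDataset([])
        out._iter = lambda: (x for x in parent() if pred(x))
        return out

    def collect(self):
        # The action: evaluate the chained transformations, all in memory.
        return list(self._iter())

logs = LazyDataset(["a,ERROR", "b,INFO", "c,ERROR"])
pipeline = logs.map(lambda l: l.split(",")).filter(lambda r: r[1] == "ERROR")
print(pipeline.collect())
```

Because the chain is evaluated in memory rather than materialized to storage between steps, iterative and streaming workloads avoid exactly the per-stage writes that slow MapReduce down.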