Can We Outrun/Outcompute the Data Avalanche? (Part 3)


I’ve written about the big data problem and asserted some ideas on what makes up an ideal infrastructure. Let’s look at some progress I think is relevant.

At SGI we’ve been wrestling with the big data problem for many years now, and we’re continuing to build and integrate systems with the attributes we feel are ideal for data intensive computing. More recently, we’ve been encouraged by the potential of the Intel Xeon Processor 5500 Series (code named Nehalem) to take on the data intensive computing problem. We have run various “Data Intensive” performance benchmarks on the SGI Altix ICE platform with the Intel Xeon Processor 5500 Series to see how well the combination would handle real world Data Intensive Computing.

The results have been outstanding and represent material progress in sustainable efficiency for big data problems. The new system delivers reliably scalable performance gains of up to 140 percent over current generation systems across a variety of data-intensive applications.

So, can we outrun the data avalanche? We can discuss that more in the discussion room, but I think the answer is that we don’t really have a choice if we want to survive. It is just a matter of figuring out the best approach to keeping one step ahead of the huge amounts of data cascading toward us — and, if I have my way, achieving a better quality of life by feeling less stress from that data yoke on my shoulders.

I recently co-authored a technical white paper on Data Intensive Computing that will provide you a bit more insight on this topic. Feel free to download it at