HPC Babble / Supercomputing 2007 impressions

I just got back from Supercomputing 2007. I remember a conversation 10 or 12 years ago with someone I really respect. That was just after Intel's Supercomputing Division had folded and HPC was in one of its cyclical downturns. Our conversation was roughly about there being no demand for large supercomputers anymore (outside of govt). We surmised that what seemed to be needed most was a cheap gigaflop. To some extent we were right, but mostly we were wrong. Right in that, since then, basic engineering analysis has been a driver in the growth of HPC (think clusters). Very wrong in thinking that demand for compute cycles would not continue to increase, and wrong about a whole host of other things. Big, big miss on that one. If you believe the current market survey numbers, the high end of HPC is mostly stagnant (dollar-wise, but not innovation-wise), while the low end, small clusters, is growing by leaps and bounds.

One of the things I wanted to get a read on at the conference was 10GbE adoption. While a high performance interconnect can be important, especially if you are paying significant amounts for a SW license, so are convenience & ease of use. Particularly if the user base is increasingly not HPC geeks, but mechanical / electrical / aero-type engineers who just need to get some work done. Plus, 20 Gb InfiniBand / Myrinet / Quadrics might be overkill for small jobs (4 - 16 cluster nodes). My impression is that we still aren't there on 10GbE. I was hoping to see 10GBASE-T, but it was rare. A couple of vendors had it & could actually show me a switch, but that was it. CX4 really does give me hives.

And then there is the question of the day - accelerators. What I wanted to understand was the details of how people are programming these things, to get an idea of whether the PCIe interface is going to be a bottleneck or not. Are people 'blocking' real codes at coarse enough granularity to avoid a PCIe bottleneck? I mostly struck out. I did have a good chat with the Clearspeed folks. Their programmability looked much better than I expected, but I wonder if it will be too labor intensive for all but the highest-ROI situations.
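To make the blocking question concrete, here is a minimal back-of-the-envelope sketch in C, using round numbers I made up (roughly 2 GB/s sustained across PCIe and 50 GFLOP/s sustained on the card - assumptions for illustration, not anyone's spec sheet). For a matrix-multiply-like kernel the data shipped over the link grows as n^2 while the work done on the card grows as n^3, so the coarser the block you offload, the smaller the PCIe share of the total time.

    /* Back-of-the-envelope sketch (assumed numbers, not measurements):
     * for a dense matrix multiply, PCIe traffic scales as n^2 while the
     * accelerator's work scales as n^3, so larger offload blocks shrink
     * the fraction of time spent on the link. */
    #include <stdio.h>

    int main(void)
    {
        const double pcie_bw     = 2.0e9;   /* assumed sustained PCIe bandwidth, bytes/s */
        const double accel_flops = 50.0e9;  /* assumed sustained accelerator FLOP/s      */

        /* Sweep square matrix sizes n: move A, B, C (3*n^2 doubles), do 2*n^3 flops */
        for (int n = 256; n <= 8192; n *= 2) {
            double bytes  = 3.0 * (double)n * n * 8.0;   /* data over the link   */
            double flops  = 2.0 * (double)n * n * n;     /* multiply-adds on card */
            double t_xfer = bytes / pcie_bw;
            double t_comp = flops / accel_flops;

            printf("n = %5d: transfer %8.4f s, compute %8.4f s, PCIe share %5.1f%%\n",
                   n, t_xfer, t_comp, 100.0 * t_xfer / (t_xfer + t_comp));
        }
        return 0;
    }

With those assumed numbers the crossover comes quickly: at n = 256 transfer and compute times are about even, while by n = 8192 the link is only a few percent of the total. The catch, of course, is whether real codes can actually be restructured into blocks that coarse.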

Another item for me was small form factor boards & density in general. Supermicro, Tyan, Fujitsu, and Intel EPSD all had small form factor / high density stuff for rack 'n' stack configurations. SGI and Appro showed off what I considered complete systems based on small form factors. There were several more exotic options, but they tend to be outside my customer base.

The Sun / Rackable 'datacenter in a shipping container' offerings seemed to get a good amount of attention. I'll be very curious to hear why end users like them (assuming they do). Is it reduced CapEx? Is it shorter time to datacenter implementation / expansion?

Going back to the conversation of a decade ago: we've gotten to the cheap *flop - I'll claim it's clusters, or something close to them. Now the focus seems to be on making them 'user friendly' enough for the small industrial cluster crowd. Intel Cluster Ready is one example. WinCCS (or whatever they are calling it today) is another. But there were also a lot of booths emphasizing the out-of-the-box experience (SGI & Appro come to mind), or smaller players emphasizing per-application custom configuration & hand-holding / 'down the street, throat to choke' type service levels.