ISC 2010 (International Super-Computing) just wrapped up with another update of the Top500 (www.top500.org), where the top 500 supercomputers in the world are ranked and listed twice a year. In terms of the fabric used for these supercomputers, InfiniBand continues to be the dominant choice for performance. This time, however, two new 10 Gigabit Ethernet (10GbE) clusters were added, and both use iWARP: #102 at Purdue University is showing 52.2TF with 993 nodes, and #208 at HHMI (Howard Hughes Medical Institute) is delivering 35.8TF with 500 nodes.
The HHMI cluster uses Intel's NetEffect 10GbE NICs that implement iWARP - learn more at www.intel.com/go/ethernet and click on the iWARP link. Also of note is the impressive Top-100 efficiency of the HHMI iWARP cluster - ranked #84 at 84.1% - exceeding the efficiencies of many InfiniBand 40Gb QDR clusters and other custom fabric solutions. These are good examples of how Ethernet continues to evolve into a strong performer in the HPC world.
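For readers unfamiliar with the metric: Top500 efficiency is simply sustained LINPACK performance (Rmax) divided by theoretical peak (Rpeak). A quick sketch below shows the arithmetic; note that the Rpeak figure is back-derived here from the Rmax and efficiency numbers quoted above, not taken from the list itself.

```python
def efficiency(rmax_tf: float, rpeak_tf: float) -> float:
    """Top500 efficiency: sustained LINPACK (Rmax) over theoretical peak (Rpeak)."""
    return rmax_tf / rpeak_tf

# HHMI cluster: Rmax = 35.8 TF at 84.1% efficiency implies Rpeak of roughly 42.6 TF
# (implied value, derived from the quoted figures for illustration)
implied_rpeak = 35.8 / 0.841
print(f"implied Rpeak: {implied_rpeak:.1f} TF")
print(f"efficiency: {efficiency(35.8, implied_rpeak):.1%}")
```

An 84.1% figure means the interconnect and software stack are delivering most of the hardware's theoretical floating-point capability to the benchmark, which is why it compares favorably with the QDR InfiniBand systems mentioned above.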