The tension is rising at the ISC’17 Student Cluster Competition being held this week in Frankfurt, Germany. Eleven university teams are vying for the coveted Ultimate Champion award, the Highest LINPACK award, and the Fan Favorite Prize.
I know that many of you are closely following these competitions, but a few words for those who aren't: get on the bandwagon. This is like college football for nerds. It's compelling competition, it's fun, it's science-y, and it features huge computers – what's not to like?
These kids have been working for months to design their clusters, learn the applications, and figure out how to tune them for head-spinning performance. The only constraint they have, and it's a biggie, is that their hardware can't consume more than 3,000W at any time during the competition.
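To get a feel for how tight that budget is, here's a back-of-envelope sketch in Python. The GPU wattage is the published TDP of the Tesla P100 (PCIe); the CPU and overhead figures are my own illustrative assumptions, not measured competition numbers:

```python
# Rough power budgeting for a hypothetical cluster under the 3,000W cap.
# TDPs are vendor-published maximums; real LINPACK draw is usually tuned
# below TDP by capping clocks.
POWER_CAP_W = 3000

P100_TDP_W = 250        # NVIDIA Tesla P100 (PCIe) published TDP
CPU_TDP_W = 145         # a mid-range Xeon, assumed for illustration
NODE_OVERHEAD_W = 200   # fans, drives, NICs, PSU losses -- a guess

def node_power(gpus_per_node, cpus_per_node=2):
    return (gpus_per_node * P100_TDP_W
            + cpus_per_node * CPU_TDP_W
            + NODE_OVERHEAD_W)

for gpus in (4, 8):
    per_node = node_power(gpus)
    max_nodes = POWER_CAP_W // per_node
    print(f"{gpus} GPUs/node -> {per_node}W per node, "
          f"{max_nodes} node(s) fit under {POWER_CAP_W}W")
```

Which lines up with what we see on the floor below: the GPU-heavy teams top out at two or three nodes, and the eight-GPU boards eat pretty much the whole budget in a single chassis.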
What’s always surprising about these competitions is the creativity of the students in designing their clusters. As you can see on the chart below, there’s quite a bit of difference between the teams in terms of their approaches.
On the small side, we have Team UMass with the smallest cluster possible – one node – although they have a brace of NVIDIA Jetsons to provide a low-power punch to their system. We have a few teams coming in with two and three nodes, but jam-packed with NVIDIA P100 GPUs.
For the first time we’re seeing the new motherboards that support two CPUs but a shocking eight full-size GPUs on the same board. Purdue/NEU and FAU are using these boards and looking to ride them into the upper echelon of the competition.
I'm usually in the "more gear, more better" school of thought, but maybe this eight-GPUs-per-node configuration is too much of a good thing. With that many GPUs sharing the host's PCIe lanes, at some point they're going to saturate the links between host and devices, and chaos will ensue.
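To put some rough numbers on that worry, here's a quick sketch of per-GPU host bandwidth when eight GPUs have to share the CPUs' PCIe lanes. The lane counts and switch topology are my assumptions, not the actual layout of these teams' boards:

```python
# Rough per-GPU host<->device bandwidth when GPUs sit behind PCIe switches.
# Assumes PCIe 3.0 (~0.985 GB/s usable per lane per direction) and that
# all GPUs move data at once -- worst case for a bandwidth-hungry app.
PCIE3_GBS_PER_LANE = 0.985

def per_gpu_bandwidth(gpus, uplinks=2, lanes_per_uplink=16):
    # Each x16 uplink from a PCIe switch to a CPU carries ~15.75 GB/s;
    # GPUs behind the switches split that uplink under contention.
    total = uplinks * lanes_per_uplink * PCIE3_GBS_PER_LANE
    return total / gpus

for gpus in (2, 4, 8):
    print(f"{gpus} GPUs: ~{per_gpu_bandwidth(gpus):.1f} GB/s each "
          f"vs ~15.8 GB/s for a dedicated x16 slot")
```

Under 4 GB/s per GPU is a far cry from a dedicated x16 slot, so any application that streams data back and forth between host and device is going to feel the squeeze; compute-bound codes that keep their data resident on the GPUs will care much less.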
It reminds me of the old "coffee table of doom" behemoth we saw at one of the earlier competitions. One of the German teams brought a row of towers, each loaded with a pair of Intel Xeon Phi co-processors plus NVIDIA K40 GPUs – a grand total of 16 accelerators. When they fired that bad boy up, I swear the lights in the hall dimmed.
Nanyang, the pride of Singapore, is also running a two-node cluster, but with a more modest eight GPUs. This is a team that has been steadily improving in these competitions, and I predict that they’re going to have a breakthrough into the upper tier of competitors soon.
EPCC from Edinburgh is sporting a three-node, liquid-cooled cluster with nine GPUs, and they're hoping to recapture the LINPACK crown they won a couple of competitions ago.
Spain's UPC team, backed by the Barcelona Supercomputing Center, has brought a beefy ARM-fueled rig with a whopping 768 cores and over 2TB of memory.
University of Hamburg, one of two German teams at this year's competition, is driving an Intel Knights Landing (KNL) Xeon Phi platform, with ten single-socket nodes providing 680 compute cores.
The US team from NERSC has eschewed accelerators in favor of a six-node traditional CPU-based cluster. While it won't capture the LINPACK trophy, it's a solid machine and should serve them well.
Beihang University from China is looking to improve on its second-place finish at the recently concluded ASC17 student cluster competition in Asia, and is running a five-node cluster equipped with 10 P100 accelerators.
The two most lauded teams, China's Tsinghua and South Africa's CHPC, are fielding nearly identical clusters. Tsinghua is coming off a huge win over a 20-team field at the ASC17 competition, while the CHPC team handily won ISC'16.
These teams are backed by two of the most stalwart vendor partners of cluster competitions worldwide. Inspur is sponsoring the Tsinghua team (Beihang too) and sponsors the entire ASC competition.
Dell is supporting the CHPC team and even helps out with their training by flying them to Dell’s Texas HQ for a week of HPC instruction, tech talks with HPC engineers, and BBQ.
Next up in our coverage is an up-close-and-personal look at each team via our video interviews. Stay tuned.