As our faithful Twitter followers have heard, Team Longhorn from The University of Texas at Austin won the Overall Championship Award in the Big Iron (Standard Track) division of the recently concluded SC13 Student Cluster Competition in Denver.
I’ve been very late in getting the detailed results to you, the rabid student cluster competition aficionados, due to my travel to South Africa to cover their regional cluster competition. (That, and how painfully long it takes me to write even a single word.)
And now, our anxiously-awaited analysis…
Big Iron Track: Texas Repeat
The only thing more difficult than winning the Overall Championship in a Student Cluster Competition is doing it twice in a row. Texas has become only the second team to score a repeat win in a major international tourney, joining Taiwan’s National Tsing Hua University, which took home the gold in 2010 and 2011.
This is Team Longhorn’s fourth SC competition. Along the way, they notched a couple of historic firsts. In 2010, they became the first student team to break through the TeraFLOP barrier on the way to winning the LINPACK crown.
At SC11 in Seattle, Team Longhorn was the first team to use liquid cooling. They immersed their entire system in a vat of mineral oil. It was very messy and earned them style points, but didn’t give them enough added “oomph” to win an award.
Texas came to Salt Lake City in 2012 with a new attitude and a handful of GPUs. They used both of these to grab their first Overall Championship Award and a celebratory dinner at The Cheesecake Factory.
Who Won What & How?
Texas scored strongly in both the application performance and the interview portions of the competition. The Longhorns were the top finisher in four of the seven scoring categories and took second place in an additional two. Their biggest single win was on the HPCC benchmark suite, where they topped other competitors by about 25%.
Hardware-wise, the Texans brought in what they call a “balanced system.” By this, they mean that they didn’t go hog-wild on accelerators; they stocked four NVIDIA K20s (one per compute node) vs. other teams who sported up to eight GPU crunchers.
Team Longhorn didn’t have the most CPU cores either. Their five-node cluster (four compute nodes plus a head node) hosted 100 CPU cores and “only” 320 GB of RAM – 3.2 GB per core, again less than what other teams fielded.
Team Boston, a.k.a. Team Chowdah (the Mass Green Consortium), took second place in the race for the Overall Championship, finishing about 15% behind the Longhorns and 10% ahead of the third-place finisher.
Boston scored a big win on the “Mystery App,” which was OpenFOAM, topping all other teams by a significant margin. They also took second place on HPCC, GraphLab, and NEMO5.
Team Chowdah fielded six nodes with two Xeon processors apiece (twelve in all), giving them 120 CPU cores and 768 GB of RAM (6.4 GB per core). They also had six NVIDIA Kepler K20 GPUs – one for each node.
Their only weakness, if it could be called that, was a middle-of-the-pack finish on the interview portion of the competition.
Team Germany, or Team Kraut (as they prefer), from Friedrich-Alexander University of Erlangen-Nuremberg took home the third-place honors, which is pretty good for a first-time competitor.
On the hardware side, they brought four nodes, each with dual Xeon processors, for a total of 64 CPU cores and 512 GB of RAM – 8 GB per CPU core. They also had a grand total of eight NVIDIA K20 GPUs to spur things along on the GPU-friendly apps.
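For the spec-sheet crowd, here’s a quick back-of-the-envelope comparison of the three podium systems, using the numbers reported above. (A minimal illustrative sketch in Python – the figures come from this writeup, and the script is mine, not anything the teams actually ran.)

```python
# Memory-per-core comparison of the three SC13 podium systems,
# using the node, core, RAM, and GPU counts reported in the article.

teams = {
    "Texas (Team Longhorn)": {"nodes": 5, "cores": 100, "ram_gb": 320, "gpus": 4},
    "Boston (Team Chowdah)": {"nodes": 6, "cores": 120, "ram_gb": 768, "gpus": 6},
    "Germany (Team Kraut)":  {"nodes": 4, "cores": 64,  "ram_gb": 512, "gpus": 8},
}

for name, cfg in teams.items():
    gb_per_core = cfg["ram_gb"] / cfg["cores"]
    print(f"{name}: {cfg['nodes']} nodes, {cfg['cores']} cores, "
          f"{cfg['ram_gb']} GB RAM ({gb_per_core:.1f} GB/core), {cfg['gpus']} GPUs")

# Texas (Team Longhorn): 5 nodes, 100 cores, 320 GB RAM (3.2 GB/core), 4 GPUs
# Boston (Team Chowdah): 6 nodes, 120 cores, 768 GB RAM (6.4 GB/core), 6 GPUs
# Germany (Team Kraut): 4 nodes, 64 cores, 512 GB RAM (8.0 GB/core), 8 GPUs
```

Note the pattern: the overall winner had the fewest GPUs and the least memory per core, while the third-place team had the most of both. Balance and tuning, not raw parts count, carried the day.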
They scored an outright win in the interview category and a big win on GraphLab. The team also took third place on HPCC and OpenFOAM. A nice debut for the Germans.
Honorable Mentions
Other noteworthy Standard Track results: Team Venus 2.0 (The University of the Pacific) took second place in two of the WRF categories and third on LINPACK – a big improvement over their results last year.
The Aussies from iVEC finished in the money on WRF and were in the thick of things on the other applications as well. Team Buffalo from the University of Colorado Boulder also put up a good effort, scoring in the middle of the pack across the competition categories. However, they were also busy with rocket launches to Mars (team members were responsible for tracking telemetry from the probes), so they can be excused if their full attention wasn’t on the competition.
China’s NUDT was also in the race until the very end. As everyone already knows, NUDT took home the LINPACK crown with a score of just over eight TFLOP/s. But they also turned in a good performance on the other competition applications, taking third place on NEMO5, on one of the WRF categories, and in the interview section.
Team Volunteer from The University of Tennessee, Knoxville had a string of bad luck at the worst possible time. The basic problem was that their competition system had some serious (and difficult to pinpoint) hardware incompatibilities. The only solution to their problem was open-heart surgery, meaning an interconnect transplant. This cost them all of Monday.
While the surgery took up precious competition time, there was another cost as well: they lost all of their application optimization modifications in the process. They spent weeks crafting these mods, and their system performance wasn’t nearly the same without them. It’s sort of like trying to compete in an auto race with a stock minivan.
But they persevered and finished the competition, and that’s what counts here; they didn’t throw in the towel. Judging by their interview scores, the team had strong application and HPC knowledge – enough to have made a very good showing if only their hardware hadn’t betrayed them.
However, that’s how it goes in the highly competitive world of big-time Student Cluster Competitions. It’s not all science and numbers; luck (bad or good) plays a role.
Congratulations to all of the teams who competed at SC13 this year. We’ll probably be seeing NUDT at the upcoming Asia Student Supercomputer Challenge next spring, and will definitely be seeing some of these schools competing at the newly expanded ISC’14 Student Cluster Challenge this summer in Germany. More details on the way…