We spent a few minutes talking to Team China before they submitted their final results for the 2011 Student Cluster Competition. They’re happy and have had a good time, but it’s hard to tell how they rate their own chances. We’ll find out soon…
Final Hours at SC11: Team Boston (Video)
We caught up with Team Boston (aka Team Chowder) a few hours before they turned in their results for SC11. They shared their thoughts about the competition and their results thus far, along with whatever else went through their sleep-deprived minds.
SC11: LINPACK Shocker in Seattle; Long Shot Comes Through (Video)
The results from the LINPACK portion of the Student Cluster Competition in Seattle have been released. This brief (barely three-minute) video reveals all, including a short discussion of the LINPACK rules, the winner, individual team results, and the final odds.
Knowledgeable bettors who put their virtual money on system configuration and experience will find themselves rewarded. Those who bet on brand names, emotion, or a mindless urge to follow the herd will find themselves a bit poorer today than yesterday. Stay tuned for more updates…
Meet the SC11 Team: Texas Longhorns (Video)
What can you say about the University of Texas Longhorns that they haven’t already said about themselves? They’ve got swagger for sure, and their LINPACK-topping success in 2010 showed that they can back up at least part of it. The Longhorns have brought the most attention-grabbing entry in Student Cluster Competition history with their 2011 deep-fried cluster.
What they’ve done is immerse their entire cluster (11 nodes with 132 Intel Xeon cores) in a vat of mineral oil. This gives them very effective cooling and saves enough energy to let them drive anywhere from 5-15% more cores.
The energy savings come from being able to remove the system fans, each of which could draw as much as 5 amps (a theoretical maximum under extremely harsh conditions). Circulating and cooling the oil takes some juice, to be sure, but still less than it would take to drive the various system and power supply fans.
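For a rough sense of the math, here’s a back-of-the-envelope sketch. Every figure in it (fan count, per-fan draw, pump draw) is my own assumption for illustration, not a number from Team Texas.

```python
# Back-of-the-envelope: fan power removed vs. oil-pump power added.
# All figures are illustrative assumptions, not Team Texas's actual measurements.

FAN_VOLTAGE = 12.0        # typical server fans run off 12 V DC
FAN_AMPS_TYPICAL = 1.5    # assumed steady-state draw per fan (well below the 5 A worst case)
FANS_REMOVED = 30         # assumed fan count across 11 nodes plus their power supplies
PUMP_WATTS = 150.0        # assumed draw of the oil circulation and cooling loop

fan_watts = FAN_VOLTAGE * FAN_AMPS_TYPICAL * FANS_REMOVED
net_savings = fan_watts - PUMP_WATTS

print(f"Fan power removed: {fan_watts:.0f} W")
print(f"Pump power added:  {PUMP_WATTS:.0f} W")
print(f"Net savings:       {net_savings:.0f} W")
```

Under those assumptions the oil loop comes out a few hundred watts ahead, and that headroom is what lets them drive the extra cores.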
Check out the video to get a better look at the Texas hardware and the guys who pulled it together. Upcoming vids will show them removing and replacing nodes after their initial cabling proved to be less than optimal.
As the competition progresses, we’ll see if their bold experiment pays off or if it just ends up as an ill-conceived, oily mess. I’m really happy to see a team take a chance on new technology – it’s that kind of spirit that drives the tech industry.
Meet the SC11 Team: Taiwan’s National Tsing Hua University (Video)
The team from Taiwan’s National Tsing Hua University is on a mission: to become the first university to repeat as Student Cluster Competition champions. While the team this year is almost entirely new, they have the same coach and have been mentored by their predecessors from the 2010 championship team.
They’ve added a new sponsor to the mix this year. Acer returns as their system sponsor, supplying a 72-core Xeon-based cluster, and NVIDIA is jumping on the Taiwan bandwagon with their contribution of six Tesla GPU cards.
Like the other teams driving GPUs, Taiwan’s success depends in part on how well they’ve adapted the scientific codes to exploit these specialized number-crunching beasts. For the apps that aren’t GPU-friendly, it comes down to how well they can use their traditional cluster hardware to handle the load.
Take a look at the video to get a feel for Team Taiwan. To me, they’re the definition of quiet confidence and competence. Like last year, they never seem to hurry and never show any signs of frustration. As one observer noted, they’re one of the most well-prepared teams in the competition – something that should aid them in their quest to take home another SCC trophy. (There isn’t a real SCC trophy, but there should be.)
Meet the SC11 Team: Russia’s NNSU (Video)
Team Russia, representing the State University of Nizhny Novgorod, is making its second appearance at the Student Cluster Competition. The team is again taking a hybrid approach, mixing Xeon CPUs (84 cores) with as many as 12 NVIDIA Tesla GPUs.
I say ‘as many as’ because they’ll probably end up trimming their configuration to stay under the 26-amp power limit. Even with system components throttled down, 84 Xeon cores and a dozen GPUs suck up a lot of power, enough to push them over the limit.
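Here’s a rough sketch of the kind of budgeting every team does when sizing a configuration against the power cap. The circuit voltage and every per-component wattage below are my own assumptions, not Team Russia’s measurements.

```python
# Rough check of how many GPUs fit under the competition's 26-amp limit.
# Circuit voltage and per-component wattages are illustrative assumptions only.

VOLTS = 120.0                        # assuming the limit is measured on 120 V circuits
AMP_LIMIT = 26.0
POWER_BUDGET_W = VOLTS * AMP_LIMIT   # roughly 3,120 W

XEON_CORES = 84
WATTS_PER_CORE = 12.0                # assumed per-core draw with the CPUs throttled down
BASE_SYSTEM_W = 400.0                # assumed draw for boards, memory, disks, and switch
GPU_WATTS = 225.0                    # assumed per-GPU draw under load

def total_draw(gpu_count: int) -> float:
    """Estimated wall-plug draw for a configuration with the given GPU count."""
    return BASE_SYSTEM_W + XEON_CORES * WATTS_PER_CORE + gpu_count * GPU_WATTS

for gpus in range(13):
    draw = total_draw(gpus)
    verdict = "fits" if draw <= POWER_BUDGET_W else "over the limit"
    print(f"{gpus:2d} GPUs -> {draw:6.0f} W ({verdict})")
```

With those made-up numbers the budget runs out well before a dozen GPUs, which is exactly why ‘as many as 12’ comes with an asterisk.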
The team returns with the same coaching staff, the same sponsor (Microsoft), and a roster of both veteran and newbie competitors. While they finished in the middle of the pack last year, they figure that another year of honing their clustering craft and another year of GPU development should pay off in a better finish.
Their first test was LINPACK on Monday, a task that should play to their “GPU-riffic” strengths. The other applications may or may not be a good fit for Team Russia’s hybrid cluster, depending on whether the team was able to find GPU- and Microsoft-ready versions of the code. With an experienced team and solid hardware, the Russians have a solid shot at SCC success.
Meet the SC11 Team: Purdue Boilermakers (Video)
Purdue is another team that’s participated in the SC Student Cluster Competition (SCC) since its inception. They’re a solid team with a half-n-half mixture of rookies and SCC veterans. This year they’re bringing the typical workmanlike Purdue attitude to the competition – along with a plethora of traditional HPC gear.
The Boilermaker cluster relies on the latest 10-core Intel Xeon CPUs provided by sponsor Intel. They’re running four quad-socket nodes, which gives them a total of 160 cores to devote to the various challenge applications. At 64GB per node their cluster is mid-range memory-wise, which may put them at a disadvantage vs. a few of their competitors.
In the off-season, the Purdue team worked on gaining a deeper understanding of the scientific disciplines behind this year’s competition workloads. This approach could pay off; they may have picked up tuning secrets that can be gained only through experience, or through picking the brains of experienced practitioners.
Another constant with Team Purdue is their signature sledgehammer. I was concerned when I didn’t see it prominently displayed in their booth as they set up their gear. But it was there today while they were running LINPACK, putting my mind at ease. Check out the video to get an up close and personal look at this crop of Boilermakers.
Meet the SC11 Team: Tecnológico de Costa Rica (Video)
Team Costa Rica is another first-time entrant in the 2011 Student Cluster Competition. Their official name is Tecnológico de Costa Rica, but to me they’re the Rainforest Eagles, and they’ve earned respect not only for representing their entire region but also for the hardships they overcame to make it to the SCC.
The team lost their hardware sponsor at the last minute, putting their chance to participate in the challenge in jeopardy. But the HPC Advisory Council and its chair, Gilad Shainer, stepped in to ensure that the team could not only show up but compete with the other teams on an equal footing.
The Rainforest Eagles are running the latest AMD Interlagos 16-core processors on Dell servers with a Mellanox InfiniBand interconnect. As you’ll see in the video, Team Costa Rica is experiencing InfiniBand for the first time and is eager to share the discovery with their computing compatriots back home.
Even though the audio quality on the video could be better (my fault – new microphone) I hope that the enthusiasm of the team shows through. They’re a bit shy; this is all new to them. But they’re having a great time and are ready to test themselves against much more experienced competitors. If you like pulling for the underdog, Costa Rica is your team. Take a look at the video and prepare to be convinced that maybe, just maybe, this is the year of the underdog.
Meet the SC11 Team: Colorado Buffaloes (Buffalos? Buffaloi?) (Video)
The University of Colorado Buffalo team has competed at every Student Cluster Competition – this is year six. The 2011 team is packed with veterans; almost every member has been to the big cluster dance before. Team Buffalo is driving 16-core AMD Interlagos processors perched on four Dell quad-socket server chassis.
Their 256 cores put them at the upper end of the conventional core count among the competitors. While half of this year’s teams added GPUs to their cluster stew, Colorado was looking to build a system that would perform well on all of the apps – not just those that have been optimized for CUDA.
The Buffs have separated themselves from the herd in two key areas: experience and memory. They have the most experienced team in the competition, which should pay dividends when it comes to deciding how to attack the scientific workloads during the 48-hour marathon part of the competition. Their cluster also has significantly more memory per node, sporting 128GB/node vs. an average of approximately 64GB for their competitors.
Take a look at the video to get a better feel for Team Colorado. It’s easy to see why they’ve won awards for “Fan Favorite” in the past; they’re an engaging bunch, and earnest as all get-out. They also have the distinction of fielding the tallest SCC competitor to date – a 7-footer. How will this factor into the battle? Will experience, high core counts, vast system memory, and the tallest average team member height combine to become the winning formula for Colorado in 2011?
Meet the SC11 Team: China’s NUDT (Video)
China’s NUDT (National University of Defense Technology) shocked the world in 2010 when they unveiled the fastest supercomputer known to man, the 2.57-petaflop Tianhe-1A. It came out of nowhere and caught the entire industry by surprise.
Team China, sponsored by NUDT, is hoping to do much the same thing at this year’s SC11 Student Cluster Competition (SCC) in Seattle. The team didn’t bring as much hardware as the others, but what they brought is potent.
Their cluster consists of only two nodes, each with dual Intel Xeon processors, for a total of 24 x86 cores. The punch comes from their NVIDIA GPU accelerators – six C2070s, to be specific – which will make short work of number-crunching workloads.
The success of Team China will depend on how well they’ve been able to adapt the scientific workloads to run on their GPUs. Some of the challenge workloads, like PFA, will run like a greased weasel (i.e. fast) on GPUs while others, like POP, haven’t yet been remodeled to take advantage of them.
While the scientific applications are an open question, Team China should be able to mount an effective challenge on LINPACK. Teams this year have the ability to submit a separate LINPACK run on modified hardware. This means that they can throw all the GPUs they have at it in order to capture the LINPACK crown. This should work in Team China’s favor and give them a solid chance to take home the LINPACK blue ribbon. (There isn’t an actual blue ribbon.)
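To see why the GPUs matter so much for LINPACK, here’s a rough theoretical-peak calculation for a configuration like Team China’s. The CPU clock speed, flops-per-cycle figure, and per-card peak are my own assumptions for illustration; actual LINPACK results come in well below theoretical peak.

```python
# Rough theoretical-peak (Rpeak) estimate for a small CPU+GPU cluster like Team China's.
# Clock speed, flops per cycle, and per-card peak are assumptions for illustration.

CPU_CORES = 24
CPU_GHZ = 2.93            # assumed Xeon clock speed
FLOPS_PER_CYCLE = 4       # assumed double-precision flops per core per cycle

GPU_COUNT = 6
GPU_PEAK_GFLOPS = 515.0   # assumed double-precision peak per Tesla C2070

cpu_peak = CPU_CORES * CPU_GHZ * FLOPS_PER_CYCLE   # GFLOPS
gpu_peak = GPU_COUNT * GPU_PEAK_GFLOPS             # GFLOPS
rpeak = cpu_peak + gpu_peak

print(f"CPU peak: {cpu_peak:7.0f} GFLOPS")
print(f"GPU peak: {gpu_peak:7.0f} GFLOPS")
print(f"Rpeak:    {rpeak:7.0f} GFLOPS (GPUs supply {gpu_peak / rpeak:.0%} of it)")
```

Under those assumptions the six Tesla cards account for more than 90 percent of the machine’s theoretical peak, so a LINPACK-only run that leans on every GPU available is the obvious play.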
Despite the obvious language barrier, it was clear that the team was happy to be at SCC and enjoying the experience. Take a look at the video to get a better feel for Team China and their chances at clustering glory.