So here’s what all the fuss is about: university students build their own supercomputers – that’s the “cluster” part – for a live face-off to see whose is the fastest. That’s the “competition” part. They’re given real, scientific workloads to run and a power limit they can’t exceed, and the team with the highest-performing system wins.
These are great programs – chock full of technical challenges, both “book learning” and practical learning, and quite a bit of fun too. There are rules and traditions, just as there are in cricket, soccer, and the Air Guitar World Championships. Whether you’ve been following the competitions obsessively (sponsoring office betting pools, making action figures, etc.) or this is the first time you’ve ever heard of them, you probably have some questions… so here’s everything you need to know. Listen up; we’re only explaining this once.
Student Clustering Triple Crown
There are three major worldwide student cluster events. The United States-based Supercomputing Conference (SC) held the first Student Cluster Competition (SCC) in November 2007. The contest has been included at every subsequent SC conference, usually featuring eight or more university teams from the US, Europe, and Asia. As the first organization to hold a cluster competition, SC pretty much established the template on which the other competitions are based.
The other large HPC conference, the imaginatively named ISC (International Supercomputing Conference), held its first competition at the June 2012 event in Hamburg. This contest, jointly sponsored by the HPC Advisory Council, attracts teams from the US, Europe, and Asia. It has been a big hit with enthusiastic support from conference organizers, competitors, and show attendees.
The third entry is the Asia Student Supercomputer Challenge (ASC). These competitions are typically the largest in terms of numbers, with more than 300 teams applying for the finals. The competition, sponsored by Inspur, invites the 20 best applicants to the competition finals, which are hosted in various Chinese cities.
How Do These Things Work?
All three organizations use roughly the same process. The first step is to form a team of six undergraduate students (from any discipline) and at least one faculty advisor. Each team submits an application to the event managers in which they answer questions about why they want to participate, their university’s HPC/computer science curriculum, team skills, etc. A few weeks later, the selection committee decides which teams make the cut and which need to wait another year.
The teams who get the nod have several months of work ahead. They’ll need to find a sponsor (usually a hardware vendor) and make sure they have their budgetary bases covered. Sponsors usually provide the latest and greatest gear along with a bit of financial support for travel and other logistical costs. Incidentally, getting a sponsor isn’t all that difficult. Conference organizers (and other well-wishers, like me) can usually help teams and sponsors connect. The ASC is different because competition sponsor Inspur provides the servers for every competing team.
The rest of the time prior to the competition is spent designing, configuring, testing, and tuning the clusters. Then the teams take these systems to the event and compete against one another in a live benchmark face-off. The competition takes place in cramped booths right on the tradeshow floor.
All three events require competitors to run the HPCG benchmark and the HPL (LINPACK) benchmark, plus a set of real-world scientific applications. Teams receive points for system performance (usually “getting the most done” on the scientific apps) and, in some cases, for the quality/precision of their results. In addition to the benchmark and app runs, teams are usually interviewed by subject matter experts to gauge how well they understand their systems and the scientific tasks they’ve been running.
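To make the benchmark side concrete, here’s a minimal sketch of the back-of-the-envelope calculation teams do constantly: comparing theoretical peak performance (Rpeak) against the HPL score they actually achieve (Rmax). The node count, clock speed, and FLOPs-per-cycle figures below are invented for illustration, not results from any real team.

```python
# Back-of-the-envelope check a team might run on benchmark day:
# theoretical peak (Rpeak) vs. the HPL result actually measured (Rmax).
# All hardware numbers below are illustrative assumptions.

def theoretical_peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Rpeak = nodes * cores * clock (GHz) * FLOPs per cycle, CPU only."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# Hypothetical 10-node cluster: 16 cores/node, 2.6 GHz, 8 FLOPs/cycle (AVX).
rpeak = theoretical_peak_tflops(nodes=10, cores_per_node=16,
                                clock_ghz=2.6, flops_per_cycle=8)
rmax = 2.1  # hypothetical measured HPL score, in TFlop/s

print(f"Rpeak: {rpeak:.2f} TFlop/s")
print(f"Rmax:  {rmax:.2f} TFlop/s ({rmax / rpeak:.0%} of peak)")
```

Teams that measure well below the typical 60 to 80 percent of peak for HPL know they have a tuning problem to chase down before the real event.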
The team that amasses the most points is dubbed the “Overall Winner.” There are usually additional awards for the highest LINPACK score and “Fan Favorite,” and the ASC also gives awards for “Application Innovation” and an “E-Prize.”
While many of the rules and procedures are common between competition hosts, there are some differences:
SC competitions are grueling, 46-hour marathons. The students begin their HPCC and separate LINPACK runs on Monday morning, and the results are due around 5:00 p.m. that day. This usually isn’t very stressful; most teams have run these benchmarks many times and could do it in their sleep. The action really picks up Monday evening when the datasets for the scientific applications are released.
The apps and accompanying datasets are complex enough that it’s pretty much impossible for a team to complete every task. So from Monday evening until their final results are due on Wednesday afternoon, the students are pushing to get as much done as possible. Teams that can efficiently allocate processing resources have a big advantage.
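As a toy illustration of that allocation problem, here’s a hedged sketch: given a fixed budget of node-hours, a team might greedily run the tasks worth the most points per node-hour first. The task names, point values, and costs are all invented; real competition scoring is considerably messier.

```python
# Toy illustration of competition resource allocation (all numbers invented):
# spend a fixed budget of node-hours on the tasks worth the most points
# per node-hour -- a simple greedy heuristic.

tasks = [
    # (name, points awarded, node-hours required) -- hypothetical values
    ("app_A_small_dataset", 10, 4),
    ("app_A_large_dataset", 25, 30),
    ("app_B_run",           15, 8),
    ("app_C_run",           20, 12),
]

budget = 40  # node-hours remaining until results are due

# Sort by points per node-hour, best value first.
plan, total_points = [], 0
for name, points, cost in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
    if cost <= budget:
        plan.append(name)
        total_points += points
        budget -= cost

print("run order:", plan)
print("expected points:", total_points, "| node-hours left:", budget)
```

Notice that the greedy plan skips the big, prestigious dataset entirely; teams face exactly this kind of unglamorous triage at 2:00 a.m. on the show floor.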
ISC competitions are a set of three day-long sprints. Students run HPCC and LINPACK the afternoon of day one but don’t receive their application datasets until the next morning. On days two and three, they’ll run through the list of workloads for that day and turn in the results later that afternoon.
The datasets usually aren’t so large that they’ll take a huge amount of time to run, meaning that students will have plenty of time to optimize the app to achieve max performance. However, there’s another wrinkle: the organizers spring a daily “surprise” application on the students. The teams don’t know what the app will be, so they can’t prepare for it; this puts a premium on teamwork and general HPC/science knowledge.
The ASC competition is sort of a blend of the SC and ISC competitions. Students usually have the entire day to run a slate of applications, with results due at the end of the day. They’ll get another set of applications the next day and the competition usually includes a “mystery application” much like the ISC competition.
Hardware Dreams Spawn Electrical Nightmares
When it comes to hardware, the sky’s the limit. Over the past few years, we’ve seen traditional CPU-only systems supplanted by hybrid CPU+GPU-based clusters. We’ve also seen some ambitious teams experiment with cooling, using liquid immersion cooling for their nodes. Last year at SC12, one team planned to combine liquid immersion with overclocking in an attempt to clean the clocks of their competitors. While their effort was foiled by logistics (their system was trapped in another country), we’re sure to see more creative efforts along these lines.
There’s no limit on how much gear, or what type of hardware, teams can bring to the competition. But there’s a catch: whatever they run can’t consume more than roughly 3,000 watts at whatever volts and amps are customary in that location. In the US, the limit is 26 amps (26 amps × 115 volts = 2,990 watts, just shy of 3,000). At the ISC’13 competition in Germany, the limit will be 13 amps (13 amps × 230 volts = 2,990 watts). The same roughly 3,000-watt limit also applies to the upcoming ASC competition in Shanghai.
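The arithmetic behind those limits is worth a quick worked example (nominal line voltages assumed; real outlets vary a bit, and the per-node split below is purely hypothetical):

```python
# Power budget arithmetic: watts = volts * amps (nominal voltages assumed).
for venue, volts, amps in [("SC (US)", 115, 26), ("ISC (Germany)", 230, 13)]:
    print(f"{venue}: {amps} A x {volts} V = {volts * amps} W (~3 kW cap)")

# Spread across a hypothetical 10-node cluster, after reserving ~300 W
# for switches, file servers, and storage, that leaves very little per node:
per_node = (2990 - 300) / 10
print(f"per-node budget: ~{per_node:.0f} W")
```

A couple hundred watts per node is not much when a single GPU card can draw that on its own, which is why power management dominates the teams’ design decisions.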
This is the power limit for their compute nodes, file servers, switches, storage, and everything else, with the exception of the PCs monitoring system power usage. There aren’t any loopholes to exploit, either: the entire system must remain powered on and operational during the entire three-day competition. This means that students can’t use hibernation or suspension modes to power down parts of the cluster to reduce the electrical load. They can modify BIOS settings before the competition begins but typically aren’t allowed to make any mods after kickoff. In fact, reboots are allowed only if the system fails or hangs up.
Each system is attached to a PDU (Power Distribution Unit) that tracks second-by-second power usage. When teams go over the power limit, the PDU alerts them, and the competition managers too. Going over the limit results in a warning to the team and a possible point deduction. According to the HPAC-ISC Student Cluster Challenge FAQ, “if power consumption gets well beyond the 13A limit, Bad Things™ will happen…” meaning the team will trip a circuit breaker and lose lots of time rebooting and recovering their jobs.
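Here’s a sketch of what a team-side power watchdog might look like. The PDU query below is simulated; a real implementation would poll the actual unit (often over SNMP), and the thresholds are assumptions for illustration.

```python
import random
import time

POWER_LIMIT_W = 2990      # e.g., 13 A x 230 V at ISC; roughly the 3 kW cap
WARN_FRACTION = 0.95      # start shedding load at 95% of the limit

def read_watts():
    # Hypothetical stand-in for polling the real PDU (often done via SNMP);
    # here we just simulate a cluster hovering near its budget.
    return random.uniform(2700, 3050)

for _ in range(10):       # in practice this would run for the whole event
    draw = read_watts()
    if draw > POWER_LIMIT_W:
        print(f"OVER LIMIT: {draw:.0f} W -- throttle jobs before the breaker trips")
    elif draw > WARN_FRACTION * POWER_LIMIT_W:
        print(f"warning: {draw:.0f} W, only {POWER_LIMIT_W - draw:.0f} W of headroom")
    time.sleep(1)         # PDUs report power roughly second by second
```

Most teams run something in this spirit on a sidecar laptop so they can dial back clock speeds or pause jobs before the official PDU catches them over the line.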
On the software side, teams can use any OS, clustering, or management software they desire, as long as the configuration will run the required workloads. The vast majority of teams run some flavor of Linux, although Russian teams competed with a Microsoft-based software stack in 2010 and 2011, winning highest LINPACK at SC11 in (appropriately enough) Seattle.
Performance Feats
It’s amazing how much hardware can fit under that 3 kW limbo bar. At SC12 in Salt Lake City, the Overall Winner Texas Longhorn team used a 10-node cluster that sported 160 CPU cores plus two NVIDIA M2090 GPU boards. Texas had the highest node count; other teams used as few as six nodes, and some packed in as many as eight GPU cards. The latest trend is small two- to four-node clusters that are jam-packed with NVIDIA GPUs.
Performance gains in the SCCs are as large as (and in some cases larger than) the gains seen in the general HPC market. For example, the highest LINPACK score at the 2009 SC competition was 0.692 TFlop/s. The current (2019) student LINPACK record is over 56 TFlop/s, roughly an 80-fold increase in just ten years.
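That growth factor is easy to verify from the two scores quoted above:

```python
# Growth in student LINPACK records, using the scores quoted above.
old, new = 0.692, 56.0          # TFlop/s: SC 2009 vs. the 2019 record
years = 10

factor = new / old
annual = factor ** (1 / years) - 1
print(f"{factor:.0f}x overall, ~{annual:.0%} per year compounded")
# -> roughly 81x overall, ~55% per year
```

Sustaining roughly 55 percent compounded annual growth for a decade handily outpaces what most production HPC sites see between system refreshes.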
Compelling Competition
Speaking for myself (and the untold millions of maniacal fans worldwide), these competitions are highly compelling affairs. The one thing I hear time and time again from students is, “I learned sooo much from this…” They’re not just referring to what they’ve learned about systems and clusters, but what they’ve learned about science and research. And they’re so eager and enthusiastic when talking about this new knowledge and what they can do with it – it’s almost contagious.
For some participants, the Student Cluster Competition is a life-changing event. It’s prompted some students to embrace or change their career plans – sometimes radically. These events have led to internships and full-time, career-track jobs. For many of the students, this is their first exposure to the world of supercomputing and the career paths available in industry and research. Watching them begin to realize the range of opportunities open to them is very gratifying; it even penetrates a few layers of my own dispirited cynicism.
The schools sending the teams also realize great value from the contests. Several universities have used the SCC as a springboard to build a more robust computer science and HPC curriculum – sometimes designing classes around the competition to help prepare their teams. The contests also give the schools an opportunity to highlight student achievement, regardless of whether or not they win.
Just being chosen to compete is an achievement. As these competitions receive more attention, the number of schools applying for a slot has increased. Interest is so high in China that annual ‘play-in’ cluster competitions are held to select the university teams that will represent the country at ISC and SC.
With all that said, there’s another reason I find these competitions so compelling: they’re just plain fun. The kids are almost all friendly and personable, even when there’s a language barrier hindering full-bandwidth communications. They’re eager and full of energy. They definitely want to win, but it’s a good-spirited brand of competition. Almost every year we’ve seen teams donate hardware to teams in need when there are shipping problems or when something breaks.
It’s that spirit, coupled with their eagerness to learn and their obvious enjoyment, that really defines these events. And it’s quite a combination.
— Dan Olds