How much supercomputing can you do with $2,500 worth of hardware? The three university teams (plus one high school team) competing in the SC13 Student Cluster Commodity Track Competition will answer this question and plenty of others.
The Commodity Track is a new addition to the SC Student Cluster Competition event this year. We all know and love the Standard Track: university teams build the fastest cluster they can, then compete live at the show to see who can turn in the best numbers on a set of HPC applications. The only limit on Standard Track competitors is the 26 amp (110 volt) power cap and the requirement that their gear fits in one rack.
In the Commodity Track, the name of the game is getting the most bang for the buck. Competitors have $2,500 they can use to buy components to build a true HPC cluster.
The teams in the Commodity Track will be required to run the exact same applications as the Standard Track big iron teams. These apps include the HPCC benchmark (with a separate LINPACK), NEMO5, WRF, GraphLab, and a “Mystery App” that will be revealed during the competition. (We’ll be discussing these apps in an upcoming article.)
Commodity Track Rules of the Road
The rules are pretty simple: teams have to field configurations of at least two nodes, and they can’t use more than 15 amps (110 volt) to power their creations. The components they use have to be commercially available but don’t have to be brand-new. So teams could scour eBay, Craigslist, and garage sales in tech-centric neighborhoods to find deals on retired gear.
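That 15 amp cap translates into a hard wattage ceiling that teams will have to budget against. Here’s a quick sketch of the math – the per-component draws below are illustrative placeholders I’ve made up for the example, not measured figures:

```python
# The Commodity Track power cap: 15 amps at 110 volts gives a hard ceiling.
power_cap_watts = 15 * 110  # = 1650 W

# Illustrative per-node draw estimates (placeholders, not real measurements):
node_draw = {"cpu": 95, "motherboard": 40, "memory": 10, "disk": 8, "fans": 5}
nodes = 2  # competition minimum
total_draw = nodes * sum(node_draw.values())

print(f"Cap: {power_cap_watts} W, estimated draw: {total_draw} W")
print("Within the cap" if total_draw <= power_cap_watts else "Over the cap")
```

With numbers like these, a modest two-node build sits comfortably under the cap – it’s GPUs and overclocking (more on those below) that eat into the headroom fast.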
However, competitors do have to provide organizers with a complete breakdown of their parts, sources, and prices. This is to ensure that everyone stays under the $2,500 limit.
Decisions, Decisions, Decisions
Figuring out how to spend the money to get the most flops/dollar might seem simple on the surface. But as event organizer Daniel Kamalic commented recently, “In this track of the competition, you’re going to see some very (very!) creative solutions and approaches. These really exemplify the spirit of what we’re trying to develop in the next generation of computational scientists and engineers.”
Consider the different options for a moment. Given the dollar constraint, we’re not going to see much in the way of fancy InfiniBand interconnects – it’s probably going to be 1GbE across the board. But that decision still leaves many questions unanswered.
What kind of nodes should they look for? At the high end, they could buy a dual-socket board for as little as $270. For this, they’d get six memory slots, one x16 PCIe slot, and a single GbE LAN port.
Adding a couple of CPUs (Xeons in this case) at $210 to $230 each drives the cost per node to around $700 – just for the CPU and motherboard.
Since you need two nodes to compete, you’d have to commit $1,400 of your budget just to the motherboard/CPU combo. This leaves only $1,100 for an enclosure, cables, power supplies, memory, storage, switches, and all the other bits. Is that enough money to bring the cluster to life?
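The dual-socket math above is easy to lay out as a back-of-the-envelope calculation, using the article’s ballpark prices (not real quotes):

```python
# Budget math for the dual-socket option, using ballpark prices.
BUDGET = 2500
board = 270                 # dual-socket motherboard
cpu = 215                   # midpoint of the $210-$230 Xeon range

cost_per_node = board + 2 * cpu      # ~$700 per node, board + CPUs only
nodes = 2                            # competition minimum
committed = nodes * cost_per_node    # $1,400 on boards and CPUs
remaining = BUDGET - committed       # $1,100 for everything else

print(f"Per node: ${cost_per_node}, committed: ${committed}, left: ${remaining}")
```

That leftover $1,100 has to stretch across memory, storage, power supplies, a switch, cables, and enclosures – which is exactly the squeeze the teams have to think through.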
At the low end of the spectrum, they could pick up some inexpensive, single-socket motherboards for as little as $50. These boards would have two memory slots, a single GbE LAN connection, and a single x16 PCIe slot. A dual-core CPU (socket LGA 1155, for example) starts at around $60, or they could step up to a quad-core processor at about $200.
With these single-socket nodes, the motherboard/CPU cost could be as low as $110 each. Two of these nodes would total $220 vs. the $700 cost of the single dual-socket node we discussed a few paragraphs back. Of course, we’d still need to add power supplies, memory, disk, etc., and some of these costs will be higher because you need multiple items – like dual power supplies and cables – to support two nodes vs. a single node.
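Putting the two node styles side by side makes the trade-off explicit (again, these are the article’s ballpark board-plus-CPU prices, with everything else still to be added):

```python
# Board + CPU cost comparison for the two node styles.
cheap_node = 50 + 60          # single-socket board + dual-core CPU = $110
big_node = 270 + 2 * 215      # dual-socket board + two Xeons ~= $700

two_cheap_nodes = 2 * cheap_node   # minimal two-node cluster, $220

print(f"Two cheap nodes: ${two_cheap_nodes} vs. one big node: ${big_node}")
```

The low-end route leaves far more budget for memory, storage, and GPUs – but it also doubles up on power supplies, cables, and enclosures, and the cheap boards give up memory slots and CPU sockets.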
Another consideration is whether to try to jack up performance by adding some GPUs to the mix. Even the least expensive NVIDIA GPUs these days are CUDA compatible. For example, the Tesla-based GeForce 210 retails for around $30, but still packs a decent number-crunching punch.
Doubling GPU spending to $60 per card will get you a Fermi-based GT 630 that should deliver close to 2x the performance of the Tesla-based cards. NVIDIA publishes a handy table showing relative performance values for its GPUs.
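Taking those rough figures at face value, a quick price/performance check shows what the 2x estimate implies – note that the relative performance numbers here are the article’s ballpark estimates, not benchmark results:

```python
# Price/performance sketch using the article's rough figures
# (relative performance values are estimates, not benchmarks).
gpus = {
    "GeForce 210 (Tesla)": {"price": 30, "rel_perf": 1.0},
    "GT 630 (Fermi)":      {"price": 60, "rel_perf": 2.0},
}

for name, g in gpus.items():
    ratio = g["rel_perf"] / g["price"]
    print(f"{name}: {ratio:.4f} relative perf per dollar")
```

By these numbers the two cards come out roughly even on performance per dollar, so the real deciding factors are power draw under the 15 amp cap and how many x16 slots the chosen motherboards actually provide.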
Teams could also practice the formerly black art of overclocking their CPUs and/or GPUs. To me, overclocking isn’t as scary as it once was; there are a lot of ‘how to’ guides out there. But, of course, there aren’t any guarantees you won’t fry your chips. Doing a significant overclock means the teams will have to pay more attention to their motherboard/CPU combinations and will certainly have to increase their cooling capacity.
If I were heading down the overclocking road, I’d configure in some liquid cooling or maybe immerse the whole damn thing in a vat of mineral oil. Sure, you’d have to strip off the fans and remotely connect the drives (or seal them up), but you’d take heat off the table as a factor. Add in a cheap pump, a junkyard radiator, some tubing, a fan, and there you go.
Given the same $2,500 budget, what would you build? Would you go with dual-socket nodes or mini single-socket boards? How many and what kind of GPUs would you add? And would you go for broke and overclock it?