The apps for this year’s edition of the SC12 Student Cluster Competition are the typical mix of HPC workloads, chosen to represent a range of scientific disciplines and computational challenges. In order to drink deeply from the chalice of victory, student teams will need to crawl inside each of the apps, find the bottlenecks, and figure out how to work around them – or make them less bottlenecky. (Note: there is no actual chalice of victory in the Student Cluster Competition. But there should be, don’t you think?)
Here are the scientific apps that the students will be wrestling with this year in Salt Lake City:
LAMMPS: a ‘classical’ molecular dynamics application. Meaning, I guess, it’s like the “Stairway to Heaven” of molecular dynamics. It’s used to model or simulate atoms and other smallish particles. It can be parallelized using message passing at 80% or higher efficiency.
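For anyone wondering what that “80% efficiency” figure actually means: parallel efficiency is just speedup divided by core count. Here’s a back-of-the-envelope sketch in Python – the timings are invented for illustration, not measured LAMMPS results:

```python
# Toy parallel-efficiency calculation. The timings below are made up
# for illustration -- they are not measured LAMMPS results.

def parallel_efficiency(t_serial, t_parallel, n_cores):
    """Efficiency = speedup / core count."""
    speedup = t_serial / t_parallel
    return speedup / n_cores

# Hypothetical run times for the same problem:
t_1core = 1000.0   # seconds on 1 core
t_64core = 18.9    # seconds on 64 cores

eff = parallel_efficiency(t_1core, t_64core, 64)
print(f"Parallel efficiency: {eff:.0%}")   # ~83%, in the ballpark quoted above
```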
QMCPACK: a Monte Carlo simulation with a quantum twist. It’s helpful when trying to solve ‘many-body’ problems that are typical in predicting what will happen when large numbers of particles interact with each other. Sure, some of you might say that the many-body Schrödinger equation does a pretty good job at this, but you run into problems when the number of particles gets too big or they’re moving too fast. It rapidly becomes a computational problem that can bog down even the most super of supercomputers. This is where you’d use Quantum Monte Carlo; it allows you to model a many-body wave function directly rather than by approximation. To get the desired statistical accuracy, you run more and deeper simulations. QMCPACK is interesting in that it’s highly scalable (90%+) on CPUs using MPI. It can run on GPUs very quickly but doesn’t scale nearly as well.
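To see why “more and deeper simulations” buys you statistical accuracy, here’s a toy Monte Carlo estimate in plain Python – nothing QMCPACK-specific, just the general principle that the error shrinks roughly as one over the square root of the sample count:

```python
# Toy Monte Carlo: estimate pi by sampling points in the unit square.
# Purely illustrative -- real QMC samples wave functions, not circles.
import random
import math

def estimate_pi(n_samples, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * hits / n_samples

for n in (1_000, 100_000, 10_000_000):
    est = estimate_pi(n)
    print(f"N={n:>10,}  estimate={est:.5f}  error={abs(est - math.pi):.5f}")
# Error shrinks roughly as 1/sqrt(N): 100x more samples ~ 10x less error.
```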
CAM: the Community Atmosphere Model, developed by NCAR (National Center for Atmospheric Research) as a tool for weather and climate research types. It’s been in wide use for quite a few years now. As an application, it doesn’t look to be as easy to scale as LAMMPS or QMCPACK. As you add more nodes, the message-passing traffic increases to the point where you’re barely getting 40% out of the additional hardware. However, this will depend on the data set and the size of the grid the judges require the teams to calculate.
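A crude way to picture that drop-off: per-node compute time shrinks as you add nodes, but communication time grows. The constants in this little model are invented purely to show the shape of the curve – they’re not CAM measurements:

```python
# Toy strong-scaling model: fixed problem size, growing communication cost.
# All constants are made up for illustration -- not CAM measurements.
import math

T_COMPUTE = 1000.0   # total compute work (arbitrary units)
T_COMM = 8.0         # per-node communication cost factor

def efficiency(nodes):
    """Speedup over one node, divided by node count."""
    t1 = T_COMPUTE + T_COMM
    tn = T_COMPUTE / nodes + T_COMM * math.log2(nodes + 1)
    return (t1 / tn) / nodes

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:>3} nodes: efficiency {efficiency(n):.0%}")
# Efficiency slides from 100% toward the ~40% range as nodes are added.
```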
PFLOTRAN: a massively parallel 3-D reservoir simulation application used to model flows through geologic formations. This comes in handy if you want to store something nasty underground and need to know whether something bad could result. This code is used to figure out what’s happening at the DOE Hanford nuclear facility in the Pacific Northwest. Plutonium for the first nuclear bomb was manufactured at Hanford, and there’s a fair amount (53 million gallons) of leftover toxic waste. It’s a good idea to understand how that material might interact with the containment vessels, the surrounding soil, and the groundwater – thus the need for PFLOTRAN. This app scales fairly linearly until you get to large core counts (more than 27,000), which is far beyond anything we’ll see at the Student Cluster Competition.
The real challenge for students in this competition is how to best make use of the hardware they have. On Monday evening, the teams will receive the data sets for each task. This is a critical time in the competition. Some teams just jump in and start running code as quickly as they can. Others will take some time, get a feel for the size and complexity of the various tasks, and then plan out how they’re going to attack each application.
In the past, we’ve seen ‘analysis paralysis’ set in – teams waiting too long to get started. The best teams understand their apps and how they work well enough to quickly estimate how much machine they’ll want to devote to each task. They also track their jobs as they’re running and are ready to make adjustments on the fly to keep things moving along as quickly as possible.
These apps are good representatives of mainstream HPC tasks, but I think it’s time to move toward the future. I’d like to see at least one business- or Big Data-centric application in the mix next year. For example, students could be given a graph problem where they have to find interrelationships between social networking contacts. Or they could perform some ‘what if’ scenarios on various portfolios and use the output to set up trading strategies.
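By way of example, here’s the kind of graph kernel I have in mind, shrunk down to toy scale in Python. The network and names are invented; a competition-grade version would chew on billions of edges:

```python
# Toy social-graph problem: find mutual contacts between members of a
# made-up network. Names and edges are invented for illustration.
from itertools import combinations

contacts = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "erin"},
    "dave":  {"alice"},
    "erin":  {"carol"},
}

# For every pair of people, list the contacts they share.
for a, b in combinations(sorted(contacts), 2):
    shared = contacts[a] & contacts[b]
    if shared:
        print(f"{a} and {b} share: {', '.join(sorted(shared))}")
```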
The world of HPC has already extended far beyond academic and lab environs, so why not bring this new world into the Student Cluster Competition?