There is a performance-at-all-costs mentality at most of the nation's supercomputing centers that has resulted in significant and growing energy use. Energy consumption and the resultant heat dissipation are becoming important performance-limiting factors that we believe will eventually come to bear on high-performance computing users. The goal of our research is to design and implement system software that will allow HPC programs to consume less energy (generating less heat) with no more than a modest performance penalty---and to do so without burdening computational scientists.
This talk first discusses the energy consumption and execution time of applications from a standard benchmark suite (NAS) on a power-scalable cluster. Our results show that many standard scientific applications executed on such a cluster can save energy, without a significant increase in time, by reducing the processor "gear" (i.e., frequency and voltage). Next, we present software techniques to transparently determine effective gears. These include allowing different gears for different program phases and lowering the gear for nodes not on the critical path. Finally, we discuss using fewer nodes for programs whose parallel efficiency decreases as the number of nodes increases.
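To illustrate why gear reduction can save energy cheaply, consider a simple back-of-the-envelope model (my sketch, not from the talk): dynamic CPU power scales roughly as V^2 * f, while the memory- and communication-bound portions of a run are largely insensitive to CPU frequency. Only the CPU-bound portion slows down at a lower gear, so energy can drop much faster than execution time grows. The gear values and time breakdown below are hypothetical.

```python
def energy_time(f_ghz, volts, t_cpu, t_other, p_static=10.0, scale=100.0):
    """Estimate (time, energy) for one run at a given gear.

    f_ghz, volts -- the gear (frequency in GHz, supply voltage in V)
    t_cpu        -- CPU-bound seconds, measured at 1.0 GHz; scales as 1/f
    t_other      -- memory/network seconds, assumed frequency-independent
    p_static     -- frequency-independent power draw (watts, assumed)
    scale        -- arbitrary constant mapping V^2*f to watts
    """
    time = t_cpu / f_ghz + t_other
    p_dynamic = scale * volts * volts * f_ghz   # P ~ C * V^2 * f
    return time, (p_dynamic + p_static) * time

# Top gear: 1.0 GHz at 1.5 V; low gear: 0.8 GHz at 1.2 V (hypothetical).
t_hi, e_hi = energy_time(1.0, 1.5, t_cpu=60.0, t_other=40.0)
t_lo, e_lo = energy_time(0.8, 1.2, t_cpu=60.0, t_other=40.0)
```

Under these assumptions the low gear runs 15% longer (115 s vs. 100 s) but uses roughly 39% less energy, which is the kind of trade-off the talk examines on real NAS benchmarks.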
Bio:
David Lowenthal is an associate professor of Computer Science at the University of Georgia. He received his Ph.D. from the Department of Computer Science at the University of Arizona in 1996. His research centers on parallel and distributed computing, operating systems, and networks. Current research projects include addressing scalability and energy for high-performance computing, as well as developing an infrastructure for flexible TCP-compatible protocols.