Topic: Advanced Optimization Techniques for Data-Parallel Programming Languages
Date:  Thursday, April 8, 1999
Place: Gould-Simpson, Room 701
Data-parallel languages aim to provide a simple, abstract, portable programming model applicable to a wide variety of parallel systems. The success of these languages has been hindered by the lack of sophisticated compilers and programming tools needed to achieve performance competitive with hand-coded parallel programs. The Rice dHPF project is aimed at developing compiler techniques and tool principles that provide consistently high performance for a wide class of data-parallel applications. The project has developed a prototype compiler and programming environment for High Performance Fortran (HPF) to demonstrate these ideas.
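The data-parallel model described above can be sketched in miniature. The code below is a hypothetical illustration, not dHPF or HPF itself (HPF expresses this with Fortran plus directives such as `!HPF$ DISTRIBUTE`): one logical array is BLOCK-distributed across `P` processors, and every processor executes the same statement but only on the indices it owns, yielding the same result as the serial program.

```python
# Illustrative sketch (assumed names, not dHPF code): BLOCK distribution
# and owner-computes execution of a simple data-parallel statement.

P = 4   # number of processors (chosen for illustration)
N = 16  # logical array size (divisible by P for simplicity)

def owned(p):
    """Indices owned by processor p under a BLOCK distribution."""
    blk = N // P
    return range(p * blk, (p + 1) * blk)

b = list(range(N))
a = [0] * N

# SPMD execution: each processor updates only its own block.
for p in range(P):
    for i in owned(p):
        a[i] = b[i] + 1

# The partitioned execution matches the serial semantics of a(:) = b(:) + 1.
assert a == [x + 1 for x in b]
```

The point of the abstraction is that the programmer writes the single logical statement; the compiler derives the per-processor loops and any communication.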
In this talk, I first give a brief overview of the dHPF project and then focus on two broad areas of innovation in the dHPF compiler: (1) a flexible framework for computation partitioning, and (2) an abstract integer-set framework for program optimization and code generation. The computation partitioning framework in dHPF is significantly more general than that in previous compilers, enabling more aggressive partitioning algorithms. The abstract integer-set framework enables simple, yet general formulations of communication analysis and optimization tasks. We have developed a number of novel optimizations made possible by the generality of these two frameworks. Several of these optimizations cannot be directly implemented in any other data-parallel compiler we know of.
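To give a flavor of the integer-set style of reasoning, here is a hypothetical sketch (not the dHPF implementation, which manipulates symbolic Presburger-style sets rather than enumerated ones): for a stencil `a(i) = b(i-1)` under a BLOCK distribution with owner-computes partitioning, the data each processor must receive is expressible as a difference of integer sets, recv(p) = { i-1 : i in compute(p) } minus own(p).

```python
# Illustrative sketch (assumed names): communication analysis phrased as
# operations on integer sets, for the stencil a(i) = b(i-1).

P, N = 4, 16
blk = N // P

def own(p):
    """Array indices owned by processor p (BLOCK distribution)."""
    return set(range(p * blk, (p + 1) * blk))

def compute(p):
    # Owner-computes: p executes the iterations whose left-hand side
    # it owns; iteration i = 0 is excluded (no b(-1) reference).
    return {i for i in own(p) if i >= 1}

def recv(p):
    # Indices referenced on the right-hand side, minus those owned
    # locally: exactly the data that must be communicated.
    return {i - 1 for i in compute(p)} - own(p)

# Each interior processor needs one boundary element from its left
# neighbor; processor 0 needs nothing.
assert recv(0) == set()
assert all(recv(p) == {p * blk - 1} for p in range(1, P))
```

In a compiler these sets are kept symbolic and parameterized by the processor number, so set operations like image, intersection, and difference directly yield send/receive sets and the loop bounds for the generated code.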
For HPF versions of the NAS application benchmarks, the dHPF compiler achieves execution times within 0-21% of sophisticated hand-coded message-passing versions of the same codes. These results require only minimal changes to the original serial form of the codes (modifying less than 6% of each code). Some of the new optimizations in dHPF provide orders-of-magnitude improvements in performance and were crucial in obtaining these results.