Downey's main research area is probabilistic analysis of algorithms and systems, with particular emphasis on modeling the performance of parallel algorithms.
Parallel Program Performance
The goal of this research project is to develop analytic methods for evaluating the performance of parallel asynchronous algorithms and systems.
In order to understand the origins and effects of overheads in parallel computation, such as synchronization delays, these methods focus on probabilistic models of workloads and task processing times.
Rather than developing exact but unrealistic models with only narrow applicability, the emphasis is on bounds and approximations that are robust and valid for extremely general workload and task-time distributions (distribution-free methods).
For large workloads and massive numbers of parallel processors, laws of large numbers apply and allow run time and speed-up to be approximated or bounded in terms of a few underlying parameters of the workload.
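A minimal Monte Carlo sketch of this effect, assuming i.i.d. task times and one task per processor; the lognormal distribution and its parameters are illustrative choices, not a workload model taken from this work:

    import random, statistics

    def mean_speedup(n, trials=1000):
        """Monte Carlo estimate of the speed-up Sn/Mn for n tasks
        running on n processors (one task per processor)."""
        ratios = []
        for _ in range(trials):
            # Illustrative task-time distribution: lognormal (skewed, heavy-tailed).
            tasks = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
            s_n = sum(tasks)   # serial run time: one processor does all the work
            m_n = max(tasks)   # parallel run time: wait for the slowest task
            ratios.append(s_n / m_n)
        return statistics.mean(ratios)

    for n in (4, 16, 64, 256):
        print(n, round(mean_speedup(n), 1))

For large n the numerator concentrates near n times the mean task time, so the behavior of the speed-up is governed mainly by how the extreme Mn grows.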
Analysis of Overheads
Analysis of parallel algorithms is distinctly different from that of sequential algorithms because of the presence of overheads, which stem from the need to coordinate parallel activity. Overheads are of two fundamentally different kinds.
Explicit overheads result from the need to add code to a parallel algorithm to handle coordination, such as forks and message sends. Implicit overheads arise from delays spent waiting for synchronization events, such as joins, to occur.
Both implicit and explicit overheads increase with the degree of parallelism; expected implicit overheads also increase with the mean, variance, and skewness of the underlying task-time distributions.
Much of this research aims
to quantify the effects
of such overheads,
and to understand how they depend on
various parameters of the workload distribution.
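As a concrete illustration of the dependence on the task-time distribution, the following sketch estimates the implicit overhead at a join, assuming a fork-join of n i.i.d. tasks; the gamma family appears here only because its shape parameter conveniently controls variance and skewness at a fixed mean, and is not a workload model from the source:

    import random, statistics

    def idle_fraction(n, shape, trials=2000):
        """Mean fraction of total processor time spent idle at the join,
        for n i.i.d. gamma(shape) task times scaled to unit mean."""
        fractions = []
        for _ in range(trials):
            tasks = [random.gammavariate(shape, 1.0 / shape) for _ in range(n)]
            m_n = max(tasks)
            # Each processor runs its own task, then idles until the
            # slowest task finishes and the join fires.
            idle = n * m_n - sum(tasks)
            fractions.append(idle / (n * m_n))
        return statistics.mean(fractions)

    # Smaller shape => larger variance and skewness (at the same mean)
    # => a larger share of processor time lost waiting at the join.
    for shape in (8.0, 2.0, 0.5):
        print(shape, round(idle_fraction(n=32, shape=shape), 3))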
Foundations: Extreme Order Statistics
Let X1, ..., Xn be random variables representing the execution times of tasks. Two random variables derived from the task times play a critical role in determining the performance of parallel activity: the sum Sn = X1 + ... + Xn, the total work across all tasks, and the extreme order statistic Mn = max(X1, ..., Xn), the completion time of the tasks when they execute in parallel.
Work aimed at understanding the behavior of both Sn and Mn for large n, even when the task times are dependent, forms a fundamental part of this research upon which all analysis is based.
This foundational theory must meet a very practical methodological constraint: it must be possible to bound or approximate the behavior of extrema and sums without detailed information about the task workload distributions.
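One classical result of exactly this distribution-free flavor (a standard bound on expected extremes, cited here as an illustration rather than as a result of this project) bounds the expected maximum of i.i.d. task times using only their common mean \mu and variance \sigma^2:

\[
\mathbb{E}[M_n] \;\le\; \mu + \sigma\,\frac{n-1}{\sqrt{2n-1}} .
\]

Thus E[Mn] can exceed the mean by at most order sigma * sqrt(n), whatever the distribution; for well-behaved distributions the growth is far slower (for exponential task times with rate lambda, E[Mn] = H_n/lambda, roughly (ln n)/lambda, where H_n is the n-th harmonic number).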
Scheduling and Sequencing
Parallel performance issues arise at the operating-system level in the design of multiprocessor schedulers. The design and analysis of simple scheduling policies with guaranteed performance bounds is an important part of this research. New applications of stochastic ordering relations, which prove the optimality of static scheduling policies and yield bounds for dynamic policies, are being developed and extended as part of this work.
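To make the static/dynamic distinction concrete, here is a small simulation comparing two simple policies on the same random workload: a static round-robin assignment fixed before execution, and a dynamic greedy list schedule in which a free processor takes the next waiting task. Both policies and the exponential workload are illustrative stand-ins, not the specific policies analyzed in this project:

    import heapq, random, statistics

    def dynamic_makespan(tasks, p):
        """Greedy list scheduling: each of p processors takes the next
        unstarted task the moment it becomes free."""
        finish = [0.0] * p          # min-heap of processor free times
        heapq.heapify(finish)
        for t in tasks:
            heapq.heappush(finish, heapq.heappop(finish) + t)
        return max(finish)

    def static_makespan(tasks, p):
        """Round-robin pre-assignment made before task times are known."""
        loads = [0.0] * p
        for i, t in enumerate(tasks):
            loads[i % p] += t
        return max(loads)

    random.seed(1)
    p, n, ratios = 8, 200, []
    for _ in range(500):
        tasks = [random.expovariate(1.0) for _ in range(n)]
        ratios.append(static_makespan(tasks, p) / dynamic_makespan(tasks, p))
    print(round(statistics.mean(ratios), 2))   # mean penalty of scheduling blind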
Department Home Page
http://www.cs.arizona.edu/~pete/