Peter J. Downey's Home Page
Peter J. Downey
Peter J. Downey is Professor Emeritus of Computer Science at the University of Arizona.
He received a Ph.D. from Harvard University in 1974, and subsequently joined the faculty of The Pennsylvania State University.
In 1978 he moved to Arizona, where he has been for 33 years.
He served as Head of the Department from 1998 to 2007.
He became Emeritus Professor in May 2011.
Downey's main research area is probabilistic analysis of algorithms and systems, with particular emphasis on modeling the performance of parallel algorithms.
Department of Computer Science
Gould-Simpson Building, Room 917
1040 E Fourth St
The University of Arizona
Tucson, AZ 85721-0077
pete at cs dot arizona dot edu
The goal of this research project is the development of analytic methods to evaluate the performance of parallel asynchronous algorithms and systems.
In order to understand the origins and effects of overheads in parallel computation, such as synchronization delays, the methods focus on probabilistic models of workloads and task processing times.
Rather than developing exact but unrealistic models with only narrow applicability, the emphasis is on bounds and approximations that are robust and valid for extremely general workload and task distributions (distribution-free methods). For large workloads and massive numbers of parallel processors, laws of large numbers apply and allow runtime and speedup to be approximated or bounded in terms of a few underlying parameters of the workload.
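As a small illustration of this large-workload regime, the sketch below simulates a static round-robin partition of N tasks over p processors (the partition rule and the exponential task-time distribution are illustrative assumptions, not choices taken from this project). Each processor's total is a sum of N/p task times, so by the law of large numbers it approaches (N/p)·μ and the speedup approaches p.

```python
import random

def parallel_runtime(task_times, p):
    """Static round-robin partition: processor i runs every p-th task;
    the parallel runtime is the slowest processor's total (a max of sums)."""
    partial_sums = [sum(task_times[i::p]) for i in range(p)]
    return max(partial_sums)

random.seed(1)
N, p, mu = 100_000, 16, 1.0
# Exponential task times with mean mu (an illustrative distribution).
tasks = [random.expovariate(1 / mu) for _ in range(N)]

sequential = sum(tasks)                 # S_N, the total work
parallel = parallel_runtime(tasks, p)   # a max of p partial sums
speedup = sequential / parallel
# Each partial sum concentrates near (N/p)*mu, so speedup is close to p
# (it can never exceed p, since the max is at least the average load).
print(round(speedup, 2))
```

Note how the gap between the speedup and p is exactly the implicit overhead discussed below: the max of the partial sums exceeds their average.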
Analysis of Overheads
Analysis of parallel algorithms is distinctly different from that of sequential algorithms because of the presence of overheads, which arise from the need to coordinate parallel activity. Overheads are of two fundamentally different kinds.
Explicit overheads result from the additional code a parallel algorithm needs to handle coordination matters such as forks and message sends.
Implicit overheads arise from delays spent waiting for synchronization events, such as joins, to occur.
Both implicit and explicit overheads increase with increasing degree of parallelism; expected implicit overheads also increase with the mean, variance, and skewness of the underlying task-time distributions.
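The dependence of implicit overheads on task-time variability can be seen in a small fork-join simulation (an illustrative sketch, not code from this project): the expected wait at a join of p tasks is E[M_p] − μ, which grows with the spread of the task-time distribution even when the mean is held fixed.

```python
import random
import statistics

def mean_join_wait(p, draw, trials=20_000):
    """Estimate the implicit overhead at a join of p parallel tasks:
    E[max of p task times] minus the mean task time."""
    mu = statistics.fmean(draw() for _ in range(p * trials // p))
    m = statistics.fmean(max(draw() for _ in range(p)) for _ in range(trials))
    return m - mu

random.seed(2)
p = 8
# Two illustrative task-time distributions, both with mean 1.
low_var = mean_join_wait(p, lambda: random.uniform(0.9, 1.1))   # small variance
high_var = mean_join_wait(p, lambda: random.expovariate(1.0))   # larger variance
# The expected join wait is much larger for the high-variance distribution.
print(round(low_var, 3), round(high_var, 3))
```

With the same mean and degree of parallelism, only the shape of the distribution differs, yet the expected synchronization delay changes by an order of magnitude.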
Much of this research aims to quantify the effects of such overheads, and to understand how they depend on various parameters of the workload distribution.
Foundations: Extreme Order Statistics
Let X_{1}, ..., X_{n} be random variables representing the execution times of tasks. Two random variables derived from the task times play a critical role in determining the performance of parallel activity:

- the extreme M_{n} = max(X_{1}, ..., X_{n}), and
- the sum S_{n} = X_{1} + ... + X_{n}.
Work aimed at understanding the behavior of both S_{n} and M_{n} for large n, even when the task times are dependent, forms a fundamental part of this research upon which all analysis is based.
This foundational theory must meet a very practical methodological constraint: it must be able to bound or approximate the behavior of extrema and sums without detailed information about the task workload distributions.
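One classical result of this distribution-free kind is the Gumbel / Hartley–David bound: for i.i.d. samples with mean μ and standard deviation σ, E[M_n] ≤ μ + σ(n−1)/√(2n−1), whatever the underlying distribution. The sketch below (illustrative, not code from this project) checks the bound empirically for two distributions.

```python
import math
import random
import statistics

def max_bound(mu, sigma, n):
    """Distribution-free bound for i.i.d. samples (Gumbel; Hartley and David):
    E[M_n] <= mu + sigma * (n - 1) / sqrt(2n - 1)."""
    return mu + sigma * (n - 1) / math.sqrt(2 * n - 1)

random.seed(3)
n, trials = 32, 20_000
results = {}
for name, draw, mu, sigma in [
    ("uniform(0,1)", random.random, 0.5, 1 / math.sqrt(12)),
    ("exponential(1)", lambda: random.expovariate(1.0), 1.0, 1.0),
]:
    # Monte Carlo estimate of E[M_n], compared against the bound,
    # which uses only the mean and standard deviation.
    emp = statistics.fmean(max(draw() for _ in range(n)) for _ in range(trials))
    results[name] = (emp, max_bound(mu, sigma, n))
    print(name, round(emp, 3), "<=", round(results[name][1], 3))
```

The bound is loose for short-tailed distributions and tighter for long-tailed ones, but it needs only two moments of the task-time distribution, exactly the methodological constraint described above.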
Scheduling and Sequencing
Parallel performance issues arise at the operating-system level in the design of multiprocessor schedulers. The design and analysis of simple scheduling policies with guaranteed performance bounds is an important part of this research.
New applications of stochastic ordering relations, used to prove the optimality of static scheduling policies and to develop bounds for dynamic policies, are being extended as part of this work.
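A well-known example of a simple static policy with a guaranteed performance bound (offered here only as an illustration, not necessarily a policy studied in this project) is Graham's LPT rule: sort tasks longest-first and greedily assign each to the least-loaded processor, giving a makespan at most (4/3 − 1/(3m)) times optimal on m identical processors.

```python
import heapq
import random

def lpt_makespan(tasks, m):
    """Graham's LPT rule: process tasks in decreasing order of length,
    always assigning the next task to the least-loaded of m machines.
    Guarantee: makespan <= (4/3 - 1/(3m)) * optimal."""
    loads = [0.0] * m
    heapq.heapify(loads)  # min-heap of machine loads
    for t in sorted(tasks, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

random.seed(4)
m = 4
tasks = [random.uniform(1, 10) for _ in range(50)]  # illustrative workload
ms = lpt_makespan(tasks, m)
# A simple lower bound on the optimal makespan: total work spread evenly,
# or the single longest task, whichever is larger.
lower = max(max(tasks), sum(tasks) / m)
print(round(ms / lower, 3))
```

Even against this crude lower bound, list scheduling guarantees ms ≤ (2 − 1/m)·lower, since the makespan is at most the average load plus (1 − 1/m) times the longest task; LPT typically does far better in practice.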
Department Home Page
http://www.cs.arizona.edu/~pete/