The University of Arizona


Computer Science Colloquium

Category: Lecture
Date: Tuesday, November 13, 2007
Time: 11:30 am
Location: GS 906
Details: Light refreshments served in the 9th floor atrium at 10:45 AM.
Speaker: Joe Roback
Affiliation: Computer Science

Gossamer: A Threading Framework for Multicore Architectures

Multicore processors mark a new milestone in computing, bringing parallel computing to the commodity market. Although these designs preserve the familiar paradigms of binary compatibility and cache coherence, they present immense challenges for software developers: achieving performance requires extracting parallelism from applications, and finding or expressing that parallelism remains difficult. Writing explicitly parallel code can greatly increase overall development time, because it requires understanding concurrency in algorithms, determining the granularity of parallelism, designing data structures that permit correct parallel execution, and rewriting sequential programs to run in parallel. Parallel code is also prone to hazards such as race conditions, deadlocks, and memory contention, and programmers must understand the underlying hardware well enough to perform optimizations whose effects are often determined only by tedious and error-prone experimentation.

The goal of the work presented here is to create an execution framework that runs parallel code efficiently while concealing the difficulties of parallel programming from the programmer. To reach this goal, we are creating Gossamer, a C-based threading framework for shared-memory multicore architectures. A lightweight threading runtime handles the often-hidden parallelism found in many common applications, and C-language annotations help the programmer express that parallelism. A source-to-source compiler translates the annotations into library calls that handle thread creation, communication, and synchronization. Applications may include an arbitrary number of parallel kernels, and the framework dynamically adjusts to system load, data input, and user interaction to ensure efficient parallel computation.
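To make concrete the kind of boilerplate such a framework aims to hide, the sketch below shows explicit thread creation, argument passing, and joining written by hand with POSIX threads. This is standard pthreads code, not Gossamer's annotation syntax or runtime API (neither is given in the abstract); it simply illustrates the thread-management library calls that a source-to-source compiler could generate from annotations on an ordinary loop.

/*
 * Hand-written POSIX-threads version of a simple parallel reduction.
 * Illustrative only: Gossamer's annotations and library interface are
 * not shown in the abstract, so this is generic pthreads code.
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000L

static double partial[NTHREADS];   /* one slot per worker, no sharing */

/* Each worker sums its own slice of the range [0, N). */
static void *worker(void *arg)
{
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double sum = 0.0;
    for (long i = lo; i < hi; i++)
        sum += 1.0 / (double)(i + 1);
    partial[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    /* Explicit fork: one pthread_create call per worker. */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);

    /* Explicit join and reduction: the programmer manages synchronization. */
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("partial harmonic sum = %f\n", total);
    return 0;
}

An annotation-based framework of the kind described in the talk would let the programmer keep the original sequential loop and mark it as parallelizable, leaving the creation, partitioning, and joining shown above to the generated code and runtime.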