The University of Arizona

Events & News

Computer Science Colloquium

Category: Lecture
Date: Friday, January 16, 2009
Time: 10:00 am
Location: GS 906
Details: Refreshments will be available at 9:45 AM in the 9th-floor atrium
Speaker: Haifeng He
Title: Doctoral Candidate
Affiliation: Department of Computer Science

Memory Footprint Reduction of Operating System Kernels

As the complexity of embedded systems grows, operating systems (OSes) are increasingly used in embedded devices such as mobile phones, media players, and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general, containing features and code that are not needed in every application context and that incur unnecessary overhead. In most embedded systems, resources such as processing power, memory, and energy are strictly constrained; in particular, the amount of memory on embedded devices is often very limited. This, together with the widespread use of operating systems in embedded devices, makes it important to reduce the memory footprint of operating systems for embedded systems. This dissertation addresses that challenge and presents automated techniques for reducing the memory footprint of OS kernels for embedded systems.

Embedded systems typically run a small, fixed set of applications. Our first contribution is kernel code compaction, an automated approach that statically reduces the code size of an OS kernel by removing unused functionality. OS kernel code differs from ordinary application code: it includes a significant amount of hand-written assembly code, multiple entry points, implicit control-flow paths through interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximate decompilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation on an Intel x86 platform, applied to a Linux kernel already configured to exclude unnecessary modules, achieves a code size reduction of close to 24%.
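The core of such a compaction pass is a reachability computation over the kernel's call graph: any function not reachable from an entry point, directly or through a function pointer, can be removed. The sketch below illustrates that step only; the function names, the toy graph, and the treatment of pointer targets as a precomputed set are assumptions for illustration, not the dissertation's actual analysis.

```python
# Sketch of call-graph reachability for code compaction. Functions not
# reachable from any entry point (including address-taken functions,
# conservatively treated as possible function-pointer targets) are
# candidates for removal. All names here are illustrative.

def reachable_functions(call_graph, entry_points, address_taken):
    """Compute the set of functions that must be kept.

    call_graph    : dict mapping a function name to the set of its callees
    entry_points  : kernel entry points (system calls, interrupt handlers, ...)
    address_taken : functions whose address escapes (possible pointer targets)
    """
    worklist = list(entry_points) + list(address_taken)
    kept = set()
    while worklist:
        fn = worklist.pop()
        if fn in kept:
            continue
        kept.add(fn)
        worklist.extend(call_graph.get(fn, ()))
    return kept

# Toy example: 'unused_driver' is never reached from an entry point,
# so its code could be compacted away.
graph = {
    "sys_read": {"vfs_read"},
    "vfs_read": {"generic_file_read"},
    "irq_timer": {"schedule"},
    "unused_driver": {"unused_helper"},
}
kept = reachable_functions(graph, {"sys_read", "irq_timer"}, set())
removable = set(graph) - kept
```

In real kernel code the hard part is building an accurate `call_graph` in the presence of hand-written assembly, which is where the approximate-decompilation technique comes in: it lets a source-level pointer analysis bound the `address_taken` set instead of conservatively keeping everything.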

Although code compaction removes a portion of the OS kernel code, when the kernel is exercised with typical embedded benchmarks such as MiBench, most of the remaining code is executed infrequently, if at all. Our second contribution is on-demand code loading, an automated approach that keeps rarely used code on secondary storage and loads it into main memory only when it is needed. To minimize the overhead of code loading, we propose a greedy node-coalescing algorithm that groups closely related code together. Experimental results show that this approach reduces the memory requirements of the Linux kernel code by about 53% with little degradation in performance.
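A greedy coalescing pass of this general shape can be sketched as follows: repeatedly merge the pair of code clusters joined by the most frequently taken call edge, provided the merged cluster still fits within one load unit, so that hot caller/callee pairs are loaded together. The cost model, the size budget, and all names below are assumptions for illustration, not the algorithm from the dissertation.

```python
# Illustrative greedy node coalescing: merge clusters along the hottest
# call edges while respecting a per-load-unit size budget, so that
# cross-unit calls (which trigger loads from secondary storage) are rare.

def coalesce(sizes, edges, budget):
    """Group functions into load units.

    sizes  : dict fn -> code size in bytes
    edges  : dict (caller, callee) -> observed call frequency
    budget : maximum total size of one load unit
    """
    cluster = {fn: {fn} for fn in sizes}  # each fn starts in its own cluster
    # Visit edges from most to least frequent (greedy order).
    for (a, b), _freq in sorted(edges.items(), key=lambda e: -e[1]):
        ca, cb = cluster[a], cluster[b]
        if ca is cb:
            continue  # already in the same load unit
        if sum(sizes[f] for f in ca | cb) <= budget:
            merged = ca | cb
            for f in merged:
                cluster[f] = merged
    # Collect the distinct clusters.
    seen, units = set(), []
    for c in cluster.values():
        key = frozenset(c)
        if key not in seen:
            seen.add(key)
            units.append(key)
    return units

# Toy example: the hot pair fits in one unit; the large cold function
# stays in its own unit and is only loaded if actually called.
sizes = {"fast_path": 100, "helper": 100, "cold_driver": 300}
edges = {("fast_path", "helper"): 50, ("helper", "cold_driver"): 10}
units = coalesce(sizes, edges, budget=250)
```

The design point this illustrates is the trade-off the paragraph describes: larger clusters mean fewer loads but more resident memory, so the budget caps how much infrequently used code any one hot function drags into memory with it.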

Finally, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in the OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of Linux's slab allocators by 17.5% when running the MediaBench suite, while incurring only a minimal increase in execution time (1.9%).
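The general idea can be illustrated loosely: objects that sit idle in memory are kept in a compressed form and inflated transparently when accessed, trading a little CPU time for resident memory. The wrapper below is a stand-in written in Python with zlib; the dissertation's technique targets C structures in the Linux slab allocators, and nothing here reflects its actual mechanism.

```python
# Loose illustration of dynamic data structure compression: hold an
# idle object compressed, and decompress it on demand. zlib/pickle are
# stand-ins for illustration only.

import pickle
import zlib


class CompressedSlot:
    """Holds one object; stored compressed, inflated on access."""

    def __init__(self, obj):
        self._blob = zlib.compress(pickle.dumps(obj))

    def get(self):
        # Decompress on access; a real system would cache hot objects
        # uncompressed to bound the time overhead.
        return pickle.loads(zlib.decompress(self._blob))

    def compressed_size(self):
        return len(self._blob)


# Kernel-style records are highly regular (zeroed padding, small enums),
# which is why they compress well.
record = {"flags": 0, "refcount": 1, "padding": b"\x00" * 256}
slot = CompressedSlot(record)
```

The reported numbers (17.5% memory saved for 1.9% slowdown) reflect exactly this trade: compression shrinks resident data, while the decompress-on-access path adds a small, bounded execution-time cost.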