2.4 Applications

With the path model in place, it is time to discuss a few specific applications of paths. The goal of this section is to provide a more concrete idea of how paths are used and, at the same time, to argue that many ideas and techniques that have been implemented in an ad hoc fashion in the past can be realized in a straightforward manner using paths. In that sense, the path model is believed to be a unifying abstraction. However, this discussion should not be taken to mean that paths are limited to the specific examples given below. Rather, the path model should be considered a framework that simplifies realizing the specific examples and that should enable the development of other path-based techniques that have yet to be discovered.

Broadly speaking, paths can provide two kinds of benefits: (1) they can improve code quality, and (2) they can improve resource management. The two kinds of benefits are typically independent, and hence, additive.

2.4.1 Code Optimizations

Paths can help improve code quality by virtue of the fact that they allow partial evaluation of the path processing function, as explained in Section 2.2. A few specific examples follow.

2.4.1.1 Code Synthesis

Pu et al. propose a runtime code synthesis technique that involves collapsing layers to avoid the overhead that is typically caused by crossing module or layer boundaries [86]. The technique allows further optimization of the collapsed code through factoring of invariants and elimination of data copying. For this technique to be applicable, it is necessary to know the invariants that are true for the code-path through the system that is to be optimized. In addition, it is necessary to know the sequence of functions that need to be collapsed. The path model proposed in this chapter provides both kinds of information and thus makes this technique readily applicable.
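
To make the idea concrete, the C sketch below contrasts a generic, interpreted path-processing loop with a collapsed version synthesized for one particular path. All names and numbers (step_t, path_t, add_header, the header sizes) are illustrative assumptions rather than part of the cited work; the point is only that, once the function sequence and its invariants are known, the indirect calls can be eliminated and the invariants folded into constants.

#include <stdio.h>

/* Illustrative sketch only: step_t, path_t, add_header, and the header
 * sizes are assumptions, not taken from the cited work.  A path's
 * processing function is modeled as a sequence of per-layer steps, each
 * parameterized by an invariant. */

typedef struct step {
    int (*fn)(int len, int inv);   /* one layer's processing step     */
    int  inv;                      /* that layer's invariant argument */
} step_t;

typedef struct path {
    int    nsteps;
    step_t step[4];
} path_t;

/* A trivial per-layer step: account for a header of 'inv' bytes. */
static int add_header(int len, int inv) { return len + inv; }

/* Generic interpretation: one indirect call per module boundary. */
static int path_process(const path_t *p, int len)
{
    for (int i = 0; i < p->nsteps; i++)
        len = p->step[i].fn(len, p->step[i].inv);
    return len;
}

/* Collapsed version synthesized for a known UDP/IP/Ethernet path: the
 * loop, the indirect calls, and the invariant lookups are gone, and the
 * constants (8 + 20 + 14 header bytes) can be folded by the compiler. */
static int path_process_collapsed(int len)
{
    return len + 8 + 20 + 14;
}

int main(void)
{
    path_t p = { 3, { { add_header, 8 }, { add_header, 20 },
                      { add_header, 14 } } };
    printf("generic:   %d\n", path_process(&p, 1000));
    printf("collapsed: %d\n", path_process_collapsed(1000));
    return 0;
}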

2.4.1.2 Integrated Layer Processing

For this technique (see Section 1.4.1.2) to be applicable, it is necessary to know the sequence of data-processing steps that a network packet will follow. The path model can trivially support this kind of application since the sequence of modules being traversed is known and fixed for the lifetime of a path. Combining paths with the language-based ILP approach presented by Abbott and Peterson [1] should therefore make ILP a truly practical technique.
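
As a minimal illustration of ILP in this setting, the C sketch below fuses two data-manipulation passes, a copy and a 16-bit ones-complement checksum, that a layered implementation would run separately into a single traversal of the packet data. The function names and packet contents are hypothetical; the sketch shows only the loop fusion that knowledge of the module sequence makes possible.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Separate passes, as a layered implementation would perform them. */
static uint16_t copy_then_checksum(uint16_t *dst, const uint16_t *src, size_t n)
{
    uint32_t sum = 0;
    memcpy(dst, src, n * sizeof *src);        /* pass 1: copy     */
    for (size_t i = 0; i < n; i++)            /* pass 2: checksum */
        sum += dst[i];
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Integrated layer processing: copy and checksum in one traversal,
 * touching each word of the packet exactly once. */
static uint16_t copy_and_checksum(uint16_t *dst, const uint16_t *src, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        dst[i] = src[i];
        sum   += src[i];
    }
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    uint16_t src[8] = { 1, 2, 3, 4, 5, 6, 7, 8 }, dst[8];
    printf("layered:    %04x\n", copy_then_checksum(dst, src, 8));
    printf("integrated: %04x\n", copy_and_checksum(dst, src, 8));
    return 0;
}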

2.4.1.3 PathIDs

The PathID approach (see Section 1.4.1.3) consists of a combination of two techniques: a mechanism to efficiently find the path for a network packet and a highly sophisticated partial evaluator, namely a human being. The issue of how paths are located is not part of the path model proper, but as will be discussed in Section 3.4, a solution to this problem is needed in any path-based system, and as such, the PathID technique is applicable. More interesting to the path model is the way the hand-optimized (partially evaluated) code is employed. This could be done by specifying a path transformation rule that matches the sequence of network protocols that was manually optimized. If a path contains a sequence for which hand-optimized code exists, the old (unoptimized) code in the path can be replaced with this manually optimized version.
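
A path transformation rule of this kind might be represented roughly as in the following C sketch, in which a rule pairs a module-name sequence with a hand-optimized replacement function and, on an exact match, splices that function into the path. The types and names (path_t, xform_rule_t, apply_rule) are assumptions made for illustration only, not the mechanism of the cited work.

#include <string.h>
#include <stdio.h>

typedef void (*process_fn)(void *pkt);

typedef struct path {
    const char *module[8];   /* module names, in traversal order */
    int         nmodules;
    process_fn  process;     /* current processing function      */
} path_t;

typedef struct xform_rule {
    const char *pattern[8];  /* module sequence to match */
    int         npattern;
    process_fn  optimized;   /* hand-tuned replacement   */
} xform_rule_t;

/* Apply a rule: on an exact sequence match, splice in the optimized code. */
static int apply_rule(path_t *p, const xform_rule_t *r)
{
    if (p->nmodules != r->npattern)
        return 0;
    for (int i = 0; i < p->nmodules; i++)
        if (strcmp(p->module[i], r->pattern[i]) != 0)
            return 0;
    p->process = r->optimized;
    return 1;
}

static void generic_process(void *pkt) { (void)pkt; puts("generic code"); }
static void tcp_ip_eth_fast(void *pkt) { (void)pkt; puts("hand-optimized code"); }

int main(void)
{
    path_t       path = { { "TCP", "IP", "ETH" }, 3, generic_process };
    xform_rule_t rule = { { "TCP", "IP", "ETH" }, 3, tcp_ip_eth_fast };

    apply_rule(&path, &rule);
    path.process(NULL);      /* now runs the hand-optimized version */
    return 0;
}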

2.4.1.4 Single-Copy TCP/IP

As far as the path model is concerned, the single-copy TCP/IP technique discussed in Section 1.4.1.4 is almost identical to PathIDs. Again, unoptimized code is replaced by highly tuned, manually written code.

2.4.1.5 Summary

The four examples discussed above all depend on being able to associate optimized code with a particular path. They differ in the type of partial evaluation they apply, but fundamentally they are quite similar in that they all exploit the linear structure that is often present in performance-critical code. The path model as defined is therefore ideally suited to this kind of code-related optimization.

2.4.2 Resource Management

The second kind of benefit that the path model affords relates to improved resource management. A discussion of three applications that exploit this follows.

2.4.2.1 Fbufs

Fbufs [29] are a path-oriented buffer management mechanism designed to efficiently move data across multiple modules that are in different protection domains. This technique depends on knowing the sequence of modules that will be traversed by a data-item. In addition, it requires a provision for path-specific memory allocators. Both requirements can be accommodated easily in the proposed path model.
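
The following C sketch shows one way a path-specific allocator could look: each path owns a small buffer pool from which its buffers are drawn, mirroring the fbuf idea that buffer memory is dedicated to a particular module sequence and can therefore be mapped into all of the protection domains that the path traverses. The structure, names, and sizes are illustrative assumptions, not the actual fbuf implementation.

#include <stddef.h>
#include <stdio.h>

#define POOL_BUFS  4
#define BUF_SIZE   2048

typedef struct pool {
    char buf[POOL_BUFS][BUF_SIZE];
    int  free[POOL_BUFS];           /* 1 = buffer available */
} pool_t;

typedef struct path {
    pool_t pool;                    /* path-specific buffer pool */
} path_t;

/* Allocate a buffer from the path's own pool. */
static void *path_buf_alloc(path_t *p)
{
    for (int i = 0; i < POOL_BUFS; i++)
        if (p->pool.free[i]) {
            p->pool.free[i] = 0;
            return p->pool.buf[i];
        }
    return NULL;                    /* pool exhausted */
}

/* Return a buffer to the pool it came from. */
static void path_buf_free(path_t *p, void *buf)
{
    for (int i = 0; i < POOL_BUFS; i++)
        if (buf == p->pool.buf[i])
            p->pool.free[i] = 1;
}

int main(void)
{
    path_t path = { .pool = { .free = { 1, 1, 1, 1 } } };
    void *b = path_buf_alloc(&path);
    printf("allocated %p from the path's pool\n", b);
    path_buf_free(&path, b);
    return 0;
}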

2.4.2.2 Migrating/Distributed Threads

Migrating (distributed) threads [19, 45, 37] address the anonymity of processing that often poses problems in modular systems: typically, when data enters a new module, it is no longer known on whose behalf the data is being processed. Since all execution in the path model occurs in the context of some path, paths can serve the same purpose as distributed threads. In essence, this application uses a path as an account that can be charged for the resources (e.g., memory or CPU cycles) consumed as part of the data processing.
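
A minimal sketch of this accounting idea follows; it assumes hypothetical names (path_t, current_path, charge_memory, charge_cycles) and simply shows a lower-layer module charging its costs to whichever path invoked it, without knowing which application that path belongs to.

#include <stdio.h>

typedef struct path {
    long bytes_charged;    /* memory charged to this path     */
    long cycles_charged;   /* CPU cycles charged to this path */
} path_t;

/* The path on whose behalf the current code is executing. */
static path_t *current_path;

static void charge_memory(long bytes)  { current_path->bytes_charged  += bytes; }
static void charge_cycles(long cycles) { current_path->cycles_charged += cycles; }

/* A lower-layer module charges its work to the invoking path. */
static void ip_output(void)
{
    charge_memory(20);      /* e.g., header it prepended          */
    charge_cycles(350);     /* e.g., estimated cost of processing */
}

int main(void)
{
    path_t p = { 0, 0 };
    current_path = &p;
    ip_output();
    printf("path charged: %ld bytes, %ld cycles\n",
           p.bytes_charged, p.cycles_charged);
    return 0;
}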

2.4.2.3 Segregation of Work

The dataflow view of paths is important when building systems that must offer distinct qualities of service (QoS) to different data streams (applications). In a modular system, lower layers often mix processing of different data streams. This multiplexing makes it difficult to provide differentiated service. Paths force a segregation of work on a per-path basis. Thus, as long as each service class is represented by a separate path, even lower layer modules can easily distinguish between the needs of different streams and provide service accordingly. For example, when processing data with a realtime constraint, the deadline by which the data needs to be processed could be associated with the path. This makes the deadline accessible and visible to all modules along the path as well as to the path scheduler itself.
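
The C sketch below illustrates this with a hypothetical earliest-deadline-first path scheduler: because each path carries its own deadline attribute, the scheduler (and any module along the path) can consult it directly. All names and numbers are illustrative assumptions, not a description of an actual scheduler.

#include <stdio.h>

typedef struct path {
    const char *name;
    long        deadline;   /* absolute deadline, e.g. in microseconds */
    int         runnable;
} path_t;

/* Earliest-deadline-first selection over a set of paths; returns NULL
 * if no path is currently runnable. */
static path_t *pick_next(path_t *paths, int n)
{
    path_t *best = NULL;
    for (int i = 0; i < n; i++)
        if (paths[i].runnable &&
            (best == NULL || paths[i].deadline < best->deadline))
            best = &paths[i];
    return best;
}

int main(void)
{
    path_t paths[] = {
        { "video",    33000, 1 },   /* frame due in 33 ms             */
        { "audio",    10000, 1 },   /* sample due in 10 ms            */
        { "bulk-ftp",     0, 0 },   /* no deadline, not runnable here */
    };
    path_t *next = pick_next(paths, 3);
    if (next != NULL)
        printf("schedule path '%s' (deadline %ld)\n", next->name, next->deadline);
    return 0;
}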

