
Ph.D. Theses

Efficient Large-Scale Computer and Network Models Using Optimistic Parallel Simulation

By Garrett R. Yaun
Advisor: Christopher D. Carothers
June 8, 2005

Modeling and simulation are valuable tools in the analysis of large-scale networks and computer systems. Because such models are complex and compute-intensive, conservative parallel simulation is often employed to reduce runtime. Optimistic simulation has previously been viewed as outside the performance envelope for such models. However, with the advent of a technique called reverse computation, the memory requirements for benchmark models have been dramatically reduced.

In this thesis, we demonstrate how reverse computation allows large-scale simulation models to achieve greater scalability and performance. The models consist of network protocols and distributed computer system applications.

Within these models, reverse computation was central to achieving performance gains and dispelling the view that optimistic techniques operate outside the performance envelope. These are the first real-world models to leverage reverse computation and demonstrate its efficiency. Our TCP model executed 5.5 million packets per second, 5.14 times PDNS's packet rate of 1.07 million for the same large-scale network scenario. This experiment was performed across a distributed cluster of 32 nodes, using one processor per node.

Observations made while building these models led to the development of the reverse memory subsystem and the idea of shared event data. The subsystem makes models easier to implement and lets them use dynamic memory, permitting an overall reduction in memory compared to models restricted to statically pre-allocated memory. Shared event data reduces the amount of duplicate information carried in events; our experiments show significant memory reductions when there is a high degree of redundant data.

Taken together, these contributions enable real-world, large-scale models to be efficiently developed and executed in an optimistic parallel simulation framework.

* Return to main PhD Theses page
