Despite the advances Ns has made in flexibility and ease of use, it
is not adequate for simulating large-scale networks.
Consider the following: for Ns to simulate a simple network in which
512 source nodes send UDP packets to 512 sink nodes over duplex links
configured for drop-tail queueing, it must allocate almost 180MB of
RAM, and it processes events at a rate of only about 1000 per
second. For large-scale networks, we require performance 1000 to
100000 times greater.
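For concreteness, the scenario above corresponds to an Ns (OTcl) script along the following lines. This is a minimal sketch under stated assumptions: each source-sink pair is given its own duplex link, and the 1Mb bandwidth, 10ms delay, and constant-bit-rate traffic source are illustrative choices not specified above.

    set ns [new Simulator]
    for {set i 0} {$i < 512} {incr i} {
        # one source and one sink node per flow
        set src($i) [$ns node]
        set sink($i) [$ns node]
        # duplex link with drop-tail queueing; bandwidth/delay are assumed
        $ns duplex-link $src($i) $sink($i) 1Mb 10ms DropTail
        # UDP agent at the source, null agent absorbing packets at the sink
        set udp($i) [new Agent/UDP]
        set null($i) [new Agent/Null]
        $ns attach-agent $src($i) $udp($i)
        $ns attach-agent $sink($i) $null($i)
        $ns connect $udp($i) $null($i)
        # constant-bit-rate application generates the UDP packets
        set cbr($i) [new Application/Traffic/CBR]
        $cbr($i) attach-agent $udp($i)
        $ns at 0.0 "$cbr($i) start"
    }
    $ns at 10.0 "exit 0"
    $ns run

Even this modest configuration of 1024 nodes drives Ns to the memory and event-rate figures cited above.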
One might believe that, given ever-faster microprocessors, any sequential simulator's performance will scale adequately. That is not the case. In fact, faster microprocessors exacerbate the performance limitations of network simulation: as the processors in the networks being modeled get faster, they generate more and more packets, so the simulation workload grows at the same pace as the hardware that runs the simulator. It is therefore imperative that the networking community invest in the construction of efficient, scalable tools that are not only capable of addressing today's modeling problems, but will also continue to improve along with microprocessor technology and enable the time-efficient simulation of ever larger networks as the need arises. To accomplish this goal, we believe parallel and distributed simulation techniques hold the key.