Thursday, September 18, 2008

Fundamental Design Issues for the Future Internet (Shenker '94)

The internet has always been structured to give "best-effort" service: it delivers packets in the most efficient or fair manner from the protocol-design point of view. However, the author argues that the best network design should be sensitive to application requirements, which differ from application to application. The overall performance of the network is measured as the sum of the utility functions of the individual applications, where each utility function takes delivery delay as its parameter (i.e. each app has a different sensitivity to delay).
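This framing can be sketched in a few lines of Python. The specific utility curves below are my own illustrative assumptions; the paper treats the utility functions abstractly:

```python
# Overall network performance as the sum of per-application utilities
# of delivery delay. The curve shapes are illustrative assumptions.

def elastic_utility(delay):
    # An elastic app (e.g. file transfer) degrades gracefully with delay.
    return 1.0 / (1.0 + delay)

def realtime_utility(delay, deadline=0.1):
    # A hard real-time app gets no value once its deadline is missed.
    return 1.0 if delay <= deadline else 0.0

def total_performance(apps):
    # V = sum over apps i of U_i(delay_i)
    return sum(utility(delay) for utility, delay in apps)

apps = [(elastic_utility, 0.5), (realtime_utility, 0.05)]
print(total_performance(apps))  # ~1.67: 1/1.5 for the elastic app + 1.0
```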

The way to maximize the total utility is to extend the existing "best-effort" service model by incorporating other kinds of services. For example, the paper mentions elastic, hard real-time, delay-adaptive real-time, and rate-adaptive real-time services. There are potentially more, and they may depend on parameters other than delay.
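The real-time classes can be contrasted as utility curves as well. The exact shapes and parameters below are my assumptions for the sake of the sketch, since the paper characterizes these classes only qualitatively:

```python
# Illustrative utility curves for the real-time service classes.
# Shapes and parameters are assumptions, not taken from the paper.

def hard_realtime(delay, deadline=0.1):
    # Hard real-time: a cliff at the deadline.
    return 1.0 if delay <= deadline else 0.0

def delay_adaptive(delay, deadline=0.1):
    # Delay-adaptive real-time (e.g. audio with an adaptive playout
    # point): value degrades gradually past the deadline, no cliff.
    return max(0.0, 1.0 - max(0.0, delay - deadline) / deadline)

def rate_adaptive(rate, desired_rate=1.0):
    # Rate-adaptive real-time: utility depends on the delivered rate,
    # which the app can scale down (e.g. by switching codecs).
    return min(rate, desired_rate) / desired_rate

print(hard_realtime(0.15))   # 0.0 -- deadline missed, no value at all
print(delay_adaptive(0.15))  # 0.5 -- partial value past the deadline
print(rate_adaptive(0.7))    # 0.7 -- proportional to delivered rate
```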

There is a simple analysis suggesting how to extend the service model. It shows that having multiple homogeneous service networks is less efficient than having a single heterogeneous service network. This seems counter-intuitive at first: one would expect more optimization opportunities in a homogeneous setting (i.e. more assumptions can be made). In terms of RT and non-RT scheduling, this suggests that the two actually complement each other. For example, datagram service can fill up the bandwidth left over by the RT traffic.
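A toy calculation, with made-up numbers rather than the paper's analysis, shows why sharing can beat partitioning: elastic (datagram) traffic can soak up whatever bandwidth fluctuating real-time traffic leaves idle, while a partitioned design must dedicate the RT peak to the RT subnetwork:

```python
# Toy comparison of a shared link versus partitioned links.
# All numbers are arbitrary illustrative values.

CAPACITY = 10.0  # total link capacity (arbitrary units)

# Real-time demand fluctuates over time slots; RT traffic has priority.
rt_demand = [6.0, 2.0, 8.0, 1.0, 5.0]

# Partitioned design: reserve the RT peak for a dedicated RT network,
# leaving a fixed remainder for elastic traffic in every slot.
rt_share = max(rt_demand)  # 8.0 reserved
elastic_partitioned = [CAPACITY - rt_share] * len(rt_demand)

# Shared design: elastic traffic fills whatever RT leaves over per slot.
elastic_shared = [CAPACITY - d for d in rt_demand]

print(sum(elastic_partitioned))  # 10.0 units of elastic capacity
print(sum(elastic_shared))       # 28.0 -- strictly more on the same link
```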

The choice between admission control and overprovisioning is a concern for future network design. On one hand, relying on overprovisioning alone means application performance suffers whenever load exceeds the provisioned capacity. Having admission control, on the other hand, is nice because one can avoid potentially all overloading (or congestion) by turning away flows the network cannot carry.

I like the paper. It presents the big ideas for future network design. Although the analysis is quite simplistic, it does illustrate the design issues.
