This paper describes a series of enhancements to late-1980s TCP designed to deal with congestion collapse. The author, Van Jacobson, has improved TCP performance in many ways (e.g., he also developed TCP/IP Header Compression). The techniques described here are significant and are said to have saved the Internet from becoming unusable at the time. The root cause of the congestion problem is the network's failure to reach equilibrium. Three techniques are proposed: slow-start, round-trip-time (RTT) variance estimation, and a congestion avoidance algorithm.
Slow-start requires new or restarting connections to begin with a small window and grow it gradually (doubling each round trip) up to saturation. RTT variance estimation gives better tolerance for high-delay connections, so that spurious retransmissions are minimized (and the network is not pushed further into congestion). Finally, their congestion avoidance adjusts the window size with additive increase and multiplicative decrease.
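The RTT estimator from the paper can be sketched briefly. The gains of 1/8 and 1/4 and the use of the mean deviation (with a multiplier of 4 on the deviation term) follow Jacobson's scheme; the function and variable names here are my own, not from the paper.

```python
def make_rto_estimator(alpha=1/8, beta=1/4, k=4):
    """Jacobson-style retransmit-timeout estimator (illustrative sketch)."""
    srtt = None    # smoothed round-trip time
    rttvar = None  # smoothed mean deviation of the RTT

    def update(sample):
        nonlocal srtt, rttvar
        if srtt is None:
            # First measurement: seed the estimator.
            srtt, rttvar = sample, sample / 2
        else:
            # Exponentially weighted averages of mean and deviation.
            rttvar = (1 - beta) * rttvar + beta * abs(sample - srtt)
            srtt = (1 - alpha) * srtt + alpha * sample
        # Timeout tracks the mean plus a multiple of the deviation,
        # so high-variance paths get a larger, safer timeout.
        return srtt + k * rttvar

    return update
```

Because the timeout grows with the measured deviation rather than a fixed multiple of the mean, a connection over a long or jittery path waits longer before retransmitting, which is exactly the "minimize spurious retransmission" goal described above.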
It was quite clever to make the congestion signals implicit rather than a bit in the header. A timeout tells an endpoint to back off, or to restart after exceeding a certain threshold, while a successful ACK triggers a window increase. All of this keeps the protocol simple and nearly stateless. However, does that mean the window size is constantly adjusting? A major focus of the paper is reaching equilibrium, yet constantly probing upward and then chopping the window in half does seem a little wasteful. Toward the end, the paper briefly discusses keeping the drop rate low, which was not entirely clear.
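The ACK/timeout-driven dynamics above can be modeled as a tiny per-round state update: slow-start doubles the window up to a threshold, congestion avoidance then adds one segment per round, and a timeout halves the threshold and restarts from one segment. This is a toy per-round model, not the paper's per-ACK implementation, and the names are illustrative.

```python
def on_round(cwnd, ssthresh, timeout):
    """Return the next (cwnd, ssthresh) after one round trip."""
    if timeout:
        # Multiplicative decrease: remember half the window as the
        # new threshold, then restart slow-start from one segment.
        return 1, max(cwnd // 2, 2)
    if cwnd < ssthresh:
        # Slow-start: exponential growth toward the threshold.
        return min(cwnd * 2, ssthresh), ssthresh
    # Congestion avoidance: additive increase of one segment.
    return cwnd + 1, ssthresh
```

Running this loop shows the sawtooth the review questions: the window ramps up, overshoots, is halved, and ramps up again, so the "constant adjusting" is the price of probing for capacity with only implicit signals.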
The paper should be kept on the list. It shows many features we see in today's TCP, and it also illustrates the impact of those features (e.g., what the network is like without them).