The paper describes a conceptual queuing model for network gateways called fair queuing (FQ), which aims to protect users' loads from one another at a router. It proposes holding different users' streams in separate queues and transmitting from them in round-robin fashion. FQ is contrasted with first-come-first-serve (FCFS) in terms of fairness, throughput, efficiency (i.e., number of dropped packets), and promptness (i.e., average RTT). The two are simulated in rather unrealistic but isolated (and therefore understandable) scenarios.
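The per-user round-robin idea can be sketched as a toy scheduler (a minimal illustration, not the paper's actual gateway implementation; the flow IDs and packet strings are hypothetical):

```python
from collections import OrderedDict, deque

class RoundRobinFQ:
    """Toy fair queue: one FIFO per conversation, served round-robin."""

    def __init__(self):
        self.queues = OrderedDict()  # flow id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        # Take one packet from the flow at the front of the rotation,
        # then move that flow to the back so every flow gets a turn.
        while self.queues:
            flow_id, q = self.queues.popitem(last=False)
            if q:
                pkt = q.popleft()
                if q:
                    self.queues[flow_id] = q  # re-insert at the back
                return pkt
        return None

fq = RoundRobinFQ()
fq.enqueue("A", "a1")
fq.enqueue("A", "a2")
fq.enqueue("B", "b1")
print([fq.dequeue(), fq.dequeue(), fq.dequeue()])  # a1, b1, a2
```

Note how flow B's single packet is sent before A's second one: a heavy sender cannot crowd out a light one, which is the protection FCFS lacks.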
FQ is found to be fairer across users. The bulk of the paper goes into defining what is "fair" and what counts as a "user", and varying these definitions turns out to have very different implications. For example, fairness by packet count favors users sending large packets over those sending small ones. The paper defines fairness in terms of bit-by-bit (rather than packet-by-packet) transmission ratios between users, and defines a user as a source-destination conversation.
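The bit-by-bit definition can be emulated per packet: each packet gets a finish number (roughly, when its last bit would leave under bit-by-bit round robin), and the gateway sends the packet with the smallest finish number first. A simplified sketch of this idea, assuming all packets arrive at the same instant so the round number stays zero (the real scheme advances the round number as bits are served):

```python
import heapq

def fq_order(arrivals):
    """Order packets as a fair-queuing gateway would send them.

    arrivals: list of (flow_id, packet_length) all arriving together.
    Each packet's finish number F = (previous F of its flow) + length;
    smallest F is transmitted first, ties broken by arrival order.
    """
    last_finish = {}
    heap = []
    for seq, (flow, length) in enumerate(arrivals):
        finish = last_finish.get(flow, 0) + length
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, seq, flow, length))
    return [(flow, length) for _, _, flow, length in
            (heapq.heappop(heap) for _ in range(len(heap)))]

# Flow A sends big packets, flow B small ones; FQ interleaves them
# by bits served rather than by packet count.
print(fq_order([("A", 100), ("A", 100), ("B", 50), ("B", 50)]))
# -> [('B', 50), ('A', 100), ('B', 50), ('A', 100)]
```

Under a packet-count notion of fairness, A and B would alternate one-for-one and A would get twice B's bandwidth; the finish-number ordering keeps the bit shares even.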
FQ rewards well-behaved users. Ill-behaved users end up overflowing their own queues, increasing their number of retransmissions. This means the flow-control algorithm at the end hosts is encouraged to play nice and not create congestion, since misbehaving hosts only hurt themselves. The paper shows that the Jacobson-Karels (JK) flow control, with slow start plus AIMD, outperforms less well-behaved flow controls. But what counts as nice enough, the paper doesn't say...
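For reference, the slow-start + AIMD behavior rewarded here can be traced with a toy window simulation (a schematic sketch with made-up parameters, not the paper's simulator; real TCP-style senders react to ACKs and timeouts, not fixed "rounds"):

```python
def aimd_trace(loss_rounds, rounds):
    """Congestion-window sizes per round under slow start + AIMD.

    loss_rounds: set of round indices where a loss is detected.
    """
    cwnd, ssthresh = 1.0, 64.0
    trace = []
    for r in range(rounds):
        trace.append(cwnd)
        if r in loss_rounds:
            ssthresh = max(1.0, cwnd / 2)  # remember half the window
            cwnd = ssthresh                # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # additive increase
    return trace

print(aimd_trace(loss_rounds={4}, rounds=8))
# -> [1.0, 2.0, 4.0, 8.0, 16.0, 8.0, 9.0, 10.0]
```

The point the paper makes is that under FQ this polite backing-off is self-interested: a sender that kept its window large through a loss would just fill and overflow its own queue.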
One thing FQ adds is a trust boundary between gateways and end hosts: a misbehaving end host cannot influence others. It separates flow control (at the end hosts) from queuing (at the gateway). In effect, gateways become "smarter" and can resolve congestion problems on their own.