> On 4 Feb, 2019, at 12:42 am, David P. Reed <dpr...@deepplum.com> wrote:
> 
> This fairy story about traffic giving way to higher priority traffic being a 
> normal mode of operation is just that. A made up story, largely used by folks 
> who want to do selective pricing based on what customers are willing to pay, 
> not on value received.

Honestly, I can believe that selective pricing was the original motivation 
behind Diffserv (and the older TOS definition).  I think it probably originated 
from the way telegrams were handled at the time.

In telegraph networks, there was a real cost to handling each message, because 
traffic volume correlated directly with the number of human operators involved. 
 The telegraph network was therefore perpetually congested, as the operating 
companies sought to balance costs against revenue.

In modern packet-switched networks, it's traffic *capacity* which bears a 
*capital* cost; actually carrying traffic adds very little in running and 
maintenance costs.  That's where the difference lies: in a packet-switched 
network, selective pricing leads to perverse incentives, in that inducing 
congestion can raise revenue without raising costs - while delivering poorer 
service.

Hence the present fight over Net Neutrality and, perhaps, the resistance of 
hardware manufacturers to properly tackling bufferbloat.  After all, if 
congestion doesn't lead to obvious signs of poor performance, there is less 
incentive to buy newer and faster network hardware and/or service plans.

> When Mother's Day happens, you should have enough capacity to absorb vast 
> demand. Therefore what you do all the other days doesn't matter. And on 
> Mother's Day, if you have congestion, there's no way in hell that anybody is 
> happy.

You're clearly looking at this from a core-network perspective - which has its 
place.  My philosophy on core and backhaul networks is that they should be 
sized so that congestion is rare, and they should *also* degrade gracefully 
when congestion does in fact occur.  Application of simple AQM algorithms, to 
keep latency low and spread the pain more-or-less fairly, seems sufficient 
there.
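
To illustrate what I mean by "simple", here's a compressed sketch of the CoDel 
control law in C.  It is greatly simplified from RFC 8289 - fixed 5ms target 
and 100ms interval, no minimum-packet checks, no count decay between dropping 
episodes - so treat it as a sketch of the idea rather than a deployable 
implementation:

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET   5000ULL    /* tolerable standing delay: 5 ms (in us) */
    #define INTERVAL 100000ULL  /* sliding window: 100 ms (in us)         */

    struct codel {
        uint64_t first_above_time;  /* when delay first stayed above TARGET */
        uint64_t drop_next;         /* time of the next scheduled drop      */
        uint32_t count;             /* drops in this dropping episode       */
        bool     dropping;
    };

    /* Control law: the drop interval shrinks as 1/sqrt(count), applying
     * gradually increasing pressure until the queue drains. */
    static uint64_t control_law(uint64_t t, uint32_t count)
    {
        return t + (uint64_t)(INTERVAL / sqrt((double)count));
    }

    /* Decide whether the packet dequeued at time 'now', which spent
     * 'sojourn' microseconds in the queue, should be dropped. */
    bool codel_should_drop(struct codel *c, uint64_t now, uint64_t sojourn)
    {
        if (sojourn < TARGET) {
            /* Queue delay is acceptable; stand down. */
            c->first_above_time = 0;
            c->dropping = false;
            return false;
        }
        if (c->first_above_time == 0) {
            /* Delay just crossed TARGET; give it one INTERVAL to recover. */
            c->first_above_time = now + INTERVAL;
            return false;
        }
        if (!c->dropping && now >= c->first_above_time) {
            /* Delay has stayed above TARGET for a full INTERVAL. */
            c->dropping  = true;
            c->count     = 1;
            c->drop_next = control_law(now, c->count);
            return true;
        }
        if (c->dropping && now >= c->drop_next) {
            c->count++;
            c->drop_next = control_law(c->drop_next, c->count);
            return true;
        }
        return false;
    }

Note how little per-queue state is involved, which is part of why I think this 
sort of thing is feasible on backhaul links.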

Last-mile links have different characteristics; they spend long periods 
completely idle, and also significant periods completely saturated.  That's 
because they don't have the high degree of statistical multiplexing that you 
see deeper in the network.  This is, therefore, where putting intelligence into 
the network has maximum advantage - hence the all-singing, all-dancing design 
of Cake.

Consumer ISPs tend to run their backhaul networks much fuller on average than 
the core.  Some are congested as much as 50% of the time; most are reliably 
congested at certain times of the day or week, and performance degrades 
noticeably when exceptional demand occurs (such as a sporting event or 
newsworthy disaster, or simply the release of a blockbuster AAA game).  
Whatever the reason, these networks are *not* prepared for Mother's Day.

Diffserv, as currently specified, does not reliably help.  It is too rarely 
implemented - by consumer ISPs, CPE vendors and applications alike - to gain 
much traction.  Cake's interpretation is starting to trickle into CPE, but it's 
too soon to tell whether that'll have much practical effect.
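
Application support, at least, is cheap to add - marking a socket's traffic is 
a one-line call on most platforms.  A minimal sketch for an IPv4 socket on a 
POSIX system; the choice of CS1 in the example is mine, purely for 
illustration:

    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>

    /* Request a DSCP marking on a socket's outgoing IPv4 traffic.
     * The DSCP occupies the upper six bits of the old TOS byte.
     * (IPv6 uses the IPV6_TCLASS option at IPPROTO_IPV6 instead.) */
    int set_dscp(int sock, int dscp)
    {
        int tos = dscp << 2;
        return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof tos);
    }

    /* Example: mark a bulk-transfer socket low-priority (CS1, DSCP 8). */
    /* set_dscp(sock, 8); */

Of course, the marking only matters if some middlebox actually honours it - 
which is precisely the chicken-and-egg problem above.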

And you're right that strict priority is the wrong approach - it's too easy to 
abuse.  That's why Cake doesn't do strict priority, and actively tries to avoid 
starving any class of traffic.

In my view, there are just four classes of traffic with global meaning, though 
additional classes may have local meaning:

 - High Throughput; what we now call Best Effort.  Latency and packet loss are 
relatively minor concerns for this traffic.

 - Low Latency; throughput and packet loss are less important than immediacy.  
Attempting high throughput in this class should be penalised, so that under 
congested conditions it achieves less overall throughput than High Throughput 
would have; this forms a natural incentive to choose the correct class for 
each type of traffic.

 - High Reliability; packet loss would hurt performance more than latency or 
throughput limitations.  This might be suitable for simple request-response 
protocols such as DNS.  Long queues with fairness metrics are appropriate here, 
with a similar throughput handicap as Low Latency.

 - Low Priority; this traffic volunteers to "get out of the way" when 
congestion occurs.  It receives a relatively small guaranteed throughput under 
congested conditions, but may fill otherwise-unused capacity when the situation 
eases.

If networks were widely designed to respect and preserve these four traffic 
classes - which could be represented in a two-bit codepoint rather than the 
present six bits - I'm certain that applications would start to use them 
appropriately.  Expanding to three bits would allow representing locally-valid 
traffic classes, such as Network Control and Isochronous Audio/Video, which 
might reasonably have strict priority over the global classes but which Should 
Not be sent over the core network, and Should be interpreted as Low Priority 
by any device not specifically configured to understand them.
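
To make that concrete, here's a hypothetical decoding of such a codepoint in 
C.  The class names and bit layout are purely my own illustration - no 
standard assigns codepoints this way:

    /* Hypothetical two-bit global traffic classes, as described above.
     * The layout is illustrative only; no standard assigns it. */
    enum traffic_class {
        TC_LOW_PRIORITY     = 0,  /* yields under congestion         */
        TC_HIGH_THROUGHPUT  = 1,  /* today's Best Effort             */
        TC_HIGH_RELIABILITY = 2,  /* loss-sensitive request/response */
        TC_LOW_LATENCY      = 3,  /* immediacy over throughput       */
    };

    /* With a third bit, codepoints 4-7 would carry locally-valid
     * classes (e.g. Network Control, Isochronous A/V).  A device not
     * specifically configured to understand them treats them as Low
     * Priority, so they "get out of the way" by default. */
    enum traffic_class classify(unsigned codepoint)
    {
        if (codepoint < 4)
            return (enum traffic_class)codepoint;
        return TC_LOW_PRIORITY;
    }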

 - Jonathan Morton
