On Feb 28, 2012, at 10:04 AM, Paul Vixie wrote:

> On 2/28/2012 5:57 PM, Nicholas Weaver wrote:
>> But since if you could coalesce you could do parallel, this savings doesn't 
>> help latency much at all: just the transit time for 50B out and 50B back.  
>> If anything, parallel will be BETTER on the latency since batching would 
>> probably require a coalesced response, while parallel is just that, 
>> parallel, so if the first record is more useful, it can be acted on 
>> immediately.
> 
> parallel (what i called "blast") is a great solution for things that don't
> run at scale. but the lock-step serialization that we get from the MX
> approach (where the A/AAAA RRset may be in the additional data section
> and so we have to wait for the first answer before we ask the second or
> third question) has a feedback loop effect on rate limiting. it's not as
> good as tcp windowing but it does tend to avoid WRED in core and edge
> router egress buffers.
> 
> all i'm saying is, we have to be careful about too much parallelism
> since UDP unlike TCP has no windowing of its own.
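
The latency trade-off in the MX example above can be sketched with a toy model. This is only an illustration of the lock-step vs. blast pattern; the round-trip time and the two-query MX-then-A sequence are assumptions, not measurements:

```python
# Toy model of the two query strategies discussed above.
# RTT is an assumed round-trip time to the resolver, in seconds.
RTT = 0.020

def serialized_latency(n_queries, rtt=RTT):
    """Lock-step: each query waits for the previous answer (e.g. ask
    for MX, then ask for the A/AAAA of the exchange name), so total
    latency grows linearly with the number of dependent queries."""
    return n_queries * rtt

def parallel_latency(n_queries, rtt=RTT):
    """Blast: all queries leave at once, so latency is one RTT, and
    the first useful answer can be acted on as soon as it arrives."""
    return rtt

# MX lookup followed by a dependent A lookup:
print(serialized_latency(2))  # two round trips
print(parallel_latency(2))    # one round trip
```

The point of the lock-step version is not latency but self-clocking: each round trip is implicit feedback that paces the sender, which is what the blast version gives up.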

We don't need to be "careful" here until we're talking about ~10 KB or more of 
data in a single transaction with no interactions, because below that point 
TCP has the same dynamics due to its initial window size (with browsers opening 
4+ connections!), and that doesn't seem to bother anyone.
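A back-of-envelope calculation makes the ~10 KB figure concrete. All the constants here are assumptions for illustration: a typical 1460-byte MSS, an RFC 3390 initial window (3 segments at that MSS), a browser opening 4 concurrent connections, and the ~50-byte query size from earlier in the thread:

```python
# How much data a browser's TCP stacks can emit before receiving any
# congestion feedback, vs. a burst of parallel UDP DNS queries.
MSS = 1460          # bytes per TCP segment (assumed typical value)
IW_SEGMENTS = 3     # RFC 3390 initial window at this MSS (pre-RFC 6928)
CONNECTIONS = 4     # parallel browser connections (assumed)

# Bytes in flight before the first ACK, i.e. with zero feedback:
tcp_unacked = MSS * IW_SEGMENTS * CONNECTIONS

# A blast of 100 parallel DNS queries at ~50 bytes each:
QUERY_SIZE = 50
dns_burst = 100 * QUERY_SIZE

print(tcp_unacked)  # bytes TCP sends "blind"
print(dns_burst)    # bytes the UDP blast sends
```

Under these assumptions the DNS blast is well under what the browser's TCP connections already inject with no feedback loop at all, which is the sense in which TCP "has the same dynamics" at these sizes.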

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop