On Thu, May 29, 2014 at 4:40 PM, Michael Richardson wrote:
>
> David P. Reed wrote:
> > ECN-style signaling has the right properties ... just like TTL it can
> > provide
>
> How would you send these signals?
>
> > A Bloom style filter can remember flow statistics for both of these
>
Good points...
On May 29, 2014, Michael Richardson wrote:
>
>David P. Reed wrote:
>> ECN-style signaling has the right properties ... just like TTL it can
>> provide
>
>How would you send these signals?
>
>> A Bloom style filter can remember flow statistics for both of these local
>> po
David P. Reed wrote:
> ECN-style signaling has the right properties ... just like TTL it can
> provide
How would you send these signals?
> A Bloom style filter can remember flow statistics for both of these local
> policies. A great use for the memory no longer misapplied to
The problem is that without co-existing well with existing stacks (and
especially misbehaving stacks), you are not talking about something that will
ever be usable in real life.
Unless I am mixing things up, RED and its variants are a perfect example of
this. If everyone on the networ
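As a rough illustration of the "Bloom style filter" idea quoted above (approximate per-flow statistics held in a fixed, small amount of memory), here is a minimal sketch in Python; the structure, sizes, and names are assumptions made for illustration, not details from Reed's mail.

    # Hedged sketch: approximate per-flow packet counts in a fixed-size array of
    # counters, Bloom-filter style. Sizes, the hash choice, and the FlowStats name
    # are illustrative assumptions only.
    import hashlib

    class FlowStats:
        def __init__(self, slots=1 << 20, hashes=4):
            self.slots = slots            # number of 8-bit counters (about 1 MB here)
            self.hashes = hashes          # positions touched per flow
            self.counters = bytearray(slots)

        def _indexes(self, flow_key: bytes):
            for i in range(self.hashes):
                digest = hashlib.blake2b(flow_key, digest_size=8, salt=bytes([i])).digest()
                yield int.from_bytes(digest, "big") % self.slots

        def record_packet(self, flow_key: bytes):
            # Saturating increment of every counter this flow maps to.
            for idx in self._indexes(flow_key):
                if self.counters[idx] < 255:
                    self.counters[idx] += 1

        def estimate(self, flow_key: bytes) -> int:
            # The true per-flow count is at most the minimum of its counters.
            return min(self.counters[idx] for idx in self._indexes(flow_key))

    stats = FlowStats()
    key = b"10.0.0.1:443->10.0.0.2:51512/tcp"   # hypothetical 5-tuple encoding
    for _ in range(3):
        stats.record_packet(key)
    print(stats.estimate(key))                   # 3, possibly more under heavy aliasing

The point is only that a box can keep useful per-flow state in memory that does not grow with the number of flows, which is presumably what the "memory no longer misapplied to" remark is pointing at.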
Note: this is all about "how to achieve and sustain the ballistic phase that is
optimal for Internet transport" in an end-to-end based control system like TCP.
I think those who have followed this know that, but I want to make it clear
that I'm proposing a significant improvement that requires
ECN-style signaling has the right properties ... just like TTL it can provide
valid and current sampling of the packet's environment as it travels. The
idea is to sample what is happening at a bottleneck for the packet's flow.
The bottleneck is the link with the most likelihood of a collisi
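To make "sample what is happening at a bottleneck" concrete, here is a minimal sketch of the forwarding-side half: a queue that sets an ECN CE-style mark once its standing occupancy exceeds a packet or two, so endpoints get a current sample of the most congested link the packet actually crossed. The threshold and the class names are assumptions, not details from the thread.

    # Hedged sketch: mark (rather than drop) when the standing queue exceeds a
    # couple of packets, so the flow's bottleneck announces itself in-band.
    # MARK_THRESHOLD and the Packet/BottleneckQueue names are illustrative.
    from collections import deque
    from dataclasses import dataclass

    MARK_THRESHOLD = 2            # packets; the 1-2 packet standing queue from the thread

    @dataclass
    class Packet:
        flow: str
        ce: bool = False          # ECN "Congestion Experienced" style bit

    class BottleneckQueue:
        def __init__(self):
            self.q = deque()

        def enqueue(self, pkt):
            if len(self.q) >= MARK_THRESHOLD:
                pkt.ce = True     # tell the endpoints: you are crossing a bottleneck now
            self.q.append(pkt)

        def dequeue(self):
            return self.q.popleft() if self.q else None

    q = BottleneckQueue()
    for _ in range(4):
        q.enqueue(Packet(flow="A"))
    print([p.ce for p in q.q])    # [False, False, True, True]

The receiver echoing that mark back to the sender is what would give the "valid and current sampling" property, in the same spirit as a decremented TTL describing the path the packet really took.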
On Wed, 28 May 2014, dpr...@reed.com wrote:
I did not mean that "pacing". Sorry I used a generic term. I meant what my
longer description described - a specific mechanism for reducing bunching that
is essentially "cooperative" among all active flows through a bottlenecked
link. That's part
Ok, I am not understanding your proposal then.
I thought you were claiming that since the optimum buffer length is 1-2 packets,
the endpoints should be adjusting their sending speeds to try and make that
happen on all switches and routers in the path.
The endpoints do know what their latency
Interesting conversation. A particular switch has no idea of the "latency
budget" of a particular flow - so it cannot have its *own* latency budget.
The switch designer has no choice but to assume that his latency budget is near
zero.
The number of packets that should be sustained in flig
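The preview cuts off there, but the number of packets to sustain in flight is presumably the usual bandwidth-delay-product arithmetic; here it is with example numbers of my own choosing.

    # Hedged sketch: packets in flight needed to keep a bottleneck busy while
    # holding its standing queue near 1-2 packets. Rate, RTT, and MTU are example
    # values, not figures from the thread.

    def packets_in_flight(link_bps, rtt_s, mtu_bytes=1500):
        bdp_bytes = link_bps / 8 * rtt_s      # bandwidth-delay product
        return bdp_bytes / mtu_bytes

    # 100 Mbit/s bottleneck, 20 ms round trip:
    print(round(packets_in_flight(100e6, 0.020)))   # about 167 packets,
    # nearly all of which should be on the wire, not sitting in a queue.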
Same concern I mentioned with Jim's message. I was not clear what I meant by
"pacing" in the context of optimization of latency while preserving throughput.
It is NOT just a matter of spreading packets out in time that I was talking
about. It is a matter of doing so without reducing throug
I did not mean that "pacing". Sorry I used a generic term. I meant what my
longer description described - a specific mechanism for reducing bunching that
is essentially "cooperative" among all active flows through a bottlenecked
link. That's part of a "closed loop" control system driving eac
On Tue, 27 May 2014, Dave Taht wrote:
On Tue, May 27, 2014 at 4:27 PM, David Lang wrote:
On Tue, 27 May 2014, Dave Taht wrote:
There is a phrase in this thread that is begging to bother me.
"Throughput". Everyone assumes that throughput is a big goal - and it
certainly is - and latency is a
On Tue, May 27, 2014 at 4:27 PM, David Lang wrote:
> On Tue, 27 May 2014, Dave Taht wrote:
>
>> There is a phrase in this thread that is begging to bother me.
>>
>> "Throughput". Everyone assumes that throughput is a big goal - and it
>> certainly is - and latency is also a big goal - and it certa
On Tue, 27 May 2014, Dave Taht wrote:
There is a phrase in this thread that is begging to bother me.
"Throughput". Everyone assumes that throughput is a big goal - and it
certainly is - and latency is also a big goal - and it certainly is -
but by specifying what you want from "throughput" as a
There is a phrase in this thread that is begging to bother me.
"Throughput". Everyone assumes that throughput is a big goal - and it
certainly is - and latency is also a big goal - and it certainly is -
but by specifying what you want from "throughput" as a compromise with
latency is not the right
The problem is that paths change, they mix traffic from streams, and in other
ways the utilization of the links can change radically in a short amount of
time.
If you try to limit things to exactly the ballistic throughput, you are not
going to be able to exactly maintain this state, you are e
This has been a good thread, and I'm sorry it was mostly on
cerowrt-devel rather than the main list...
It is not clear from observing google's deployment that pacing of the
IW is not in use. I see clear 1ms boundaries for individual flows on
much lower than iw10 boundaries. (e.g. I see 1-4 packets
On Sun, May 25, 2014 at 4:00 PM, wrote:
> Not that it is directly relevant, but there is no essential reason to
> require 50 ms. of buffering. That might be true of some particular
> QOS-related router algorithm. 50 ms. is about all one can tolerate in any
> router between source and destinatio
Codel and PIE are excellent first steps... but I don't think they are the best
eventual approach. I want to see them deployed ASAP in CMTS's and server load
balancing networks... it would be a disaster to not deploy the far better
option we have today immediately at the point of most leverage.
On Mon, 26 May 2014, dpr...@reed.com wrote:
I would look to queue minimization rather than "queue management" (which
implies queues are often long) as a goal, and think harder about the
end-to-end problem of minimizing total end-to-end queueing delay while
maximizing throughput.
As far as I
On Monday, May 26, 2014 9:02am, "Mikael Abrahamsson" said:
> So, I'd agree that a lot of the time you need very little buffers, but
> stating you need a buffer of 2 packets deep regardless of speed, well, I
> don't see how that would work.
>
My main point is that looking to increased buffe
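For a sense of scale behind the objection being quoted, a fixed two-packet buffer corresponds to very different amounts of time at different link rates; a quick back-of-the-envelope with rates I picked for illustration:

    # Hedged sketch: time to drain a fixed 2 x 1500 byte buffer at various rates.
    # The rates are example values chosen for illustration.
    BUFFER_BYTES = 2 * 1500

    for label, bps in [("1 Mbit/s", 1e6), ("10 Mbit/s", 10e6),
                       ("100 Mbit/s", 100e6), ("1 Gbit/s", 1e9)]:
        drain_ms = BUFFER_BYTES * 8 / bps * 1000
        print(f"{label:>10}: {drain_ms:8.3f} ms to drain 2 packets")
    # 24 ms at 1 Mbit/s versus 0.024 ms at 1 Gbit/s: the same packet count
    # absorbs very different amounts of arrival jitter depending on speed.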
On Mon, 26 May 2014, dpr...@reed.com wrote:
Len Kleinrock and his student proved that the "optimal" state for
throughput in the internet is the 1-2 buffer case. It's easy to think
this through...
Yes, but how do we achieve it?
If you signal congestion with very small buffer depth used, TCP
Len Kleinrock and his student proved that the "optimal" state for throughput in
the internet is the 1-2 buffer case. It's easy to think this through...
A simple intuition is that each node that qualifies as a "bottleneck" (meaning
that the input rate exceeds the service rate of the outbound q
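One way to make the "easy to think this through" step explicit is Kleinrock's power metric, throughput divided by delay. For a simple M/M/1 model of the bottleneck, power peaks exactly where the average number of packets at the node is 1; the M/M/1 choice is my illustration, not a restatement of the proof being cited.

    # Hedged sketch: Kleinrock "power" (throughput / delay) for an M/M/1 queue with
    # service rate mu = 1. Power = lambda * (mu - lambda), which peaks at rho = 0.5,
    # where the average number of packets in the system, rho / (1 - rho), equals 1.
    mu = 1.0
    power, rho = max(
        (rho * mu * (mu - rho * mu), rho)
        for rho in (i / 1000 for i in range(1, 1000))
    )
    print(f"power peaks near rho = {rho:.3f}, "
          f"avg packets at the bottleneck = {rho / (1 - rho):.2f}")
    # power peaks near rho = 0.500, avg packets at the bottleneck = 1.00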
On Sun, 25 May 2014, dpr...@reed.com wrote:
The optimum buffer state for throughput is 1-2 packets worth - in other
words, if we have an MTU of 1500, 1500 - 3000 bytes. Only the bottleneck
No, the optimal state for throughput is to have huge buffers and have them
filled. The optimal state for
Not that it is directly relevant, but there is no essential reason to require
50 ms. of buffering. That might be true of some particular QOS-related router
algorithm. 50 ms. is about all one can tolerate in any router between source
and destination for today's networks - an upper-bound rather
On Sun, May 25, 2014 at 11:39 AM, Sebastian Moeller wrote:
> Hi Dane,
>
>
> On May 25, 2014, at 08:17 , Dane Medic wrote:
>
>> Is it true that devices with less than 64 MB can't handle QOS? ->
>> https://lists.chambana.net/pipermail/commotion-dev/2014-May/001816.html
>
> I think this mea
Hi Dane,
On May 25, 2014, at 08:17 , Dane Medic wrote:
> Is it true that devices with less than 64 MB can't handle QOS? ->
> https://lists.chambana.net/pipermail/commotion-dev/2014-May/001816.html
I think this means that the commotion developers think that 64MB are
required. But it d
On Sun, 25 May 2014, Dane Medic wrote:
Is it true that devices with less than 64 MB can't handle QOS? ->
https://lists.chambana.net/pipermail/commotion-dev/2014-May/001816.html
At gig speeds you need around 50ms worth of buffering. 1 gigabit/s =
125 megabyte/s meaning for 50ms you need 6.25 m
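Spelling out the arithmetic that gets cut off at the end of that message:

    # Hedged sketch: the classic buffer = rate x delay sizing quoted above.
    rate_bps = 1e9                         # 1 gigabit/s
    delay_s = 0.050                        # 50 ms
    buffer_bytes = rate_bps / 8 * delay_s
    print(f"{buffer_bytes / 1e6:.2f} MB")  # 6.25 MB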
On Sun, 25 May 2014 08:17:47 +0200, Dane Medic said:
> Is it true that devices with less than 64 MB can't handle QOS? ->
> https://lists.chambana.net/pipermail/commotion-dev/2014-May/001816.html
I'm not going to give one post on a list very much credence, especially when
it doesn't contain a sing
Is it true that devices with less than 64 MB can't handle QOS? ->
https://lists.chambana.net/pipermail/commotion-dev/2014-May/001816.html