On 02/21/13 20:20, Sepherosa Ziehau wrote:
> On Wed, Feb 20, 2013 at 11:59 AM, Lawrence Stewart wrote:
>> Hi Sephe,
>>
>> On 02/20/13 13:37, Sepherosa Ziehau wrote:
>>> On Wed, Feb 20, 2013 at 9:46 AM, Lawrence Stewart wrote:
On Tuesday, February 19, 2013 9:37:54 pm Sepherosa Ziehau wrote:
> John,
>
> I came across this draft several days ago, you may be interested:
> http://tools.ietf.org/html/draft-ietf-tcpm-newcwv-00
Yes, that is extremely relevant. My application does use its own
rate-limiting. And now that I've
On Wed, Feb 20, 2013 at 9:46 AM, Lawrence Stewart wrote:
> *crickets chirping*
>
> Time to move this discussion forward...
>
>
> If any robust counter-arguments exist, now is the time for us to hear
> them. I haven't read anything thus far which convinces me that we should
> not provide knobs to t
.. and I should say, "competing / parallel" congestion algorithms. Ie
- how multiple CC's work for/against each other on the same "internet"
at the same time.
Adrian
On 13 February 2013 02:27, Andre Oppermann wrote:
> Again I'd like to point out that this sort of modification should
> be implemented as a congestion control module. All the hook points
> are already there and can readily be used instead of adding more special
> cases to the generic part of TCP
On 13.02.2013 15:26, Lawrence Stewart wrote:
On 02/13/13 21:27, Andre Oppermann wrote:
On 13.02.2013 09:25, Lawrence Stewart wrote:
The idea is useful. I'd just like to discuss the implementation
specifics a little further before recommending whether the patch should
go in as is to provide a st
FYI I've read the whole thread as of this reply and plan to follow up to
a few of the other posts separately, but first for my initial thoughts...
On 01/23/13 07:11, John Baldwin wrote:
> As I mentioned in an earlier thread, I recently had to debug an issue we were
> seeing across a link with a h
On 11.02.2013 19:56, Adrian Chadd wrote:
On 11 February 2013 03:18, Andre Oppermann wrote:
In general Google does provide quite a bit of data with their experiments
showing that it isn't harmful and that it helps the case.
Smaller RTO (1s) has become a RFC so there was very broad consensus in
On 2/11/13 3:18 PM, Andre Oppermann wrote:
>
> Smaller RTO (1s) has become a RFC so there was very broad consensus in
> TCPM that is a good thing. We don't have it yet because we were not fully
> compliant in one case (loss of first segment). I've fixed that a while
> back and will bring 1s RTO
On 05.02.2013 22:40, John Baldwin wrote:
On Tuesday, February 05, 2013 12:44:27 pm Andre Oppermann wrote:
I would prefer to encapsulate it into its own not-so-much-congestion-management
algorithm so you can eventually do other tweaks as well like more aggressive
loss recovery which would fit you
On 09.02.2013 15:41, Alfred Perlstein wrote:
However, the end result must be far different than what has occurred so far.
If the code was deemed unacceptable for general inclusion, then we must find a
way to provide a
light framework to accomplish the needs of the community member.
We've got
On Feb 10, 2013, at 11:36, Andrey Zonov wrote:
> Google made many many TCP tweaks. Increased initial window, small RTO,
> enabled ignore after idle and others. They published that, other people
> just blindly applied these tunings and the Internet still works.
MANY people are experimenting with
On 2/10/13 9:05 AM, Kevin Oberman wrote:
>
> This is a subject rather near to my heart, having fought battles with
> congestion back in the dark days of Windows when it essentially
> defaulted to TCPIGNOREIDLE. It was a huge pain, but it was the only
> way Windows did TCP in the early days. It sim
I'm somewhat sympathetic to the purity of TCP. Nevertheless...
On 02/10/2013 16:05, Kevin Oberman wrote:
[..]
What I would like to see is a way to have it available, but make it
unlikely to be enabled except in a way that would put up flashing red
warnings and sound sirens to warn peopl
On Feb 10, 2013, at 6:05, Kevin Oberman wrote:
> One idea that popped into my head (and may be completely ridiculous)
> is to make its availability dependent on a kernel option and have a
> warning in NOTES about it contravening normal and accepted practice
> and that it can cause serious problems b
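Kevin's suggestion might look something like the fragment below. The option name and wording are invented for illustration; no such option exists in the tree:

```
# Hypothetical NOTES/kernel-config fragment (option name invented):
options TCP_DANGEROUS_IGNOREIDLE   # Disables TCP idle window restart.
                                   # Contravenes accepted congestion
                                   # control practice; can harm shared
                                   # networks.  Read NOTES before use.
```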
John:
A burst at line rate will *often* cause drops. This is because
router queues are of a finite size. Also such a burst (especially
on a long delay bandwidth network) causes your RTT to increase even
if there is no drop, which is going to hurt you as well.
A SHOULD in an RFC says you really real
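To make Randall's point concrete, here is a back-of-envelope sketch of a line-rate burst hitting a router with a finite queue. All figures (packet size, rates, queue depth) are assumptions for illustration, not numbers from the thread:

```python
# Back-of-envelope model of a line-rate burst into a finite router queue:
# it shows both effects Randall describes - drops when the queue
# overflows, and RTT inflation from the backlog that does fit.

def burst_into_queue(burst_pkts, pkt_bytes, in_bps, out_bps, queue_pkts):
    """Return (packets dropped, extra queueing delay in ms at peak)."""
    t_burst = burst_pkts * pkt_bytes * 8 / in_bps      # burst arrival time
    drained = out_bps * t_burst / (8 * pkt_bytes)      # pkts sent meanwhile
    backlog = burst_pkts - drained                     # pkts needing queue space
    dropped = max(0.0, backlog - queue_pkts)
    queued = min(backlog, queue_pkts)
    delay_ms = queued * pkt_bytes * 8 / out_bps * 1000 # time to drain backlog
    return dropped, delay_ms

# A 300-packet burst at 1 Gbps into a 100 Mbps link with a 100-packet queue:
dropped, delay = burst_into_queue(300, 1500, 1e9, 100e6, 100)
print(round(dropped), round(delay, 1))   # 170 packets dropped, ~12 ms added
```

With these assumed numbers, more than half the burst is dropped and the surviving backlog adds about 12 ms of queueing delay, i.e. the RTT rises even for the packets that get through.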
On Wednesday, January 30, 2013 12:26:17 pm Andre Oppermann wrote:
> You can simply create your own congestion control algorithm with only the
> restart window changed. See (pseudo) code below. BTW, I just noticed that
> the other cc algos don't reset the idle window.
*sigh* I am fully co
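Andre's actual pseudo code is elided by the snippet truncation above. As a rough illustration only (not his code): FreeBSD's pluggable cc(9) framework lets a module supply an after_idle() hook, and the whole debate reduces to what that one hook does with cwnd. A minimal model, with an assumed initial window of 10 segments:

```python
# Sketch of the decision an after_idle() congestion control hook makes.
# The function names model FreeBSD cc(9) hook semantics but are plain
# Python, not the kernel API; IW_SEGMENTS is an assumed initial window.

IW_SEGMENTS = 10

def newreno_after_idle(cwnd, mss):
    """Conventional restart (per RFC 5681): collapse cwnd to the
    restart window RW = min(IW, cwnd) after an idle period."""
    return min(cwnd, IW_SEGMENTS * mss)

def ignore_idle_after_idle(cwnd, mss):
    """The variant debated in this thread: keep the pre-idle cwnd,
    permitting a line-rate burst on restart."""
    return cwnd

mss = 1448
cwnd = 200 * mss  # window built up before the connection went idle
print(newreno_after_idle(cwnd, mss) // mss)      # 10 segments
print(ignore_idle_after_idle(cwnd, mss) // mss)  # 200 segments
```

Packaging only this difference as its own cc module, as Andre suggests, keeps the special case out of the generic TCP code path.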
On 1/30/13 11:58 AM, John Baldwin wrote:
On Tuesday, January 29, 2013 6:07:22 pm Andre Oppermann wrote:
Yes, unfortunately I do object. This option, combined with the inflated
CWND at the end of a burst, effectively removes much, if not all, of the
congestion control mechanisms originally put
On Thursday, January 24, 2013 11:14:40 am John Baldwin wrote:
> > > Agree, a per-socket option could be more useful than global sysctls in
> > > certain situations. However, in addition to the per-socket option,
> > > could global sysctl nodes to disable idle_restart/idle_cwv help too?
> >
> > No. Th
As I mentioned in an earlier thread, I recently had to debug an issue we were
seeing across a link with a high bandwidth-delay product (both high bandwidth
and high RTT). Our specific use case was to use a TCP connection to reliably
forward a latency-sensitive datagram stream across a WAN conne
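John's scenario is the classic long fat pipe: the window needed to keep such a link busy is the bandwidth-delay product, so collapsing cwnd after every idle period is costly. A small sketch with assumed figures (100 Mbit/s, 100 ms RTT, 1448-byte MSS; none of these numbers are from the thread):

```python
# Why high bandwidth-delay product links are hurt by idle restart:
# the pipe needs roughly bandwidth * RTT of data in flight, and slow
# start only doubles cwnd about once per RTT when reopening it.

import math

def bdp_segments(bandwidth_bps, rtt_s, mss=1448):
    """Bandwidth-delay product expressed in MSS-sized segments."""
    return bandwidth_bps * rtt_s / (8 * mss)

segments = bdp_segments(100e6, 0.100)
print(round(segments))               # ~863 segments must be in flight

# Restarting from an assumed 10-segment restart window, slow start
# takes roughly log2(BDP / RW) round trips to refill the pipe:
print(math.ceil(math.log2(segments / 10)))   # ~7 RTTs (~700 ms here)
```

Several RTTs of reduced rate after every pause is exactly the penalty a latency-sensitive stream over a WAN cannot afford, which is what motivates the knob being debated.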