On Sun, 2018-02-18 at 22:49 +0100, Oleksandr Natalenko wrote:
> Hi.
>
> > On Sunday, 18 February 2018 22:04:27 CET Eric Dumazet wrote:
> > I was able to take a look today, and I believe this is the time to
> > switch TCP to GSO being always on.
> >
> > As a bonus, we get speed boost for cubic as well.
Hi.
On Sunday, 18 February 2018 22:04:27 CET Eric Dumazet wrote:
> I was able to take a look today, and I believe this is the time to
> switch TCP to GSO being always on.
>
> As a bonus, we get speed boost for cubic as well.
>
> Today's high BDP and recent TCP improvements (rtx queue as rb-tree, sac
On Sun, 2018-02-18 at 13:04 -0800, Eric Dumazet wrote:
>
> Can you please test the following patch ?
>
> Note that some cleanups can be done later in TCP stack, removing lots
> of legacy stuff.
>
> Also TCP internal-pacing could benefit from something similar to this
> fq patch eventually, altho
On Sat, 2018-02-17 at 10:52 -0800, Eric Dumazet wrote:
>
> This must be some race condition in the code I added in TCP for self-
> pacing, when a short timeout is programmed.
>
> Disabling SG means TCP cooks 1-MSS packets.
>
> I will take a look, probably after the (long) week-end : Tuesday.
I w
On Sat, 2018-02-17 at 11:01 +0100, Oleksandr Natalenko wrote:
> Hi.
>
> > On Friday, 16 February 2018 23:59:52 CET Eric Dumazet wrote:
> > Well, no effect here on e1000e (1 Gbit) at least
> >
> > # ethtool -K eth3 sg off
> > Actual changes:
> > scatter-gather: off
> > tx-scatter-gather: off
> > tcp-se
Hi.
On Friday, 16 February 2018 23:59:52 CET Eric Dumazet wrote:
> Well, no effect here on e1000e (1 Gbit) at least
>
> # ethtool -K eth3 sg off
> Actual changes:
> scatter-gather: off
> tx-scatter-gather: off
> tcp-segmentation-offload: off
> tx-tcp-segmentation: off [requested on]
> tx-tcp6-segmen
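For reference, a minimal sequence to toggle scatter-gather and verify which offloads actually changed; eth3 mirrors the device quoted above, substitute your own NIC:
# ethtool -K eth3 sg off                                   # disabling SG also forces TSO/GSO off
# ethtool -k eth3 | grep -E 'scatter-gather|segmentation'  # check the resulting offload state
# ethtool -K eth3 sg on                                    # restore once testing is done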
On Friday, 16 February 2018 23:50:35 CET Eric Dumazet wrote:
> /* snip */
> If you use
>
> tcptrace -R test_s2c.pcap
> xplot.org d2c_rtt.xpl
>
> Then you'll see plenty of suspect 40ms rtt samples.
That's odd, especially how uniformly they look.
> It looks like receiver misses wakeups for some rea
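A minimal capture-and-plot sequence along those lines, assuming the iperf3 default port 5201 and eth0 on the capturing host (file names follow the quote above):
# tcpdump -i eth0 -s 128 -w test_s2c.pcap port 5201   # capture the transfer under test
# tcptrace -R test_s2c.pcap                           # emit RTT-sample graphs (*_rtt.xpl)
# xplot.org d2c_rtt.xpl                               # inspect the suspect ~40 ms RTT samples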
On Fri, Feb 16, 2018 at 2:50 PM, Oleksandr Natalenko wrote:
> Hi.
>
> On Friday, 16 February 2018 21:54:05 CET Eric Dumazet wrote:
>> /* snip */
>> Something fishy really :
>> /* snip */
>> Not only the receiver suddenly adds a 25 ms delay, but also note that
>> it acknowledges all prior segments (ack
Hi.
On Friday, 16 February 2018 21:54:05 CET Eric Dumazet wrote:
> /* snip */
> Something fishy really :
> /* snip */
> Not only the receiver suddenly adds a 25 ms delay, but also note that
> it acknowledges all prior segments (ack 112949), but with a wrong ecr
> value ( 2327043753 )
> instead of 2327
On Fri, 2018-02-16 at 12:54 -0800, Eric Dumazet wrote:
> On Fri, Feb 16, 2018 at 9:25 AM, Oleksandr Natalenko wrote:
> > Hi.
> >
> > On Friday, 16 February 2018 17:33:48 CET Neal Cardwell wrote:
> > > Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> > > have not run into
On Fri, Feb 16, 2018 at 9:25 AM, Oleksandr Natalenko wrote:
> Hi.
>
> On Friday, 16 February 2018 17:33:48 CET Neal Cardwell wrote:
>> Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
>> have not run into this one in our team, but we will try to work with you to
>> fix this.
>
Hi.
On Friday, 16 February 2018 18:56:12 CET Holger Hoffstätte wrote:
> There is simply no reason why you shouldn't get approx. line rate
> (~920+-ish) Mbit over wired 1GBit Ethernet; even my broken 10-year old
> Core2Duo laptop can do that. Can you boot with spectre_v2=off and try "the
> simplest cas
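For reference, spectre_v2=off is a kernel command-line parameter; on a GRUB-based Arch install (an assumption) it can be added roughly like this:
# add spectre_v2=off to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
# grub-mkconfig -o /boot/grub/grub.cfg
# reboot and confirm with: cat /proc/cmdline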
On 02/16/18 18:25, Oleksandr Natalenko wrote:
> So, going on with two real HW hosts. They are both running the latest stock Arch
> Linux kernel (4.15.3-1-ARCH, CONFIG_PREEMPT=y, CONFIG_HZ=1000) and are
> interconnected with a 1 Gbps link (via a switch, if that matters). Using iperf3,
> running each test
Hi.
On Friday, 16 February 2018 17:25:58 CET Eric Dumazet wrote:
> The way TCP pacing works, it defaults to internal pacing using a hint
> stored in the socket.
>
> If you change the qdisc while flow is alive, result could be unexpected.
I don't change a qdisc while flow is alive. Either the VM is c
Hi.
On Friday, 16 February 2018 17:26:11 CET Holger Hoffstätte wrote:
> These are very odd configurations. :)
> Non-preempt/100 might well be too slow, whereas PREEMPT/1000 might simply
> have too much overhead.
Since the pacing is based on hrtimers, should HZ matter at all? Even if so,
poor 1 Gbps
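For completeness, whether pacing is handled by the fq qdisc or by TCP's internal hrtimer fallback can be checked and switched roughly like this (eth0 is an assumption):
# sysctl net.ipv4.tcp_congestion_control   # bbr relies on pacing being available
# tc qdisc show dev eth0                   # internal pacing is used unless fq is installed here
# tc qdisc replace dev eth0 root fq        # hand pacing over to the fq qdisc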
Hi.
On Friday, 16 February 2018 17:33:48 CET Neal Cardwell wrote:
> Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> have not run into this one in our team, but we will try to work with you to
> fix this.
>
> Would you be able to take a sender-side tcpdump trace of the slow
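A sender-side trace plus a snapshot of the live BBR state could be collected roughly like this (interface, peer address and port are assumptions):
# tcpdump -i eth0 -s 96 -w bbr_slow.pcap host 192.168.1.2 and port 5201 &
# iperf3 -c 192.168.1.2 -t 30              # reproduce the slow transfer
# ss -tin dst 192.168.1.2                  # shows bbr:(bw:...,mrtt:...), pacing_rate and cwnd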
On 02/16/18 17:56, Neal Cardwell wrote:
> On Fri, Feb 16, 2018 at 11:26 AM, Holger Hoffstätte wrote:
>>
>> BBR in general will run with lower cwnd than e.g. Cubic or others.
>> That's a feature and necessary for WAN transfers.
>
> Please note that there's no general rule about whether BBR will
Hi!
On Friday, 16 February 2018 17:45:56 CET Neal Cardwell wrote:
> Eric raises a good question: bare metal vs VMs.
>
> Oleksandr, your first email mentioned KVM VMs and virtio NICs. Your
> second e-mail did not seem to mention if those results were for bare
> metal or a VM scenario: can you please c
On Fri, Feb 16, 2018 at 11:26 AM, Holger Hoffstätte wrote:
>
> BBR in general will run with lower cwnd than e.g. Cubic or others.
> That's a feature and necessary for WAN transfers.
Please note that there's no general rule about whether BBR will run
with a lower or higher cwnd than CUBIC, Reno, o
On Fri, Feb 16, 2018 at 11:43 AM, Eric Dumazet wrote:
>
> On Fri, Feb 16, 2018 at 8:33 AM, Neal Cardwell wrote:
> > Oleksandr,
> >
> > Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> > have not run into this one in our team, but we will try to work with you to
> > fix
On Fri, Feb 16, 2018 at 8:33 AM, Neal Cardwell wrote:
> Oleksandr,
>
> Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> have not run into this one in our team, but we will try to work with you to
> fix this.
>
> Would you be able to take a sender-side tcpdump trace of th
On 02/16/18 16:15, Oleksandr Natalenko wrote:
> Hi, David, Eric, Neal et al.
>
> On Thursday, 15 February 2018 21:42:26 CET Oleksandr Natalenko wrote:
>> I've faced an issue with a limited TCP bandwidth between my laptop and a
>> server in my 1 Gbps LAN while using BBR as a congestion control mechanis
On Fri, Feb 16, 2018 at 7:15 AM, Oleksandr Natalenko wrote:
> Hi, David, Eric, Neal et al.
>
> On Thursday, 15 February 2018 21:42:26 CET Oleksandr Natalenko wrote:
>> I've faced an issue with a limited TCP bandwidth between my laptop and a
>> server in my 1 Gbps LAN while using BBR as a congestion co
Let's CC BBR folks at Google, and remove the ones that probably have no
idea.
On Thu, 2018-02-15 at 21:42 +0100, Oleksandr Natalenko wrote:
> Hello.
>
> I've faced an issue with a limited TCP bandwidth between my laptop and a
> server in my 1 Gbps LAN while using BBR as a congestion control mec
Hi, David, Eric, Neal et al.
On Thursday, 15 February 2018 21:42:26 CET Oleksandr Natalenko wrote:
> I've faced an issue with a limited TCP bandwidth between my laptop and a
> server in my 1 Gbps LAN while using BBR as a congestion control mechanism.
> To verify my observations, I've set up 2 KVM VMs
Hello.
I've faced an issue with a limited TCP bandwidth between my laptop and a
server in my 1 Gbps LAN while using BBR as a congestion control mechanism. To
verify my observations, I've set up 2 KVM VMs with the following parameters:
1) Linux v4.15.3
2) virtio NICs
3) 128 MiB of RAM
4) 2 vCPUs
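A minimal reproduction along those lines, assuming the VMs reach each other at 192.168.122.10/11:
# sysctl -w net.ipv4.tcp_congestion_control=bbr    # on both VMs (optionally pair with fq)
server$ iperf3 -s
client$ iperf3 -c 192.168.122.10 -t 30             # repeat with -R for the reverse direction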