On Fri, Jul 20, 2018 at 10:32 AM Paolo Abeni wrote:
> Thank you for the feedback. I must admit this is quite the opposite
> direction from what I have attempted so far. I'll try that.
> Thanks.
> Still, for IPv6 it will require a little more work inside fq_codel.
IPv6 packets would use nf_ct_frag6_gather()
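For context, nf_ct_frag6_gather() is the netfilter IPv6 reassembly entry
point. A hedged sketch of how a defrag step might call it; defrag_ipv6() is
a hypothetical wrapper, and the signature matches 2018-era kernels:

#include <linux/skbuff.h>
#include <net/ipv6.h>
#include <net/netfilter/ipv6/nf_defrag_ipv6.h>

/* Hypothetical wrapper around the real reassembly helper. It keeps
 * the skb and returns -EINPROGRESS until the last fragment arrives,
 * then returns 0 with skb holding the complete datagram. */
static int defrag_ipv6(struct net *net, struct sk_buff *skb)
{
	int err = nf_ct_frag6_gather(net, skb, IP6_DEFRAG_LOCAL_DELIVER);

	if (err == -EINPROGRESS)
		return 0;	/* fragment queued; do not touch skb again */
	return err;		/* 0 on a completed datagram, <0 on error */
}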
On Fri, 2018-07-20 at 08:58 -0700, Eric Dumazet wrote:
> On Fri, Jul 20, 2018 at 7:48 AM Paolo Abeni wrote:
> >
> > Hi,
> >
> > On Mon, 2018-07-09 at 05:50 -0700, Eric Dumazet wrote:
> > > On 07/09/2018 04:39 AM, Eric Dumazet wrote:
> > >
> > > > Alternatively, you could try to patch fq_codel to drop all frags of one
> > > > UDP datagram instead of just a few of them.
On Fri, Jul 20, 2018 at 7:48 AM Paolo Abeni wrote:
>
> Hi,
>
> On Mon, 2018-07-09 at 05:50 -0700, Eric Dumazet wrote:
> > On 07/09/2018 04:39 AM, Eric Dumazet wrote:
> >
> > > Alternatively, you could try to patch fq_codel to drop all frags of one
> > > UDP datagram instead of just a few of them.
Hi,
On Mon, 2018-07-09 at 05:50 -0700, Eric Dumazet wrote:
> On 07/09/2018 04:39 AM, Eric Dumazet wrote:
>
> > Alternatively, you could try to patch fq_codel to drop all frags of one
> > UDP datagram instead of just a few of them.
>
> A first step would be to make sure fq_codel_hash() (using skb_get_hash(skb))
> selects the same bucket for all frags of a datagram :/
On 07/09/2018 04:39 AM, Eric Dumazet wrote:
> Alternatively, you could try to patch fq_codel to drop all frags of one
> UDP datagram instead of just a few of them.
A first step would be to make sure fq_codel_hash() (using skb_get_hash(skb))
selects the same bucket for all frags of a datagram :/
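A hedged sketch of that first step, for IPv4 only: hash fragments on the
fields every fragment of a datagram shares (saddr, daddr, protocol, IP ID),
since only the first fragment carries L4 ports and skb_get_hash() would
otherwise spread the frags across buckets. frag_aware_hash() and the zero
seed are illustrative, not the upstream fix; IPv6 would need the same
treatment keyed on the fragment extension header's ID, which is the extra
fq_codel work Paolo mentions above.

#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/jhash.h>
#include <linux/skbuff.h>
#include <net/ip.h>

static u32 frag_aware_hash(struct sk_buff *skb)
{
	if (skb->protocol == htons(ETH_P_IP)) {
		const struct iphdr *iph = ip_hdr(skb);

		/* ip_is_fragment() is true for the first fragment too
		 * (MF set), so the whole datagram hashes identically. */
		if (ip_is_fragment(iph))
			return jhash_3words((__force u32)iph->saddr,
					    (__force u32)iph->daddr,
					    (u32)iph->protocol << 16 |
						ntohs(iph->id),
					    0 /* illustrative seed */);
	}
	return skb_get_hash(skb);
}

Once all frags share a bucket, the second step (dropping every frag of a
chosen datagram instead of just a few of them) becomes a per-bucket walk.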
On 07/09/2018 04:34 AM, Eric Dumazet wrote:
> and number of tx queues?
>
> You seem to self-inflict losses on the sender, and that is terrible for the
> (convoluted) stress test you want to run.
>
> I use mq + fq : no losses on the sender.
>
> Do not send patches to solve a problem that does
On 07/09/2018 02:43 AM, Paolo Abeni wrote:
> On Fri, 2018-07-06 at 07:20 -0700, Eric Dumazet wrote:
>> I will test/polish it later; I am coming back from vacation and have a
>> backlog.
>>
>> Here are my results (note that I have _not_ changed
>> /proc/sys/net/ipv4/ipfrag_time):
>>
>> lpaa6:~# grep . /proc/sys/net/ipv4/ipfrag_* ; grep FRAG /proc/net/sockstat
On Fri, 2018-07-06 at 07:20 -0700, Eric Dumazet wrote:
> I will test/polish it later; I am coming back from vacation and have a
> backlog.
>
> Here are my results (note that I have _not_ changed
> /proc/sys/net/ipv4/ipfrag_time):
>
> lpaa6:~# grep . /proc/sys/net/ipv4/ipfrag_* ; grep FRAG /proc/net/sockstat
On 07/06/2018 06:56 AM, Paolo Abeni wrote:
> With:
>
> schedule_work_on(smp_processor_id(), #... )
>
> We can be sure to run exclusively on the CPU handling the RX queue, even
> with the worker.
>
Make sure to test your patch with 16 RX queues (16 CPUs) feeding the defrag
unit.
In this (co
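For reference, a minimal sketch of the quoted schedule_work_on() idea, with
hypothetical names (defrag_work_fn, kick_defrag_worker); schedule_work_on()
and DECLARE_WORK() are the stock workqueue APIs:

#include <linux/smp.h>
#include <linux/workqueue.h>

static void defrag_work_fn(struct work_struct *work)
{
	/* walk and evict expired fragment queues here (omitted) */
}

static DECLARE_WORK(defrag_work, defrag_work_fn);

static void kick_defrag_worker(void)
{
	/* smp_processor_id() is only stable while preemption is off;
	 * in the NAPI/softirq path where fragments arrive, it is. */
	schedule_work_on(smp_processor_id(), &defrag_work);
}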
On 07/06/2018 06:56 AM, Paolo Abeni wrote:
> On Fri, 2018-07-06 at 05:09 -0700, Eric Dumazet wrote:
>> On 07/06/2018 04:56 AM, Paolo Abeni wrote:
>>> With your setting, you need somewhat more concurrent connections (400?)
>>> to saturate the ipfrag cache. Above that number, performance will
>>> still sink.
On Fri, 2018-07-06 at 05:09 -0700, Eric Dumazet wrote:
> On 07/06/2018 04:56 AM, Paolo Abeni wrote:
> > With your setting, you need somewhat more concurrent connections (400?)
> > to saturate the ipfrag cache. Above that number, performance will
> > still sink.
>
> Maybe, but IP defrag cannot be '
On 07/06/2018 04:56 AM, Paolo Abeni wrote:
> Hi,
>
> On Fri, 2018-07-06 at 04:23 -0700, Eric Dumazet wrote:
>> Ho hum. No please.
>>
>> I do not think adding back a GC is wise, since my patches were going in the
>> direction of allowing us to increase limits on current hardware.
>>
>> Meaning that the amount of frags to evict would be quite big u
Hi,
On Fri, 2018-07-06 at 04:23 -0700, Eric Dumazet wrote:
> Ho hum. No please.
>
> I do not think adding back a GC is wise, since my patches were going in the
> direction of allowing us to increase limits on current hardware.
>
> Meaning that the amount of frags to evict would be quite big u
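To make the objection concrete, here is a hedged sketch of what "adding back
a GC" implies, with hypothetical types and helpers loosely modeled on the
pre-4.17 evictor: the eviction loop must run until memory drops below a low
threshold, so the bigger the limits, the bigger the batch it frees while the
host is already overloaded.

#include <linux/list.h>
#include <linux/types.h>

struct frag_queue {			/* hypothetical */
	struct list_head lru_list;
	long truesize;
};

static void free_frag_queue(struct frag_queue *fq)
{
	/* free the queued fragments and the queue itself (omitted) */
}

static unsigned int frag_evict(struct list_head *lru, long *mem,
			       long low_thresh)
{
	struct frag_queue *fq, *tmp;
	unsigned int evicted = 0;

	list_for_each_entry_safe(fq, tmp, lru, lru_list) {
		if (*mem <= low_thresh)
			break;
		/* the larger high_thresh - low_thresh is, the longer
		 * this loop runs, exactly when the host is overloaded */
		list_del(&fq->lru_list);
		*mem -= fq->truesize;
		free_frag_queue(fq);
		evicted++;
	}
	return evicted;
}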
On 07/06/2018 03:10 AM, Paolo Abeni wrote:
> Currently, the IP frag cache is fragile under overload. With
> flow control disabled:
>
> ./super_netperf.sh 10 -H 192.168.101.2 -t UDP_STREAM -l 60
> 9618.08
> ./super_netperf.sh 200 -H 192.168.101.2 -t UDP_STREAM -l 60
> 28.66
>
> Once the overload condition is reached, the system does not recover until it
Currently, the IP frag cache is fragile under overload. With
flow control disabled:
./super_netperf.sh 10 -H 192.168.101.2 -t UDP_STREAM -l 60
9618.08
./super_netperf.sh 200 -H 192.168.101.2 -t UDP_STREAM -l 60
28.66
Once the overload condition is reached, the system does not
recover until it
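The shape of the fragility: admission to the frag cache is a pure
memory-threshold check, so under sustained overload every arriving fragment
is dropped, including the missing pieces of datagrams already queued;
nothing completes, and memory drains only via the ipfrag_time timer. A
hedged sketch of that check, with names following the 2018-era struct
netns_frags; this is illustrative, not the exact upstream code path:

#include <net/inet_frag.h>

static bool frag_cache_admit(struct netns_frags *nf)
{
	/* above high_thresh, new fragments are refused outright */
	return frag_mem_limit(nf) <= nf->high_thresh;
}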