On Tue, 2016-05-24 at 13:50 +0000, David Laight wrote:
> From: Jesper Dangaard Brouer
> > Sent: 20 May 2016 18:50
> ...
> > It would be cool if you could run a test with removed busylock and
> > allow HTB to bulk dequeue.
>
> (Without having looked)
> Could you have two queues and separate queue and dequeue locks.
From: Jesper Dangaard Brouer
> Sent: 20 May 2016 18:50
...
> It would be cool if you could run a test with removed busylock and
> allow HTB to bulk dequeue.
(Without having looked)
Could you have two queues and separate queue and dequeue locks.
The enqueue code would acquire the enqueue lock
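David's suggestion is essentially the classic two-lock concurrent queue (Michael and Scott, 1996): producers serialize on one lock, the consumer on another, and a dummy node keeps the two ends from stepping on each other. A minimal user-space C sketch of the idea, illustrative only and not the kernel qdisc code:

#include <pthread.h>
#include <stdlib.h>

struct node {
        struct node *next;
        void *pkt;
};

struct two_lock_queue {
        pthread_mutex_t enq_lock;       /* taken by enqueuers only */
        pthread_mutex_t deq_lock;       /* taken by the dequeuer only */
        struct node *head;              /* dummy node, touched under deq_lock */
        struct node *tail;              /* last node, touched under enq_lock */
};

static void q_init(struct two_lock_queue *q)
{
        struct node *dummy = calloc(1, sizeof(*dummy));

        pthread_mutex_init(&q->enq_lock, NULL);
        pthread_mutex_init(&q->deq_lock, NULL);
        q->head = q->tail = dummy;
}

static void q_enqueue(struct two_lock_queue *q, void *pkt)
{
        struct node *n = calloc(1, sizeof(*n));

        n->pkt = pkt;
        pthread_mutex_lock(&q->enq_lock);
        q->tail->next = n;              /* link behind the current tail */
        q->tail = n;
        pthread_mutex_unlock(&q->enq_lock);
}

static void *q_dequeue(struct two_lock_queue *q)
{
        void *pkt = NULL;

        pthread_mutex_lock(&q->deq_lock);
        struct node *dummy = q->head;
        struct node *first = dummy->next;

        if (first) {                    /* non-empty: first real node */
                pkt = first->pkt;
                q->head = first;        /* first becomes the new dummy */
                free(dummy);
        }
        pthread_mutex_unlock(&q->deq_lock);
        return pkt;
}

Enqueuers and the dequeuer then never contend on a shared lock, which is the property David is after; the hard part in the qdisc context is that dequeue is not a plain FIFO pop once a classful qdisc like HTB is involved.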
On Fri, 20 May 2016 14:32:40 -0700
Eric Dumazet wrote:
> On Fri, 2016-05-20 at 19:49 +0200, Jesper Dangaard Brouer wrote:
> > On Fri, 20 May 2016 07:16:55 -0700
> > Eric Dumazet wrote:
> >
> > > Since bonding pretends to be multiqueue, TCQ_F_ONETXQUEUE is not set
> > on sch->flags when HTB is installed at the bonding device root.
On Fri, 2016-05-20 at 19:49 +0200, Jesper Dangaard Brouer wrote:
> On Fri, 20 May 2016 07:16:55 -0700
> Eric Dumazet wrote:
>
> > Since bonding pretends to be multiqueue, TCQ_F_ONETXQUEUE is not set
> > on sch->flags when HTB is installed at the bonding device root.
>
> It would be cool if you could run a test with removed busylock and allow HTB to bulk dequeue.
On Fri, 20 May 2016 07:16:55 -0700
Eric Dumazet wrote:
> Since bonding pretends to be multiqueue, TCQ_F_ONETXQUEUE is not set
> on sch->flags when HTB is installed at the bonding device root.
It would be cool if you could run a test with removed busylock and
allow HTB to bulk dequeue.
--
Best
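For readers without the source handy: bulk dequeue in kernels of this era is gated on exactly that flag. A condensed sketch of the logic in net/sched/sch_generic.c (simplified from around v4.6, requeue and GSO handling omitted; not a drop-in patch):

/* TCQ_F_ONETXQUEUE is only set when a qdisc is attached to a device
 * with a single TX queue. Bonding advertises many TX queues, so a
 * root HTB there never gets the flag and never bulks. */
static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
{
        return qdisc->flags & TCQ_F_ONETXQUEUE;
}

static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
                                   int *packets)
{
        struct sk_buff *skb = q->dequeue(q);    /* one skb, under the root lock */

        if (skb && qdisc_may_bulk(q))
                /* pull more skbs while we still hold the lock */
                try_bulk_dequeue_skb(q, skb,
                                     skb_get_tx_queue(qdisc_dev(q), skb),
                                     packets);
        return skb;
}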
On 16-05-20 06:11 AM, Eric Dumazet wrote:
> On Fri, 2016-05-20 at 09:29 +0200, Jesper Dangaard Brouer wrote:
>
>
>> The whole idea behind allowing bulk qdisc dequeue was to mitigate this,
>> by allowing dequeue to do more work, while holding the lock.
>>
>> You mention HTB. Notice HTB does not take advantage of bulk dequeue.
On Fri, 2016-05-20 at 06:47 -0700, Eric Dumazet wrote:
> On Fri, 2016-05-20 at 06:11 -0700, Eric Dumazet wrote:
> > On Fri, 2016-05-20 at 09:29 +0200, Jesper Dangaard Brouer wrote:
> >
> >
> > > The whole idea behind allowing bulk qdisc dequeue was to mitigate this,
> > > by allowing dequeue to do more work, while holding the lock.
On Fri, 2016-05-20 at 06:11 -0700, Eric Dumazet wrote:
> On Fri, 2016-05-20 at 09:29 +0200, Jesper Dangaard Brouer wrote:
>
>
> > The whole idea behind allowing bulk qdisc dequeue was to mitigate this,
> > by allowing dequeue to do more work, while holding the lock.
> >
> > You mention HTB. Notice HTB does not take advantage of bulk dequeue.
On Fri, 2016-05-20 at 09:29 +0200, Jesper Dangaard Brouer wrote:
> The whole idea behind allowing bulk qdisc dequeue was to mitigate this,
> by allowing dequeue to do more work, while holding the lock.
>
> You mention HTB. Notice HTB does not take advantage of bulk dequeue.
> Have you tried to
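Concretely, "more work while holding the lock" means chaining a byte-budgeted batch of skbs before the root lock is dropped, so the lock and driver-handoff costs are paid once per batch instead of once per packet. Condensed from try_bulk_dequeue_skb() in net/sched/sch_generic.c of this era (simplified, not a drop-in patch):

static void try_bulk_dequeue_skb(struct Qdisc *q, struct sk_buff *skb,
                                 const struct netdev_queue *txq,
                                 int *packets)
{
        /* BQL decides how many more bytes the device can absorb */
        int bytelimit = qdisc_avail_bulklimit(txq) - skb->len;

        while (bytelimit > 0) {
                struct sk_buff *nskb = q->dequeue(q);   /* still under the root lock */

                if (!nskb)
                        break;

                bytelimit -= nskb->len;
                skb->next = nskb;       /* grow the chain handed to the driver */
                skb = nskb;
                (*packets)++;           /* counted against the dequeue quota */
        }
        skb->next = NULL;
}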
On Thu, 19 May 2016 11:03:32 -0700
Alexander Duyck wrote:
> On Thu, May 19, 2016 at 10:08 AM, Eric Dumazet wrote:
> > busylock was added at the time we had expensive ticket spinlocks
> >
> > (commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527 ("net: add additional
> > lock to qdisc to increase throughput")
On Thu, 2016-05-19 at 21:49 -0700, John Fastabend wrote:
> I plan to start looking at this again in June when I have some
> more time FWIW. The last set of RFCs I sent out bypassed both the
> qdisc lock and the busy poll lock. I remember thinking this was a
> net win at the time but I only did ver
On 16-05-19 01:39 PM, Alexander Duyck wrote:
> On Thu, May 19, 2016 at 12:35 PM, Eric Dumazet wrote:
>> On Thu, 2016-05-19 at 11:56 -0700, Eric Dumazet wrote:
>>
>>> Removing busylock helped in all cases I tested. (at least on x86 as
>>> David pointed out)
>>>
>>> As I said, we need to revisit busylock now that spinlocks are different.
On Thu, May 19, 2016 at 12:35 PM, Eric Dumazet wrote:
> On Thu, 2016-05-19 at 11:56 -0700, Eric Dumazet wrote:
>
>> Removing busylock helped in all cases I tested. (at least on x86 as
>> David pointed out)
>>
>> As I said, we need to revisit busylock now that spinlocks are different.
>>
>> In one case (20 concurrent UDP netperf), I even got a 500 % increase.
On Thu, 2016-05-19 at 11:56 -0700, Eric Dumazet wrote:
> Removing busylock helped in all cases I tested. (at least on x86 as
> David pointed out)
>
> As I said, we need to revisit busylock now that spinlocks are different.
>
> In one case (20 concurrent UDP netperf), I even got a 500 % increase.
On Thu, 2016-05-19 at 11:03 -0700, Alexander Duyck wrote:
> On Thu, May 19, 2016 at 10:08 AM, Eric Dumazet wrote:
> > busylock was added at the time we had expensive ticket spinlocks
> >
> > (commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527 ("net: add additional
> > lock to qdisc to increase throughput")
On Thu, 2016-05-19 at 11:12 -0700, David Miller wrote:
> From: Eric Dumazet
> Date: Thu, 19 May 2016 10:08:36 -0700
>
> > busylock was added at the time we had expensive ticket spinlocks
> >
> > (commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527 ("net: add additional
> > lock to qdisc to increase throughput")
On 05/19/2016 11:03 AM, Alexander Duyck wrote:
On Thu, May 19, 2016 at 10:08 AM, Eric Dumazet wrote:
With HTB qdisc, here are the numbers for 200 concurrent TCP_RR, on a host with
48 hyperthreads.
...
That would be an 8 % increase.
The main point of the busy lock is to deal with the bulk t
From: Eric Dumazet
Date: Thu, 19 May 2016 10:08:36 -0700
> busylock was added at the time we had expensive ticket spinlocks
>
> (commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527 ("net: add additional
> lock to qdisc to increase throughput")
>
> Now kernel spinlocks are MCS, this busylock thing
On Thu, May 19, 2016 at 10:08 AM, Eric Dumazet wrote:
> busylock was added at the time we had expensive ticket spinlocks
>
> (commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527 ("net: add additional
> lock to qdisc to increase throughput")
>
> Now kernel spinlocks are MCS, this busylock thing is no
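For reference, the mechanism being questioned: busylock makes all but one contending enqueuer wait on a secondary lock, so the root lock sees at most two waiters, one enqueuer plus the running dequeuer. With ticket spinlocks, N waiters all polled one cache line, so gating them this way was a clear win; MCS-style qspinlocks already queue each waiter on a private cache line, which is why the busylock is worth re-measuring. A condensed sketch of the enqueue path from net/core/dev.c of this era (simplified, deactivated-qdisc and drop paths omitted; not a drop-in patch):

static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
                                 struct net_device *dev,
                                 struct netdev_queue *txq)
{
        spinlock_t *root_lock = qdisc_lock(q);
        bool contended = qdisc_is_running(q);   /* someone is dequeuing now */
        int rc;

        /* Funnel: while the qdisc runs, extra enqueuers line up here
         * instead of all hammering root_lock's cache line. */
        if (unlikely(contended))
                spin_lock(&q->busylock);

        spin_lock(root_lock);
        rc = q->enqueue(skb, q) & NET_XMIT_MASK;
        if (qdisc_run_begin(q))
                __qdisc_run(q);                 /* opportunistically drain */
        spin_unlock(root_lock);

        if (unlikely(contended))
                spin_unlock(&q->busylock);
        return rc;
}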