On 27/04/2018 11:45 AM, Aaron Lu wrote:
> On Mon, Apr 23, 2018 at 09:10:33PM +0800, Aaron Lu wrote:
> > On Mon, Apr 23, 2018 at 11:54:57AM +0300, Tariq Toukan wrote:
> > > [...]

On Mon, Apr 23, 2018 at 09:10:33PM +0800, Aaron Lu wrote:
> On Mon, Apr 23, 2018 at 11:54:57AM +0300, Tariq Toukan wrote:
> > [...]

On Mon, Apr 23, 2018 at 11:54:57AM +0300, Tariq Toukan wrote:
> Hi,
>
> I ran my tests with your patches.
> Initial BW numbers are significantly higher than I documented back then in
> this mail-thread.
> For example, in driver #2 (see original mail thread), with 6 rings, I now
> get 92Gbps (slightly [...]

On 22/04/2018 7:43 PM, Tariq Toukan wrote:
> On 21/04/2018 11:15 AM, Aaron Lu wrote:
> > Sorry to bring up an old thread...
>
> I want to thank you very much for bringing this up!
> [...]

On 21/04/2018 11:15 AM, Aaron Lu wrote:
> Sorry to bring up an old thread...

I want to thank you very much for bringing this up!

> On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote:
> > [...]

Sorry to bring up an old thread...

On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote:
> On 18/09/2017 12:16 PM, Tariq Toukan wrote:
> > On 15/09/2017 1:23 PM, Mel Gorman wrote:
> > > On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> > > > Insights: [...]

On Wed, 8 Nov 2017 09:35:47 +
Mel Gorman wrote:
> On Wed, Nov 08, 2017 at 02:42:04PM +0900, Tariq Toukan wrote:
> > [...]

On 08/11/2017 6:35 PM, Mel Gorman wrote:
> On Wed, Nov 08, 2017 at 02:42:04PM +0900, Tariq Toukan wrote:
> > [...]

On Wed, Nov 08, 2017 at 02:42:04PM +0900, Tariq Toukan wrote:
> Hi all,
>
> After leaving this task for a while doing other tasks, I got back to it now
> and see that the good behavior I observed earlier was not stable.
>
> Recall: I work with a modified driver that [...]

On 03/11/2017 10:40 PM, Mel Gorman wrote:
> On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote:
> > On 18/09/2017 12:16 PM, Tariq Toukan wrote:
> > > On 15/09/2017 1:23 PM, Mel Gorman wrote:
> > > > On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> > > > > Insights: [...]

On Thu, Nov 02, 2017 at 07:21:09PM +0200, Tariq Toukan wrote:
> On 18/09/2017 12:16 PM, Tariq Toukan wrote:
> > On 15/09/2017 1:23 PM, Mel Gorman wrote:
> > > On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> > > > Insights: [...]

On 18/09/2017 12:16 PM, Tariq Toukan wrote:
> On 15/09/2017 1:23 PM, Mel Gorman wrote:
> > On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> > > Insights: [...]

On Mon, Sep 18, 2017 at 06:33:20PM +0300, Tariq Toukan wrote:
> On 18/09/2017 10:44 AM, Aaron Lu wrote:
> > On Mon, Sep 18, 2017 at 03:34:47PM +0800, Aaron Lu wrote:
> > > On Sun, Sep 17, 2017 at 07:16:15PM +0300, Tariq Toukan wrote:
> > > > It's nice to have the option to dynamically [...]

On 18/09/2017 10:44 AM, Aaron Lu wrote:
> On Mon, Sep 18, 2017 at 03:34:47PM +0800, Aaron Lu wrote:
> > On Sun, Sep 17, 2017 at 07:16:15PM +0300, Tariq Toukan wrote:
> > > [...]

On 15/09/2017 1:23 PM, Mel Gorman wrote:
> On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> > Insights: [...]

On Mon, Sep 18, 2017 at 03:34:47PM +0800, Aaron Lu wrote:
> On Sun, Sep 17, 2017 at 07:16:15PM +0300, Tariq Toukan wrote:
> > [...]

On Sun, Sep 17, 2017 at 07:16:15PM +0300, Tariq Toukan wrote:
> It's nice to have the option to dynamically play with the parameter.
> But maybe we should also think of changing the default fraction
> guaranteed to the PCP, so that unaware admins of networking servers
> would also benefit.

I coll[...]

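The previews above never name the knob, so as an illustration only: assuming
the "parameter" is the vm.percpu_pagelist_fraction sysctl that kernels of
this era exposed (since removed in favor of percpu_pagelist_high_fraction),
a minimal sketch of playing with it at runtime looks like:

    # Hypothetical tuning example, not a command from the thread.
    # The sysctl caps each per-CPU pagelist at 1/N of the zone's pages;
    # 8 is the minimum accepted value, i.e. the largest PCP lists allowed.
    # Larger PCP lists mean more allocations are served without taking
    # the zone lock.
    sysctl -w vm.percpu_pagelist_fraction=8

    # Writing 0 reverts to the kernel's default PCP sizing.
    sysctl -w vm.percpu_pagelist_fraction=0

    # Inspect the per-CPU count/high/batch values the change produced:
    grep -A3 'cpu:' /proc/zoneinfo | head -16

Whether enlarging the PCP share is a sane default is exactly the trade-off
being debated above: memory pinned in per-CPU caches versus round-trips to
the zone lock.
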
On 15/09/2017 10:28 AM, Jesper Dangaard Brouer wrote:
> On Thu, 14 Sep 2017 19:49:31 +0300
> Tariq Toukan wrote:
> > Hi all,
> > [...]

On 14/09/2017 11:19 PM, Andi Kleen wrote:
> Tariq Toukan writes:
> > [...]
> Please look at the callers. [...]

On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> Insights:
> Major degradation between #1 and #2, not getting anywhere close to linerate!
> Degradation is fixed between #2 and #3.
> This is because the page allocator cannot stand the higher allocation rate.
> In #2, we also see that the addit[...]

On Thu, 14 Sep 2017 19:49:31 +0300
Tariq Toukan wrote:
> Hi all,
> [...]

Tariq Toukan writes:
> Congestion in this case is very clear.
> When monitored in perf top:
>   85.58% [kernel] [k] queued_spin_lock_slowpath

Please look at the callers. Spinlock profiles without callers are usually
useless because it's just blaming the messenger. Most likely the PCP lists
are t[...]

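Andi's suggestion translates into something like the following (a generic
perf recipe, not a command line from the thread); with call graphs enabled,
the queued_spin_lock_slowpath samples get attributed to the code paths that
actually contend for the lock:

    # System-wide sampling with call graphs for ~10 seconds:
    perf record -a -g -- sleep 10
    # Report with callers visible; look for which paths reach
    # queued_spin_lock_slowpath (e.g. the page allocator's zone lock):
    perf report --no-children
    # Or watch live:
    perf top -g
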
Hi all,

As part of the efforts to support increasing next-generation NIC speeds,
I am investigating SW bottlenecks in the network stack receive flow.

Here I share some numbers I got for a simple experiment, in which I
simulate the page allocation rate needed in 200Gbps NICs.

I ran the test below [...]

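For a sense of the rate being simulated (my own back-of-envelope, not
figures from the thread): a 1500-byte MTU frame occupies 1538 bytes of wire
time once the Ethernet header, FCS, preamble, and inter-frame gap are
counted, so:

    # packets per second at 200 Gb/s with 1500-byte MTU frames:
    echo $(( 200 * 10**9 / (1538 * 8) ))       # 16254876, ~16.3 Mpps
    # assuming the driver packs two such frames into each 4 KiB page
    # (an assumption for illustration), pages needed per second:
    echo $(( 200 * 10**9 / (1538 * 8 * 2) ))   # 8127438 pages/sec

Per-CPU pagelists refill from the buddy allocator in batches under the zone
lock, so at millions of pages per second even a modest PCP miss rate turns
into heavy traffic on that single lock, which matches the
queued_spin_lock_slowpath profile quoted above.
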
23 matches