On 12/04/2014 09:28 AM, Jeff Layton wrote:
> On Thu, 04 Dec 2014 09:17:17 -0800
> Shirley Ma wrote:
>
> > I am looking at how to reduce total RPC execution time in NFS/RDMA.
> > mountstats output shows that RPC backlog wait is too long, but increasing
> >
I am looking at how to reduce total RPC execution time in NFS/RDMA. mountstats
output shows that RPC backlog wait is too long, but increasing the credit limit
doesn't seem to help. Would this patchset help reduce total RPC execution time?
Shirley
On 12/04/2014 03:47 AM, Jeff Layton wrote:
> I w
On Tue, 2012-08-21 at 09:07 +0200, Peter Zijlstra wrote:
> On Mon, 2012-08-20 at 15:17 -0700, Shirley Ma wrote:
> > On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> > > On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > > > Add/Export a new API for p
On Mon, 2012-08-20 at 14:00 +0200, Peter Zijlstra wrote:
> On Fri, 2012-08-17 at 12:46 -0700, Shirley Ma wrote:
> > Add/Export a new API for per-cpu thread model networking device driver
> > to choose a preferred idlest cpu within allowed cpumask.
> >
> > The
to/from the other
NUMA node.
KVM per-cpu vhost will be the first one to use this API. Any other
device driver that uses a per-cpu thread model and has cgroup cpuset
control will use this API later.
Signed-off-by: Shirley Ma
---
include/linux/sched.h |  2 ++
kernel/sched/fair.c   | 41
On Fri, 2012-08-17 at 18:48 +0200, Peter Zijlstra wrote:
> On Fri, 2012-08-17 at 08:39 -0700, Shirley Ma wrote:
>
> > Hello Ingo, Peter,
> > Have you had chance to review below patch?
>
> Well no of course not, nobody CC'ed us..
>
> Your patch submis
Hello Ingo, Peter,
Have you had a chance to review the patch below?
Thanks
Shirley
On Sun, 2012-07-22 at 23:57 -0700, Shirley Ma wrote:
> Introduce a new API to choose per-cpu thread from cgroup control cpuset
> (allowed) and preferred cpuset (local numa-node).
>
> The receivi
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 64d9df5..46cc4a7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2806,4 +2806,6 @@ static inline unsigned long rlimit_max(unsigned int limit)
#endif /* __KERNEL__ */
+extern int find_idlest_prefer_cpu(struct cp
might not be part of cgroup cpusets
without this API. On a NUMA system, the preferred cpusets would help
reduce expensive cross-node memory accesses to/from the other node.
Signed-off-by: Shirley Ma
---
include/linux/sched.h |  2 ++
kernel/sched/fair.c   | 30 ++
2
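A rough caller-side sketch of how such an API might be used (not part of the
original patch; the prototype below is assumed, since the real declaration is
cut off in the diff above, and pick_worker_cpu() is a made-up helper):

#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/topology.h>

/* Assumed prototype: return the idlest CPU in 'prefer' that is also in
 * 'allowed'. The actual signature in the patch is truncated above. */
extern int find_idlest_prefer_cpu(const struct cpumask *allowed,
                                  const struct cpumask *prefer);

/* Hypothetical helper: pick a CPU for a per-device worker thread. */
static int pick_worker_cpu(struct task_struct *worker, int numa_node)
{
        /* preferred set: CPUs on the device's local NUMA node */
        const struct cpumask *prefer = cpumask_of_node(numa_node);

        /* allowed set: whatever the cgroup cpuset left for this task */
        return find_idlest_prefer_cpu(&worker->cpus_allowed, prefer);
}

A vhost-style driver would then create or migrate its per-cpu worker thread
onto the returned CPU.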
Roland Dreier <[EMAIL PROTECTED]> wrote on 09/17/2007 02:47:42 PM:
> > > IPoIB CM handles this properly by gathering together single pages in
> > > skbs' fragment lists.
>
> > Then can we reuse IPoIB CM code here?
>
> Yes, if possible, refactoring things so that the rx skb allocation
> code
> IPoIB CM handles this properly by gathering together single pages in
> skbs' fragment lists.
>
> - R.
Then can we reuse IPoIB CM code here?
Thanks
Shirley
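For reference, a minimal sketch of the page-fragment pattern described above
(illustrative only, not the actual IPoIB CM allocation path; the function name
and parameters are made up): keep the skb's linear area small and attach one
page per fragment slot, so a large-MTU receive buffer needs no high-order
allocation.

#include <linux/skbuff.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct sk_buff *alloc_frag_rx_skb(int hdr_len, int num_frags)
{
        struct sk_buff *skb;
        int i;

        /* small linear part for the headers only */
        skb = dev_alloc_skb(hdr_len + 16);
        if (!skb)
                return NULL;

        /* num_frags must not exceed MAX_SKB_FRAGS */
        for (i = 0; i < num_frags; ++i) {
                struct page *page = alloc_page(GFP_ATOMIC);

                if (!page)
                        goto err;
                /* gather single pages into the skb's fragment list */
                skb_fill_page_desc(skb, i, page, 0, PAGE_SIZE);
        }
        return skb;

err:
        dev_kfree_skb_any(skb); /* also releases pages attached so far */
        return NULL;
}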
Hello Roland,
Since ehca can support a 4K MTU, we would like to see a patch in
IPoIB to allow the link MTU to be up to 4K instead of the current 2K for the
2.6.24 kernel. The idea is that the IPoIB link MTU will be taken from the
SM's default broadcast MTU. This should be a small patch, I hope
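A sketch of that idea, under assumptions (not the actual patch; the helper
name is made up and IPOIB_ENCAP_LEN is redefined locally for self-containment):
the broadcast group record from the SM carries an IB MTU enum, which maps to a
link MTU roughly like this:

#include <rdma/ib_verbs.h>

#define IPOIB_ENCAP_LEN 4       /* IPoIB encapsulation header */

/* Map the SM's broadcast-group MTU (an IB MTU enum) to an IPoIB link MTU. */
static int ipoib_link_mtu_from_broadcast(enum ib_mtu bcast_mtu)
{
        /* ib_mtu_enum_to_int() maps IB_MTU_2048 -> 2048, IB_MTU_4096 -> 4096 */
        return ib_mtu_enum_to_int(bcast_mtu) - IPOIB_ENCAP_LEN;
}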
> Just to be clear, in the previous email I posted on this thread, I
> described a worst-case network ping-pong test case (send a packet, wait
> for reply), and found out that a deferred interrupt scheme just damaged
> the performance of the test case.
When splitting rx and tx handler, I found so
Hello Roland,
FYI, we are working on several IPoIB performance improvement
patches which are not on the list yet. Some of the patches are under test;
others will be submitted soon. They are:
1. skb aggregation for both dev xmit (networking layer) and IPoIB send
(it wi
Hello Roland,
> > Any plans to do something with multiple EQ support in mthca?
>
> I haven't done any work on it or seen anything from anyone else, so I
> expect this will have to wait for 2.6.24.
We are working on IPoIB to use multiple EQs for multiple
links/connections scalability. Doe