RE: [PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF add/remove

2021-01-12 Thread Long Li
> Subject: Re: [PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF > add/remove > > On Fri, 8 Jan 2021 16:53:40 -0800 Long Li wrote: > > From: Long Li > > > > This patch set fixes issues with packet loss on VF add/remove. > > These patches are for net-ne

[PATCH v2 2/3] hv_netvsc: Wait for completion on request SWITCH_DATA_PATH

2021-01-08 Thread Long Li
From: Long Li The completion indicates if NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been processed by the VSP. The traffic is steered to VF or synthetic after we receive this completion. Signed-off-by: Long Li Reported-by: kernel test robot --- Change from v1: Fixed warnings from kernel test robot
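
The shape of this change, as a minimal sketch in C (the structure and function names below are assumptions, not the driver's actual identifiers): post the NVSP_MSG4_TYPE_SWITCH_DATA_PATH request, then block on a completion that the channel callback signals once the VSP has acknowledged it.

#include <linux/completion.h>
#include <linux/types.h>

/* Illustrative only: names are assumptions, not netvsc's real ones. */
struct nv_dev_sketch {
    struct completion switch_datapath_done;
};

static void sketch_switch_data_path(struct nv_dev_sketch *ndev, bool use_vf)
{
    init_completion(&ndev->switch_datapath_done);

    /* ... send NVSP_MSG4_TYPE_SWITCH_DATA_PATH over the channel ... */

    /* Do not steer traffic to VF or synthetic until the VSP replies. */
    wait_for_completion(&ndev->switch_datapath_done);
}

/* Called from the receive-completion path when the VSP's reply arrives. */
static void sketch_on_switch_data_path_reply(struct nv_dev_sketch *ndev)
{
    complete(&ndev->switch_datapath_done);
}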

[PATCH v2 3/3] hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove

2021-01-08 Thread Long Li
From: Long Li On VF hot remove, NETDEV_GOING_DOWN is sent to notify that the VF is about to go down. At this time, the VF is still sending/receiving traffic, and we request the VSP to switch the datapath. On completion, the datapath is switched to synthetic and we can proceed with VF hot remove. Signed
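
For reference, reacting to NETDEV_GOING_DOWN is done through a netdev notifier; a minimal sketch is below. Only the notifier mechanism itself is standard kernel API; the datapath-switch helper is a hypothetical stand-in for what the driver would do.

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Hypothetical helper: ask the VSP to switch back to the synthetic NIC
 * and wait for its completion before the VF disappears. */
static void sketch_switch_to_synthetic(struct net_device *vf_netdev)
{
}

static int sketch_netvsc_netdev_event(struct notifier_block *nb,
                                      unsigned long event, void *ptr)
{
    struct net_device *ndev = netdev_notifier_info_to_dev(ptr);

    switch (event) {
    case NETDEV_GOING_DOWN:
        /* The VF is still passing traffic at this point, so request the
         * switch to the synthetic datapath before the VF goes away. */
        sketch_switch_to_synthetic(ndev);
        break;
    }
    return NOTIFY_OK;
}

static struct notifier_block sketch_netvsc_notifier = {
    .notifier_call = sketch_netvsc_netdev_event,
};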

[PATCH v2 1/3] hv_netvsc: Check VF datapath when sending traffic to VF

2021-01-08 Thread Long Li
From: Long Li The driver needs to check if the datapath has been switched to VF before sending traffic to VF. Signed-off-by: Long Li Reviewed-by: Haiyang Zhang --- drivers/net/hyperv/netvsc_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/hyperv
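
The check reads naturally as a guard in the transmit path; a minimal sketch, assuming a data_path_is_vf flag and an RCU-protected VF pointer (both names are assumptions, not the driver's actual fields):

#include <linux/netdevice.h>

/* Illustrative context structure; fields are assumptions. */
struct sketch_net_ctx {
    struct net_device __rcu *vf_netdev;
    bool data_path_is_vf;
};

static netdev_tx_t sketch_start_xmit(struct sk_buff *skb, struct net_device *net)
{
    struct sketch_net_ctx *ctx = netdev_priv(net);
    struct net_device *vf_netdev = rcu_dereference_bh(ctx->vf_netdev);

    if (vf_netdev && ctx->data_path_is_vf) {
        /* Datapath confirmed switched: hand the skb to the VF. */
        skb->dev = vf_netdev;
        dev_queue_xmit(skb);
        return NETDEV_TX_OK;
    }

    /* Otherwise keep sending on the synthetic device. */
    return NETDEV_TX_OK; /* placeholder for the synthetic send path */
}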

[PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF add/remove

2021-01-08 Thread Long Li
From: Long Li This patch set fixes issues with packet loss on VF add/remove. Long Li (3): hv_netvsc: Check VF datapath when sending traffic to VF hv_netvsc: Wait for completion on request SWITCH_DATA_PATH hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove drivers/net/hyperv/netvsc.c

RE: [PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-07 Thread Long Li
> > git remote add linux-review > > > https://nam06.safelinks.protection.outloo

RE: [PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-06 Thread Long Li
> git fetch --no-tags linux-review Long-Li/hv_netvsc-Check-VF-datapat

[PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-05 Thread Long Li
From: Long Li The completion indicates if NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been processed by the VSP. The traffic is steered to VF or synthetic after we receive this completion. Signed-off-by: Long Li --- drivers/net/hyperv/netvsc.c | 34 +++-- drivers/net

[PATCH 3/3] hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove

2021-01-05 Thread Long Li
From: Long Li On VF hot remove, NETDEV_GOING_DOWN is sent to notify that the VF is about to go down. At this time, the VF is still sending/receiving traffic, and we request the VSP to switch the datapath. On completion, the datapath is switched to synthetic and we can proceed with VF hot remove. Signed

[PATCH 1/3] hv_netvsc: Check VF datapath when sending traffic to VF

2021-01-05 Thread Long Li
From: Long Li The driver needs to check if the datapath has been switched to VF before sending traffic to VF. Signed-off-by: Long Li --- drivers/net/hyperv/netvsc_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv

[PATCH v1] mm/migrate: fix comment spelling

2020-10-24 Thread Long Li
The word in the comment is misspelled, it should be "include". Signed-off-by: Long Li --- mm/migrate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/migrate.c b/mm/migrate.c index 5ca5842df5db..d79640ab8aa1 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1694

[PATCH v4] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-07-02 Thread Long Li
r to make the code clear, the warning message is put in one place. Signed-off-by: Long Li --- changes in V4: -Change the check function name to kmalloc_check_flags() -Put the flags check into kmalloc_check_flags() changes in V3: -Put the warning message in one place -update the change log t
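
The pattern under discussion is "validate the caller's gfp flags once, before any pages are allocated, and warn in one place". A hedged sketch of what kmalloc_check_flags() could look like; the body is an assumption, only GFP_SLAB_BUG_MASK and the warn-and-strip idiom come from existing slab code:

#include <linux/gfp.h>
#include <linux/printk.h>
#include <linux/bug.h>

static gfp_t kmalloc_check_flags(gfp_t flags)
{
    if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
        gfp_t invalid = flags & GFP_SLAB_BUG_MASK;

        flags &= ~GFP_SLAB_BUG_MASK;
        pr_warn("Unexpected gfp: %#x. Fixing up to gfp: %#x\n",
                (unsigned int)invalid, (unsigned int)flags);
        WARN_ON_ONCE(1);
    }
    return flags;
}

/* kmalloc_order() would call kmalloc_check_flags(flags) before
 * alloc_pages(), so an invalid request never allocates pages at all. */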

[PATCH v3] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-07-01 Thread Long Li
r to make the code clear, the warning message is put in one place. Signed-off-by: Long Li --- changes in V3: -Put the warning message in one place -update the change log to be clear mm/slab.c| 10 +++--- mm/slab.h| 1 + mm/slab_common.c | 17 + mm/sl

[PATCH v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-06-30 Thread Long Li
virtual address, kmalloc_order() will return NULL even though the page has been allocated. After this modification, GFP_SLAB_BUG_MASK is checked before allocating pages, as is done in new_slab(). Signed-off-by: Long Li --- Changes in v2: - patch is rebased against "[PATCH] mm: Free unused pag

[PATCH v1] mm:free unused pages in kmalloc_order

2020-06-26 Thread Long Li
NULL, the pages have not been released. Usually driver developers will not use the GFP_HIGHUSER flag to allocate memory with kmalloc, but this memory leak should still be fixed. This is the first time I have posted a patch, so there may be something wrong. Signed-off-by: Long Li

RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-23 Thread Long Li
>Subject: RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware >>queue and CPU queue >> >>On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote:

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-23 Thread Long Li
>Thanks for the clarification. > >The problem with what Ming is proposing in my mind (and its an existing >problem that exists today), is that nvme is taking precedence over anything >else until it absolutely cannot hog the cpu in hardirq. > >In the thread Ming referenced a case where today if the

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-20 Thread Long Li
> >> Long, does this patch make any difference? > > > > Sagi, > > > > Sorry it took a while to bring my system back online. > > > > With the patch, the IOPS is about the same drop with the 1st patch. I think > the excessive context switches are causing the drop in IOPS. > > > > The following are ca

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-17 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > >Hey Ming, > Ok, so the real problem is per-cpu bounded tasks. I share Thomas opinion about a NAPI like approach. >>> >>> We already have that, its irq_poll, but it seems that for this >>> use-case, we get l

RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-06 Thread Long Li
>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote: >>From: Long Li >> >>storvsc doesn't use a dedicated hardware queue for a given CPU qu
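
A 1:1 mapping is usually expressed through blk-mq's queue map. A minimal sketch, assuming the device exposes at least one hardware queue per possible CPU; the function name is illustrative and this is not the driver's actual callback:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

/* Sketch: map every CPU to its own hardware queue, 1:1. */
static void sketch_map_queues(struct blk_mq_queue_map *qmap)
{
    unsigned int cpu;

    for_each_possible_cpu(cpu)
        qmap->mq_map[cpu] = qmap->queue_offset + cpu;
}

With a map like this, an I/O issued on CPU N is always handled on the hardware queue associated with CPU N, which is the dedicated-queue behavior the patch title describes.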

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-06 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > >On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote: >> When one IRQ flood happens on one CPU: >> >> 1) softirq handling on this CPU can't make progress >> >> 2) kernel thread bound to this CPU can't make progress >>

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-05 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > > >On 06/09/2019 03:22, Long Li wrote: >[ ... ] >> > >> Tracing shows that the CPU was in either hardirq or softirq all the >> time before warnings. During tests, the system was un

RE: [Patch v3] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-05 Thread Long Li
>Subject: RE: [Patch v3] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >From: Long Li Sent: Thursday, September 5, 2019 3:55 >PM >> >> storvsc doesn't use a dedicated hardware queue for a given CPU queue. >> When issuing I/O, it

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-05 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > > >Hi Ming, > >On 05/09/2019 11:06, Ming Lei wrote: >> On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote: >>> Hi, >>> >>> On 04/09/2019 19:07, Bart Van Assche wrote: On 9/3/19 12:50 AM, Daniel Lezcano wro

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-08-28 Thread Long Li
interval to evaluate if IRQ flood >>>is triggered. The Exponential Weighted Moving Average(EWMA) is used to >>>compute CPU average interrupt interval. >>> >>>Cc: Long Li >>>Cc: Ingo Molnar , >>>Cc: Peter Zijlstra >>>Cc: Keith Bu
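
The EWMA mentioned here is the standard recursive average; a self-contained sketch of how a per-CPU interrupt-interval estimate could be kept (the weight and names are assumptions, not values from the patch):

#define SKETCH_EWMA_WEIGHT 64 /* assumed smoothing factor */

/* avg_new = (avg_old * (w - 1) + sample) / w */
static unsigned long long sketch_ewma_update(unsigned long long avg,
                                             unsigned long long interval_ns)
{
    if (!avg)
        return interval_ns;
    return (avg * (SKETCH_EWMA_WEIGHT - 1) + interval_ns) / SKETCH_EWMA_WEIGHT;
}

/* An IRQ flood would then be declared when this running average drops
 * below some threshold, i.e. interrupts are arriving back to back. */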

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-23 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>> Sagi, Here are the test results. Benchmark command: fio --bs=4k --ioengine=libaio --iodepth=64 --filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvm

RE: [Patch v2] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>queue and CPU queue >>> >>>>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>>>>queue and CPU queue >>>>

RE: [Patch v2] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>queue and CPU queue >>> >>>From: Long Li Sent: Thursday, August 22, 2019 >>>1:42 PM >>>> >>>> storvsc doesn't use a dedicated hardware queue for a

RE: [PATCH] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: Re: [PATCH] storvsc: setup 1:1 mapping between hardware queue >>>and CPU queue >>> >>>On Tue, Aug 20, 2019 at 3:36 AM wrote: >>>> >>>> From: Long Li >>>> >>>> storvsc doesn't use a dedica

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: RE: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>>>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on >>>CPU >>>>>>with flooded interrupts

RE: [PATCH 0/3] fix interrupt swamp in NVMe

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>> >>>On Wed, Aug 21, 2019 at 07:47:44AM +, Long Li wrote: >>>> >>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>>> >>> >>>> >>

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>> >>>> From: Long Li >>>> >>>> When a NVMe hardware queue is mapped to several CPU queues, it is >>>> possib
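
The mechanism named in the subject, sketched with a workqueue (all names are hypothetical; this is not the patch's actual code): when the interrupt handler detects a flooded CPU, it punts completion processing to process context instead of draining the queue in hardirq.

#include <linux/workqueue.h>
#include <linux/types.h>

struct sketch_cq_work {
    struct work_struct work;
    /* ... reference to the completion queue to drain ... */
};

static void sketch_complete_cqes(struct work_struct *work)
{
    /* walk the completion queue and complete requests here */
}

static void sketch_cq_init(struct sketch_cq_work *w)
{
    INIT_WORK(&w->work, sketch_complete_cqes);
}

static void sketch_irq_handler(struct sketch_cq_work *w, bool cpu_flooded)
{
    if (cpu_flooded) {
        /* let softirqs and CPU-bound kernel threads make progress */
        queue_work(system_wq, &w->work);
        return;
    }
    /* normal case: complete requests directly in the interrupt handler */
}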

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>>On Mon, Aug 19, 2019 at 11:14:29PM -0700, lon...@linuxonhyperv.com >>>wrote: >>>> From: Long Li >>>> >>>> Whe

RE: [PATCH 1/3] sched: define a function to report the number of context switches on a CPU

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 1/3] sched: define a function to report the number of >>>context switches on a CPU >>> >>>On Mon, Aug 19, 2019 at 11:14:27PM -0700, lon...@linuxonhyperv.com >>>wrote: >>>> From: Long Li >>>> >>>

RE: [PATCH 0/3] fix interrupt swamp in NVMe

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>> >>>On 20/08/2019 09:25, Ming Lei wrote: >>>> On Tue, Aug 20, 2019 at 2:14 PM wrote: >>>>> >>>>> From: Long Li >>>>> >>>>> This patch set

RE: [Patch (resend) 5/5] cifs: Call MID callback before destroying transport

2019-05-13 Thread Long Li
>>>-Original Message- >>>From: Pavel Shilovsky >>>Sent: Thursday, May 9, 2019 11:01 AM >>>To: Long Li >>>Cc: Steve French ; linux-cifs >>c...@vger.kernel.org>; samba-technical ; >>>Kernel Mailing List >>>Sub

RE: [PATCH] x86/hyper-v: implement EOI assist

2019-04-15 Thread Long Li
hypervisor." >>>> >>>> Implement the optimization in Linux. >>>> >>> >>>Simon, Long, >>> >>>did you get a chance to run some tests with this? I have run some tests on Azure L80s_v2. With 10 NVMe disks on raid0 and formatted with EXT4, I'm getting 2.6m max IOPS with the patch, compared to 2.55m IOPS before. The VM has been running stable. Thank you! Tested-by: Long Li >>> >>>-- >>>Vitaly

[PATCH] cifs: smbd: take an array of requests when sending upper layer data

2019-04-15 Thread Long Li
From: Long Li To support compounding, __smb_send_rqst() now sends an array of requests to the transport layer. Change smbd_send() to take an array of requests, and send them in as few packets as possible. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 55

[Patch (resend) 2/5] cifs: smbd: Return EINTR when interrupted

2019-04-05 Thread Long Li
From: Long Li When packets are waiting for outbound I/O and interrupted, return the proper error code to user process. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 7259427

[Patch (resend) 1/5] cifs: smbd: Don't destroy transport on RDMA disconnect

2019-04-05 Thread Long Li
From: Long Li Now upper layer is handling the transport shutdown and reconnect, remove the code that handling transport shutdown on RDMA disconnect. Signed-off-by: Long Li --- fs/cifs/cifs_debug.c | 8 ++-- fs/cifs/smbdirect.c | 120 +++ fs

[Patch (resend) 3/5] cifs: smbd: Indicate to retry on transport sending failure

2019-04-05 Thread Long Li
From: Long Li Failure to send a packet doesn't mean it's a permanent failure, so it can't simply be returned to the user process. This I/O should be retried or failed based on the server packet response and transport health. This logic is handled by the upper layer. Give this decision to upper lay

[Patch (resend) 4/5] cifs: smbd: Retry on memory registration failure

2019-04-05 Thread Long Li
From: Long Li Memory registration failure doesn't mean this I/O has failed, it means the transport is hitting I/O error or needs reconnect. This error is not from the server. Indicate this error to upper layer, and let upper layer decide how to reconnect and proceed with this I/O. Signe

[Patch (resend) 5/5] cifs: Call MID callback before destroying transport

2019-04-05 Thread Long Li
From: Long Li When the transport is being destroyed, it's possible that some processes may hold memory registrations that need to be deregistered. Call the MID callbacks first so nobody is using transport resources, and the transport can then be destroyed. Signed-off-by: Long Li --- fs/cifs/connect.c

[Patch v2 2/2] CIFS: Fix an issue with re-sending rdata when transport returning -EAGAIN

2019-03-15 Thread Long Li
From: Long Li When sending a rdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Change in v2: adjust_credits before re-sending Signed-off-by: Long Li --- fs/cifs/file.c | 71

[Patch v2 1/2] CIFS: Fix an issue with re-sending wdata when transport returning -EAGAIN

2019-03-15 Thread Long Li
From: Long Li When sending a wdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Change in v2: adjust_credits before re-sending Signed-off-by: Long Li --- fs/cifs/file.c | 77
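
The retry loop being discussed has roughly this shape; the structure and helper names below are stand-ins for the real cifs routines, not the actual code:

#include <linux/errno.h>

struct sketch_wdata {
    unsigned int bytes;
};

static int sketch_adjust_credits(struct sketch_wdata *wdata)
{
    /* re-negotiate credits with the (possibly reconnected) session */
    return 0;
}

static int sketch_send_wdata(struct sketch_wdata *wdata)
{
    /* hand the write data to the transport; may return -EAGAIN */
    return 0;
}

static int sketch_resend_wdata(struct sketch_wdata *wdata)
{
    int rc;

    do {
        /* credits held by this wdata may be stale after a reconnect */
        rc = sketch_adjust_credits(wdata);
        if (rc)
            break;
        rc = sketch_send_wdata(wdata);
    } while (rc == -EAGAIN);

    return rc;
}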

[PATCH 1/2] CIFS: Fix a bug with re-sending wdata when transport returning -EAGAIN

2019-03-01 Thread Long Li
From: Long Li When sending a wdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Signed-off-by: Long Li --- fs/cifs/file.c | 61 +- 1 file changed, 31 insertions(+), 30

[PATCH 2/2] CIFS: Fix a bug with re-sending rdata when transport returning -EAGAIN

2019-03-01 Thread Long Li
From: Long Li When sending a rdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Signed-off-by: Long Li --- fs/cifs/file.c | 51 +- 1 file changed, 26 insertions(+), 25

[PATCH] CIFS: use the correct length when pinning memory for direct I/O for write

2018-12-16 Thread Long Li
From: Long Li The current code attempts to pin memory using the largest possible wsize based on the current SMB credits. This doesn't cause a kernel oops, but it is not optimal as we may pin more pages than actually needed. Fix this by only pinning what is needed for doing this writ

[PATCH] CIFS: return correct errors when pinning memory failed for direct I/O

2018-12-16 Thread Long Li
From: Long Li When pinning memory failed, we should return the correct error code and rewind the SMB credits. Reported-by: Murphy Zhou Signed-off-by: Long Li Cc: sta...@vger.kernel.org Cc: Murphy Zhou --- fs/cifs/file.c | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff

[PATCH] CIFS: Avoid returning EBUSY to upper layer VFS

2018-12-05 Thread Long Li
From: Long Li EBUSY is not handled by VFS, and will be passed to user-mode. This is not correct as we need to wait for more credits. This patch also fixes a bug where rsize or wsize is used uninitialized when the call to server->ops->wait_mtu_credits() fails. Reported-by: Dan Carpenter

RE: [Patch v4 1/3] CIFS: Add support for direct I/O read

2018-11-29 Thread Long Li
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > Wed, 28 Nov 2018 at 15:43, Long Li : > > > > > Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > > > > > Hi Long, > > > > > > Please find my c

RE: [Patch v4 2/3] CIFS: Add support for direct I/O write

2018-11-29 Thread Long Li
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > Wed, 28 Nov 2018 at 18:20, Long Li : > > > > > Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > > > > > Wed, 31 Oct 2018 at 15:26, Long Li : > >

RE: [Patch v4 2/3] CIFS: Add support for direct I/O write

2018-11-28 Thread Long Li
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > Wed, 31 Oct 2018 at 15:26, Long Li : > > > > From: Long Li > > > > With direct I/O write, user supplied buffers are pinned to the memory > > and data are transferred directly f

RE: [Patch v4 1/3] CIFS: Add support for direct I/O read

2018-11-28 Thread Long Li
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > Hi Long, > > Please find my comments below. > > > Wed, 31 Oct 2018 at 15:14, Long Li : > > > > From: Long Li > > > > With direct I/O read, we transfer the data directly

RE: [Patch v4] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-06 Thread Long Li
> Subject: Re: [Patch v4] genirq/matrix: Choose CPU for managed IRQs based > on how many of them are allocated > > Long, > > On Tue, 6 Nov 2018, Long Li wrote: > > > From: Long Li > > > > On a large system with multiple devices of the same class (e.g. N

[tip:irq/core] genirq/matrix: Improve target CPU selection for managed interrupts.

2018-11-06 Thread tip-bot for Long Li
Commit-ID: e8da8794a7fd9eef1ec9a07f0d4897c68581c72b Gitweb: https://git.kernel.org/tip/e8da8794a7fd9eef1ec9a07f0d4897c68581c72b Author: Long Li AuthorDate: Tue, 6 Nov 2018 04:00:00 + Committer: Thomas Gleixner CommitDate: Tue, 6 Nov 2018 23:20:13 +0100 genirq/matrix: Improve

[Patch v4] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-05 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs in the
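
The selection policy boils down to "pick the CPU in the search mask with the fewest managed IRQs already allocated, instead of the first CPU with a free vector". A self-contained sketch of that policy (plain arrays rather than the kernel's per-CPU matrix maps):

#include <limits.h>

static int sketch_find_best_cpu(const unsigned int *managed_allocated,
                                const unsigned char *cpu_in_mask,
                                unsigned int nr_cpus)
{
    unsigned int cpu, best = UINT_MAX;
    int best_cpu = -1;

    for (cpu = 0; cpu < nr_cpus; cpu++) {
        if (!cpu_in_mask[cpu])
            continue;
        /* prefer the CPU carrying the fewest managed IRQs so far */
        if (managed_allocated[cpu] < best) {
            best = managed_allocated[cpu];
            best_cpu = (int)cpu;
        }
    }
    return best_cpu;
}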

[tip:irq/core] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-05 Thread tip-bot for Long Li
Commit-ID: b82592199032bf7c778f861b936287e37ebc9f62 Gitweb: https://git.kernel.org/tip/b82592199032bf7c778f861b936287e37ebc9f62 Author: Long Li AuthorDate: Fri, 2 Nov 2018 18:02:48 + Committer: Thomas Gleixner CommitDate: Mon, 5 Nov 2018 12:16:26 +0100 genirq/affinity: Spread IRQs

RE: [PATCH v3] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-03 Thread Long Li
> Subject: Re: [PATCH v3] genirq/matrix: Choose CPU for managed IRQs based > on how many of them are allocated > > On Sat, 3 Nov 2018, Thomas Gleixner wrote: > > On Fri, 2 Nov 2018, Long Li wrote: > > > /** > > > * irq_matrix_assign_system - Assign system w

[Patch v2] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-02 Thread Long Li
From: Long Li On systems with a large number of NUMA nodes, there may be more NUMA nodes than the number of MSI/MSI-X interrupts that a device requests. The current code always picks up the NUMA nodes starting from node 0, up to the number of interrupts requested. This may leave some later
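
One way to picture the problem and the intended outcome: if vectors are always handed to nodes 0, 1, ..., nvec-1, nodes beyond that never receive any. The sketch below only illustrates spreading by rotating the starting node; it is not the algorithm the patch implements.

/* Distribute nr_vecs vectors over nr_nodes nodes, walking the nodes
 * round-robin from a rotating start so later nodes are not left out. */
static void sketch_assign_nodes(unsigned int nr_vecs, unsigned int nr_nodes,
                                unsigned int start_node,
                                unsigned int *vec_to_node)
{
    unsigned int v;

    for (v = 0; v < nr_vecs; v++)
        vec_to_node[v] = (start_node + v) % nr_nodes;
}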

RE: [PATCH] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-02 Thread Long Li
> Subject: RE: [PATCH] genirq/affinity: Spread IRQs to all available NUMA > nodes > > From: Long Li Sent: Thursday, November 1, 2018 > 4:52 PM > > > > --- a/kernel/irq/affinity.c > > +++ b/kernel/irq/affinity.c > > @@ -117,12 +117,13 @@ static in

[PATCH v3] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-01 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs in the

[PATCH] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-01 Thread Long Li
From: Long Li On systems with a large number of NUMA nodes, there may be more NUMA nodes than the number of MSI/MSI-X interrupts that a device requests. The current code always picks up the NUMA nodes starting from node 0, up to the number of interrupts requested. This may leave some later

RE: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts based on allocated IRQs

2018-11-01 Thread Long Li
> Subject: Re: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts > based on allocated IRQs > > Long, > > On Thu, 1 Nov 2018, Long Li wrote: > > On a large system with multiple devices of the same class (e.g. NVMe > > disks, using managed IRQs), the ker

[Patch v2] genirq/matrix: Choose CPU for assigning interrupts based on allocated IRQs

2018-10-31 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs in the

[Patch v4 1/3] CIFS: Add support for direct I/O read

2018-10-31 Thread Long Li
From: Long Li With direct I/O read, we transfer the data directly from transport layer to the user data buffer. Change in v3: add support for kernel AIO Change in v4: Refactor common read code to __cifs_readv for direct and non-direct I/O. Retry on direct I/O failure. Signed-off-by: Long Li

[Patch v4 2/3] CIFS: Add support for direct I/O write

2018-10-31 Thread Long Li
From: Long Li With direct I/O write, user supplied buffers are pinned to the memory and data are transferred directly from user buffers to the transport layer. Change in v3: add support for kernel AIO Change in v4: Refactor common write code to __cifs_writev for direct and non-direct I/O

[Patch v4 3/3] CIFS: Add direct I/O functions to file_operations

2018-10-31 Thread Long Li
From: Long Li With direct read/write functions implemented, add them to file_operations. Direct I/O is used under two conditions: 1. When mounting with "cache=none", CIFS uses direct I/O for all user file data transfer. 2. When opening a file with O_DIRECT, CIFS uses direct I/O fo
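
The two conditions translate into a simple dispatch at the top of the read/write entry points; a hedged sketch (the cache=none test and the direct-path placeholder are assumptions, not cifs code):

#include <linux/fs.h>
#include <linux/uio.h>
#include <linux/errno.h>

/* Hypothetical: would inspect the mount's cache mode. */
static bool sketch_mounted_cache_none(struct kiocb *iocb)
{
    return false;
}

static ssize_t sketch_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
    /* cache=none mount or O_DIRECT open: bypass the page cache */
    if (sketch_mounted_cache_none(iocb) || (iocb->ki_flags & IOCB_DIRECT))
        return -ENOSYS; /* placeholder for the direct read path */

    return generic_file_read_iter(iocb, to); /* placeholder cached path */
}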

RE: [PATCH] Choose CPU based on allocated IRQs

2018-10-30 Thread Long Li
> Subject: Re: [PATCH] Choose CPU based on allocated IRQs > > Long, > > On Tue, 23 Oct 2018, Long Li wrote: > > thanks for this patch. > > A trivial formal thing ahead. The subject line > >[PATCH] Choose CPU based on allocated IRQs > > is lacking a

[PATCH] Choose CPU based on allocated IRQs

2018-10-22 Thread Long Li
From: Long Li In irq_matrix, "available" is set when IRQs are allocated earlier in the IRQ assigning process. Later, when IRQs are activated those values are not good indicators of what CPU to choose to assign to this IRQ. Change to choose CPU for an IRQ based on how many IRQs a

RE: [PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read

2018-09-24 Thread Long Li
> Subject: Re: [PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read > > Thu, 20 Sep 2018 at 14:22, Long Li : > > > > From: Long Li > > > > With direct I/O read, we transfer the data directly from transport > > layer to the user data buffer

[PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read

2018-09-20 Thread Long Li
From: Long Li With direct I/O read, we transfer the data directly from transport layer to the user data buffer. Change in v3: add support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/cifsglob.h | 5 ++ fs/cifs/file.c | 210

[PATCH V3 (resend) 4/7] CIFS: Add support for direct I/O write

2018-09-20 Thread Long Li
From: Long Li With direct I/O write, user supplied buffers are pinned to the memory and data are transferred directly from user buffers to the transport layer. Change in v3: add support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/file.c | 196

[PATCH V3 (resend) 5/7] CIFS: Add direct I/O functions to file_operations

2018-09-20 Thread Long Li
From: Long Li With direct read/write functions implemented, add them to file_operations. Direct I/O is used under two conditions: 1. When mounting with "cache=none", CIFS uses direct I/O for all user file data transfer. 2. When opening a file with O_DIRECT, CIFS uses direct I/O fo

[PATCH V3 (resend) 2/7] CIFS: SMBD: Do not call ib_dereg_mr on invalidated memory registration

2018-09-20 Thread Long Li
From: Long Li It is not necessary to deregister a memory registration after it has been successfully invalidated. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 38 +++--- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/fs/cifs/smbdirect.c b

[PATCH V3 (resend) 1/7] CIFS: pass page offsets on SMB1 read/write

2018-09-20 Thread Long Li
From: Long Li When issuing SMB1 read/write, pass the page offset to transport. Signed-off-by: Long Li --- fs/cifs/cifssmb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c index 41329f4..f82fd34 100644 --- a/fs/cifs/cifssmb.c +++ b/fs/cifs/cifssmb.c

RE: [Patch v7 21/22] CIFS: SMBD: Upper layer performs SMB read via RDMA write through memory registration

2018-09-20 Thread Long Li
> Subject: Re: [Patch v7 21/22] CIFS: SMBD: Upper layer performs SMB read via > RDMA write through memory registration > > Replying to a very old message, but it's something we discussed today at the > IOLab event so to capture it: > > On 11/7/2017 12:55 AM, Long L

RE: [Patch v3 00/16] CIFS: add support for direct I/O

2018-09-15 Thread Long Li
> From: Steve French > Sent: Saturday, September 15, 2018 2:28 AM > To: Long Li > Cc: Steve French ; CIFS ; > samba-technical ; LKML ker...@vger.kernel.org>; linux-r...@vger.kernel.org > Subject: Re: [Patch v3 00/16] CIFS: add support for direct I/O > > could yo

[Patch v3 02/16] CIFS: Use offset when reading pages

2018-09-07 Thread Long Li
From: Long Li With offset defined in rdata, transport functions need to look at this offset when reading data into the correct places in pages. Signed-off-by: Long Li --- fs/cifs/cifsproto.h | 4 +++- fs/cifs/cifssmb.c | 1 + fs/cifs/connect.c | 5 +++-- fs/cifs/file.c | 52

[Patch v3 05/16] CIFS: Calculate the correct request length based on page offset and tail size

2018-09-07 Thread Long Li
From: Long Li It's possible that the page offset is non-zero in the pages in a request, change the function to calculate the correct data buffer length. Signed-off-by: Long Li --- fs/cifs/transport.c | 20 +--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git

[Patch v3 03/16] CIFS: Add support for direct pages in wdata

2018-09-07 Thread Long Li
From: Long Li Add a function to allocate wdata without allocating pages for data transfer. This gives the caller an option to pass a number of pages that point to the data buffer to write to. wdata is responsible for freeing those pages after it's done. Signed-off-by: Long Li --- fs

[Patch v3 16/16] CIFS: Add direct I/O functions to file_operations

2018-09-07 Thread Long Li
From: Long Li With direct read/write functions implemented, add them to file_operations. Direct I/O is used under two conditions: 1. When mounting with "cache=none", CIFS uses direct I/O for all user file data transfer. 2. When opening a file with O_DIRECT, CIFS uses direct I/O fo

[Patch v3 04/16] CIFS: pass page offset when issuing SMB write

2018-09-07 Thread Long Li
From: Long Li When issuing SMB writes, pass along the write data page offset to transport. Signed-off-by: Long Li --- fs/cifs/cifssmb.c | 1 + fs/cifs/smb2pdu.c | 1 + 2 files changed, 2 insertions(+) diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c index 503e0ed..0a57c61 100644 --- a/fs

[Patch v3 08/16] CIFS: SMBD: Support page offset in RDMA send

2018-09-07 Thread Long Li
From: Long Li The RDMA send function needs to look at offset in the request pages, and send data starting from there. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 27 +++ 1 file changed, 19 insertions(+), 8 deletions(-) diff --git a/fs/cifs/smbdirect.c b/fs/cifs

[Patch v3 14/16] CIFS: Add support for direct I/O read

2018-09-07 Thread Long Li
From: Long Li With direct I/O read, we transfer the data directly from transport layer to the user data buffer. Change in v3: added support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/cifsglob.h | 5 ++ fs/cifs/file.c | 209

[Patch v3 12/16] CIFS: Pass page offset for calculating signature

2018-09-07 Thread Long Li
From: Long Li When calculating signature for the packet, it needs to read into the correct page offset for the data. Signed-off-by: Long Li --- fs/cifs/cifsencrypt.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c

[Patch v3 13/16] CIFS: Pass page offset for encrypting

2018-09-07 Thread Long Li
From: Long Li Encryption function needs to read data starting page offset from input buffer. This doesn't affect decryption path since it allocates its own page buffers. Signed-off-by: Long Li --- fs/cifs/smb2ops.c | 20 +--- 1 file changed, 13 insertions(+), 7 dele

[Patch v3 01/16] CIFS: Add support for direct pages in rdata

2018-09-07 Thread Long Li
From: Long Li Add a function to allocate rdata without allocating pages for data transfer. This gives the caller an option to pass a number of pages that point to the data buffer. rdata is responsible for freeing those pages after it's done. Signed-off-by: Long Li --- fs/cifs/cifsglob.h

[Patch v3 09/16] CIFS: SMBD: Support page offset in RDMA recv

2018-09-07 Thread Long Li
From: Long Li RDMA recv function needs to place data to the correct place starting at page offset. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 18 +++--- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 6141e3c

[Patch v3 07/16] CIFS: When sending data on socket, pass the correct page offset

2018-09-07 Thread Long Li
From: Long Li It's possible that the offset is non-zero in the page to send, change the function to pass this offset to socket. Signed-off-by: Long Li --- fs/cifs/transport.c | 14 ++ 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/fs/cifs/transport.c b/fs

[Patch v3 10/16] CIFS: SMBD: Do not call ib_dereg_mr on invalidated memory registration

2018-09-07 Thread Long Li
From: Long Li It is not necessary to deregister a memory registration after it has been successfully invalidated. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 82 ++--- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/fs/cifs

[Patch v3 06/16] CIFS: Introduce helper function to get page offset and length in smb_rqst

2018-09-07 Thread Long Li
From: Long Li Introduce a function rqst_page_get_length to return the page offset and length for a given page in smb_rqst. This function is to be used by following patches. Signed-off-by: Long Li --- fs/cifs/cifsproto.h | 3 +++ fs/cifs/misc.c | 17 + 2 files changed, 20
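
Such a helper typically derives the per-page offset and length from the request's page-array bookkeeping. A sketch using field names modeled on the description (treat them as assumptions):

struct sketch_rqst {
    unsigned int rq_npages;  /* number of pages in the request */
    unsigned int rq_offset;  /* data offset within the first page */
    unsigned int rq_pagesz;  /* data size of a full page */
    unsigned int rq_tailsz;  /* bytes used in the last page */
};

static void sketch_page_get_length(const struct sketch_rqst *rqst,
                                   unsigned int idx,
                                   unsigned int *len, unsigned int *offset)
{
    *offset = (idx == 0) ? rqst->rq_offset : 0;

    if (rqst->rq_npages == 1 || idx == rqst->rq_npages - 1)
        *len = rqst->rq_tailsz;
    else if (idx == 0)
        *len = rqst->rq_pagesz - rqst->rq_offset;
    else
        *len = rqst->rq_pagesz;
}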

[Patch v3 00/16] CIFS: add support for direct I/O

2018-09-07 Thread Long Li
From: Long Li This patch set implements direct I/O. In the normal code path (even with cache=none), CIFS copies I/O data from user space to kernel space because the protocol may require signing and encryption of user data. With this patch set, CIFS passes the I/O data directly
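
The core of the approach is pinning the user buffer and handing those pages to the transport instead of copying. A hedged sketch built on iov_iter_get_pages_alloc(), whose signature is taken from kernels of that era; the wrapper itself is an assumption, not cifs code:

#include <linux/uio.h>
#include <linux/mm.h>

/* Pin up to 'len' bytes of the user buffer described by 'iter' and
 * return the pinned pages so the transport can send from them directly. */
static ssize_t sketch_pin_user_pages(struct iov_iter *iter, size_t len,
                                     struct page ***pages, size_t *page_start)
{
    ssize_t bytes;

    bytes = iov_iter_get_pages_alloc(iter, pages, len, page_start);
    if (bytes > 0)
        iov_iter_advance(iter, bytes);

    return bytes;
}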

[Patch v3 11/16] CIFS: SMBD: Support page offset in memory registration

2018-09-07 Thread Long Li
From: Long Li Change code to pass the correct page offset during memory registration for RDMA read/write. Signed-off-by: Long Li --- fs/cifs/smb2pdu.c | 18 -- fs/cifs/smbdirect.c | 29 + fs/cifs/smbdirect.h | 2 +- 3 files changed, 34 insertions

[Patch v3 15/16] CIFS: Add support for direct I/O write

2018-09-07 Thread Long Li
From: Long Li With direct I/O write, user supplied buffers are pinned to the memory and data are transferred directly from user buffers to the transport layer. Change in v3: added support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/file.c | 195

RE: [Patch v2 14/15] CIFS: Add support for direct I/O write

2018-06-26 Thread Long Li
> Subject: Re: [Patch v2 14/15] CIFS: Add support for direct I/O write > > On 6/26/2018 12:39 AM, Long Li wrote: > >> Subject: Re: [Patch v2 14/15] CIFS: Add support for direct I/O write > >> > >> On 5/30/2018 3:48 PM, Long Li wrote: > >>> From:

RE: [Patch v2 06/15] CIFS: Introduce helper function to get page offset and length in smb_rqst

2018-06-26 Thread Long Li
> Subject: Re: [Patch v2 06/15] CIFS: Introduce helper function to get page > offset and length in smb_rqst > > On 6/25/2018 5:14 PM, Long Li wrote: > >> Subject: Re: [Patch v2 06/15] CIFS: Introduce helper function to get > >> page offset and length in smb_rqst

RE: [Patch v2 02/15] CIFS: Add support for direct pages in rdata

2018-06-26 Thread Long Li
> Subject: Re: [Patch v2 02/15] CIFS: Add support for direct pages in rdata > > On 6/25/2018 5:01 PM, Jason Gunthorpe wrote: > > On Sat, Jun 23, 2018 at 09:50:20PM -0400, Tom Talpey wrote: > >> On 5/30/2018 3:47 PM, Long Li wrote: > >>> From: Long Li > >

RE: [Patch v2 14/15] CIFS: Add support for direct I/O write

2018-06-25 Thread Long Li
> Subject: Re: [Patch v2 14/15] CIFS: Add support for direct I/O write > > On 5/30/2018 3:48 PM, Long Li wrote: > > From: Long Li > > > > Implement the function for direct I/O write. It doesn't support AIO, > > which will be implemented in a follow up

RE: [Patch v2 13/15] CIFS: Add support for direct I/O read

2018-06-25 Thread Long Li
> Subject: Re: [Patch v2 13/15] CIFS: Add support for direct I/O read > > > > On 5/30/2018 3:48 PM, Long Li wrote: > > From: Long Li > > > > Implement the function for direct I/O read. It doesn't support AIO, > > which will be implemented in a fol
