Re: [PATCH 1/1] vduse: moving kvfree into caller

2021-12-06 Thread Jason Wang
On Mon, Dec 6, 2021 at 3:54 PM Guanjun wrote: > > From: Guanjun > > This free action should be moved into the caller 'vduse_ioctl' in > concert with the allocation. > > No functional change. > > Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace") Does this fix a real problem? I
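For context, the pattern under discussion is moving the kvfree() of a buffer out of the callee and into vduse_ioctl(), which also does the allocation. A minimal sketch of that shape, with a hypothetical helper name standing in for the real VDUSE code:

    /* Sketch only: vduse_do_cmd() is a placeholder, not the real callee. */
    static long vduse_ioctl_cmd(size_t size)
    {
            void *buf;
            long ret;

            buf = kvmalloc(size, GFP_KERNEL);  /* the caller allocates ... */
            if (!buf)
                    return -ENOMEM;

            ret = vduse_do_cmd(buf);           /* callee no longer frees buf */

            kvfree(buf);                       /* ... so the caller frees it */
            return ret;
    }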

Re: [PATCH] eni_vdpa: alibaba: select VIRTIO_PCI_LIB

2021-12-06 Thread Jason Wang
On Mon, Dec 6, 2021 at 4:14 PM Arnd Bergmann wrote: > > On Mon, Dec 6, 2021 at 4:12 AM Jason Wang wrote: > > > > On Sat, Dec 4, 2021 at 2:55 AM Arnd Bergmann wrote: > > > > > > From: Arnd Bergmann > > > > > > When VIRTIO_PCI_LIB is not built-in but the alibaba driver is, the > > > kernel runs i

Re: [PATCH] drm: Return error codes from struct drm_driver.gem_create_object

2021-12-06 Thread Dan Carpenter
On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote: > GEM helper libraries use struct drm_driver.gem_create_object to let > drivers override GEM object allocation. On failure, the call returns > NULL. > > Change the semantics to make the calls return a pointer-encoded error. > This
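The semantic change being reviewed is the usual NULL-vs-ERR_PTR convention: on failure the hook returns a pointer-encoded error that callers check with IS_ERR()/PTR_ERR() instead of a NULL test. A hedged sketch of a driver-side hook under the new rule (struct and function names are illustrative, not taken from a specific driver):

    #include <linux/err.h>

    struct my_gem_object {
            struct drm_gem_object base;
            /* driver-private state ... */
    };

    static struct drm_gem_object *my_gem_create_object(struct drm_device *dev,
                                                       size_t size)
    {
            struct my_gem_object *obj;

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return ERR_PTR(-ENOMEM);   /* was: return NULL */

            return &obj->base;
    }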

Re: [PATCH] drm: Return error codes from struct drm_driver.gem_create_object

2021-12-06 Thread Thomas Zimmermann
Hi. On 06.12.21 at 11:42, Dan Carpenter wrote: On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote: GEM helper libraries use struct drm_driver.gem_create_object to let drivers override GEM object allocation. On failure, the call returns NULL. Change the semantics to make the call

Re: [PATCH v3 0/5] iommu/virtio: Add identity domains

2021-12-06 Thread Joerg Roedel
On Wed, Dec 01, 2021 at 05:33:20PM +, Jean-Philippe Brucker wrote: > Jean-Philippe Brucker (5): > iommu/virtio: Add definitions for VIRTIO_IOMMU_F_BYPASS_CONFIG > iommu/virtio: Support bypass domains > iommu/virtio: Sort reserved regions > iommu/virtio: Pass end address to viommu_add_ma

Re: [PATCH] drm: Return error codes from struct drm_driver.gem_create_object

2021-12-06 Thread Dan Carpenter
On Mon, Dec 06, 2021 at 12:16:24PM +0100, Thomas Zimmermann wrote: > Hi > > On 06.12.21 at 11:42, Dan Carpenter wrote: > > On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote: > > > GEM helper libraries use struct drm_driver.gem_create_object to let > > > drivers override GEM object

[PATCH 0/7] vhost flush cleanups

2021-12-06 Thread Mike Christie
The following patches are Andrey Ryabinin's flush cleanups and some from me. They reduce the number of flush calls and remove some bogus ones where we don't even have a worker running anymore. I wanted to send these patches now, because my vhost threading patches have conflicts and are now built o

[PATCH 3/7] vhost_net: get rid of vhost_net_flush_vq() and extra flush calls

2021-12-06 Thread Mike Christie
From: Andrey Ryabinin vhost_net_flush_vq() calls vhost_work_dev_flush() twice, passing a vhost_dev pointer obtained via 'n->poll[index].dev' and 'n->vqs[index].vq.poll.dev'. This is actually the same pointer, initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init(). Remove vhost_net_flush_
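Both expressions name the same vhost_dev, so a single flush suffices. A rough sketch of the shape described above (not the verbatim diff):

    /* Before (sketch): two flushes on what is really the same device. */
    static void vhost_net_flush_vq(struct vhost_net *n, int index)
    {
            vhost_work_dev_flush(n->poll[index].dev);
            vhost_work_dev_flush(n->vqs[index].vq.poll.dev); /* same pointer */
    }

    /* After (sketch): callers flush the device once, directly. */
    vhost_work_dev_flush(&n->dev);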

[PATCH 1/7] vhost: get rid of vhost_poll_flush() wrapper

2021-12-06 Thread Mike Christie
vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush(). It gives the wrong impression that we are doing some work over vhost_poll, while in fact it flushes vhost_poll->dev. It only complicates understanding of the code and leads to mistakes like flushing the same vhost_dev several times in
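A minimal sketch of the wrapper in question and the direct call that replaces it (approximated from the description, not a verbatim diff):

    /* The wrapper reads as per-poll, but it only ever flushes poll->dev. */
    void vhost_poll_flush(struct vhost_poll *poll)
    {
            vhost_work_dev_flush(poll->dev);
    }

    /* After the cleanup, callers say what they mean: */
    vhost_work_dev_flush(poll->dev);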

[PATCH 5/7] vhost_vsock: simplify vhost_vsock_flush()

2021-12-06 Thread Mike Christie
From: Andrey Ryabinin vhost_vsock_flush() calls vhost_work_dev_flush(vsock->vqs[i].poll.dev) before vhost_work_dev_flush(&vsock->dev). This seems pointless as vsock->vqs[i].poll.dev is the same as &vsock->dev, and several flushes in a row don't do anything useful; one is enough. Signed-off

[PATCH 2/7] vhost: flush dev once during vhost_dev_stop

2021-12-06 Thread Mike Christie
When vhost_work_dev_flush returns all work queued at that time will have completed. There is then no need to flush after every vhost_poll_stop call, and we can move the flush call to after the loop that stops the pollers. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 6 +++--- 1 file
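The shape of the change, roughly (a sketch of vhost_dev_stop based on the description above, not the exact diff):

    /* Sketch: stop every poller first, then flush the device once. */
    void vhost_dev_stop(struct vhost_dev *dev)
    {
            int i;

            for (i = 0; i < dev->nvqs; i++) {
                    if (dev->vqs[i]->kick && dev->vqs[i]->handle_kick)
                            vhost_poll_stop(&dev->vqs[i]->poll);
                    /* was: a per-vq flush here after each stop */
            }

            vhost_work_dev_flush(dev); /* one flush covers all queued work */
    }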

[PATCH 4/7] vhost_test: remove vhost_test_flush_vq()

2021-12-06 Thread Mike Christie
From: Andrey Ryabinin vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush(), which seems to have no value. It's just easier to call vhost_work_dev_flush() directly. Besides, there is no point in obtaining the vhost_dev pointer via 'n->vqs[index].poll.dev' when we can just use &n->dev.

[PATCH 6/7] vhost-scsi: drop flush after vhost_dev_cleanup

2021-12-06 Thread Mike Christie
The flush after vhost_dev_cleanup is not needed because: 1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread, so the flush call will just return since the worker has no device. 2. It's not needed for the re-queue case. vhost_scsi_evt_handle_kick grabs the mutex and if the bac

[PATCH 7/7] vhost-test: drop flush after vhost_dev_cleanup

2021-12-06 Thread Mike Christie
The flush after vhost_dev_cleanup is not needed because: 1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread, so the flush call will just return since the worker has no device. 2. It's not needed. The comment about jobs re-queueing themselves does not look correct because han

[PATCH V5 01/12] vhost: add vhost_worker pointer to vhost_virtqueue

2021-12-06 Thread Mike Christie
This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so we can store that info. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 24 +--- drivers/vhost/vhost.h | 1 + 2 files changed, 14 insertions(+), 11 deletion
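The data-structure change itself is small: each virtqueue records which worker services it. Sketch only; surrounding members and exact field placement are elided:

    struct vhost_virtqueue {
            struct vhost_dev *dev;
            struct vhost_worker *worker; /* new: worker servicing this vq */
            /* ... existing fields unchanged ... */
    };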

[PATCH V5 00/12] vhost: multiple worker support

2021-12-06 Thread Mike Christie
The following patches apply over Linus's tree and the user_worker patchset here: https://lore.kernel.org/virtualization/20211129194707.5863-1-michael.chris...@oracle.com/T/#t which allows us to check the vhost owner thread's RLIMITs, and they are built over Andrey's flush cleanups: https://lore.

[PATCH V5 03/12] vhost: take worker or vq instead of dev for queueing

2021-12-06 Thread Mike Christie
This patch has the core work queueing function take a worker for when we support multiple workers. It also adds a helper that takes a vq during queueing so modules can control which vq/worker to queue work on. This temporarily leaves vhost_work_queue in place. It will be removed when the drivers are converted in
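Conceptually, the core queueing path gains a worker parameter and a vq-level convenience helper; roughly (signatures inferred from the description, and the internal name vhost_work_queue_on is an assumption, not the patch's):

    /* Core queueing now targets a specific worker ... */
    static void vhost_work_queue_on(struct vhost_worker *worker,
                                    struct vhost_work *work);

    /* ... and drivers queue by vq, letting the core pick that vq's worker. */
    void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
    {
            vhost_work_queue_on(vq->worker, work);
    }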

[PATCH V5 05/12] vhost: convert poll work to be vq based

2021-12-06 Thread Mike Christie
This has the drivers pass in their poll-to-vq mapping and then converts the core poll code to use the vq-based helpers. Signed-off-by: Mike Christie --- drivers/vhost/net.c | 6 -- drivers/vhost/vhost.c | 8 +--- drivers/vhost/vhost.h | 4 +++- 3 files changed, 12 insertions(+), 6 dele

[PATCH V5 04/12] vhost: take worker or vq instead of dev for flushing

2021-12-06 Thread Mike Christie
This patch has the core work flush function take a worker for when we support multiple workers. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 24 +++- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index

[PATCH V5 06/12] vhost-sock: convert to vhost_vq_work_queue

2021-12-06 Thread Mike Christie
Convert from vhost_work_queue to vhost_vq_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/vsock.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 1f38160b249d..068ccdbd3bcd 100644 --- a/drivers/vhost/vsock.c +
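The conversion amounts to pointing the queueing call at a vq rather than the device; schematically (the vq chosen here is illustrative):

    /* Before (sketch): queue on the device, i.e. the single worker. */
    vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);

    /* After (sketch): queue on a specific vq so its worker handles it. */
    vhost_vq_work_queue(&vsock->vqs[VSOCK_VQ_RX], &vsock->send_pkt_work);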

[PATCH V5 08/12] vhost-scsi: convert to vhost_vq_work_queue

2021-12-06 Thread Mike Christie
Convert from vhost_work_queue to vhost_vq_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/scsi.c | 20 ++-- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index b2592e927316..93c6ad1246eb 100644 --- a/drivers

[PATCH V5 09/12] vhost: remove vhost_work_queue

2021-12-06 Thread Mike Christie
vhost_work_queue is no longer used. Each driver now uses poll- or vq-based queueing, so remove vhost_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 6 -- drivers/vhost/vhost.h | 1 - 2 files changed, 7 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/

[PATCH V5 12/12] vhost: allow worker attachment after initial setup

2021-12-06 Thread Mike Christie
This patch allows userspace to change the vq-to-worker mapping while it's in use, so tools can do this setup post device creation if needed. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 76 +++--- drivers/vhost/vhost.h | 2 +- include/uapi/li

[PATCH V5 11/12] vhost: allow userspace to create workers

2021-12-06 Thread Mike Christie
For vhost-scsi with 3 vqs and a workload that tries to use those vqs like: fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \ --ioengine=libaio --iodepth=128 --numjobs=3 the single vhost worker thread will become a bottleneck. To better utilize virtqueues and available CPUs, this pat
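The uapi side is an ioctl flow for creating workers and mapping vqs to them. The fragment below is only a hypothetical illustration of that flow; the ioctl name and struct layout are placeholders, not the interface actually added by this patch:

    /* Hypothetical userspace usage; names are placeholders. */
    struct vhost_vring_worker w = {
            .index = 1,     /* vq to (re)map */
            .worker_id = 2, /* worker that should service it */
    };

    if (ioctl(vhost_fd, VHOST_SET_VRING_WORKER, &w) < 0)
            perror("map vq to worker");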

[PATCH V5 07/12] vhost-scsi: make SCSI cmd completion per vq

2021-12-06 Thread Mike Christie
This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker/CPU. This will be useful with the next patches that allow us to create multiple worker threads and bind them to different vqs, so we can have comp

[PATCH V5 02/12] vhost, vhost-net: add helper to check if vq has work

2021-12-06 Thread Mike Christie
This adds a helper to check if a vq has work pending and converts vhost-net to use it. Signed-off-by: Mike Christie --- drivers/vhost/net.c | 2 +- drivers/vhost/vhost.c | 6 +++--- drivers/vhost/vhost.h | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/vhost/net.c
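The helper is essentially a per-vq variant of the existing has-work check; roughly (a sketch, with the field names inferred from the worker patches rather than copied from the diff):

    /* Sketch: report whether this vq's worker has work queued. */
    bool vhost_vq_has_work(struct vhost_virtqueue *vq)
    {
            return !llist_empty(&vq->worker->work_list);
    }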

[PATCH V5 10/12] vhost-scsi: flush IO vqs then send TMF rsp

2021-12-06 Thread Mike Christie
With one worker we will always send the scsi cmd responses and then send the TMF rsp, because LIO will always complete the scsi cmds first and then call into us to send the TMF response. With multiple workers, the IO vq workers could be running while the TMF/ctl vq worker is, so this has us do a flush befo
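In outline: before sending the TMF response, flush the IO vqs so any command completions queued on their workers go out first. A sketch of that ordering (helper and field names approximate the description, not the exact vhost-scsi code):

    static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
    {
            struct vhost_scsi_tmf *tmf = container_of(work,
                                            struct vhost_scsi_tmf, vwork);
            int i;

            /* Flush IO vqs so their queued cmd completions finish first. */
            for (i = VHOST_SCSI_VQ_IO; i < tmf->vhost->dev.nvqs; i++)
                    vhost_vq_flush(&tmf->vhost->vqs[i].vq);

            /* ... then build and send the TMF response (elided). */
    }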

[PATCH] hv_sock: Extract hvs_send_data() helper that takes only header

2021-12-06 Thread Kees Cook
When building under -Warray-bounds, the compiler is especially conservative when faced with casts from a smaller object to a larger object. While this has found many real bugs, there are some cases that are currently false positives (like here). With this as one of the last few instances of the war
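The false positive comes from casting a header-only object to the larger header-plus-payload struct even though only header bytes are sent; extracting a helper that takes just the header avoids the cast. A rough, hedged sketch of the pattern (struct and function names simplified, not the exact hv_sock code):

    #include <linux/hyperv.h>

    /* Simplified stand-in for the on-the-wire header. */
    struct hvs_hdr {
            u32 pkt_type;
            u32 data_size;
    };

    /* Only sizeof(*hdr) bytes are ever passed down, so no cast from a
     * header-sized object to the larger header+payload struct is needed.
     */
    static int hvs_send_hdr_only(struct vmbus_channel *chan,
                                 struct hvs_hdr *hdr)
    {
            return vmbus_sendpacket(chan, hdr, sizeof(*hdr), 0,
                                    VM_PKT_DATA_INBAND, 0);
    }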