On Mon, Dec 6, 2021 at 3:54 PM Guanjun wrote:
>
> From: Guanjun
>
> This free should be moved into the caller 'vduse_ioctl', in concert
> with the allocation.
>
> No functional change.
>
> Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace")
Does this fix a real problem? I
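As a minimal sketch of the pattern the patch description above is asking for, with hypothetical names (example_ioctl(), example_process()) standing in for the real VDUSE functions: the buffer is allocated and freed in the same function, so the callee never frees memory it did not allocate.

/* Illustrative only; not the actual VDUSE code. */
static long example_process(void *buf, unsigned long arg);

static long example_ioctl(unsigned long arg)
{
        void *buf;
        long ret;

        buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        ret = example_process(buf, arg);        /* callee no longer frees buf */

        kfree(buf);                             /* freed next to the allocation */
        return ret;
}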
On Mon, Dec 6, 2021 at 4:14 PM Arnd Bergmann wrote:
>
> On Mon, Dec 6, 2021 at 4:12 AM Jason Wang wrote:
> >
> > On Sat, Dec 4, 2021 at 2:55 AM Arnd Bergmann wrote:
> > >
> > > From: Arnd Bergmann
> > >
> > > When VIRTIO_PCI_LIB is not built-in but the alibaba driver is, the
> > > kernel runs i
On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote:
> GEM helper libraries use struct drm_driver.gem_create_object to let
> drivers override GEM object allocation. On failure, the call returns
> NULL.
>
> Change the semantics to make the calls return a pointer-encoded error.
> This
Hi
On 06.12.21 at 11:42, Dan Carpenter wrote:
On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote:
GEM helper libraries use struct drm_driver.gem_create_object to let
drivers override GEM object allocation. On failure, the call returns
NULL.
Change the semantics to make the call
On Wed, Dec 01, 2021 at 05:33:20PM +, Jean-Philippe Brucker wrote:
> Jean-Philippe Brucker (5):
> iommu/virtio: Add definitions for VIRTIO_IOMMU_F_BYPASS_CONFIG
> iommu/virtio: Support bypass domains
> iommu/virtio: Sort reserved regions
> iommu/virtio: Pass end address to viommu_add_ma
On Mon, Dec 06, 2021 at 12:16:24PM +0100, Thomas Zimmermann wrote:
> Hi
>
> On 06.12.21 at 11:42, Dan Carpenter wrote:
> > On Tue, Nov 30, 2021 at 10:52:55AM +0100, Thomas Zimmermann wrote:
> > > GEM helper libraries use struct drm_driver.gem_create_object to let
> > > drivers override GEM object
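To make the thread above concrete, here is a minimal sketch of the NULL-to-error-pointer change being discussed, assuming a simplified gem_create_object-style hook; it is illustrative, not the actual DRM patch.

static struct drm_gem_object *
example_gem_create_object(struct drm_device *dev, size_t size)
{
        struct drm_gem_object *obj;

        obj = kzalloc(sizeof(*obj), GFP_KERNEL);
        if (!obj)
                return ERR_PTR(-ENOMEM);        /* previously: return NULL */
        return obj;
}

/* Callers then test with IS_ERR()/PTR_ERR() instead of a NULL check:
 *
 *        obj = dev->driver->gem_create_object(dev, size);
 *        if (IS_ERR(obj))
 *                return PTR_ERR(obj);
 */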
The following patches are Andrey Ryabinin's flush cleanups and some
from me. They reduce the number of flush calls and remove some bogus
ones where we don't even have a worker running anymore.
I wanted to send these patches now, because my vhost threading patches
have conflicts and are now built o
From: Andrey Ryabinin
vhost_net_flush_vq() calls vhost_work_dev_flush() twice passing
vhost_dev pointer obtained via 'n->poll[index].dev' and
'n->vqs[index].vq.poll.dev'. This is actually the same pointer,
initialized in vhost_net_open()/vhost_dev_init()/vhost_poll_init().
Remove vhost_net_flush_
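As a rough illustration of the redundancy being removed (a simplified sketch, not the exact kernel code): both expressions below resolve to the same vhost_dev, so the second flush adds nothing.

static void example_net_flush_vq(struct vhost_net *n, int index)
{
        vhost_work_dev_flush(n->poll[index].dev);
        vhost_work_dev_flush(n->vqs[index].vq.poll.dev);        /* same device */
}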
vhost_poll_flush() is a simple wrapper around vhost_work_dev_flush().
It gives the wrong impression that we are doing some work on the vhost_poll,
while in fact it flushes vhost_poll->dev.
It only complicates understanding of the code and leads to mistakes
like flushing the same vhost_dev several times in
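In other words, the wrapper boils down to something like the following (a simplified sketch, not the exact kernel code), which is why calling vhost_work_dev_flush() directly is clearer.

static void vhost_poll_flush(struct vhost_poll *poll)
{
        vhost_work_dev_flush(poll->dev);        /* flushes the device, not the poll */
}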
From: Andrey Ryabinin
vhost_vsock_flush() calls vhost_work_dev_flush(vsock->vqs[i].poll.dev)
before vhost_work_dev_flush(&vsock->dev). This seems pointless,
as vsock->vqs[i].poll.dev is the same as &vsock->dev, and several flushes
in a row don't do anything useful; one is enough.
Signed-off
When vhost_work_dev_flush returns, all work queued at that time will have
completed. There is then no need to flush after every vhost_poll_stop
call, and we can move the flush call to after the loop that stops the
pollers.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 6 +++---
1 file
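A rough before/after sketch of the change (loop body simplified, not the literal diff):

        /* Before (illustrative): flush after every stopped poller. */
        for (i = 0; i < dev->nvqs; i++) {
                vhost_poll_stop(&dev->vqs[i]->poll);
                vhost_work_dev_flush(dev);
        }

        /* After: stop all pollers first, then flush the device once. */
        for (i = 0; i < dev->nvqs; i++)
                vhost_poll_stop(&dev->vqs[i]->poll);
        vhost_work_dev_flush(dev);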
From: Andrey Ryabinin
vhost_test_flush_vq() is just a simple wrapper around vhost_work_dev_flush(),
which seems to have no value. It's just easier to call vhost_work_dev_flush()
directly. Besides, there is no point in obtaining the vhost_dev pointer
via 'n->vqs[index].poll.dev' when we can just use &n->dev.
The flush after vhost_dev_cleanup is not needed because:
1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.
2. It's not needed for the re-queue case. vhost_scsi_evt_handle_kick grabs
the mutex and if the bac
The flush after vhost_dev_cleanup is not needed because:
1. It doesn't do anything. vhost_dev_cleanup will stop the worker thread,
so the flush call will just return since the device no longer has a worker.
2. It's not needed. The comment about jobs re-queueing themselves does
not look correct because han
This patchset allows userspace to map vqs to different workers. This
patch adds a worker pointer to the vq so we can store that info.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 24 +---
drivers/vhost/vhost.h | 1 +
2 files changed, 14 insertions(+), 11 deletion
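Roughly, the shape of the change is the following; the field and struct names follow the series' naming, but this is a sketch rather than the full definitions.

struct vhost_worker {
        struct task_struct      *task;          /* thread that runs queued work */
        struct llist_head       work_list;      /* pending vhost_work items */
};

struct vhost_virtqueue {
        /* ... existing fields ... */
        struct vhost_worker     *worker;        /* worker servicing this vq */
};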
The following patches apply over linus's tree and the user_worker
patchset here:
https://lore.kernel.org/virtualization/20211129194707.5863-1-michael.chris...@oracle.com/T/#t
which allows us to check the vhost owner thread's RLIMITs, and they
are built over Andrey's flush cleanups:
https://lore.
This patch has the core work queueing function take a worker for when we
support multiple workers. It also adds a helper that takes a vq during
queueing so modules can control which vq/worker to queue work on.
This temporarily leaves vhost_work_queue in place. It will be removed when the drivers
are converted in
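The resulting shape is roughly the following (signatures simplified; the helper name vhost_work_queue_on is illustrative, building on the struct sketch above): the core path queues on a worker, and the vq helper picks the worker stored in the vq, so drivers choose placement per vq.

static void vhost_work_queue_on(struct vhost_worker *worker,
                                struct vhost_work *work)
{
        /* add 'work' to worker->work_list and wake worker->task */
}

void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
{
        vhost_work_queue_on(vq->worker, work);
}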
This has the drivers pass in their poll-to-vq mapping and then converts
the core poll code to use the vq based helpers.
Signed-off-by: Mike Christie
---
drivers/vhost/net.c | 6 --
drivers/vhost/vhost.c | 8 +---
drivers/vhost/vhost.h | 4 +++-
3 files changed, 12 insertions(+), 6 dele
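Illustratively, the poll init signature changes along these lines (simplified; the exact parameters may differ): vhost_poll_init() is handed the vq the poll belongs to, so work triggered by the poll can be queued on that vq's worker.

void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
                     __poll_t mask, struct vhost_virtqueue *vq);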
This patch has the core work flush function take a worker for when we
support multiple workers.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 24 +++-
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index
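For reference, a per-worker flush can be sketched like this; names are simplified and follow the sketches above rather than the exact patch. The idea is to queue a sentinel work item on the worker and wait for it, which guarantees everything queued before it has run.

struct example_flush {
        struct vhost_work       work;
        struct completion       wait_event;
};

static void example_flush_fn(struct vhost_work *work)
{
        struct example_flush *f = container_of(work, struct example_flush, work);

        complete(&f->wait_event);
}

static void example_worker_flush(struct vhost_worker *worker)
{
        struct example_flush flush;

        init_completion(&flush.wait_event);
        vhost_work_init(&flush.work, example_flush_fn);
        vhost_work_queue_on(worker, &flush.work);
        wait_for_completion(&flush.wait_event);
}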
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 1f38160b249d..068ccdbd3bcd 100644
--- a/drivers/vhost/vsock.c
+
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index b2592e927316..93c6ad1246eb 100644
--- a/drivers
vhost_work_queue is no longer used. Each driver is using the poll or vq
based queueing, so remove vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 6 --
drivers/vhost/vhost.h | 1 -
2 files changed, 7 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/
This patch allows userspace to change the vq-to-worker mapping while it's
in use so tools can do this setup post device creation if needed.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 76 +++---
drivers/vhost/vhost.h | 2 +-
include/uapi/li
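From userspace the flow would look roughly like this; the ioctl name VHOST_SET_VRING_WORKER and the struct layout are placeholders for illustration, not the UAPI proposed by the patch.

/* Hypothetical sketch of the userspace side. */
struct vhost_vring_worker {
        unsigned int index;             /* which virtqueue */
        unsigned int worker_id;         /* which worker should service it */
};

static int bind_vq_to_worker(int vhost_fd, unsigned int vq, unsigned int worker)
{
        struct vhost_vring_worker w = { .index = vq, .worker_id = worker };

        return ioctl(vhost_fd, VHOST_SET_VRING_WORKER, &w);
}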
For vhost-scsi with 3 vqs and a workload that tries to use those vqs
like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
the single vhost worker thread will become a bottleneck.
To better utilize virtqueues and available CPUs, this pat
This patch separates the scsi cmd completion code paths so we can complete
cmds based on their vq instead of having all cmds complete on the same
worker/CPU. This will be useful with the next patches that allow us to
create multiple worker threads and bind them to different vqs, so we can
have comp
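Directionally, the change looks like this (a sketch of the data-structure move, not the literal diff): completion state becomes per-virtqueue instead of per-device, so each vq's worker only completes commands submitted on that vq.

struct vhost_scsi_virtqueue {
        struct vhost_virtqueue  vq;
        struct llist_head       completion_list;        /* was device-wide */
        struct vhost_work       completion_work;        /* runs on vq->worker */
};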
This adds a helper to check if a vq has work pending and converts
vhost-net to use it.
Signed-off-by: Mike Christie
---
drivers/vhost/net.c | 2 +-
drivers/vhost/vhost.c | 6 +++---
drivers/vhost/vhost.h | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c
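A minimal sketch of such a helper, using the field names from the worker sketch above rather than the exact kernel code: it checks the work list of the worker assigned to this vq.

bool vhost_vq_has_work(struct vhost_virtqueue *vq)
{
        return !llist_empty(&vq->worker->work_list);
}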
With one worker we will always send the scsi cmd responses then send the
TMF rsp, because LIO will always complete the scsi cmds first then call
into us to send the TMF response.
With multiple workers, the IO vq workers could be running while the
TMF/ctl vq worker is, so this has us do a flush befo
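The ordering concern can be sketched as follows; vhost_vq_flush() and queue_tmf_response() are stand-in names for whatever per-vq flush and TMF-response helpers the series provides, used only to show the intent.

        /* Make sure the IO vqs' outstanding command responses have gone
         * out before the TMF response is sent on the ctl vq. */
        for (i = VHOST_SCSI_VQ_IO; i < vs->dev.nvqs; i++)
                vhost_vq_flush(&vs->vqs[i].vq);         /* stand-in helper */
        queue_tmf_response(vs, tmf);                    /* stand-in helper */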
When building under -Warray-bounds, the compiler is especially
conservative when faced with casts from a smaller object to a larger
object. While this has found many real bugs, there are some cases that
are currently false positives (like here). With this as one of the last
few instances of the war
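A small standalone example of the kind of cast that trips the warning (hypothetical types, unrelated to the driver touched by this patch): when a pointer to a smaller object is cast to a larger containing type, the compiler only sees an object of the smaller size and assumes the extra field access is out of bounds, even when the caller guarantees a larger allocation backs the pointer.

struct small { int a; };
struct large { int a; int b; };

/* Compile with -Warray-bounds to see the (possibly false-positive) warning. */
int read_b(struct small *s)
{
        return ((struct large *)s)->b;
}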