This has vhost-scsi support the worker ioctls by calling the
vhost_worker_ioctl helper.
With a single worker, the single thread becomes a bottleneck when trying
to use 3 or more virtqueues like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
The next patches add new vhost worker ioctls which will need to get a
vhost_virtqueue from a userspace struct which specifies the vq's index.
This moves the vhost_vring_ioctl code that does this into a helper so it can
be shared.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 29
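As a sketch, the helper described above could look like this (the function
name and error code here are illustrative, not necessarily what the patch
uses):

static struct vhost_virtqueue *
vhost_get_vq_from_user(struct vhost_dev *dev, unsigned int idx)
{
        /* reject an out-of-range index from userspace */
        if (idx >= dev->nvqs)
                return ERR_PTR(-ENOBUFS);
        return dev->vqs[idx];
}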
This patch has the core work queueing function take a worker for when we
support multiple workers. It also adds a helper that takes a vq during
queueing so modules can control which vq/worker to queue work on.
This temporarily leaves vhost_work_queue. It will be removed when the
drivers are converted in later patches.
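Roughly, the split described above has this shape (a sketch based on the
existing vhost_work_queue; the vq->worker field and helper names are
assumptions):

static void vhost_worker_queue(struct vhost_worker *worker,
                               struct vhost_work *work)
{
        if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
                /* not already queued; hand it to this worker's list */
                llist_add(&work->node, &worker->work_list);
                wake_up_process(worker->task);
        }
}

void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
{
        /* modules pick the worker by picking the vq */
        vhost_worker_queue(vq->worker, work);
}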
Convert from vhost_work_queue to vhost_vq_work_queue so we can
remove vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index a77c53bb035a..166
vhost_work_queue is no longer used. Each driver is using the poll or vq
based queueing, so remove vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 6 --
drivers/vhost/vhost.h | 5 ++---
2 files changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/vho
With one worker we will always send the scsi cmd responses and then send
the TMF rsp, because LIO will always complete the scsi cmds first and then
call into us to send the TMF response.
With multiple workers, the IO vq workers could be running while the
TMF/ctl vq worker is running, so this has us do a flush before sending the
TMF response.
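Sketched out, the ordering fix looks something like this (the helper names
are assumptions, not the actual patch):

static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
{
        struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
                                                  vwork);

        /* flush so queued scsi cmd completions run first ... */
        vhost_dev_flush(&tmf->vhost->dev);
        /* ... and only then send the TMF response */
        vhost_scsi_send_tmf_resp(tmf);
}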
Convert from vhost_work_queue to vhost_vq_work_queue, so we can drop
vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6578db78f0ae..817d377a3f36 100
This patch drops the requirement that we can only switch workers if work
has not been queued, by using RCU for the vq based queueing paths and a
mutex for the device wide flush.
We can also use this to support SIGKILL properly in the future, where we
should exit almost immediately after getting that signal.
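The vq based queueing path under RCU might look like this (a sketch,
assuming vq->worker becomes an __rcu pointer):

static bool vhost_vq_work_queue(struct vhost_virtqueue *vq,
                                struct vhost_work *work)
{
        struct vhost_worker *worker;
        bool queued = false;

        rcu_read_lock();
        worker = rcu_dereference(vq->worker);
        if (worker) {
                queued = true;
                vhost_worker_queue(worker, work);
        }
        rcu_read_unlock();

        return queued;
}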
The next patch allows userspace to create multiple workers per device,
so this patch replaces the vhost_worker pointer with an xarray so we
can store multiple workers and look them up.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 64 ---
drive
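For illustration, creation and lookup with an xarray could look like this
(field and variable names are assumptions):

struct vhost_worker *worker;
u32 id;
int ret;

worker = vhost_worker_create(dev);
if (!worker)
        return -ENOMEM;

/* store the worker; the allocated id is what userspace refers to */
ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
if (ret)
        return ret;
worker->id = id;

/* later: map a userspace-supplied id back to a worker */
worker = xa_load(&dev->worker_xa, info->worker_id);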
For vhost-scsi with 3 vqs or more and a workload that tries to use
them in parallel like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
the single vhost worker thread will become a bottleneck and we are stuck
at around 500K IOPs no matter how many virtqueues are used.
This patchset allows userspace to map vqs to different workers. This
patch adds a worker pointer to the vq so in later patches in this set
we can queue/flush specific vqs and their workers.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 21 ++---
drivers/vhost/vhost.h |
This patchset allows us to allocate multiple workers, so this has us
move from the vhost_worker that's embedded in the vhost_dev to
dynamically allocating it.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 66 ---
drivers/vhost/vhost.h | 4 +--
This patch separates the scsi cmd completion code paths so we can complete
cmds based on their vq instead of having all cmds complete on the same
worker/CPU. This will be useful with the next patches that allow us to
create multiple worker threads and bind them to different vqs, and we can
have completions running on different CPUs.
In the next patches each vq might have different workers so one could
have work but others do not. For net, we only want to check specific vqs,
so this adds a helper to check if a vq has work pending and converts
vhost-net to use it.
Signed-off-by: Mike Christie
Acked-by: Jason Wang
---
drivers
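A sketch of the per-vq check (assuming the worker keeps its pending work
on an llist, as the current code does):

bool vhost_vq_has_work(struct vhost_virtqueue *vq)
{
        return !llist_empty(&vq->worker->work_list);
}
EXPORT_SYMBOL_GPL(vhost_vq_has_work);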
This patch has the core work flush function take a worker. When we
support multiple workers we can then flush each worker during device
removal, stoppage, etc. It also adds a helper to flush specific
virtqueues, so vhost-scsi can flush IO vqs from its ctl vq.
Signed-off-by: Mike Christie
---
dr
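Sketched with the existing vhost flush pattern, where a flush work plus a
completion drains the worker (names assumed):

struct vhost_flush_struct {
        struct vhost_work work;
        struct completion wait_event;
};

static void vhost_flush_work(struct vhost_work *work)
{
        struct vhost_flush_struct *s;

        s = container_of(work, struct vhost_flush_struct, work);
        complete(&s->wait_event);
}

static void vhost_worker_flush(struct vhost_worker *worker)
{
        struct vhost_flush_struct flush;

        init_completion(&flush.wait_event);
        vhost_work_init(&flush.work, vhost_flush_work);
        /* once this runs, all work queued before it has run too */
        vhost_worker_queue(worker, &flush.work);
        wait_for_completion(&flush.wait_event);
}

void vhost_vq_flush(struct vhost_virtqueue *vq)
{
        vhost_worker_flush(vq->worker);
}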
vsock can start queueing work after VHOST_VSOCK_SET_GUEST_CID, so
after we have called vhost_worker_create it can be calling
vhost_work_queue and trying to access the vhost worker/task. If
vhost_dev_alloc_iovecs fails, then vhost_worker_free could free
the worker/task from under vsock.
This moves
The following patches were built over Linus's tree. They also apply over
the mst vhost branch if you revert the previous vhost worker patchset.
The patches allow us to support multiple vhost worker tasks per device.
The design is a modified version of Stefan's original idea where userspace
has the
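From that description, userspace usage would look something like this
sketch (the ioctl and struct names follow the series and could differ in
the final version):

struct vhost_worker_state wstate = {};
struct vhost_vring_worker vw;

/* create an extra worker; the kernel fills in wstate.worker_id */
ioctl(vhost_fd, VHOST_NEW_WORKER, &wstate);

/* bind virtqueue 1 to the new worker */
vw.index = 1;
vw.worker_id = wstate.worker_id;
ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vw);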
This has the drivers pass in their poll to vq mapping and then converts
the core poll code to use the vq based helpers. In the next patches we
will allow vqs to be handled by different workers, so to allow drivers
to execute operations like queue, stop, flush, etc on specific polls/vqs
we need to know which vq each poll is tied to.
On Sun, Jun 25, 2023 at 02:30:46PM +0800, Baolu Lu wrote:
> Agreed. We should avoid workqueue in sva iopf framework. Perhaps we
> could go ahead with below code? It will be registered to device with
> iommu_register_device_fault_handler() in IOMMU_DEV_FEAT_IOPF enabling
> path. Un-registering in t
On Sun, Jun 18, 2023 at 09:24:47AM +0300, Arseniy Krasnov wrote:
Hello,
This patchset does several things around MSG_PEEK flag support. In
general, it reworks the MSG_PEEK test and adds support for this flag
in the SOCK_SEQPACKET logic. Here is a per-patch description:
1) This is a cosmetic change for
On Sun, Jun 18, 2023 at 09:24:49AM +0300, Arseniy Krasnov wrote:
This adds support of the MSG_PEEK flag for the SOCK_SEQPACKET socket type.
The difference from SOCK_STREAM is that this callback returns either the
length of the message or an error.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/virtio_transpo
On Sun, Jun 18, 2023 at 09:24:48AM +0300, Arseniy Krasnov wrote:
This reworks the current implementation of the MSG_PEEK logic:
1) Replaces 'skb_queue_walk_safe()' with 'skb_queue_walk()'. There is
no need for the first one, as no skbs are removed from the queue in the loop.
2) Removes nested while loop - MSG_PEEK
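The reworked peek loop then has roughly this shape (a sketch, not the
patch itself; the lock and queue names are taken from the virtio vsock
code):

struct sk_buff *skb;
size_t copied = 0;

spin_lock_bh(&vvs->rx_lock);
skb_queue_walk(&vvs->rx_queue, skb) {
        size_t to_copy = min_t(size_t, len - copied, skb->len);

        /* MSG_PEEK: copy without unlinking the skb */
        if (skb_copy_datagram_iter(skb, 0, &msg->msg_iter, to_copy))
                break;
        copied += to_copy;
        if (copied >= len)
                break;
}
spin_unlock_bh(&vvs->rx_lock);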
On Sat, Jun 03, 2023 at 11:49:22PM +0300, Arseniy Krasnov wrote:
Hello,
DESCRIPTION
This is MSG_ZEROCOPY feature support for virtio/vsock. I tried to follow
the current implementation for TCP as much as possible:
1) Sender must enable SO_ZEROCOPY flag to use this feature.
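For reference, the TCP-style flow being mirrored looks like this from
userspace:

int one = 1;
struct msghdr msg = {};

/* 1) opt in, as with TCP */
setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));

/* 2) transmit with the flag set */
send(fd, buf, len, MSG_ZEROCOPY);

/* 3) reap completion notifications from the error queue */
recvmsg(fd, &msg, MSG_ERRQUEUE);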
On Sat, Jun 03, 2023 at 11:49:34PM +0300, Arseniy Krasnov wrote:
Add 'msgzerocopy_allow()' callback for loopback transport.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/vsock_loopback.c | 8
1 file changed, 8 insertions(+)
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock
On Sat, Jun 03, 2023 at 11:49:33PM +0300, Arseniy Krasnov wrote:
Add 'msgzerocopy_allow()' callback for virtio transport.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/virtio_transport.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsoc
On Sat, Jun 03, 2023 at 11:49:32PM +0300, Arseniy Krasnov wrote:
Add 'msgzerocopy_allow()' callback for vhost transport.
Signed-off-by: Arseniy Krasnov
---
drivers/vhost/vsock.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index b254aa4b
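The callback itself should be trivial; as a sketch (the exact signature in
the series may differ):

static bool vhost_transport_msgzerocopy_allow(void)
{
        return true;
}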
On Sat, Jun 03, 2023 at 11:49:29PM +0300, Arseniy Krasnov wrote:
This adds handling of the MSG_ERRQUEUE input flag in the receive call. This
flag is used to read the socket's error queue instead of the data queue. A
possible scenario of error queue usage is receiving completions for
transmissions with MSG_ZEROCOPY.
On Sat, Jun 03, 2023 at 11:49:28PM +0300, Arseniy Krasnov wrote:
If the socket's error queue is not empty, EPOLLERR must be set. Otherwise,
a reader of the error queue won't detect data in it using the EPOLLERR bit.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/af_vsock.c | 2 +-
1 file changed, 1 insertio
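In poll terms the change is a single condition, roughly (the exact test in
the patch may differ):

/* in vsock_poll(): also report a non-empty error queue */
if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
        mask |= EPOLLERR;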
On Sat, Jun 03, 2023 at 11:49:27PM +0300, Arseniy Krasnov wrote:
This adds handling of the MSG_ZEROCOPY flag on the transmission path: if
this flag is set and zerocopy transmission is possible, then a non-linear
skb will be created and filled with the pages of the user's buffer. Pages
of the user's buffer are locked in memory by 'get_user_pages()'.
On Sat, Jun 03, 2023 at 11:49:26PM +0300, Arseniy Krasnov wrote:
For the tap device a new skb is created and data from the current skb is
copied to it. This adds copying of data from a non-linear skb to the new
skb.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/virtio_transport_common.c | 31 ++
On Sat, Jun 03, 2023 at 11:49:25PM +0300, Arseniy Krasnov wrote:
For a non-linear skb, use its pages from the fragment array as buffers in
the virtio tx queue. These pages are already pinned by 'get_user_pages()'
during such skb creation.
Signed-off-by: Arseniy Krasnov
---
net/vmw_vsock/virtio_transport.c
On Mon, Jun 26, 2023 at 10:03:25AM -0500, Mike Christie wrote:
> On 6/26/23 2:15 AM, Michael S. Tsirkin wrote:
> > On Mon, Jun 26, 2023 at 12:06:54AM -0700, syzbot wrote:
> >> Hello,
> >>
> >> syzbot found the following issue on:
> >>
> >> HEAD commit: 8d2be868b42c Add linux-next specific files
On Sat, Jun 03, 2023 at 11:49:24PM +0300, Arseniy Krasnov wrote:
This adds copying to the guest's virtio buffers from non-linear skbs. Such
skbs are created by the protocol layer when the MSG_ZEROCOPY flag is used.
It changes the call of 'copy_to_iter()' to 'skb_copy_datagram_iter()'. The
second function can read data from the paged part of the skb.
On Sat, Jun 03, 2023 at 11:49:23PM +0300, Arseniy Krasnov wrote:
This is a preparation patch for non-linear skbuff handling. It replaces
direct calls of 'memcpy_to_msg()' with 'skb_copy_datagram_iter()'. The main
advantage of the second one is that it can handle the paged part of the skb
by using 'kmap()'.
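The substitution described is essentially this (illustrative):

/* before: handles linear data only */
err = memcpy_to_msg(msg, skb->data, payload_len);

/* after: handles linear and paged parts of the skb */
err = skb_copy_datagram_iter(skb, 0, &msg->msg_iter, payload_len);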
On 6/26/23 2:15 AM, Michael S. Tsirkin wrote:
> On Mon, Jun 26, 2023 at 12:06:54AM -0700, syzbot wrote:
>> Hello,
>>
>> syzbot found the following issue on:
>>
>> HEAD commit: 8d2be868b42c Add linux-next specific files for 20230623
>> git tree: linux-next
>> console+strace: https://syzkall
On Fri, Jun 23, 2023 at 04:37:55AM +0000, Bobby Eshleman wrote:
On Thu, Jun 22, 2023 at 06:09:12PM +0200, Stefano Garzarella wrote:
On Sun, Jun 11, 2023 at 11:49:02PM +0300, Arseniy Krasnov wrote:
> Hello Bobby!
>
> On 10.06.2023 03:58, Bobby Eshleman wrote:
> > This commit adds support for data
On Fri, Jun 23, 2023 at 02:59:23AM +0000, Bobby Eshleman wrote:
On Fri, Jun 23, 2023 at 02:50:01AM +0000, Bobby Eshleman wrote:
On Thu, Jun 22, 2023 at 05:19:08PM +0200, Stefano Garzarella wrote:
> On Sat, Jun 10, 2023 at 12:58:30AM +0000, Bobby Eshleman wrote:
> > This patch adds support for mu
On Fri, Jun 23, 2023 at 11:14:39PM +0200, Julia Lawall wrote:
> Use array_size to protect against multiplication overflows.
>
> The changes were done using the following Coccinelle semantic patch:
>
> //
> @@
> expression E1, E2;
> constant C1, C2;
> identifier alloc = {vmalloc,vzalloc};
On Mon, Jun 19, 2023 at 11:35:50AM +0800, Baolu Lu wrote:
> > Another outstanding issue was what to do for PASID stop. When the guest
> > device driver stops using a PASID it issues a PASID stop request to the
> > device (a device-specific mechanism). If the device is not using PRI stop
> > markers
Looks good:
Reviewed-by: Christoph Hellwig
On 23.06.23 23:14, Julia Lawall wrote:
Use array_size to protect against multiplication overflows.
The changes were done using the following Coccinelle semantic patch:
//
@@
expression E1, E2;
constant C1, C2;
identifier alloc = {vmalloc,vzalloc};
@@
(
alloc(C1 * C2
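The resulting source change is of this form (example only):

-       buf = vzalloc(count * sizeof(*buf));
+       buf = vzalloc(array_size(count, sizeof(*buf)));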
On Mon, Jun 26, 2023 at 12:06:54AM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 8d2be868b42c Add linux-next specific files for 20230623
> git tree: linux-next
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=12872950a8
> kernel co