When switching from kthreads to vhost_tasks two bugs were added:
1. The vhost worker tasks now show up as processes, so scripts doing
ps or ps a would incorrectly detect the vhost task as another
process. 2. kthreads disabled freeze by setting PF_NOFREEZE, but
vhost tasks didn't disable o
Oleg Nesterov writes:
> Hi Mike,
>
> sorry, but somehow I can't understand this patch...
>
> I'll try to read it with a fresh head on Weekend, but for example,
>
> On 06/01, Mike Christie wrote:
>>
>> static int vhost_task_fn(void *data)
>> {
>> struct vhost_task *vtsk = data;
>> -int
Hi Shunsuke,
kernel test robot noticed the following build warnings:
[auto build test WARNING on mst-vhost/linux-next]
[also build test WARNING on linus/master horms-ipvs/master v6.4-rc4
next-20230602]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting
On Fri, Jun 2, 2023 at 1:59 PM Oleg Nesterov wrote:
>
> As I said from the very beginning, this code is fine on x86 because
> atomic ops are fully serialised on x86.
Yes. Other architectures require __smp_mb__{before,after}_atomic for
the bit setting ops to actually be memory barriers.
We *shoul
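As a minimal sketch of the pattern being discussed (not the vhost code): on architectures other than x86, set_bit()/clear_bit() are not ordering barriers by themselves, so the producer/consumer hand-off needs the smp_mb__before_atomic()/smp_mb__after_atomic() pair. struct example_work, EXAMPLE_WORK_QUEUED and ->fn below are made-up names.
#include <linux/bitops.h>

struct example_work {
        unsigned long flags;
        void (*fn)(struct example_work *work);
};

#define EXAMPLE_WORK_QUEUED     0

static void example_queue(struct example_work *work)
{
        /* Publish the work's payload before setting the flag: set_bit()
         * alone is not a memory barrier on all architectures. */
        smp_mb__before_atomic();
        set_bit(EXAMPLE_WORK_QUEUED, &work->flags);
}

static void example_run(struct example_work *work)
{
        clear_bit(EXAMPLE_WORK_QUEUED, &work->flags);
        /* Order the flag clearing before reading the payload written
         * by the producer. */
        smp_mb__after_atomic();
        work->fn(work);
}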
Hi Mike,
sorry, but somehow I can't understand this patch...
I'll try to read it with a fresh head on Weekend, but for example,
On 06/01, Mike Christie wrote:
>
> static int vhost_task_fn(void *data)
> {
> struct vhost_task *vtsk = data;
> - int ret;
> + bool dead = false;
> +
>
Hi Shunsuke,
kernel test robot noticed the following build errors:
[auto build test ERROR on mst-vhost/linux-next]
[also build test ERROR on linus/master horms-ipvs/master v6.4-rc4 next-20230602]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch
On 06/02, Jason Wang wrote:
>
> On Thu, Jun 1, 2023 at 3:43 PM Oleg Nesterov wrote:
> >
> > and the final rewrite:
> >
> > if (work->node) {
> > work_next = work->node->next;
> > if (true)
> > clear_bit(&work->flags);
> > }
>
Hi Shunsuke,
kernel test robot noticed the following build warnings:
[auto build test WARNING on mst-vhost/linux-next]
[also build test WARNING on linus/master horms-ipvs/master v6.4-rc4
next-20230602]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting
On Fri, 2023-06-02 at 08:22 -0400, Michael S. Tsirkin wrote:
> On Tue, May 30, 2023 at 10:19:18AM +, Dragos Tatulea wrote:
> > On Tue, 2023-05-23 at 07:45 -0400, Michael S. Tsirkin wrote:
> > > On Tue, May 23, 2023 at 08:38:47AM +, Dragos Tatulea wrote:
> > > > On Tue, 2023-05-23 at 04:18 -
When the Virtio queue is full, a work item is scheduled to run 1ms
later to retry adding the request to the queue.
This is a large amount of time on the scale on which a
virtio-fs device can operate. When using a DPU this is around
40us baseline without going to a remote server (4k, QD=1).
Thi
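A hedged sketch of the retry path being criticized, loosely following the names in fs/fuse/virtio_fs.c (fsvq, queued_reqs, dispatch_work); the function itself is an illustration, not the actual driver code.
static void example_send_req(struct virtio_fs_vq *fsvq, struct fuse_req *req,
                             struct scatterlist *sgs[],
                             unsigned int out_sgs, unsigned int in_sgs)
{
        int ret;

        spin_lock(&fsvq->lock);
        ret = virtqueue_add_sgs(fsvq->vq, sgs, out_sgs, in_sgs, req, GFP_ATOMIC);
        if (ret == -ENOSPC) {
                /* Ring is full: park the request and retry ~1ms later. */
                list_add_tail(&req->list, &fsvq->queued_reqs);
                schedule_delayed_work(&fsvq->dispatch_work,
                                      msecs_to_jiffies(1));
        } else if (!ret) {
                virtqueue_kick(fsvq->vq);
        }
        spin_unlock(&fsvq->lock);
}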
On 01/06/2023 20:45, Vivek Goyal wrote:
> On Thu, Jun 01, 2023 at 10:08:50AM -0400, Stefan Hajnoczi wrote:
>> On Wed, May 31, 2023 at 04:49:39PM -0400, Vivek Goyal wrote:
>>> On Wed, May 31, 2023 at 10:34:15PM +0200, Peter-Jan Gootzen wrote:
On 31/05/2023 21:18, Vivek Goyal wrote:
> On Wed
On Tue, May 30, 2023 at 10:19:18AM +, Dragos Tatulea wrote:
> On Tue, 2023-05-23 at 07:45 -0400, Michael S. Tsirkin wrote:
> > On Tue, May 23, 2023 at 08:38:47AM +, Dragos Tatulea wrote:
> > > On Tue, 2023-05-23 at 04:18 -0400, Michael S. Tsirkin wrote:
> > > > On Tue, May 23, 2023 at 07:16
On Fri, May 12, 2023 at 04:55:38PM -0700, Shannon Nelson wrote:
> On 5/12/23 6:30 AM, Michael S. Tsirkin wrote:
> >
> > On Fri, May 12, 2023 at 12:51:21PM +, Dragos Tatulea wrote:
> > > On Thu, 2023-05-04 at 14:51 -0400, Michael S. Tsirkin wrote:
> > > > On Thu, May 04, 2023 at 01:08:54PM -040
On Tue, Feb 07, 2023 at 08:08:43PM +0800, Nanyong Sun wrote:
> From: Rong Wang
>
> Once an iommu domain is enabled for a device, the MSI
> translation tables have to be there for software-managed MSI.
> Otherwise, a platform with software-managed MSI but without an
> irq bypass function cannot get a corre
On Mon, May 29, 2023 at 09:35:08AM +0200, Christophe JAILLET wrote:
> 'inq_result' is known to be NULL. There is no point calling kfree().
>
> Signed-off-by: Christophe JAILLET
Acked-by: Michael S. Tsirkin
> ---
> drivers/scsi/virtio_scsi.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deleti
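The cleanup being acked is presumably of this shape; the surrounding functions and the placeholder comment below are illustrative, not the exact virtio_scsi hunk.
/* Before: kfree(NULL) is a legal no-op, so the call adds nothing. */
static void example_before(unsigned char *inq_result)
{
        if (!inq_result) {
                kfree(inq_result);
                return;
        }
        /* ... use inq_result ... */
}

/* After: */
static void example_after(unsigned char *inq_result)
{
        if (!inq_result)
                return;
        /* ... use inq_result ... */
}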
Hi Mike,
On Fri, Apr 28, 2023 at 11:35:20AM -0500, michael.chris...@oracle.com wrote:
> The following patches were built over Linux's tree. They allow us to
> support multiple vhost workers tasks per device. The design is a modified
> version of Stefan's original idea where userspace has the kerne
On Fri, Jun 02, 2023 at 05:22:02PM +0800, Xuan Zhuo wrote:
> Under the premapped mode, the driver needs to unmap the DMA address
> after receiving the buffer. The virtio core records the DMA address,
> so the driver needs a way to get the dma info from the virtio core.
>
> A straightforward approa
On Thu, May 18, 2023 at 09:34:25AM +0200, Stefano Garzarella wrote:
> I think we should do one of these things, though:
> - mask VIRTIO_F_RING_PACKED in the stable kernels when
> VHOST_GET_FEATURES is called
> - backport this patch on all stable kernels that support vhost-vdpa
>
> Maybe the last
On Mon, May 01, 2023 at 11:59:42AM +, Alvaro Karsz wrote:
> > First up to 4k should not be a problem. Even jumbo frames e.g. 9k
> > is highly likely to succeed. And at probe time, which is often boot,
> > even 64k isn't a problem ...
> >
> > Hmm. We could allocate large buffers at probe time. Reu
On Mon, May 01, 2023 at 11:41:44AM +, Alvaro Karsz wrote:
> > > > Why the difference?
> > > >
> > >
> > > Because the RING_SIZE < 4 case requires many more adjustments.
> > >
> > > * We may need to squeeze the virtio header into the headroom.
> > > * We may need to squeeze the GSO header into t
On Wed, Feb 15, 2023 at 03:33:49PM -0700, Ross Zwisler wrote:
> The canonical location for the tracefs filesystem is at /sys/kernel/tracing.
>
> But, from Documentation/trace/ftrace.rst:
>
> Before 4.1, all ftrace tracing control files were within the debugfs
> file system, which is typically
On Fri, Jun 02, 2023 at 05:56:12PM +0800, kernel test robot wrote:
> Hi Shunsuke,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on mst-vhost/linux-next]
> [also build test WARNING on linus/master horms-ipvs/master v6.4-rc4
> next
Hi Shunsuke,
kernel test robot noticed the following build warnings:
[auto build test WARNING on mst-vhost/linux-next]
[also build test WARNING on linus/master horms-ipvs/master v6.4-rc4
next-20230602]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting
Added virtqueue_dma_dev() to get the DMA device for virtio. The
caller can then do DMA operations in advance. The purpose is to keep
memory mapped across multiple add/get buf operations.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 17 +
include/linux/virtio.h |
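A hedged usage sketch of the helper described above: map a buffer once against the device returned by virtqueue_dma_dev() and reuse that mapping across add/get cycles. example_map_once() is a made-up name; only virtqueue_dma_dev() comes from the patch.
#include <linux/dma-mapping.h>
#include <linux/virtio.h>

static dma_addr_t example_map_once(struct virtqueue *vq, void *buf, size_t len)
{
        struct device *dma_dev = virtqueue_dma_dev(vq);
        dma_addr_t addr;

        addr = dma_map_single(dma_dev, buf, len, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dma_dev, addr))
                return DMA_MAPPING_ERROR;

        /* The mapping can now stay alive across multiple add/get buf ops. */
        return addr;
}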
Introduce the module param "experiment_premapped" to let virtio-net do
the DMA mapping itself.
If it is true, the vq of virtio-net is in premapped mode: the virtio
core just handles sgs that already carry a dma_address, and the driver
must get the DMA address of the buffer to unmap it after getting the buffe
Under the premapped mode, the driver needs to unmap the DMA address
after receiving the buffer. The virtio core records the DMA address,
so the driver needs a way to get the dma info from the virtio core.
A straightforward approach is to pass an array to the virtio core when
calling virtqueue_get_
Introduce virtqueue_add_sg(), so that in virtio-net we can create a
unified API for the rq and sq.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 23 +++
include/linux/virtio.h | 4
2 files changed, 27 insertions(+)
diff --git a/drivers/virtio/virtio_rin
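The real signature is in the truncated diff above; as an illustration of the idea of one add API serving both rq and sq, here is a sketch built on the existing virtqueue_add_sgs(), with a made-up wrapper name, not the actual patch.
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int example_add_sg(struct virtqueue *vq, struct scatterlist *sg,
                          bool out, void *data, gfp_t gfp)
{
        struct scatterlist *sgs[1] = { sg };

        /* One sg, used as either an out (tx) or in (rx) buffer. */
        return virtqueue_add_sgs(vq, sgs, out ? 1 : 0, out ? 0 : 1, data, gfp);
}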
Under the premapped mode, the driver needs to unmap the DMA address
after receiving the buffer. The virtio core records the DMA address,
so the driver needs a way to get the dma info from the virtio core.
A straightforward approach is to pass an array to the virtio core when
calling virtqueue_get_
This patch introduces three helpers for premapped mode.
* virtqueue_get_buf_premapped
* virtqueue_detach_unused_buf_premapped
The above helpers work like the non-premapped funcs, but a cursor is
passed.
virtqueue_detach is used to get the dma info of the last buf via the
cursor.
Signed-off-by: Xua
If the vq is in premapped mode, use sg_dma_address() directly.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 36 ++--
1 file changed, 26 insertions(+), 10 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
ind
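Driver-side view, as a hedged sketch of what "use sg_dma_address() directly" implies: the driver maps the buffer itself and hands in an sg whose dma_address is already filled, and the core uses that instead of mapping. It assumes the vq was already switched to premapped mode; example_premapped_add() is a made-up name.
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int example_premapped_add(struct virtqueue *vq, struct device *dma_dev,
                                 void *buf, unsigned int len, void *data)
{
        struct scatterlist sg;
        dma_addr_t addr;

        addr = dma_map_single(dma_dev, buf, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dma_dev, addr))
                return -ENOMEM;

        sg_init_table(&sg, 1);
        sg_dma_address(&sg) = addr;     /* core reads this in premapped mode */
        sg.length = len;

        return virtqueue_add_inbuf(vq, &sg, 1, data, GFP_ATOMIC);
}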
## About DMA APIs
Now, virtio may not work with the DMA APIs when the virtio features do not
have VIRTIO_F_ACCESS_PLATFORM.
1. I tried to let the DMA APIs return the physical address for the virtio
device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the phy address fr
This patch puts the DMA addr error check in vring_map_one_sg().
The benefits of doing this:
1. It removes one vq->use_dma_api check.
2. It makes vring_map_one_sg() simpler, without calling
vring_mapping_error() to check the return value, which simplifies
subsequent code.
Signed-off-by: Xuan Zhuo
--
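The described shape, as a sketch: the mapping helper returns an error code and hands the address back through a pointer, so callers no longer test use_dma_api or call vring_mapping_error() themselves. vring_dma_dev(), use_dma_api and struct vring_virtqueue are internal names from drivers/virtio/virtio_ring.c; the body below is an assumption about the patch, not the exact hunk.
static int vring_map_one_sg(const struct vring_virtqueue *vq,
                            struct scatterlist *sg,
                            enum dma_data_direction direction,
                            dma_addr_t *addr)
{
        if (!vq->use_dma_api) {
                *addr = (dma_addr_t)sg_phys(sg);
                return 0;
        }

        *addr = dma_map_page(vring_dma_dev(vq), sg_page(sg), sg->offset,
                             sg->length, direction);
        if (dma_mapping_error(vring_dma_dev(vq), *addr))
                return -ENOMEM;

        return 0;
}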
If the vq is in premapped mode, use sg_dma_address() directly.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 46 ++--
1 file changed, 28 insertions(+), 18 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
ind
This helper allows the driver to switch the DMA mode to premapped mode.
In premapped mode, the virtio core does not do DMA mapping
internally.
This only works when use_dma_api is true. If use_dma_api is false,
the DMA operations do not go through the DMA APIs, which is not the
standard way of th
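Caller-side sketch of how such a helper would be used at probe time. The helper name virtqueue_set_dma_premapped() and the error-when-!use_dma_api behaviour are assumptions based on the description above, not necessarily what this series names it.
static int example_enable_premapped(struct virtqueue *vq)
{
        int err;

        /* Must be called before any buffers are added to the vq. */
        err = virtqueue_set_dma_premapped(vq);
        if (err) {
                /* use_dma_api is false, so the driver has to fall back
                 * to the normal (core-mapped) mode. */
                return err;
        }

        /* From here on, the driver does all DMA mapping/unmapping itself. */
        return 0;
}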