If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number
of sgs used for sending packets is greater than 1, we must constantly
call __kmalloc/kfree to allocate/release the desc.
In the case of extremely fast packet delivery, the overhead cannot be
ignored:
27.46% [kernel] [k] virt
When indirect is used, an indirect desc must be allocated and
released each time, which adds a lot of CPU overhead.
Here, a cache is added for indirect. If the number of indirect descs
requested is less than desc_cache_thr, a desc array with
the size of desc_cache_thr is fixed a
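For illustration, a minimal sketch of such a cache; desc_cache_get()/
desc_cache_put() and the embedded free list below are illustrative,
not the actual patch symbols:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/virtio_ring.h>

struct desc_cache {
        struct list_head free;  /* recycled arrays, all thr entries long */
        unsigned int thr;       /* the desc_cache_thr threshold */
};

static struct vring_desc *desc_cache_get(struct desc_cache *c,
                                         unsigned int n, gfp_t gfp)
{
        /* small requests are served from the cache; cached arrays are
         * always thr entries long so any of them fits any small request */
        if (n <= c->thr) {
                if (!list_empty(&c->free)) {
                        struct list_head *e = c->free.next;

                        list_del(e);
                        return (struct vring_desc *)e;
                }
                n = c->thr;     /* allocate the full cacheable size */
        }
        return kmalloc_array(n, sizeof(struct vring_desc), gfp);
}

static void desc_cache_put(struct desc_cache *c,
                           struct vring_desc *desc, unsigned int n)
{
        /* thread a list head through the now-unused desc memory to
         * recycle cache-sized arrays; free larger ones normally */
        if (n <= c->thr)
                list_add((struct list_head *)desc, &c->free);
        else
                kfree(desc);
}

This removes the __kmalloc/kfree pair from the hot path for small
indirect transmissions, at the cost of keeping a few threshold-sized
arrays alive per virtqueue.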
On Mon, Oct 18, 2021 at 06:40:46AM +0200, Christoph Hellwig wrote:
> Replace the dax_host_hash with an xarray indexed by the pointer value
> of the gendisk, and require explicit calls from the block drivers that
> want to associate their gendisk with a dax_device.
>
> Signed-off-by: Christoph Hel
On Thu, Oct 28, 2021 at 5:48 PM Parav Pandit wrote:
>
>
>
> > From: Stefano Garzarella
> > Sent: Thursday, October 28, 2021 3:08 PM
>
> > >> >$ vdpa/vdpa dev add mgmtdev vdpasim_net name vdpa0 mac
> > >> >00:11:22:33:44:55 $ echo 0 > /sys/bus/vdpa/drivers_autoprobe
> > >> >
> > >> >And after vdpa
On Thu, Oct 28, 2021 at 5:47 PM Stefano Garzarella wrote:
>
> On Thu, Oct 28, 2021 at 10:24:47AM +0800, Jason Wang wrote:
> >On Thu, Oct 28, 2021 at 4:16 AM Michael S. Tsirkin wrote:
> >>
> >> On Wed, Oct 27, 2021 at 03:21:15PM +0000, Parav Pandit wrote:
> >> > Hi Stefano,
> >> >
> >> > > From: S
On Fri, 29 Oct 2021 10:20:04 +0800, Jason Wang wrote:
> On Thu, Oct 28, 2021 at 6:49 PM Xuan Zhuo wrote:
> >
> > In the case of using indirect, indirect desc must be allocated and
> > released each time, which increases a lot of cpu overhead.
> >
> > Here, a cache is added for indirect. If the nu
On Thu, Oct 28, 2021 at 6:49 PM Xuan Zhuo wrote:
>
> In the case of using indirect, indirect desc must be allocated and
> released each time, which increases a lot of cpu overhead.
>
> Here, a cache is added for indirect. If the number of indirect desc to be
> applied for is less than VIRT_QUEUE_C
Hi Xuan,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on horms-ipvs/master]
[also build test WARNING on linus/master v5.15-rc7]
[cannot apply to mst-vhost/linux-next next-20211028]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when
Hi Dan,
On Wed, 27 Oct 2021 13:46:31 -0700 Dan Williams
wrote:
>
> My merge resolution is here [1]. Christoph, please have a look. The
> rebase and the merge result are both passing my test and I'm now going
> to review the individual patches. However, while I do that and collect
> acks from DM
On Wed, Oct 27, 2021 at 4:12 AM Gerd Hoffmann wrote:
>
> [ Cc'ing Gurchetan Singh ]
>
> > Can we follow up on this issue?
> >
> > The main pain point with your suggestion is the fact,
> > that it will cause VirGL protocol breakage and we would
> > like to avoid this.
> >
> > Extending execbuffer i
The pull request you sent on Wed, 27 Oct 2021 16:08:29 -0400:
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/9c5456773d79b64cc6cebb06f668e29249636ba9
Thank you!
--
Deet-doot-dot, I am a bot.
If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number
of sgs used for sending packets is greater than 1, we must constantly
call __kmalloc/kfree to allocate/release the desc.
In the case of extremely fast packet delivery, the overhead cannot be
ignored:
27.46% [kernel] [k] virt
When indirect is used, an indirect desc must be allocated and
released each time, which adds a lot of CPU overhead.
Here, a cache is added for indirect. If the number of indirect descs
requested is less than VIRT_QUEUE_CACHE_DESC_NUM, a desc array with
the size of VIRT_QUEUE_CAC
If we ensure some data is already available by enqueuing the buffer
again once its data is exhausted, we can return what we have
without waiting for the device's answer.
Signed-off-by: Laurent Vivier
---
drivers/char/hw_random/virtio-rng.c | 26 --
1 file changed, 12 inse
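For illustration, a rough sketch of that read path, assuming the
internal buffer this series introduces; all names below are
illustrative, not the actual patch symbols:

#include <linux/completion.h>
#include <linux/hw_random.h>

struct virtrng_info {
        struct hwrng hwrng;
        struct completion have_data;    /* completed by the vq callback */
        unsigned int data_avail;        /* unread bytes in the buffer */
        /* internal buffer, virtqueue, ... elided */
};

/* assumed helpers: copy out of the internal buffer (sketched with the
 * next patch) and re-queue the buffer on the device */
static size_t virtrng_copy_available(struct virtrng_info *vi,
                                     void *buf, size_t size);
static void virtrng_request_entropy(struct virtrng_info *vi);

static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
{
        struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
        size_t read;

        /* hand out whatever the device has already delivered */
        read = virtrng_copy_available(vi, buf, size);

        /* re-queue the buffer so data keeps arriving in the background */
        if (!vi->data_avail)
                virtrng_request_entropy(vi);

        /* non-blocking callers get the partial read instead of a sleep */
        if (read || !wait)
                return read;

        /* nothing at all was available: block for the next refill */
        wait_for_completion(&vi->have_data);
        return virtrng_copy_available(vi, buf, size);
}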
If we don't use all the entropy available in the buffer, keep it
and use it later.
Signed-off-by: Laurent Vivier
---
drivers/char/hw_random/virtio-rng.c | 52 +++--
1 file changed, 35 insertions(+), 17 deletions(-)
diff --git a/drivers/char/hw_random/virtio-rng.c
b/driv
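A small sketch of the bookkeeping this implies, again with
illustrative names:

#include <linux/minmax.h>
#include <linux/string.h>
#include <linux/types.h>

struct virtrng_info {
        u8 data[32];                    /* buffer the device fills */
        unsigned int data_idx;          /* next unconsumed byte */
        unsigned int data_avail;        /* bytes left from last refill */
};

static size_t virtrng_copy_available(struct virtrng_info *vi,
                                     void *buf, size_t size)
{
        size_t len = min_t(size_t, size, vi->data_avail);

        memcpy(buf, vi->data + vi->data_idx, len);
        vi->data_idx += len;
        vi->data_avail -= len;  /* the remainder is kept for later */
        return len;
}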
When the virtio-rng device was dropped by the hwrng core, we were forced
to wait for the buffer to come back from the device so that no
remaining ongoing operation could spoil the buffer.
But now, as the buffer is internal to virtio-rng, we can leave
the waiting loop immediately; the buffer will
The hwrng core uses two buffers that can be mixed in the
virtio-rng queue.
If a buffer is provided with wait=0, it is enqueued in the
virtio-rng queue but unused by the caller.
On the next call, the core provides another buffer, but the
first one is filled instead and the new one is queued.
And the caller re
The hwrng core uses two buffers that can be mixed in the virtio-rng queue.
This series fixes the problem by adding an internal buffer to virtio-rng.
Once the internal buffer is added, we can fix two other problems:
- to be able to release the driver without waiting for the device to
release the buffer
> From: Stefano Garzarella
> Sent: Thursday, October 28, 2021 3:08 PM
> >> >$ vdpa/vdpa dev add mgmtdev vdpasim_net name vdpa0 mac
> >> >00:11:22:33:44:55 $ echo 0 > /sys/bus/vdpa/drivers_autoprobe
> >> >
> >> >And after vdpa device creation, it manually binds to the desired
> >> >driver such
On Thu, Oct 28, 2021 at 10:24:47AM +0800, Jason Wang wrote:
On Thu, Oct 28, 2021 at 4:16 AM Michael S. Tsirkin wrote:
On Wed, Oct 27, 2021 at 03:21:15PM +0000, Parav Pandit wrote:
> Hi Stefano,
>
> > From: Stefano Garzarella
> > Sent: Wednesday, October 27, 2021 8:04 PM
> >
> > Hi folks,
> >
On Wed, Oct 27, 2021 at 03:56:16PM +, Parav Pandit wrote:
Hi Stefano,
From: Stefano Garzarella
Sent: Wednesday, October 27, 2021 9:17 PM
To: Parav Pandit
Cc: Jason Wang ; Michael Tsirkin ;
Linux Virtualization ; Eli Cohen
Subject: Re: vDPA bus driver selection
Hi Parav,
On Wed, Oct 27,
On Wed, Oct 27, 2021 at 02:45:15PM -0400, Michael S. Tsirkin wrote:
On Wed, Oct 27, 2021 at 04:33:50PM +0200, Stefano Garzarella wrote:
Hi folks,
I was trying to understand if we have a way to specify which vDPA bus
driver (e.g. vhost-vdpa, virtio-vdpa) a device should use.
IIUC we don't have it
I'm not sure why it is useful to know the number of fences
in the reservation object, but we try to avoid exposing the
dma_resv_shared_list() function.
So use the iterator instead. If more information is desired
we could use dma_resv_describe() as well.
Signed-off-by: Christian König
---
driver
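For reference, a sketch of the iterator-based variant; it assumes the
dma_resv_iter API this series converts drivers to (the begin()
arguments have changed in later kernels):

#include <linux/dma-resv.h>

static unsigned int count_fences(struct dma_resv *resv)
{
        struct dma_resv_iter cursor;
        struct dma_fence *fence;
        unsigned int count = 0;

        dma_resv_iter_begin(&cursor, resv, true /* all fences */);
        dma_resv_for_each_fence_unlocked(&cursor, fence)
                count++;        /* or print/inspect the fence here */
        dma_resv_iter_end(&cursor);

        return count;
}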
Instead of dumping the fence info manually.
Signed-off-by: Christian König
Reviewed-by: Rob Clark
---
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 26 +++---
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
b/drivers/gpu/drm/et
Instead of hand rolling pretty much the same code.
Signed-off-by: Christian König
Reviewed-by: Rob Clark
---
drivers/gpu/drm/msm/msm_gem.c | 20 +---
1 file changed, 1 insertion(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index
Add functions to dump dma_fence and dma_resv objects into a seq_file and
use them for printing the debugfs information.
Signed-off-by: Christian König
Reviewed-by: Rob Clark
---
drivers/dma-buf/dma-buf.c | 11 +--
drivers/dma-buf/dma-fence.c | 16
drivers/dma-buf/dma
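A short sketch of how a debugfs show callback can use the new helper;
only dma_resv_describe() itself comes from this patch, the wiring
around it is illustrative:

#include <linux/dma-resv.h>
#include <linux/seq_file.h>

static int my_resv_show(struct seq_file *m, void *unused)
{
        struct dma_resv *resv = m->private;     /* illustrative wiring */

        dma_resv_describe(resv, m);     /* one formatted line per fence */
        return 0;
}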
On Mon, Oct 25, 2021 at 11:22:40AM +0100, Colin Ian King wrote:
> There is a spelling mistake in a dev_err error message. Fix it.
>
> Signed-off-by: Colin Ian King
> ---
> drivers/block/virtio_blk.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Stefan Hajnoczi
Hi Xuan,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on horms-ipvs/master]
[also build test ERROR on linus/master v5.15-rc7]
[cannot apply to mst-vhost/linux-next next-20211027]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitti
The initial pvops functions handling irq flags will only ever be called
before interrupts are enabled.
So switch them to dummy functions:
- xen_save_fl() can always return 0
- xen_irq_disable() is a nop
- xen_irq_enable() can BUG()
Add some generic paravirt functions for that purpose.
S
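For illustration, a sketch of such dummies, mirroring the three rules
above (function names are illustrative, not the actual patch symbols):

#include <linux/bug.h>

static unsigned long xen_save_fl_dummy(void)
{
        return 0;       /* per the rule above, always 0 this early */
}

static void xen_irq_disable_dummy(void)
{
        /* a nop: interrupts are still off this early in boot */
}

static void xen_irq_enable_dummy(void)
{
        BUG();  /* must never run before the real handlers are installed */
}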
The pvops functions for Xen PV guests handling the interrupt flag are
much more complex than needed.
With the supported Xen hypervisor versions they can be simplified a
lot, especially by removing the need for disabling preemption.
Juergen Gross (2):
x86/xen: remove xen_have_vcpu_info_placement