[PATCH v3 3/3] virtio-net: enable virtio desc cache

2021-10-28 Thread Xuan Zhuo
If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number of sgs used for sending packets is greater than 1, we must constantly call __kmalloc/kfree to allocate/release descs. In the case of extremely fast packet delivery, the overhead cannot be ignored: 27.46% [kernel] [k] virt

[PATCH v3 2/3] virtio: cache indirect desc for packed

2021-10-28 Thread Xuan Zhuo
In the case of using indirect, an indirect desc must be allocated and released each time, which adds a lot of CPU overhead. Here, a cache is added for indirect descs. If the number of indirect descs requested is less than desc_cache_thr, the desc array with the size of desc_cache_thr is fixed a
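
The mechanism is roughly a small per-virtqueue free list of fixed-size desc arrays, so the fast path avoids __kmalloc/kfree entirely. A minimal sketch of the idea follows; it is illustrative only (the desc_cache and desc_cache_thr fields on the virtqueue are assumed here), not the actual patch:

static struct vring_desc *desc_cache_get(struct vring_virtqueue *vq,
					 unsigned int n, gfp_t gfp)
{
	struct vring_desc *desc;

	if (n > vq->desc_cache_thr)
		return kmalloc_array(n, sizeof(*desc), gfp);

	if (vq->desc_cache) {
		/* pop a cached array; the free-list link is stored in
		 * the first desc entry of the array itself
		 */
		desc = vq->desc_cache;
		vq->desc_cache = *(struct vring_desc **)desc;
		return desc;
	}

	/* cached arrays are always desc_cache_thr entries so any of
	 * them can be reused for any request below the threshold
	 */
	return kmalloc_array(vq->desc_cache_thr, sizeof(*desc), gfp);
}

static void desc_cache_put(struct vring_virtqueue *vq,
			   struct vring_desc *desc, unsigned int n)
{
	if (n <= vq->desc_cache_thr) {
		/* push back onto the free list instead of kfree() */
		*(struct vring_desc **)desc = vq->desc_cache;
		vq->desc_cache = desc;
	} else {
		kfree(desc);
	}
}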

[PATCH v3 0/3] virtio support cache indirect desc

2021-10-28 Thread Xuan Zhuo
If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number of sgs used for sending packets is greater than 1, we must constantly call __kmalloc/kfree to allocate/release descs. In the case of extremely fast packet delivery, the overhead cannot be ignored: 27.46% [kernel] [k] virt

[PATCH v3 1/3] virtio: cache indirect desc for split

2021-10-28 Thread Xuan Zhuo
In the case of using indirect, an indirect desc must be allocated and released each time, which adds a lot of CPU overhead. Here, a cache is added for indirect descs. If the number of indirect descs requested is less than desc_cache_thr, the desc array with the size of desc_cache_thr is fixed a

Re: [PATCH 03/11] dax: simplify the dax_device <-> gendisk association

2021-10-28 Thread Ira Weiny
On Mon, Oct 18, 2021 at 06:40:46AM +0200, Christoph Hellwig wrote: > Replace the dax_host_hash with an xarray indexed by the pointer value > of the gendisk, and require explicit calls from the block drivers that > want to associate their gendisk with a dax_device. > > Signed-off-by: Christoph Hel
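
For reference, the association Christoph describes can be sketched with the xarray API like this (a sketch only; the function and variable names here are illustrative, not necessarily those used in the patch):

#include <linux/xarray.h>

static DEFINE_XARRAY(dax_hosts);	/* gendisk pointer -> dax_device */

static int dax_add_host(struct dax_device *dax_dev, struct gendisk *disk)
{
	/* key the entry by the pointer value of the gendisk */
	return xa_insert(&dax_hosts, (unsigned long)disk, dax_dev, GFP_KERNEL);
}

static void dax_remove_host(struct gendisk *disk)
{
	xa_erase(&dax_hosts, (unsigned long)disk);
}

static struct dax_device *dax_get_by_gendisk(struct gendisk *disk)
{
	return xa_load(&dax_hosts, (unsigned long)disk);
}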

Re: vDPA bus driver selection

2021-10-28 Thread Jason Wang
On Thu, Oct 28, 2021 at 5:48 PM Parav Pandit wrote: > > > > > From: Stefano Garzarella > > Sent: Thursday, October 28, 2021 3:08 PM > > > >> >$ vdpa/vdpa dev add mgmtdev vdpasim_net name vdpa0 mac > > >> >00:11:22:33:44:55 $ echo 0 > /sys/bus/vdpa/drivers_autoprobe > > >> > > > >> >And after vdpa

Re: vDPA bus driver selection

2021-10-28 Thread Jason Wang
On Thu, Oct 28, 2021 at 5:47 PM Stefano Garzarella wrote: > > On Thu, Oct 28, 2021 at 10:24:47AM +0800, Jason Wang wrote: > >On Thu, Oct 28, 2021 at 4:16 AM Michael S. Tsirkin wrote: > >> > >> On Wed, Oct 27, 2021 at 03:21:15PM +, Parav Pandit wrote: > >> > Hi Stefano, > >> > > >> > > From: S

Re: [PATCH v2 1/3] virtio: cache indirect desc for split

2021-10-28 Thread Xuan Zhuo
On Fri, 29 Oct 2021 10:20:04 +0800, Jason Wang wrote: > On Thu, Oct 28, 2021 at 6:49 PM Xuan Zhuo wrote: > > > > In the case of using indirect, indirect desc must be allocated and > > released each time, which increases a lot of cpu overhead. > > > > Here, a cache is added for indirect. If the nu

Re: [PATCH v2 1/3] virtio: cache indirect desc for split

2021-10-28 Thread Jason Wang
On Thu, Oct 28, 2021 at 6:49 PM Xuan Zhuo wrote: > > In the case of using indirect, indirect desc must be allocated and > released each time, which increases a lot of cpu overhead. > > Here, a cache is added for indirect. If the number of indirect desc to be > applied for is less than VIRT_QUEUE_C

Re: [PATCH v2 3/3] virtio-net: enable virtio indirect cache

2021-10-28 Thread kernel test robot
Hi Xuan, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on horms-ipvs/master] [also build test WARNING on linus/master v5.15-rc7] [cannot apply to mst-vhost/linux-next next-20211028] [If your patch is applied to the wrong git tree, kindly drop us a note. And when

Re: further decouple DAX from block devices

2021-10-28 Thread Stephen Rothwell
Hi Dan, On Wed, 27 Oct 2021 13:46:31 -0700 Dan Williams wrote: > > My merge resolution is here [1]. Christoph, please have a look. The > rebase and the merge result are both passing my test and I'm now going > to review the individual patches. However, while I do that and collect > acks from DM

Re: drm/virtio: not pin pages on demand

2021-10-28 Thread Chia-I Wu
On Wed, Oct 27, 2021 at 4:12 AM Gerd Hoffmann wrote: > > [ Cc'ing Gurchetan Singh ] > > > Can we follow up on this issue? > > > > The main pain point with your suggestion is the fact, > > that it will cause VirGL protocol breakage and we would > > like to avoid this. > > > > Extending execbuffer i

Re: [GIT PULL] virtio: last minute fixes

2021-10-28 Thread pr-tracker-bot
The pull request you sent on Wed, 27 Oct 2021 16:08:29 -0400: > https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/9c5456773d79b64cc6cebb06f668e29249636ba9 Thank you! -- Deet-doot-dot, I am a b

[PATCH v2 3/3] virtio-net: enable virtio indirect cache

2021-10-28 Thread Xuan Zhuo
If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number of sgs used for sending packets is greater than 1, we must constantly call __kmalloc/kfree to allocate/release descs. In the case of extremely fast packet delivery, the overhead cannot be ignored: 27.46% [kernel] [k] virt

[PATCH v2 1/3] virtio: cache indirect desc for split

2021-10-28 Thread Xuan Zhuo
In the case of using indirect, an indirect desc must be allocated and released each time, which adds a lot of CPU overhead. Here, a cache is added for indirect descs. If the number of indirect descs requested is less than VIRT_QUEUE_CACHE_DESC_NUM, the desc array with the size of VIRT_QUEUE_CAC

[PATCH v2 0/3] virtio support cache indirect desc

2021-10-28 Thread Xuan Zhuo
If the VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number of sgs used for sending packets is greater than 1, we must constantly call __kmalloc/kfree to allocate/release descs. In the case of extremely fast packet delivery, the overhead cannot be ignored: 27.46% [kernel] [k] virt

[PATCH v2 2/3] virtio: cache indirect desc for packed

2021-10-28 Thread Xuan Zhuo
In the case of using indirect, an indirect desc must be allocated and released each time, which adds a lot of CPU overhead. Here, a cache is added for indirect descs. If the number of indirect descs requested is less than VIRT_QUEUE_CACHE_DESC_NUM, the desc array with the size of VIRT_QUEUE_CAC

[PATCH v2 4/4] hwrng: virtio - always add a pending request

2021-10-28 Thread Laurent Vivier
If we ensure some data is already available by enqueuing the buffer again once its data is exhausted, we can return what we have without waiting for the device's answer. Signed-off-by: Laurent Vivier --- drivers/char/hw_random/virtio-rng.c | 26 -- 1 file changed, 12 inse
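
The idea, sketched (illustrative only, assuming the internal buffer from patch 1/4 below with its data_avail/data_idx counters; error handling omitted):

static void request_entropy(struct virtrng_info *vi)
{
	struct scatterlist sg;

	/* discard anything left and hand the whole buffer to the device */
	vi->data_avail = 0;
	vi->data_idx = 0;

	sg_init_one(&sg, vi->data, sizeof(vi->data));
	/* the device owns the buffer until the completion fires, so the
	 * next read may find it already refilled and avoid blocking
	 */
	virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
	virtqueue_kick(vi->vq);
}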

[PATCH v2 3/4] hwrng: virtio - don't waste entropy

2021-10-28 Thread Laurent Vivier
If we don't use all the entropy available in the buffer, keep it and use it later. Signed-off-by: Laurent Vivier --- drivers/char/hw_random/virtio-rng.c | 52 +++-- 1 file changed, 35 insertions(+), 17 deletions(-) diff --git a/drivers/char/hw_random/virtio-rng.c b/driv
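
A sketch of how leftovers can be kept (illustrative; data_idx tracks the read offset into the internal buffer, data_avail the bytes still valid):

static size_t copy_data(struct virtrng_info *vi, void *buf, size_t size)
{
	size = min_t(size_t, size, vi->data_avail);
	memcpy(buf, vi->data + vi->data_idx, size);
	/* remember what was consumed so the rest can be served later */
	vi->data_idx += size;
	vi->data_avail -= size;
	return size;
}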

[PATCH v2 2/4] hwrng: virtio - don't wait on cleanup

2021-10-28 Thread Laurent Vivier
When the virtio-rng device was dropped by the hwrng core, we were forced to wait for the buffer to come back from the device so that no ongoing operation could spoil the buffer. But now, as the buffer is internal to virtio-rng, we can release the waiting loop immediately; the buffer will

[PATCH v2 1/4] hwrng: virtio - add an internal buffer

2021-10-28 Thread Laurent Vivier
hwrng core uses two buffers that can be mixed in the virtio-rng queue. If a buffer is provided with wait=0 it is enqueued in the virtio-rng queue but unused by the caller. On the next call, the core provides another buffer, but the first one is filled instead and the new one queued. And the caller re
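
The internal buffer, sketched (illustrative; this is the single device-facing buffer plus the counters used in the snippets above, so caller buffers never enter the virtqueue at all):

struct virtrng_info {
	struct hwrng hwrng;
	struct virtqueue *vq;
	u8 data[256];			/* the only buffer ever queued */
	unsigned int data_avail;	/* valid bytes remaining in data[] */
	unsigned int data_idx;		/* current read offset into data[] */
	bool busy;			/* a request is pending on the vq */
};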

[PATCH v2 0/4] hwrng: virtio - add an internal buffer

2021-10-28 Thread Laurent Vivier
hwrng core uses two buffers that can be mixed in the virtio-rng queue. This series fixes the problem by adding an internal buffer in virtio-rng. Once the internal buffer is added, we can fix two other problems: - to be able to release the driver without waiting for the device to release the buffer

RE: vDPA bus driver selection

2021-10-28 Thread Parav Pandit via Virtualization
> From: Stefano Garzarella > Sent: Thursday, October 28, 2021 3:08 PM > >> >$ vdpa/vdpa dev add mgmtdev vdpasim_net name vdpa0 mac > >> >00:11:22:33:44:55 $ echo 0 > /sys/bus/vdpa/drivers_autoprobe > >> > > >> >And after vdpa device creation, it manually binds to the desired > >> >driver such

Re: vDPA bus driver selection

2021-10-28 Thread Stefano Garzarella
On Thu, Oct 28, 2021 at 10:24:47AM +0800, Jason Wang wrote: On Thu, Oct 28, 2021 at 4:16 AM Michael S. Tsirkin wrote: On Wed, Oct 27, 2021 at 03:21:15PM +, Parav Pandit wrote: > Hi Stefano, > > > From: Stefano Garzarella > > Sent: Wednesday, October 27, 2021 8:04 PM > > > > Hi folks, > >

Re: vDPA bus driver selection

2021-10-28 Thread Stefano Garzarella
On Wed, Oct 27, 2021 at 03:56:16PM +, Parav Pandit wrote: Hi Stefano, From: Stefano Garzarella Sent: Wednesday, October 27, 2021 9:17 PM To: Parav Pandit Cc: Jason Wang ; Michael Tsirkin ; Linux Virtualization ; Eli Cohen Subject: Re: vDPA bus driver selection Hi Parav, On Wed, Oct 27,

Re: vDPA bus driver selection

2021-10-28 Thread Stefano Garzarella
On Wed, Oct 27, 2021 at 02:45:15PM -0400, Michael S. Tsirkin wrote: On Wed, Oct 27, 2021 at 04:33:50PM +0200, Stefano Garzarella wrote: Hi folks, I was trying to understand if we have a way to specify which vDPA bus driver (e.g. vhost-vdpa, virtio-vdpa) a device should use. IIUC we don't have it

[PATCH 4/4] drm/qxl: use iterator instead of dma_resv_shared_list

2021-10-28 Thread Christian König
I'm not sure why it is useful to know the number of fences in the reservation object, but we try to avoid exposing the dma_resv_shared_list() function. So use the iterator instead. If more information is desired we could use dma_resv_describe() as well. Signed-off-by: Christian König --- driver
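
Counting fences with the iterator looks roughly like this (a sketch against the dma_resv iterator API of this time frame, before the later dma_resv_usage rework; the bo pointer is assumed to be a qxl buffer object):

struct dma_resv_iter cursor;
struct dma_fence *fence;
unsigned int count = 0;

dma_resv_iter_begin(&cursor, bo->tbo.base.resv, true /* all fences */);
dma_resv_for_each_fence_unlocked(&cursor, fence) {
	/* the unlocked walk may restart, so reset the count when it does */
	if (dma_resv_iter_is_restarted(&cursor))
		count = 0;
	++count;
}
dma_resv_iter_end(&cursor);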

[PATCH 3/4] drm/etnaviv: use dma_resv_describe

2021-10-28 Thread Christian König
Instead of dumping the fence info manually. Signed-off-by: Christian König Reviewed-by: Rob Clark --- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 26 +++--- 1 file changed, 7 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/et

[PATCH 2/4] drm/msm: use the new dma_resv_describe

2021-10-28 Thread Christian König
Instead of hand rolling pretty much the same code. Signed-off-by: Christian König Reviewed-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 20 +--- 1 file changed, 1 insertion(+), 19 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index

[PATCH 1/4] dma-buf: add dma_fence_describe and dma_resv_describe

2021-10-28 Thread Christian König
Add functions to dump dma_fence and dma_resv objects into a seq_file and use them for printing the debugfs informations. Signed-off-by: Christian König Reviewed-by: Rob Clark --- drivers/dma-buf/dma-buf.c | 11 +-- drivers/dma-buf/dma-fence.c | 16 drivers/dma-buf/dma
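
Usage is then a one-liner from any debugfs show callback, e.g. (a sketch; my_gem_show is hypothetical):

static int my_gem_show(struct seq_file *m, void *data)
{
	struct drm_gem_object *obj = m->private;

	seq_printf(m, "%p: size %zu\n", obj, obj->size);
	/* dumps driver/timeline/seqno/signalled state for every fence */
	dma_resv_describe(obj->resv, m);
	return 0;
}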

Re: [PATCH][next] virtio_blk: Fix spelling mistake: "advertisted" -> "advertised"

2021-10-28 Thread Stefan Hajnoczi
On Mon, Oct 25, 2021 at 11:22:40AM +0100, Colin Ian King wrote: > There is a spelling mistake in a dev_err error message. Fix it. > > Signed-off-by: Colin Ian King > --- > drivers/block/virtio_blk.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) Reviewed-by: Stefan Hajnoczi signatur

Re: [PATCH 2/3] virtio: cache indirect desc for packed

2021-10-28 Thread kernel test robot
Hi Xuan, Thank you for the patch! Yet something to improve: [auto build test ERROR on horms-ipvs/master] [also build test ERROR on linus/master v5.15-rc7] [cannot apply to mst-vhost/linux-next next-20211027] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitti

[PATCH v3 2/2] x86/xen: switch initial pvops IRQ functions to dummy ones

2021-10-28 Thread Juergen Gross via Virtualization
The initial pvops functions handling irq flags will only ever be called before interrupts are enabled. So switch them to dummy functions: - xen_save_fl() can always return 0 - xen_irq_disable() is a nop - xen_irq_enable() can BUG() Add some generic paravirt functions for that purpose. S
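
The dummies, sketched from the description above (illustrative; the real patch installs these through the pvops machinery, including the callee-save thunk for save_fl):

static unsigned long xen_save_fl_dummy(void)
{
	return 0;	/* interrupts are always off this early */
}

static void xen_irq_disable_dummy(void)
{
	/* nop: interrupts are already off */
}

static void xen_irq_enable_dummy(void)
{
	BUG();		/* enabling interrupts this early is a bug */
}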

[PATCH v3 0/2] x86/xen: simplify irq pvops

2021-10-28 Thread Juergen Gross via Virtualization
The pvops functions for Xen PV guests handling the interrupt flag are much more complex than needed. With the supported Xen hypervisor versions they can be simplified a lot, especially by removing the need for disabling preemption. Juergen Gross (2): x86/xen: remove xen_have_vcpu_info_placement