number of virtqueues [Stefan]
Patch 6:
- init s->num_queues [Stefano]
- free s->dev.vqs [Stefano]
Longpeng (Mike) (5):
virtio: get class_id and pci device id by the virtio id
vdpa: add vdpa-dev support
vdpa: add vdpa-dev-pci support
vdpa-dev: mark the device as unmigratable
From: Longpeng
The generic vDPA device doesn't support migration currently, so
mark it as unmigratable temporarily.
Reviewed-by: Stefano Garzarella
Acked-by: Jason Wang
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/virtio/vdpa-dev.c b
From: Longpeng
The kvm_irqchip_commit_routes() is a time-intensive operation; it needs to
scan and update all irqfds that are already assigned during each invocation,
so more vectors mean more time to process them. For virtio-pci, we
can just submit once when enabling vectors of a virtio-pci d
From: Longpeng
This patchset optimizes the time-consuming operations in
virtio_pci_set_guest_notifier; especially for vhost-vdpa migration,
the time spent in set_guest_notifier can be reduced by 87% in some cases.
Longpeng (Mike) (3):
virtio-pci: submit msi route changes in batch
kvm-irqchip
From: Longpeng
The KVMRouteChange API was added by commit 9568690868e ("kvm-irqchip:
introduce new API to support route change"). We can also apply it to
kvm_irqchip_update_msi_route(); there are no functional changes, and
we can optimize the virtio-pci core based on this change in the next
patch.
From: Longpeng
All unmasked vectors will be set up in msix_set_vector_notifiers(), which
is a time-consuming operation because each vector needs to be submitted to
KVM once. It's even worse if the VM has several devices and each device
has dozens of vectors.
We can defer and commit the vectors in ba
From: Longpeng
When updating ioeventfds, we need to iterate over all address spaces and
over all flat ranges of each address space. There is much redundant
processing: a FlatView may be iterated many times during one commit
(memory_region_transaction_commit).
We can mark a FlatView as
From: Longpeng
Supports vdpa-dev; we can use the device directly:
-M microvm -m 512m -smp 2 -kernel ... -initrd ... -device \
vhost-vdpa-device,vhostdev=/dev/vhost-vdpa-x
Reviewed-by: Stefano Garzarella
Acked-by: Jason Wang
Signed-off-by: Longpeng
---
hw/virtio/Kconfig| 5 +
h
to make the code clearer [Stefan]
- fix the misleading description of 'dc->desc' [Stefano]
Patch 5:
- check returned number of virtqueues [Stefan]
Patch 6:
- init s->num_queues [Stefano]
- free s->dev.vqs [Stefano]
Longpeng (Mike) (5):
virtio: get class_id an
From: Longpeng
Signed-off-by: Longpeng
---
.../devices/vhost-vdpa-generic-device.rst | 68 +++
1 file changed, 68 insertions(+)
create mode 100644 docs/system/devices/vhost-vdpa-generic-device.rst
diff --git a/docs/system/devices/vhost-vdpa-generic-device.rst
b/docs/syste
From: Longpeng
The generic vDPA device doesn't support migration currently, so
mark it as unmigratable temporarily.
Reviewed-by: Stefano Garzarella
Acked-by: Jason Wang
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/virtio/vdpa-dev.c b
From: Longpeng
Add helpers to get the "Transitional PCI Device ID" and "class_id"
of the device specified by the "Virtio Device ID".
These helpers will be used to build the generic vDPA device later.
Acked-by: Jason Wang
Signed-off-by: Longpeng
---
hw/virtio/virtio-pci.c | 88 +++
From: Longpeng
Supports vdpa-dev-pci; we can use the device as follows:
-device vhost-vdpa-device-pci,vhostdev=/dev/vhost-vdpa-X
Reviewed-by: Stefano Garzarella
Acked-by: Jason Wang
Signed-off-by: Longpeng
---
hw/virtio/meson.build| 1 +
hw/virtio/vdpa-dev-pci.c | 102 +
f vdpa/net.
Longpeng (Mike) (2):
vdpa-dev: get iova range explicitly
vdpa: harden the error path if get_iova_range failed
hw/virtio/vdpa-dev.c | 9 +
hw/virtio/vhost-vdpa.c | 7 +++
include/hw/virtio/vhost-vdpa.h | 2 ++
net/vhost-vdpa.c
From: Longpeng
We should stop if the GET_IOVA_RANGE ioctl fails.
Signed-off-by: Longpeng
---
net/vhost-vdpa.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index ffdc435d19..e65023d013 100644
--- a/net/vhost-vdpa.c
+++ b/net/vho
From: Longpeng
In commit a585fad26b ("vdpa: request iova_range only once") we removed
GET_IOVA_RANGE from vhost_vdpa_init, so the generic vdpa device will start
without iova_range populated and the device won't work. Let's call the
GET_IOVA_RANGE ioctl explicitly.
Fixes: a585fad26b2e6ccc ("vdpa: request
From: Longpeng
Simplify the error path in vhost_dev_enable_notifiers by using
vhost_dev_disable_notifiers directly.
Signed-off-by: Longpeng
---
hw/virtio/vhost.c | 20
1 file changed, 4 insertions(+), 16 deletions(-)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
inde
From: Longpeng
Changes v3->v2:
- cleanup the code [Philippe]
Changes v2->v1:
Patch-1:
- remove vq_init_count [Jason]
Patch-2:
- new added. [Jason]
v1: https://www.mail-archive.com/qemu-devel@nongnu.org/msg922499.html
Longpeng (Mike) (3):
vhost: simplify vhost_dev_enable_not
From: Longpeng
This allows the vhost device to batch the setup of all its host notifiers.
This significantly reduces the device starting time, e.g. the time spent
enabling notifiers drops from 376ms to 9.1ms for a VM with 64 vCPUs
and 3 vhost-vDPA generic devices (vdpa_sim_blk, 64vq per devic
From: Longpeng
This allows the vhost-vdpa device to batch the setup of all its MRs of
host notifiers.
This significantly reduces the device starting time, e.g. the time spent
setting up the host notifier MRs drops from 423ms to 32ms for a VM with
64 vCPUs and 3 vhost-vDPA generic devices (vdpa_si
On 8/29/2024 7:08 AM, Cédric Le Goater wrote:
On 8/1/24 22:30, Michael Kowal wrote:
From: Glenn Miles
Adds support for single byte writes to offset 0xC38 of the TIMA address
space. When this offset is written to, the hardware disables the thread
context and copies the current state informat
On 8/29/2024 7:14 AM, Cédric Le Goater wrote:
On 8/1/24 22:30, Michael Kowal wrote:
From: Glenn Miles
Hypervisor "pool" targets do not get their own interrupt line and
instead
must share an interrupt line with the hypervisor "physical" targets.
This also means that the pool ring must use so
On 8/29/2024 7:29 AM, Cédric Le Goater wrote:
On 8/1/24 22:30, Michael Kowal wrote:
From: Glenn Miles
Current code was updating the PIPR inside the xive_tctx_accept()
function
instead of the xive_tctx_set_cppr function, which is where the HW would
have it updated.
Did you confirm with th
On 8/30/2024 3:25 AM, Cédric Le Goater wrote:
On 8/29/24 22:35, Mike Kowal wrote:
On 8/29/2024 7:29 AM, Cédric Le Goater wrote:
On 8/1/24 22:30, Michael Kowal wrote:
From: Glenn Miles
Current code was updating the PIPR inside the xive_tctx_accept()
function
instead of the
On 9/12/2024 1:27 AM, Cédric Le Goater wrote:
On 9/9/24 23:10, Michael Kowal wrote:
Some of the functions that have been created are specific to a ring or
context. Some of these same functions are being changed to operate on
any ring/context. This will simplify the next patch sets that are addi
On 9/13/2024 8:10 AM, Cédric Le Goater wrote:
On 9/12/24 22:50, Michael Kowal wrote:
Some of the functions that have been created are specific to a ring or
context. Some of these same functions are being changed to operate on
any ring/context. This will simplify the next patch sets that are add
From: Longpeng
Implements the .instance_init and the .class_init interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev-pci.c | 22 +++-
hw/virtio/vdpa-dev.c | 69 ++--
include/hw/virtio/vdpa-dev.h | 3 ++
3 files changed, 91 insertions(+
From: Longpeng
Implements the .get_config and .set_config interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 1713818bc3..f28d3ed5f9 100644
--- a/hw/vir
From: Longpeng
Add helpers to get the "Transitional PCI Device ID" and "class_id"
of the device specified by the "Virtio Device ID".
These helpers will be used to build the generic vDPA device later.
Signed-off-by: Longpeng
---
hw/virtio/virtio-pci.c | 77 +
From: Longpeng
Add the infrastructure of vdpa-dev (the generic vDPA device); we
can add a generic vDPA device as follows:
-device vhost-vdpa-device-pci,vdpa-dev=/dev/vhost-vdpa-X
Signed-off-by: Longpeng
---
hw/virtio/Kconfig| 5 +++
hw/virtio/meson.build| 2 ++
hw/virtio
From: Longpeng
Update linux headers to 5.xxx (kernel part is not merged yet).
To support the generic vdpa device, we need to add the following ioctls:
- VHOST_VDPA_GET_CONFIG_SIZE: get the configuration size.
- VHOST_VDPA_GET_VQS_COUNT: get the count of supported virtqueues.
Signed-off-by: Longpeng
--
From: Longpeng
The generic vDPA device doesn't support migration currently, so
mark it as unmigratable temporarily.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index c6847df7aa..5224617574 1006
From: Longpeng
Implements the .get_features interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index f28d3ed5f9..9536982061 100644
--- a/hw/virtio/vdpa-dev.c
+++ b/h
From: Longpeng
Implements the .set_status interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 100 ++-
1 file changed, 99 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 9536982061..c6847df7aa 100644
de clearer [Stefan]
- fix the misleading description of 'dc->desc' [Stefano]
Patch 5:
- check returned number of virtqueues [Stefan]
Patch 6:
- init s->num_queues [Stefano]
- free s->dev.vqs [Stefano]
Longpeng (Mike) (10):
virtio: get class_id and pci d
From: Longpeng
Implements the .realize interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev-pci.c | 18 -
hw/virtio/vdpa-dev.c | 132 +++
include/hw/virtio/vdpa-dev.h | 10 +++
3 files changed, 159 insertions(+), 1 deletion(-)
diff --git
From: Longpeng
Implements the .unrealize interface.
Signed-off-by: Longpeng
---
hw/virtio/vdpa-dev.c | 18 +-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 4defe6c33d..1713818bc3 100644
--- a/hw/virtio/vdpa-dev.c
s that already assigned and need to process in this
round.
The optimization can be applied to msi type too.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 130 +-
hw/vfio/pci.h | 2 +
2 files changed, 99 insertions(+), 33 deletions(-)
diff --git a/h
Move re-enabling INTx out, and let the callers decide whether to
re-enable it.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 1433989aeb..a1916e2e36 100644
--- a/hw/vfio
nd grammatical errors [Alex, Philippe]
- split fixups and cleanups into separate patches [Alex, Philippe]
- introduce kvm_irqchip_add_deferred_msi_route to
minimize code changes[Alex]
- enable the optimization in msi setup path[Alex]
Longpeng (Mike) (5):
vfio: simplify the conditional s
It's unnecessary to test against the specific return value of
VFIO_DEVICE_SET_IRQS, since any positive return is an error
indicating the number of vectors we should retry with.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
Use vfio_msi_disable_common to simplify the error handling
in vfio_msi_enable.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 16 ++--
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index a1916e2e36..6f49e71cd4 100644
--- a/hw/vfio
ned-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 20 +++-
1 file changed, 3 insertions(+), 17 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 6f49e71cd4..6801391cf6 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -572,9 +572,6 @@ static void vfio_msix_vector_release
ned-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 20 +++-
1 file changed, 3 insertions(+), 17 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index aeeb6cd..0bd832b 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -569,9 +569,6 @@ static void vfio_msix_vector_release(PCIDe
Move re-enabling INTx out, and let the callers decide whether to
re-enable it.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index c1fba40..a4985c0 100644
--- a/hw/vfio/pci.c
It's unnecessary to test against the specific return value of
VFIO_DEVICE_SET_IRQS, since any positive return is an error
indicating the number of vectors we should retry with.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
Extract a common helper that adds an MSI route for a specific vector
but does not commit immediately.
Signed-off-by: Longpeng(Mike)
---
accel/kvm/kvm-all.c | 15 +--
include/sysemu/kvm.h | 6 ++
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/accel/kvm/kvm-all.c b
s that already assigned and need to process in this
round.
The optimization can be applied to msi type too.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 129 ++
hw/vfio/pci.h | 1 +
2 files changed, 105 insertions(+), 25 deletions(-)
diff -
ate patches [Alex, Philippe]
- introduce kvm_irqchip_add_deferred_msi_route to
minimize code changes[Alex]
- enable the optimization in msi setup path[Alex]
Longpeng (Mike) (6):
vfio: simplify the conditional statements in vfio_msi_enable
vfio: move re-enabling INTX out of the common h
Use vfio_msi_disable_common to simplify the error handling
in vfio_msi_enable.
Signed-off-by: Longpeng(Mike)
---
hw/vfio/pci.c | 16 ++--
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index a4985c0..aeeb6cd 100644
--- a/hw/vfio/pci.c
? Such as:
* Lightweight
Net: vhost-vdpa-net
Storage: vhost-vdpa-blk
* Heavy but more powerful
Net: netdev + virtio-net + vhost-vdpa
Storage: bdrv + virtio-blk + vhost-vdpa
[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html
Signed-off-by: Longpeng(Mike)
---
hw/net/m
From: Longpeng
This patchset moves the call to kvm_irqchip_commit_routes() out of
kvm_irqchip_add_msi_route(). An optimization of vfio migration [1]
depends on this change.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00968.html
Longpeng (Mike) (2):
kvm-irqchip: introduce
From: Longpeng
As suggested by Paolo [1], add the new API to support route changes.
We should invoke kvm_irqchip_begin_route_changes() before changing the
routes, increase the KVMRouteChange.changes if the routes are changed,
and commit the changes at the end.
[1] https://lists.gnu.org/archive/html
From: Longpeng
We invoke the commit operation for each addition to the MSI route table;
this is not efficient if we are adding lots of routes in some cases (e.g.
the resume phase of vfio migration [1]).
This patch moves the call to kvm_irqchip_commit_routes() to the callers,
so the callers can decide ho
image
are preserved. Another visible change in `qemu-img dd` behavior is that
if the destination image is smaller than the source, it can finish with an
error (similar to the "dd" utility):
qemu-img: error while writing to output image file: Input/output error
Signed-off-by: Mike Maslenkin
image
are preserved. Another visible change in `qemu-img dd` behavior is that
if the destination image is smaller than the source, it can finish with an
error (similar to the "dd" utility):
qemu-img: error while writing to output image file: Input/output error
Signed-off-by: Mike Maslenkin
---
d
be improved to something like
if (strcmp(fmt, "raw") || !g_file_test(out.filename,
G_FILE_TEST_EXISTS)). And the parameter "conv=notrunc" may be
implemented additionally for this case.
Three of the above do not require the "conv=" parameter from my point of view.
I would be glad t
"bytes" variable as int64_t
and then shift it to the right? I see here it can not be negative,
but it's common to use signed values and not to add an explicit check
before shifting right in this file.
It takes time to ensure that the initial values are not negative.
Regards,
Mike.
aph-lock.h:85:26: note:
expanded from macro 'GRAPH_RDLOCK_PTR'
#define GRAPH_RDLOCK_PTR TSA_GUARDED_BY(graph_lock)
^
/Users/mg/sources/qemu/include/qemu/clang-tsa.h:48:31: note: expanded
from macro 'TSA_GUARDED_BY'
#define TSA_GUARDED_BY(x) TSA(guarded_b
ata_start;
^~
../block/parallels.c:1139:18: note: remove the '||' if its condition
is always false
need_check = need_check ||
^
../block/parallels.c:1067:24: note: initialize the variable
'data_start' to silence this warning
uint32_t data_start;
^
= 0
1 warning generated.
Regards,
Mike.
ocMode prealloc_mode;
>
> --
> 2.34.1
>
Is it intended behavior?
Run:
1. ./qemu-img create -f parallels $TEST_IMG 1T
2. dd if=/dev/zero of=$TEST_IMG oseek=12 bs=1M count=128 conv=notrunc
3. ./qemu-img check $TEST_IMG
No errors were found on the image.
Image end offset: 150994944
Without this patch `qemu-img check` reports:
ERROR space leaked at the end of the image 145752064
139 leaked clusters were found on the image.
This means wasted disk space, but no harm to data.
Image end offset: 5242880
Note: there is another issue caused by previous commits.
g_free() asserts in parallels_free_used_bitmap() because
s->used_bmap is NULL.
To reproduce this crash at revision before or without patch 15/19, run commands:
1. ./qemu-img create -f parallels $TEST_IMG 1T
2. dd if=/dev/zero of=$TEST_IMG oseek=12 bs=1M count=128 conv=notrunc
3. ./qemu-img check -r leaks $TEST_IMG
Regards,
Mike.
On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
wrote:
>
>
>
> On 10/6/23 21:43, Mike Maslenkin wrote:
> > On Mon, Oct 2, 2023 at 12:01 PM Alexander Ivanov
> > wrote:
> >> Since we have used bitmap, field data_end in BDRVParallelsState is
> >> r
On Sat, Oct 7, 2023 at 5:30 PM Alexander Ivanov
wrote:
>
>
>
> On 10/7/23 13:21, Mike Maslenkin wrote:
> > On Sat, Oct 7, 2023 at 1:18 PM Alexander Ivanov
> > wrote:
> >>
> >> On 10/6/23 21:43, Mike Maslenkin wrote:
> >>> On Mon,
, but NOT the arrays of
_fruid data array)
Thanks,
Mike
From: Cédric Le Goater
Date: Tuesday, July 4, 2023 at 7:07 AM
To: Sittisak Sinprem , Bin Huang ,
Tao Ren , Mike Choi
Cc: qemu-devel@nongnu.org , qemu-...@nongnu.org
, peter.mayd...@linaro.org ,
and...@aj.id.au , Joel Stanley ,
qemu-sta
What was the issue you are seeing?
Was it something like: you get the UA, we retry, then on one of the
retries the sense is not set up correctly, so the scsi error handler
runs? That fails and the device goes offline?
If you turn on scsi debugging you would see:
[ 335.445922] sd 0:0:0:0: [sda] ta
ers.
Your proposal here suggests modifying hugetlb so that it can be used in
a new way (use case) by KVM's guest_mem. As such it really seems like
something that should be done in a separate filesystem/driver/allocator.
You will likely not get much support for modifying hugetlb.
--
Mike Krav
te hugetlb pages. This will require different alignment
and size requirements on the UDMABUF_CREATE API.
[1]
https://lore.kernel.org/linux-mm/20230512072036.1027784-1-junxiao.ch...@intel.com/
Fixes: 16c243e99d33 ("udmabuf: Add support for mapping hugepages (v4)")
Cc:
Signed-off-by: Mike K
On Thu, Oct 19, 2023 at 4:05 PM Alexander Ivanov
wrote:
>
> Now dirty bitmaps can be loaded but there is no their saving. Add code for
> dirty bitmap storage.
>
> Signed-off-by: Alexander Ivanov
> ---
> block/parallels-ext.c | 167 ++
> block/parallels.c
oid parallels_free_used_bitmap(BlockDriverState
> *bs)
> {
> BDRVParallelsState *s = bs->opaque;
> s->used_bmap_size = 0;
> +s->used_bmap = NULL;
> g_free(s->used_bmap);
> }
Shouldn't it be added after g_free() call?
Regards,
Mike.
int64_t *clusters);
>
> --
> 2.34.1
>
>
Just a note: parallels_mark_unused() could be initially declared as
global just because after patch 3/20 there can be a compilation warning:
warning: unused function 'mark_unused' [-Wunused-function]
:)
I do not have strong opinion about how to avoid such compilation
warning in the middle of the patch series.
The simplest and straightforward way is to declare this function as
static in patch 4.
I do not have any other objections to the series except the misplaced
NULL assignment.
Regards,
Mike.
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, workers_per_virtqueue, which can be set
to:
false: Existing behavior where we get the single worker thread.
true: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
--
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all workqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
---
hw/virtio/vhost-backend.c | 28
include/hw/virtio/vhost
On 11/29/23 3:30 AM, Stefano Garzarella wrote:
> On Sun, Nov 26, 2023 at 06:28:34PM -0600, Mike Christie wrote:
>> This adds support for vhost-scsi to be able to create a worker thread
>> per virtqueue. Right now for vhost-net we get a worker thread per
>> tx/rx virtqueue pai
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, worker_per_virtqueue, which can be set
to:
false: Existing behavior where we get the single worker thread.
true: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
Rev
This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
Reviewed-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
---
hw/virtio/vhost-backend.c
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all workqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
Use of the API was removed a while back, but the define wasn't.
Signed-off-by: Mike Frysinger
---
include/tcg/tcg-op.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index 80cfcf8104b6..3ead59e4594d 100644
--- a/include/tcg/tcg-op.h
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, virtqueue_workers, which can be set to:
1: Existing behavior where we get the single thread.
-1: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
---
hw/scsi/vhost-s
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all workqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
---
hw/virtio/vhost-backend.c | 28
include/hw/virtio/vhost
On 11/15/23 5:43 AM, Stefano Garzarella wrote:
> On Mon, Nov 13, 2023 at 06:36:44PM -0600, Mike Christie wrote:
>> This adds support for vhost-scsi to be able to create a worker thread
>> per virtqueue. Right now for vhost-net we get a worker thread per
>> tx/rx virtqueue pai
On 11/15/23 6:57 AM, Stefan Hajnoczi wrote:
> On Wed, Nov 15, 2023 at 12:43:02PM +0100, Stefano Garzarella wrote:
>> On Mon, Nov 13, 2023 at 06:36:44PM -0600, Mike Christie wrote:
>>> This adds support for vhost-scsi to be able to create a worker thread
>>> per virtque
On Wed, Aug 23, 2023 at 12:17 PM Fiona Ebner wrote:
>
> Am 23.08.23 um 10:47 schrieb Fiona Ebner:
> > Am 17.02.23 um 22:22 schrieb Mike Maslenkin:
> >> I can not tell anything about dma-reentracy issues, but yes, i would
> >> start to look at check_cmd() functio
g:268963.640129.8
The VM's topology is "1*socket 8*cores 2*threads".
After presenting virtual L3 cache info to the VM, the amount of RES IPIs
in the guest is reduced by 85%.
Signed-off-by: Longpeng(Mike)
---
target-i386/cpu.c | 34 +++---
1 file changed, 27
Hi Eduardo,
On 2016/8/30 22:25, Eduardo Habkost wrote:
> On Mon, Aug 29, 2016 at 09:17:02AM +0800, Longpeng (Mike) wrote:
>> This patch presents virtual L3 cache info for virtual cpus.
>
> Just changing the L3 cache size in the CPUID code will make
> guests see a different ca
Hi Michael,
On 2016/9/1 21:27, Michael S. Tsirkin wrote:
> On Thu, Sep 01, 2016 at 02:58:05PM +0800, l00371263 wrote:
>> From: "Longpeng(Mike)"
>>
>> Some software algorithms are based on the hardware's cache info, for example,
>> for x86 linux kerne
From: "Longpeng(Mike)"
Some software algorithms are based on the hardware's cache info, for example,
for the x86 linux kernel, when cpu1 wants to wake up a task on cpu2, cpu1 will
trigger a resched IPI and tell cpu2 to do the wakeup if they don't share a low
level cache. Opposite
socket. With L3 cache, the performance improves 7.2%~33.1%(avg:15.7%).
Signed-off-by: Longpeng(Mike)
---
Changes since v2:
- add more useful commit message.
- rename "compat-cache" to "l3-cache-shared".
Changes since v1:
- fix the compat problem: set compat_props on PC_C
Hi Michael,
On 2016/9/3 6:52, Michael S. Tsirkin wrote:
> On Fri, Sep 02, 2016 at 10:22:55AM +0800, Longpeng(Mike) wrote:
>> From: "Longpeng(Mike)"
>>
>> Some software algorithms are based on the hardware's cache info, for example,
>> for x86 linux
Signed-off-by: Longpeng(Mike)
---
hw/i386/pc_piix.c| 17 ++---
hw/i386/pc_q35.c | 16 +---
include/hw/compat.h | 2 ++
include/hw/i386/pc.h | 3 +++
4 files changed, 32 insertions(+), 6 deletions(-)
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index
et. With L3 cache, the performance
improves 7.2%~33.1%(avg:15.7%).
Signed-off-by: Longpeng(Mike)
---
include/hw/i386/pc.h | 9 +
target-i386/cpu.c| 49 -
target-i386/cpu.h| 6 ++
3 files changed, 59 insertions(+), 5 deletions(-)
diff --
ps on PC_COMPAT_2_7.
- fix an "intentionally introduced bug": make Intel's and AMD's consistent.
- fix the CPUID.(EAX=4, ECX=3):EAX[25:14].
- test the performance with vcpus running on separate sockets: with L3 cache,
the performance improves 7.2%~33.1%(avg: 15.7%).
---
Longp
Signed-off-by: Longpeng(Mike)
---
hw/i386/pc_piix.c| 16 +---
hw/i386/pc_q35.c | 13 +++--
include/hw/compat.h | 2 ++
include/hw/i386/pc.h | 3 +++
4 files changed, 29 insertions(+), 5 deletions(-)
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index a07dc81
et. With L3 cache, the performance
improves 7.2%~33.1%(avg:15.7%).
Signed-off-by: Longpeng(Mike)
---
include/hw/i386/pc.h | 9 +
target-i386/cpu.c| 49 -
target-i386/cpu.h| 6 ++
3 files changed, 59 insertions(+), 5 deletions(-)
diff --
ps on PC_COMPAT_2_7.
- fix an "intentionally introduced bug": make Intel's and AMD's consistent.
- fix the CPUID.(EAX=4, ECX=3):EAX[25:14].
- test the performance with vcpus running on separate sockets: with L3 cache,
the performance improves 7.2%~33.1%(avg: 15.7%).
---
Longp
Hi Eduardo,
On 2016/9/6 2:53, Eduardo Habkost wrote:
> On Fri, Sep 02, 2016 at 10:22:55AM +0800, Longpeng(Mike) wrote:
> [...]
>> ---
>> Changes since v2:
>> - add more useful commit message.
>> - rename "compat-cache" to "l3-cache-shared"
This will used by the next patch.
Signed-off-by: Longpeng(Mike)
---
hw/i386/pc_piix.c| 16 +---
hw/i386/pc_q35.c | 13 +++--
include/hw/i386/pc.h | 3 +++
3 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index
et. With L3 cache, the performance
improves 7.2%~33.1%(avg:15.7%).
Signed-off-by: Longpeng(Mike)
---
include/hw/i386/pc.h | 9 +
target-i386/cpu.c| 49 -
target-i386/cpu.h| 6 ++
3 files changed, 59 insertions(+), 5 deletions(-)
diff --
f vcpus running on separate sockets: with L3 cache,
the performance improves 7.2%~33.1%(avg: 15.7%).
Longpeng(Mike) (2):
pc: Add 2.8 machine
target-i386: present virtual L3 cache info for vcpus
hw/i386/pc_piix.c| 16 +---
hw/i386/pc_q35.c | 13 ++