vCPU is not paused, the vfio device is
always running. This looks like a *deadlock*.
Do you have any ideas to solve this problem?
Looking forward to your reply.
Thanks,
Kunkun Jiang
Hi Kevin:
On 2021/9/24 14:47, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Friday, September 24, 2021 2:19 PM
Hi all,
I encountered a problem in vfio device migration testing. The
vCPU may be paused during vfio-pci DMA in iommu nested
stage mode && vSVA. This may lead to migration fail a
Hi Eric,
On 2021/10/22 0:15, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config to improve IO performance.
s/to vfio_pci_write_config/ in vfio_pci_write_config()
Thank you for your review. I
Hi Eric,
On 2021/10/22 1:02, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
The MSI-X structures of some devices and other non-MSI-X structures
are in the same BAR. They may share one host page, especially in the
may be in the same bar?
You are right. So embarrassing
Hi Eric,
On 2021/10/23 22:26, Eric Auger wrote:
Hi Kunkun,
On 10/22/21 12:01 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/10/22 0:15, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config to improve
AM section cannot be DMA mapped")
did.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index a784b219e6..dd387b0d39 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -893
()
(vfio_pci_load_config) and vfio_sub_page_bar_update_mapping()
will not be called.
This may result in poor performance after live migration.
So iterate BARs in vfio_pci_load_config() and try to update
sub-page BARs.
Reported-by: Nianyao Tang
Reported-by: Qixin Gan
Signed-off-by: Kunkun Jiang
---
hw/vfio/pci.c
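A minimal sketch of the idea above (not the exact patch; the vmstate call,
field names and page-size check assume the QEMU 6.x-era vfio-pci code):

    /* Sketch: after restoring config space on the destination, retry the
     * sub-page BAR mapping update that vfio_pci_write_config() would
     * normally trigger on the source. */
    static int vfio_pci_load_config(VFIODevice *vbasedev, QEMUFile *f)
    {
        VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev);
        PCIDevice *pdev = &vdev->pdev;
        int bar, ret;

        ret = vmstate_load_state(f, &vmstate_vfio_pci_config, vdev, 1);
        if (ret) {
            return ret;
        }

        for (bar = 0; bar < PCI_ROM_SLOT; bar++) {
            VFIOBAR *vbar = &vdev->bars[bar];

            /* Only enabled sub-page MMIO BARs need the mapping update. */
            if (vbar->region.size &&
                vbar->region.size < qemu_real_host_page_size) {
                vfio_sub_page_bar_update_mapping(pdev, bar);
            }
        }
        return 0;
    }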
age [Eric Auger]
v1 -> v2:
- Iterate over sub-page BARs in vfio_pci_load_config and try to update them
[Alex Williamson]
Kunkun Jiang (2):
vfio/pci: Add support for mmapping sub-page MMIO BARs after live
migration
vfio/common: Add a trace point when a MMIO RAM section cannot be
mappe
Kindly ping,
Hi all,
Will this patch be picked up soon, or is there any other advice?
Thanks,
Kunkun Jiang
On 2021/9/14 9:53, Kunkun Jiang wrote:
This series includes patches as below:
Patch 1:
- vfio/pci: Fix vfio-pci sub-page MMIO BAR mmapping in live migration
Patch 2:
- Added a trace
Hi Eric,
On 2021/10/8 0:58, Eric Auger wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot be
.
Is my understanding correct?
Should the source wait for the result of the last round on the destination?
Thanks,
Kunkun Jiang
Hi Dave,
On 2021/5/6 21:05, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Hi all,
Hi,
Recently I have been learning about live migration.
I have a question about the last round.
When the pending_size is less than the threshold, it will enter
the last
vfio dirty log, which can
eliminate some redundant dirty handling
History:
v1 -> v2:
- Add a new ioctl VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR to get
the vfio dirty log when manual clear is supported.
Thanks,
Kunkun Jiang
[1]
IOMMU part:
https://lore.kernel.org/linux-iommu/20210507102211.8
VFIO_DIRTY_LOG_MANUAL_CLEAR and
provide the log_clear() hook for vfio_memory_listener. If the
kernel supports it, deliver the clear message to the kernel.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 149 +-
include/hw/vfio/vfio-common.h
From: Zenghui Yu
The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl flags
VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR and
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in
the kernel; update the header to add them.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
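For reference, a hedged userspace sketch of the proposed two-step flow
(the NOCLEAR/CLEAR flags come from the unmerged kernel series; the
surrounding structs are the existing VFIO_IOMMU_DIRTY_PAGES UAPI):

    struct vfio_iommu_type1_dirty_bitmap *db;
    struct vfio_iommu_type1_dirty_bitmap_get *range;
    size_t argsz = sizeof(*db) + sizeof(*range);

    db = g_malloc0(argsz);
    db->argsz = argsz;
    db->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR;
    range = (struct vfio_iommu_type1_dirty_bitmap_get *)db->data;
    range->iova = iova;
    range->size = size;
    range->bitmap.pgsize = qemu_real_host_page_size;
    range->bitmap.size = bitmap_bytes;
    range->bitmap.data = (__u64 *)bitmap;

    /* 1) Fetch the bitmap; the kernel keeps its copy set. */
    ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, db);

    /* ... userspace handles the reported dirty pages iteratively ... */

    /* 2) Only now ask the kernel to clear what has been handled. */
    db->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP;
    ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, db);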
om kvm side.
See commit 9f4bf4baa8b820c7930e23c9566c9493db7e1d25. ]
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 62 +++
include/hw/vfio/vfio-common.h | 9 +
2 files changed, 65 insertions(+), 6 deletions(-)
diff --gi
Hi all,
Sorry for my carelessness.
This is the v2 of this series.
Thanks,
Kunkun Jiang
On 2021/5/8 17:31, Kunkun Jiang wrote:
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively
dded the post_load function to vmstate_smmuv3 for passing stage 1
configuration to the destination host after the migration
Best regards,
Kunkun Jiang
History:
v2 -> v3:
- Rebase to v9 of Eric's series 'vSMMUv3/pSMMUv3 2 stage VFIO integration'[1]
- Delete smmuv3_manual_set_pci_d
won't cause any errors. Adding a
global_log_start/stop interface in vfio_memory_prereg_listener
can separate stage 2 from stage 1.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 24
1 file changed, 24 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
stage.
This patch adds vfio_prereg_listener_log_sync to mark dirty
pages in nested mode.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 9fb8d44a6d..149e535a75 100644
--- a/h
operation fails, the migration fails.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 33 -
1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index ca690513e6..ac1de572f3 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 14 insertions(+), 8 deletions
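A hedged sketch of the extraction (the helper name is illustrative;
vfio_get_dirty_bitmap() is the existing worker it delegates to):

    /* The RAM-section path of vfio_sync_dirty_bitmap(), split out so the
     * nested-stage code can reuse it for stage-2 sections. */
    static int vfio_sync_ram_section_dirty_bitmap(VFIOContainer *container,
                                                  MemoryRegionSection *section)
    {
        ram_addr_t ram_addr = memory_region_get_ram_addr(section->mr) +
                              section->offset_within_region;

        return vfio_get_dirty_bitmap(container,
                   REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
                   int128_get64(section->size), ram_addr);
    }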
Hi all,
This series has been updated to v3.[1]
Any comments and reviews are welcome.
Thanks,
Kunkun Jiang
[1] [RFC PATCH v3 0/4] Add migration support for VFIO PCI devices in
SMMUv3 nested mode
https://lore.kernel.org/qemu-devel/20210511020816.2905-1-jiangkun...@huawei.com/
On 2021/3/31 18
kindly ping,
Any comments and reviews are welcome.😁
Thanks,
Kunkun Jiang
On 2021/3/10 17:41, Kunkun Jiang wrote:
Hi all,
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively:
After
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday, March 10, 2021 5:41 PM
Hi all,
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively:
After vfio
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday, March 10, 2021 5:41 PM
Hi all,
In the past, we cleared the dirty log immediately after syncing it
On 2021/3/18 20:36, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 8:29 PM
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday
On 2021/3/18 20:36, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 8:29 PM
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday
discussions between Eric and Linu about
this [1], but this idea does not seem to be implemented.
[1] https://lists.gnu.org/archive/html/qemu-arm/2017-09/msg00149.html
Best regards,
Kunkun Jiang
supported,
vSVA will fail to be enabled in the future for a 16K guest
kernel. So it'd be better to support it.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 3b87324ce2..0a483b0bab 1
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 14 insertions(+), 8 deletions
operation fails, the migration fails.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 62 +
hw/arm/trace-events | 1 +
2 files changed, 63 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 55aa6ad874..4d28ca3777 100644
--- a
o the destination
host after the migration.
Best Regards,
Kunkun Jiang
[1] [RFC,v8,00/28] vSMMUv3/pSMMUv3 2 stage VFIO integration
http://patchwork.ozlabs.org/project/qemu-devel/cover/20210225105233.650545-1-eric.au...@redhat.com/
This Patch set includes patches as below:
Patch 1-2:
- Refactor th
won't cause any errors. Adding a
global_log_start/stop interface in vfio_memory_prereg_listener
can separate stage 2 from stage 1.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 22 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
This patch adds
vfio_prereg_listener_log_sync to mark dirty pages in nested mode.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 3117979307..86722814d4 100644
--- a/hw/vfio/co
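A hedged sketch of the listener hook (the prereg_listener field comes from
the out-of-tree nested series; the helper is the one sketched earlier):

    /* In nested mode dirty pages are tracked against the stage-2 (GPA)
     * mappings recorded by the prereg listener, not the IOMMU notifier. */
    static void vfio_prereg_listener_log_sync(MemoryListener *listener,
                                              MemoryRegionSection *section)
    {
        VFIOContainer *container =
            container_of(listener, VFIOContainer, prereg_listener);

        if (!memory_region_is_ram(section->mr) ||
            !container->dirty_pages_supported) {
            return;
        }

        if (vfio_devices_all_dirty_tracking(container)) {
            vfio_sync_ram_section_dirty_bitmap(container, section);
        }
    }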
Kindly ping,
Hi David Alan Gilbert,
Will this series be picked up soon, or is there any other work for me to do?
Best Regards,
Kunkun Jiang
On 2021/3/16 20:57, Kunkun Jiang wrote:
Hi all,
This series includes patches as below:
Patch 1:
- reduce unnecessary rate limiting in ram_save_host_page
Hi Eric,
On 2021/4/7 3:50, Auger Eric wrote:
Hi Kunkun,
On 3/27/21 3:24 AM, Kunkun Jiang wrote:
Hi all,
Recently, I did some tests on SMMU nested mode. Here is
a question about the translation granule size supported by
vSMMU.
There is such a code in SMMUv3_init_regs():
/* 4K and 64K
Hi Dave,
On 2021/4/7 1:14, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Kindly ping,
Hi David Alan Gilbert,
Will this series be picked up soon, or is there any other work for me to do?
You don't need to do anything, but it did miss the cutoff for soft
f
Hi Eric,
On 2021/4/8 15:27, Auger Eric wrote:
Hi Kunkun,
On 4/7/21 11:26 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/7 3:50, Auger Eric wrote:
Hi Kunkun,
On 3/27/21 3:24 AM, Kunkun Jiang wrote:
Hi all,
Recently, I did some tests on SMMU nested mode. Here is
a question about the
Hi Eric,
On 2021/4/8 21:46, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun
Hi Eric,
On 2021/4/12 16:40, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
Hi all,
Since the SMMUv3's nested translation stages[1] have been introduced by Eric, we
need to pay attention to the migration of VFIO PCI devices in SMMUv3 nested
stage
mode. At presen
Hi Eric,
On 2021/4/8 21:56, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
On Intel, the DMA mapped through the host single stage. Instead
we set up the stage 2 and stage 1 separately in nested mode as there
is no "Caching Mode".
You need to rewrite the above
Hi Eric,
On 2021/4/12 16:34, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
In nested mode, we call the set_pasid_table() callback on each STE
update to pass the guest stage 1 configuration to the host and
apply it at physical level.
In the case of live migration, we
ues a TLBI cmd
without "range" (tg = 0) to invalidate a 2M huge page. Then qemu passed
the iova and size (4K) to host kernel. Finally, host kernel issues a
TLBI cmd
with "range" (4K) which can not invalidate the TLB entry of 2M huge page.
(pSMMU supports RIL)
Thanks,
Kunkun Jia
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot be used as
there is no "caching" mode and we do not trap on map.
On Intel, vfio_iommu_map_
Hi Eric,
On 2021/4/14 16:05, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot be
- Remove 'goto' [David Edmondson]
Kunkun Jiang (2):
migration/ram: Reduce unnecessary rate limiting
migration/ram: Optimize ram_save_host_page()
migration/ram.c | 34 +++---
1 file changed, 19 insertions(+), 15 deletions(-)
--
2.23.0
When the host page is a huge page and something is sent in the
current iteration, migration_rate_limit() should be executed.
If not, it can be omitted.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
Reviewed-by: David Edmondson
---
migration/ram.c | 9 +++--
1 file changed, 7
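A hedged sketch of the intended loop shape in ram_save_host_page()
(QEMU 5.x-era names; hostpage_boundary is a local computed from the host
page size):

    do {
        int pages = ram_save_target_page(rs, pss, last_stage);

        if (pages < 0) {
            return pages;
        }
        if (pages > 0) {
            tmppages += pages;
            /* Throttle between the target pages of a huge host page, but
             * skip the call when nothing was sent this iteration. */
            migration_rate_limit();
        }
        pss->page++;
    } while ((pss->page < hostpage_boundary) &&
             offset_in_ramblock(pss->block,
                                ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));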
rmance to use migration_bitmap_find_dirty().
Tested on Kunpeng 920; VM parameters: 1U 4G (page size 1G)
The time of ram_save_host_page() in the last round of ram saving:
before optimization: 9250us; after optimization: 34us
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migrati
Hi Peter,
On 2021/3/17 5:39, Peter Xu wrote:
On Tue, Mar 16, 2021 at 08:57:15PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration, migration_rate_limit() should be executed.
If not, it can be omitted.
Signed-off-by: Keqian Zhu
Signed
s: 7c2f5f75f94 (vfio: Register SaveVMHandlers for VFIO device)
Reported-by: Qixin Gan
Signed-off-by: Kunkun Jiang
---
hw/vfio/migration.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 201642d75e..ef397ebe6c 100644
--- a/hw/vfio/migration.c
+++ b
Hi Philippe,
On 2021/5/27 21:44, Philippe Mathieu-Daudé wrote:
On 5/27/21 2:31 PM, Kunkun Jiang wrote:
In vfio_migration_init(), the SaveVMHandlers are registered for the
VFIO device, but the corresponding 'unregister' operation is missing.
It will lead to 'Segmentation fault (c
On 2021/3/3 16:56, David Edmondson wrote:
On Monday, 2021-03-01 at 16:21:32 +08, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the dirty pages up to the end of the current host page or
the boundary of used_length of the block. If the host p
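A hedged sketch of the optimization (5.x-era names): only dirty pages are
visited, and clean runs inside the host page are skipped in one step:

    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
        /* Dirty: send this target page as before. */
        int pages = ram_save_target_page(rs, pss, last_stage);

        if (pages > 0) {
            tmppages += pages;
            migration_rate_limit();
        }
        pss->page++;
    } else {
        /* Clean: jump straight to the next dirty bit, if any. */
        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
    }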
The atomic_ prefix was renamed to qatomic_ in patch d73415a3154.
It seems that pvrdma_ring.h doesn't need to be updated.
Best Regards.
Kunkun Jiang
diff --git a/linux-headers/linux/iommu.h b/linux-headers/linux/iommu.h
new file mode 100644
index 00..0a6326bd36
--- /dev/null
+++ b/lin
gsize to host page size
to support more translation granule sizes.
Fixes: 87ea529c502 (vfio: Get migration capability flags for container)
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 44 ++--
1 file changed, 22 insertions(+), 22 deletions(-)
diff --
[David Edmondson]
Kunkun Jiang (3):
migration/ram: Modify the code comment of ram_save_host_page()
migration/ram: Reduce unnecessary rate limiting
migration/ram: Optimize ram_save_host_page()
migration/ram.c | 54 ++---
1 file changed, 29 insertions(
ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated to match.
It'd be better to revise the comment to explain ram_save_host_page()
more clearly.
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 16 +++-
1 fi
-by: Kunkun Jiang
---
migration/ram.c | 21 ++---
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a168da5cdd..9fc5b2997c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1988,7 +1988,7 @@ static int ram_save_target_page
rmance to use migration_bitmap_find_dirty().
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 39 +++
1 file changed, 19 insertions(+), 20 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 9fc5b2997c..28215aefe4
Hi, Peter
On 2021/3/5 21:59, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:33PM +0800, Kunkun Jiang wrote:
ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated to match.
It'd be better to revise the comment
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration, the migration_rate_limit() should be executed.
If not, this function can be omitted to save time
Hi,
On 2021/3/5 22:30, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:35PM +0800, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the dirty pages up to the end of the current host page or
the boundary of used_length of the block. If the host p
Hi,
On 2021/3/9 5:36, Peter Xu wrote:
On Mon, Mar 08, 2021 at 09:58:02PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:30, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:35PM +0800, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the di
Hi,
On 2021/3/9 5:03, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:33:56PM +0800, Kunkun Jiang wrote:
Hi, Peter
On 2021/3/5 21:59, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:33PM +0800, Kunkun Jiang wrote:
The ram_save_host_page() has been modified several times
since its birth. But the
Hi,
On 2021/3/9 5:12, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:34:58PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration
Hi,
On 2021/3/10 0:15, Peter Xu wrote:
On Tue, Mar 09, 2021 at 10:33:04PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/9 5:12, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:34:58PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800
Hi Alex,
On 2021/3/10 7:17, Alex Williamson wrote:
On Thu, 4 Mar 2021 21:34:46 +0800
Kunkun Jiang wrote:
The cpu_physical_memory_set_dirty_lebitmap() can quickly deal with
the dirty pages of memory by bitmap-traveling, regardless of whether
the bitmap is aligned correctly or not
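For context, a hedged usage sketch (cpu_physical_memory_set_dirty_lebitmap()
is the existing QEMU helper; the surrounding variables are illustrative):

    /* Feed a VFIO-reported dirty bitmap into QEMU's RAM dirty tracking;
     * bitmap-traveling handles one long word of page bits at a time. */
    uint64_t pages = region_size / qemu_real_host_page_size;

    cpu_physical_memory_set_dirty_lebitmap((unsigned long *)vfio_bitmap,
                                           ram_addr, pages);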
From: Zenghui Yu
The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl flag
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in
the kernel; update the header to add them.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
linux-headers/linux/vfio.h | 55
nual clear vfio dirty log, which can
eliminate some redundant dirty handling
Thanks,
Kunkun Jiang
[1]
https://lore.kernel.org/linux-iommu/20210310090614.26668-1-zhukeqi...@huawei.com/T/#mb168c9738ecd3d8794e2da14f970545d5820f863
Zenghui Yu (3):
linux-headers: update against 5.12-rc2 and "
vfio_memory_listener. If the
kernel supports it, deliver the clear message to the kernel.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 145 +-
include/hw/vfio/vfio-common.h | 1 +
2 files changed, 145 insertions(+), 1
om kvm side.
See commit 9f4bf4baa8b820c7930e23c9566c9493db7e1d25. ]
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 62 +++
include/hw/vfio/vfio-common.h | 9 +
2 files changed, 65 insertions(+), 6 deletions(-)
diff --gi
parent_obj.name, asid, iova,
tg, num_pages);
Thanks,
Kunkun Jiang
@@ -877,7 +878,7 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
tg, num_pages);
IOMMU_NOTIFIER_FOREACH(n, mr) {
-smmuv3_no
Hi Eric,
On 2021/4/26 20:30, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot be
map flattens a large range of
IO-PTEs.
* That may not be true for all IOMMU types.
*/
}
I think we need a check here. If it is nested mode, just return after
g_free(giommu). Because in nested mode, stage 2 (gpa->hpa) and
stage 1 (giova->gpa) are
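A hedged sketch of the suggested check in vfio_listener_region_del()
(VFIO_TYPE1_NESTING_IOMMU is the existing UAPI constant; the exact
placement is illustrative):

    QLIST_REMOVE(giommu, giommu_next);
    g_free(giommu);
    if (container->iommu_type == VFIO_TYPE1_NESTING_IOMMU) {
        /* Stage 2 (gpa->hpa) is torn down by the prereg listener, so do
         * not unmap it here along with the stage-1 notifier state. */
        return;
    }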
Hi Eric,
On 2021/4/27 3:16, Auger Eric wrote:
Hi Kunkun,
On 4/15/21 4:03 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/14 16:05, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi
Hi,
This series includes patches as below:
Patch 1-2:
- modified the comment and code of ram_save_host_page() to make them match each
other
Patch 3:
- optimized ram_save_host_page() by using migration_bitmap_find_dirty() to find
dirty
pages
Best Regards
Kunkun Jiang
Kunkun Jiang (3
rmance to use migration_bitmap_find_dirty().
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index c7e18dc2fc..c7a2350198 100644
--- a/migration/ram.c
++
According to the comment, when the host page is a huge page, the
migration_rate_limit() should be executed. If not, this function
can be omitted to save time.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion
ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated to match.
It'd be better to revise the comment to explain ram_save_host_page()
more clearly.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/
On 2021/2/25 6:53, David Edmondson wrote:
On Tuesday, 2021-02-23 at 10:16:43 +08, Kunkun Jiang wrote:
ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated to match.
It'd be better to revise the comment to explain ram_save
On 2021/2/25 20:48, David Edmondson wrote:
On Tuesday, 2021-02-23 at 10:16:45 +08, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the dirty pages up to the end of the current host page or
the boundary of used_length of the block. If the h
ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated to match.
It'd be better to revise the comment to explain ram_save_host_page()
more clearly.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/
According to the comment, when the host page is a huge page, the
migration_rate_limit() should be executed. If not, this function
can be omitted to save time.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion
rmance to use migration_bitmap_find_dirty().
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 3a9115b6dc..a1374db356 100644
--- a/migration/ram.c
+++ b/mi
age() comment [David Edmondson]
- Remove 'goto' [David Edmondson]
Kunkun Jiang (3):
migration/ram: Modify the code comment of ram_save_host_page()
migration/ram: Modify ram_save_host_page() to match the comment
migration/ram: Optimize ram_save_host_page()
migrati
kindly ping,
Any comments and reviews are welcome.
Thanks.
Kunkun Jiang.
On 2021/2/19 17:42, Kunkun Jiang wrote:
Hi all,
Since the SMMUv3's nested translation stages[1] have been introduced by Eric, we
need to pay attention to the migration of VFIO PCI devices in SMMUv3 nested
stage
mod
operation fails, the migration fails.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 60 +
hw/arm/trace-events | 1 +
2 files changed, 61 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 6c6ed84e78..94ca15375c 100644
--- a
the destination host after the migration.
@Eric, Could you please add this Patch set to your future version of
"vSMMUv3/pSMMUv3 2 stage VFIO integration", if you think this Patch set makes
sense? :)
Best Regards
Kunkun Jiang
[1] [RFC,v7,00/26] vSMMUv3/pSMMUv3 2 stage VFIO integ
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 14 insertions(+), 8 deletions
This patch adds
vfio_prereg_listener_log_sync to mark dirty pages in nested mode.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 25 +
1 file changed, 25 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7c50905856..af333e0dee 100644
--- a/hw/vfio/common.
According to the SMMUv3 spec, the SPAN field of the Level 1 Stream Table
Descriptor is 5 bits ([4:0]).
Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3-internal.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/arm
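For reference, a hedged sketch of the one-line fix (the macro shape is an
assumption; the point is widening the extraction from 4 to 5 bits):

    -#define L1STD_SPAN(stm)   (extract64((stm)->word[0], 0, 4))
    +#define L1STD_SPAN(stm)   (extract64((stm)->word[0], 0, 5))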
Is the condition "as != &address_space_memory" needed to determine whether
a vIOMMU is in place? I think "memory_region_is_iommu(as->root)" is enough.
Looking forward to your reply. :)
Thanks,
Kunkun Jiang
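A hedged sketch of the suggested check (both helpers exist in QEMU; the
surrounding context is illustrative):

    /* Detect a vIOMMU from the device's DMA address space root instead
     * of comparing AddressSpace pointers. */
    AddressSpace *as = pci_device_iommu_address_space(pdev);

    if (memory_region_is_iommu(as->root)) {
        /* a vIOMMU translates this device's DMA */
    }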
Hi Eric,
Friendly ping... :)
On 2020/11/24 10:37, Kunkun Jiang wrote:
According to the SMMUv3 spec, the SPAN field of the Level 1 Stream Table
Descriptor is 5 bits ([4:0]).
Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3-internal.h
Hi Eric,
On 2021/7/6 21:52, Eric Auger wrote:
Hi,
On 7/6/21 10:18 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14, Eric Auger wrote:
Hi Kunkun,
On 6/29/21 11:33 AM, Kunkun Jiang wrote:
Hi all,
According to
On 2021/7/6 22:27, Eric Auger wrote:
Hi Dave,
On 7/6/21 4:19 PM, Dr. David Alan Gilbert wrote:
* Eric Auger (eric.au...@redhat.com) wrote:
Hi,
On 7/6/21 10:18 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14
smallest page size to align the address.
Fixes: 1eb7f642750 (vfio: Support host translation granule size)
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 30 +-
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index
This series includes patches as below:
Patch 1:
- Add a trace point to inform users when an MMIO RAM section is less than
the minimum size
Patch 2:
- Fix address alignment in region_add/region_del with the vfio iommu smallest
page size
Kunkun Jiang (2):
vfio/common: Add trace point when a MMIO RAM
ge. Let's add a trace point to inform users.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7d80f43e39..bbb8d1ea0c 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -892,6 +8
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14, Eric Auger wrote:
Hi Kunkun,
On 6/29/21 11:33 AM, Kunkun Jiang wrote:
Hi all,
According to the patch cddafd8f353d2d251b1a5c6c948a577a85838582,
our original intention is to flush the
On 2021/7/6 18:27, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Hi Daniel,
On 2021/7/5 20:48, Daniel P. Berrangé wrote:
On Mon, Jul 05, 2021 at 08:36:52PM +0800, Kunkun Jiang wrote:
In the current version, the source QEMU process does not automatically
exit after