From: Ivan Ren
When it encounters an error, multifd_send_thread should always notify
whoever is waiting on it before exiting. Otherwise it may block
migration_thread at multifd_send_sync_main forever.
Error as follows:
---
(gdb
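To make the failure mode concrete, here is a minimal sketch of the idea in plain POSIX C rather than QEMU's actual code: a send thread that hits an error must still post the semaphore its consumer waits on, otherwise the consumer blocks in the sync step forever. All names below (sem_sync, channel_error, send_thread) are illustrative stand-ins, not the functions touched by the patch.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for a multifd-style send channel. */
static sem_t sem_sync;      /* the main thread waits here during sync   */
static bool channel_error;  /* set by the worker when sending fails     */

static void *send_thread(void *opaque)
{
    /* ... pretend the actual send failed ... */
    bool send_failed = true;

    if (send_failed) {
        channel_error = true;
        /*
         * The crucial part: post the semaphore even on the error path,
         * so whoever waits in the sync step is woken up instead of
         * blocking forever.
         */
        sem_post(&sem_sync);
        return NULL;
    }

    sem_post(&sem_sync);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sem_init(&sem_sync, 0, 0);
    pthread_create(&tid, NULL, send_thread, NULL);

    /* Rough analogue of multifd_send_sync_main: wait for the worker. */
    sem_wait(&sem_sync);
    if (channel_error) {
        fprintf(stderr, "send thread reported an error\n");
    }

    pthread_join(tid, NULL);
    sem_destroy(&sem_sync);
    return 0;
}
```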
On Mon, Aug 5, 2019 at 8:34 AM Wei Yang wrote:
>
> On Fri, Aug 02, 2019 at 06:18:41PM +0800, Ivan Ren wrote:
> >From: Ivan Ren
> >
> >This patch fixes a multifd migration bug in the migration speed calculation;
> >the problem can be reproduced as follows:
> >1
From: Ivan Ren
This patch fixes a multifd migration bug in the migration speed calculation;
the problem can be reproduced as follows:
1. start a vm and apply heavy memory write stress to prevent the vm from
being successfully migrated to the destination
2. begin a migration with multifd
3. migrate for a long
On Fri, Aug 2, 2019 at 1:59 PM Wei Yang wrote:
>
> On Fri, Aug 02, 2019 at 01:46:41PM +0800, Ivan Ren wrote:
> >>>>> s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> >>>>>+/*
> >>>>>+ * Update s
unters into a helper
> function. So each time all of them.
>2. In function ram_get_total_transferred_pages, did we miss multifd_bytes?
In function ram_save_multifd_page, ram pages transferred by multifd threads
are counted by ram_counters.normal.
You mean other multifd bytes like multifd pac
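As an illustration of the helper being discussed, a small self-contained sketch that tallies the per-category page counters in one place. The struct and field names are hypothetical and only loosely mirror the counters mentioned above; they are not QEMU's actual definitions.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-category page counters; pages sent by multifd
 * threads would land in "normal", as noted in the reply above. */
typedef struct PageCounters {
    uint64_t normal;      /* full pages, including multifd-sent pages */
    uint64_t duplicate;   /* zero pages                               */
    uint64_t xbzrle;      /* pages sent via xbzrle                    */
    uint64_t compressed;  /* pages sent via compression               */
} PageCounters;

/* One helper, so every caller uses the same definition of "total". */
static uint64_t total_transferred_pages(const PageCounters *c)
{
    return c->normal + c->duplicate + c->xbzrle + c->compressed;
}

int main(void)
{
    PageCounters c = { .normal = 1000, .duplicate = 200 };

    printf("total pages: %" PRIu64 "\n", total_transferred_pages(&c));
    return 0;
}
```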
ote:
> On Tue, Jul 30, 2019 at 01:36:32PM +0800, Ivan Ren wrote:
> >From: Ivan Ren
> >
> >This patch fixes a multifd migration bug in the migration speed calculation;
> >the problem can be reproduced as follows:
> >1. start a vm and give a heavy memory write
Thanks.
On Thu, Aug 1, 2019 at 10:56 AM Wei Yang wrote:
> Thanks, I didn't notice this case.
>
> On Sun, Jul 14, 2019 at 10:51:19PM +0800, Ivan Ren wrote:
> >Reproduce the problem:
> >migrate
> >migrate_cancel
> >migrate
> >
> >Error happens for me
From: Ivan Ren
Multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add
these bytes to the ram_counters record.
Signed-off-by: Ivan Ren
Suggested-by: Wei Yang
---
migration/ram.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index
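The change being described is small; here is a self-contained sketch of what it amounts to. ram_counters.multifd_bytes and ram_counters.transferred are the counters named later in this thread; everything else below (the struct layout, account_sync_packet, packet_len) is a simplified stand-in, not the patch itself.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the counters named in the thread. */
struct {
    uint64_t multifd_bytes;
    uint64_t transferred;
} ram_counters;

/* Account the bytes of one sync packet: the SYNC packet still goes
 * over the wire, so its length must show up in both counters. */
void account_sync_packet(uint64_t packet_len)
{
    ram_counters.multifd_bytes += packet_len;
    ram_counters.transferred   += packet_len;
}

int main(void)
{
    account_sync_packet(512);   /* e.g. one small sync packet */
    printf("multifd_bytes=%llu transferred=%llu\n",
           (unsigned long long)ram_counters.multifd_bytes,
           (unsigned long long)ram_counters.transferred);
    return 0;
}
```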
From: Ivan Ren
Add qemu_file_update_transfer to only update bytes_xfer for speed
limitation. This will be used for further migration features such as
multifd migration.
Signed-off-by: Ivan Ren
Reviewed-by: Wei Yang
---
migration/qemu-file.c | 5 +
migration/qemu-file.h | 1 +
2 files
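A self-contained sketch of what such an interface amounts to conceptually: bytes sent outside the normal QEMUFile write path are still reported into the rate-limit accounting, so the common speed-limit check covers them. The types and function names below are illustrative, not QEMU's actual definitions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for a file object that carries rate-limit state. */
typedef struct RateFile {
    int64_t bytes_xfer;   /* bytes accounted in the current period */
    int64_t xfer_limit;   /* allowed bytes per period              */
} RateFile;

/* Analogue of the proposed interface: only bump the transfer counter,
 * for data sent outside the normal write path (e.g. by multifd
 * threads over their own sockets). */
void file_update_transfer(RateFile *f, int64_t len)
{
    f->bytes_xfer += len;
}

/* The common speed-limit check then naturally covers those bytes too. */
bool file_rate_limit_exceeded(const RateFile *f)
{
    return f->bytes_xfer >= f->xfer_limit;
}

int main(void)
{
    RateFile f = { .bytes_xfer = 0, .xfer_limit = 1024 * 1024 };

    file_update_transfer(&f, 512 * 1024);   /* multifd thread sent 512 KiB */
    file_update_transfer(&f, 600 * 1024);   /* and another 600 KiB         */

    printf("throttle: %s\n", file_rate_limit_exceeded(&f) ? "yes" : "no");
    return 0;
}
```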
From: Ivan Ren
Currently multifd migration is not rate limited and will consume the
whole bandwidth of the NIC. These two patches add a speed limit to it.
This is the v3 series:
v3 vs v2:
Add Reviewed-by and Suggested-by info.
v2 vs v1:
1. change the qemu_file_update_rate_transfer interface
From: Ivan Ren
Limit the speed of multifd migration through the common qemu-file speed
limitation.
Signed-off-by: Ivan Ren
---
migration/ram.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 889148dd84
Thanks.
I'll send a new version with the suggested info and review info.
On Tue, Jul 30, 2019 at 8:42 AM Wei Yang wrote:
> On Mon, Jul 29, 2019 at 04:01:21PM +0800, Ivan Ren wrote:
> >Multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add
> >these bytes to
Multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add
these bytes to the ram_counters record.
Signed-off-by: Ivan Ren
---
migration/ram.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index 88ddd2bbe2..20b6eebb7c 100644
--- a/migration
Add qemu_file_update_transfer to only update bytes_xfer for speed
limitation. This will be used for further migration features such as
multifd migration.
Signed-off-by: Ivan Ren
---
migration/qemu-file.c | 5 +
migration/qemu-file.h | 1 +
2 files changed, 6 insertions(+)
diff --git a
update ram_counters for multifd sync packet
Ivan Ren (3):
migration: add qemu_file_update_transfer interface
migration: add speed limit for multifd migration
migration: update ram_counters for multifd sync packet
migration/qemu-file.c | 5 +
migration/qemu-file.h | 1 +
migration
Limit the speed of multifd migration through the common qemu-file speed
limitation.
Signed-off-by: Ivan Ren
---
migration/ram.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 889148dd84..88ddd2bbe2 100644
--- a
rs->f, p->packet_len);
>
>The original code seems to forget to update
>
>ram_counters.multifd_bytes
>ram_counters.transferred
>
>Sounds like we need to update these counters here too.
Yes, thanks for the review.
I'll send a new version with a new patch to fix it.
On Mon, Jul 29, 2019 at 2
Limit the speed of multifd migration through the common qemu-file speed
limitation.
Signed-off-by: Ivan Ren
---
migration/ram.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 889148dd84..e3fde16776 100644
--- a
Add qemu_file_update_rate_transfer to only update bytes_xfer for
speed limitation. This will be used for further migration features
such as multifd migration.
Signed-off-by: Ivan Ren
---
migration/qemu-file.c | 5 +
migration/qemu-file.h | 1 +
2 files changed, 6 insertions(+)
diff --git a
Currently multifd migration is not rate limited and will consume the
whole bandwidth of the NIC. These two patches add a speed limit to it.
Ivan Ren (2):
migration: add qemu_file_update_rate_transfer interface
migration: add speed limit for multifd migration
migration/qemu-file.c | 5
is better to *also* set a p->quit variable there, and
> not even try to receive anything for that channel?
>
> I will send a patch later.
Yes, agreed.
Thanks for the review.
On Wed, Jul 24, 2019 at 5:01 PM Juan Quintela wrote:
> Ivan Ren wrote:
> > When migrate_cancel a multi
/coroutine-ucontext.c:115
#5 0x7fbd66d98d40 in ?? () from /lib64/libc.so.6
#6 0x7ffec0bf24d0 in ?? ()
#7 0x in ?? ()
On Tue, Jun 25, 2019 at 9:18 PM Ivan Ren wrote:
> When migrate_cancel is used on a multifd migration, if you run a sequence like this:
>
>
The problem still exists in mainline; ping for review.
On Tue, Jun 25, 2019 at 9:18 PM Ivan Ren wrote:
> The patches fix the problems encountered in multifd migration when trying
> to cancel the migration with migrate_cancel.
>
> Ivan Ren (3):
> migration: fix migrate_cancel leads
]
- migration_bitmap_sync_range: sync ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
to RAMBlock.bmap, and ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set
to zero.
Here RAMBlock.bmap only has newly logged dirty pages; it does not contain
all of the guest's pages.
Signed-off-by: Ivan Ren
---
migration/ram.c
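The point is that the sync step only ever adds newly logged dirty pages, so a restarted migration has to start from a fully set per-block bitmap. A rough standalone illustration with plain bit operations follows; all names (block_bmap, dirty_log, bitmap_sync) are hypothetical, not the code touched by the patch.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGES 64
#define WORDS (PAGES / 64)

/* Per-block migration bitmap and the global dirty log (hypothetical). */
static uint64_t block_bmap[WORDS];
static uint64_t dirty_log[WORDS];

/* Sync: move newly logged dirty pages into the block bitmap and clear
 * the log. Note this only ever ADDs pages dirtied since logging began. */
static void bitmap_sync(void)
{
    for (int i = 0; i < WORDS; i++) {
        block_bmap[i] |= dirty_log[i];
        dirty_log[i] = 0;
    }
}

int main(void)
{
    /* A fresh migration must treat every page as dirty up front;
     * relying on bitmap_sync() alone would miss pages never touched
     * after dirty logging (re)started. */
    memset(block_bmap, 0xff, sizeof(block_bmap));

    dirty_log[0] = 0x5;   /* pages 0 and 2 dirtied later */
    bitmap_sync();

    printf("bmap[0] = %#llx\n", (unsigned long long)block_bmap[0]);
    return 0;
}
```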
ad_join
qemu_thread_join
multifd_load_cleanup
process_incoming_migration_co
coroutine_trampoline
Signed-off-by: Ivan Ren
---
migration/ram.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index e4eb9c441f..504c8ccb03 100644
--- a/migration/ram.c
+++ b/mi
c_main may hang at qemu_sem_wait(&multifd_send_state->sem_sync)
Signed-off-by: Ivan Ren
---
migration/ram.c | 23 +++
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index f8908286c2..e4eb9c441f 100644
--- a/migration/ram
ion_thread
qemu_thread_start
start_thread
clone
Signed-off-by: Ivan Ren
---
migration/ram.c | 36 +---
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 908517fc2b..f8908286c2 100644
--- a/migration/ram.c
+++ b/migra
The patches fix the problems encountered in multifd migration when trying
to cancel the migration with migrate_cancel.
Ivan Ren (3):
migration: fix migrate_cancel leads live_migration thread endless loop
migration: fix migrate_cancel leads live_migration thread hung forever
migration: fix
After commit dcaf446ebda5d87e05eb41cdbafb7ae4a7cc4a62, we can't
dynamically adjust the compress level while migration is running.
For some scenarios, dynamically adjusting the compress level to change
the compression behavior without restarting the migration is useful.
Signed-off-by: Ivan Ren
---
migration/
hit over a month old; I only skimmed the comments on V1 from
>Max and Eric. Is this something we still want?
Thanks for the reply.
I think it's better to return this information instead of 0.
On Wed, Jun 13, 2018 at 2:42 AM John Snow wrote:
>
>
> On 05/05/2018 03:49 AM, Ivan Ren wro
> Oh, yes, there is no doubt that the result will be correct. My point is
> that people aren't usually interested so much in the physical layout of
> the clusters, but more about the fact that no metadata updates and no
> COW is necessary when you write to a cluster for the first time (i.e.
> becau
ping for review
On Sat, May 5, 2018 at 3:50 PM Ivan Ren wrote:
> qemu-img info with a block device that has a qcow2 format always
> returns 0 for disk size, and this cannot reflect the qcow2 size
> and the used space of the block device. This patch returns the
> allocated size of
, and the cluster offset is fixed.
On Sat, May 12, 2018 at 1:29 AM Kevin Wolf wrote:
> On 11.05.2018 at 17:36, Ivan Ren wrote:
> > Create a qcow2 directly on a bare block device with the
> > "-o preallocation=metadata" option. When reading this qcow2, it will
>
Create a qcow2 directly on a bare block device with the
"-o preallocation=metadata" option. When reading this qcow2, it will
return pre-existing data from the block device. This patch adds the
QCOW_OFLAG_ZERO flag (supported for qcow_version >= 3) to
preallocated L2 entries to avoid this problem.
Signed
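For context, a standalone sketch of what setting the zero flag on a preallocated L2 entry looks like. The bit position follows the qcow2 v3 layout as I understand it (bit 0 of an L2 entry means "reads as all zeros"); the helper and the offset value are illustrative, not the patch itself.

```c
#include <stdint.h>
#include <stdio.h>

/* qcow2 v3 L2-entry flag: bit 0 means the cluster reads as all zeros. */
#define QCOW_OFLAG_ZERO ((uint64_t)1 << 0)

/* Mark a preallocated L2 entry so guest reads return zeros instead of
 * whatever bytes already sit at that offset on the bare block device.
 * Purely illustrative; the real change lives in qcow2's allocation code. */
static uint64_t preallocated_l2_entry(uint64_t cluster_offset)
{
    return cluster_offset | QCOW_OFLAG_ZERO;
}

int main(void)
{
    uint64_t entry = preallocated_l2_entry(0x10000);   /* hypothetical offset */

    printf("L2 entry: %#llx (zero flag %s)\n",
           (unsigned long long)entry,
           (entry & QCOW_OFLAG_ZERO) ? "set" : "clear");
    return 0;
}
```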
_version >= 3.
I'll fix it.
On Fri, May 11, 2018 at 9:41 PM Eric Blake wrote:
> On 05/11/2018 07:37 AM, Ivan Ren wrote:
> > Create a qcow2 directly on a bare block device with the
> > "-o preallocation=metadata" option. When reading this qcow2, it will
> > return pre-e
Create a qcow2 directly on a bare block device with the
"-o preallocation=metadata" option. When reading this qcow2, it will
return pre-existing data from the block device, and this may lead to
data leakage. This patch adds QCOW_OFLAG_ZERO to all preallocated
L2 entries to avoid this problem.
Signed-of
underlying protocol might already
> have in place?
Yes, that sounds good. Always passing QCOW_OFLAG_ZERO during preallocation
causes no problems and guarantees no garbage will be read when preallocating
metadata for a qcow2 on any underlying device.
I will send a v2 patch.
Thanks.
On Thu, May 10, 2018 at
ping for review
On Tue, May 8, 2018 at 8:27 PM Ivan Ren wrote:
> Create a qcow2 directly on a bare block device with the
> "-o preallocation=metadata" option. When reading this qcow2, it will
> return dirty data from the block device. This patch adds QCOW_OFLAG_ZERO
> for all preall
Create a qcow2 directly on a bare block device with the
"-o preallocation=metadata" option. When reading this qcow2, it will
return dirty data from the block device. This patch adds QCOW_OFLAG_ZERO
to all preallocated L2 entries if the underlying device is a bare
block device.
Signed-off-by: Ivan Ren
qemu-img info with a block device that has a qcow2 format always
returns 0 for disk size, and this cannot reflect the qcow2 size
and the used space of the block device. This patch returns the
allocated size of the qcow2 as the disk size.
Signed-off-by: Ivan Ren
---
block/qcow2.c | 54
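One plausible way to compute such an allocated size, sketched standalone: count clusters with a non-zero refcount and multiply by the cluster size. The refcount representation below is a simplified in-memory stand-in, not qcow2's on-disk format, and the function name is hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified in-memory refcounts: one entry per cluster of the image. */
static uint64_t allocated_size(const uint16_t *refcounts,
                               uint64_t nb_clusters,
                               uint64_t cluster_size)
{
    uint64_t used = 0;

    for (uint64_t i = 0; i < nb_clusters; i++) {
        if (refcounts[i] > 0) {
            used += cluster_size;   /* cluster is allocated somewhere */
        }
    }
    return used;
}

int main(void)
{
    uint16_t refcounts[] = { 1, 1, 0, 2, 0, 1 };   /* toy example */

    printf("allocated: %llu bytes\n",
           (unsigned long long)allocated_size(refcounts, 6, 65536));
    return 0;
}
```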
> Yeah, well, then you use qemu-img check to repair it. The refcount
> structure is there to know which clusters are allocated, so we should
> use it when we want to know that and not assume it's broken. If the
> user thinks it may be broken, they are free to run qemu-img check to
> check.
Yea, it
qemu-img info with a block device that has a qcow2 format always
returns 0 for disk size, and this cannot reflect the qcow2 size
and the used space of the block device. This patch returns the
allocated size of the qcow2 as the disk size.
Signed-off-by: Ivan Ren
---
block/qcow2-bitmap.c | 69