Markus Armbruster wrote on 28/06/2010 10:26:47:
> From: Markus Armbruster
> To: qemu-devel@nongnu.org
> Cc: Liran Schour/Haifa/i...@ibmil
> Date: 28/06/2010 10:26
> Subject: Block live migration's use of type hint
>
> Block live migration appears to migrate only bloc
Start transferring dirty blocks during the iterative stage. That will
reduce the time that the guest is suspended.
Signed-off-by: Liran Schour
---
block-migration.c | 135 +++--
1 files changed, 99 insertions(+), 36 deletions(-)
diff --git a
From: Paolo Bonzini
Some places use get_clock directly because they want to access the
rt_clock with nanosecond precision. Add a function to do exactly that
instead of using internal interfaces.
Signed-off-by: Paolo Bonzini
---
qemu-timer.h | 1 +
vl.c | 21 +++--
This will manage a dirty counter for each device and will allow the
dirty counter to be read from above.
Signed-off-by: Liran Schour
---
block.c | 16 ++--
block.h | 1 +
block_int.h | 1 +
3 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/block.c b/block.c
Move to stage3 only when remaining work can be done below max downtime.
Use qemu_get_clock_ns for measuring read performance.
Signed-off-by: Liran Schour
---
block-migration.c | 79 +++--
1 files changed, 70 insertions(+), 9 deletions(-)
diff
blk_mig_save_bulked_block is never called with the sync flag set. Remove the
sync flag. Calculate bulk completion during blk_mig_save_bulked_block.
Remove unused constants.
Signed-off-by: Liran Schour
---
block-migration.c | 61 +++-
1 files changed, 18
block-migration.c | 235 +---
block.c | 16 +++-
block.h | 1 +
block_int.h | 1 +
qemu-timer.h | 1 +
vl.c | 21 +-
6 files changed, 203 insertions(+), 72 deletions(-)
Signed-off-by: Liran Schour
Pierre Riteau wrote on 21/01/2010 20:03:32:
> On 21 Jan 2010, at 16:24, Liran Schour wrote:
>
> > Move to stage3 only when remaining work can be done below max downtime.
> >
> > Changes from v1: remove max iterations. Try to infer storage
> performance and by th
Start transferring dirty blocks during the iterative stage. That will
reduce the time that the guest is suspended.
Changes from v1: remove trailing whitespace and remove the max iterations limit.
Signed-off-by: Liran Schour
---
block-migration.c | 135
Move to stage3 only when remaining work can be done below max downtime.
Changes from v1: remove max iterations. Try to infer storage performance and
from that estimate the remaining work.
Signed-off-by: Liran Schour
---
block-migration.c | 136
This will manage a dirty counter for each device and will allow the
dirty counter to be read from above.
Changes from v1: remove trailing whitespace.
Signed-off-by: Liran Schour
---
block.c | 16 ++--
block.h | 1 +
block_int.h | 1 +
3 files changed, 16 insertions(+), 2
blk_mig_save_bulked_block is never called with the sync flag set. Remove the
sync flag. Calculate bulk completion during blk_mig_save_bulked_block.
Changes from v1: remove trailing whitespace and minor cleanups.
Signed-off-by: Liran Schour
---
block-migration.c | 59
block-migration.c | 244 +++--
block.c | 20 -
block.h | 1 +
block_int.h | 1 +
4 files changed, 181 insertions(+), 85 deletions(-)
Signed-off-by: Liran Schour
Jan Kiszka wrote on 12/01/2010 13:51:09:
> Liran Schour wrote:
> > Move to stage3 only when remaining work can be done below max downtime.
> > To make sure the process will converge we will try only
> MAX_DIRTY_ITERATIONS.
>
> OK, that now explains patch 2. But do
Pierre Riteau wrote on 12/01/2010 11:52:18:
> On 12 Jan 2010, at 09:27, Liran Schour wrote:
>
> > Move to stage3 only when remaining work can be done below max downtime.
> > To make sure the process will converge we will try only
> MAX_DIRTY_ITERATIONS.
> >
>
Move to stage3 only when remaining work can be done below max downtime.
To make sure the process converges, we try at most MAX_DIRTY_ITERATIONS.
Signed-off-by: Liran Schour
---
block-migration.c | 67 +++-
1 files changed, 45 insertions
Start transferring dirty blocks during the iterative stage. That will
reduce the time that the guest is suspended.
Signed-off-by: Liran Schour
---
block-migration.c | 158 +++--
1 files changed, 116 insertions(+), 42 deletions(-)
diff --git a
4 files changed, 181 insertions(+), 85 deletions(-)
Signed-off-by: Liran Schour
This will manage a dirty counter for each device and will allow the
dirty counter to be read from above.
Signed-off-by: Liran Schour
---
block.c | 20
block.h | 1 +
block_int.h | 1 +
3 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/block.c b
blk_mig_save_bulked_block is never called with the sync flag set. Remove the
sync flag. Calculate bulk completion during blk_mig_save_bulked_block.
Signed-off-by: Liran Schour
---
block-migration.c | 63
1 files changed, 24 insertions(+), 39
I want to be able to synchronize the code that runs the live migration
with the code called from the completion callback of async IO. For that
I am using qemu-thread.c (i.e. QemuCond). I see that I have problems
while linking if I do not use --enable-io-thread.
Can someone explain m
I have a weird problem running migrate. I try to run migrate (normal
migration with shared storage) and I get the following error on the guest
after migration completes:
ide: failed opcode was: unknown
hda: task_out_intr: status=0x41 {DriverReady Error }
hda: task_out_intr: error
Jan Kiszka wrote on 26/11/2009 19:24:40:
> +bdrv_write(bs, (addr >> SECTOR_BITS),
> +           buf, block_mig_state->sectors_per_block);
> >>> This synchronous write-back appears to be the reason for an
> >>> unusable migration (or restore f
Jan Kiszka wrote on 26/11/2009 15:53:49:
> > +        qemu_get_buffer(f, buf, BLOCK_SIZE);
> > +        if (bs != NULL) {
> > +
> > +            bdrv_write(bs, (addr >> SECTOR_BITS),
> > +                       buf, block_mig_state->sectors_per_bl
jan.kis...@web.de wrote on 25/11/2009 00:55:57:
> trying to understand the code and fixing some cosmetic issues around
> progress reporting, one potentially performance-relevant question popped
> up:
>
> lir...@il.ibm.com wrote:
> > diff --git a/block-migration.c b/block-migration.c
> > new file m
"Anthony Liguori" wrote on 24/11/2009
16:27:58:
> Jan Kiszka wrote:
> > Oh, indeed, thanks. Due to the fact that this discussion suggested that
> > there are still open issues, I did not even check git.
>
> They still need to be addressed. However, I wanted to do that work in
> the tree vs. out
- Liran
Avi Kivity wrote on 02/11/2009 20:47:34:
> On 11/02/2009 03:40 PM, lir...@il.ibm.com wrote:
> > This series adds support for live migration without shared storage,
> > meaning the storage is copied while migrating. It was tested with KVM.
> > It supports 2 ways to replicate the storage during migr
qemu-devel-bounces+lirans=il.ibm@nongnu.org wrote on 02/11/2009 15:40:25:
> Live migration will work as follows:
> (qemu) migrate -d tcp:0: # for ordinary live migration
> (qemu) migrate -d blk tcp:0: # for live migration with complete
> storage copy
> (qemu) migrate -d blk inc tcp:0:4
qemu-devel-bounces+lirans=il.ibm@nongnu.org wrote on 21/10/2009 20:21:09:
> Anthony Liguori
> Sent by: qemu-devel-bounces+lirans=il.ibm@nongnu.org
> 21/10/2009 20:21
> To: Liran Schour/Haifa/i...@ibmil
> cc: qemu-devel@nongnu.org
- Liran
Anthony Liguori wrote on 21/10/2009 20:27:39:
> Anthony Liguori
> 21/10/2009 20:27
> To: Liran Schour/Haifa/i...@ibmil
> cc: qemu-devel@nongnu.org
> Subject: Re: [Qemu-devel] [PATCH 3/3 v4] Enable migration without
> shared storage
qemu_savevm_state will call all registered components with 3 phases: START,
PART, END. Only the PART phase is iterative.
In the case of storage live migration we have a lot more data to copy than
memory, and usually the dirty rate is much lower than the memory dirty rate.
I thought about adding an iterative p