Liran Schour wrote:
> Move to stage3 only when remaining work can be done below max downtime.
> To make sure the process will converge, we will try only MAX_DIRTY_ITERATIONS iterations.

OK, that now explains patch 2. But do we have such a barrier for memory
migration as well? I don't think so, and I don't think this hard-coded
limit is the right approach. Such a limit should be derived from the
bandwidth, which the user can control at runtime.
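
Roughly what I have in mind, as an untested sketch only (blk_mig_can_converge()
is a made-up name, and I'm assuming migrate_max_downtime() from migration.h is
the user-settable limit):

/* Sketch: switch to stage 3 only when the remaining dirty data can be
 * flushed within the user's max downtime, instead of capping the number
 * of iterations.  Assumes 'rate' and migrate_max_downtime() use the
 * same time unit. */
static int blk_mig_can_converge(uint64_t remaining_bytes, double rate)
{
    return remaining_bytes / rate <= migrate_max_downtime();
}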

> 
> Signed-off-by: Liran Schour <lir...@il.ibm.com>
> ---
>  block-migration.c |   67 +++++++++++++++++++++++++++++++++++-----------------
>  1 files changed, 45 insertions(+), 22 deletions(-)
> 
> diff --git a/block-migration.c b/block-migration.c
> index 90c84b1..9ae04c4 100644
> --- a/block-migration.c
> +++ b/block-migration.c
> @@ -17,6 +17,7 @@
>  #include "qemu-queue.h"
>  #include "monitor.h"
>  #include "block-migration.h"
> +#include "migration.h"
>  #include <assert.h>
>  
>  #define BLOCK_SIZE (BDRV_SECTORS_PER_DIRTY_CHUNK << BDRV_SECTOR_BITS)
> @@ -30,6 +31,7 @@
>  #define BLOCKS_READ_CHANGE 100
>  #define INITIAL_BLOCKS_READ 100
>  #define MAX_DIRTY_ITERATIONS 100
> +#define DISK_RATE (30 << 20) //30 MB/sec

IMHO a bad idea to hard-code this (e.g. my disk managed only 6 MB/s the
last time I tried). Measure the rate at runtime, just like the
mem-migration code does.
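
Something along these lines would do, purely as illustration
(blk_mig_save_dirty_blocks() is a placeholder for whatever the iteration
actually sends, and I'm assuming qemu_get_clock(rt_clock) returns
milliseconds here):

/* Sketch: measure the achieved rate per iteration, as the RAM migration
 * loop does, rather than assuming a fixed DISK_RATE. */
static double blk_mig_measure_rate(QEMUFile *f)
{
    int64_t t0 = qemu_get_clock(rt_clock);          /* milliseconds */
    uint64_t sent = blk_mig_save_dirty_blocks(f);   /* bytes written */
    int64_t elapsed = qemu_get_clock(rt_clock) - t0;

    if (elapsed <= 0) {
        elapsed = 1;   /* guard against a zero-length interval */
    }
    return (double)sent / elapsed;   /* bytes per millisecond */
}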

<skipping the rest of the patch>

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

