On 11/17/2015 01:26 PM, Dr. David Alan Gilbert wrote:
> There are a couple of things I don't understand about this:
>   1) How does the source fill its hashes table?  Is it just given the same
>      dump file as the destination?
>   2) Why does RAM_SAVE_FLAG_PAGE_HASH exist; if you're sending the full page
>      to the destination, why do we also send the hash?

1. The migration source is assumed to have the same dump file as the
destination. The design was optimized for the case of ping-pong
migrations over a SAN, where the checkpoint file is always available to
both sides. We also have proof-of-concept code that transfers the
available hashes from the migration destination to the source over the
network, but it didn't make it into these patches.
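
For illustration, the source-side table could be filled by hashing the
checkpoint file page by page, along these lines (the names, the 4 KiB page
size and the FNV-1a hash are placeholders for the example, not the actual
patch code):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* Illustrative 64-bit page hash (FNV-1a); any fast hash would do. */
    static uint64_t page_hash(const uint8_t *page)
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        for (size_t i = 0; i < PAGE_SIZE; i++) {
            h = (h ^ page[i]) * 0x100000001b3ULL;
        }
        return h;
    }

    /* Fill a per-page hash table by scanning the shared checkpoint/dump
     * file; 'hashes' holds one entry per page covered by the file. */
    static int fill_hashes_from_dump(const char *dump_path,
                                     uint64_t *hashes, size_t nr_pages)
    {
        FILE *f = fopen(dump_path, "rb");
        uint8_t page[PAGE_SIZE];

        if (!f) {
            return -1;
        }
        for (size_t i = 0; i < nr_pages; i++) {
            if (fread(page, 1, PAGE_SIZE, f) != PAGE_SIZE) {
                fclose(f);
                return -1;
            }
            hashes[i] = page_hash(page);
        }
        fclose(f);
        return 0;
    }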
2. We send the hash to spare the receiving side the hash calculation and
save some CPU time there. The flag can be removed, as I don't think the
benefit it provides is big.
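
Roughly, the record on the wire looks like this (a sketch only, not the
exact code from the patches; it assumes QEMU's qemu_put_be64/qemu_put_buffer
stream helpers and a layout of address+flags, hash, page data):

    /* Sender: emit the page together with its hash so the destination can
     * record it without rehashing.  Sketch of code in migration/ram.c. */
    static void save_page_with_hash(QEMUFile *f, uint64_t addr,
                                    const uint8_t *page, uint64_t hash)
    {
        qemu_put_be64(f, addr | RAM_SAVE_FLAG_PAGE_HASH);
        qemu_put_be64(f, hash);
        qemu_put_buffer(f, page, TARGET_PAGE_SIZE);
    }

    /* Receiver: store the hash as received; no recalculation needed. */
    static void load_page_with_hash(QEMUFile *f, uint8_t *host_page,
                                    uint64_t *hash_slot)
    {
        *hash_slot = qemu_get_be64(f);
        qemu_get_buffer(f, host_page, TARGET_PAGE_SIZE);
    }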

> I think there's a problem here that given the source is still running its
> CPU and changing memory; it can be writing to the page at the same time, so
> the page you send might not match the hash you send; we're guaranteed to
> resend the page again if it was written to, but that still doesn't make
> these two things match; although as I say above I'm not sure why
> SAVE_FLAG_PAGE_HASH exists.

This is true. In this case, we will simply drop the RAM_SAVE_FLAG_PAGE_HASH flag.
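
Without the flag the destination simply hashes whatever page data it actually
received, so a page the source modified after hashing can never leave a stale
entry in the table. A sketch, reusing the assumed helpers from above:

    /* Receiver without RAM_SAVE_FLAG_PAGE_HASH: recompute the hash from the
     * received page, so it always matches the memory written here. */
    static void load_page_and_rehash(QEMUFile *f, uint8_t *host_page,
                                     uint64_t *hash_slot)
    {
        qemu_get_buffer(f, host_page, TARGET_PAGE_SIZE);
        *hash_slot = page_hash(host_page); /* page_hash() as sketched above */
    }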

> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
With best regards,
Bohdan Trach
