* Thomas Huth (th...@redhat.com) wrote:

> That would mean a regression compared to what we have today. Currently,
> the ballooning is working OK for 64k guests on a 64k ppc host - rather
> by chance than on purpose, but it's working. The guest is always sending
> all the 4k fragments of a 64k page, and QEMU is trying to call madvise()
> for every one of them, but the kernel is ignoring madvise() on
> non-64k-aligned addresses, so we end up with a situation where the
> madvise() frees a whole 64k page which is also declared as free by the
> guest.

I wouldn't worry about migrating your fragment map; but I wonder if it
needs to be that complex - does the guest normally do something saner,
like sending the 4k pages in order, so that you only have to track the
last page it tried rather than keeping a full map?

A side question is whether the behaviour seen by virtio_balloon_handle_output
is always actually the full 64k page; it calls balloon_page() once
for each message/element - but if all of those elements add back up to the
full page, perhaps it makes more sense to reassemble it there?

> I think we should either take this patch as it is right now (without
> adding extra code for migration) and later update it to the bitmap code
> by Jitendra Kolhe, or omit it completely (leaving 4k guests broken) and
> fix it properly after the bitmap code has been applied. But disabling
> the balloon code for 64k guests on 64k hosts completely does not sound
> very appealing to me. What do you think?

Yeah, I agree; your existing code should work and I don't think we should
break 64k-on-64k.

Dave
> 
>  Thomas
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
