On Wed, 11 Sep 2019 at 11:36, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
>
> * Beata Michalska (beata.michal...@linaro.org) wrote:
> > On Tue, 10 Sep 2019 at 14:16, Dr. David Alan Gilbert
> > <dgilb...@redhat.com> wrote:
> > >
> > > * Beata Michalska (beata.michal...@linaro.org) wrote:
> > > > On Tue, 10 Sep 2019 at 12:26, Dr. David Alan Gilbert
> > > > <dgilb...@redhat.com> wrote:
> > > > >
> > > > > * Beata Michalska (beata.michal...@linaro.org) wrote:
> > > > > > Switch to ram block writeback for pmem migration.
> > > > > >
> > > > > > Signed-off-by: Beata Michalska <beata.michal...@linaro.org>
> > > > > > ---
> > > > > >  migration/ram.c | 5 +----
> > > > > >  1 file changed, 1 insertion(+), 4 deletions(-)
> > > > > >
> > > > > > diff --git a/migration/ram.c b/migration/ram.c
> > > > > > index b01a37e7ca..8ea0bd63fc 100644
> > > > > > --- a/migration/ram.c
> > > > > > +++ b/migration/ram.c
> > > > > > @@ -33,7 +33,6 @@
> > > > > >  #include "qemu/bitops.h"
> > > > > >  #include "qemu/bitmap.h"
> > > > > >  #include "qemu/main-loop.h"
> > > > > > -#include "qemu/pmem.h"
> > > > > >  #include "xbzrle.h"
> > > > > >  #include "ram.h"
> > > > > >  #include "migration.h"
> > > > > > @@ -4064,9 +4063,7 @@ static int ram_load_cleanup(void *opaque)
> > > > > >      RAMBlock *rb;
> > > > > >
> > > > > >      RAMBLOCK_FOREACH_NOT_IGNORED(rb) {
> > > > > > -        if (ramblock_is_pmem(rb)) {
> > > > > > -            pmem_persist(rb->host, rb->used_length);
> > > > > > -        }
> > > > > > +        qemu_ram_block_writeback(rb);
> > > > >
> > > > > ACK for migration
> > > > >
> > > > > Although I do worry that if you really have pmem hardware, is it
> > > > > better to fail the migration if you don't have libpmem available?
> > > >
> > > > According to the PMDK man page, pmem_persist is supposed to be
> > > > equivalent to msync.
> > >
> > > OK, but you do define qemu_ram_block_writeback to fall back to fdatasync;
> > > so would that be too little?
> >
> > Actually it shouldn't. All of them end up in vfs_fsync_range; msync will
> > narrow the range.
> > fdatasync will trigger the same call, just with a wider range. At
> > least for Linux.
> > fdatasync will also fall back to fsync if it is not available.
> > So it goes from the best-case scenario (in terms of performance and
> > range of memory to be synced) towards the worst-case one.
> >
> > I should probably double-check earlier versions of Linux.
> > I'll also try to verify that for other host variants.
>
> Well, I guess it should probably follow whatever POSIX says; it's OK to
> make Linux-specific assumptions for Linux-specific bits - but you can't
> rely on code examination to guarantee it'll be right for other
> platforms, especially if this is in code ifdef'd for portability.
> Also it needs a comment explaining why it's safe, to avoid someone else
> asking this question.

I will definitely address that in the next version. Will just wait a bit
to potentially gather more input on the series.
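For reference, this is roughly the fallback ordering I have in mind - a
sketch only, to illustrate the chain discussed above, not the actual QEMU
helper; the function name and the is_pmem/fd parameters are placeholders:

/*
 * Sketch of the writeback fallback chain: pmem_persist() when the block
 * is backed by real pmem and libpmem is available, msync() on the mapped
 * range otherwise, fdatasync() on the backing fd as the last resort.
 * Illustrative only - not the actual helper added by this series.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>      /* msync(), MS_SYNC */
#include <unistd.h>        /* fdatasync() */

#ifdef CONFIG_LIBPMEM
#include <libpmem.h>       /* pmem_persist() */
#endif

static void ram_writeback_sketch(void *host, size_t len, int fd, bool is_pmem)
{
#ifdef CONFIG_LIBPMEM
    if (is_pmem) {
        /* Best case: flush the pmem range directly, no syscall needed. */
        pmem_persist(host, len);
        return;
    }
#endif
    /* Next best: sync just the mapped range (host is page-aligned for
     * RAM blocks, as msync requires). */
    if (msync(host, len, MS_SYNC) == 0) {
        return;
    }
    /* Worst case: flush the whole backing file. */
    if (fd >= 0 && fdatasync(fd) != 0) {
        perror("ram writeback");
    }
}

On Linux both the msync and the fdatasync path end up in vfs_fsync_range,
which is the point made above; the difference is only how wide a range
gets flushed.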
> > BTW: Thank you for having a look at the changes.
>
> No problem.
>

Thanks again.

BR
Beata

> Dave
>
> > BR
> > Beata
> >
> > > > It's just more performant. So in case of real pmem hardware it should
> > > > be all good.
> > > >
> > > > [http://pmem.io/pmdk/manpages/linux/v1.2/libpmem.3.html]
> > >
> > > Dave
> > > >
> > > > BR
> > > > Beata
> > > > >
> > > > > Dave
> > > > > >
> > > > > >      }
> > > > > >
> > > > > >      xbzrle_load_cleanup();
> > > > > > --
> > > > > > 2.17.1
> > > > > >
> > > > > --
> > > > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> > > --
> > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK