Hi

----- Original Message -----
> * Marc-André Lureau (marcandre.lur...@redhat.com) wrote:
> > If the backend is of type VHOST_BACKEND_TYPE_USER, allocate
> > shareable memory.
> >
> > Note: vhost_log_get() can use a global "vhost_log" that can be shared by
> > several vhost devices. We may want instead a common shareable log and a
> > common non-shareable one.
> >
> > Signed-off-by: Marc-André Lureau <marcandre.lur...@redhat.com>
> > ---
> >  hw/virtio/vhost.c         | 38 +++++++++++++++++++++++++++++++-------
> >  include/hw/virtio/vhost.h |  3 ++-
> >  2 files changed, 33 insertions(+), 8 deletions(-)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 2712c6f..862e786 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -18,6 +18,7 @@
> >  #include "qemu/atomic.h"
> >  #include "qemu/range.h"
> >  #include "qemu/error-report.h"
> > +#include "qemu/memfd.h"
> >  #include <linux/vhost.h>
> >  #include "exec/address-spaces.h"
> >  #include "hw/virtio/virtio-bus.h"
> > @@ -286,20 +287,34 @@ static uint64_t vhost_get_log_size(struct vhost_dev *dev)
> >      }
> >      return log_size;
> >  }
> > -static struct vhost_log *vhost_log_alloc(uint64_t size)
> > +
> > +static struct vhost_log *vhost_log_alloc(uint64_t size, bool share)
> >  {
> > -    struct vhost_log *log = g_malloc0(sizeof *log + size * sizeof(*(log->log)));
> > +    struct vhost_log *log;
> > +    uint64_t logsize = size * sizeof(*(log->log));
> > +    int fd = -1;
> > +
> > +    log = g_new0(struct vhost_log, 1);
> > +    if (share) {
> > +        log->log = qemu_memfd_alloc("vhost-log", logsize,
> > +                                    F_SEAL_GROW|F_SEAL_SHRINK|F_SEAL_SEAL, &fd);
> > +        memset(log->log, 0, logsize);
>
> qemu_memfd_alloc can return NULL can't it - so that needs checking?
>
> > +    } else {
> > +        log->log = g_malloc0(logsize);
>
> I know the old code also used g_malloc0, but if the log isn't 'small'
> then g_try_malloc0 is possibly safer and properly return errors
> if it can't be allocated.
Yeah, I agree it's better to check the return value here (as you pointed
out, I followed the existing pattern). Maybe we are just screwed if it
happens; live migration shouldn't succeed if the log can't be allocated
properly, imho. What's your take on this, Michael?

cheers
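
PS: just to illustrate what I have in mind, a checked version could look
roughly like the sketch below. This is only a sketch, not a follow-up
patch: it assumes qemu_memfd_alloc() returns NULL on failure, that
error_report() is acceptable here (qemu/error-report.h is already
included), and that log->fd / log->size / log->refcnt are set at the end
as in the rest of the series (the hunk above is truncated before that
point).

static struct vhost_log *vhost_log_alloc(uint64_t size, bool share)
{
    struct vhost_log *log;
    uint64_t logsize = size * sizeof(*(log->log));
    int fd = -1;

    log = g_new0(struct vhost_log, 1);
    if (share) {
        /* qemu_memfd_alloc() can fail (no memfd support, sealing
         * refused, ...), so bail out instead of dereferencing NULL. */
        log->log = qemu_memfd_alloc("vhost-log", logsize,
                                    F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_SEAL,
                                    &fd);
        if (!log->log) {
            error_report("Failed to allocate shareable vhost log");
            g_free(log);
            return NULL;
        }
        memset(log->log, 0, logsize);
    } else {
        /* g_try_malloc0() returns NULL on failure instead of aborting,
         * so a large log can be reported as an error by the caller. */
        log->log = g_try_malloc0(logsize);
        if (!log->log) {
            error_report("Failed to allocate vhost log");
            g_free(log);
            return NULL;
        }
    }

    /* Assumed to match the rest of the series (truncated above). */
    log->size = size;
    log->refcnt = 1;
    log->fd = fd;

    return log;
}

Callers would then have to handle a NULL return, presumably failing the
device start / migration setup rather than carrying on.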