On Wed, Aug 06, 2014 at 10:49:23PM +1000, Peter Crosthwaite wrote:
> You subject line is excessively long. How about just "improve RAM file
> error handling" and elaborate in a commit msg para?
After splitting, both patches improve RAM file error handling. I'll
reword the subjects and drop this one.

> 
> On Wed, Aug 6, 2014 at 3:36 PM, Hu Tao <hu...@cn.fujitsu.com> wrote:
> > This patch fixes two problems of memory-backend-file:
> >
> 
> It looks like two self contained changes. Any reason to not split?
> 
> > 1. If user adds a memory-backend-file object using object_add command,
> >    specifying a non-existing directory for property mem-path, qemu
> >    will core dump with message:
> >
> >      /nonexistingdir: No such file or directory
> >      Bad ram offset fffffffffffff000
> >      Aborted (core dumped)
> >
> >    with this patch, qemu reports error message like:
> >
> >      qemu-system-x86_64: -object
> >      memory-backend-file,mem-path=/nonexistingdir,id=mem-file0,size=128M:
> >      failed to stat file /nonexistingdir: No such file or directory
> >
> > 2. If user adds a memory-backend-file object using object_add command,
> >    specifying a size that is less than huge page size, qemu
> >    will core dump with message:
> >
> >      Bad ram offset fffffffffffff000
> >      Aborted (core dumped)
> >
> >    with this patch, qemu reports error message like:
> >
> >      qemu-system-x86_64: -object
> >      memory-backend-file,mem-path=/hugepages,id=mem-file0,size=1M: memory
> >      size 0x100000 should be euqal or larger than huge page size 0x200000
> 
> "equal".
> 
> >
> > Signed-off-by: Hu Tao <hu...@cn.fujitsu.com>
> > ---
> >  exec.c | 21 ++++++++++++---------
> >  1 file changed, 12 insertions(+), 9 deletions(-)
> >
> > diff --git a/exec.c b/exec.c
> > index 7e60a44..6512820 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -996,7 +996,7 @@ void qemu_mutex_unlock_ramlist(void)
> >
> >  #define HUGETLBFS_MAGIC       0x958458f6
> >
> > -static long gethugepagesize(const char *path)
> > +static long gethugepagesize(const char *path, Error **errp)
> >  {
> >      struct statfs fs;
> >      int ret;
> > @@ -1006,7 +1006,7 @@ static long gethugepagesize(const char *path)
> >      } while (ret != 0 && errno == EINTR);
> >
> >      if (ret != 0) {
> > -        perror(path);
> > +        error_setg_errno(errp, errno, "failed to get size of file %s",
> > path);
> 
> I think your error message is imprecise. It's not the file size you
> are trying to get its the page size for that file (or its underlying
> file system I think).

changed to: failed to get page size of file

> >          return 0;
> >      }
> >
> > @@ -1024,17 +1024,20 @@ static void *file_ram_alloc(RAMBlock *block,
> >      char *filename;
> >      char *sanitized_name;
> >      char *c;
> > -    void *area;
> > +    void *area = NULL;
> >      int fd;
> > -    unsigned long hpagesize;
> > +    uint64_t hpagesize;
> >
> > -    hpagesize = gethugepagesize(path);
> > -    if (!hpagesize) {
> > +    hpagesize = gethugepagesize(path, errp);
> > +    if (errp && *errp) {
> 
> More flow control dependent on non NULL errp. I think you want a
> local_err for safety here.

Okay.

> >          goto error;
> >      }
> >
> >      if (memory < hpagesize) {
> > -        return NULL;
> > +        error_setg(errp, "memory size 0x" RAM_ADDR_FMT " must be euqal to "
> 
> "equal"

thanks!
> 
> Regards,
> Peter
> 
> > +                   "or larger than huge page size 0x%" PRIx64,
> > +                   memory, hpagesize);
> > +        goto error;
> >      }
> >
> >      if (kvm_enabled() && !kvm_has_sync_mmu()) {
> > @@ -1094,8 +1097,8 @@ static void *file_ram_alloc(RAMBlock *block,
> >      return area;
> >
> >  error:
> > -    if (mem_prealloc) {
> > -        exit(1);
> > +    if (area && area != MAP_FAILED) {
> > +        munmap(area, memory);
> >      }
> >      return NULL;
> >  }
> > --
> > 1.9.3
> >
> >