On Wed, Jun 19, 2019 at 03:19:00PM +0530, Pankaj Gupta wrote:
> This patch series implements a "virtio pmem" device. "virtio pmem"
> is a persistent memory (nvdimm) device in the guest which allows
> bypassing the guest page cache. It also implements a VIRTIO based
> asynchronous flush mechanism. Details of the project idea for the
> 'virtio pmem' flushing interface are shared in [2] & [3].
>
> This patchset contains the Qemu device emulation, tested with the
> guest kernel driver [1]. The series is based on David's memory
> device refactoring [5] work, with a modified version of my initial
> virtio pmem [4] series.
>
> Usage:
> ./qemu -name test -machine pc -m 8G,slots=240,maxmem=20G
>        -object memory-backend-file,id=mem1,share=on,mem-path=test.img,size=4G
>        -device virtio-pmem-pci,memdev=mem1,id=nv1
>
Hi, Pankaj

I tried this series with the v14 kernel driver, but ran into some errors
when using it. I am not sure whether my configuration is wrong.

The qemu command line is:

  -object memory-backend-file,id=mem1,share=on,mem-path=/dev/dax0.0,size=1G,align=2M
  -device virtio-pmem-pci,memdev=mem1,id=nv1

The guest boots up and I can see the /dev/pmem0 device. But when I try to
partition this device, I get the following error:

  # parted /dev/pmem0 mklabel gpt
  Warning: Error fsyncing/closing /dev/pmem0: Input/output error

I also see an error when running "ndctl list":

  libndctl: __sysfs_device_parse: ndctl0: add_dev() failed

Would you mind letting me know which part I got wrong?

> (qemu) info memory-devices
> Memory device [virtio-pmem]: "nv1"
>   memaddr: 0x240000000
>   size: 4294967296
>   memdev: /objects/mem1
>
> Implementation is divided into two parts: the new virtio pmem guest
> driver and the qemu code changes for the new virtio pmem
> paravirtualized device. In this series we are sharing the Qemu
> device emulation.
>
> 1. Guest virtio-pmem kernel driver
> ----------------------------------
>    - Reads the persistent memory range from the paravirt device and
>      registers it with 'nvdimm_bus'.
>    - The 'nvdimm/pmem' driver uses this information to allocate the
>      persistent memory region and set up filesystem operations on
>      the allocated memory.
>    - The virtio pmem driver implements an asynchronous flushing
>      interface to flush from guest to host.
>
> 2. Qemu virtio-pmem device
> --------------------------
>    - Creates a virtio pmem device and exposes a memory range to the
>      KVM guest.
>    - On the host side this is file-backed memory which acts as
>      persistent memory.
>    - The Qemu-side flush uses the aio thread pool APIs and virtio
>      for asynchronous handling of multiple guest requests.
>
> Virtio-pmem security implications and suggested countermeasures:
> -----------------------------------------------------------------
>
> In a previous posting of the kernel driver, there was a discussion
> [7] on the possible implications of page cache side channel attacks
> with virtio pmem. After a thorough analysis of the known side
> channel attacks, the suggestions are:
>
> - Everything depends on how the host backing image file is mapped
>   into the guest address space.
>
> - In the virtio-pmem device emulation, a shared mapping is used by
>   default to map the host backing file. It is recommended to use a
>   separate backing file on the host side for every guest. This
>   prevents any possibility of executing common code from multiple
>   guests and any chance of inferring guest local data based on
>   execution time.
>
> - If the backing file is required to be shared among multiple
>   guests, it is recommended not to support host page cache eviction
>   commands from the guest driver. This avoids any possibility of
>   inferring guest local data or host data from another guest.
>
> - The proposed device specification [6] for the virtio-pmem device
>   contains details of the possible security implications and the
>   suggested countermeasures for device emulation.
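By the way, to check my understanding of the flush path described above:
a guest flush ends up as a virtio request, and qemu fsync()s the backing
file from a thread pool worker before completing the request, so the
guest is only acknowledged once the data has reached stable storage.
Below is a minimal standalone sketch of that idea; it is not the code
from hw/virtio/virtio-pmem.c and uses plain pthreads instead of qemu's
thread-pool and virtio APIs, with illustrative names only:

/*
 * Sketch only: offload fsync() of the backing file to a worker
 * thread and complete the (stand-in) virtio request only afterwards.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct flush_req {
    int backing_fd;   /* fd of the host file backing the pmem region */
    int result;       /* 0 on success, -1 if fsync failed            */
};

/* Runs in a worker thread, so the main loop is never blocked. */
static void *flush_worker(void *opaque)
{
    struct flush_req *req = opaque;

    req->result = fsync(req->backing_fd);
    return NULL;
}

/* Stand-in for completing the virtio request back to the guest. */
static void complete_request(struct flush_req *req)
{
    printf("flush completed, result = %d\n", req->result);
}

int main(void)
{
    struct flush_req req = { .backing_fd = open("test.img", O_RDWR) };
    pthread_t worker;

    if (req.backing_fd < 0) {
        perror("open");
        return 1;
    }

    /* A "guest" flush request arrived; hand it to a worker thread. */
    pthread_create(&worker, NULL, flush_worker, &req);

    /* ... the main loop would keep serving other requests here ... */

    pthread_join(&worker, NULL);
    complete_request(&req);    /* ack the flush to the guest */
    close(req.backing_fd);
    return 0;
}

(Compile with "gcc -pthread" if you want to play with it; the point is
just that the guest's flush is not completed until the host-side
fsync() has returned.)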
> Changes from PATCH v1:
>  - Change proposed version from qemu 4.0 to 4.1 - Eric
>  - Remove virtio queue_add from unrealize function - Cornelia
>
> [1] https://lkml.org/lkml/2019/6/12/624
> [2] https://www.spinics.net/lists/kvm/msg149761.html
> [3] https://www.spinics.net/lists/kvm/msg153095.html
> [4] https://marc.info/?l=linux-kernel&m=153572228719237&w=2
> [5] https://marc.info/?l=qemu-devel&m=153555721901824&w=2
> [6] https://lists.oasis-open.org/archives/virtio-dev/201903/msg00083.html
> [7] https://lkml.org/lkml/2019/1/9/1191
>
> Pankaj Gupta (3):
>   virtio-pmem: add virtio device
>   virtio-pmem: sync linux headers
>   virtio-pci: proxy for virtio-pmem
>
> David Hildenbrand (4):
>   virtio-pci: Allow to specify additional interfaces for the base type
>   hmp: Handle virtio-pmem when printing memory device infos
>   numa: Handle virtio-pmem in NUMA stats
>   pc: Support for virtio-pmem-pci
>
>  hmp.c                                        |  27 ++-
>  hw/i386/Kconfig                              |   1
>  hw/i386/pc.c                                 |  72 ++++++++++
>  hw/virtio/Kconfig                            |  10 +
>  hw/virtio/Makefile.objs                      |   2
>  hw/virtio/virtio-pci.c                       |   1
>  hw/virtio/virtio-pci.h                       |   1
>  hw/virtio/virtio-pmem-pci.c                  | 131 ++++++++++++++++++
>  hw/virtio/virtio-pmem-pci.h                  |  34 ++++
>  hw/virtio/virtio-pmem.c                      | 189 +++++++++++++++++++++++++++
>  include/hw/pci/pci.h                         |   1
>  include/hw/virtio/virtio-pmem.h              |  49 +++++++
>  include/standard-headers/linux/virtio_ids.h  |   1
>  include/standard-headers/linux/virtio_pmem.h |  35 +++++
>  numa.c                                       |  24 +--
>  qapi/misc.json                               |  28 +++-
>  16 files changed, 580 insertions(+), 26 deletions(-)

--
Wei Yang
Help you, Help me