On Fri, Sep 07, 2018 at 10:39:16AM +0100, Burakov, Anatoly wrote:
> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> > Recently some memory APIs were introduced to allow users to
> > get the file descriptor and offset for each memory segment.
> > We can leverage those APIs to get rid of the /proc magic on
> > memory table preparation in vhost-user backend.
> > 
> > Signed-off-by: Tiwei Bie <tiwei....@intel.com>
> > ---
> 
> <snip>
> 
> > -	for (i = 0; i < num; ++i) {
> > -		mr = &msg->payload.memory.regions[i];
> > -		mr->guest_phys_addr = huges[i].addr; /* use vaddr! */
> > -		mr->userspace_addr = huges[i].addr;
> > -		mr->memory_size = huges[i].size;
> > -		mr->mmap_offset = 0;
> > -		fds[i] = open(huges[i].path, O_RDWR);
> > +	if (rte_memseg_get_fd_offset_thread_unsafe(ms, &offset) < 0) {
> > +		PMD_DRV_LOG(ERR, "Failed to get offset, ms=%p rte_errno=%d",
> > +			ms, rte_errno);
> > +		return -1;
> > +	}
> > +
> > +	start_addr = (uint64_t)(uintptr_t)ms->addr;
> > +	end_addr = start_addr + ms->len;
> > +
> > +	for (i = 0; i < wa->region_nr; i++) {
> 
> There has to be a better way than to run this loop on every segment.
> Maybe store the last-used region, and only do a region lookup if there's
> a mismatch? Generally, in single-file segments mode, you'll get lots of
> segments from one memseg list one after the other, so you will only need
> to do a lookup when the memseg list changes, instead of on each segment.
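
For illustration, here is a minimal sketch of the "remember the last-used
region" idea suggested above. The names (struct walk_ctx, last_region_idx,
update_region, MAX_REGIONS) are hypothetical and not part of the actual
patch; it only shows the fast-path/slow-path split:

/*
 * Sketch only, not the actual patch code. The walk callback first checks
 * the region it matched for the previous memseg, and falls back to a full
 * scan of the known regions only when the fd no longer matches.
 */
#include <stdint.h>
#include <rte_memory.h>

#define MAX_REGIONS 8	/* stands in for VHOST_MEMORY_MAX_NREGIONS */

struct region {
	uint64_t userspace_addr;
	uint64_t memory_size;
};

struct walk_ctx {
	struct region regions[MAX_REGIONS];
	int fds[MAX_REGIONS];
	int region_nr;
	int last_region_idx;	/* -1 until the first region is recorded */
};

static int
update_region(const struct rte_memseg_list *msl __rte_unused,
	      const struct rte_memseg *ms, void *arg)
{
	struct walk_ctx *ctx = arg;
	uint64_t start = (uint64_t)(uintptr_t)ms->addr;
	uint64_t end = start + ms->len;
	int fd, i;

	fd = rte_memseg_get_fd_thread_unsafe(ms);
	if (fd < 0)
		return -1;

	/* Fast path: most memsegs share the fd of the previous one. */
	i = ctx->last_region_idx;
	if (i >= 0 && ctx->fds[i] == fd)
		goto found;

	/* Slow path: scan the regions already recorded for this fd. */
	for (i = 0; i < ctx->region_nr; i++)
		if (ctx->fds[i] == fd)
			goto found;

	/* New fd: start a new region. */
	if (ctx->region_nr >= MAX_REGIONS)
		return -1;
	i = ctx->region_nr++;
	ctx->fds[i] = fd;
	ctx->regions[i].userspace_addr = start;
	ctx->regions[i].memory_size = 0;

found:
	if (ctx->regions[i].userspace_addr + ctx->regions[i].memory_size < end)
		ctx->regions[i].memory_size = end - ctx->regions[i].userspace_addr;
	ctx->last_region_idx = i;
	return 0;
}

Registered with rte_memseg_walk(update_region, &ctx) (with last_region_idx
initialized to -1), this only rescans the region array when the fd changes.
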
We may have holes in one memseg list due to dynamic free. Do you mean
we just need to do rte_memseg_contig_walk() and we can assume that the
fds of the contiguous memsegs will be the same? (A rough sketch of that
approach is appended after the quoted text at the end of this mail.)

> 
> > +		if (wa->fds[i] != fd)
> > +			continue;
> > +
> > +		mr = &wa->vm->regions[i];
> > +
> > +		if (mr->userspace_addr + mr->memory_size < end_addr)
> > +			mr->memory_size = end_addr - mr->userspace_addr;
> > +
> 
> <snip>
> 
> >  	int fds[VHOST_MEMORY_MAX_NREGIONS];
> >  	int fd_num = 0;
> > -	int i, len;
> > +	int len;
> >  	int vhostfd = dev->vhostfd;
> >  	RTE_SET_USED(m);
> > @@ -364,10 +337,6 @@ vhost_user_sock(struct virtio_user_dev *dev,
> >  		return -1;
> >  	}
> > -	if (req == VHOST_USER_SET_MEM_TABLE)
> > -		for (i = 0; i < fd_num; ++i)
> > -			close(fds[i]);
> > -
> 
> You're sharing fd's - presumably the other side of this is in a
> different process, so it's safe to close these fd's there?

The above code belongs to virtio-user, and it will close the fds obtained
from rte_memseg_get_fd_thread_unsafe().

Below is the code which will close these fds on the other side
(i.e. the vhost-user process):

https://github.com/DPDK/dpdk/blob/3605968c2fa7/lib/librte_vhost/vhost_user.c#L805
https://github.com/DPDK/dpdk/blob/3605968c2fa7/lib/librte_vhost/vhost_user.c#L97

> > 
> >  	if (need_reply) {
> >  		if (vhost_user_read(vhostfd, &msg) < 0) {
> >  			PMD_DRV_LOG(ERR, "Received msg failed: %s",
> > 
> 
> 
> --
> Thanks,
> Anatoly
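
For reference, a minimal sketch of the rte_memseg_contig_walk() variant
asked about above, under the assumption (which is exactly the open
question) that all memsegs in one contiguous chunk share the same fd, so
the fd and offset of the first memseg describe the whole chunk. The names
region_ctx and add_region() are hypothetical helpers, not the actual
patch code:

#include <stdint.h>
#include <stddef.h>
#include <rte_memory.h>

struct region_ctx {
	int region_nr;	/* plus whatever is needed to build the memory table */
};

/* Hypothetical helper: record one vhost memory region. */
static int
add_region(struct region_ctx *ctx, int fd, size_t offset,
	   uint64_t addr, size_t len)
{
	(void)fd; (void)offset; (void)addr; (void)len;
	ctx->region_nr++;
	return 0;
}

static int
contig_chunk_cb(const struct rte_memseg_list *msl __rte_unused,
		const struct rte_memseg *ms, size_t len, void *arg)
{
	struct region_ctx *ctx = arg;
	size_t offset;
	int fd;

	/* fd/offset of the first memseg stand for the whole contiguous chunk */
	fd = rte_memseg_get_fd_thread_unsafe(ms);
	if (fd < 0)
		return -1;
	if (rte_memseg_get_fd_offset_thread_unsafe(ms, &offset) < 0)
		return -1;

	return add_region(ctx, fd, offset, (uint64_t)(uintptr_t)ms->addr, len);
}

This would be invoked as rte_memseg_contig_walk(contig_chunk_cb, &ctx), so
the callback runs once per contiguous chunk instead of once per memseg;
regions sharing the same fd would still have to be merged to stay within
VHOST_MEMORY_MAX_NREGIONS, which the sketch does not attempt.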