05/07/2017 05:02, Guo, Jia:
> hi, thomas
>
> On 7/5/2017 7:45 AM, Thomas Monjalon wrote:
> > Hi,
> >
> > This is an interesting step for hotplug in DPDK.
> >
> > 28/06/2017 13:07, Jeff Guo:
> >> +	netlink_fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
> >
> > It is monitoring the whole system...
> >
> >> +int
> >> +rte_uevent_get(int fd, struct rte_uevent *uevent)
> >> +{
> >> +	int ret;
> >> +	char buf[RTE_UEVENT_MSG_LEN];
> >> +
> >> +	memset(uevent, 0, sizeof(struct rte_uevent));
> >> +	memset(buf, 0, RTE_UEVENT_MSG_LEN);
> >> +
> >> +	ret = recv(fd, buf, RTE_UEVENT_MSG_LEN - 1, MSG_DONTWAIT);
> >
> > ... and it is read from this function called by one driver.
> > It cannot work without a global dispatch.
>
> rte_uevent_connect is called from pci_uio_alloc_resource, so a
> socket is created for each uio device. So I think each driver can
> use it in isolation without affecting the others.
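For reference, a per-device subscription like the one described above
would look roughly like this (a minimal sketch of standard
NETLINK_KOBJECT_UEVENT usage; uevent_socket_open is an illustrative
name, not the patch's actual code):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

/* Sketch only; rte_uevent_connect() in the patch presumably does
 * something similar. Returns the fd on success, -1 on failure. */
static int
uevent_socket_open(void)
{
	struct sockaddr_nl addr;
	int fd;

	fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.nl_family = AF_NETLINK;
	addr.nl_pid = 0;     /* let the kernel assign the port id */
	addr.nl_groups = 1;  /* group 1 carries the kernel uevent broadcasts */

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}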
Ah OK, I missed it.

> > It must be a global mechanism, probably a service core.
> > The question is also to know whether it should be a mandatory
> > service in DPDK or an optional helper?
>
> a global mechanism would be good, but so far, including the mlx driver,
> we all handle the hot plug event in the driver via the app's registered
> callback. Maybe a better global mechanism can be tried in the future,
> but for now it would work for all pci uio devices.

mlx drivers have a special connection to the kernel through the
associated mlx kernel drivers. That's why these PMDs handle the events
in a specific way.

You are adding event handling for UIO. Now we also need VFIO.
I am wondering how it could be better integrated in the bus layer.

> and more, if a pci uio device is to use hot plug, i think it might be
> mandatory.
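To make the "global dispatch" idea more concrete: one shared socket
read from a single service core, fanning events out to per-device
callbacks. All names below (uevent_dispatch, dev_callback,
callback_list) are hypothetical, not existing DPDK API:

#include <string.h>
#include <sys/queue.h>
#include <sys/socket.h>

/* Illustrative sketch only; none of these names exist in DPDK. */
struct dev_callback {
	TAILQ_ENTRY(dev_callback) next;
	const char *devpath;                        /* sysfs path to match */
	void (*cb)(const char *devpath, void *arg);
	void *arg;
};

static TAILQ_HEAD(, dev_callback) callback_list =
	TAILQ_HEAD_INITIALIZER(callback_list);

/* Run periodically from one service core: drain the shared netlink
 * socket and invoke the callback of every device the event mentions. */
static void
uevent_dispatch(int netlink_fd)
{
	char buf[4096];
	struct dev_callback *entry;
	ssize_t len;

	len = recv(netlink_fd, buf, sizeof(buf) - 1, MSG_DONTWAIT);
	if (len <= 0)
		return;
	buf[len] = '\0';

	/* The payload starts with "ACTION@DEVPATH"; match the device
	 * path against the registered callbacks. */
	TAILQ_FOREACH(entry, &callback_list, next) {
		if (strstr(buf, entry->devpath) != NULL)
			entry->cb(entry->devpath, entry->arg);
	}
}

A layout like this would also leave room for a VFIO backend later: the
dispatch loop stays the same, only the event source differs.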