On Tue, Jan 23, 2018 at 09:06:49PM +0800, Wei Wang wrote:
> On 01/23/2018 07:12 PM, Stefan Hajnoczi wrote:
> > On Mon, Jan 22, 2018 at 07:09:06PM +0800, Wei Wang wrote:
> > > On 01/19/2018 09:06 PM, Stefan Hajnoczi wrote:
> > > - Suppose in the future there is also a kernel virtio-vhost-user
> > > driver, as with other PCI devices. Can we unbind the kernel driver
> > > first and then bind the device to the dpdk driver? A normal PCI
> > > device should be able to switch smoothly between the kernel driver
> > > and the dpdk driver.
> >
> > It depends what you mean by "smoothly switch".
> >
> > If you mean whether it's possible to go from a kernel driver to
> > vfio-pci, then the answer is yes.
> >
> > But if the kernel driver has an established vhost-user connection,
> > then it will be closed. This is the same as reconnecting with
> > AF_UNIX vhost-user.
>
> Actually, it's not only the case of switching to testpmd after the
> kernel establishes the connection, but also the case of several runs
> of testpmd. That is, if we run testpmd and then exit it, I think the
> second run of testpmd won't work.
The vhost-user master must reconnect and initialize again (SET_FEATURES,
SET_MEM_TABLE, etc.). Is your master reconnecting after the AF_UNIX
connection is closed? (There is a rough sketch of the re-initialization
sequence at the end of this mail.)

> I'm thinking about caching the received master msgs in QEMU when
> virtio_vhost_user_parse_m2s().

Why is that necessary, and how does QEMU know the cached messages are
still up to date when a new connection is made?

> Btw, I'm trying to run the code, but couldn't bind the
> virtio-vhost-user device to vfio-pci (it reports "Unknown device").
> I'm not sure if it is because the device type is "Unclassified
> device".

You need to use the modified usertools/dpdk-devbind.py from my patch
series inside the guest. Please see:
https://dpdk.org/ml/archives/dev/2018-January/088177.html

Stefan
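P.S. To illustrate the "initialize again" step, here is a minimal sketch
of a vhost-user master re-negotiating after reconnecting. The request
codes and the 12-byte message header match the vhost-user protocol
spec, but the socket path, the single-u64 payload, and the missing error
handling are simplifications for illustration; this is not the actual
QEMU or DPDK code.

/* vu-reconnect-sketch.c: reconnect and re-initialize a vhost-user
 * session from the master side.
 * Build with: gcc -o vu-reconnect-sketch vu-reconnect-sketch.c */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Request codes from the vhost-user protocol specification. */
#define VHOST_USER_GET_FEATURES  1
#define VHOST_USER_SET_FEATURES  2
#define VHOST_USER_SET_OWNER     3
#define VHOST_USER_VERSION       1   /* version bits in msg.flags */

/* Every message starts with a 12-byte header; a u64 payload is enough
 * for this sketch (SET_MEM_TABLE etc. use larger payloads plus fds). */
struct vhost_user_msg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;       /* payload size in bytes */
    uint64_t payload;
} __attribute__((packed));

static int vu_send(int fd, uint32_t req, uint32_t size, uint64_t payload)
{
    struct vhost_user_msg msg = {
        .request = req,
        .flags   = VHOST_USER_VERSION,
        .size    = size,
        .payload = payload,
    };
    return write(fd, &msg, 12 + size) == 12 + size ? 0 : -1;
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    struct vhost_user_msg reply;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strcpy(addr.sun_path, "/tmp/vhost-user.sock");  /* made-up path */
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* All state from the previous connection is gone on the slave
     * side, so the master must negotiate from scratch. */
    vu_send(fd, VHOST_USER_GET_FEATURES, 0, 0);
    read(fd, &reply, sizeof(reply));       /* slave's feature bits */
    vu_send(fd, VHOST_USER_SET_FEATURES, 8, reply.payload);
    vu_send(fd, VHOST_USER_SET_OWNER, 0, 0);
    /* ...followed by SET_MEM_TABLE (memory region fds passed via
     * SCM_RIGHTS) and SET_VRING_NUM/ADDR/BASE/KICK/CALL per virtqueue. */

    printf("vhost-user session re-initialized\n");
    close(fd);
    return 0;
}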