On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
> On May 18, 2017 at 11:03, Wei Wang wrote:
> > On 05/17/2017 02:22 PM, Jason Wang wrote:
> > > On May 17, 2017 at 14:16, Jason Wang wrote:
> > > > On May 16, 2017 at 15:12, Wei Wang wrote:
> > > > > > Hi:
> > > > > >
> > > > > > Care to post the driver code too?
> > > > > >
> > > > > OK. It may take some time to clean up the driver code before
> > > > > posting it out. You can first take a look at the draft in the
> > > > > repo here:
> > > > > https://github.com/wei-w-wang/vhost-pci-driver
> > > > >
> > > > > Best,
> > > > > Wei
> > > >
> > > > Interesting, looks like there's one copy on the tx side. We used
> > > > to have zerocopy support for tun for VM2VM traffic. Could you
> > > > please try to compare it with your vhost-pci-net by:
> > >
> > We can analyze the whole data path - from VM1's network stack
> > sending packets -> VM2's network stack receiving packets. The
> > number of copies is actually the same for both.
>
> That's why I'm asking you to compare the performance. The only reason
> for vhost-pci is performance. You should prove it.
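(For readers following the copy-count argument above, here is a minimal
sketch of the single tx-side copy being discussed, assuming - as the
vhost-pci design does - that the peer VM's receive buffers are directly
reachable through the mapped device BAR. The names peer_rx_desc and
vhost_pci_tx_copy are illustrative, not the actual driver identifiers.)

  #include <stddef.h>
  #include <string.h>

  /* Hypothetical descriptor for a receive buffer the peer VM has
   * posted; its memory is reachable here via the mapped BAR. */
  struct peer_rx_desc {
          void   *addr;   /* peer rx buffer, mapped into this VM */
          size_t  len;    /* capacity of that buffer */
  };

  /* tx path: one memcpy moves the packet from the local skb data
   * into the peer's rx buffer. No further copy is needed on the
   * receive side, because the buffer already belongs to the peer's
   * network stack. */
  size_t vhost_pci_tx_copy(struct peer_rx_desc *rx,
                           const void *skb_data, size_t len)
  {
          if (len > rx->len)
                  return 0;   /* does not fit; caller must drop */
          memcpy(rx->addr, skb_data, len);
          return len;         /* single copy: VM1 skb -> VM2 rx buffer */
  }
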
There is another reason for vhost-pci besides maximum performance:
vhost-pci makes it possible for end-users to run networking or storage
appliances in compute clouds. Cloud providers do not allow end-users to
run custom vhost-user processes on the host, so you need vhost-pci.

Stefan