Thank you, Ray.
DPDK is not running in the container - this is a customer container, and I
cannot force the customer to use DPDK.
Regarding "not compatible": I mean that packets received by VPP in userspace
and destined for C1 (a container) must be copied to the kernel and then to
C1's IP stack (e.g. via an AF_PACKET interface), whereas this copy is avoided
if my vSwitch runs in the kernel.
So, theoretically, for the container-networking use case a vSwitch in the
kernel can achieve better performance and CPU utilization than any vSwitch
over DPDK (VPP or OVS).
Unless a memory map can be used here to avoid the copies (userspace to kernel
and kernel to userspace).
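
For what it's worth, the kernel already offers a memory-mapped path on
AF_PACKET sockets (PACKET_MMAP with a PACKET_RX_RING) that removes the
per-packet copy between kernel and userspace buffers: the NIC's frames land
in pages shared with the process, which reads them in place. A minimal
sketch of setting that up - assuming Linux and CAP_NET_RAW; the interface
name and ring geometry below are arbitrary illustration values, not anything
VPP-specific:

```c
/* Sketch: PACKET_MMAP receive ring on an AF_PACKET socket.
 * Assumptions: Linux with CAP_NET_RAW; the interface name and the ring
 * geometry passed by the caller are arbitrary illustration values. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

/* Total number of bytes to mmap for the ring described by req. */
size_t ring_map_len(const struct tpacket_req *req)
{
    return (size_t)req->tp_block_size * req->tp_block_nr;
}

/* Create the socket, attach an RX ring, bind to ifname, and map the ring
 * into userspace. Frames are then read in place: the kernel flips each
 * frame's tp_status to TP_STATUS_USER when a packet is ready, so no copy
 * into a separate userspace buffer is needed. Returns the ring base, or
 * NULL on error. */
void *open_rx_ring(const char *ifname, struct tpacket_req *req, int *fd_out)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket (needs CAP_NET_RAW)"); return NULL; }

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, req, sizeof(*req)) < 0) {
        perror("PACKET_RX_RING"); close(fd); return NULL;
    }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = (int)if_nametoindex(ifname);
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); close(fd); return NULL;
    }

    void *ring = mmap(NULL, ring_map_len(req), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); close(fd); return NULL; }

    *fd_out = fd;
    return ring;
}
```

With the ring mapped, a poll() loop walks the frames and reads each payload
directly from the shared pages, so the userspace/kernel copy described above
is avoided; as far as I know, VPP's af_packet (host-interface) driver uses
this same ring mechanism.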
Best Regards
Avi

> -----Original Message-----
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ray 
> Kinsella
> Sent: Friday, 16 February, 2018 4:52 PM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP vs XDP-eBPF performance numbers - for linux
> containers
> 
> Hi Avi,
> 
> I would like to understand the "not compatible" comment a bit better.
> 
> Are you typically running DPDK/VPP based or Socket based applications inside
> your containers?
> 
> Our perspective is that userspace networking is also equally good for
> Container/Cloud Native - of course depending on what you are trying to do. We
> have done a huge amount of work in both VPP and DPDK developing
> technologies to help like MemIF (including libmemif), Virtio-User, FastTap,
> Master-VM, Contiv-VPP etc to help in this regard.
> 
> What a container is - ultimately - is a silo'ing of CPU, memory and IO
> resources for both kernel and userspace processes, but nothing in this
> forces us to choose kernel over userspace networking.
> 
> The way we typically handle container networking for both VPP/DPDK is for
> packets to flow directly between userspace processes - no kernel required.
> VPP runs in the default namespace, possibly as a vSwitch or vRouter, and
> switches packets to containers running DPDK/VPP etc., all in userspace. We
> also provide the Master-VM approach and/or FastTAP or AF_PACKET to punt
> packets into the kernel when required.
> 
> We test the performance of aspects of this, such as MemIF, regularly -
> results are available here.
> 
> https://docs.fd.io/csit/rls1710/report/vpp_performance_tests/packet_throughp
> ut_graphs/container_memif.html#ndr-throughput
> 
> Thanks,
> 
> Ray K
> 
> 
> On 13/02/2018 14:04, Avi Cohen (A) wrote:
> > Hello
> > Are there performance numbers for VPP vs. XDP/eBPF for container
> > networking?
> >
> > Since DPDK and Linux containers are not compatible, in the sense that
> > the container and host share the same kernel, packets received by
> > VPP/DPDK in userspace and directed to a Linux container must go down to
> > the kernel and then to the container's IP stack, whereas with XDP/eBPF
> > the packet can be forwarded to the container's IP stack directly from
> > the kernel.
> >
> > I heard that a vhost-user interface for containers is at the
> > work-in-progress stage.
> > Can anyone assist with the performance numbers and the status of this
> > vhost-user for containers?
> >
> > Best Regards
> > Avi
> >
> >
> >
> >
> 
> 

