On 8/14/15 12:04 PM, Gray, Mark D wrote:
Hi Daniele,
Thanks for starting this conversation. It is a good list :) I have cross-posted this to dpdk.org as I feel that some of the points could be interesting to that community, since they relate to how DPDK is used.
How do "users" of OVS with DPDK feel about this list? Does anyone disagree or
does anyone have any additions? What are your experiences?
Daniele,
Although I think Mark posted this information to @openvswitch before, I want to mention again the new project in OPNFV, Open vSwitch for NFV (tagged ovsnfv), whose purpose is to deploy Open vSwitch with software datapath acceleration into OPNFV. The goal is to test OVS-DPDK, or other contributed accelerated datapaths, in more complex, user-focused scenarios such as SFC and OPNFV vsperf.
There has been some discussion lately about the status of the Open vSwitch
port to DPDK. While part of the code has been tested for quite some time,
I think we can agree that there are a few rough spots that prevent it from
being easily deployed and used.
I was hoping to get some feedback from the community about those rough spots, i.e. areas where OVS+DPDK can/needs to improve to become more "production ready" and user-friendly.
- PMD threads and queues management: the code has shown several bugs and the netdev interfaces don't seem up to the job anymore.
You had a few ideas about how to refactor this before but I was concerned
about the effect it would have on throughput. I can't find the thread.
Do you have some further ideas about how to achieve this?
There's a lot of room for improvement: we could factor out the code from dpif-netdev, add configuration parameters for advanced users, and figure out a way to add unit tests.
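For context, the knobs that already exist are fairly coarse; a minimal sketch of what an advanced user can do today, assuming a build recent enough to have the pmd-stats-show appctl command (mask value illustrative):

  # Restrict PMD threads to specific cores (hex mask of usable CPUs).
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
  # Inspect how packets and cycles are distributed across the PMD threads.
  ovs-appctl dpif-netdev/pmd-stats-show

Factoring the PMD/queue assignment logic out of dpif-netdev should also make this kind of behaviour much easier to cover with unit tests.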
I think this is a general issue with both the kernel datapath (and netdevs)
and the userspace datapath. There isn't much unit testing (or testing) outside
of the slow path.
Well, yes, of course, but there is quite a bit of tradecraft accumulated over many years about how to debug and test a kernel-based protocol stack that just doesn't exist yet for DPDK.
Related to this, the system should be as fast as possible out-of-the-box,
without requiring too much tuning.
I know there have been some offline discussions about the possibility of creating some canned tuning profiles, including a default profile, to improve the "out of the box" experience of DPDK, so that new deployers of DPDK/OVS could experience some of its benefits without needing to deep-dive into the mysteries of tuning DPDK.
This is a good point. I think the kernel datapath has a similar issue. You can
get a certain level of performance without compiling with -Ofast or
pinning threads but you will (even with the kernel datapath) get better
performance if you pin threads (and possibly compile differently). I guess it is more visible with the DPDK datapath, as performance is one of its key selling points. It is also more detrimental to performance if you don't set it up correctly.
Perhaps we could provide scripts to help do this?
I think this is also interesting to the DPDK community. There is knowledge required to get good performance when running DPDK-enabled apps: core pinning is one thing that comes to mind.
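To give a concrete idea of the tuning involved, a rough sketch of the current startup, along the lines of the INSTALL.DPDK style of invocation (all values illustrative, $DB_SOCK being the usual ovsdb socket placeholder):

  # -c is the EAL core mask, -n the number of memory channels, and
  # --socket-mem reserves hugepage memory per NUMA socket.
  ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 1024,0 \
      -- unix:$DB_SOCK --pidfile --detach

None of this is obvious to someone coming from the kernel datapath, which is why canned profiles or helper scripts would go a long way.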
- Userspace tunneling: while the code has been there for quite some time it hasn't received the level of testing that the Linux kernel datapath tunneling has.
Again, there is a lack of test infrastructure in general for OVS. vsperf is a good start, and it would be great to see more people use and contribute to it!
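For anyone who wants to help exercise that path, a rough sketch of a userspace VXLAN setup (datapath_type=netdev on both bridges; port names and addresses illustrative):

  # Bridge carrying the tunnel traffic, backed by a DPDK port.
  ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
  ovs-vsctl add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk
  ip addr add 172.16.1.1/24 dev br-phy && ip link set br-phy up
  # Integration bridge with a VXLAN port towards the remote endpoint.
  ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev
  ovs-vsctl add-port br-int vxlan0 -- set Interface vxlan0 type=vxlan \
      options:remote_ip=172.16.1.2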
- Documentation: other than a step-by-step tutorial, it cannot be said that DPDK is a first-class citizen in the OVS documentation. Manpages could be improved.
Easily done. The INSTALL guide is pretty good but the structure could be better.
There is also a lack of manpages. Good point.
- Vhost: the code has not received the level of testing of the kernel vhost. Another doubt shared by some developers is whether we should keep vhost-cuse, given its relatively low ease of use and its overlap with the far more standard vhost-user.
vhost-cuse is required for older versions of QEMU. I'm aware of some companies using it as they are restricted to an older version of QEMU. I think it is deprecated at the moment? Is there a notice to that effect? We just need a plan for when to remove it, and to make sure that plan is clear.
+1
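For comparison, the vhost-user path is already reasonably straightforward to set up; a sketch of what it looks like (socket path, names and memory sizes illustrative):

  # OVS side: the vhost-user socket is created under the OVS run directory.
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
  # QEMU side: hugepage-backed guest memory shared with the vhost-user backend.
  qemu-system-x86_64 ... \
      -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1 \
      -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
      -device virtio-net-pci,netdev=net1 \
      -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem -mem-prealloc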
- Interface management and naming: interfaces must be manually removed from the kernel drivers. We still don't have an easy way to identify them. Ideas are welcome: how can we make this user-friendly? Is there a better solution on the DPDK side?
This is a tough one and is interesting to the DPDK community. The basic issue here is that users are more familiar with Linux interfaces and Linux naming conventions. "ovs-vsctl add-port br0 eth0" makes a lot more sense than running "dpdk_nic_bind -b igb_uio <pci_id>", checking the order in which the ports are enumerated, and then running "ovs-vsctl add-port br0 dpdkN".
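Just to spell out what that workflow looks like today (PCI address illustrative):

  dpdk_nic_bind --status                  # find the device and its PCI address
  dpdk_nic_bind -b igb_uio 0000:01:00.0   # detach from the kernel driver, bind to igb_uio
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk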
I can think of ways to do this with physical NICs. For example, you could reference the port by its Linux name and, when you try to add it, OVS could unbind it from the kernel module and bind it to igb_uio. However, I am not sure how you would do it with virtual NICs, as there is not even a real device.
I think a general solution from the DPDK community would be really helpful here. How are DPDK interfaces handled by Linux distributions? I've heard about ongoing work for RHEL and Ubuntu; it would be interesting to coordinate.
- Insight into the system and debuggability: nothing beats tcpdump for the kernel datapath. Can something similar be done for the userspace datapath?
Yeah, this would be useful. I have my own way of dealing with this. For example, you could dump from the LOCAL port on a NORMAL bridge or add a rule to mirror a flow to another port, but I feel there could be a better way to do this in DPDK.
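For reference, the mirror trick I mean is roughly the following, along the lines of the mirror example in the ovs-vsctl manpage (port names illustrative; tcpdump can then be attached to the output port):

  ovs-vsctl -- set Bridge br0 mirrors=@m \
      -- --id=@p get Port dpdk0 \
      -- --id=@out get Port tap0 \
      -- --id=@m create Mirror name=dbg select-src-port=@p select-dst-port=@p \
         output-port=@out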
+1
I have recently heard that the DPDK team does something with a pcap PMD to help with debugging. A more general approach from DPDK would help a lot.
I agree that a libpcap interface would be really useful, maybe one where a core with a hugepage could be allocated for buffering.
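For what it's worth, the pcap PMD is just a virtual port driven by an EAL --vdev argument, so any DPDK application can read from or write to pcap files, e.g. (file names illustrative):

  --vdev 'eth_pcap0,rx_pcap=/tmp/in.pcap,tx_pcap=/tmp/out.pcap'

Something similar plumbed into the userspace datapath could give us a tcpdump-like workflow.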
- Consistency of the tools: some commands are slightly different for the
userspace/kernel datapath. Ideally there shouldn't be any difference.
Yeah, there are some things that could be changed. DPDK just works differently, but the benefits are significant :)
We need to mount hugepages, bind NICs to igb_uio, etc.
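i.e. something along these lines before the switch can even start (sizes and paths illustrative):

  echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  mkdir -p /dev/hugepages && mount -t hugetlbfs nodev /dev/hugepages
  modprobe uio
  insmod $DPDK_BUILD/kmod/igb_uio.ko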
With a lot of this stuff, maybe the DPDK community's tools don't need to emulate the Linux networking tools exactly. Maybe over time, as the DPDK community and user base expand, people will become more familiar with the tools, processes, etc., and this will be less of an issue?
- Packaging: how should the distributions package DPDK and OVS? Should there be only a single build to handle both the kernel and the userspace datapath, perhaps dynamically linked against DPDK?
Yeah. Do we need to start ovs-vswitchd with DPDK initialized if we have compiled with DPDK support?
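For reference, today the DPDK-enabled build is a separate configure-time choice (path illustrative), which is part of what makes the packaging question awkward:

  ./configure --with-dpdk=$DPDK_DIR/x86_64-native-linuxapp-gcc
  make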
- Benchmarks: we often rely on extremely simple flow tables with single-flow traffic to evaluate the effect of a change. That may be ok during development, but OVS with the kernel datapath has been tested in different scenarios with more complicated flow tables and even with hostile traffic patterns.
Efforts in this direction are being made, like the vsperf project, or even the simple ovs-pipeline.py.
vsperf will really help this.
I would appreciate feedback on the above points, not (only) in terms of
solutions, but in terms of requirements that you feel are important for our
system to be considered ready.
Thanks for making these good points and starting this thread.
--TFH
Cheers,
Daniele
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev