Hello to all,

I am running some performance tests with OVS + DPDK, and I am seeing
problems when I change the number of PMD cores that OVS uses.

The test setup consists of a source process and a sink process. The
source allocates a batch of packets from the memory pool at start-up and
then sends them in an infinite loop; the sink just receives the packets
and counts them. In between there is a forwarder application that simply
takes packets from one port and sends them out another.
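The forwarder is essentially the loop sketched below; the port ids are
placeholders, and whether it polls through the ethdev API (ring PMD) or
the ring API directly does not change the logic:

    /* Gist of the forwarder loop; port ids 0 and 1 are placeholders.
     * Packets are taken from one port and sent out the other. */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static void forward_loop(uint8_t rx_port, uint8_t tx_port)
    {
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            uint16_t n = rte_eth_rx_burst(rx_port, 0, pkts, BURST);
            uint16_t sent = rte_eth_tx_burst(tx_port, 0, pkts, n);
            while (sent < n)                /* free anything not accepted */
                rte_pktmbuf_free(pkts[sent++]);
        }
    }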

Source and sink run as secondary DPDK processes, while the forwarder
runs inside a VM. Source and sink are each connected to OVS through one
dpdkr port, and the VM is connected through two dpdkr ports, so a total
of four dpdkr ports are involved.
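In case it helps, the source attaches to its dpdkr port roughly as in
the sketch below. The ring and mempool names are placeholders (the exact
names OVS gives them, and which ring corresponds to which direction, can
be checked in netdev-dpdk.c or with rte_ring_list_dump()), and the real
source of course handles mbuf ownership; this is just the shape of the
loop:

    /* Sketch: DPDK secondary process attaching to a dpdkr port.
     * "dpdkr0_tx" and "ovs_mbuf_pool" are placeholder names. */
    #include <rte_eal.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    #define BURST 32

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)   /* run with --proc-type=secondary */
            return -1;

        struct rte_ring *to_ovs = rte_ring_lookup("dpdkr0_tx");  /* check direction */
        struct rte_mempool *mp  = rte_mempool_lookup("ovs_mbuf_pool");
        if (to_ovs == NULL || mp == NULL)
            return -1;

        struct rte_mbuf *pkts[BURST];
        for (int i = 0; i < BURST; i++)     /* allocate once at start-up */
            pkts[i] = rte_pktmbuf_alloc(mp);

        for (;;)                            /* then send forever */
            rte_ring_enqueue_burst(to_ovs, (void **)pkts, BURST);
            /* (DPDK 2.1 signature; newer releases add a 4th argument) */

        return 0;
    }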

OpenFlow rules are configured to forward traffic in the following way:
Source -> Forwarder (app in the VM) -> Sink
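Concretely, the flows look like the following (bridge name and OpenFlow
port numbers are just examples, not the exact ones we use):

    # 1 = source dpdkr, 2 = sink dpdkr, 3/4 = the VM's dpdkr ports
    ovs-ofctl add-flow br0 in_port=1,actions=output:3
    ovs-ofctl add-flow br0 in_port=4,actions=output:2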

The test machine has 10 physical cores; core 0 is left to the OS while
the others are isolated. Source, sink and forwarder are each pinned to a
different physical core, and the PMD cores assigned to OVS do not overlap
with the cores used by the source, sink or forwarder.
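For completeness, the isolation and the PMD core assignment are done
roughly like this (the mask value shown is just an example for two PMD
cores):

    # kernel command line: isolcpus=1-9
    # give OVS two PMD cores (cores 2 and 3 -> mask 0xC):
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC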

We are trying to understand how the number of PMD cores affects
throughput. With 1 or 2 cores assigned to OVS we get about 7 Mpps, but
with 3 or 4 cores throughput drops to only 750 kpps. I also noticed that
core 0 (the one left to the OS) runs at about 32% utilization, and OVS
prints messages such as:

"ofproto_dpif_upcall(pmd119)|WARN|Dropped 13210124 log messages in last 60
seconds (most recently, 0 seconds ago) due to excessive rate"
"ofproto_dpif_upcall(pmd119)|WARN|upcall_cb failure: ukey installation
fails"

Does anyone know what could cause this behaviour?

PS: I'm using DPDK v2.1.0 and OVS commit
15a0ca65f341c2298e571052eb68d8a282e853a5

Thank you very much for your help.