Hi all,
I performed some testing using physical NICs and an external traffic
generator. The issue was not present in that case.
Could it be because, when using physical NICs, the data rate is lower
than when the traffic is generated locally?
What else should I check?
Thank you,
On 31 December 2015, Ilya Maximets wrote:
I updated to the latest git version and then applied your patch. The issue is
still present.
The ovs log still shows the messages:
"2015-12-31T14:18:04Z|1|dpif_netdev(pmd95)|INFO|Core 5 processing port 'dpdkr2'
2015-12-31T14:18:04Z|1|dpif_netdev(pmd96)|INFO|Core 4 processing port 'dpdkr4'"
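As far as I can tell, those INFO lines come from the pmd main loop in
lib/dpif-netdev.c, where each pmd thread logs the ports it ended up polling
after the reconfiguration. Paraphrased from memory, not the exact source:

    /* Paraphrase of the port/core affinity logging in
     * pmd_thread_main(), lib/dpif-netdev.c (may differ in detail). */
    for (i = 0; i < poll_cnt; i++) {
        VLOG_INFO("Core %d processing port '%s'",
                  pmd->core_id,
                  netdev_get_name(poll_list[i].port->netdev));
    }

So after applying the patch, core 5 polls dpdkr2 and core 4 polls dpdkr4.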
So, possibly, there is more than one bug here.
Please try my new patch "[PATCH RFC] dpif-netdev: Rework of rx queue
management."
http://openvswitch.org/pipermail/dev/2015-December/063920.html
Maybe it will help.
Best regards, Ilya Maximets.
On 30.12.2015 18:25, Mauricio Vásquez wrote:
> Hello Ilya,
Hello Ilya,
I applied the patch, but I am still getting low throughput and the message
"ofproto_dpif_upcall(pmd101)|WARN|upcall_cb failure: ukey installation
fails" in the ovs log.
On 30 December 2015 at 09:59, Ilya Maximets wrote:
> As I see, this is exactly the same bug as fixed in
> commit e4e74c3a2b ("dpif-netdev: Purge all ukeys when reconfigure pmd.")
> but reproduced when only reconfiguring pmd threads, without restarting.
As I see, this is exactly the same bug as fixed in
commit e4e74c3a2b ("dpif-netdev: Purge all ukeys when reconfigure pmd.")
but reproduced when only reconfiguring pmd threads, without restarting.
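The visible symptom is the "ukey installation fails" warning: upcall_cb() in
ofproto-dpif-upcall.c cannot insert the new flow's ukey because a stale ukey
for the same flow is still present, so the flow is never installed and its
packets keep taking the slow path. Roughly, as a simplified sketch and not
the exact code:

    /* Simplified sketch, not the exact OVS code. */
    if (upcall->ukey && !ukey_install(udpif, upcall->ukey)) {
        /* A ukey for this flow already exists, e.g. a stale entry left
         * behind by a reconfigured pmd thread, so the new flow cannot
         * be installed and its packets stay in the slow path. */
        VLOG_WARN_RL(&rl, "upcall_cb failure: ukey installation fails");
        return ENOSPC;
    }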
Try this patch as a workaround:
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index fe2cd4b.
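The gist of the workaround, in a rough sketch with hypothetical helper names
(not the actual diff), is to purge the stale ukeys whenever the pmd threads
are reconfigured, not only when the datapath is restarted:

    /* Rough sketch; both helpers below are hypothetical names.  Drop
     * the ukeys that reference the old pmd threads before re-creating
     * the threads, so new flows do not collide with stale entries. */
    static void
    reset_pmd_threads(struct dp_netdev *dp)
    {
        purge_stale_ukeys(dp);        /* hypothetical */
        reconfigure_pmd_threads(dp);  /* hypothetical */
    }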
I have no idea; ovs had been running for a long time when I took that data.
I restarted everything, and now the main thread shows:
main thread:
emc hits:1316
megaflow hits:0
miss:681
lost:1348
polling cycles:7226622 (19.41%)
processing cycles:30002635 (80.59%)
avg cycles per
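If I understand the counters right, 'miss' counts packets that had to go to
the slow path and 'lost' counts packets dropped because their upcall could
not be completed, so a 'lost' count this high fits the ukey installation
failures above.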
On 30.12.2015 17:32, Mauricio Vásquez wrote:
> I just checked and the traffic is generated after everything is already set
> up, ports and flows.
And what are these 50K packets in that case?
main thread:
emc hits:20341
megaflow hits:0
miss:10193
lost:20372
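(That is, 20341 emc hits + 10193 misses + 20372 lost comes to roughly 50K
packets, all handled by the main, non-pmd, thread.)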
>
> On 30 December 2015 at 08:50, Ilya Maximets wrote:
I just checked and the traffic is generated after everything is already set
up, ports and flows.
On 30 December 2015 at 08:50, Ilya Maximets wrote:
> Does the transmission start before the addition of dpdkr4 to ovs?
>
> On 30.12.2015 16:31, Mauricio Vásquez wrote:
> > Dear Ilya,
> >
> > ovs-appctl dpif-netdev/pmd-stats-show -> http://pastebin.com/k1nnMfQZ
Does the transmission start before the addition of dpdkr4 to ovs?
On 30.12.2015 16:31, Mauricio Vásquez wrote:
> Dear Ilya,
>
> ovs-appctl dpif-netdev/pmd-stats-show -> http://pastebin.com/k1nnMfQZ
> ovs-appctl coverage/show -> http://pastebin.com/617CYR4n
> ovs-appctl dpctl/show -> http://pastebin.com/JFCT8tgS
Dear Ilya,
ovs-appctl dpif-netdev/pmd-stats-show -> http://pastebin.com/k1nnMfQZ
ovs-appctl coverage/show -> http://pastebin.com/617CYR4n
ovs-appctl dpctl/show -> http://pastebin.com/JFCT8tgS
ovs-log -> http://pastebin.com/sJkaF20M
Thank you very much.
On 30 December 2015 at 08:05, Ilya Maximets wrote:
On 30.12.2015 15:51, Mauricio Vásquez wrote:
> Hello Ilya,
>
> The dpdkr ports involved have just one TX queue, so it should not be the
> reason in this case.
>
Please provide the output of:
ovs-appctl dpif-netdev/pmd-stats-show
ovs-appctl coverage/show
ovs-appctl dpctl/show
Hello Ilya,
The dpdkr ports involved have just one TX queue, so it should not be the
reason in this case.
Thank you very much,
On 30 December 2015 at 07:07, Ilya Maximets wrote:
> Your 'Source' application, most likely, directs packets of the same flow
> to different TX queues. That's why most of the pmd threads can't install
> a ukey and always execute misses instead of emc hits.
Your 'Source' application, most likely, directs packets of the same flow
to different TX queues. That's why most of the pmd threads can't install
a ukey and always execute misses instead of emc hits.
Fix your 'Source'.
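In other words, the generator has to keep every packet of a flow on one TX
queue, e.g. by hashing the flow to a queue id. A minimal sketch, assuming a
DPDK sender and using the mbuf RSS hash as the flow hash (illustrative only):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Minimal sketch for a DPDK sender (illustrative only): keep all
     * packets of a flow on one TX queue by hashing the flow; here the
     * mbuf RSS hash stands in for a 5-tuple hash. */
    static inline void
    send_pkt(uint8_t port_id, uint16_t n_txq, struct rte_mbuf *m)
    {
        uint16_t qid = m->hash.rss % n_txq;  /* same flow, same queue */
        rte_eth_tx_burst(port_id, qid, &m, 1);
    }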
Best regards, Ilya Maximets.
On 29.12.2015 22:19, Mauricio Vásquez wrote:
> He