Hi,

I'm using Open vSwitch with Mininet, and noticed a large increase in ICMP
ping RTTs when using POX OpenFlow controllers _without_ flow entries:

RTTs were up from an average of 87 ms to 541 ms.

I did a git bisect on origin/branch-2.1 and found that this was introduced
in e79a6c833e0d72370951d6f8841098103cbb0b2d.

For reference, my Mininet 2.1 setup uses three switches, each with its own
POX controller and three hosts.  All links have a latency of 5 ms and
10 Mbit/s of bandwidth, shaped with HTB.  The controllers are L2 learning
switches, but they don't install any flows (I use this setup for
benchmarking processing delays); instead, they instruct the switch to
forward each packet individually. (*)

During the bisect I did a full rebuild at each commit, like so:

    $ git checkout ...
    $ git clean -fdx
    $ ./boot.sh
    $ ./configure --prefix=/usr \
                  --with-linux=/lib/modules/`uname -r`/build
    $ make
    $ sudo /etc/init.d/openvswitch-controller stop
    $ sudo /etc/init.d/openvswitch-switch stop
    $ make install && make modules_install
    $ # Mininet requires ovs-controller:
    $ test -f tests/test-controller \
        && sudo cp tests/test-controller /usr/bin/ovs-controller
    $ sudo rmmod openvswitch
    $ sudo depmod -a
    $ sudo /etc/init.d/openvswitch start

To confirm the commit in question, I built and ran the tests on both the
commit and its parent:

    $ git checkout e79a6c833e0d72370951d6f8841098103cbb0b2d
    $ git checkout e79a6c833e0d72370951d6f8841098103cbb0b2d^

The ping results between the far ends of the virtual network (crossing four
links and three switches, plus the link between each switch and its
controller) were:

Case (1) e79a6c833e0d72370951d6f8841098103cbb0b2d

    mininet> h1 ping -c10 h9
    PING 10.0.0.9 (10.0.0.9) 56(84) bytes of data.
    64 bytes from 10.0.0.9: icmp_req=1 ttl=64 time=638 ms
    64 bytes from 10.0.0.9: icmp_req=2 ttl=64 time=645 ms
    64 bytes from 10.0.0.9: icmp_req=3 ttl=64 time=516 ms
    64 bytes from 10.0.0.9: icmp_req=4 ttl=64 time=631 ms
    64 bytes from 10.0.0.9: icmp_req=5 ttl=64 time=461 ms
    64 bytes from 10.0.0.9: icmp_req=6 ttl=64 time=340 ms
    64 bytes from 10.0.0.9: icmp_req=7 ttl=64 time=486 ms
    64 bytes from 10.0.0.9: icmp_req=8 ttl=64 time=488 ms
    64 bytes from 10.0.0.9: icmp_req=9 ttl=64 time=594 ms
    64 bytes from 10.0.0.9: icmp_req=10 ttl=64 time=608 ms

    --- 10.0.0.9 ping statistics ---
    10 packets transmitted, 10 received, 0% packet loss, time 9010ms
    rtt min/avg/max/mdev = 340.610/541.301/645.402/94.210 ms

Case (2) e79a6c833e0d72370951d6f8841098103cbb0b2d^

    mininet> h1 ping -c10 h9
    PING 10.0.0.9 (10.0.0.9) 56(84) bytes of data.
    64 bytes from 10.0.0.9: icmp_req=1 ttl=64 time=112 ms
    64 bytes from 10.0.0.9: icmp_req=2 ttl=64 time=86.8 ms
    64 bytes from 10.0.0.9: icmp_req=3 ttl=64 time=65.3 ms
    64 bytes from 10.0.0.9: icmp_req=4 ttl=64 time=95.6 ms
    64 bytes from 10.0.0.9: icmp_req=5 ttl=64 time=79.0 ms
    64 bytes from 10.0.0.9: icmp_req=6 ttl=64 time=64.9 ms
    64 bytes from 10.0.0.9: icmp_req=7 ttl=64 time=96.3 ms
    64 bytes from 10.0.0.9: icmp_req=8 ttl=64 time=76.2 ms
    64 bytes from 10.0.0.9: icmp_req=9 ttl=64 time=107 ms
    64 bytes from 10.0.0.9: icmp_req=10 ttl=64 time=89.7 ms

    --- 10.0.0.9 ping statistics ---
    10 packets transmitted, 10 received, 0% packet loss, time 9014ms
    rtt min/avg/max/mdev = 64.961/87.483/112.880/15.494 ms

As you can see, RTTs increased from an average of 87 ms to 541 ms!
And these results are _consistent_ over 2000 ICMP sequences (don't
worry, I won't post the data).
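
Just to quantify that from the per-packet times printed above (same numbers,
computed directly):

```python
# Per-packet RTTs (ms) copied from the two ping runs above.
bad  = [638, 645, 516, 631, 461, 340, 486, 488, 594, 608]          # case (1)
good = [112, 86.8, 65.3, 95.6, 79.0, 64.9, 96.3, 76.2, 107, 89.7]  # case (2)

avg_bad = sum(bad) / len(bad)
avg_good = sum(good) / len(good)
print(f"avg bad:  {avg_bad:.1f} ms")           # -> avg bad:  540.7 ms
print(f"avg good: {avg_good:.1f} ms")          # -> avg good: 87.3 ms
print(f"slowdown: {avg_bad / avg_good:.1f}x")  # -> slowdown: 6.2x
```

(The small difference from ping's own min/avg/max line is just rounding:
ping averages its sub-millisecond measurements, while I'm averaging the
printed per-packet values.)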

I am well aware that correctness trumps performance, but I'd be very
interested in learning what this commit changed that affects my testing.
For instance, am *I* doing something wrong?

I looked at the commit, but as it's pretty big I'm not even going to try to
understand it at this point.

FYI, if I install forwarding flows I get very good RTTs in both cases
(averages of 42.451 ms for the "bad" commit and 42.963 ms for the
"good").

Perhaps this suggests something is going on in the upcall path, i.e. the
flow-miss handling taken when no flow entry matches?

I'd be very happy if anyone can shed some light on this, to me, strange
behaviour.  Just so you know, I'm an Open vSwitch and Mininet novice.

(*) The code for this part _may_ be relevant, but I'll see if anyone replies
first.

Regards,

-- 
Christian Stigen Larsen

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss