Here is a bug in iperf that prevents me from handling the TCP tests in a neat way:
http://sourceforge.net/tracker/?func=detail&aid=1983829&group_id=128336&atid=711371.
The problem is that the iperf server silently exits after it has finished
bidirectional TCP testing with the client (i.e. when the --tradeoff or
--dualtest flag was present). This bug does not affect UDP or
unidirectional TCP-send tests, though.

As potential workarounds I see:

   1. Implement the ovs-vlan-test server logic so that it restarts the iperf
   TCP server every time iperf exits because of this bug,
   2. For TCP (or maybe also for UDP) use a different tool (e.g. nttcp,
   netperf ...). I suppose the output of other tools would be a bit
   harder to parse than iperf's,
   3. Limit the ovs-vlan-test server lifetime to a single testing session, so
   that the user has to manually restart the ovs-vlan-test server to run
   multiple tests in a row,
   4. Implement all the iperf logic inside the ovs-vlan-test Python script
   (would Python be a good enough candidate for this, performance-wise?);
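For option #1, the supervision loop could look roughly like this (a minimal
sketch only; keep_alive, the restart cap, and the placeholder command are
hypothetical, not the actual ovs-vlan-test logic, and the real loop would run
the true "iperf -s" command line until interrupted):

```python
import subprocess
import sys

def keep_alive(cmd, max_restarts):
    """Re-run cmd every time it exits, up to max_restarts times.

    Returns the number of times the command was (re)started.
    """
    starts = 0
    while starts < max_restarts:
        proc = subprocess.Popen(cmd)
        starts += 1
        # Blocks until the server dies (e.g. after a --tradeoff run
        # triggers the bug described above), then loops to restart it.
        proc.wait()
    return starts

if __name__ == "__main__":
    # Placeholder command; a real caller would pass the iperf server
    # invocation and a much larger (or unbounded) restart limit.
    keep_alive([sys.executable, "-c", "pass"], max_restarts=3)
```

The downside is that any client test that races with the restart window would
fail spuriously, so the restart would need to be faster than the client's
connect retry.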

I will try to come up with something soon. As a long-term solution I think #4
is best, because then we would not depend on the netperf/iperf
STDOUT/STDERR format, although it would be somewhat more time-consuming...
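To give an idea of what #4 would entail, here is a bare-bones pure-Python TCP
throughput measurement (a sketch under assumptions: hypothetical function
names, loopback-only, one connection, and none of iperf's features such as
interval reporting or the bidirectional modes):

```python
import socket
import threading
import time

def _sink(srv, counter):
    """Accept one connection and count the bytes received (server side)."""
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            counter[0] += len(data)

def tcp_throughput_test(host, port, total_bytes):
    """Send total_bytes over one TCP stream; return (bytes_received, seconds)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))  # port 0 lets the kernel pick a free port
    srv.listen(1)
    counter = [0]
    t = threading.Thread(target=_sink, args=(srv, counter))
    t.start()

    start = time.monotonic()
    with socket.create_connection((host, srv.getsockname()[1])) as cli:
        chunk = b"x" * 65536
        sent = 0
        while sent < total_bytes:
            n = min(len(chunk), total_bytes - sent)
            cli.sendall(chunk[:n])
            sent += n
    t.join()  # closing the client socket ends the sink's recv loop
    srv.close()
    return counter[0], time.monotonic() - start

if __name__ == "__main__":
    received, secs = tcp_throughput_test("127.0.0.1", 0, 1024 * 1024)
    print("%.2f MB in %.3f s" % (received / 1e6, secs))
```

For MTU-sized sends over a real link this should generate full-sized packets
like iperf's TCP mode does, but packet loss and reordering statistics would
still need a UDP mode with sequence numbers on top of it.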

Thanks,
Ansis

On Wed, Oct 19, 2011 at 1:04 PM, Jesse Gross <je...@nicira.com> wrote:

> Yes, running a TCP stream would generate MTU sized packets and should
> detect pretty much all problems.  The main reason for running UDP
> tests with smaller packet sizes (including MTU-sized ones) is to try to
> narrow down the type of problem (i.e. to distinguish between an MTU
> issue and a TSO issue).  I don't think that we've seen any cases where
> the addition of a vlan tag causes packet reordering, so I'm not sure
> that it's critical.
>
> I think that comparing the performance between tagged and untagged
> traffic is the easiest thing to do.  There are roughly three potential
> outcomes that I would expect:
>  * Performance roughly on par for tagged and untagged traffic.
>  * Performance drop for tagged traffic but still significant
> throughput.  This is due to the hardware not supporting the same
> offloads when vlan tags are involved and software fallbacks are used.
> This is most likely to show up on 10G links since most CPUs are fast
> enough to handle the emulation at 1G.  This is interesting to note but
> is a hardware limitation, not a bug.
>  * Performance close to (but probably not exactly) 0 for tagged traffic.
> This generally means that there is a problem with TSO and only
> small packets get through.
>
> On Wed, Oct 19, 2011 at 12:14 PM, Ansis Atteka <aatt...@nicira.com> wrote:
> > Jesse,
> >
> > Potentially we could add iperf TCP tests to ovs-vlan-test, and then the
> > IP packet length would match the MTU? However, iperf would then not
> > report packet loss or out-of-order packet counts.
> >
> > During the implementation I had the following concerns:
> >
> > The ovs-vlan-test utility uses iperf's STDOUT and STDERR to communicate
> > with it, and that might be iperf-version specific. The preferred approach
> > would be to use some kind of an "iperf-type" library, but I was not able
> > to find one (maybe we could develop one ourselves?).
> > iperf also has the limitation that the smallest UDP packet it can send
> > is 52 bytes.
> >
> > Thanks,
> > Ansis
> >
> > On Wed, Oct 19, 2011 at 11:21 AM, Jesse Gross <je...@nicira.com> wrote:
> >>
> >> On Tue, Oct 18, 2011 at 9:23 PM, Ansis Atteka <aatt...@nicira.com>
> >> wrote:
> >> > ovs-vlan-test runs through a number of tests to identify VLAN issues.
> >> > This is useful when trying to debug why a particular driver has
> >> > issues, but it made the testing environment a bit harder to set up.
> >> > This commit adds an iperf test to check basic functionality.  It is
> >> > also useful in detecting performance issues.
> >> >
> >> > Issue #6976
> >>
> >> I didn't review the implementation details but I have some high level
> >> concerns about the tests being run:
> >>  * At least some of the problems that we have encountered are due to
> >> offloading and some offloads are only available with TCP.  Therefore,
> >> running only UDP traffic won't catch these.
> >>  * Another category of issues has to do with exactly MTU-sized packets.
> >> I see that the packet sizes you use are the same as the previous
> >> version but I'm not sure that they are great choices.  The first issue
> >> is that it is essentially assuming that the MTU is 1500 but ideally we
> >> would actually detect it.  Once we have it, I would pick a value that
> >> is exactly MTU size after accounting for the UDP headers.  Right now
> >> we will probably get that but you have to assume that fragmentation
> >> results in a maximum size first packet.
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev