Speaking of IPv6 performance testing: in a recent FTTH field deployment, the network operator built an IPv6-only access network and tunneled all subscriber IPv4 traffic over an IPv6 tunnel to the upstream network edge. There the IPv4 traffic was decapsulated from the IPv6 tunnel and sent on its merry way.
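For concreteness, a 4o6 arrangement of this kind wraps each subscriber IPv4 packet in an outer IPv6 header for the trip across the access network. Below is a back-of-the-envelope sketch of what that wrapping costs in bytes, assuming plain IPv4-in-IPv6 encapsulation (one 40-byte outer header per packet) and a 1500-byte access MTU; the figures are illustrative, not measurements from this deployment:

# Back-of-the-envelope byte overhead of carrying IPv4 inside an IPv6 tunnel.
# Assumptions (not taken from the deployment above): plain IPv4-in-IPv6
# encapsulation adds one 40-byte outer IPv6 header per packet, and the
# access link carries at most 1500 bytes per packet.

IPV6_HEADER = 40      # bytes of outer header per tunneled packet
LINK_MTU = 1500       # bytes per packet on the access link

def tunnel_efficiency(inner_ipv4_bytes: int) -> float:
    """Fraction of the on-the-wire bytes that belong to the inner IPv4 packet."""
    return inner_ipv4_bytes / (inner_ipv4_bytes + IPV6_HEADER)

max_inner = LINK_MTU - IPV6_HEADER   # largest IPv4 packet that fits unfragmented
for size in (max_inner, 1280, 512, 64):
    eff = tunnel_efficiency(size)
    print(f"{size:4d}-byte IPv4 packet: {eff:6.1%} efficient, "
          f"{50 * eff:4.1f} Mbps usable on a 50 Mbps link")

For full-size packets the header tax is under 3%, so any large gap between tunneled IPv4 and native IPv6 throughput points at per-packet processing in the gateway rather than the encapsulation bytes.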
Long story short, the 4o6 tunneling code in the residential gateway was not nearly as performant as the IPv6 forwarding code. I actually got better IPv4 throughput by running an IPv6 VPN on my end device and sending my IPv4 traffic through that tunnel, thus avoiding the tunnel code on the gateway. If I recall correctly, the tunnel code capped out at about 20 Mbps while the IPv6 code went up to the 50 Mbps SLA rate. I stumbled onto this while running IPTV video tests alongside throughput benchmarks on my PC (with apparently pseudo-random results, until we figured out the various tunnels). It took me a while to figure out. Delay also spiked when the gateway got bogged down. More capable gateways were deployed in the latter stages of the deployment, and they seemed to keep up with the 50 Mbps SLA rate.

Bill Ver Steeg

-----Original Message-----
From: bloat-boun...@lists.bufferbloat.net [mailto:bloat-boun...@lists.bufferbloat.net] On Behalf Of Dave Taht
Sent: Wednesday, September 03, 2014 3:31 PM
To: Sebastian Moeller
Cc: cerowrt-devel@lists.bufferbloat.net; bloat
Subject: Re: [Bloat] [Cerowrt-devel] Comcast upped service levels -> WNDR3800 can't cope...

On Wed, Sep 3, 2014 at 12:22 PM, Sebastian Moeller <moell...@gmx.de> wrote:
> Hi Aaron,
>
> On Sep 3, 2014, at 17:12 , Aaron Wood <wood...@gmail.com> wrote:
>
>> On Wed, Sep 3, 2014 at 4:08 AM, Jonathan Morton <chromati...@gmail.com> wrote:
>> Given that the CPU load is confirmed as high, the pcap probably isn't as useful. The rest would be interesting to look at.
>>
>> Are you able to test with smaller packet sizes? That might help to isolate packet-throughput (i.e. connection tracking) versus byte-throughput problems.
>>
>> - Jonathan Morton
>>
>> Doing another test setup will take a few days (maybe not until the weekend). But I can get the data uploaded, and do some preliminary crunching on it.
>
> The current SQM system allows shaping on multiple interfaces, so you could set up the shaper on se00 and test between sw10 and se00 (this should work if you reliably get a fast enough wifi connection; something like combined shaped bandwidth <= 70% of the wifi rate should do). That would avoid the whole firewall and connection-tracking logic.
>
> My home wifi environment is quite variable/noisy and not well-suited for this test: with rrul_be I got stuck at around 70 Mbps combined bandwidth, with different distributions of the up- and down-legs for no shaping, shaping to 50Mbps/10Mbps, and shaping to 100Mbps/50Mbps. SIRQ was pretty much pegged at 96-99% during all netperf-wrapper runs, so I assume this to be the bottleneck (the radio rate was above 200 Mbps during the test, with occasional drops to 150 Mbps). So my conclusion would be: it really is the shaping that is limited on my wndr3700v2 with CeroWrt 3.10.50-1, again assuming I could be confident about the measurement, which I am not (but EOUTOFTIME). That, or my RF environment might only allow for roughly 70-80 Mbps combined throughput. For what it is worth: tests were performed between a MacBook running Mac OS X 10.9.4 and an HP ProLiant N54L running 64-bit openSUSE 13.1, kernel 3.11.10-17 (AMD Turion with a tg3 GBit Ethernet adapter (BQL enabled), running fq_codel on eth0), with shaping on the se00 interface.
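As an aside on Jonathan's packet-size suggestion above: at a fixed bit rate, shrinking the packets multiplies the packet rate, so per-packet work (connection tracking, routing look-ups, shaper dequeues) is stressed long before raw byte throughput is. A quick sketch of the arithmetic, using the roughly 70 Mbps combined rate Sebastian reports purely as an illustrative figure:

# Packets per second required to sustain a given bit rate at various
# packet sizes. 70 Mbps is used only because it is the combined rate
# mentioned above; the point is how quickly the packet rate grows.

RATE_MBPS = 70

for pkt_bytes in (1500, 512, 256, 64):
    pps = RATE_MBPS * 1_000_000 / (pkt_bytes * 8)
    print(f"{pkt_bytes:4d}-byte packets: {pps:10,.0f} packets/s")

If throughput collapses as the packet size shrinks, the router is limited by per-packet work (for example connection tracking); if it holds roughly the same bit rate, it is byte-limited.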
A note on wifi throughput: CeroWrt routes, rather than bridges, between interfaces, so for simple benchmarks I would expect OpenWrt (which bridges) to show much better wifi <-> ethernet behavior.

We route, rather than bridge, the wifi because 1) it made it easier to debug, and 2) the theory is that multicast on busier networks messes up wifi far more than not-bridging slows it down. We have not accumulated a lot of proof of this, but this was kind of enlightening: http://tools.ietf.org/html/draft-desmouceaux-ipv6-mcast-wifi-power-usage-00

I note that my regular benchmarking environment has mostly been 2 or more routers with NAT and firewalling disabled. Given the trend towards looking at iptables and NAT overhead on this thread, an IPv6 benchmark on this box might be revealing.

> Best Regards
> Sebastian
>
>> -Aaron

--
Dave Täht
https://www.bufferbloat.net/projects/make-wifi-fast
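Following up on that last point, here is a minimal sketch of the comparison Dave suggests: push the same TCP bulk transfer through the router twice, once to the far host's IPv4 address (through the iptables/conntrack path) and once to its IPv6 address (no NAT), and compare the achieved rates. This is a hypothetical stand-in, not part of netperf-wrapper or the CeroWrt tooling; for real runs the rrul tests in netperf-wrapper remain the better instrument.

# Minimal TCP bulk-transfer probe, runnable over IPv4 or IPv6, to compare
# the two forwarding paths through the router. Hypothetical helper, not
# part of netperf-wrapper; figures from it are only ballpark.
import socket
import sys
import time

CHUNK = 64 * 1024     # bytes pushed per send
SECONDS = 10          # duration of the client-side blast

def serve(port: int) -> None:
    # Listener that discards whatever it receives; on most Linux hosts an
    # AF_INET6 wildcard bind also accepts v4-mapped IPv4 connections.
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("::", port))
    s.listen(1)
    while True:
        conn, peer = s.accept()
        total = 0
        while (data := conn.recv(CHUNK)):
            total += len(data)
        print(f"{peer[0]}: received {total * 8 / 1e6:.1f} Mbit")
        conn.close()

def blast(host: str, port: int) -> None:
    # Push zeroes for SECONDS; whether this goes over IPv4 or IPv6 (and hence
    # through NAT/conntrack or not) depends on what 'host' resolves to.
    s = socket.create_connection((host, port))
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < SECONDS:
        s.sendall(payload)
        sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"{host}: {sent * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f} s")
    s.close()

if __name__ == "__main__":
    if sys.argv[1] == "serve":
        serve(int(sys.argv[2]))
    else:
        blast(sys.argv[1], int(sys.argv[2]))

Run "serve 5001" on a host on one side of the router, then from the other side run "blast" against that host's IPv4 address and again against its IPv6 address. A consistent gap at similar CPU load would suggest the IPv4-specific work (NAT and conntrack in iptables) rather than plain forwarding is the difference.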