Hi Dave,

On Sep 3, 2014, at 21:30 , Dave Taht <dave.t...@gmail.com> wrote:

> On Wed, Sep 3, 2014 at 12:22 PM, Sebastian Moeller <moell...@gmx.de> wrote:
>> Hi Aaron,
>> 
>> 
>> On Sep 3, 2014, at 17:12 , Aaron Wood <wood...@gmail.com> wrote:
>> 
>>> On Wed, Sep 3, 2014 at 4:08 AM, Jonathan Morton <chromati...@gmail.com> 
>>> wrote:
>>> Given that the CPU load is confirmed as high, the pcap probably isn't as 
>>> useful.  The rest would be interesting to look at.
>>> 
>>> Are you able to test with smaller packet sizes?  That might help to isolate 
>>> packet-throughput (ie. connection tracking) versus byte-throughput problems.
>>> 
>>> - Jonathan Morton
>>> 
>>> Doing another test setup will take a few days (maybe not until the 
>>> weekend).  But I can get the data uploaded, and do some preliminary 
>>> crunching on it.
>> 
>>        The current SQM system allows shaping on multiple interfaces, so you 
>> could set up the shaper on se00 and test between sw10 and se00 (this should 
>> work if you reliably get a fast enough wifi connection; something like a 
>> combined shaped bandwidth <= 70% of the wifi rate should work). That would 
>> avoid the whole firewall and connection-tracking logic.
>>        My home wifi environment is quite variable/noisy and not well suited 
>> for this test: with rrul_be I got stuck at around 70Mbps combined bandwidth, 
>> with different distributions between the up- and down-leg for no shaping, 
>> shaping to 50Mbps/10Mbps, and shaping to 100Mbps/50Mbps. SIRQ was pretty much 
>> pegged at 96-99% during all netperf-wrapper runs, so I assume this to be the 
>> bottleneck (the radio was in the >200Mbps range during the tests, with 
>> occasional drops to 150Mbps). So my conclusion would be: it really is the 
>> shaping that is limited on my wndr3700v2 with cerowrt 3.10.50-1, again 
>> assuming I could trust the measurement, which I cannot (but EOUTOFTIME). 
>> That, or my RF environment might only allow for roughly 70-80Mbps combined 
>> throughput. For what it is worth: the tests were performed between a macbook 
>> running macosx 10.9.4 and an hp proliant n54l running 64bit openSuse 13.1, 
>> kernel 3.11.10-17 (AMD turion with a tg3 gbit ethernet adapter (BQL enabled), 
>> running fq_codel on eth0), with shaping on the se00 interface.
> 
> 
> A note on wifi throughput: CeroWrt routes, rather than bridges,
> between interfaces. So I would expect that, for simple benchmarks, openwrt
> (which bridges) might show much better wifi <-> ethernet behavior.

        Interesting. I just tried to make a quick and dirty test with the goal 
of getting NAT and firewalling out of the test path, so I am very happy that 
cerowrt routes by default. That way, shaping on se00 is quite a good test of the 
internet routing performance.
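
        Just to make concrete what I mean by "shaping on se00", here is a 
minimal sketch. The option names are the ones I believe the current SQM scripts 
use, and the 50/10 Mbps rates are only example values, so please treat this as 
a sketch rather than a recipe:

  # point the SQM queue at the LAN interface (se00) instead of the WAN side
  uci set sqm.@queue[0].interface='se00'
  uci set sqm.@queue[0].download='50000'   # kbit/s, example value
  uci set sqm.@queue[0].upload='10000'     # kbit/s, example value
  uci set sqm.@queue[0].enabled='1'
  uci commit sqm
  /etc/init.d/sqm restart

The idea is to keep the sum of both shaped rates at or below roughly 70% of the 
current wifi rate, so the radio does not become the bottleneck and the CPU cost 
of the shaper is what actually gets measured.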

> 
> We route, rather than bridge, wifi because 1) it made it easier to
> debug, and 2) the theory that multicast on busier networks messes
> up wifi far more than not-bridging slows it down.

        I am already sold on this idea! There should be a good reason it is 
called a “home router” and not a home bridge ;) (though some of the stock 
firmwares make me feel someone “had a bridge to sell” ;) )


> Have not accumulated
> a lot of proof of this, but this
> was kind of enlightening:
> http://tools.ietf.org/html/draft-desmouceaux-ipv6-mcast-wifi-power-usage-00
> 
> I note that my regular benchmarking environment has mostly been 2 or
> more routers with nat and firewalling disabled.

        I would love to recreate that, but my home setup is not really wired to 
test this (upstream of cerowrt sits only the 100Mbit switch of the ISP’s dsl 
modem/router combination, so there is no way to plug in a faster receiver there).

> 
> Given the trend towards looking at iptables and nat overhead on this
> thread, an ipv6 benchmark on this box might be revealing.

        I would love to test this as well, but I have not gotten IPv6 to work 
reliably at my home.

Best Regards
        Sebastian

IPv6 NOTE: Everyone with a real dual-stack IPv6 and IPv4 connection to the 
internet (so not tunneled over IPv4) and an ATM-based DSL connection (might be 
the empty set...) needs to use the htb-private method for link layer 
adjustments, as the tc-stab method currently does not take the different header 
sizes of IPv4 and IPv6 into account. (Pure IPv6 connections, or connections 
where IPv4 is tunneled in IPv6 packets, should be fine; they just need to 
increase the per-packet overhead by 20 bytes over the IPv4 recommendation…)
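
To make the arithmetic concrete (the 40 bytes below is purely a made-up example 
value, not a recommendation for any particular link): if the suggested 
per-packet overhead for your link under IPv4 were 40 bytes, a pure IPv6 or 
IPv4-in-IPv6 connection using tc-stab would want 40 + 20 = 60 bytes, the extra 
20 bytes being the difference between the fixed 40-byte IPv6 header and the 
20-byte IPv4 base header. Assuming the SQM scripts still call the option 
"overhead", that would look something like:

  uci set sqm.@queue[0].overhead='60'   # hypothetical: 40 (IPv4 value) + 20 for IPv6
  uci commit sqm ; /etc/init.d/sqm restart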

> 
>> Best Regards
>>        Sebastian
>> 
>> 
>>> 
>>> -Aaron
> 
> 
> 
> -- 
> Dave Täht
> 
> https://www.bufferbloat.net/projects/make-wifi-fast

_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
