These are the latest scripts, AFAIK. No overhead allowance, I note.
On 10/07/15 20:40, Sebastian Moeller wrote:
Hi Fred,
On Jul 10, 2015, at 21:34 , Fred Stratton <fredstrat...@imap.cc> wrote:
bridge sync is circa 10 000 kbit/s
with the cake option in sqm enabled
config queue 'eth1'
option qdisc_advanced '0'
option enabled '1'
option interface 'pppoe-wan'
option upload '850'
option qdisc 'cake'
option script 'simple_pppoe.qos'
option linklayer 'atm'
option overhead '40'
option download '8500'
So this looks reasonable. Then again, if the DSLAM is under-provisioned/oversubscribed (= congested), shaping your DSL link might not fix all bufferbloat...
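For reference, the settings in that config block can also be applied from the shell via uci rather than by editing /etc/config/sqm directly. A sketch, assuming the section is named 'eth1' as in the config above (adjust the section name if yours differs):

```shell
# Link-layer compensation for an ADSL/ATM line (section name assumed: 'eth1')
uci set sqm.eth1.linklayer='atm'
uci set sqm.eth1.overhead='40'
uci commit sqm
# Restart sqm so the new settings take effect
/etc/init.d/sqm restart
```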
tc -s qdisc show dev pppoe-wan
qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0 direct_qlen 3
Sent 101336 bytes 440 pkt (dropped 2, overlimits 66 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 110: parent 1:11 unlimited diffserv4 flows raw
Sent 4399 bytes 25 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 100.0ms 100.0ms 100.0ms 100.0ms
Pk delay 0us 0us 7us 2us
Av delay 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us
pkts 0 0 22 3
way inds 0 0 0 0
way miss 0 0 22 2
way cols 0 0 0 0
bytes 0 0 3392 1007
drops 0 0 0 0
marks 0 0 0 0
qdisc cake 120: parent 1:12 unlimited diffserv4 flows raw
Sent 96937 bytes 415 pkt (dropped 2, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 100.0ms 100.0ms 100.0ms 100.0ms
Pk delay 0us 28.0ms 0us 0us
Av delay 0us 1.2ms 0us 0us
Sp delay 0us 4us 0us 0us
pkts 0 417 0 0
way inds 0 0 0 0
way miss 0 23 0 0
way cols 0 0 0 0
bytes 0 98951 0 0
drops 0 2 0 0
marks 0 0 0 0
qdisc cake 130: parent 1:13 unlimited diffserv4 flows raw
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 100.0ms 100.0ms 100.0ms 100.0ms
Pk delay 0us 0us 0us 0us
Av delay 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us
pkts 0 0 0 0
way inds 0 0 0 0
way miss 0 0 0 0
way cols 0 0 0 0
bytes 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
qdisc cake 140: parent 1:14 unlimited diffserv4 flows raw
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 100.0ms 100.0ms 100.0ms 100.0ms
Pk delay 0us 0us 0us 0us
Av delay 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us
pkts 0 0 0 0
way inds 0 0 0 0
way miss 0 0 0 0
way cols 0 0 0 0
bytes 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 273341 bytes 435 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
But this is the hallmark of out-of-date sqm-scripts: this just uses cake as the leaf qdisc and keeps HTB as the main shaper, a configuration that is useful for testing. I assume this is the old set of sqm-scripts, not the update I just sent as an attachment? If so, could you retry with the newer scripts, please?
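For comparison, with cake doing the shaping itself it would sit at the root of the device rather than as a leaf under HTB. A rough sketch, using the rate and overhead from the config earlier in the thread (the exact keyword set depends on the cake module and tc versions installed; run on the router as root):

```shell
# cake as the root shaper on egress: shape to 850 kbit/s with ATM cell
# framing and 40 bytes of per-packet overhead
tc qdisc replace dev pppoe-wan root cake bandwidth 850kbit atm overhead 40
# Verify that cake (not htb) is now the root qdisc
tc -s qdisc show dev pppoe-wan
```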
Best Regards
Sebastian
On 10/07/15 19:46, Sebastian Moeller wrote:
Hi Fred,
your results seem to indicate that cake is not active at all, as the latency under load is abysmal (a quick check is to look at the median in relation to the min and the 90% number; in your examples all of these are terrible). Could you please post the results of the following commands on your router:
1) cat /etc/config/sqm
2) tc -d qdisc
3) tc -d class show dev pppoe-wan
4) tc -d class show dev ifb4pppoe-wan
5) /etc/init.d/sqm stop
6) /etc/init.d/sqm start
hopefully these give some insight what might have happened.
And finally I would love to learn the output of:
sh betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p
netperf-eu.bufferbloat.net -n 4 ; sh netperfrunner.sh -4 -H
netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
Many Thanks & Best Regards
Sebastian
On Jul 10, 2015, at 20:25 , Fred Stratton <fredstrat...@imap.cc> wrote:
By your command
Rebooted to rerun the qdisc script, rather than changing qdiscs from the command line; a suboptimal process, as the end-point changed.
script configuring qdiscs and overhead 40 on
sh netperfrunner.sh -H netperf-eu.bufferbloat.net -p 2.96.48.1
2015-07-10 18:22:08 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams
down and up while pinging 2.96.48.1. Takes about 60 seconds.
Download: 6.73 Mbps
Upload: 0.58 Mbps
Latency: (in msec, 62 pings, 0.00% packet loss)
Min: 24.094
10pct: 172.654
Median: 260.563
Avg: 253.580
90pct: 330.003
Max: 411.145
script configuring qdiscs on flows raw
sh netperfrunner.sh -H netperf-eu.bufferbloat.net -p
78.145.32.1
2015-07-10 18:49:21 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams
down and up while pinging 78.145.32.1. Takes about 60 seconds.
Download: 6.75 Mbps
Upload: 0.59 Mbps
Latency: (in msec, 59 pings, 0.00% packet loss)
Min: 23.605
10pct: 169.789
Median: 282.155
Avg: 267.099
90pct: 333.283
Max: 376.509
script configuring qdiscs and overhead 36 on
sh netperfrunner.sh -H netperf-eu.bufferbloat.net -p
80.44.96.1
2015-07-10 19:20:18 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams
down and up while pinging 80.44.96.1. Takes about 60 seconds.
Download: 6.56 Mbps
Upload: 0.59 Mbps
Latency: (in msec, 62 pings, 0.00% packet loss)
Min: 22.975
10pct: 195.473
Median: 281.756
Avg: 271.609
90pct: 342.130
Max: 398.573
On 10/07/15 16:19, Alan Jenkins wrote:
I'm glad to hear there's a working version (even if it's not in the current
build :).
Do you have measurable improvements with overhead configured (v.s.
unconfigured)?
I've used netperfrunner from CeroWrtScripts, e.g.
sh netperfrunner.sh -H netperf-eu.bufferbloat.net -p $ISP_ROUTER
I believe accounting for overhead helps on this two-way test, because a) it
saturates the uplink b) about half that bandwidth is tiny ack packets
(depending on bandwidth asymmetry). And small packets have proportionally high
overhead.
(But it seems to only make a small difference for me, which always surprises
Seb).
Alan
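To put rough numbers on Alan's point: on an ATM link every packet is carried in 53-byte cells holding 48 bytes of payload each, so after adding the per-packet overhead the size is rounded up to a whole number of cells. A back-of-the-envelope sketch, assuming the 40 bytes of per-packet overhead used earlier in the thread:

```shell
# Wire bytes for a packet on an ATM link: add per-packet overhead,
# then round up to whole 48-byte cell payloads, at 53 bytes per cell.
atm_wire_bytes() {
    pkt=$1
    overhead=$2
    cells=$(( (pkt + overhead + 47) / 48 ))
    echo $(( cells * 53 ))
}

# A 40-byte TCP ACK: 40+40=80 bytes -> 2 cells -> 106 bytes on the wire
atm_wire_bytes 40 40
# A 1500-byte MTU packet: 1500+40=1540 -> 33 cells -> 1749 bytes
atm_wire_bytes 1500 40
```

That is roughly a 165% expansion for the ACK against about 17% for the full-size packet, which is why overhead accounting matters most when the uplink carries lots of small packets.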
On 10/07/15 15:52, Fred Stratton wrote:
You are absolutely correct.
I tried both a numeric overhead value, and alternatively 'pppoe-vcmux'
and 'ether-fcs' in the build I crafted based on r46006, which is lupin
undeclared version 2. Everything works as stated.
On lupin undeclared version 4, the current release based on r46117, the
values were not recognised.
Thank you.
I had cake running on a Lantiq ADSL gateway running the same r46006
build. Unfortunately this was bricked by attempts to get homenet
working, so I have nothing to report about gateway usage at present.
On 10/07/15 13:57, Jonathan Morton wrote:
You're already using correct syntax - I've written it to be quite
lenient and use sensible defaults for missing information. There are
several sets of keywords and parameters which are mutually orthogonal,
and don't depend on each other, so "besteffort" has nothing to do with
"overhead" or "atm".
What's probably happening is that you're using a slightly old version
of the cake kernel module which lacks the overhead parameter entirely,
but a more up to date tc which does support it. We've seen this
combination crop up ourselves recently.
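One way to look for that mismatch on the router (a sketch; needs root, and the exact behaviour varies by version — an old module may reject the attribute or silently ignore it):

```shell
# Compare the installed tc and the cake kernel module; a new tc paired
# with an old sch_cake is the mismatch described above
tc -V
modinfo sch_cake
# Try the parameter on a throwaway device and inspect what actually took
tc qdisc replace dev lo root cake overhead 40
tc qdisc show dev lo
tc qdisc del dev lo root
```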
- Jonathan Morton
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel