I am working to restore the functionality of my CeroWrt 3.10.50-1 router with
an OpenWrt BB image.
Things are going pretty well, but I have run into a problem. In the past, I
frequently used two CeroWrt routers at my home: one was my primary and
connected via PPPoE to my DSL link; the other was
Hope this helps.
# uptime
00:16:17 up 4 days, 10 min, load average: 0.31, 0.32, 0.26
# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0 direct_qlen 1000
 Sent 1480380789 bytes 5957584 pkt (dropped 0, overlimits 2385541 requeues 0)
 backlog 0b 8p requeues 0
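The htb root itself reports dropped 0; the drops and ECN marks show up in
the fq_codel leaf qdiscs. As a rough sketch, assuming the usual sqm-scripts
device names (ge00 for egress, ifb4ge00 for the ingress ifb; adjust to your
setup), something like this pulls out just those counters:
# tc -s qdisc show dev ge00 | grep -E 'qdisc fq_codel|dropped|ecn_mark'
# tc -s qdisc show dev ifb4ge00 | grep -E 'qdisc fq_codel|dropped|ecn_mark'
Check ip link if the ingress ifb is named differently on your build.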
On Tue, 12 May 2015, Dave Taht wrote:
One thread bothering me on dslreports.com is that some folks seem to
think you only get bufferbloat if you stress test the network, when in
fact transient bufferbloat is happening all the time, everywhere.
On one of my main sqm'd network gateways, day in, day out, it reports
about 6000 drops or ecn marks.
I am curious as to the drop and mark statistics for those actively
using their networks,
but NOT obsessively testing dslreports' speedtest as I have been :),
over the course of days.
A cron job running once an hour would work, but snmp polling with
mrtg, parsing the tc output would be better.
But
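A minimal sketch of that hourly idea, assuming the shaped WAN device is
ge00 with its ingress ifb at ifb4ge00 (both assumptions): a tiny script
that appends a timestamped snapshot of the fq_codel counters, run from
cron with something like 0 * * * * /root/sqm-stats.sh.

#!/bin/sh
# sqm-stats.sh: append fq_codel drop/ECN-mark counters to a log
LOG=/tmp/sqm-stats.log
for dev in ge00 ifb4ge00; do
    echo "$(date) $dev" >> "$LOG"
    tc -s qdisc show dev "$dev" | grep -E 'qdisc fq_codel|dropped|ecn_mark' >> "$LOG"
done

Getting the same counters into mrtg would mean exporting them over SNMP
(for example via net-snmp's extend mechanism) rather than appending to a
file, which is more plumbing but gives the long-term graphs.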
Hi Alan,
> nitpick: "these settings simply tell SQM/fq_codel how to make up for that
> overhead"
>
> The ATM settings don't apply to fq_codel, rather the htb shaper. It might be
> more technically accurate to write "SQM" instead of "SQM/fq_codel".
I love nitpick-ness (if that's a word). And a
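For anyone following along, the link-layer adjustment really does live on
the shaper: with tc it boils down to something like the sketch below
(interface, overhead, rates, and handles are placeholders, not SQM's exact
commands), and the fq_codel leaf never sees any ATM parameters.
# tc qdisc add dev ge00 root handle 1: stab linklayer atm overhead 40 htb default 12
# tc class add dev ge00 parent 1: classid 1:1 htb rate 2430kbit
# tc class add dev ge00 parent 1:1 classid 1:12 htb rate 2430kbit
# tc qdisc add dev ge00 parent 1:12 fq_codel
The stab on the htb root is what rounds each packet up to ATM cell
boundaries (plus the per-packet overhead) for rate accounting; fq_codel
just manages the queue behind it.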