On 02/08/11 08:43, Jeff Ross wrote:
Stuart Henderson wrote:
Are you using altq?
Yes, using the hfsc scheduler. I think that was the hint I needed: UDP
packets were all being assigned to the dns queue, so I added another
match rule to put the OpenVPN traffic into the default queue.
Here's what I have now:
match in all scrub (no-df max-mss 1440)
altq on $ext_if bandwidth $ext_bw hfsc queue { main }
queue main bandwidth 99% priority 7 qlimit 100 hfsc (realtime 20%, linkshare 99%) \
    { q_pri, q_web, q_mail, q_def, q_dns }
queue q_pri bandwidth 4% priority 7 hfsc (realtime 0, linkshare 4% red)
queue q_web bandwidth 50% priority 6 hfsc (realtime 30%, linkshare 50% red)
queue q_def bandwidth 30% priority 5 hfsc (default realtime (100Kb 3000 30Kb), linkshare 30% red)
queue q_mail bandwidth 13% priority 1 hfsc (realtime (30Kb 3000 12Kb), linkshare 13% red)
queue q_dns bandwidth 3% priority 7 qlimit 100 hfsc (realtime (30Kb 3000 12Kb), linkshare 3%)
match out on $ext_if from $localnet nat-to $carp_if queue (q_def, q_pri)
match out on $ext_if proto tcp to port { www https } queue (q_web, q_pri)
match out on $ext_if proto udp to port { 1194 } queue (q_web, q_pri)
match out on $ext_if proto tcp to port smtp queue (q_mail, q_pri)
match out on $ext_if proto { tcp udp } to port domain queue (q_dns, q_pri)
match out on $ext_if proto icmp queue (q_dns, q_pri)
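In case it helps anyone reading along, my understanding of the
three-value realtime curves above (please correct me if I have this
wrong) is that a triple is (m1 d m2): rate m1 applies for the first d
milliseconds of a backlogged period, then the queue falls back to a
steady guarantee of m2. So the q_def curve reads as:

# (m1 d m2): bursts get 100Kb/s for the first 3000 ms a backlog
# exists, then the guarantee drops to a steady 30Kb/s
queue q_def bandwidth 30% priority 5 hfsc (default realtime (100Kb 3000 30Kb), linkshare 30% red)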
So, we'll see how that holds up over the course of the day.
Thanks, Stuart!
Unfortunately, I am still having the same error.
I tried switching from the hfsc scheduler to the cbq scheduler, and
while the number of error messages I'm getting has gone down, I am
still getting them.
Here are my new cbq rules and match rules:
match in all scrub (no-df max-mss 1440)
altq on $ext_if cbq bandwidth $ext_bw queue { main, mail, dns, web, icmp }
queue main bandwidth 19% cbq(default borrow red)
queue mail bandwidth 5% cbq(borrow red)
queue dns bandwidth 20% cbq(borrow red)
queue web bandwidth 55% cbq(borrow red)
queue icmp bandwidth 1% cbq
match out on $ext_if from $localnet nat-to $carp_if queue main tag DEFAULT
match out on $ext_if proto tcp to port { www https } queue web tag WEB
match out on $ext_if proto tcp to port smtp queue mail tag MAIL
match out on $ext_if proto udp queue dns tag DNS
match out on $ext_if proto icmp queue icmp tag ICMP
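One other thing I may try, on the theory that the buffer space errors
mean a queue's packet limit is being exhausted: pf.conf's qlimit
defaults to 50 packets per queue, so raising it on the busiest queues
might let them absorb bursts instead of dropping. I'm not sure this
addresses the root cause, but something like:

# qlimit defaults to 50 packets; give the busy queues more headroom
queue main bandwidth 19% qlimit 100 cbq(default borrow red)
queue web bandwidth 55% qlimit 100 cbq(borrow red)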
It seems that I haven't yet figured out how to assign OpenVPN traffic to
the correct queue. With the hfsc scheduler in place, the OpenVPN
traffic appeared to be on the 3% dns queue. Now, watching the external
interface with tcpdump in one xterm and pftop in another to match the
spikes in traffic against the reported queue use, I can see that the
OpenVPN traffic is on the default queue and not on the dns queue as I
would expect.
Given that, it makes sense that the "lack of buffer space" error
messages have gone down, since the default queue is bigger, but what am
I missing to correctly assign the OpenVPN traffic to the dns queue?
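If I understand pf's stateful filtering right, this may be my problem:
packets that match an existing state skip ruleset evaluation entirely,
so my "match out" rules never see the replies belonging to states
created by a "pass in" rule. Since the OpenVPN clients connect in to
udp 1194, I'd guess the queue has to be assigned on the state-creating
inbound rule instead, something like:

# assumes OpenVPN listens on udp 1194 on the external interface; the
# queue named here is used when the replies leave on $ext_if
pass in on $ext_if proto udp to ($ext_if) port 1194 queue dns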
If indeed that will fix the problem ;-)
Thanks,
Jeff Ross