Re: bug in ksh tab complete

2013-06-04 Thread LEVAI Daniel
On Mon, Jun 03, 2013 at 11:05:11 -0400, Ted Unangst wrote:
[...]
> > Didn't send the diff; I think because of the general lack of interest in
> > ksh patches in the past.
> 
> I don't think that's always true, sometimes the interested people
> aren't interested that day, or in that patch. But as a project, we
> strongly encourage people to continue using ksh instead of resorting
> to bash, so keeping ksh working and usable is important.
> 
> In general, I think no feedback is closer to good feedback than bad
> feedback.

Understood, attitude adjusted :)


Daniel



Re: ALTQ(32bit)

2013-06-04 Thread Stuart Henderson
On 2013-06-03, Chris Cappuccio  wrote:
> Andy [a...@brandwatch.com] wrote:
>> Hi,
>> 
>> We're really looking forward to improvements in ALTQ too.
>> 
>> And we are /really/ hoping that the queues can either be shared across 
>> interfaces (so your WAN downstream bandwidth doesn't have to be sliced 
>> up and divided up across all the internal interfaces), or that you can 
>> create queues on the external interface's 'ingress' flow.
>> 
>> I know this opens a can of worms as many say you can't theoretically 
>> shape inbound bandwidth as you've already received the packets, however 
>> we do shape inbound bandwidth and it works brilliantly! But you have to 
>> do it on each of the internal interfaces egress (hence having to slice 
>> up the total downstream), so connections receiving too many downstream 
>> packets are slowed by dropping some of the already received TCP packets 
>> (not perfect but it works).

You're still not shaping *inbound* bandwidth, you're shaping *outbound*
bandwidth. It happens to be "bandwidth coming in to your router and then
getting sent out to another host" but from the point of view of the router,
this is still *outbound*.

(You are also relying on flow control mechanisms within the protocols
i.e. you may be *influencing* the rate of packets sent to you, but there's
no absolute control, if someone sends a bunch of UDP at you then queueing
outbound won't do anything to throttle incoming traffic).

> You should post your ruleset. It sounds like you may be able to get some
> better performance without new functionality.

If using vlans, then creating queues on the physical interface rather
than the vlan interfaces might do the trick.
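For example (a minimal sketch; interface names and figures are made up, not taken from Andy's setup), the queues can be defined on the physical parent while the pass rules still match on the vlan interfaces:

```pf
# altq on the physical parent, not on vlan(4); traffic for every vlan
# riding on em0 then shares one set of queues
altq on em0 bandwidth 100Mb hfsc queue { q_pri, q_dflt }
queue q_pri  bandwidth 30% priority 6 hfsc(realtime 20%)
queue q_dflt bandwidth 70% priority 3 hfsc(default)

# rules can still match on the vlan interfaces and pick those queues
pass out on vlan100 proto udp to port 5060 queue q_pri
pass out on vlan200 queue q_dflt
```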

>> Also whilst I'm wishing, also looking forward to the day that the 
>> FQ_Codel algorithms etc which significantly improve buffer-bloat are 
>> soon in OpenBSD (now in Linux 3.7 :)
>
> Honestly, who cares about buffer bloat? Just because it's a
> popular issue in some circles does not mean that anything you do
> on your openbsd firewall is going to affect the problem one way or
> another. 

It may well be a problem if you're using medium/large altq buffers
or if you raise net.inet.ip.ifq.maxlen too high..



Announce: OpenSMTPD 5.3.3 released

2013-06-04 Thread Eric Faurot
OpenSMTPD 5.3.3 has just been released and the archives are available at
our main site: www.OpenSMTPD.org

OpenSMTPD is a FREE implementation of the SMTP protocol with some common
extensions. It allows ordinary machines to exchange e-mails with systems
speaking the SMTP protocol. It implements a fairly large part of RFC5321
and can already cover a large range of use-cases.

It runs on OpenBSD, NetBSD, FreeBSD, DragonFlyBSD, OSX and Linux.

We would like to thank the OpenSMTPD community for their help in testing
the snapshots, reporting bugs, contributing code and packaging for other
systems.

This is a bugfix and reliability release; no new features.


BugFixes:
=========

  * fix a bug causing possible timeouts of incoming SSL sessions
  * fix a case-folding issue when looking up keys in static tables
  * plug several memory leaks in the MTA engine
  * while there, fix a use-after-free in debug traces


Checksums:
==========

  SHA256 (opensmtpd-5.3.3.tar.gz) =
  01c4f22cdc5b4f04205f2b1490e275fba2c2265c9eb586f5c259dd3ecb6271b0

  SHA256 (opensmtpd-5.3.3p1.tar.gz) =
  34f0e208e6fdde5c5c25bb11f468436c4d6148a8b640c32117869cad140b823c


Support:
========

You are encouraged to register to our general purpose mailing-list:
http://www.opensmtpd.org/list.html

The "Official" IRC channel for the project is at:
#OpenSMTPD @ irc.freenode.net


Reporting Bugs:
===============

Please read http://www.opensmtpd.org/report.html
Security bugs should be reported directly to secur...@opensmtpd.org
Other bugs may be reported to b...@opensmtpd.org

OpenSMTPD is brought to you by Gilles Chehade, Eric Faurot and Charles Longeau.



Re: ALTQ(32bit)

2013-06-04 Thread Andy

Hi Chris,

Thanks. OK, taking your suggestion (I'm always interested in hearing 
opinions and suggestions on how we can do things better), here is an 
example of my queue logic (I've tried to keep it as short as possible 
whilst providing full detail):


NB: We have a 100Mbit leased line with 100 down and 100 up (in reality, 
when testing, it really only achieves 95Mbit up and 70Mbit down before 
the Cisco CPE (or other upstream router) starts to mess with the packets).


The usable WAN upstream has been limited to 65Mbit up, and the WAN 
downstream is the sum of all the internal interfaces and is limited to 
90Mbit (80+10). We have more interfaces than this (inc. DMZ, WiFi etc.), 
and the 90Mbit downstream bandwidth is divided across all of them with 
more granularity, but for this example I have included only 3 interfaces 
and adjusted the figures to highlight the logic.


You'll notice that each interface has been split into two logical 
bandwidth groups (e.g. lan_local and lan_wan); this is to ensure 
inter-zone traffic (LAN->VoIP for example) does not use the WAN 
bandwidth queues (it uses available NIC bandwidth). And xxx_local_kernel 
is used for guaranteeing carp packets no matter how busy the firewall is.


# EXAMPLE QUEUES
# EXT Local & WAN upstream queues
altq on $if_ext bandwidth 800Mb hfsc queue { ext_local, ext_wan }
queue ext_local bandwidth 700Mb priority 4 hfsc(upperlimit 700Mb) { 
ext_local_kernel, ext_local_data }
queue ext_local_kernel bandwidth 1% qlimit 100 priority 4 
hfsc(realtime 1%, linkshare 20%)
queue ext_local_data bandwidth 99% qlimit 100 priority 0 
hfsc(linkshare 80%)
queue ext_wan bandwidth 65Mb priority 15 hfsc(linkshare 65Mb, 
upperlimit 65Mb) { ext_wan_rt, ext_wan_int, ext_wan_pri, ext_wan_vpn, 
ext_wan_web, ext_wan_dflt, ext_wan_bulk }
queue ext_wan_rt bandwidth 20% priority 15 qlimit 100 
hfsc(realtime(35%, 5000, 20%), linkshare 20%)
queue ext_wan_int bandwidth 10% priority 14 qlimit 200 
hfsc(realtime 5%, linkshare 10%)
queue ext_wan_pri bandwidth 20% priority 10 qlimit 200 
hfsc(realtime(10%, 2000, 5%), linkshare 20%)
queue ext_wan_vpn bandwidth 10% priority 8 qlimit 300 
hfsc(realtime 5%, linkshare 10%, ecn)
queue ext_wan_web bandwidth 10% priority 6 qlimit 500 
hfsc(realtime(10%, 3000, 5%), linkshare 10%, ecn)
queue ext_wan_dflt bandwidth 20% priority 4 qlimit 100 
hfsc(realtime(15%, 5000, 10%), linkshare 20%, ecn, default)
queue ext_wan_bulk bandwidth 5% priority 0 qlimit 100 
hfsc(upperlimit 30%, linkshare 5%, ecn)


# LAN Local & WAN Downstream queues
altq on $if_lan bandwidth 800Mb hfsc queue { lan_local, lan_wan }
queue lan_local bandwidth 700Mb priority 4 hfsc(upperlimit 700Mb) { 
lan_local_kernel, lan_local_data }
queue lan_local_kernel bandwidth 1% qlimit 100 priority 4 
hfsc(realtime 1%, linkshare 20%)
queue lan_local_data bandwidth 99% qlimit 100 priority 0 
hfsc(linkshare 80%)
queue lan_wan bandwidth 80Mb priority 15 hfsc(linkshare 80Mb, 
upperlimit 80Mb) { lan_wan_rt, lan_wan_int, lan_wan_pri, lan_wan_vpn, 
lan_wan_web, lan_wan_dflt, lan_wan_bulk }
queue lan_wan_rt bandwidth 20% priority 15 qlimit 100 
hfsc(realtime(30%, 5000, 15%), linkshare 20%)
queue lan_wan_int bandwidth 10% priority 14 qlimit 200 
hfsc(realtime 5%, linkshare 10%)
queue lan_wan_pri bandwidth 10% priority 10 qlimit 300 
hfsc(realtime(10%, 2000, 5%), linkshare 10%)
queue lan_wan_vpn bandwidth 10% priority 8 qlimit 300 
hfsc(realtime 5%, linkshare 10%, ecn)
queue lan_wan_web bandwidth 20% priority 6 qlimit 500 
hfsc(realtime(15%, 3000, 5%), linkshare 20%, ecn)
queue lan_wan_dflt bandwidth 20% priority 4 qlimit 100 
hfsc(realtime(15%, 5000, 10%), linkshare 20%, ecn, default)
queue lan_wan_bulk bandwidth 5% priority 0 qlimit 100 
hfsc(upperlimit 30%, linkshare 5%, ecn)


# VoIP Local & WAN Downstream queues
altq on $if_voip bandwidth 800Mb hfsc queue { voip_local, voip_wan }
queue voip_local bandwidth 700Mb priority 4 hfsc(upperlimit 700Mb) 
{ voip_local_kernel, voip_local_data }
queue voip_local_kernel bandwidth 1% qlimit 100 priority 4 
hfsc(realtime 1%, linkshare 20%)
queue voip_local_data bandwidth 99% qlimit 100 priority 0 
hfsc(linkshare 80%)
queue voip_wan bandwidth 10Mb priority 15 hfsc(linkshare 10Mb, 
upperlimit 10Mb) { voip_wan_rt, voip_wan_pri, voip_wan_dflt }
queue voip_wan_rt bandwidth 50% priority 15 qlimit 100 
hfsc(realtime(60%, 5000, 40%), linkshare 50%)
queue voip_wan_pri bandwidth 20% priority 10 qlimit 300 
hfsc(realtime(20%, 3000, 10%), linkshare 20%)
queue voip_wan_dflt bandwidth 10% priority 2 qlimit 100 
hfsc(upperlimit 50%, linkshare 10%, ecn, default)
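Not from the original mail, but as a sanity check on queue definitions like these, a tiny Python script (queue names and percentages copied from the lan_wan example above) can confirm the child bandwidth percentages don't oversubscribe the 80Mb parent:

```python
# Check that child queue bandwidth percentages fit inside the parent.
# Figures taken from the lan_wan queue definitions above (80Mb parent).
parent_mb = 80
children = {          # queue name: bandwidth %
    "lan_wan_rt":   20,
    "lan_wan_int":  10,
    "lan_wan_pri":  10,
    "lan_wan_vpn":  10,
    "lan_wan_web":  20,
    "lan_wan_dflt": 20,
    "lan_wan_bulk":  5,
}

total_pct = sum(children.values())
assert total_pct <= 100, f"oversubscribed: {total_pct}%"

for name, pct in children.items():
    print(f"{name}: {parent_mb * pct / 100:.1f} Mb")
print(f"total: {total_pct}% of {parent_mb}Mb")
```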



# EXAMPLE RULES
# EXT Interface
pass out on $if_ext all modulate state (pflow) queue 
(ext_wan_dflt, ext_wan_pri) # Default out (all) - queues for WAN upstream

# No user traffic rule for inter-zone-traffic on EXT 

Re: ALTQ(32bit)

2013-06-04 Thread Andy

Hi Stuart,

On 04/06/13 09:32, Stuart Henderson wrote:

On 2013-06-03, Chris Cappuccio  wrote:

Andy [a...@brandwatch.com] wrote:

Hi,

We're really looking forward to improvements in ALTQ too.

And we are /really/ hoping that the queues can either be shared across
interfaces (so your WAN downstream bandwidth doesn't have to be sliced
up and divided up across all the internal interfaces), or that you can
create queues on the external interface's 'ingress' flow.

I know this opens a can of worms as many say you can't theoretically
shape inbound bandwidth as you've already received the packets, however
we do shape inbound bandwidth and it works brilliantly! But you have to
do it on each of the internal interfaces egress (hence having to slice
up the total downstream), so connections receiving too many downstream
packets are slowed by dropping some of the already received TCP packets
(not perfect but it works).

You're still not shaping *inbound* bandwidth, you're shaping *outbound*
bandwidth. It happens to be "bandwidth coming in to your router and then
getting sent out to another host" but from the point of view of the router,
this is still *outbound*.

Absolutely :)


(You are also relying on flow control mechanisms within the protocols
i.e. you may be *influencing* the rate of packets sent to you, but there's
no absolute control, if someone sends a bunch of UDP at you then queueing
outbound won't do anything to throttle incoming traffic).

And therein lies the DDoS principle. Damn DoSers..

You should post your ruleset. It sounds like you may be able to get some
better performance without new functionality.

If using vlans, then creating queues on the physical interface rather
than the vlan interfaces might do the trick.

Have just sent a message with full details of our logic. I learn from 
the experience and comments of others so forgive me if I've made some 
stupid mistakes..

Also whilst I'm wishing, also looking forward to the day that the
FQ_Codel algorithms etc which significantly improve buffer-bloat are
soon in OpenBSD (now in Linux 3.7 :)

Honestly, who cares about buffer bloat? Just because it's a
popular issue in some circles does not mean that anything you do
on your openbsd firewall is going to affect the problem one way or
another.

It may well be a problem if you're using medium/large altq buffers
or if you raise net.inet.ip.ifq.maxlen too high..


It is.. :)



Re: PF policy routing route-to rules don’t catch any packet

2013-06-04 Thread Raimundo Santos
I am guessing that the problem lies with flags S/SA.

Changing all rules to flags any, the packets hit the rules, but things
get worse: no web navigation... this is driving me mad!

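A guess (not verified) at why flags S/SA matters here: the reply-direction rules match packets from port http, which are never bare SYNs, so with the default flags S/SA they create no state and nothing gets routed. One hedged alternative to flags any is sloppy state tracking; a sketch reusing the macro names from the ruleset below:

```pf
# Intercept direction: first packet is a SYN, default flags S/SA is fine
pass in on $int_if_1 proto tcp to port http \
    route-to ($squid_master_if $squid_master_gw)

# Reply direction: these packets arrive mid-stream (SYN+ACK, ACK, data),
# so relax state creation with 'sloppy' rather than dropping flag
# matching entirely with 'flags any, no state'
pass in on $ext_if_1 proto tcp from port http \
    route-to ($int_if_1 $int_gw_1) keep state (sloppy)
```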


On 3 June 2013 13:09, Raimundo Santos  wrote:

> Hi there!
>
> I asked, without an answer, something about nat-to and real IPs. Well, I
> really need an answer there, so if someone gets a clue, I will be glad to
> hear :)
>
> Now, to the new issue!
>
> Here in our WiFi ISP we have contracted a tproxy service from FreeBSD
> Brasil. It is somehow working, but I can not figure out exactly how. Here
> is a diagram of the desired paths:
>
> http://devio.us/~raitech/Obsd53PfTproxy.png
>
> These are my rules by now:
>
> RFC1918 = "{ 172.16/12, 192.168/16, 10/8, 127/8 }"
> table  persist {  internal nets, all valid IPs }
>
> ext_if_1 = "em0"
> ext_gw_1 = "187.72.X.X"
> ext_ip_1 = "187.72.X.X"
>
> ext_if_2 = "em1"
> ext_gw_2 = "187.72.X.X"
> ext_ip_2 = "187.72.X.X"
>
> ext_if_3 = "alc0"
> ext_gw_3 = "187.72.X.X"
> ext_ip_3 = "187.72.X.X"
>
> int_if_1 = "em2"
> int_gw_1 = "187.72.X.X"
> int_ip_1 = "187.72.X.X"
>
> squid_master_if = "em3"
> squid_master_gw = "187.72.X.X"
> squid_master_ip = "187.72.X.X"
>
> set limit states 6304000
> set limit tables 5000
> set limit src-nodes 20
> set limit frags 3000
> set optimization aggressive
> set state-defaults pflow, no-sync
>
> set skip on lo
>
> block in log quick on {  \
>  $ext_if_1,\
>  $ext_if_2,\
>  $ext_if_3,\
>  $squid_master_if, \
>  $int_if_1 } from $RFC1918 label "blocking RFC1918"
>
> # trying to prioritize ACKs...
> match set prio (3,5)
> # ... and all traffic http. https over the others
> match proto tcp to port { http, https } set prio (5,6)
> match proto tcp from port { http, https } set prio (5,6)
>
> match proto tcp to port { ssh, 9876 } set prio (5,7)
>
> pass in on $int_if_1 proto tcp from { , $int_gw_1 } to port http \
>  route-to ($squid_master_if $squid_master_gw)
>
> pass in on { $ext_if_1, $ext_if_2, $ext_if_3 } proto tcp from port http \
>  to { , $int_gw_1 } \
>  route-to ($squid_master_if $squid_master_gw)
>
> pass in on $squid_master_if proto tcp from { , $int_gw_1 } to \
>  port http no state route-to \
> { \
>   ($ext_if_1 $ext_gw_1) , \
>   ($ext_if_2 $ext_gw_2)   \
> } least-states label "cache external outbound balancing"
>
> pass in on $squid_master_if proto tcp from port http\
>  to { , $int_gw_1 } route-to ($int_if_1 $int_gw_1)   \
>  label "cache internal outbound routing"
>
> And here is a pfctl -vsr output:
>
> block drop in log quick on em0 inet from 172.16.0.0/12 to any label
> "blocking RFC1918"
>   [ Evaluations: 61764339  Packets: 332   Bytes: 32854   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em0 inet from 192.168.0.0/16 to any label
> "blocking RFC1918"
>   [ Evaluations: 5883927   Packets: 114   Bytes: 28621   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em0 inet from 10.0.0.0/8 to any label
> "blocking RFC1918"
>   [ Evaluations: 5883813   Packets: 170   Bytes: 18354   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em0 inet from 127.0.0.0/8 to any label
> "blocking RFC1918"
>   [ Evaluations: 5883643   Packets: 0 Bytes: 0   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em1 inet from 172.16.0.0/12 to any label
> "blocking RFC1918"
>   [ Evaluations: 60684174  Packets: 305   Bytes: 30912   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em1 inet from 192.168.0.0/16 to any label
> "blocking RFC1918"
>   [ Evaluations: 6862827   Packets: 93Bytes: 9232States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em1 inet from 10.0.0.0/8 to any label
> "blocking RFC1918"
>   [ Evaluations: 6862734   Packets: 196   Bytes: 19396   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on em1 inet from 127.0.0.0/8 to any label
> "blocking RFC1918"
>   [ Evaluations: 6862538   Packets: 0 Bytes: 0   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on alc0 inet from 172.16.0.0/12 to any label
> "blocking RFC1918"
>   [ Evaluations: 50726925  Packets: 304   Bytes: 30856   States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on alc0 inet from 192.168.0.0/16 to any label
> "blocking RFC1918"
>   [ Evaluations: 1251  Packets: 79Bytes: 8268States: 0
> ]
>   [ Inserted: uid 0 pid 19584 State Creations: 0 ]
> block drop in log quick on alc0 inet from 10.0.0.0/8 to any label
> "blocking RFC1918"
>   [ Evaluations: 1172  Packets: 

5.2 > 5.3 mouse issues

2013-06-04 Thread F Bax
I just upgraded from 5.2-release to 5.3-release and notice the following
issues with touchpad mouse control.
1) Using left-click then drag to highlight then copy text no longer works;
happens in a term window or GUI app (e.g. Firefox).
2) When using google maps in firefox; the mouse pointer takes about 2-3
seconds to convert from pointer to grab/drag to move map within window.
3) Another website I use has an interface to maps; left-click mouse never
converts from pointer to grab/drag mode.

I don't know how I can copy/paste my dmesg into this email, so it's
available here:
http://www.gallery.bax.on.ca/dmesg53.txt



Re: OSPF ABR/ASBR issue

2013-06-04 Thread Claudio Jeker
On Mon, Jun 03, 2013 at 03:43:21PM +0300, Kapetanakis Giannis wrote:
> On 01/06/13 18:44, Claudio Jeker wrote:
> >Can you give this diff a spin? Not much tested but the current way we
> >define an area as active (having at least one active neighbor) is wrong.
> >This changes the decision to have at least one active interface
> >(not IF_STA_DOWN). Not sure if that will cause troubles with passive
> >interfaces since those are not considered active.  At least it seems that
> >RFC 3509 uses this to define active areas.
> >
> >Thanks
> 
> Just tested this diff and it does not work in my case for passive
> interfaces (either carp or loopback).
> 
> area 0.0.0.7 {
>stub
>interface carp8 {passive}
>interface lo1 {passive}
> }
> 
> If I add carp8 or lo1 in area 0.0.0.0 then the routes are announced.
> 

Yeah, while the diff fixed the B flag it did not solve the problem that we
skipped our own networks. This version should solve that (at least it does
in my quick test).

Needs lots of testing since this changes core parts of the route calculation.
-- 
:wq Claudio

Index: area.c
===
RCS file: /cvs/src/usr.sbin/ospfd/area.c,v
retrieving revision 1.9
diff -u -p -r1.9 area.c
--- area.c  7 Jan 2009 21:16:36 -   1.9
+++ area.c  4 Jun 2013 20:58:05 -
@@ -94,19 +94,24 @@ area_find(struct ospfd_conf *conf, struc
 }
 
 void
-area_track(struct area *area, int state)
+area_track(struct area *area)
 {
-   int old = area->active;
+   int old = area->active;
+   struct iface*iface;
 
-   if (state & NBR_STA_FULL)
-   area->active++;
-   else if (area->active == 0)
-   fatalx("area_track: area already inactive");
-   else
-   area->active--;
+   area->active = 0;
+   LIST_FOREACH(iface, &area->iface_list, entry) {
+   if (iface->state & IF_STA_DOWN)
+   continue;
+   area->active = 1;
+   break;
+   }
 
-   if (area->active == 0 || old == 0)
+   if (area->active != old) {
+   ospfe_imsg_compose_rde(IMSG_AREA_CHANGE, area->id.s_addr, 0,
+   &area->active, sizeof(area->active));
ospfe_demote_area(area, old == 0);
+   }
 }
 
 int
@@ -116,7 +121,7 @@ area_border_router(struct ospfd_conf *co
int  active = 0;
 
LIST_FOREACH(area, &conf->area_list, entry)
-   if (area->active > 0)
+   if (area->active)
active++;
 
return (active > 1);
Index: interface.c
===
RCS file: /cvs/src/usr.sbin/ospfd/interface.c,v
retrieving revision 1.75
diff -u -p -r1.75 interface.c
--- interface.c 14 May 2012 10:17:21 -  1.75
+++ interface.c 4 Jun 2013 20:58:05 -
@@ -136,8 +136,10 @@ if_fsm(struct iface *iface, enum iface_e
if (new_state != 0)
iface->state = new_state;
 
-   if (iface->state != old_state)
+   if (iface->state != old_state) {
+   area_track(iface->area);
orig_rtr_lsa(iface->area);
+   }
 
if (old_state & (IF_STA_MULTI | IF_STA_POINTTOPOINT) &&
(iface->state & (IF_STA_MULTI | IF_STA_POINTTOPOINT)) == 0)
Index: neighbor.c
===
RCS file: /cvs/src/usr.sbin/ospfd/neighbor.c,v
retrieving revision 1.46
diff -u -p -r1.46 neighbor.c
--- neighbor.c  17 Jan 2013 10:07:56 -  1.46
+++ neighbor.c  4 Jun 2013 20:58:05 -
@@ -204,7 +204,6 @@ nbr_fsm(struct nbr *nbr, enum nbr_event 
 * neighbor changed from/to FULL
 * originate new rtr and net LSA
 */
-   area_track(nbr->iface->area, nbr->state);
orig_rtr_lsa(nbr->iface->area);
if (nbr->iface->state & IF_STA_DR)
orig_net_lsa(nbr->iface);
Index: ospfd.h
===
RCS file: /cvs/src/usr.sbin/ospfd/ospfd.h,v
retrieving revision 1.91
diff -u -p -r1.91 ospfd.h
--- ospfd.h 17 Jan 2013 10:07:56 -  1.91
+++ ospfd.h 4 Jun 2013 20:58:05 -
@@ -104,6 +104,7 @@ enum imsg_type {
IMSG_NEIGHBOR_CAPA,
IMSG_NETWORK_ADD,
IMSG_NETWORK_DEL,
+   IMSG_AREA_CHANGE,
IMSG_DD,
IMSG_DD_END,
IMSG_DD_BADLSA,
@@ -530,7 +531,7 @@ struct demote_msg {
 struct area*area_new(void);
 int area_del(struct area *);
 struct area*area_find(struct ospfd_conf *, struct in_addr);
-voidarea_track(struct area *, int);
+voidarea_track(struct area *);
 int area_border_router(struct ospfd_conf *);
 u_int8_tarea_ospf_options(struct area *);
 
Index: ospfe.c

Re: ALTQ(32bit)

2013-06-04 Thread Kurt Mosiejczuk
On Mon, Jun 03, 2013 at 03:49:22PM +0200, Peter N. M. Hansteen wrote:

> ALTQ is old code (perhaps more obviously so to German speakers than to others 
> ;)), a replacement 
> is in the pipeline but not immediately ready, unfortunately.

> http://bsdly.blogspot.ca/2011/07/anticipating-post-altq-world.html gives some 
> background,
> diffs are being tested by various people now, and the commit of the new 
> queueing system
> *must* be moving closer by the minute. But no definite ETA just yet.

The diff Henning sent out is working well so far in my house.  My wife
has gone mad with power, setting up different queues for different sets of
traffic on our home network (voip, netflix, web, etc).

She talked about setting up a 'doghouse' queue in case I got myself in
trouble with her and that she would set it down to 1 byte.

My response: "I didn't realize you could use pfctl to write divorce papers."

--Kurt



Re: ALTQ(32bit)

2013-06-04 Thread Chris Cappuccio
Stuart Henderson [s...@spacehopper.org] wrote:
> 
> It may well be a problem if you're using medium/large altq buffers
> or if you raise net.inet.ip.ifq.maxlen too high..

While I don't disagree in concept (by definition, using sysctl maxlen=
big would create a large buffer), I think in implementation most
people are simply using ethernet-ethernet firewalls and the real
buffering is already happening somewhere else (on a router or bridge that
goes from a high-speed link to a low-speed link).

The people who are concerned with buffer issues on their firewall
(whose interfaces are typically all run at equivalent speed) are
looking in the wrong place.

Raising IFQ on a box with two interfaces that both run at 1Gbps
is not going to cause a buffer bloat issue. It just gives OpenBSD
a longer queue to get more work done when there are large bursts
of traffic. This really isn't the same problem that is bandied about.

That problem looks more like a rack with servers connected at
10Gbps, talking to a client somewhere else at 1Gbps, or 100Mbps.
The 10Gbps senders may fill up the potentially large switch or
router buffers at rates well above 1Gbps, only to wait for the
buffers to drain. That is the problem in a nutshell - fast source,
slow receiver, equipment in between takes the brunt of the traffic,
high latency for the traffic that got delivered, and it repeats all
over.
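To put rough numbers on that (illustrative figures, not from the thread): the worst-case latency a full queue adds is simply its size divided by the egress rate, which is why the same buffer is harmless at 1Gbps and painful on a slow uplink.

```python
# Worst-case added latency = buffer size / drain (egress) rate.
def buffer_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    link_bytes_per_s = link_mbps * 1_000_000 / 8
    return buffer_bytes * 1000 / link_bytes_per_s

# 256 packets of 1500 bytes draining at 1Gbps: negligible (~3 ms)
print(f"{buffer_delay_ms(256 * 1500, 1000):.2f} ms")
# The same queue draining at a 10Mbps uplink: very noticeable (~300 ms)
print(f"{buffer_delay_ms(256 * 1500, 10):.0f} ms")
```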

I'm ignoring that ALTQ can be used to shape for lower-speed links;
it seems that large ALTQ queues would have the same effect there.
I guess if you are creating that situation with a queueing algorithm,
then you want ALTQ to have small queues. I don't recall any excessive
buffering in practice with ALTQ. Isn't there some measure of support
for RED and ECN too?