Ok, thanks for the tips. I did not have any ifq drops, but I have still
increased net.inet.icmp.errppslimit to 10000 (from the previous value of
1000, shown below) and will see if that helps. Thanks also for the
clarification on the match counter.
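
For anyone following along, this is roughly what I ran; just a sketch of the
sysctl(8) calls on my box (the drops OID name and the 10000 value are from my
setup, adjust as needed):

# check whether the IP input queue is dropping packets; a non-zero,
# growing value would point at raising net.inet.ip.ifq.maxlen
sysctl net.inet.ip.ifq.drops
# raise the ICMP error rate limit on the running kernel
sysctl net.inet.icmp.errppslimit=10000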

I had also forgotten to include the sysctl changes I had made, mostly based
on calomel.org; they are listed below, with a quick note after the list on
how they get applied:

kern.maxclusters=128000
net.inet.icmp.errppslimit=1000
net.inet.ip.ifq.maxlen=1536
net.inet.ip.mtudisc=0
net.inet.ip.ttl=254
net.inet.ipcomp.enable=1
net.inet.tcp.ackonpush=1
net.inet.tcp.ecn=1
net.inet.tcp.mssdflt=1472
net.inet.tcp.recvspace=262144
net.inet.tcp.rfc1323=1
net.inet.tcp.rfc3390=1
net.inet.tcp.sack=1
net.inet.tcp.sendspace=262144
net.inet.udp.recvspace=262144
net.inet.udp.sendspace=262144
vm.swapencrypt.enable=1
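
For reference, lines in that name=value form are what goes into
/etc/sysctl.conf so the settings persist across reboots; the same form works
with sysctl(8) against the running kernel, for example:

sysctl net.inet.tcp.recvspace=262144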

On Tue, Feb 1, 2011 at 3:15 PM, Henning Brauer <lists-open...@bsws.de> wrote:

> * Steve Johnson <maill...@sjohnson.info> [2011-02-01 20:35]:
> > I currently have a system that has no match rule in the ruleset, but that
> > uses tables for a big chunk of the traffic, including our monitoring
> > station that has a pretty high SNMP request rate. That system has a state
> > table that usually stabilizes between 15-20K sessions, with a session
> > search rate of around 10K. The states limit has been raised to 100000 and
> > the frags to 10000, but all other limits are set to default values.
>
> you can increase that much more. the times when kmem was a very
> scarce resource are long over.
>
> > However, the "match" counter always shows a rate of around 199-200 per
> > second.
>
> the counter has nothing to do with match rules. it is increased any
> time a rule matches, regardless of the type.
>
> > During heavy traffic periods, we are getting some failures from the
> > monitoring system, and the only thing that seems possibly out of health
> > for the system is the match counter rate. System processor and memory
> > are fine and there is no other noticeable impact, but clearly the
> > monitoring tool is seeing an impact, as it didn't show this behavior
> > before we implemented the PF systems.
>
> you might hit some other limit, not necessarily pf. start with
> checking sysctl net.inet.ip.ifq - in particular drops, and increase
> maxlen if you see it increasing.
> depending on how you monitor you might also run into the icmp err rate
> limit, play with the net.inet.icmp.errppslimit sysctl.
>
> --
> Henning Brauer, h...@bsws.de, henn...@openbsd.org
> BS Web Services, http://bsws.de
> Full-Service ISP - Secure Hosting, Mail and DNS Services
> Dedicated Servers, Rootservers, Application Hosting
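
Also, to follow up on the note above that the state limit can be raised much
further: for anyone else reading, these are the pf.conf knobs involved. The
numbers here are purely illustrative, not a recommendation:

set limit states 500000
set limit frags 25000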
