Hi,
Have you tried setting some tuning options in pf.conf and sysctl.conf?

E.g., for sysctl.conf:
net.inet.ip.ifq.maxlen=512     # maximum allowed input queue length (256 * number of physical interfaces)
kern.bufcachepercent=90        # allow the kernel to use up to 90% of RAM for cache (default 10%)
net.inet.udp.recvspace=131072  # increase based on your memory
net.inet.udp.sendspace=131072  # increase based on your memory
ddb.panic=0                    # do not enter the ddb console on kernel panic; reboot if possible (fewer headaches)

For pf.conf:
set optimization aggressive
...
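Beyond "set optimization aggressive", some further pf.conf knobs may help with the SYN flood described below. A sketch only, untested; every number, the table name and the port are assumptions to tune for your hardware:

```pf
# Raise the state ceiling and let pf shorten state lifetimes adaptively
# as the table fills, so flood states expire faster under pressure
set limit states 1000000
set timeout { adaptive.start 600000, adaptive.end 1000000 }

# Track sources opening connections too fast and shunt them into a table
table <abusive_hosts> persist
block in quick from <abusive_hosts>
pass in proto tcp to port 80 flags S/SA keep state \
        (max-src-conn-rate 10/5, overload <abusive_hosts> flush global)
```

Caveat: against a flood where each spoofed address sends exactly one SYN, the per-source rate limit will never trigger; the adaptive timeouts are the part most likely to matter in that case.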


On Wed, Feb 22, 2012 at 12:21 AM, Joachim Schipper <
joac...@joachimschipper.nl> wrote:

> On Mon, Feb 20, 2012 at 05:57:05PM +0100, Roger S. wrote:
> > I am facing regular and substantial DDoS attacks, and I would like to
> > know how the OpenBSD community deals with them. Hints and inputs welcome.
> >
> > The obvious bit first: my input pipes are not filled; there is plenty of
> > bandwidth available for my regular users. (...)
> >
> > Methodology is more or less always the same :
> >       - massive UDP flood           :   2 Gbps / 150 Kpps -> dropped
> > directly on the router, not a problem
> >       - moderate ICMP flood         :  10 Mbps /  12 Kpps
> >       - moderate IP fragments flood : 380 Mbps /  57 Kpps
> >       - moderate TCP RST flood      :  10 Mbps /  30 Kpps
> >       - massive TCP SYN flood       : 640 Mbps /   2 Mpps -> yup, that
> hurts
> >
> > So, UDP never ever reaches my OpenBSD box. The SYNs are made with a
> > very vicious method: each used IP sends exactly one SYN, but there are
> > millions of them (traffic probably spoofed, but we cannot use uRPF as we
> > have asymmetric traffic and routes). I tried to set limit states with
> > 1M entries, and it was quickly filled (tried 5M but the box collapses
> > way before that). So in the end, the state table collapses and no
> > traffic can pass, even for regular users with already established
> > connections.
> >
> > I ran some experiments in a lab trying to reproduce this, with a box
> > roughly identical to what I have in production (but much weaker, of
> > course). The box collapses at 600 Kpps SYN (100% interrupts), but
> > handles everything very gently (less than 50% interrupts and no packet
> > loss) if the first rule evaluated is "block drop in quick from
> > !<whitelisted_users>". So it seems that my bottleneck is PF here, not
> > the hardware. A consequence of this saturation: both my main firewall
> > and my backup claim MASTER ownership of the CARP (split-brain
> > syndrome). CARP works just fine when I add the block rule, though.
> >
> > Some configuration details :
> >       - OS  : OpenBSD 5.0/amd64 box, using GENERIC.MP
> >       - CPU : Intel X3460 CPU (4 cores, 2.80GHz)
> >       - RAM : 4GB
> >       - NIC : 2x Intel 82576 (2 ports each)
> >
> > Each network card has the following setup : one port to the LAN, one
> > port to the WAN. Each pair (LAN1/LAN2 and WAN1/WAN2) is trunked using
> > LACP. Already bumped net.inet.ip.ifq.maxlen, as all NICs are
> > supported. My benchmarks highlighted two interesting things: amd64
> > has better performance than i386 (roughly 5-10% fewer interrupts, with
> > the same rules and traffic), but the difference between GENERIC and
> > GENERIC.MP is insignificant.
> >
> > My current idea is to hack a daemon to track established connections
> > (extracting them à la netstat), and inject my block rule in an anchor
> > (à la relayd) when needed (watching some stats from pf, with its ioctl
> > interface). Pros: regular users the firewall saw before the attack can
> > still use the service. Cons: no new users are allowed until the
> > removal of the rule, obviously. Better than nothing, but I welcome any
> > other hints :)
> >
> > One other solution may be to add boxes. I tried a carpnodes cluster,
> > but at 600 Kpps I got a "split brain" with both nodes claiming MASTER
> > for each carpnode. Maybe configuring ALTQ could help with this? As I
> > have more boxes, I could deal with the performance impact of ALTQ.
> >
> > I am willing to test any patch/suggestion you may have, of course.
> > Even just hints about kernel code, as I am currently messing with PF
> > code myself. I did compile a profiled kernel, I must now check the
> > results but that will be another story.
>
> Just the most obvious idea, since you mention that this sort-of-works if
> you put "block drop in quick from !<whitelisted_users>": does it handle
> this load if you turn off pf, or only include one or two trivial rules?
> It certainly suggests that you may be well-served by optimizing your
> pf.conf... (also, you've probably found the "synproxy" directive? If
> not, try that too.)
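For reference, the synproxy directive mentioned above is spelled like this in pf.conf ($wan_if and the port are placeholders):

```pf
# pf answers the TCP handshake itself and only creates a state towards
# the server once the client completes it, so bare SYNs never reach it
pass in on $wan_if proto tcp to port 80 flags S/SA synproxy state
```

One caveat worth checking: synproxy needs pf to see both directions of the connection, which may clash with the asymmetric routing Roger mentions.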
>
> Also, state tracking is apparently faster than stateless pf for normal
> firewalls. I'd double-check if this is still true in your case, though;
> if nothing else, stateless pf makes a CARP'ed setup easier.
>
> I'm pretty sure you can muck with the rules without dropping existing
> connections. (pf essentially does "does this packet match a known state?
> If not, look at pf.conf".) This is almost certainly easier than your
> proposed daemon.
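On changing rules without dropping existing connections: an anchor makes this easy to do atomically. A sketch (the anchor name "ddos" is made up, and it assumes the main ruleset already contains an `anchor "ddos"` attachment point near the top):

```shell
# Push the emergency rule into the anchor; existing states are untouched,
# so established connections keep flowing
echo 'block drop in quick from !<whitelisted_users>' | pfctl -a ddos -f -

# When the attack subsides, flush the anchor to lift the block
pfctl -a ddos -F rules
```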
>
> A final, rather hackish, idea that probably does need a bit of
> programming: greylisting for SYNs. Legitimate users will send you a
> second SYN, so you could do something like (this has not even been
> syntax-checked!)
>
>  block drop in log quick from !<syn_seen> flags S/SA no state
>
> and then add every logged IP to syn_seen. Obviously, this will slow down
> access to the service for legitimate users, which may or may not be
> acceptable.
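The logged-IP-to-table loop sketched above could look roughly like this (everything except the <syn_seen> table name is an assumption; the tcpdump field layout in particular should be verified against your pflog output):

```shell
#!/bin/sh
# Hypothetical greylisting helper: watch pflog for blocked SYNs and add
# each source address to <syn_seen>, so the client's retransmitted SYN
# matches the table and passes.

# Pull the source address out of one line of "tcpdump -n" output, e.g.
# "... IP 198.51.100.7.49152 > 203.0.113.1.80: Flags [S], ..."
src_ip() {
    echo "$1" | sed -n 's/.* IP \([0-9.]*\)\.[0-9]* > .*/\1/p'
}

# Main loop (commented out here; needs root and a pflog0 interface):
# tcpdump -n -l -i pflog0 'tcp[tcpflags] & tcp-syn != 0' |
# while read -r line; do
#     ip=$(src_ip "$line")
#     [ -n "$ip" ] && pfctl -t syn_seen -T add "$ip"
# done
```

Something like `pfctl -t syn_seen -T expire 3600` run periodically would age entries out again so the table does not grow without bound.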
>
>        Joachim
>
> --
> PotD: www/squid,ntlm - WWW and FTP proxy cache and accelerator
> http://www.joachimschipper.nl/
