Since you already stated that you have valid clients which can open many
connections at once, it seems pf alone might not be the right solution.
Have you thought about using a reverse proxy server in front of your
web servers? A program like Pound lets you specify regular expressions
for valid URLs; only matching requests are forwarded to your web
servers, and everything else is answered with an error by the proxy
itself. If you wanted to, you could also make a script to watch the
logs and add offending IPs to the pf blacklist table. (There are rough
sketches of both at the end of this mail.)

Pound secure reverse proxy "how to"
http://calomel.org/pound.html

If your web servers can use mod_evasive, that might also help.
mod_evasive returns errors to clients that connect more than a set
number of times within a given interval, and I believe mod_security
can blacklist clients who produce too many errors.

If you decide to stick with just pf, then take a stab at writing a
script to watch the web server logs. If a web client produces a
certain number or type of errors, put it in a slow queue for a while.
pf's "probability" directive works really well when you want to slow
a host down without blocking it completely; there is a sketch of that
at the end as well.

You can find pf examples here:
OpenBSD Pf Firewall "how to" ( pf.conf )
http://calomel.org/pf_config.html

Hope this helps.

--
 Calomel @ http://calomel.org
 Open Source Research and Reference

On Thu, Jan 31, 2008 at 10:50:43AM -0600, Cache Hit wrote:
>Hello,
>
>I've been successfully using the max-src-conn and max-src-conn-rate
>with an overload into a table that I block for our external firewall
>that protects a few dozen (mostly Sun) web servers. As it stands it
>works great for blocking ssh, ftp, smtp and several other protocols
>when there are attempts at floods or hacks. I group them by port
>and have different settings for different sets of ports.
>
>One thing I continually run into on the machines are port 80 attacks
>or floods. I'd like to do something similar with PF as I'm already
>doing for other protocols to overload these into a table and block
>them, but I'm finding it very hard to come up with a set of rules
>that eliminate any false positives while still catching actual
>attacks. I find in particular there are a few websites behind our
>firewall that have very complex page structures with lots of embedded
>images, such that a fast browser with a fast connection viewing
>certain sections of the site can easily do 100's of legit GETs in a
>matter of a couple seconds.
>
>Does anyone have any suggestions for weeding out the false
>positives? Merely upping either of max-src-conn or max-src-conn-rate
>seems to be eventually self-defeating, as it just allows attacks
>through as well as allowing the fast legit traffic.
>
>thanks,
>
>--
>[EMAIL PROTECTED]
>The sky above the port was the color of television, tuned to a dead
>station.
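
A few rough sketches to make the above concrete. First, a minimal
Pound configuration in the spirit of what I described; the addresses,
port, and URL pattern below are placeholders, not anything from your
setup:

  # /etc/pound.cfg -- minimal sketch, all values are placeholders
  ListenHTTP
      Address 192.0.2.1          # the proxy's public address
      Port    80

      Service
          # only requests matching this pattern reach the backends;
          # everything else is answered with an error by Pound
          URL "^/(index\.html|images/.*|app/.*)$"

          BackEnd
              Address 10.0.0.10  # your real web server
              Port    80
          End
      End
  End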
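
If your servers run Apache, the mod_evasive thresholds go in
httpd.conf and look like this; the numbers are only examples and need
tuning so your fast legit browsers stay under the limits:

  <IfModule mod_evasive20.c>
      DOSHashTableSize    3097
      DOSPageCount        10    # hits on the same URI per interval
      DOSPageInterval     1     # page interval, in seconds
      DOSSiteCount        100   # total hits on the site per interval
      DOSSiteInterval     1     # site interval, in seconds
      DOSBlockingPeriod   60    # answer with 403 for this many seconds
  </IfModule>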
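
For the log-watching script, here is one possible sketch in Python;
the log path, log format, and threshold are assumptions you would
adjust, and it expects a <slow_hosts> table to exist in pf.conf:

  #!/usr/bin/env python
  # watch_logs.py -- hypothetical sketch: count HTTP 4xx responses per
  # client in an Apache-style access log and add repeat offenders to
  # the pf <slow_hosts> table via pfctl.
  import re
  import subprocess
  from collections import defaultdict

  LOG = "/var/log/httpd-access.log"  # placeholder path
  THRESHOLD = 50                     # errors before a host is slowed

  # common log format: ip ident user [date] "request" status bytes
  line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')
  errors = defaultdict(int)

  for line in open(LOG):
      m = line_re.match(line)
      if m and m.group(2).startswith("4"):
          errors[m.group(1)] += 1

  for ip, count in errors.items():
      if count >= THRESHOLD:
          subprocess.call(["pfctl", "-t", "slow_hosts", "-T", "add", ip])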
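
And the pf side of the "slow, but not completely block" idea could
look like the rules below; the table name and the percentage are
arbitrary choices for the sketch:

  # pf.conf sketch -- drop roughly 60% of new port 80 connections from
  # hosts the script above has flagged, instead of blocking them
  # outright
  table <slow_hosts> persist
  block in quick on $ext_if proto tcp from <slow_hosts> \
      to any port 80 probability 60%

Flushing the table from cron every so often (pfctl -t slow_hosts -T
flush) gives well-behaved hosts a way back out of the slow queue.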