Ned Slider put forth on 11/3/2010 3:11 PM:

> Stan, and others who are using this file - have any of you looked at the
> overlap with greylisting? I would imagine that the vast majority of
> clients with dynamic/generic rDNS would be spambots and as such I would
> expect greylisting to block the vast majority anyway, and without the
> risk of FPs. IOW what percentage of connection attempts from clients
> with dynamic/generic rDNS will retry? Of course the benefits of growing
> such a list now would become immediately apparent the day spambots learn
> to retry to overcome greylisting.

Hmm.  The CBL still exists.  The PBL and other "dynamic" dnsbls still
exist.  I guess they've not heard of this master zombie killer called
greylisting. ;)

The performance impact of greylisting is substantially higher than that
of an access table lookup, yes, even with caching implementations such
as postgrey.  You also get the retry-interval finger tapping with
greylisting: waiting on order confirmations, list subscriptions,
airline reservations, etc.  Greylisting is simply not an option at some
organizations due to management policy.  Greylisting is not a panacea.
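
To put the cost difference in context, here's roughly where each
approach hooks into Postfix.  The map path and postgrey port are
assumptions about a typical setup, not a drop-in config:

  # main.cf (sketch) -- the pcre lookup runs in-process; the policy
  # service is a socket round trip to postgrey on its default port
  smtpd_recipient_restrictions =
      permit_mynetworks,
      reject_unauth_destination,
      check_client_access pcre:/etc/postfix/fqrdns.pcre,
      check_policy_service inet:127.0.0.1:10023

The pcre lookup is evaluated inside the smtpd process itself, while the
policy service call costs a socket round trip plus postgrey's database
hit for every new client/sender/recipient triple.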

If this expression file is to evolve, some of the first additions will
likely be patterns matching snowshoe farms and other spam sources
different from the generic broadband type hosts targeted by the current
expressions.
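
As an idea of what such an addition might look like, a snowshoe entry
would target the throwaway naming those farms tend to use.  The pattern
below is purely hypothetical, not taken from the file:

  # hypothetical snowshoe-style entry -- example only
  /^mail[0-9]+\.[a-z]+-[a-z]+\.(com|net)$/  REJECT snowshoe rDNS naming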

Regarding the FP issue you raise, I think you're approaching it from
the wrong perspective.  These are regular expressions.  They match rDNS
naming patterns.  We're not talking about something sophisticated like
Spamhaus Zen or the CBL, which use spamtraps exclusively and
auto-expire listings shortly after the emission stops.  For just about
every /21 consumer/small biz broadband range that any of these patterns
may match, there is likely a ham server or three nestled in there,
maybe a small biz, or a kid with a Linux box and Postfix in the
basement.
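
For instance, generic broadband rDNS typically encodes the client IP in
the hostname, so the expressions look something like these two.  Both
illustrate the style only, using a made-up ISP domain:

  # illustrative generic/dynamic rDNS patterns -- made-up domain
  /^[0-9]+-[0-9]+-[0-9]+-[0-9]+\.dyn\.example-isp\.net$/  REJECT generic rDNS
  /^(adsl|dsl|cable|ppp)[0-9.-]+\.example-isp\.net$/      REJECT generic rDNS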

That's why I say "whitelist where necessary" when promoting use of this
regex set.  I haven't checked the entire list, but I'm pretty sure all
the patterns match mostly residential-type ranges.  Some ISPs mix small
biz in with residential, which is stupid, but they do it.  My
residential ISP is an example, as we discussed.  With a method this
basic, there's no way around rejecting the ham servers without
whitelisting.  If you start removing patterns due to what you call FPs,
pretty soon there may be few patterns left.  If you start adding
patterns to the top of the file to specifically whitelist those ham
sources, you're starting to duplicate the DNSWL project, except that
such regex patterns will only be written retroactively, after an FP.
That method of weeding out the ham servers simply does not scale.  Any
ham server within a residential-type range should be listed by its
operator at dnswl anyway.  Do you query dnswl, Ned?  If not, I wonder
how many of your FPs wouldn't have been rejected if you did.
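
Mechanically, "whitelist where necessary" is just an OK lookup
consulted ahead of the regex map.  The whitelist file name is an
assumption about layout and the address is a placeholder:

  # sketch: local whitelist checked before the regex map
  check_client_access cidr:/etc/postfix/rdns_whitelist.cidr,
  check_client_access pcre:/etc/postfix/fqrdns.pcre

  # /etc/postfix/rdns_whitelist.cidr
  # known ham server inside an otherwise matched range
  192.0.2.25/32  OK

Keep in mind that OK permits the client outright, so entries like this
belong after reject_unauth_destination in the restriction list.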

To date, I can't recall a single FP here due to these regexes.  This is
one of the reasons I like it so well and promote it.  As always, YMMV.

-- 
Stan
