Joey wrote, at 10/13/2008 01:42 PM:

> You reach a point where the money we think we are profiting from
> services sucks up all our time and resources and somehow we have to
> reduce that overhead and SPAM. Imagine that we are blocking millions
> of spam messages a month through various methods and we have clients
> complaining about spam... what are we to do.  It gets really old.

No argument here. :)

Of the remaining spam that gets past my defenses, nearly all of it could
be stopped by the following:

1. Require Forward Confirmed reverse DNS (FCrDNS), where the IPs must
match in an IP -> name -> IP lookup.

2. Require reverse DNS (rDNS), where the connecting host must have a PTR
record, returning a (valid) host name in an IP -> name lookup.

3. Require encrypted connections via STARTTLS.
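
In Postfix terms, the first two map directly onto existing smtpd
restrictions, and mandatory TLS is a single setting. A minimal sketch,
assuming Postfix 2.3 or later; the ordering is illustrative, not a
complete policy:

# 2: reject clients that have no PTR record at all.
# 1: reject clients whose PTR name does not resolve back to the
#    connecting IP (the FCrDNS check); this is a superset of 2.
smtpd_client_restrictions =
        permit_mynetworks,
        reject_unknown_reverse_client_hostname,
        reject_unknown_client_hostname

# 3: demand STARTTLS from every client. (RFC 3207 forbids requiring
#    STARTTLS on a publicly referenced MX.)
smtpd_tls_security_level = encrypt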

FCrDNS offers a lot of promise, but if Network Solutions can't even get
it right (when its former parent company, Verisign, controls a huge
chunk of DNS), there's little hope that other sites will. I'd like to
apply the ipt_recent module to hosts without FCrDNS, but filter
developers have little desire to base rules on realtime DNS lookups,
since such lookups can introduce significant overhead and a host of
other serious problems. Selective greylisting aimed at hosts that fail
FCrDNS offers some hope, however, as many of the offenders don't appear
to retry.
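
One way to wire that up is a restriction class that leans on the fact
that Postfix presents a client failing hostname verification as
"unknown". A sketch, assuming a postgrey-style policy daemon listening
on 127.0.0.1:10023; the class name and table path are invented for the
example:

smtpd_restriction_classes = greylist_unverified
greylist_unverified = check_policy_service inet:127.0.0.1:10023

smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        check_client_access hash:/etc/postfix/greylist_clients

# /etc/postfix/greylist_clients (run postmap on it after editing):
# unknown         greylist_unverified

Clients with working FCrDNS never reach the policy server, so the
greylisting delay only lands on hosts that were already suspect.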

Many school and government sites (not to mention China) can't seem to
configure rDNS and FCrDNS properly. I have given up trying to contact
offending sites. Too often, they decide the solution is simply to drop
the recipient from a mailing list, instead of correcting their DNS
records to improve the robustness of their mailings. It's a shame,
because things got pretty quiet on my test domains during the weeks I
implemented reject_unknown_(reverse_)client_hostname.
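
For anyone who wants to gauge the fallout before enforcing this on
production domains, the warn_if_reject prefix turns a restriction into
a log-only dry run (sketch):

smtpd_client_restrictions =
        permit_mynetworks,
        warn_if_reject reject_unknown_client_hostname

Postfix then logs "reject_warning" for every connection that would have
been refused, which makes it easy to spot legitimate but misconfigured
senders before flipping the switch.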

Requiring encryption is a pipe dream, and, as Wietse has mentioned, it
introduces a greater risk of exposing bugs, since it means linking
against a large base of external code.
