>James Craig Burley wrote:
>
>> Going back to my earlier questions, which I'll rephrase and ask you:
>> 
>>   Does DNS rely on local caching to avoid latencies related to network
>>   topology and potential problems with overloaded or unreachable Root
>>   servers?
>
>Your question is based on a false premise.  You seem to be obsessed with 
>the root servers being the primary point of failure (or you are being 
>imprecise in your terminology).

Why not simply answer the question?  Does DNS rely on local caching to
be effective?  Never mind the rest of it -- after all, overloaded or
unreachable authoritative servers can be problems as well.

I'm simply saying, if caching is not important, why does DNS implement
it so widely?

And if caching *is* important, what effects will SPF lookups have on
the hit rates for such caches, especially in the face of attacks
designed to expose SPF's weaknesses?

>The gTLD (.COM, .NET) servers in particular are massively redundant

I'm not all that worried about them, though, in theory, a sufficiently
well-coordinated attack targeting SPF (once widely deployed) could
easily bring those servers to their knees *regardless* of redundancy.

What I'm asking you is: will you be okay with frequent SPF lookups on
incoming email rendering your *downstream* DNS caches not just useless
but actively costly, because their hit rates drop so low that most of
your DNS lookups just waste time checking caches that won't have the
answers anyway?
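
To make that concrete, here's a back-of-the-envelope Python sketch.
Every number in it is invented purely for illustration; plug in your
own traffic figures:

  # All traffic figures are hypothetical, purely for illustration.
  baseline_qps = 100.0  # ordinary lookups/sec against the local cache
  baseline_hit = 0.90   # healthy hit rate, thanks to locality of reference
  spf_qps      = 400.0  # one lookup per inbound message, keys chosen by senders
  spf_hit      = 0.02   # near zero: forged sender domains rarely repeat

  total_qps  = baseline_qps + spf_qps
  total_hits = baseline_qps * baseline_hit + spf_qps * spf_hit
  print("blended hit rate: %.1f%%" % (100.0 * total_hits / total_qps))
  print("misses going upstream: %.0f/sec (was %.0f/sec)"
        % (total_qps - total_hits, baseline_qps * (1.0 - baseline_hit)))
  # -> hit rate drops from 90% to ~19.6%; upstream traffic jumps 10/sec -> 402/sec

Whatever the real numbers turn out to be, the shape of the result is
the same: low-hit-rate lookups dilute the cache and multiply the load
pushed upstream.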

>So Yes, local DNS caching is used to avoid some network latencies, but 
>No, the root (or even the gTLD) servers are not an issue.  If person or 
>persons unknown decide to attack the root or TLD servers (and should 
>they somehow succeed), the fact that SPF queries would be affected too 
>is a minor footnote.  Everything related to DNS would stop working.

Widespread injections of even modest amounts of well-designed email
envelopes could be an easy way to trigger that, until enough people
turned off SPF.

In the meantime, it'd be really hard to stop such an attack on root
servers (or on a set of authoritative servers), because there'd be no
easy way to distinguish hostile queries from innocent ones.

(Pending more thought on the issue, I'd lean toward advocating an
entirely distinct set of DNS caches for SPF and similar
reverse-lookup-type measures.  That is, if you do any lookup keyed on
a host name supplied by some external, potentially untrusted entity,
you do it via a DNS lookup on port 54 instead of 53, and sysadmins
would ensure that their port-54 caches were memory-limited so they
couldn't crowd out the port-53 caches.  Root servers would then be
able to limit the percentage of resources they devote to queries
arriving via port 54.)
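
Here's a rough sketch of what the client side of that scheme might
look like, using Python's dnspython library.  The loopback address and
the port-54 cache are assumptions of the proposal, not anything that
exists today:

  import dns.resolver  # dnspython

  # Ordinary lookups keep using the system resolver on port 53.
  # Lookups keyed on externally supplied names go to a separate,
  # deliberately memory-limited cache listening on port 54 (hypothetical).
  spf_resolver = dns.resolver.Resolver(configure=False)
  spf_resolver.nameservers = ["127.0.0.1"]
  spf_resolver.port = 54

  # An SPF check on an untrusted sender domain would then go via port 54:
  answer = spf_resolver.resolve("example.com", "TXT")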

>>   Does the local caching rely on locality of reference over the set of
>>   lookups performed?
>
>By this, I have to assume that you are referring to glue records, which 
>can be cached to permit queries to short circuit the entire resolution 
>from the root.

Again, not necessarily; I'm referring to the simple concept of
*caching* itself, and whether SPF exposes a vulnerability in the very
thing DNS apparently depends on in order to work properly (that is, to
perform well enough to be useful).

>But, again, your question is reliant on a hidden (and I 
>would argue false) premise that the majority of the queries are of 
>significant depth into the hierarchy.  In the case of rDNS queries, there 
>can be multiple delegations to smaller and smaller IP blocks, so the 
>depth of the queries would be higher.

SPF queries != rDNS queries, as I've pointed out already.

>However, we are specifically discussing queries based on domain name, 
>which in the case of gTLD will never be more than 3 queries away 
>(ignoring the effect of local caching) in practice.  And I say "in 
>practice" meaning specifically "the Real World."  No domain 
>administrator is going to assign a subdomain to any agent that they do 
>not completely trust.  So the *only* thing to worry about is rogue TLD 
>domains themselves (see next answer).

No, again: do the math on what happens when DNS caches experience very
low hit rates due to SPF lookups (see the back-of-the-envelope sketch
earlier in this message).

>>   Are SPF-based DNS lookups under the control of a local user
>>   population, or of external, potentially hostile, entities?
>
>The SPF-based *lookups* are under the control of the MTA administrator; 

Yes, but the *keys* are *not*.  Those are under the control of any old
SMTP client that connects to that MTA's SMTP server:

  MAIL FROM:<anything@a.b.c.d.e>

That *guarantees* a DNS lookup based on the key "a.b.c.d.e", assuming
the MTA is using SPF as designed.
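
In code terms, the lookup the MTA is forced into looks roughly like
this dnspython sketch (the parsing is deliberately simplified, and the
domain is the hypothetical one above):

  import dns.resolver  # dnspython

  def spf_query_key(mail_from_cmd):
      # Everything after the '@' is chosen by the connecting SMTP
      # client, not by the MTA's administrator.
      return mail_from_cmd.rstrip(">").split("@", 1)[1]

  domain = spf_query_key("MAIL FROM:<anything@a.b.c.d.e>")
  try:
      dns.resolver.resolve(domain, "TXT")  # one lookup per message
  except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
      pass  # no SPF record published -- but the query still hit the caches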

The problem is that the MTA gives this lookup the appearance of
legitimacy as far as upstream DNS caches and servers are concerned,
but the MTA does not *really* care about the information other than to
use it to flag the email.

(Postponing SPF lookups to later on helps the problem somewhat.  The
best approach IMO is to postpone them until the end user decides the
email should be "tested" and explicitly asks for this test.)

>the *answers* are under the control of the external entities.  There is 
>no doubt in my mind that if SPF/Caller ID efforts begin to have an 
>effect, spammers will start to publish domain SPF records. 
>However, the local MTA administrators can choose not to trust specific 
>domains, through the use of an SPF-specific blacklist.

That is all true as well, though it recurses back to the problem of
supporting blacklists.

>The real strength of systems like SPF comes when you consider the 
>current trend towards zombie armies of spam relays.  These exploited 
>computers send e-mail with forged Sender/From information from random 
>machine IPs inside networks that haven't firewalled port 25 (through 
>either sloth or stupidity).  If there exists an SPF record for the 
>forged Sender domain

A Very Big If.  Exactly when do you think all domains will have SPF
records that do not in any way permit *zombie* systems (which are
usually, by definition, on "trusted" networks, like mine -- Comcast --
or in dormitories, corporations, and so on) to send emails on their
behalf?

Spammers will quickly learn to discover exposed domain names just as
they learned to discover "legit" email addresses, not just to *send*
email to, but to *forge* email *from*.

>If Comcast decides to publish 
>an SPF record permitting all IP addresses in their block to send e-mail 
>(hint, this is bad), then no one will trust Comcast's SPF records.

Two things about that:

  1.  I'm in Comcast's block, but I don't send spam.  (Then again, I
      send email under one of my own domains, and I hope my dynamic-IP
      services will allow me to publish SPF records someday!)

  2.  We're recursing on the problem of trust again, this time to
      determine whether SPF records *themselves* can be trusted.

So now, instead of just doing an SPF lookup, you have to do a distinct
lookup (or set of lookups) to determine whether the SPF records
can be trusted.

>This discussion is going nowhere; either you are a troll (someone who 
>argues just for the intellectual challenge), or you just don't agree 
>that SPF will help prevent certain unauthorized e-mail.

I'm definitely not a troll, though I'll admit this is an interesting
intellectual challenge, trying to get people to answer simple
questions about locality of reference, the importance of DNS caching,
etc.

And, I certainly agree that SPF *will* help prevent certain
unauthorized email.  I believe there's no debating that!

>Either way, I 
>don't see any hope to convince you that your theoretical weaknesses are 
>unrealistic based on actual, real world, usage patterns.  Consider this 
>my last posting on the subject.

Okay, fine, if you say so.

But I wish you'd explain just why you consider *today's* "actual, real
world, usage patterns" as representative of what tomorrow's usage
patterns will be, after SPF is widely deployed and used to combat
forgeries (read: block spam and vermin).

Point being, if my *theoretical* weaknesses are unrealistic *today*
(and I happen to agree that they are), they might become serious
problems *tomorrow*.

And some of the biggest engineering disasters ever were due to
"theoretical" weaknesses turning into real ones.

Is it really so much to ask that we try some actual *experimenting*
and *engineering* before telling the world SPF will be a useful weapon
in the anti-UBM arsenal?

>FWIW, I am publishing SPF records and I am experimenting with using SPF 
>with my domains.  I'm also using multiple blacklists to block known 
>spam sources and various content analysis tools to tag additional spam.

Publishing SPF records does little harm to DNS, beyond making the
database a tiny bit larger for each such publication.  (There are
already substantial, real-world problems with falling back to TCP to
handle oversize DNS responses, however, so SPF adoption is already
proving to be a painful matter.  As I said earlier, I've seen my own
outgoing email get blocked by SPF implementations because of bugs in
deployed DNS caches and/or servers, presumably BIND.)
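
For the curious, the fallback in question looks like this dnspython
sketch (the resolver address is a placeholder):

  import dns.flags
  import dns.message
  import dns.query

  q = dns.message.make_query("example.com", "TXT")
  resp = dns.query.udp(q, "192.0.2.1", timeout=2)  # placeholder resolver IP
  if resp.flags & dns.flags.TC:
      # Oversize answer was truncated over UDP; retry the query over TCP.
      # This is the step I've seen deployed caches/servers get wrong.
      resp = dns.query.tcp(q, "192.0.2.1", timeout=2)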

Querying DNS for SPF records, in response to each incoming email, adds
significant stress to the system (the system of DNS caches and
servers, not much to the local system).

It is my hope that SPF will be so useful that the costs will be more
than offset by the benefits of spammers and vermin authors injecting
much less of their unwanted bulk mail into the system (as a whole)
because they can no longer be assured of doing so anonymously.

It is my concern that SPF cannot possibly be that useful in practice,
because it cannot come *close* to stopping forgery, and it costs too
much merely to make it more difficult for *today's* generation of
spammers and vermin authors.

-- 
James Craig Burley
Software Craftsperson
<http://www.jcb-sc.com>
