I think this may be of interest. It was off-list, so I won't identify the
author I am responding to.
> [off-list]
>
> On Monday, September 24, 2007 06:25:49 PM -0400 Dean Anderson
> <[EMAIL PROTECTED]> wrote:
>
> > I. Harm only possible for ENDSO; Update RFC 2671 Instead
> >
> > The maximum non-EDNS amplification factor is 8
>
> 8x can be significant.
Yes, but an attacker can get more than that from the couple of hundred
root server instances.
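To make the figure concrete, here is a back-of-the-envelope sketch of the
amplification arithmetic (my illustrative numbers, not from the draft): a
non-EDNS UDP response is capped at 512 bytes, while a minimal query packet is
on the order of 64 bytes on the wire, giving roughly the 8x factor cited.

```python
MAX_NON_EDNS_RESPONSE = 512   # RFC 1035 UDP payload limit, bytes
TYPICAL_QUERY_SIZE = 64       # assumed minimal query on the wire, bytes

def amplification_factor(response_bytes, query_bytes):
    """Bytes reflected toward the victim per byte the attacker sends."""
    return response_bytes / query_bytes

print(amplification_factor(MAX_NON_EDNS_RESPONSE, TYPICAL_QUERY_SIZE))  # 8.0

# With EDNS0 a response can be much larger, e.g. a 4096-byte payload:
print(amplification_factor(4096, TYPICAL_QUERY_SIZE))  # 64.0
```

The query size is an assumption; a shorter QNAME shrinks it and raises the
ratio somewhat, but the non-EDNS factor stays in the same single-digit range.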
> > II. Authority Servers not Addressed, but Present More Harm
> > V. Mitigation Options Limited in Authority Case
>
> The document I read does cover these issues, though lightly. It is much
> more difficult to construct an effective attack using an authoritative
> server, because one must first find an existing record large enough to be
> worthwhile.
Your argument was addressed on the DNSOP list. To conduct the attack
with anything approaching anonymity, one __must__ find authority servers
with legitimately large records. But once you have this list, you may
as well use those instead of searching again for reflectors.
It is far easier to search for authority servers than it is to find
reflectors.
> Furthermore, the real danger here is the ability to mount a
> _distributed_ attack, in which large numbers of servers send bogus
> responses at a rate far beyond that which the original authoritative server
> could manage. That requires caching servers; a handful of authoritative
> servers mostly on the same network won't cut it.
Caching servers are not a requirement for a distributed attack. To
conduct the attack with any servers (caching or authority), one still
needs a botnet to send the spoofed source packets. This botnet is
amplified by the same factor, whatever type of server is used. Thus,
the type of server is irrelevant to the damage caused.
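The argument above reduces to one line of arithmetic, sketched here with
assumed, illustrative numbers: the flood reaching the victim is the botnet's
spoofed-query bandwidth times the amplification factor, whichever kind of
server reflects it.

```python
def victim_traffic_mbps(botnet_query_mbps, amplification):
    """Traffic arriving at the victim, in Mbps (toy model, assumed inputs)."""
    return botnet_query_mbps * amplification

# Same botnet, same amplification -> same flood, via either server type:
via_caching   = victim_traffic_mbps(100, 8)   # 100 Mbps of spoofed queries
via_authority = victim_traffic_mbps(100, 8)
assert via_caching == via_authority == 800
```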
> As you note, attacks involving authoritative servers can be considerably
> more difficult to mitigate, since restricting access to them is a cure
> worse than the problem. The document does discuss this issue.
Well, obviously, you didn't get even the gist of the issues _already_
discussed on list for the document. No one else will get it, either,
given the current document.
> > III. Motivating Attacks Seem Contrived
> > There have been no further similar attacks.
>
> There have been many DDOS attacks involving using recursive nameservers as
> reflectors. We seem to see them on a fairly regular basis (no, I don't
> have any data -- I'm not responsible for tracking this stuff; I just notice
> the load spikes on the nameservers once in a while).
I don't see that data. I don't see any reports beyond the original
"urgency--must do this now" report on NANOG (2005, I think). People
have to cooperate to mitigate these attacks, so there would be records,
and there would be more complaints about recursors.
> > IV. Ordinary DDOS Mitigation Appropriate for Reflector Case
>
> > Because the mitigation options are the same as with any other spoofed
> > DDOS attack, this attack does not merit special attention.
>
> A large part of defending against various attacks is going after so-called
> "low-hanging fruit"; that is, taking actions which obtain a large result at
> a small cost. This document suggests such an action, which prevents the
> use of caching nameservers in attacks of the type that has been observed.
> This is a proactive step; as far as I know, all "ordinary DDOS mitigation"
> methods are reactive in nature.
This isn't low-hanging fruit; no one is seriously abusing this, even
after 20+ years. Interestingly, this proposal comes from the Anycast
crowd. BTW, a number of the 'attacks' they have previously reported
(excessive TCP SYNs, TCP packets unmatched to SYNs, incomplete UDP
fragments) are symptomatic of anycast instabilities, and aren't attacks,
either.
> It's worth noting that while many distributed attacks can be mitigated by
> applying broad filters at a point close to the victim, that is particularly
> difficult in this case. If you apply a filter that blocks "all incoming
> DNS responses", then you have denied DNS service to the victim, doing the
> attacker's job for him.
This mitigation is no different for authority servers. Certainly, more
sophisticated methods are required to scrub packets, such as matching
queries sent to responses. I think UUnet, for example, does this kind
of thing in DDoS attacks and has some sophisticated tools to scrub
traffic during an attack. But this is just what I meant that 'ordinary
DDoS mitigation' is appropriate to these attacks.
> > VI. Using "Evil" in a RFC Title is Unprofessional
>
> Uh, the title of the document is "Preventing Use of Recursive Nameservers
> in Reflector Attacks"; I don't see the word "Evil" anywhere there. It does
> appear in the draft filename, but that is just the filename of a working
> document of the IETF, and by convention can be whatever its author wants.
Ok.
> > VII. Reflectors are Useful for Scientific Measurement
>
> True, and also for various forms of debugging.
>
>
> > Because the NSID draft (RFC 5001) makes it impossible to independently
> > identify Anycast Root DNS server instances (because the returned nonce
> > is encrypted and only decryptable by the root operators) and therefore
> > impossible to measure the reliability of Anycast Root DNS services; the
> > open recursor method is the only economical method to measure anycast
> > services.
>
> Actually, RFC5001 leaves the content of the NSID option entirely up to the
> server operator; it does not mandate a form which is useless to anyone but
> the server operator.
Ah, yes. Amazingly trusting. Suppose SEC rules made it 'optional' for
the Public Company to encrypt its SEC filings so that only the Company
could decrypt them. Do you suppose the Ebbers and Skillings would allow
you to see that data?
Likewise RFC5001 __enables__ operators to hide their dubious operations
in ways that can't easily be independently tested. It is of no
consequence that they could 'choose to be honest'. The IETF role is to
ensure measurable proper conduct of, particularly, root and TLD DNS
operators.
The only reason to hide this information is to obscure data on the
unreliability of DNS Anycast. Further, hiding DNS information is
contrary to established architectural policy written to clarify DNSSEC
goals.
This ability to hide Anycast operation data advances no legitimate
public interest. It was put in at the last minute by root and TLD
anycast operators, despite objections.
> Further, I dispute your claim that your inability to distinguish
> servers makes it impossible to measure the reliability of the service.
> Speaking as one who operates many redundant services, the reliability
> of a service is about the behavior its clients see, not whether
> individual servers providing it are functioning correctly.
This is a specific kind of reliability measure. It is not the only kind.
> For example, if an anycast nameserver address is backed by three
> servers and one of them is down, but the routing is such that only 10%
> of requests go to the down server, then on average, clients will see
> 10% of their requests dropped, not one third.
This is overly simplistic. It is not true for stateful DNS packets,
because packets can always be routed to multiple anycast instances.
Earlier, I had thought that this could only happen with PPLB, though in
theory (per RFC 1812) it could happen at any time. I've since tested
this experimentally; the theory was right. Even routers that route using
flow caches expire those cache entries every 60 seconds. Send two
packets more than 60 seconds apart, and they can go to different
servers. I've detected anycast open recursors this way.
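A toy simulation of the flow-cache behavior I'm describing (an assumed model,
not a measurement; the 60-second TTL and instance names are illustrative): a
router caches the instance chosen for a flow, but once the entry expires, the
next packet re-resolves and can land on a different instance of the same
anycast address.

```python
import random

CACHE_TTL = 60  # seconds, assumed flow-cache expiry

class FlowCacheRouter:
    """Routes a flow to a cached anycast instance until the entry expires."""
    def __init__(self, instances, seed=1):
        self.instances = instances
        self.rng = random.Random(seed)
        self.cache = {}  # flow -> (instance, expiry_time)

    def route(self, flow, now):
        entry = self.cache.get(flow)
        if entry and now < entry[1]:
            return entry[0]                       # cache hit: same instance
        instance = self.rng.choice(self.instances)  # re-resolve on miss/expiry
        self.cache[flow] = (instance, now + CACHE_TTL)
        return instance

router = FlowCacheRouter(["A", "B", "C"])
flow = ("client", "anycast-addr", 53)
first  = router.route(flow, now=0)    # initial choice, cached
second = router.route(flow, now=30)   # within the TTL: guaranteed same
third  = router.route(flow, now=120)  # after expiry: may be any instance
assert first == second
```

Two packets inside the TTL always hit the same instance; two packets more than
60 seconds apart are routed independently, which is exactly what makes anycast
detectable (and TCP fragile) over longer exchanges.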
But to tell how bad the anycast problem is on an authority server (such
as the root or TLD servers), one needs to identify uniquely (with NSID),
the instance each query goes to, and measure how often one gets a
different server.
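For reference, such an NSID probe is just an ordinary query carrying an EDNS0
OPT RR whose RDATA holds option-code 3 (NSID, per RFC 5001) with empty option
data; the responding instance fills in its identifier in the reply. A minimal
sketch of the wire format, hand-rolled with struct so it is self-contained (a
real tool would use a DNS library, and the transaction ID and payload size
below are arbitrary):

```python
import struct

NSID_OPTION_CODE = 3  # RFC 5001

def build_nsid_query(qname, txid=0x1234, payload_size=4096):
    """Wire-format DNS A query with an empty NSID request in an OPT RR."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 1)  # RD set, ARCOUNT=1
    question = b"".join(
        bytes([len(label)]) + label.encode()
        for label in qname.rstrip(".").split(".")
    ) + b"\x00" + struct.pack("!HH", 1, 1)                     # QTYPE=A, QCLASS=IN
    nsid_rdata = struct.pack("!HH", NSID_OPTION_CODE, 0)       # empty NSID option
    opt_rr = (b"\x00"                                          # root owner name
              + struct.pack("!HHIH", 41, payload_size, 0, len(nsid_rdata))
              + nsid_rdata)
    return header + question + opt_rr

pkt = build_nsid_query("example.com")
assert pkt[-4:] == b"\x00\x03\x00\x00"  # trailing NSID option-code + zero length
```

The catch, as argued above, is that nothing obliges the operator to return an
identifier a third party can interpret.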
Alternately, one can indirectly use TCP, or indirectly use reflectors to
make measurements. These measurements can't be as accurate as NSID
measurements would be.
> Now, the ability to distinguish between servers is useful if you are trying
> to determine the cause of the failures, but is completely unnecessary for
> determining that the service has 90% reliability.
Incorrect, as shown above. Besides, we expect high availability from
root servers. Anycast appears to give no better than about 97%
over TCP under ideal conditions (from a paper presented at NANOG by an
Anycast HTTP advocate). It might not even be that good. A figure of 90%
would be abysmally bad; 3% packet loss is usually unserviceable.
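To illustrate why even small instance-switch rates are crippling for TCP over
anycast (assumed, illustrative numbers: the per-packet switch probability and
packet count are mine, not from the NANOG paper): a switch mid-connection
breaks the session, so connection success decays with every packet exchanged.

```python
def tcp_success_rate(p_switch, n_packets):
    """Probability a TCP exchange completes with no anycast instance switch,
    assuming independent per-packet routing decisions (toy model)."""
    return (1.0 - p_switch) ** n_packets

# Even a 0.5% chance per packet of landing on a different instance
# breaks a 10-packet TCP exchange roughly 5% of the time:
failure = 1 - tcp_success_rate(0.005, 10)
print(round(failure, 3))  # ~0.049
```

Under this toy model, a per-packet switch rate that looks negligible compounds
into the ~97%-or-worse connection success rates cited above.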
--Dean
--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000
_______________________________________________
DNSOP mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/dnsop