In message <63d21888-478d-42f8-8576-e89131cfe...@nominum.com>, Ted Lemon writes:
> On Aug 4, 2015, at 5:39 PM, Donald Eastlake <d3e...@gmail.com> wrote:
> > From that it does not follow that it
> > "wouldn't make sense" to use COOKIEs in connection with TSIG. The
> > non-cryptographic calculations you do for COOKIE verification are
> > going to be at least two orders of magnitude cheaper than the
> > cryptographic calculations you do for TSIG verification.
>
> This is not true, because TSIG is going to be using the exact same
> hashing algorithm you are going to be using for cookies.   Kind of a side
> issue, though.

And it is not true: BIND uses AES for cookie computation if it is available.
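To make the cost argument concrete, here is a toy sketch (not BIND's actual code) contrasting the kind of cheap non-cryptographic hash a server cookie can use (FNV-64 is one option the cookie draft mentions) with the full-message HMAC that TSIG verification requires. The key and cookie layouts here are illustrative assumptions, not the wire format.

```python
import hmac
import hashlib

def fnv64(data: bytes) -> int:
    """FNV-1a 64-bit hash: a few integer ops per byte, no crypto."""
    h = 0xcbf29ce484222325
    for b in data:
        h ^= b
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def server_cookie(secret: bytes, client_ip: str, client_cookie: bytes) -> bytes:
    # Cheap keyed check over a handful of bytes per request.
    return fnv64(secret + client_ip.encode() + client_cookie).to_bytes(8, "big")

def tsig_mac(key: bytes, message: bytes) -> bytes:
    # TSIG verifies an HMAC computed over the entire DNS message.
    return hmac.new(key, message, hashlib.sha256).digest()
```

The asymmetry Donald points to is that the cookie check touches a few dozen bytes with trivial arithmetic, while TSIG runs a cryptographic MAC over the whole message.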

> > I would say it is more like it is easy to construct situations in
> > which it of great benefit and easy to construct situations in which it
> > is of no benefit. But it is not certain as to where on this spectrum
> > things will typically fall and how that will change with time.
>
> Actually we can make a lot of predictions.   First, if there is no
> incentive to implement it, vendors won’t implement it, so we can safely
> assume that adoption on stub resolvers will be very slow.

Well there are multiple vendors who have stated they intend to implement.

>  It’s possible
> that this could go a different way, but if history is our guide it won’t.
>   There is a huge incentive for DDoS attackers to come up with ways to
> bypass and leverage it, though.   Let’s think through some cases (I’m
> going to channel the following analysis from a private exchange I had
> with someone at Nominum who has a lot more DNS fu than I do, so if it
> seems brilliant, he gets the credit, and if it seems stupid, it’s
> probably because I am a lousy medium):
>
> Client good, no cookie: client treated normally, rate limited for
> excessive traffic (client may be broken but not malicious, still needs
> rate limit).
>
> Client evil, no cookie: client treated normally, rate limited for
> excessive traffic because malicious.
>
> Client good, has cookie: client treated normally, rate limited for
> excessive traffic because might be broken.   You can treat clients that
> have cookie preferentially, but you still have to rate limit them since
> they may be broken.
>
> Client evil, has cookie: client treated normally, rate limited for
> excessive traffic because evil.   If you treat good clients
> preferentially, you’re also treating evil clients preferentially.

The analysis above is lacking.

"has cookie" is not the determining factor.  "has good server cookie"
is the determining factor. 

Now without cookies you can't distinguish case 1 from case 2.  With
cookies you can distinguish case 3 from case 4.  Additionally, how
you treat cases 1 and 2 is different from how you treat cases 3 and 4.

> You may be tempted to say “but the client is a DDoS client, so it won’t
> have a cookie.”   This is false, because the client may be an open
> resolver that implements cookies, and indeed open resolvers that
> implement cookies will now be specially favored as attack vectors.   And
> of course botnet attackers have legit IP addresses and use them, so again
> they can and definitely will implement cookies.   Worse, they can now
> make you do more work in order to rate limit them, because they can make
> up new cookies.   And if you rate limit by IP address as well as cookie,
> then you are doing the same amount of work as if you just rate limited,
> so cookies provide no benefit.

Not so.  With cookies you bin traffic three ways: by IP address
with no cookie, by IP address with a bad or missing server cookie,
and by IP address (or by client cookie) with a good server cookie.

The attack traffic falls into bin 1 or bin 2.  Legitimate repeat
cookie-client traffic falls into bin 3.  Most legitimate clients
are not broken, so the chance of them being rate limited is low.
Clients that don't implement cookies take all the negative
consequences of being binned with the attack traffic.
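The binning above can be sketched as follows. The bin names and the per-bin limits are made-up illustrations of the policy being described, not values from any implementation:

```python
from enum import Enum

class Bin(Enum):
    NO_COOKIE = 1          # legacy clients and most attack traffic
    BAD_SERVER_COOKIE = 2  # cookie present but not one we issued
    GOOD_SERVER_COOKIE = 3 # repeat client holding a valid server cookie

# Illustrative per-bin rate limits (queries/sec); the numbers are invented.
LIMITS = {
    Bin.NO_COOKIE: 10,
    Bin.BAD_SERVER_COOKIE: 10,
    Bin.GOOD_SERVER_COOKIE: 1000,
}

def classify(client_cookie, server_cookie, expected_server_cookie) -> Bin:
    if client_cookie is None:
        return Bin.NO_COOKIE
    if server_cookie != expected_server_cookie:
        return Bin.BAD_SERVER_COOKIE
    return Bin.GOOD_SERVER_COOKIE
```

The point is that only bin 3 gets preferential treatment, and an attacker cannot reach bin 3 without first round-tripping a packet to its claimed source address.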

> All of this goes for other potential sources of attack, like states and
> weak jurisdictions.
>
> The recursive->auth path is easy to attack with two-way communication, so
> cookies don’t really help you here unless you also do per-source rate
> limiting, at which point cookies also don’t help you because they add no
> value if you are doing per-source rate limiting.
>
> (Back to me…)   This doesn’t even get into the attacks you can do against
> a server that has the stateless implementation proposed in the draft.
> In an implementation that does no per-source rate limiting, a botnet or
> open-resolver network that can do cookies can now shut out the vast
> majority of legitimate clients that do not do cookies.

Hogwash.  Where does anyone say not to service non-cookie traffic?
You are making this up in your head.

>   So cookies now
> massively assist the attack, rather than slightly penalizing it.   And
> this is a very real concern, because such networks exist now, and can
> readily be turned to this new and more effective attack as soon as
> cookies are widely deployed on caching resolvers and/or auth servers.
> We are much more likely to see client-side cookie implementations on
> botnet attackers in particular because there is a huge incentive for
> implementing them.
>
> And of course as I mentioned previously, an on-path attacker can
> _prevent_ resolvers that do not implement cookies from returning results
> to clients that do, and can prevent clients that implement cookies from
> talking to resolvers that implement cookies.  The only clients that are
> safe from this attack are clients that do not implement cookies, which is
> another reason why deployment of cookies on clients might be slower than
> you would like to anticipate.



> > Since there is no change in behavior for clients that don't support
> > COOKIEs, there is no increase in packets for them.
>
> Yup, sorry—I figured that out after I’d sent that response.
>
> > I don't see any increase in packets for clients that support COOKIEs.
> > If a server does not think it is under attack and is getting few
> > enough requests from a client (and it seems to me that the first
> > request if it has no previous history of requests would always
> > qualify), it just processes the request normally and includes a server
> > COOKIE with the response.
>
> We really don’t care about the non-DDoS attack case except to the extent
> that cookies could be pre-seeded to cookie-aware clients, both good and
> evil, so that they immediately get preferential treatment during DDoS
> attacks.
>
> > If, on the other hand, a server was going to
> > discard a request from a client then, in the case where the client
> > supports COOKIEs but did not include a correct server COOKIE, the
> > server can return a short BADCOOKIE response, which means the client
> > will be authenticated on its next try so that that client won't send a
> > series of retries that are discarded saving on traffic.
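The BADCOOKIE retry Donald describes can be sketched as a toy exchange. The function names and the string rcodes are illustrative assumptions; the real exchange is carried in EDNS options:

```python
import hmac
import hashlib

def make_server_cookie(secret: bytes, client_ip: str, client_cookie: bytes) -> bytes:
    # Any keyed hash will do for the sketch; truncate to 8 bytes.
    return hmac.new(secret, client_ip.encode() + client_cookie,
                    hashlib.sha256).digest()[:8]

def handle_query(secret, client_ip, client_cookie, server_cookie):
    expected = make_server_cookie(secret, client_ip, client_cookie)
    if server_cookie != expected:
        # One short BADCOOKIE reply carrying the correct server cookie,
        # instead of silently discarding a series of retries.
        return ("BADCOOKIE", expected)
    return ("NOERROR", expected)

def client_query(secret, client_ip, client_cookie, cached_server_cookie):
    rcode, sc = handle_query(secret, client_ip, client_cookie, cached_server_cookie)
    if rcode == "BADCOOKIE":
        # Retry once with the server cookie just learned.
        rcode, sc = handle_query(secret, client_ip, client_cookie, sc)
    return rcode
```

After one short round trip the client is authenticated (to the extent cookies authenticate anything) and its subsequent queries land in the good-cookie bin.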
>
> This is true, but as explained above does not actually help.  I would
> suggest that it’s actually more likely that _all_ of the clients that
> implement cookies will be botnet clients, than it is that there will be a
> mix of botnet clients that implement cookies and legitimate clients that
> implement cookies, because botnets have an incentive to implement
> cookies, and legitimate clients do not.   It is an absolutely _huge_ win
> for a botnet attacker to implement cookies if most clients do not.

Actually there are good reasons for clients to implement cookies.
They reduce the amount of port randomisation that needs to happen
in a recursive server (if the server supports cookies you don't
need to randomise the source port), and client cookies keep working
even when a NAT undoes the port randomisation.

Even when the client does DNSSEC validation this is useful, as
referrals are not signed.
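The NAT point can be made concrete with a sketch of client cookie derivation (the secret handling here is an illustrative assumption): because the cookie depends only on a client secret and the server address, the NAT can rewrite the source port without invalidating it.

```python
import hmac
import hashlib

def client_cookie(client_secret: bytes, server_ip: str) -> bytes:
    # No source port in the derivation: a NAT rewriting the port
    # does not change the cookie, so the off-path spoofing
    # protection survives where port randomisation is defeated.
    return hmac.new(client_secret, server_ip.encode(),
                    hashlib.sha256).digest()[:8]
```

The same client presents the same cookie to a given server across queries, whatever port the NAT picks, which is exactly what lets the server keep it in the good bin.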

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: ma...@isc.org

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
