On Tue, May 23, 2017 at 10:50 AM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

> The fix is to amend DNSpriv to require stateless (random rather
> than say round-robin) RRset rotation.  With random rotation, the
> next RRset order is independent of previous queries.
>

That's a good fix for that specific local problem. But next, consider a
different one: what if a DNS provider has q-tuple rate limiting as a DoS
defense? That's not an unusual measure for large providers; even BIND9
supports it. With stateless 0-RTT, I can replay the client's query over
and over until the rate limit trips. Now I have a DoS attack *and* a
privacy-defeating attack, because the rate limit exposes what the query
was for.
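
To make that concrete, here's a toy simulation in Python. The rate
limiter, the q-tuple plumbing, and all the names are made up; it only
illustrates the attack logic, not a real resolver API:

    from collections import defaultdict

    LIMIT = 100  # hypothetical per-q-tuple responses-per-window limit
    counts = defaultdict(int)

    def allow(qtuple):
        counts[qtuple] += 1
        return counts[qtuple] <= LIMIT

    # To the attacker the captured 0-RTT flight is opaque ciphertext; we
    # carry the plaintext q-tuple along so the toy "server" can stand in
    # for decryption.
    captured = {"ciphertext": b"...", "qtuple": ("victim.example", "A")}

    def server_handle(flight):
        # A stateless server can't tell a replay from a fresh query, so
        # every replay counts against the victim's q-tuple bucket.
        return "ANSWER" if allow(flight["qtuple"]) else "REFUSED"

    for _ in range(LIMIT + 1):
        status = server_handle(captured)
    print(status)  # REFUSED: the limit tripped on replays alone (DoS)

    # Privacy leak: probe candidate names with fresh queries; the one
    # that comes back REFUSED shares the victim's exhausted bucket.
    for name in ("victim.example", "innocent.example"):
        print(name, server_handle({"qtuple": (name, "A")}))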


> Secondly, even with the 0-RTT leak, while privacy against an active
> attacker might not be assured for all users, there is in fact privacy
> for most users, especially against a purely passive adversary.
>

My reference here isn't really meant as a criticism of DNSpriv - we should
make DNS private and secure, that's awesome, and this is a small attack in
the overall context of DNS. It's meant the way Christian said it: it is
really, really hard to make an application protocol idempotent and
side-effect free, and very smart people are often wrong when they assume
that theirs is. Because of that difficulty, I see at-most-once 0-RTT
mitigation as essential to avoiding a lot of real-world security issues
here.
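
By at-most-once I mean something shaped like this sketch - a toy
single-server strike register (in reality it has to be shared across
servers and expired with the ticket lifetime):

    accepted = set()  # ticket IDs we've already taken early data for

    def accept_early_data(ticket_id):
        if ticket_id in accepted:
            return False   # seen before: reject 0-RTT, force 1-RTT
        accepted.add(ticket_id)
        return True        # first sighting: safe to process

    assert accept_early_data(b"ticket-1")      # original flight accepted
    assert not accept_early_data(b"ticket-1")  # any replay is refused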


> To the extent that DNSpriv over TLS happens at all, 0-RTT
> will be used for DNS, and will be used statelessly (allowing
> replays).


That's not good for users, and seems like another very strong reason to
make it clear in the TLS draft that it is not secure. FWIW, DNSCurve
includes nonces to avoid attacks like this:
https://dnscurve.org/replays.html (which means keeping state).
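
In the same spirit - this is not the actual DNSCurve wire format, just
the shape of the idea - nonce-based replay detection looks like:

    import os

    seen = set()  # receiver state; expired in practice, hence "keeping state"

    def make_query(qname):
        return (os.urandom(12), qname)  # fresh random nonce per query

    def receive(query):
        nonce, _qname = query
        if nonce in seen:
            return False   # replay: drop, with no side effects
        seen.add(nonce)
        return True

    q = make_query("example.com")
    assert receive(q)      # first copy is processed
    assert not receive(q)  # the replayed copy is rejected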

Stateless mechanisms simply aren't secure. We wish they were, because
statelessness is so attractive operationally - just as it would be nice if
my MD5 accelerators were still useful. But they don't hold up. We've seen
this before with DTLS, where replay tolerance opened the window to several
cryptographic attacks. It's an all-round bad idea.

I've seen a number of arguments here that essentially boil down to "We'd
like to keep it anyway, because it is so operationally convenient". Is that
really how this process works? Don't demonstrable real-world attacks
deserve deference?

-- 
Colm