On Wed, Jul 05, 2017 at 05:30:49PM +0100, Jim Reid wrote:

> 1) There probably needs to be clearer guidance about the use cases for
> this extension and the trade-offs between TLS clients doing DNSSEC
> validation for themselves instead of sending DNS lookups to a validating
> resolver server. How does an application developer decide which approach
> would or wouldn't be appropriate?
On today's Internet, DNSSEC is not generally available to end-user devices;
there are too many "last mile" problems.  So, while direct acquisition of
DANE TLSA records works for (e.g.) dedicated SMTP servers, any end-user
application that wants to do DANE TLS needs to use the proposed extension.

Perhaps you're asking whether, once the relevant records are obtained, their
validation should happen via library calls to a suitable API or via a
suitable protocol to the local resolver?  The latter would just be another
"suitable API", so I think that question falls outside the IETF's proper
subject matter.  Whatever the validation method is, it must avoid relying on
untrusted oracles and must not leak the records in the clear.

> 2) I'm not sure there is much of an "associated latency penalty" from DNS
> lookups. Something's going to experience this one way or another. Either
> the TLS client takes that hit or the TLS server does it for them before
> it sends back the requested DNS data.

Except that the records will be warm in the server's cache, since many
clients will be asking it for the *same* data.  The same is much less likely
to be true at the client.  So farming the lookups out to the server does
indeed tend to reduce latency.

> 3) Something should be said about algorithm agility. We can be reasonably
> sure web browsers, DNS servers, smart phones and so on will generally have
> up-to-date DNSSEC validators and/or TLS code. Some TLS clients -- fire
> and forget embedded systems, IoT devices, etc -- might never get updated
> once they're deployed. If these clients use their own DNSSEC validators,
> they will be screwed when/if DNSSEC drops SHA1 signatures (say) or adds
> a new flavour of ECC or even an all-new signing protocol.

SHA-2 is already defined and widely used for DS records.  The Ed25519 and
Ed448 signature algorithms are already defined (or will be by the time this
draft becomes an RFC).  So there's not much churn on the immediate horizon.
Devices doing all the validation locally will need software updates roughly
once a decade (DNSSEC algorithm parameters change slowly).  A secure channel
to a *trusted* resolver would avoid the problem, but trustworthy off-device
resolvers are very unlikely to become ubiquitous.

> 4) It's not clear if TLS clients can or should cache the DNS data (and
> the resulting validations?) returned through this extension.

The server will return TTLs, and caching per those TTLs is no less
appropriate here than it is in DNS generally.

> Suppose a jabber client validates foo.com, does it have to start at the
> root and work all the way down to validate bar.com? Could it start that
> validation at the previously validated and now cached trust anchor for
> .com? Can/should negative answers -- NOHOST/NXDOMAIN responses -- be
> cached?

Negative answers will not generally appear in this protocol, since the
server is just returning its TLSA records and the associated signatures.
The only "negative" answers are NSEC/NSEC3 records that validate wildcard
responses.  There's little need to cache these; it suffices to cache the
TLSA records associated with the server and forget the supporting
validation RRs.  (A rough sketch of such a cache is in the P.S. below.)

> 5) How does a TLS client behave when its DNSSEC validation of a TLSA record
> or whatever fails? Can/should it give up or fall back to conventional
> validation of the certificate via a CA?

This is application/configuration dependent.

--
	Viktor.
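
P.S. On the caching point, here's a minimal sketch (in Python, with
hypothetical names; nothing here comes from the draft) of a client-side
cache that keeps validated TLSA RRsets for their TTLs and discards the
supporting DNSSEC chain once validation has succeeded:

    # Hypothetical sketch: a TTL-bounded cache of *validated* TLSA RRsets,
    # keyed by the TLSA owner name (e.g. "_443._tcp.www.example.com").
    # Only the TLSA payloads and an expiry time derived from the RRset TTL
    # are kept; the DNSSEC chain that proved them is thrown away.
    import time

    class TLSACache:
        def __init__(self):
            self._cache = {}  # owner name -> (expires_at, tuple of rdata)

        def put(self, owner, tlsa_rdatas, ttl):
            # Call this only after the full chain has validated elsewhere.
            self._cache[owner] = (time.monotonic() + ttl, tuple(tlsa_rdatas))

        def get(self, owner):
            entry = self._cache.get(owner)
            if entry is None:
                return None
            expires_at, rdatas = entry
            if time.monotonic() >= expires_at:
                # Expired: fetch a fresh chain via the extension.
                del self._cache[owner]
                return None
            return rdatas

A client would call put() only after a successful chain validation, and a
get() miss (or expiry) simply means fetching a fresh chain through the
extension on the next connection.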