On 10/31/17, 05:39, "DNSOP on behalf of Moritz Muller" <dnsop-boun...@ietf.org on behalf of moritz.mul...@sidn.nl> wrote:
> Now, for example due to a key rollover at the TLD, the manually configured
> trust anchor of the TLD does not match the DS in the root anymore.
>
> How should a resolver treat the signatures of this TLD?

Short answer: the resolver ought to validate the affected data sets positively (other things being equal).

My opinion/recollection (meaning no citations*): this question has been bounced around before. The general case is "What if it is possible to build a chain of trust from a (any) trust anchor for high.example. but not from any trust anchor for lower.high.example?" Or vice versa?

* = Below I do cite a document, but for now...

Start from the premise that it is very hard to publish the correct signatures in a chain of trust (hard meaning you have to hold the private keys), so if any valid chain can be built, trust the data. If one allows any broken chain (a failed signature) to cause a validation failure, that enables an attack where someone supplies bad data just to cause a denial of service. (Not a flooding attack; more like a process kill.)

Of course, a strategy of "trust any chain to any trust anchor" places importance on maintaining only truly trusted trust anchor points.

Given that operations might result in a mismatch between established trust anchors and secure entry points, code ought not be so restrictive as to require or expect the two to be in sync. (Much like NS sets above and below a cut point: yes, they should be the same, but if they differ, "life" can go on.)

Software implementations can choose to be aggressive or passive. Aggressive means trying all possible means of validating the data within the rules of security; passive might mean "try one and give up if it fails." For example, under a Kaminsky-style attack, a resolver that only considers the first response (which would fail to validate, causing SERVFAIL) may never see the later-arriving true response (which would validate). That give-up-early behavior is what was implemented in at least the early code bases (if not still). So I'd prefer code that seeks any chain before giving up; a sketch of that strategy appears at the end of this message.

DNSSEC makes the DNS more brittle (securing any existing system does this). For that reason, the philosophy behind DNSSEC is to be as lenient as possible within the boundaries of maintaining a secured environment. (I've always wanted an RCODE of "AWHECKUSEITANYWAY" for DNSSEC validation errors that get waved through because security is inconvenient, like when a user keeps one DNSSEC-validating server in their stub's list alongside a non-validating server for when those SERVFAILs come back.)

OTOH, for a referenced answer: "Protocol Modifications for the DNS Security Extensions" (RFC 4035), Section 4.3, "Determining Security Status of Data", gives these definitions:

# Secure: An RRset for which the resolver is able to build a chain of
#    signed DNSKEY and DS RRs from a trusted security anchor to the
#    RRset.  In this case, the RRset should be signed and is subject to
#    signature validation, as described above.

Note "build a chain", not "build all possible chains".

# Bogus: An RRset for which the resolver believes that it ought to be
#    able to establish a chain of trust but for which it is unable to
#    do so, either due to signatures that for some reason fail to
#    validate or due to missing data that the relevant DNSSEC RRs
#    indicate should be present.  This case may indicate an attack but
#    may also indicate a configuration error or some form of data
#    corruption.

Note again "a chain of trust", with no suggestion that any single failed chain makes the data Bogus.
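To make the "seek any chain before giving up" preference concrete, here is a minimal sketch (my own illustration, not taken from any implementation; every name in it is hypothetical). build_chain() stands in for a real validator's chain-of-trust construction; the only point being made is that Secure needs just one good chain, while Bogus should mean every candidate chain failed.

from enum import Enum

class SecurityStatus(Enum):
    """Subset of the RFC 4035, Section 4.3 security statuses."""
    SECURE = "Secure"  # a chain of trust was built
    BOGUS = "Bogus"    # a chain was expected, but none could be built

class ChainError(Exception):
    """A signature failed to validate, or DNSSEC RRs that should be
    present were missing."""

def classify(rrset, trust_anchors, build_chain):
    """Aggressive strategy: the RRset is Secure if *any* configured
    trust anchor yields a valid chain of signed DNSKEY/DS RRs down to
    the RRset; it is Bogus only once every candidate chain has failed.

    build_chain(rrset, anchor) is a hypothetical stand-in for the real
    chain construction: it returns True on success and raises
    ChainError on a broken chain.  A passive resolver would instead
    try a single chain and SERVFAIL on the first failure.
    """
    for anchor in trust_anchors:
        try:
            if build_chain(rrset, anchor):
                return SecurityStatus.SECURE  # one good chain suffices
        except ChainError:
            continue  # a single broken chain must not doom the data
    return SecurityStatus.BOGUS

In the scenario that opened this thread, the chain from the stale, manually configured TLD anchor breaks, but the chain through the root's DS still validates, so the loop above returns Secure, which is the short answer given at the top.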