On Wed, May 20, 2020 at 6:40 PM Russ Housley <hous...@vigilsec.com> wrote:
> MINOR
>
> Section 1 also says:
>
>    Because the above problems do not relate to the CA's inherent
>    function of validating possession of names, ....
>
> The CA is responsible for confirming that the public key in the
> certificate corresponds to a private key that can be used by the
> certificate subject. This is usually done by a proof of possession
> mechanism. So, I think that the start of this sentence should be
> reworded to avoid the impression that the CA only validates the
> name.

The existing framing is correct. The most widely used Internet PKI, the Web PKI, intentionally does not require a proof of possession mechanism. It is not used as an authentication mechanism (i.e. a binding of a key to an identity) but as an authorization mechanism (i.e. a binding of an authorized set of identities to a key). The “CA only validates the name” is not just an impression; it’s the widespread running code. Because of how TLS certificates are used (online protocol negotiation, without non-repudiation support), omitting a strong proof of possession binding opens no risk, nor is one necessary, even in cases of strong identity binding.

> QUESTION
>
> While I have no objection to the DelegationUsage extension,
> I wonder if an extended key usage would provide the same
> confidence in the certificate.

In practice, no. As things currently stand, it would unfortunately undermine that confidence, although this is an entirely fair and reasonable question.

As a recap (more for those without the same context you have): the way RFC 5280 and its predecessors were designed, the Certificate Policies extension is the primary means of expressing or indicating compliance with a particular policy. If a relying party explicitly attempts to validate a certificate for an RP-determined Certificate Policy, then it can know whether or not that certificate complied with its expectations for issuance and management. This is all defined within 5280, albeit quite complex, and involves processing rules for both leaves and intermediates, as well as the ability to map between different policies (via policyMappings), such that an RP expecting policy A can validate a certificate bearing only policy B, provided some trusted party in the certification path asserted that B is equal or equivalent to A.

Additionally, certificates have the extendedKeyUsage extension, which is most commonly used to restrict the protocol or protocols a given certificate can be used for. In RFC 5280 and friends, this restriction only applies to leaf certificates. However, from the very earliest days of PKIX, the two main implementations (Microsoft and Netscape) diverged from this, in unspecified ways that PKIX ultimately declined to incorporate, to allow EKUs on intermediates to restrict the protocols that certificates can be issued for: if a leaf’s EKU is not a subset of its issuing chain’s, then that EKU is not permitted, much like policy OIDs. This divergence, which has existed since the very earliest days of TLS, resulted in a different approach to managing policy than the idealized goal of PKIX. Rather than having every relying party application provide an initial policy, such as a policy OID assigned by the root store vendor (typically the OS/browser vendor), to indicate compliance with the root store’s policy, and using policy mappings for that, implementations simply used the EKU as a joint indicator of “uses protocol X and complies with the issuance policies for protocol X, as defined by the root store vendor”.
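To make that subset rule concrete, here is a rough sketch in Go of the de facto EKU chaining check described above, using the standard crypto/x509 types. This is not any particular implementation’s code (the function name and structure are mine, and unknown/private EKU OIDs are glossed over); it just illustrates the behavior: a CA without an EKU extension, or with anyExtendedKeyUsage, places no restriction, and otherwise the leaf’s EKUs must be a subset of each CA’s.

    package ekuchain

    import "crypto/x509"

    // ekuPermitted reports whether every EKU asserted by the leaf certificate
    // (chain[0]) is also permitted by each CA certificate above it, per the
    // de facto "EKU chaining" behavior described above.
    func ekuPermitted(chain []*x509.Certificate) bool {
        if len(chain) == 0 {
            return false
        }
        leaf := chain[0]
        for _, ca := range chain[1:] {
            // A CA that asserts no EKU extension places no restriction.
            if len(ca.ExtKeyUsage) == 0 && len(ca.UnknownExtKeyUsage) == 0 {
                continue
            }
            permitted := make(map[x509.ExtKeyUsage]bool)
            for _, eku := range ca.ExtKeyUsage {
                permitted[eku] = true
            }
            // anyExtendedKeyUsage on the CA likewise places no restriction.
            if permitted[x509.ExtKeyUsageAny] {
                continue
            }
            for _, eku := range leaf.ExtKeyUsage {
                if !permitted[eku] {
                    // The chain is not permitted to issue for this protocol.
                    return false
                }
            }
        }
        return true
    }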
This whole long preamble builds to our present day. In wide practice, and even for those root stores/OSes/applications that do not implement EKU chaining in the fashion mentioned above, the mere presence of an EKU is the indicator of compliance with a set of policies, and altering EKUs, like altering policy OIDs, requires issuing new intermediates permitted for that EKU.

If DC certificates only bore a DC EKU, they would, in effect, be exempt from all of the certificate policies widely practiced for the issuance of TLS certificates, which would reduce the confidence in the certificate. They would also typically require the CA to issue new intermediate certificates before such certificates could be issued and accepted. On the application side, certificate validation libraries apply local policy to certificates based on their EKU via technical measures, to ensure they match the expected issuance policy, such as checking for Certificate Transparency information or limiting the maximum validity period the application will accept.

Recall that what DC provides is also possible via simply using nameConstrained subCAs, but part of the reason for DC is that it removes, or at least reduces, dependencies on external PKIs in order to satisfy local protocol needs. The same argument applies here against an EKU: using one would place ongoing dependencies back on those external PKIs, be incompatible with how those PKIs are typically administered, and cause significant challenges for applications built on common libraries (such as those provided by their OS).

It’s far from perfect, but this approach is the current state of the art. If something is meant to be “like” TLS, at minimum, it needs to contain the TLS EKU, so that it is subject to the TLS policies and usable with TLS validation stacks. Extensions, with or without the critical bit set, serve as the way to distinguish whether and how it should be usable with TLS and with the “something like TLS” protocol. An EKU certainly could be pursued, and it’s by no means an unreasonable question, but in order to provide the same confidence as currently specified, existing root store policies would need to be rewritten and redefined to bring that EKU within the scope of their “subject to TLS” requirements, and their software and validation libraries would similarly need to be updated to process it.
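For contrast, here is a similarly rough Go sketch of the client-side gate a validation stack might apply under the current design: before accepting a delegated credential, check that the end-entity certificate (which has already passed ordinary TLS chain validation and carries id-kp-serverAuth, both elided here) also asserts the DelegationUsage extension. The function and structure are mine, and the OID constant is my reading of draft-ietf-tls-subcerts, so please verify it against the current draft text.

    package dcheck

    import (
        "crypto/x509"
        "encoding/asn1"
    )

    // delegationUsageOID is the DelegationUsage extension OID as I read it
    // out of draft-ietf-tls-subcerts; verify against the current draft.
    var delegationUsageOID = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 44363, 44}

    // allowsDelegation reports whether the end-entity certificate asserts the
    // DelegationUsage extension. A client would require this, in addition to
    // the usual id-kp-serverAuth EKU and full chain validation, before
    // accepting a delegated credential signed by the certificate's key.
    func allowsDelegation(cert *x509.Certificate) bool {
        for _, ext := range cert.Extensions {
            if ext.Id.Equal(delegationUsageOID) {
                return true
            }
        }
        return false
    }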
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls