On 9/25/2012 6:12 PM, Erwann Abalea wrote:
Hello,

On 25/09/2012 14:16, Jakob Bohm wrote:
> On 9/25/2012 11:11 AM, Erwann Abalea wrote:
>> On 24/09/2012 21:03, Jakob Bohm wrote:
>> > Does that work with any other serious X.509 validation toolkit?
>>
>> It should.

And in fact, OpenSSL works correctly, at least versions 1.0.1 (Ubuntu)
and 1.1.0 (built from source).
The 1.0.1 version displays a warning if it finds the expired certificate
first, but the verification continues with the next certificates and
finally gives an OK result.
That can be checked either by removing the non-expired certificate from
the CAfile, or by switching to CApath mode and using strace to see that
OpenSSL opens the second CA certificate (named 415660c1.1).
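For illustration, here is a minimal sketch of that check using the OpenSSL
C API (the file names leaf.pem and both_roots.pem and the load_cert helper
are made up for the example; the openssl verify command does the same from
the shell):

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Load one PEM certificate from a file (helper for this sketch only). */
static X509 *load_cert(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;
    X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
    fclose(fp);
    return cert;
}

int main(void)
{
    /* both_roots.pem is assumed to contain both the expired and the
     * renewed CA certificates; leaf.pem is the end entity certificate.
     * Error checking is omitted to keep the sketch short. */
    X509 *leaf = load_cert("leaf.pem");
    X509_STORE *store = X509_STORE_new();
    X509_STORE_load_locations(store, "both_roots.pem", NULL);

    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    X509_STORE_CTX_init(ctx, store, leaf, NULL);

    if (X509_verify_cert(ctx) == 1)
        printf("verification OK\n");
    else
        printf("verification failed: %s\n",
               X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));

    X509_STORE_CTX_free(ctx);
    X509_STORE_free(store);
    X509_free(leaf);
    return 0;
}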

>> When trying to build a valid certification path, all possibilities have
>> to be tested until one of them succeeds. If a CA gives a good signature
>> but fails for whatever reason (a constraint that is not respected, a
>> revoked status, or an expired certificate), then the considered
>> certificate chain is invalid, and the next one has to be tested.
>>
> Read carefully, I said that if the only way to pick the right candidate
> is to validate the signature against 2 same-algorithm public keys, then
> the security of the signature validation is reduced by up to
> log2(keycount) bits.

?? Could you elaborate on this?
Any signature algorithm works by dividing the universe of N-bit strings
into those that are valid signatures for the object and those that are
not.  For most algorithms the valid subset is exactly one of the 2**N
bit strings; for some ECC variants it is two of them; for DSA it is
2**(N/2) of them.

By simple logic, log2 of the ratio of invalid to valid bit strings is
a hard upper bound on the security of the signature algorithm (because
of the trivial try-many-sigs-until-one-is-valid brute force attack).

Now adding code outside the algorithm which accepts two or more
different keys obviously increases the number of bit strings that are
valid, thus reducing that upper bound by log2(number of accepted keys).

For RSA this does not matter much, as the security of an N-bit RSA key
is a lot less than N bits.  But for DSA, ECC algorithms and other
algorithms whose expected security is at or close to that upper bound,
this is a real security reduction.
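To put numbers on the argument, a trivial sketch (the 128-bit figure is
just an assumed nominal security level chosen for illustration, not a
value from this thread):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed nominal security of the signature scheme, in bits. */
    double nominal_bits = 128.0;

    /* Each extra same-algorithm key the verifier is willing to try
     * enlarges the set of acceptable signatures, costing log2(k) bits
     * off the upper bound. */
    for (int accepted_keys = 1; accepted_keys <= 4; accepted_keys *= 2) {
        double effective = nominal_bits - log2((double)accepted_keys);
        printf("%d accepted key(s): about %.1f bits of security\n",
               accepted_keys, effective);
    }
    return 0;
}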

> Anything that can be filtered out without signature checking (such as
> different algorithms, different key identifiers, different key lengths,
> etc.) does not cause that problem, and is OK security-wise, but may not
> be implemented by all (otherwise compliant) X.509 implementations.
>
>> > To make this work (assuming the old root CA cert has not yet expired),
>> > the validation code will need to actually verify the End Entity
>> > certificate against both public keys, which effectively reduces the
>> > algorithm security by allowing twice as many bit strings to be
>> > accepted as valid.
>>
>> An EE can be valid under different certificate chains, without reducing
>> the security of anything. Think about cross-certifications.
> Cross-certifications involve different distinguished names for signature
> chain building; these can all be verified by building the trusted chain
> before validating the signatures.

You're right. The cross-certified entity will have different
certificates, each one with a distinct issuer name.

>> > As for trust anchor update scenarios, I know of 3 different scenarios
>> > that should be accepted by any good X.509 validation algorithm:
>> >
>> > 1. Changing expiry or other attributes while keeping the key.
>> > Here the CA issues a new self-signed certificate with updated
>> > attributes but unchanged key.

>> > 2. Changing the CA key when the old key has *not* been compromised.
>> > Here the CA generates a new key and issues two certificates for it:
>> >
>> >    A. A self-signed new root with a serial number or other variation
>> >      in one of the subject name components.
>>
>> This is a change in the name of the CA, hence it's a completely
>> different CA.
>>
> Yes, but it will still have a sufficiently close name to retain any
> reputation-based human trust.

What about the not-so-old problem of DigiCert (Malaysia) versus
DigiCert, Inc. (US)?

Yep, and it probably did a lot of real-world damage to the innocent DigiCert.
>> >    B. A certificate for the new key and the same subject and
>> >      (optional) SubjectKeyIdentifier as A, but issued by the old
>> >      root certificate identity and key.
>>
>> That's a self-issued certificate; it's OK as long as the old CA
>> certificate is not expired. Well described in X.509.
>> Manual update of the trust anchor is still necessary if you want the
>> validation to pass the expiration date of the old CA cert.
>>
> Actually, this is a cross-certificate from the old CA to the new CA.
> As you said, it is well described in the literature.

Again, I misread. The case you're describing is really a cross-certificate.

So there's one more possibility: the CA changes its key, keeps its name
(so it's the same CA), and issues 2 certificates. The first one is a
self-signed certificate with its brand new key. The second one is a
self-issued certificate, signed by the previous key.
Both this case and the previous one are used by several countries for
CSCA certificates (for passports).

So you say they have an intermediate certificate where
Issuer DN == Subject DN,
but the issuer key is not the key in the cert itself.  Very weird, unless
there are appropriate key identifiers in the certificates.
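For what it's worth, here is a sketch of how an implementation could use
those key identifiers to pick the right candidate issuer, using the
OpenSSL C API (the matches_by_key_id helper and its return convention are
my own illustration, not something OpenSSL itself provides):

#include <openssl/x509.h>
#include <openssl/x509v3.h>

/* Return 1 if the AuthorityKeyIdentifier of 'subject' matches the
 * SubjectKeyIdentifier of 'candidate_issuer', 0 if they differ, and
 * -1 if either extension is missing (so no decision can be made). */
static int matches_by_key_id(X509 *subject, X509 *candidate_issuer)
{
    int result = -1;
    AUTHORITY_KEYID *akid =
        X509_get_ext_d2i(subject, NID_authority_key_identifier, NULL, NULL);
    ASN1_OCTET_STRING *skid =
        X509_get_ext_d2i(candidate_issuer, NID_subject_key_identifier,
                         NULL, NULL);

    if (akid != NULL && akid->keyid != NULL && skid != NULL)
        result = (ASN1_OCTET_STRING_cmp(akid->keyid, skid) == 0) ? 1 : 0;

    AUTHORITY_KEYID_free(akid);
    ASN1_OCTET_STRING_free(skid);
    return result;
}

Only when this filter cannot decide (result -1) would the verifier have
to fall back to trying the signature against each candidate key.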
> Some of the discussions on this thread seem to indicate that when both
> the A and B certificates are available, OpenSSL sometimes may fail to
> stop when it hits the new (A) CA in the trust store, because it does not
> distinguish between its trust store and its collection of
> cached/preloaded intermediate certificates (unlike Windows, which has
> separate stores for those two categories).

What I understand from the OP seems to be different from this paragraph.
I grabbed the old 1996-2004 VeriSign C3 root certificate, and its renewed
version 1996-2028 (same key, same name). That's your scenario 1.
The Thawte CA certificate doesn't have any authorityKeyIdentifier
extension, and OpenSSL correctly tests each possible certificate,
filtered by their subject name, until the validation is OK.

I assume the Thawte certificate you mention is not the same as the
VeriSign certificate (they have been the same company for a long time now).

The example I was referring to was GeoTrust (another VeriSign company)
and their Equifax cross cert.

>> > 3. Setting up the CA to have keys for more than one algorithm (such
>> > as RSA 1024 with SHA1 and RSA 4096 with SHA256).  In this case, the
>> > certificate validation SHOULD (but might not) match issued end
>> > entity certificates to the trust anchor by also comparing
>> > signatureAlgorithm in the issued certificate against
>> > subjectPublicKeyInfo.algorithm in the candidate issuer cert from the
>> > store.
>>
>> The issued certificate will have "sha1withRSA" or "sha256withRSA" in its
>> signatureAlgorithm, not "sha1withRSA1024" or "sha256withRSA4096".
> I deliberately did not give the OID names for the combinations, just
> descriptions; that is why I wrote it as multiple space-separated words.

Yes. And the signatureAlgorithm in the issued certificate will show
"sha1withRSA" while the subjectPublicKeyInfo.algorithm in the different
candidate issuer certificates will all have "rsaEncryption". The
combination of these 2 elements alone can't be a distinguisher. In this
scenario, key lengths and signature lengths could be used as
distinguishers, but the implementation then needs to parse the innards
of the certificates.
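As an illustration of what parsing those innards could look like with the
OpenSSL C API (the print_distinguishers helper is made up for the example,
and the const-qualified X509_get0_signature() used here needs OpenSSL
1.1.0 or later):

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/objects.h>
#include <openssl/x509.h>

/* Print the fields discussed above for one certificate: the signature
 * algorithm, the public key size, and the raw signature length. */
static void print_distinguishers(X509 *cert)
{
    const ASN1_BIT_STRING *sig = NULL;
    const X509_ALGOR *alg = NULL;
    EVP_PKEY *pkey = X509_get_pubkey(cert);

    X509_get0_signature(&sig, &alg, cert);

    printf("signatureAlgorithm: %s\n",
           OBJ_nid2ln(X509_get_signature_nid(cert)));
    if (pkey != NULL)
        printf("public key size:    %d bits\n", EVP_PKEY_bits(pkey));
    if (sig != NULL)
        printf("signature length:   %d bytes\n", ASN1_STRING_length(sig));

    EVP_PKEY_free(pkey);
}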

> Think of the regular posts on this list from people asking if they can
> upgrade Apache to OpenSSL 1.0.1 without recompiling their existing
> Apache httpd binaries.  Then think of all the users who refuse to switch
> from Windows XP to Windows Vista/7/8 because they like the old OS
> better, and who then suffer from Microsoft not adding SHA-256 to the
> crypto algorithms in the XP core.  And then throw in the knock-on effect
> on anyone trying to communicate with everybody, including those hapless
> victims of SHA-256 refusal.

True. They may have "good" reasons for not upgrading. (I doubt it)
Top reasons for users wanting to keep XP as long as possible:

1. Less bloat than in Vista and later.

2. Hardware not big enough to run Vista and later.

3. Needed "line of business" applications not compatible with Vista and
   later.

> I am on the list, no need to cc me

Sorry, that's the default "reply to all" behavior from Thunderbird.

Try "Reply" or "Reply List" (depending on version).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. http://www.wisemo.com
Transformervej 29, 2730 Herlev, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded