I've got some code that dumps TLS diagnostic info and realised it was
displaying garbage values for some signature_algorithms entries.  Section
4.2.3 of RFC 8446 says:

      In TLS 1.2, the extension contained hash/signature pairs.  The
      pairs are encoded in two octets, so SignatureScheme values have
      been allocated to align with TLS 1.2's encoding.

However, they don't align with TLS 1.2's encoding (apart from being 16-bit
values): the values are encoded backwards compared to TLS 1.2, so where 1.2
uses { hash, sig }, 1.3 uses values equivalent to { sig, hash }.  In
particular, to decode them you need to know whether you're looking at a 1.2
value or a 1.3 value, and a 1.2-compliant decoder that's looking at what it
thinks are { hash, sig } pairs will get very confused.
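
To make the confusion concrete, here's a minimal C sketch of what a
1.2-style decoder does with two entries (the decode12 name is made up for
illustration): 0x0401 is rsa_pkcs1_sha256, which happens to come out looking
sane as { sha256, rsa }, while 0x0807 is the 1.3-only ed25519, which comes
out as hash 8 and sig 7, codepoints RFC 5246 never defined:

  #include <stdint.h>
  #include <stdio.h>

  /* Decode one signature_algorithms entry the TLS 1.2 way (RFC 5246):
     first octet = HashAlgorithm, second octet = SignatureAlgorithm */
  static void decode12(uint16_t value)
  {
      unsigned hash = (value >> 8) & 0xFF;
      unsigned sig = value & 0xFF;
      printf("TLS 1.2 view of 0x%04X: hash = %u, sig = %u\n",
             (unsigned)value, hash, sig);
  }

  int main(void)
  {
      /* rsa_pkcs1_sha256 (0x0401): decodes sensibly as
         { hash = sha256 (4), sig = rsa (1) } */
      decode12(0x0401);

      /* ed25519 (0x0807), a 1.3-only SignatureScheme: a 1.2 decoder
         sees hash = 8, sig = 7, neither of which RFC 5246 defines,
         which is where the garbage in the diagnostic dump comes from */
      decode12(0x0807);
      return 0;
  }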

Should I submit an erratum changing the above text to point out that the
encoding is incompatible and signature_algorithms needs to be decoded
differently depending on whether it's coming from a 1.2 or a 1.3 client?  At
the moment the text is misleading, since it implies that it's possible to
process the extension with a 1.2-compliant decoder when in fact none of the
1.3 values can be decoded that way.

Peter.

