Alan DeKok <al...@deployingradius.com> writes:

> Simon Josefsson wrote:
>>>   It is widely deployed today with TTLS.  I think that allowing this
>>> practice to continue is a requirement.
>> 
>> I agree, but that does not necessarily mean that
>> passwords-sent-over-the-wire and passwords-sent-hashed must have the
>> same internationalization treatment or considerations.
>
>   That isn't a requirement, but it does make things easier.
>
>> Sure, but see section 4 of RFC 5198.  If it happens that NFC backwards
>> compatibility is broken, you end up with the interop problem.
>
>   ? Section 4 says explicitly:
>
>    if a string does not contain any unassigned
>    characters, and it is normalized according to NFC, it will always be
>    normalized according to all future versions of the Unicode Standard.
>
>   So there is no backwards compatibility problem.

See this part:

   Were Unicode to be changed in a way that violated these assumptions,
   i.e., that either invalidated the byte string order specified in RFC
   3629 or that changed the stability of NFC as stated above, this
   specification would not apply.

If that happens, there is a backwards compatibility problem with old
implementations.

>   My interpretation is that every system taking user input needs to
> perform this normalization.  Once that's done, string comparison is
> essentially memcmp().

Right.  My point is that one needs to weigh this approach against a
system that does not use normalization but instead uses
internationalized comparison rules.
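
To make that concrete, a minimal sketch of the normalize-everywhere
approach in Python (standard unicodedata module only; the function
name is just illustrative) could look like:

   import unicodedata

   def credential_bytes(s):
       # Normalize to NFC, then encode as UTF-8; after this point,
       # equality is a plain byte comparison (essentially memcmp).
       return unicodedata.normalize('NFC', s).encode('utf-8')

   # Precomposed U+00E9 and decomposed U+0065 U+0301 only compare
   # equal because both sides normalized before comparing.
   assert credential_bytes('\u00e9') == credential_bytes('e\u0301')

Every client and every server has to perform this step, each with its
own copy of the Unicode tables.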

>   If both user && authenticator are using the same version of Unicode,
> then they inter-operate.
>
>   If one is using a newer version, then it either
>
>       (a) creates the same output string, because the *old* characters
>             have not been re-assigned in the *new* standard
>
>       (b) creates a different output string than the old system,
>           because either the standard is not backwards compatible,
>           or the implementation is wrong.
>
>   Is there anything I'm missing?

Yes: protocols typically do not negotiate which SASLprep or RFC 5198
version is used, so a client would typically not know what Unicode
version the server uses.  This is one reason to put the
Unicode-version-dependent code in one place (where the comparison
happens) rather than in several places (normalization in the client and
in the server).
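
A sketch of the alternative I have in mind, again in Python with
illustrative names, keeps the normalization next to the comparison on
the server:

   import unicodedata

   def passwords_match(stored, received):
       # Both strings are normalized here, by one Unicode
       # implementation, immediately before comparing.  It does not
       # matter whether the client sent NFC, NFD, or unnormalized
       # text, or which Unicode version the client was built with.
       return (unicodedata.normalize('NFC', stored)
               == unicodedata.normalize('NFC', received))

This of course assumes the server sees the cleartext password (as with
TTLS/PAP); when only hashes are exchanged, the client has to normalize
before hashing, and we are back to the first approach.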

Which approach to use needs to be decided with these trade-offs in
mind.  I'm not saying that the comparison approach is always better,
but my experience with normalization in client/server systems is that
it does not work well enough, so we should consider whether other
approaches may work better.

/Simon
