+1 what Ryan said. Especially the point that added restrictions aren’t a viable 
path to better interoperability.

ALPN IDs are byte strings; the fact that some of them can be displayed as ASCII 
character strings merely reflects the fact that those ALPN IDs were chosen by 
humans😊.

Cheers,

Andrei

From: TLS <tls-boun...@ietf.org> On Behalf Of Ryan Sleevi
Sent: Thursday, May 20, 2021 12:06 AM
To: tls@ietf.org
Subject: [EXTERNAL] Re: [TLS] Narrowing allowed characters in ALPN ?



On Thu, May 20, 2021 at 1:56 AM Viktor Dukhovni 
<ietf-d...@dukhovni.org> wrote:
On Wed, May 19, 2021 at 10:29:43PM +0000, Salz, Rich wrote:

> I support limiting it.

I concur.  These are not strings used between users to communicate in
their native language.  They are machine-to-machine protocol
identifiers, and use of the narrowest reasonable character set promotes
interoperability.

I'm not sure I understand this. Could you expand on how adding more 
normative restrictions promotes, rather than harms, interoperability?

The fact that, as you highlight, they are machine-to-machine, seems like the 
greatest path to interoperability, because they shouldn't be assumed to be 
"human-readable", and because as specified, no other validation needs to be 
performed by either party. They should simply be treated as they're specified: 
an opaque series of bytes. Conversions to text strings, or transformations such 
as character-set mapping, seem like a fundamental misunderstanding/misuse of 
them, rather than something to support.
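To make the "opaque series of bytes" point concrete, here's a minimal sketch of server-side ALPN selection done purely by byte comparison. The function name `select_alpn` and the `SUPPORTED` list are illustrative, not from any real TLS stack; the point is that matching never involves decoding or character-set checks, so arbitrary byte values are unproblematic.

```python
# Sketch (hypothetical helper): ALPN protocol IDs treated as opaque byte
# strings. Matching is exact byte-for-byte comparison; no character-set
# decoding or validation is performed by either side.

SUPPORTED = [b"h2", b"http/1.1", b"\xff\x01grease-like"]  # arbitrary bytes are fine

def select_alpn(client_protocols):
    """Server-side selection: first server-preferred ID the client offered."""
    offered = set(client_protocols)
    for proto in SUPPORTED:  # iterate in server preference order
        if proto in offered:
            return proto
    return None  # no overlap -> no_application_protocol alert in a real stack

print(select_alpn([b"http/1.1", b"h2"]))      # b'h2'
print(select_alpn([b"\xff\x01grease-like"]))  # b'\xff\x01grease-like'
```

Note that the non-ASCII identifier participates in negotiation exactly like the human-chosen ones; nothing in the selection logic cares what the bytes "spell".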

The idea of restricting the character set seems like it only opens the door for 
less interoperability and more complexity. For example, senders need to make 
sure they're sending within the allowed character set (meaning validation 
before transmission), and receivers that wish to avoid malicious peers need to 
similarly validate the identifiers before exposing them to API callers. This 
then adds complexity to the API design, as "no fail" operations now become 
"maybe fail" (e.g. if a caller attempts to call with an invalid character 
string), and that propagates throughout the design of systems (e.g. config 
files that may now fail to load).
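The "no fail" to "maybe fail" shift can be sketched as follows. The function name, the `ALLOWED` set, and the specific restriction (printable ASCII, no space) are all hypothetical, chosen only to illustrate the new error path that any character-set rule would introduce:

```python
# Sketch of the added failure mode: once a character-set restriction exists,
# a previously infallible "set ALPN list" operation gains an error path.
# Names and the exact rule here are illustrative, not from any real TLS API.

ALLOWED = set(range(0x21, 0x7f))  # hypothetical rule: printable ASCII, no space

def set_alpn_restricted(protocols):
    """Validate each ALPN ID against the restriction before accepting it."""
    for p in protocols:
        if any(b not in ALLOWED for b in p):
            # A config file containing this ID now fails to load.
            raise ValueError(f"ALPN ID {p!r} outside the allowed character set")
    # ... hand the validated list off to the TLS stack ...
```

Every caller now has to handle the `ValueError` case, and that obligation propagates outward to anything that supplies ALPN IDs (configuration loaders, management APIs, and so on).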

It seems there's a parallel here to the discussion about whether HTTP/2 should 
have been a text protocol, like HTTP/1.1 and its predecessors, which had 
similar arguments to what's being raised now, versus the binary protocol that 
was ultimately adopted.

If the argument is that the extensibility has already rusted shut because the 
ecosystem ignored the spec and we didn't GREASE it by using ALPN identifiers 
that actually behaved as opaque bytes, then we should at least make an effort 
to document why and when that happened, so that mistakes like that can be 
avoided in the future.
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls