Re: [TLS] [OPSEC] Call For Adoption: draft-wang-opsec-tls-proxy-bp

2020-07-27 Thread Nick Harper
As currently written, this draft has multiple problems.

Section 4 decides not to repeat the Protocol Invariants described in
section 9.3 of RFC 8446. However, further sections are written assuming
that a proxy acts in a way contrary to those Protocol Invariants. One such
example is section 4.2 in this draft. It describes how a proxy might
generate its list of cipher suites by modifying the client's list. A proxy
that copies the cipher suites from the client-initiated ClientHello into
its own ClientHello is violating the 1st and 3rd points of the TLS 1.3
Protocol Invariants. For a best practices document, I think it would be
reasonable to reiterate the Protocol Invariants.

In addition to reiterating the Protocol Invariants, it should also
summarize the advice from the cited papers SECURITY_IMPACT and
APPLIANCE_ANALYSIS. One of the problems pointed out by those papers is that
TLS proxies will make connections to a server that presents a certificate
the client wouldn't accept, but because of the proxy, the client isn't
aware of the certificate issues. I don't see this issue addressed at all in
the draft. (A similar issue with the server selecting a weak cipher suite
is possibly implied by section 4.2, but it is not spelled out well.)

If this draft is adopted, it needs to say the following things, which it
currently doesn't.

- When a TLS proxy generates its ClientHello, it should be created
independently from the client-initiated ClientHello. The proxy MAY choose
to omit fields from its ClientHello based on the client-initiated
ClientHello, but it MUST NOT add fields to its ClientHello based on the
client-initiated ClientHello. This is effectively a restatement of the 1st
(a client MUST support all parameters it sends) and 3rd (it MUST generate
its own ClientHello containing only parameters it understands) points of
the TLS 1.3 Protocol Invariants.

- If a proxy chooses to conditionally proxy TLS connections and needs more
information than what is contained in the client-initiated ClientHello,
then the only way to make that decision is to send its own ClientHello to
the server the client is connecting to and use information observed on that
connection to make the decision to proxy the pending connection.

- If a proxy chooses to not proxy some TLS connections, the proxy will fail
open. The only way to avoid failing open is to proxy all connections.
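
The first bullet can be illustrated with a short sketch (Python, with made-up suite lists; this is my illustration, not text from the draft): the proxy's list is at most an intersection with what the client offered, never a copy.

```python
# Hypothetical example: the proxy's own supported suites
# (TLS_AES_128_GCM_SHA256, _AES_256_GCM, _CHACHA20_POLY1305).
PROXY_SUPPORTED_SUITES = [0x1301, 0x1302, 0x1303]

def proxy_cipher_suites(client_offered):
    """Cipher suites for the proxy's own ClientHello.

    Intersection only: the proxy MAY omit suites the client didn't
    offer, but MUST NOT copy in values it doesn't itself support.
    Copying client_offered verbatim would violate invariants 1 and 3
    of RFC 8446 section 9.3.
    """
    offered = set(client_offered)
    return [s for s in PROXY_SUPPORTED_SUITES if s in offered]
```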

On Mon, Jul 27, 2020 at 6:31 AM Ben Schwartz  wrote:

> I'm concerned about this work happening outside the TLS working group.
> For example, the question of proper handling of TLS extensions is not
> addressed at all in this draft, and has significant security and
> functionality implications.  There are various other tricky protocol issues
> (e.g. version negotiation, TLS 1.3 record padding, TLS 1.3 0-RTT vs. TLS
> 1.2 False Start, round-trip deadlock when buffers fill, ticket (non-)reuse,
> client certificate linkability pre-TLS-1.3, implications of SAN scope of
> synthesized certificates) that could arise and are going to be difficult to
> get right in any other WG.
>
> The title "TLS Proxy Best Practice" implies that it is possible to proxy
> TLS correctly, and that this document is the main source for how to do it.
> I think the TLS WG is the right place to make those judgments.  For the
> OpSec group, I think a more appropriate draft would be something like "TLS
> Interception Pitfalls", documenting the operational experience on failure
> modes of TLS interception.
>
On Mon, Jul 27, 2020 at 8:57 AM Nancy Cam-Winget (ncamwing) <ncamwing=40cisco@dmarc.ietf.org> wrote:
>
>> The document is not imposing any standards but rather provides guidelines
>> for those implementing TLS proxies; given that proxies will continue to
>> exist, I'm not sure why there is a belief that the IETF should ignore this.
>>
>> Warm regards, Nancy
>>
>> On 7/27/20, 5:20 AM, "OPSEC on behalf of Blumenthal, Uri - 0553 - MITLL"
>>  wrote:
>>
>> I support Stephen and oppose adoption. IMHO, this is not a technology
>> that IETF should standardize.
>>
>>
>> On 7/25/20, 10:07, "TLS on behalf of Stephen Farrell" <
>> tls-boun...@ietf.org on behalf of stephen.farr...@cs.tcd.ie> wrote:
>>
>>
>> I oppose adoption. While there could be some minor benefit
>> in documenting the uses and abuses seen when mitm'ing tls,
>> I doubt that the effort to ensure a balanced document is at
>> all worthwhile. The current draft is too far from what it'd
>> need to be to be adopted.
>>
>> Send to ISE.
>>
>> S.
>>
>> On 23/07/2020 02:30, Jen Linkova wrote:
>> > One thing to add here: the chairs would like to hear active and
>> > explicit support of the adoption. So please speak up if you
>> believe
>> > the draft is useful and the WG shall work on getting it
>> published.
>> >
>> > On Mon, Jul 20, 2020 at 3:35 AM Ron Bonica
>> >  wrote:
>> >>
>> >> Folks,
>> >>

Re: [TLS] [OPSAWG] CALL FOR ADOPTION: draft-reddy-opsawg-mud-tls

2020-09-15 Thread Nick Harper
I agree with EKR, Ben Schwartz, and Watson Ladd's concerns on this draft.

The grease_extension parameter shouldn't exist, and there should be no
special handling for GREASE values. GREASE doesn't need to be mentioned in
this draft, except to say that a client may send values (cipher suites,
extensions, named groups, signature algorithms, versions, key exchange
modes, ALPN identifiers, etc.) that are unknown to the middlebox and that
the middlebox MUST NOT reject connections with values unknown to the
middlebox. (This can be stated without mentioning GREASE specifically.)

There is also an issue where this draft does not describe how an observer
identifies whether a TLS ClientHello is compliant with a MUD profile.
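
A sketch of what compliant middlebox handling could look like (hypothetical code points and profile, not from the draft): judge a ClientHello only on values you recognize, and never reject on unknown ones.

```python
# Hypothetical code points: what this middlebox knows, and what the
# (imaginary) MUD profile allows.
KNOWN_CIPHER_SUITES = {0x1301, 0x1302, 0x1303, 0x002F}
PROFILE_ALLOWED = {0x1301, 0x1302, 0x1303}

def hello_acceptable(offered_suites):
    """Judge a ClientHello only on code points the middlebox recognizes.

    Unknown values (including GREASE values such as 0x0A0A) are passed
    through and are never by themselves grounds for rejection.
    """
    recognized = [s for s in offered_suites if s in KNOWN_CIPHER_SUITES]
    return all(s in PROFILE_ALLOWED for s in recognized)
```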

On Tue, Sep 15, 2020 at 4:58 PM Watson Ladd  wrote:

> On Tue, Sep 15, 2020, 9:10 AM Eliot Lear
>  wrote:
> >
> >
> >
> > My concern is not with "new extensions" per se.  The hidden assumption
> here is that "extensions" are the only way TLS can evolve.  In fact, future
> TLS versions are not constrained to evolve in any particular way.  For
> example, future versions can introduce entirely new messages in the
> handshake, or remove messages that are currently visible in the handshake.
> QUIC is arguably just an extreme version of this observation.
> >
> >
> > I understand.  I used TLS extensions merely as an example.
>
> There is no reason that a firewall should expect to parse TLS 1.4. TLS
> 1.3 had to go through significant hoops due to middleboxes that
> assumed they could see into everything like it was 1.2. This easily
> added a year to the development time. The final hunt for incompatible
> devices involved attempting to purchase samples, with no real
> guarantee that they would find an intolerant device. Encouraging this
> sort of behavior is a bad idea IMHO, as it will substantially burden
> the TLS WG when designing TLS 1.4 in all sorts of unexpected ways.
>
> >
> >
> > Even within the realm of ClientHello extensions, there is significant
> inflexibility here.  For example, consider the handling of GREASE
> extensions.  GREASE uses a variety of reserved extension codepoints,
> specifically to make sure that no entity is attempting to restrict use of
> unrecognized extensions.  This proposal therefore has to add a flag
> declaring whether the client uses GREASE, because otherwise the set of
> extensions is dynamic, and the number of potential codepoints is
> impractically large.  Any change to the way GREASE selects and rotates
> extension codepoints would therefore require a revision of this YANG model
> first.  There has also been discussion of adding GREASE-type behavior to
> the "supported_versions" extension; that would similarly require a revised
> YANG model here.
> >
> >
> > Probably greasing is something that needs a certain special handling.
> Indeed that’s a form of fingerprinting (greases field XYZ).
>
> The whole point of grease is keeping extensions open. Coding special
> handling defeats the purpose.
>
> Sincerely,
> Watson Ladd
>
> >
> > Eliot
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Is stateless HelloRetryRequest worthwhile? (was Re: TLS 1.3 Problem?)

2020-10-01 Thread Nick Harper
On Thu, Oct 1, 2020 at 7:05 AM Michael D'Errico  wrote:

> > I am having a difficult time understanding the tradeoffs you're facing.
>
> This is the first time I'm reading the TLS 1.3 RFC.  I have
> implemented SSLv3, TLS 1.0, 1.1, and 1.2.  You may have
> used my test server at https www dot mikestoolbox dot
> org or dot net to test your own code.  It's kind of old now
> since it doesn't do ECC and the DHE_RSA key exchange
> I focused on has been disabled by most clients so you
> end up getting a regular RSA handshake now.
>
> I have gotten caught by the stateless HelloRetryRequest
> and can't get past it.  You can't possibly implement it the
> way the spec suggests with just a hash in a HRR cookie
> extension.


The only thing the server needs to know is the hash of the ClientHello (so
it can restore the transcript hash) and that the server has already sent a
HelloRetryRequest (which it can detect by the presence of the cookie). The
only argument I've seen for the spec's suggestion not working is that it
prevents verifying which fields changed between ClientHello1 and
ClientHello2. I see no language in RFC 8446 saying the server MUST enforce
that ClientHello2 is conformant with respect to ClientHello1. Thus, I think
what the spec suggests should work.
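
A minimal sketch of such a stateless-HRR cookie (Python; simplified — it glosses over the message_hash transcript substitution and any other state a real server might want to stash):

```python
import hashlib
import hmac
import os

# Per-process secret protecting cookie integrity. Note: cookies minted
# with this key do not survive a server restart.
SERVER_KEY = os.urandom(32)

def make_cookie(client_hello1: bytes) -> bytes:
    """Cookie for a stateless HelloRetryRequest.

    Carries the hash of ClientHello1 (enough to restore the transcript)
    plus a MAC. The cookie's mere presence in ClientHello2 tells the
    server an HRR was already sent.
    """
    ch_hash = hashlib.sha256(client_hello1).digest()
    mac = hmac.new(SERVER_KEY, ch_hash, hashlib.sha256).digest()
    return ch_hash + mac

def verify_cookie(cookie: bytes) -> bytes:
    """Check the MAC and return the restored hash of ClientHello1."""
    ch_hash, mac = cookie[:32], cookie[32:]
    expected = hmac.new(SERVER_KEY, ch_hash, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        raise ValueError("bad cookie")
    return ch_hash
```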


> If it can be done at all, the stateless server
> should probably just put the ClientHello1 and HRR (minus
> the cookie) into the cookie extension.  If this is how it
> should be done, then the spec should say so -- exactly
> how to do it so everyone does it the same (correct) way
> and not just hand-wave it and say figure it out yourself.
>
> Getting the cookie right isn't enough because of the
> potential for resending an old cookie by a mischievous
> client.


The cookie serves as an alternative way for the server to remember the
ClientHello1 sent by the client. If the client is trying to perform some
sort of attack on the server by re-sending an old cookie, I assume that a
prerequisite for this attack is that the TLS handshake succeeds. For the
handshake to succeed, the client needs to know the ClientHello1 that
corresponds to the cookie so that it can compute the transcript hash
correctly. Regardless of the source of that client hello, the client can
equivalently send that ClientHello, get a real HelloRetryRequest from the
server, and send its ClientHello2, or it can send a ClientHello with the
cookie, bypassing the step of waiting for the HRR. A client attempting to
do something malicious with an HRR cookie is equivalent to behavior that
does not depend on a cookie.

>
> Mike
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PR#28: Converting cTLS to QUIC-style varints

2020-10-06 Thread Nick Harper
On Tue, Oct 6, 2020 at 11:37 AM Michael D'Errico 
wrote:

> I think we are in agreement.
>
> On 10/6/20 13:12, Christian Huitema wrote:
> > * Receiver side: receive the message, parse with a generic ASN.1 decoder,
> > process the message using the "parsed" representation, re-encode with
> > DER, check the signature.
>
> I recall that at least one root certificate had a
> SEQUENCE encoded using BER-but-not-DER (?)  Yeah if
> your software re-encoded that, it would no longer
> be the same sequence of bytes.
>
> > Experience showed that this workflow is very problematic, because the
> > parse/reencode process may introduce subtle changes and the signature
> > will fail.  One may argue that these changes are due to implementation
> > bugs, but the fact is that this is a rich environment for growing bugs.
> > Based on experience, the receiver side is better done as:
> >
> > * Receiver side: receive the message, save it, parse and process, and
> > when it is time to verify the signature go back to the original message
> > and check the signature.
>
> This is how I did X.509 verification, though I was
> late to the game and the advice was already there
> to accept a BER-encoded certificate.  Not sure if
> I would have done the DER re-encoding bit if that
> was the current advice at the time since it seems
> like the wrong thing to do, but maybe I would have.
>
> > If we do that, then there is no reason to mandate minimal length
> > encoding. And TLS already does that. For example, we do not reorder
> > extensions according to some canonical rules before placing them in the
> > transcript.
>
> I was disappointed to see that the TLS 1.3 spec now
> has a requirement to put one of the ClientHello
> extensions in a specific place (last in the list).
>
> We discussed this at length during the development
> of either TLS 1.2 or one of the extensions (maybe
> renegotiation-info?) and we ultimately came to what
> I believe was the correct decision never to require
> any ordering of the extensions.  Sad to see the
> group capitulated to whomever said it would make
> their software easier to write (which I doubt).
>

Hopefully https://tools.ietf.org/html/rfc8446#section-4.2.11.2 makes it
clear why the pre_shared_key extension must be at the end of the list.
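
For readers following along, a simplified sketch of why the ordering matters: the binder MAC covers the ClientHello truncated just before the binders list itself, which only works if pre_shared_key is the last extension. (Real binders derive the key via HKDF finished-key machinery and hash the full handshake context; this is illustrative only.)

```python
import hashlib
import hmac

def psk_binder(binder_key: bytes, client_hello: bytes, binders_len: int) -> bytes:
    """Simplified PSK binder in the spirit of RFC 8446 section 4.2.11.2.

    The MAC is computed over a transcript of the ClientHello *truncated*
    just before the binders list. If pre_shared_key were not the last
    extension, the binder could not cover everything that precedes it.
    """
    truncated = client_hello[:-binders_len]
    transcript = hashlib.sha256(truncated).digest()
    return hmac.new(binder_key, transcript, hashlib.sha256).digest()
```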

>
> Mike
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PR#28: Converting cTLS to QUIC-style varints

2020-10-06 Thread Nick Harper
I have no strong opinion on how this is formatted. I'd base my decision on
what the maximum value cTLS needs to encode: If 2^22-1 is sufficient, let's
keep it as is, otherwise let's change it to the QUIC format (or some other
change to increase the max value). I do like that the existing scheme,
compared to QUIC varints, is more efficient for values 64-127 and just as
efficient for the rest.
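
For concreteness, a sketch of the two encoded lengths (QUIC per RFC 9000; the cTLS scheme as I read the then-current draft, with 7/14/22-bit payloads behind a 1- or 2-bit prefix — treat the cTLS half as my reading, not gospel):

```python
def quic_varint_len(v: int) -> int:
    """Encoded length of a QUIC (RFC 9000) varint: 2-bit prefix,
    payloads of 6, 14, 30, or 62 bits."""
    for bits, nbytes in ((6, 1), (14, 2), (30, 4), (62, 8)):
        if v < (1 << bits):
            return nbytes
    raise ValueError("value too large for a QUIC varint")

def ctls_varint_len(v: int) -> int:
    """Encoded length under the existing cTLS scheme (as I understand
    it): 1 byte up to 2^7-1, 2 bytes up to 2^14-1, 3 bytes up to 2^22-1."""
    for bits, nbytes in ((7, 1), (14, 2), (22, 3)):
        if v < (1 << bits):
            return nbytes
    raise ValueError("cTLS varints top out at 2^22-1")
```

Values 64-127 take 1 byte under cTLS but 2 under QUIC; everywhere below 2^14 the two schemes otherwise tie.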

On Mon, Oct 5, 2020 at 8:09 PM Eric Rescorla  wrote:

> I don't have a strong opinion on whether to require a minimal encoding,
> but if we're not going to use QUIC's encoding as-is, then I would rather
> stick with the existing scheme, which has twice as large a range for the 1
> byte encoding and is thus more compact for a range of common cases.
>
> -Ekr
>
>
> On Mon, Oct 5, 2020 at 7:31 PM Marten Seemann 
> wrote:
>
>> In that case, why use QUIC's encoding at all? It would just put the
>> burden on the receiver to check that the minimal encoding was used.
>> Would it instead make more sense to modify QUIC's encoding, such that the
>> 2-byte encoding doesn't encode the numbers from 0 to 16383, but the numbers
>> from 64 to (16383 + 64), and equivalently for 4 and 8-byte encodings?
>>
>> On Tue, Oct 6, 2020 at 9:22 AM Salz, Rich  wrote:
>>
>>> Can you just say “QUIC rules but use the minimum possible length”?
>>>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Flags extension and announcing support

2021-01-22 Thread Nick Harper
On Thu, Jan 21, 2021 at 9:46 PM Martin Thomson  wrote:

> In other words, each flag is treated just like an empty extension: you can
> initiate an exchange with it, but you can only answer with it if it was
> initiated with it.
>
I agree that this is the correct guiding principle for handling flags. We
should allow unsolicited flags in the same places we allow unsolicited
extensions. Going by section 4.2 of RFC 8446, that would be ClientHello,
CertificateRequest, and NewSessionTicket.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Narrowing allowed characters in ALPN ?

2021-05-20 Thread Nick Harper
On Thu, May 20, 2021 at 11:19 AM Viktor Dukhovni 
wrote:

> On Thu, May 20, 2021 at 01:46:38PM -0400, Ryan Sleevi wrote:
>
> > > It is fine for the TLS protocol to not care, but for the *standard* ALPN
> > > values in the IANA registry (that might then also appear in DNS
> > > zone files, configuration files, ...) a more restricted character
> > > set would actually be helpful.
> >
> > I'm a little torn here, because you've again mentioned usability and
> > interoperability suffer, but it's unclear if you're seeing that as a
> > generic statement or simply "with respect to configuring DNS zone files".
>
> At present, more the latter, but not exclusively so, since there are
> likely other places where operators might be recording choices of
> supported ALPN values in configuration files.
>
> > Saying BIDI, LTR/RTL or NFKC/NFKD are issues here is like saying the TLS
> > wire protocol version field itself suffers from left-to-right issues. Such
> > a statement makes no sense, because the version, like the ALPN, is a byte
> > string.
>
> And indeed at present also in the DNS wire format.  What's new is that
> those values are now also going to be manipulated by operators in their
> presentation form, which gets rather unwieldy when the values happen to
> contain commas, double-quotes, control characters, ... let alone strings
> that in UTF-8 appear to be NFKD, BIDI, ...
>
> > We don't say that the TLS version is "COMBINING TILDE" (U+0303), we
> > say it's 0x03 0x03, or, if we want to make the string human readable, we
> > convert it to a value that has no relation to its wire representation -
> > such as "TLS 1.3"
>
> Of course, we all understand they're plain octet-strings.  But that does
> not help the poor operator trying to enter them into a config file or
> a web form.
>
> > The suggestion here, of restricting the registered set, seems like it
> > should equally be obvious as creating and amplifying interoperability
> > issues, rather than resolving them, because of the assumption again that
> > values will be ASCII, even though that's not what the wire protocol
> > assumes.
>
> I don't see a substantial risk that TLS stacks will start to not treat
> the ALPN string as an opaque byte string, it would take more code to do
> otherwise.
>
> > APIs, from the TLS implementation itself to those that expose or
> > rely on the TLS information (such as the Web APIs previously mentioned)
> > would, if such a path as you suggest here were adopted, no doubt assume
> > that despite the spec saying it's a byte string, it's in fact an ASCII
> > string.
>
> This is of course possible, but does not look like a substantial risk.
> And there's always GREASE.
>
> > This issue is, functionally, an example of that. It seems like the issue
> is
> > not at all related to the DNS wire format, which is perfectly happy with
> > this, but rather the configuration text syntax used with DNS, and not
> > wanting to do things like escape byte sequences.
>
> Correct, but more than just DNS, basically any data-at-rest
> serialisation of ALPN values in configuration files, or use
> with interactive data entry, ...
>
> > This is why it seems like it's simply a matter of being inconvenient,
> > but have I misunderstood and there's a deeper issue I've missed?
>
> The inconvenience means that applications that process SVCB/HTTPS data
> entered by users need much more complex and easier to mess up parsers.
>
> Since the likelihood of actually adding exotic ALPN values to the
> registry appears slim, why not say so.  That would leave the exotic
> values for private on-the-wire use, while allowing DNS and other
> configuration serialisation forms to avail themselves of more
> straight-forward parsers.
>

Encoding ALPN identifiers in hex for these configuration files sounds like
a very straightforward way to support all valid ALPN identifiers. We
already have "exotic" ALPN identifiers in the registry (for GREASE). Any
new scheme that handles ALPN should be designed to handle all possible
values. Not doing so will lead to interoperability issues that others have
already mentioned.
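
A hex-based config encoding is trivially total over all ALPN values (illustrative Python; the function names are mine):

```python
def alpn_config_encode(alpn_id: bytes) -> str:
    """Serialize any ALPN identifier (an opaque byte string) for a
    config file or zone file as lowercase hex."""
    return alpn_id.hex()

def alpn_config_decode(text: str) -> bytes:
    """Recover the on-the-wire ALPN identifier from its hex form."""
    return bytes.fromhex(text)
```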

>
> --
> Viktor.
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Narrowing allowed characters in ALPN ?

2021-05-20 Thread Nick Harper
On Thu, May 20, 2021 at 3:56 PM Viktor Dukhovni 
wrote:

> I agree it is a straight-forward encoding for machines, and it is
> well suited for the GREASE code points.
>
> But, it makes for a fairly terrible user interface for the human
> operator.  Compare:
>
> * managesieve
> * 6d616e6167657369657665
>
> Typos in hex values are easy to make and hard to recognise.
>
I agree that it's not a great user interface for the human. A good
solution to that is to let the user define a constant with the hex value
(or build the ALPN constant into the config language), like how with
OpenSSL one can specify "ECDHE-ECDSA-AES128-GCM-SHA256" instead of 0xC02B.
Using your example, one could define a constant ManageSieve = {0x6d 0x61
0x6e 0x61 0x67 0x65 0x73 0x69 0x65 0x76 0x65} and reference that constant,
and if a typo were made (e.g. one put ManageSeive in the config), the
config would fail fast, vs if one configured "manageseive" as the ALPN
directly, the typo would propagate further through a deployment before
being detected/fixed.
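
The constant-plus-hex idea could look like this (hypothetical config-resolution sketch; names are mine):

```python
# Named constants for common ALPN identifiers; raw hex covers any
# exotic or GREASE code point.
ALPN_CONSTANTS = {
    "ManageSieve": bytes.fromhex("6d616e6167657369657665"),
    "h2": b"h2",
}

def resolve_alpn(token: str) -> bytes:
    """Resolve a config token to an on-the-wire ALPN identifier.

    A typo like 'ManageSeive' is neither a known constant nor valid
    hex, so the config fails fast instead of propagating a bad ALPN.
    """
    if token in ALPN_CONSTANTS:
        return ALPN_CONSTANTS[token]
    try:
        return bytes.fromhex(token)
    except ValueError:
        raise KeyError(f"unknown ALPN constant and not valid hex: {token}")
```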

There are good solutions to solve the human factors of managing/configuring
ALPN that don't require imposing restrictions on what ALPN can go on the
wire in TLS. Those solutions should be favored over restricting the wire
protocol/code point allocation.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Possible TLS 1.3 erratum

2021-07-15 Thread Nick Harper
Regarding
> so where 1.2 uses { hash, sig } 1.3 uses values equivalent to { sig, hash }.

While some TLS 1.3 SignatureScheme enum values might appear to have the sig
in the upper octet and hash in the lower octet, that is not the case and
SignatureSchemes for TLS 1.3 only exist as combinations with all parameters
specified. (Some SignatureSchemes, e.g. ed25519 and ed448 don't decompose
into a separate sig and hash.) It does not make sense to think about
decomposing the on-the-wire representation of a SignatureScheme into a
separate sig and hash.

> Should I submit an erratum changing the above text to point out that the
> encoding is incompatible and signature_algorithms needs to be decoded
> differently depending on whether it's coming from a 1.2 or 1.3 client?

I don't think an erratum or PR is necessary. A TLS 1.2 server can process
the extension as specified in RFC 5246 (and the TLS 1.3 values will be
ignored as {unknown hash, unknown algorithm}). A TLS 1.3 server can process
the extension as values from the TLS 1.3 SignatureScheme enum, even if TLS
1.2 is negotiated. There's no incompatibility here.
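
To make the point concrete, here's a sketch of what a TLS 1.2 decoder does with these two-octet values (RFC 5246 code points; illustrative only):

```python
# RFC 5246 HashAlgorithm and SignatureAlgorithm registries (defined values).
TLS12_HASHES = {1: "md5", 2: "sha1", 3: "sha224", 4: "sha256", 5: "sha384", 6: "sha512"}
TLS12_SIGS = {1: "rsa", 2: "dsa", 3: "ecdsa"}

def decode_sigalg(value: int):
    """Decode a two-octet signature_algorithms entry as a TLS 1.2 stack would.

    TLS 1.3 SignatureSchemes like 0x0804 (rsa_pss_rsae_sha256) and
    0x0807 (ed25519) deliberately use an upper octet (0x08) outside the
    defined 1.2 hash values, so a 1.2 decoder just skips them.
    """
    hash_id, sig_id = value >> 8, value & 0xFF
    if hash_id in TLS12_HASHES and sig_id in TLS12_SIGS:
        return (TLS12_HASHES[hash_id], TLS12_SIGS[sig_id])
    return None  # unknown pair: ignored, not an error
```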

On Thu, Jul 15, 2021 at 7:53 AM David Benjamin 
wrote:

> The SignatureScheme change was perhaps overly clever, but the intent is
> that you can process them the same at both versions and backport
> the single-enum interpretation. (That's what we do.) The key observation is
> that TLS 1.3's allocations will never overlap with a defined TLS 1.2 hash
> or signature value. So an old implementation will never send a value that
> overlaps with TLS 1.3. More importantly, it will interpret any new TLS 1.3
> value as {unknown hash, unknown algorithm} and ignore it, which is what we
> want it to do anyway.
>
> That means an old implementation will interop just fine with new values,
> and we can freely recast the whole extension as SignatureSchemes in new
> implementations.
>
> On Thu, Jul 15, 2021 at 9:02 AM Eric Rescorla  wrote:
>
>> As we are currently working on a 8446-bis, the best thing to do would be
>> to file a PR at:
>> https://github.com/tlswg/tls13-spec
>>
>> Thanks,
>> -Ekr
>>
>>
>> On Thu, Jul 15, 2021 at 3:56 AM Peter Gutmann 
>> wrote:
>>
>>> I've got some code that dumps TLS diagnostic info and realised it was
>>> displaying garbage values for some signature_algorithms entries.  Section
>>> 4.2.3 of the RFC says:
>>>
>>>   In TLS 1.2, the extension contained hash/signature pairs.  The
>>>   pairs are encoded in two octets, so SignatureScheme values have
>>>   been allocated to align with TLS 1.2's encoding.
>>>
>>> However, they don't align with TLS 1.2's encoding (apart from being
>>> 16-bit
>>> values), the values are encoded backwards compared to TLS 1.2, so where
>>> 1.2
>>> uses { hash, sig } 1.3 uses values equivalent to { sig, hash }.  In
>>> particular
>>> to decode them you need to know whether you're looking at a 1.2 value or
>>> a 1.3
>>> value, and a 1.2-compliant decoder that's looking at what it thinks are
>>> { hash, sig } pairs will get very confused.
>>>
>>> Should I submit an erratum changing the above text to point out that the
>>> encoding is incompatible and signature_algorithms needs to be decoded
>>> differently depending on whether it's coming from a 1.2 or 1.3 client?
>>> At the
>>> moment the text is misleading since it implies that it's possible to
>>> process
>>> the extension with a 1.2-compliant decoder when in fact all the 1.3 ones
>>> can't
>>> be decoded like that.
>>>
>>> Peter.
>>>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RFC8446 backward compatibility question

2021-08-05 Thread Nick Harper
Yes, backward compatibility is optional.

On Thu, Aug 5, 2021 at 1:44 PM Toerless Eckert  wrote:

> I am trying to figure out if every implementation compliant with
> RFC8446 is also necessarily interoperable with an RFC5246 peer, or if this
> is just a likely common, but still completely optional implementation
> choice.
>
> I could not find any explicit statement that backward compatibility
> with RFC5246 is mandatory (but i just was doing browsing/keyword search
> over RFC8446). Conditional text such as:
>
> "implementations which support both TLS 1.3 and earlier versions SHOULD"
>
> make me think that TLS 1.2 backward compatibility is just optional.
>
> Thanks
> Toerless
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Servers respond with BadRecordMac after ClientFinished, sent when PSK+EarlyData

2022-08-09 Thread Nick Harper
That is not the expected behavior. Likely what is happening is the server
(at the http layer) sees the Connection: close header, and goes to close
the socket for the underlying transport (in this case, the tls stack). The
server’s tls stack, when getting that signal, closes the tls connection,
and since it does that before receiving the erroneous client Finished, it
sends Alert(0) and closes the connection on its end.

On Tue, Aug 9, 2022 at 00:55 Kristijan Sedlak  wrote:

> After some sleep, I went playing with the content of the EarlyData sent to
> the server and it turned out that the "Connection: close" header must be
> present in the HTTP1.1 request. After adding it, the error was gone and the
> connection closed with Alert(0).
>
> Is this the expected behavior and Keep-Alive is not allowed when
> EarlyData is used, or is it just specific to the remote server
> implementation?
> If I understand the spec correctly, the behavior of the EarlyData part is
> mostly up to the implementor, and you must know the rules up front, right?
>
> Best,
> Kristijan
>
> On 9 Aug 2022, at 09:05, Kristijan Sedlak  wrote:
>
> Hey Ilari,
>
> thanks for replying. I did verify the transcript as well. Everything
> seems to be correct. I bet if it wasn't the 1-RTT and 0-RTT(no-early-data)
> would fail too. Something weird is going on only in 0-RTT(early-data) case.
>
> Can you maybe point me to an URL with the correct TLS1.3 implementation
> where I could safely test the client?
>
> Best,
> Kristijan
>
>
> On 9 Aug 2022, at 08:51, Ilari Liusvaara  wrote:
>
> Ilari
>
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Securely disabling ECH

2022-10-18 Thread Nick Harper
On Tue, Oct 18, 2022 at 8:56 PM Safe Browsing 
wrote:

> The draft does consider this by allowing ECH to be disabled - as discussed
> in this thread. Albeit at the cost of an extra round trip. However, the
> connection retry logic in the presence of ECH disabling is currently
> optional.
>
> The draft states, in Section 8.2:
> “ this may trigger the retry logic”
>
> It seems this text must change to:
> “ this MUST trigger the retry logic”
>

This language change would not make sense. The context for "this may
trigger the retry logic" in section 8.2 offers two options. The sentence
structure is "Depending on whether X, this may Y or Z", i.e. if X is
resolved one way, then the client does Y, otherwise it does Z. Changing the
"may" to "MUST" would result in stating "this MUST trigger the retry logic
described in Section 6.1.6 or result in a connection failure", which
doesn't really make sense, and wouldn't have the goal you'd like, since a
connection failure instead of retry logic would satisfy the MUST.

If your server is authoritative for the public name, then the behavior you
care about is described in section 8.1.

I suspect most implementations of ECH will implement the retry logic, as
the misconfigurations and deployment concerns described in section 8.1 are
an inevitability, and implementing the retry logic avoids connection
failures that would occur without it. I doubt that adding a MUST would make
someone more likely to implement the retry logic.

>
> In order to ensure functional connections in a TLS client agnostic manner,
> in the presence of protocol level ECH disabling.
>
> I would appreciate your thoughts/input.
>
> On Oct 8, 2022, at 7:41 PM, Eric Rescorla  wrote:
>
> 
> If you are able to install a new trust anchor, then you should be able to
> use the enterprise configuration mechanisms in browsers to disable ECH.
>
> -Ekr
>
>
> On Fri, Oct 7, 2022 at 8:40 PM Safe Browsing 
> wrote:
>
>> Hi Rich,
>>
>> When I say “authoritative”, I mean it in the true TLS sense, in the way
>> that I believe the ECH draft also suggests and requires.
>>
>> In other words, the middlebox serves a cert to the client that is
>> cryptographically valid for the said public name of the client facing
>> server.
>>
>> How can that be when the client facing server guards its private key
>> properly? By re-signing the server certificate on the middlebox with a
>> private key, local to the middle box only, for which the corresponding
>> certificate has been installed in the trust store of the client, before
>> sending it on to the client. Only after the original server certificate
>> has been validated properly on the middlebox, of course. Message digests
>> being managed accordingly/appropriately.
>>
>> That is a very typical setup for most (all?) TLS inspection devices (next
>> gen firewalls and such).
>>
>> Thus this part of ECH, requiring the middlebox to be authoritative for
>> the server, is well understood and prolifically exercised in inspected TLS
>> sessions today. What is new is that these connections can now fail/close,
>> in the “securely disabling ECH” case, and the onus is on the TLS client,
>> not the application, to retry the connection without ECH.
>>
>> I am after such a client, if one exists already.
>>
>> Thank you.
>>
>> Sent from my phone
>>
>> On Oct 7, 2022, at 11:41 AM, Salz, Rich  wrote:
>>
>> 
>>
>>
>>
>>- Client <-> *Middlebox* <-> Client-facing server <-> Backend server
>>
>>
>>
>>- With "Middlebox" really meaning a middlebox like a firewall or
>>similar.
>>
>>
>>
>> The middlebox is not allowed to modify traffic between the client and the
>> server. Doing so would mean that the packets the client sent are not the
>> ones that the server received, and the two message digests would disagree.
>> (If you think about things, it **has** to be this way, otherwise TLS
>> would not be able to provide integrity guarantees.)
>>
>>
>>
>>- From the draft, ECH seems to be designed to still allow successful
>>TLS connection establishment if the encrypted_client_hello extension is
>>dropped from the ClientHello on a conforming middlebox. Provided that the
>>middlebox is authoritative for the client-facing server's public name, as
>>reported/delivered by DNS to the client. We can assume that this is the
>>case here.
>>
>>
>>
>> I do not understand what you mean by this.  The word “authoritative” is
>> used to mean that it has a certificate and keypair and can do TLS
>> termination. DNS giving the client a particular IP address is not
>> authoritative. It can be confusing because DNS terminology uses
>> authoritative to mean that a particular entity can prepare data used for
>> DNS responses.  But it is not authoritative in the TLS sense.
>>
>>
>>
>> I think your questions can be answered with those two overall corrections
>> above.  If not, please continue the thread.  (And feel free to repost from
>> your note since I trimmed for brevity.)
>>
>>
>>

Re: [TLS] Securely disabling ECH

2022-10-19 Thread Nick Harper
f.org/archive/id/draft-ietf-tls-esni-15.html#RFC8446>].
> Provided the server can present a certificate valid for the public name,
> the client can safely retry with updated settings, as described in Section
> 6.1.6
> <https://www.ietf.org/archive/id/draft-ietf-tls-esni-15.html#rejected-ech>
> ."
>
> So it simply refers back to Section 6.1.6 again, which then gets back to
> my earlier statement about Section 6.1.6 not being as clear as it can be
> about this situation, i.e. the situation where a retry is with ECH disabled,
> due to the lack of an ECH extension from the server, as opposed to the
> "retry_config" retry method with ECH still enabled - they are of course
> very different retry mechanisms.
>
>
> On Wed, Oct 19, 2022 at 1:44 AM Nick Harper  wrote:
>
>>
>>
>> On Tue, Oct 18, 2022 at 8:56 PM Safe Browsing 
>> wrote:
>>
>>> The draft does consider this by allowing ECH to be disabled - as
>>> discussed in this thread. Albeit at the cost of an extra round trip.
>>> However, the connection retry logic in the presence of ECH disabling is
>>> currently optional.
>>>
>>> The draft states, in Section 8.2:
>>> “this may trigger the retry logic”
>>>
>>> It seems this text must change to:
>>> “this MUST trigger the retry logic”
>>>
>>
>> This language change would not make sense. The context for "this may
>> trigger the retry logic" in section 8.2 offers two options. The sentence
>> structure is "Depending on whether X, this may Y or Z", i.e. if X is
>> resolved one way, then the client does Y, otherwise it does Z. Changing the
>> "may" to "MUST" would result in stating "this MUST trigger the retry logic
>> described in Section 6.1.6 or result in a connection failure", which
>> doesn't really make sense, and wouldn't have the goal you'd like, since a
>> connection failure instead of retry logic would satisfy the MUST.
>>
>> If your server is authoritative for the public name, then the behavior
>> you care about is described in section 8.1.
>>
>> I suspect most implementations of ECH will implement the retry logic, as
>> the misconfigurations and deployment concerns described in section 8.1 are
>> an inevitability, and implementing the retry logic avoids connection
>> failures that would occur without it. I doubt that adding a MUST would make
>> someone more likely to implement the retry logic.
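For illustration, the retry decision in Section 6.1.6 reduces to roughly the following (a simplified model, not a real TLS stack's API; the function name and return values are invented for this sketch):

```python
def handle_ech_rejection(server_authoritative_for_public_name, retry_configs):
    """Toy model of a client's choices after the server rejects ECH,
    per draft-ietf-tls-esni Section 6.1.6 (simplified):
    - If the server cannot present a chain valid for the ECHConfig public
      name, the client must abort; nothing can be retried safely.
    - If the server supplied retry_configs, the client retries with them,
      keeping ECH enabled.
    - Otherwise the server has "securely disabled" ECH, and the client
      retries the connection without ECH."""
    if not server_authoritative_for_public_name:
        return "abort"
    if retry_configs:
        return "retry_with_retry_configs"
    return "retry_without_ech"
```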
>>
>>>
>>> In order to ensure functional connections in a TLS client agnostic
>>> manner, in the presence of protocol level ECH disabling.
>>>
>>> I would appreciate your thoughts/input.
>>>
>>> On Oct 8, 2022, at 7:41 PM, Eric Rescorla  wrote:
>>>
>>> 
>>> If you are able to install a new trust anchor, then you should be able
>>> to use the enterprise configuration mechanisms in browsers to disable ECH.
>>>
>>> -Ekr
>>>
>>>
>>> On Fri, Oct 7, 2022 at 8:40 PM Safe Browsing 
>>> wrote:
>>>
>>>> Hi Rich,
>>>>
>>>> When I say “authoritative”, I mean it in the true TLS sense, in the way
>>>> that I believe the ECH draft also suggests and requires.
>>>>
>>>> In other words, the middlebox serves a cert to the client that is
>>>> cryptographically valid for the said public name of the client facing
>>>> server.
>>>>
>>>> How can that be when the client facing server guards its private key
>>>> properly? By re-signing the server certificate on the middlebox with a
>>>> private key, local to the middle box only, for which the corresponding
>>>> certificate has been installed in the trust store of the client, before
>>>> sending it on to the client. Only after the original server
>>>> certificate has been validated properly on the middlebox, of course. 
>>>> Message
>>>> digests being managed accordingly/appropriately.
>>>>
>>>> That is a very typical setup for most (all?) TLS inspection devices
>>>> (next gen firewalls and such).
>>>>
>>>> Thus this part of ECH, requiring the middlebox to be authoritative for
>>>> the server, is well understood and prolifically exercised in inspected TLS
>>>> sessions today. What is new is that these connections can now fail/close,
>>>> in the “securely disabling ECH” case, and the onus is on the TLS client,
>>>> not the application, to retry the connectio

Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?

2023-03-06 Thread Nick Harper
On Mon, Mar 6, 2023 at 9:30 PM Viktor Dukhovni 
wrote:

> On 6 Mar 2023, at 8:13 pm, Peter Gutmann 
> wrote:
>
> > Not really sure how to fix this, although at the moment "stay with TLS
> > classic" seems to be the preferred option.
>
> There are three stages of fixes:
>
> 1. Update the protocol specification.
> 2. Fix the implementations.
> 3. Keep using TLS 1.2 until the fixed implementations are broadly adopted.
>
> Keeping in mind that LTS enterprise editions of Linux are lately supported
> for ~13 years, step 3 may take a while.  Which is not to say that we should
> not start doing 1. and 2., but it is like planting an olive tree, the fruit
> will be enjoyed by future generations.
>
> The protocol specification needs to say something along the lines of:
>
>- Implementations MUST support both psk_ke and psk_dhe_ke.
>
>- Server operators SHOULD leave both modes enabled.
>
>- In closed environments, or specific applications where *all*
>  clients are expected to and required to support psk_dhe_ke,
>  the requirement to enable psk_ke is relaxed to MAY.
>
>- Conversely, where no clients are expected to support psk_dhe_ke,
>  the requirement to leave it enabled changes to MAY.
>
>- psk_dhe_ke is negotiated when supported by both sides, otherwise
>  psk_ke is negotiated.
>
>- Clients SHOULD generally offer both modes in the client HELLO.
>
>- Clients MAY offer just one or the other when appropriate for the
>  application in question, and can expect to interoperate with a
>  "general purpose" server.
>
> Basically, one way or another, PSK key exchange mode negotiation needs
> to be interoperable by default.


Based on your first message, it sounds like you have identified an
implementation where it is not interoperable. All of the spec language in
the world won’t fix implementation bugs.

PSK only and PSK-DHE modes offer two fundamentally different security
properties. PSK only key exchange lacks forward secrecy. For general
purpose libraries, defaulting to PSK-DHE with forward secrecy is the right
choice for security. (As you point out, a server that only supports DHE
PSKs shouldn’t send new session tickets to a client that doesn’t support
psk_dhe_ke, but that doesn’t contradict psk_dhe_ke-only being a sane
default.)

RFC 8446 provides the right building blocks, and delegates the choice of
which to use to an application profile standard. There’s no need for any
additional language in the TLS 1.3 spec to encourage or mandate the use of
non-FS PSK modes. For applications where forward secrecy isn’t needed or
the computation costs outweigh the security benefits, the application
profiles for those use cases can encourage or mandate psk_ke.
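To make the building-block point concrete, the mode selection RFC 8446 enables can be sketched in a few lines (an illustrative model, not any TLS library's actual API; the security-first preference order shown is this sketch's assumption, matching a psk_dhe_ke-favoring default):

```python
PSK_KE, PSK_DHE_KE = "psk_ke", "psk_dhe_ke"

def negotiate_psk_mode(client_modes, server_modes):
    """Pick a resumption mode, RFC 8446 4.2.9 style (simplified): the
    server may only use a mode the client offered; this model prefers
    the forward-secret psk_dhe_ke when mutually supported."""
    for mode in (PSK_DHE_KE, PSK_KE):
        if mode in client_modes and mode in server_modes:
            return mode
    return None  # no common mode: fall back to a full handshake
```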

>
>
> As for implementations, the code changes are not that difficult, but will
> take time to release, and then there's step 3...
>
> Meanwhile, when there's no other choice, keep using TLS 1.2.
>
> --
> Viktor.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?

2023-03-07 Thread Nick Harper
On Tue, Mar 7, 2023 at 6:50 AM Viktor Dukhovni 
wrote:

> On Mon, Mar 06, 2023 at 11:18:50PM -0800, Nick Harper wrote:
>
> > > Basically, one way or another, PSK key exchange mode negotiation needs
> > > to be interoperable by default.
> >
> > Based on your first message, it sounds like you have identified an
> > implementation where it is not interoperable. All of the spec language in
> > the world won’t fix implementation bugs.
>
> It isn't the only one.
>

Yet other implementations get this correct. If one's goal is to increase
adoption of psk_ke, it would seem to be a better use of time to work with
those implementers to fix those bugs instead of arguing that the spec
should be changed to work around those bugs (which would decrease security
for other parts of the ecosystem).

>
> > PSK only key exchange lacks forward secrecy.
>
> That's not quite accurate.


https://www.rfc-editor.org/rfc/rfc8446#section-2.2 disagrees.

>
> > For general purpose libraries, defaulting to PSK-DHE with forward
> > secrecy is the right choice for security.
>
> Defaulting, sure, provided both sides offer it.  But defaulting
> (interoperable) is different from exclusively offering only psk_dhe_ke
> (not interoperable with clients offering only "psk_ke").


> > (As you point out, a server that only supports DHE PSKs shouldn’t send
> > new session tickets to a client that doesn’t support psk_dhe_ke, but
> > that doesn’t contradict psk_dhe_ke only as being a sane default.)
>
> Yes, but the more serious issue is that resumption is impossible,
> sending an unusable ticket is only a minor nuisance.
>
> > RFC 8446 provides the right building blocks, and delegates the choice of
> > which to use to an application profile standard. There’s no need for any
> > additional language in the TLS 1.3 spec to encourage or mandate the use
> of
> > non-FS PSK modes.
>
> I disagree, protocols need to be interoperable by default.  Features
> that fragment clients and servers into non-interoperable islands of
> compatibility are poorly designed.  This is why we have MTI code points,
> and mechanisms to negotiate more preferred options.  We need both PSK
> modes to be MTI and enabled by default, with the stronger chosen when
> mutually supported and enabled.
>

It is interoperable by default, if the implementations follow the spec. If
implementations don't follow the spec, no amount of spec language will fix
their behavior.

Having both PSK modes MTI is a bad idea. Resumption is an optimization:
psk_dhe_ke removes an asymmetric cryptographic operation to verify the
certificate chain, while retaining forward secrecy. psk_ke further
optimizes the handshake by removing all asymmetric crypto, at the cost of
forward secrecy. An implementation could decide that it wants the
certificate verified on every connection (and support no resumption), or
could decide that it only wants to support connections with forward secrecy
(and not support psk_ke). Making any PSK modes MTI reduces security.

>
> > For applications where forward secrecy isn’t needed or the computation
> > costs outweigh the security benefits, the application profiles for
> > those use cases can encourage or mandate psk_ke.
>
> This is not well understood or likely to happen.  For example, ADoT in
> DNS (from iterative resolver to authoritative server, not user to
> resolver) is a good candidate for psk_ke, but it is not possible to to
> enable it, so as a non-trivial fraction of servers are psk_dhe_ke-only,
> negotiating "psk_ke" based on the client's preferred mode does not work.
> This is a protocol issue first, and implementation issue second.
>

This isn't a protocol issue. This is an implementation issue (buggy
implementations sending psk_dhe_ke NSTs in response to psk_ke ClientHellos)
and an ecosystem issue.


Re: [TLS] TLS 1.3 servers and psk_key_exchange_modes == [psk_ke]?

2023-03-07 Thread Nick Harper
On Tue, Mar 7, 2023 at 6:51 PM Viktor Dukhovni 
wrote:

> What specific changes would you recommend in say the OpenSSL
> implementation?  Just not sending the useless tickets?  Fine, we've
> saved a bit of bandwidth, but haven't really solved the problem.
>

I don't know the details of the OpenSSL implementation or the behavior of a
psk_ke only client attempting that resumption with an OpenSSL server. Not
sending the useless tickets sounds like the right fix. It also sounds like
we've solved the problem: In a case where a client that only supports
psk_ke for resumption is talking to a server that only supports
psk_dhe_ke for resumption, useless information is no longer being sent over
the wire and the client will always attempt a full handshake.
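The fix described here can be reduced to a one-line predicate (an illustrative model, not OpenSSL's actual internals; the function name is invented):

```python
def should_send_session_tickets(client_modes, server_modes):
    """A server should only issue NewSessionTicket messages when at least
    one psk_key_exchange_mode is mutually supported; otherwise any ticket
    it sends can never be redeemed, and the client will do a full
    handshake anyway."""
    return bool(set(client_modes) & set(server_modes))
```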

>
> We have somewhat different interoperability expectations, because I
> expect resumption to work under typical conditions, which would include
> clients sending just "psk_ke".  Unless the server has a good reason to
> expect all clients to always request "psk_dhe_ke", it should support
> "psk_ke", leaving the client the option to negotiate "psk_dhe_ke" or use
> "psk_ke" if preferred.
>

I expect resumption to work when both endpoints support the feature. TLS
has multiple options (thankfully TLS 1.3 has many fewer than TLS 1.2), and
only the full handshake is required. You're correct that the client should
have the option to negotiate psk_dhe_ke or psk_ke (or none at all) as
desired, and in the same vein the server has the option to negotiate
psk_dhe_ke or psk_ke (or decline resumption) as desired.

>
> Resumption is an important optimisation, it can make the difference
> between a scalable service and a degraded or unusable service.
>

I don't disagree. In application profiles where the optimization makes a
critical difference, the resumption modes that make that difference should
be specified as MTI. The key here is that they should only be MTI in those
application profiles.

>
> There's no downgrade attack, if both sides want psk_dhe_ke, they get it.
> If some application or private deployment never needs psk_ke resumption,
> fine.  But in the absence of specific knowledge a generic client using
> TLS to reach some random server should be able to perform resumption
> without the sort of friction that turning up forward-secrecy to 11
> introduces.  The client *should* still offer psk_dhe_ke where it makes
> sense, and servers *should* then use DHE resumption, but if the client
> has good reason to choose "psk_ke" it should not be punished for its
> choice to optimise for performance.
>

Why should a client be allowed to force a server to accept psk_ke over
performing a full handshake? A server should be free to choose to always
perform forward secret handshakes. When a client that prefers psk_ke talks
to said server, it isn't being punished. The client is offered psk_dhe_ke,
and if it doesn't like that, it can always do a full handshake. Each peer
is entitled to their preferences, and the TLS handshake negotiates the best
option for both endpoints.

>
> > This isn't a protocol issue. This is an implementation issue (buggy
> > implementations sending psk_dhe_ke NSTs in response to psk_ke
> ClientHellos)
> > and an ecosystem issue.
>
> But sending the unusable tickets isn't the problem, the problem is that
> non-DHE resumption is unavailable by default.  Is that an implementation
> bug or not?


That's not an implementation bug, that's an ecosystem issue. As I said
earlier, all of the pieces are there in RFC 8446. People just have to
choose to use it.

TLS is used in many different ways by different application protocols.
Stating that a certain feature of TLS should be available in all TLS
implementations is a high bar to pass — it must be near universally useful.
Instead, each ecosystem that uses TLS can and should decide for itself
which features are useful. Actors in those ecosystems can choose their
implementations (or choose to create their own) based on the availability
of TLS features.


Re: [TLS] IANA Considerations for draft-ietf-tls-dtls-connection-id

2019-06-26 Thread Nick Harper
I have a slight preference for 3.

On Wed, Jun 26, 2019 at 10:35 AM Salz, Rich  wrote:

> Something should be done, I don't have a strong preference for 2 or 3.
> Having this info back then might have prevented Heartbleed.
>
>


Re: [TLS] consensus call: (not precluding ticket request evolution)

2020-03-04 Thread Nick Harper
On Wed, Mar 4, 2020 at 5:27 PM Viktor Dukhovni 
wrote:

> On Wed, Mar 04, 2020 at 05:19:02PM -0800, Nick Harper wrote:
>
> > > Breaking interoperability.
> >
> > This doesn't break interoperability. If both endpoints negotiate
> > ticketrequests and this new extension, the new definition applies. If one
> > endpoint negotiates only this ticketrequests extension, then the
> definition
> > here applies. That doesn't break interoperability.
>
> The whole point of this discussion is that I looking to avoid the need
> to define two overlapping extensions solving the same problem.  The
> current extension should and will suffice.
>

By current extension, do you mean what is currently
in draft-ietf-tls-ticketrequests-04, which provides no mechanism for
indicating anything about ticket reuse? If so, I'm happy with that
resolution.

>
> We might never "bless" a way to negotiate reuse, fine, but there is
> definitely no need to go out of one's way to forestall that possibility.
>

MT's approach of putting two values in the extension and saying nothing
about reuse (or reiterating the advice in RFC 8446) solves your problem of
enabling reuse without blessing that use case.

>
> That's frankly simply hostile, and may evidence a cultural issue in
> this WG.
>
> Barring a defensible technical reason to preclude future evolution in a
> compatible manner to support a use-case that has non-negligible if not
> yet majority support, precluding it anyway can only be read as a hostile
> exclusionary tactic.  I object.
>

We make many non-technical decisions. One such decision is what work we
choose to do. An explicit focus of the charter of the TLS working group is
to make the protocol more privacy-friendly and reduce the amount of data
visible to attackers. Reusing tickets goes against those goals.

>
> --
> Viktor.
>
>


Re: [TLS] Earlier exporters

2016-10-07 Thread Nick Harper
That's my assumption as well.

On Fri, Oct 7, 2016 at 2:07 PM, Eric Rescorla  wrote:

> I was assuming that there were two exporters:
>
> Export() --> the same API as in 1.2 and computed as described here
> Export0RTT -> A new API that computes the early_exporter.
>
>
> -Ekr
>
> On Fri, Oct 7, 2016 at 1:59 PM, Nick Harper  wrote:
>
>> Does the wording of this PR mean that the value from the exporter changes
>> depending on whether it's run before or after exporter_secret can be
>> computed? I think it would be better to keep an RFC 5705-style exporter
>> that remains constant for the connection. The 0-RTT exporter from an API
>> perspective can be a separate thing that a caller has to explicitly choose
>> to use.
>>
>> On Fri, Oct 7, 2016 at 8:10 AM, Eric Rescorla  wrote:
>>
>>> Please see the following PR:
>>>   https://github.com/tlswg/tls13-spec/pull/673
>>>
>>> This includes various changes to make exporters/resumption work better.
>>>
>>> Basically:
>>> 1. Add a 0-RTT exporter and change the transcript for the regular
>>> exporter so it
>>> only includes the transcript up to ServerFinished. This gives it
>>> parity with the
>>> rest of the traffic keys. If we need an exporter with the full
>>> transcript we can
>>> always add it later
>>>
>>> 2. Point out that you can predict ClientFinished for NST when not doing
>>> Client auth. This lets you issue tickets on the server's first
>>> flight, while still
>>> ensuring that if you do client auth you still bind resumption to the
>>> client's
>>> full transcript.
>>>
>>> These are pretty straightforward changes, so absent objections I'll merge
>>> them early next week.
>>>
>>> -Ekr
>>>
>>>
>>>
>>>
>>
>


Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-17 Thread Nick Harper
I prefer TLS 1.3 but am also fine with TLS 4.

On Fri, Nov 18, 2016 at 11:12 AM, Sean Turner  wrote:

> At IETF 97, the chairs lead a discussion to resolve whether the WG should
> rebrand TLS1.3 to something else.  Slides can be found @
> https://www.ietf.org/proceedings/97/slides/slides-
> 97-tls-rebranding-aka-pr612-01.pdf.
>
> The consensus in the room was to leave it as is, i.e., TLS1.3, and to not
> rebrand it to TLS 2.0, TLS 2, or TLS 4.  We need to confirm this decision
> on the list so please let the list know your top choice between:
>
> - Leave it TLS 1.3
> - Rebrand TLS 2.0
> - Rebrand TLS 2
> - Rebrand TLS 4
>
> by 2 December 2016.
>
> Thanks,
> J&S
>


Re: [TLS] PR#812: End Of Early Data as handshake

2016-12-12 Thread Nick Harper
On Mon, Dec 12, 2016 at 5:32 PM, Eric Rescorla  wrote:

>
>
> On Mon, Dec 12, 2016 at 5:23 PM, Martin Thomson 
> wrote:
>
>> On 13 December 2016 at 12:09, Eric Rescorla  wrote:
>> > David Benjamin pointed out to me that end_of_early_data is the only
>> place
>> > where we transition keys on an alert and this would be cleaner if it
>> was a
>> > handshake message. This PR does that. It's encrypted under the same
>> > keys, so this is largely an aesthetic issue, but I think a good one.
>>
>> The major change in this PR isn't that obvious.  And that is this:
>>
>>if the server has accepted early data, an EndOfEarlyData
>>message will be sent to indicate [a] key change.
>>
>> This makes the end of early data signal conditional: a client that
>> learns that its 0-RTT data has been rejected MUST NOT send the
>> EndOfEarlyData message.
>>
>> The reason for this is that the EndOfEarlyData is a handshake message
>> and therefore part of the handshake transcript.  Presumably it appears
>> after Finished (I'm going to send a PR that expands the ellipsis in
>> the draft at least once, so that people can see the full canonical
>> order).
>>
>> FWIW, I think that this is the right change, but some implementations
>> will need to change in some non-obvious ways.
>>
>
> Thanks for pointing this out. I agree with this assessment. We could of
> course
> exclude it from the transcript but that seems silly.
>

Right now, I believe it's legal for a client to send ClientHello, early
data, and end_of_early_data alert without reading any messages from the
server. This change would require a client to wait for the ServerHello
before sending (or not) EndOfEarlyData, but that seems quite reasonable.
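Under this change the client's second flight is computed only after reading the server's messages; a minimal sketch (names invented for illustration; details such as empty Certificate messages are omitted):

```python
def client_second_flight(early_data_accepted, client_auth_requested):
    """Model of the client's post-ServerHello flight once EndOfEarlyData
    is a handshake message (PR#812): the client can no longer
    fire-and-forget an end_of_early_data alert, because it must first
    learn whether the server accepted its 0-RTT data."""
    flight = []
    if early_data_accepted:
        flight.append("EndOfEarlyData")  # sent only when 0-RTT was accepted
    if client_auth_requested:
        flight += ["Certificate", "CertificateVerify"]
    flight.append("Finished")
    return flight
```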

>
>
> I'm not aware of anyone who is rejecting 0-RTT and then decrypting the
>> data so they can avoid a bunch of failed decryptions, but that
>> approach won't work any more.
>>
>
> Actually you could, just discard EOED in that mode.
>
> -Ekr
>
>
>
>
>
>
>


[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-21 Thread Nick Harper
On Mon, May 20, 2024 at 7:26 AM Dennis Jackson  wrote:

> Compared to the alternatives, Trust Expressions seems to solve the
> problems less effectively and introduce much greater risks. If you really
> feel the opposite is true, I would strongly encourage you to:
> b) Make a good faith attempt to engage with the concerns raised about the
> risks. Think through how a party with different goals to your own could
> exploit the negotiation offered by Trust Expressions for their own
> purposes. If your goal was instead to enable surveillance and domestic
> control of the web for your government, how would widespread support for
> Trust Expressions help you?
>

If an attacker's goal is to surveil the web (i.e. get the plaintext of
selected HTTPS connections), they can MitM the connection, log the traffic
keys for the connection, or have one of the parties (client or server) with
access to the plaintext send them the plaintext. Only the first option, a
machine-in-the-middle attack, involves the Web PKI - the rest can be done
regardless of the certificate chain served by the server or verified by the
client.

In a world without Trust Expressions, the surveillor performing a MitM
attack needs to get the client to trust the certificate chain presented by
the MitM. It could do this by asking the user to add its root to their
browser's trust store (e.g. like Kazakhstan does), or by coercing the
browser vendor to add the surveillance root to the browser's trust store.
In this world, it doesn't matter what certification authority the server
being MitMed is using.

In a world with Trust Expressions, the above scenario is still possible and
it is unchanged by the presence of Trust Expressions: The client still has
added the surveillance root to its trust store, the MitM is still
generating or using a cert issued by the surveillance root, and the server
being MitMed can serve whichever of its certificate chains to its end of
the MitM. (Presumably the MitM either doesn't advertise Trust Expressions
in the ClientHello it sends to the server, or it advertises the same one
sent by the client so as to keep a low profile of being detected by the
server.)

Trust Expressions appears to open up a new attack vector that involves
coercing clients to trust a new root, and also coercing servers to support
using a cert issued by that root (when connected to by a client that trusts
that root, and using another cert chain on other connections). This
scenario requires the attacker to coerce a superset of the parties involved
in the previous scenario, and includes an additional step that is
unnecessary. Coercing the server to use a different cert chain (only when
the client trusts it) does nothing to facilitate the attacker from getting
the plaintext of a connection from that server. The attacker still needs to
MitM that connection with an attacker controlled leaf certificate because
it doesn't know the private key of the certificate used by the server.

The technical attack of surveilling a connection looks the same regardless
of whether Trust Expressions exists: The surveillor coerces clients into
trusting its surveillance root, and creates certs issued from that root to
MitM connections. Trust Expressions might make it more palatable for this
attacker to coerce servers to use a cert issued by their surveillance root
(when trusted by clients, and another cert for other clients), but doing so
is unrelated to connection surveillance and is separable.

A government using Trust Expressions for domestic control of the Web is a
very broad and vague topic. The only two examples of end goals for domestic
control of the Web that I can think of right now are for surveillance and
censorship. Surveillance was discussed above and Trust Expressions provides
no additional benefit for surveillance over the status quo. For censorship
(that does not involve viewing the plaintext of connections - that requires
surveillance), there's a broader space to explore. Today, countries can
censor sites by DNS hijacking, IP address blocking, and SNI inspection, to
name a few technologies. Those options remain unaffected by Trust
Expressions. A government might try to convince/coerce websites that
operate within its jurisdiction to use a certificate chain issued by a
government CA when clients within the government jurisdiction connect.
Then, somehow, the government technically enforces this requirement, and if
a site serves objectionable content, the government revokes the site's cert
(or opts not to renew it). I'll assume that this technical enforcement is
only for domestic connections, where the government has hypothetical
control over both endpoints. Enforcing that all ClientHellos have a trust
expression for a root store that includes the government CA is insufficient
(and implausibly impractical, e.g. people visiting from other countries).
The enforcement can't look at the cert used per-connection and block if the
cert isn't issued by the government CA, as that's

[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-21 Thread Nick Harper
t store advertisements, the government can still
compel/coerce/encourage site operators to use a cert issued by G, by using
the mechanism that Watson described and is repeated above that requires no
changes to how servers get certificates from ACME, or by issuing a
completely different chain and relying on G's ubiquity in browsers.

My takeaway to the direct question about Trust Expressions increasing the
probability of ending up in this unhappy world is that it doesn't increase
that probability.

In addition to asking that question about probability, message [4] also
states:

> The real problem here is that you've
> (accidentally?) built a system that makes it much easier to adopt and
> deploy any new CA regardless of trust, rather than a system that makes
> it easier to deploy & adopt any new *trusted* CA.

I disagree that it makes it easier to adopt and deploy a new CA *regardless
of trust*. Trust Expressions only makes it easier to deploy a CA that is in
a trust store advertised by clients. If a CA is in a client's trust store,
that to me sounds like a *trusted CA*.

> On 21/05/2024 19:05, Nick Harper wrote:
>
> I'd be interested to hear details on what those are.
>
> Messages [1,2,3,4] of this thread lay out these details at length.
>

You asked to think through how widespread support for Trust Expressions
would help a government enable surveillance and domestic control of the
web. When thinking through this, I also considered what has previously been
discussed in the thread. Based on what I saw in the thread and the summary
of potential attacks, I couldn't find how Trust Expressions would help a
government achieve those goals - using Trust Expressions as a means to that
end resulted in an equivalent or worse version of what can already be done
with existing techniques.

I took another look at the messages you cited to see if I missed anything
in them about how Trust Expressions would help a government enable
surveillance and domestic control of the web, and found nothing. Here's a
review of what's in those messages and what might be relevant:

Message [1] states the following actions a government might take:

>   * One or more countries start either withholding certificates for
>     undesirable sites, blocking connections which don't use their trust
>     store, requiring key escrow to enable interception, etc etc.

Withholding certs and blocking connections are both addressed in my
previous email. Trust Expressions provides no benefits for censorship
compared to the status quo. Requiring key escrow is again something that
could be done today without Trust Expressions.

Messages [2], [3], and [4] talk about the use of Trust Expressions for a
government CA to be deployed on clients and servers.

Message [2] discusses the technical measures that a government might use to
roll out a government-issued CA, but it says nothing about why the
government is rolling out such a CA or what attacks it would carry out with
it. Message [3] discusses many topics: CA key rotation and CA distrust,
ACME deployment considerations and server configuration (relationships with
a single CA vs multiple), and whether the deployment of Trust Expressions
with ACME could result in CAs providing servers an additional chain that's
cross-signed by the government PKI. Message [4] talks about how Trust
Expressions is a mechanism that makes it easier to adopt and deploy any new
CA.

In my opinion, the issue of Trust Expressions enabling government
surveillance has been discussed in great detail and the conclusion is that
it is not an effective or useful tool for doing that. Instead, we should
focus discussion on the problems that Trust Expressions solves.


> Besides these concerns which are unaddressed so far, much of the recent
> discussion has focused on establishing what problem(s) Trust Expressions
> actually solves and how effective a solution it actually is.
>
> Looking forward to your thoughts on either or both aspects.
>

The general problem I see Trust Expressions solving is how a server can
choose which certificate chain to present to a client with a high degree of
confidence that the client will trust that certificate chain (assuming the
server has such a chain available to it). This general problem has multiple
concrete instantiations. The primary motivator is the transition to
post-quantum cryptography in the Web PKI. Another is the existing problem with
temporal trust store divergence. This problem currently manifests in
servers and CA operators trying to provide a certificate chain that works
for both old and modern devices. In the future as CA operators rotate root
keys more frequently, the new root should be usable once it has met all the
requirements for inclusion in the root store, instead of waiting for a
significant portion of its lifetime to become ubiquitous.

As the Web PKI transitions to PQC, there

[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-24 Thread Nick Harper
On Thu, May 23, 2024 at 4:14 AM Dennis Jackson  wrote:

> Hi Nick,
>
> I think the issues around risk have a great deal of nuance that you're not
> appreciating, but which I've tried to lay out below. I appreciate that
> rational reasonable people can absolutely disagree on how they weigh up
> these risks, but I think we have to agree that Trust Expressions enables
> websites to adopt new CA chains regardless of client trust and even builds
> a centralized mechanism for doing so. It is a core feature of the design.
>
> On the issues around benefit, you've repeated the high level talking
> points around PQ, adopting new CAs and supporting older devices, but these
> talking points don't pan out on closer inspection.
>
> The central problem is that whilst Trust Expressions introduces a
> negotiation mechanism at the TLS layer, it is only moving the pain up the
> stack, without actually solving anything. To actually solve these problems,
> Trust Expressions envisages that either website operators are doing a
> complex dance with multiple CAs to enable the universal compatibility they
> already enjoy today or that new CAs are forming business relationships with
> incumbent CAs, identically as they would for a cross sign. In both cases,
> we're adding a lot more complexity, fragmentation and pain, for no actual
> return.
>
> A detailed response is inline below.
>
> Best,
> Dennis
>

Hi Dennis,

Since there's been a lot of discussion on this thread, I'm going to reply
here to your comments relating to the problem Trust Expressions solves and
how it compares to other potential solutions. The discussion of the risks
is still an important topic, but to make it easier to focus on that
discussion my replies to that will be in another message forking this
thread.

The main problem that Trust Expressions solves is giving servers a reliable
way to pick which one of their multiple certificate chains to use on a
connection with a given client. The transition to a PQC PKI is a motivating
use case where servers will have multiple certificate chains and need to
select one vs the other, though other use cases for a multi-certificate
model also exist and have also been discussed. This is issue #2 in Andrei's
email [1].

I agree with Andrei, you, and others that existing widely supported
mechanisms (signature_algorithms and signature_algorithms_cert TLS
extensions) solve issue #1 of selecting a chain compatible with the
client's signature suite support, e.g. whether or not the client supports
PQC PKI. Phrased differently, this existing mechanism is the negotiation
mechanism needed by PQC PKI experiments. The only gap in using
signature_algorithms_cert for negotiating the use of a PQC PKI experiment
is issue #2 - identifying whether the client trusts a particular
certificate chain.

You suggest that instead of the client providing an indication of what
roots it trusts, the server can (if the client supports the PQC algorithm)
send a PQC certificate chain that is cross-signed by a ubiquitous classical
CA. In this case, the client and server pay the cost of the PQC cert chain
on the wire but only get the security of classical cryptography. Using a
cross-sign also doesn't give the server a reliable signal about whether the
client will trust the chain; instead it maintains the status quo where
server operators hope that the cross-sign is ubiquitous enough for the
client to trust it. This also assumes that the PQC PKI uses X.509 to be
able to do the cross-sign, which puts an unnecessary design constraint on
PQC PKI.

Trust Expressions solves the problem of a server identifying which (if any)
of its multiple certificate chains will be trusted by the client that is
currently connecting to it. Temporal trust store divergence is one example
of this problem. Despite your claims otherwise, the authors have explained
[2] how Trust Expressions solves this problem:

> It isn’t necessary for older device manufacturers to adopt Trust
> Expressions. Rather, Trust Expressions would be adopted by modern clients,
> allowing them to improve user security without being held back by older
> clients that don’t update. Servers may still need to navigate
> intersections
> and fingerprinting for older clients, but this will be unconstrained by
> modern ones. It will also become simpler, with fingerprinting less
> prevalent, as older clients fall out of the ecosystem.

The key point here is that Trust Expressions keeps this problem from
getting more complicated. Cross-signing only works if it's possible to
construct a set of paths with the right cross-signs trusted by both the old
devices and modern clients. I previously cited [3] as evidence of this,
with a Let's Encrypt cross-sign having expired with no replacement. If
cross-signs were a viable solution, we'd see more of it happening.
Cross-signs are allowed by root programs. You seem to think that root
programs encouraging cross signing would make them more of a valuable
solution, but I can't infer from yo

[TLS]TLS Trust Expressions risks

2024-05-24 Thread Nick Harper
On Fri, May 24, 2024 at 10:14 AM Dennis Jackson  wrote:

> Hi David,
>
> The certification chains issued to the server by the CA comes tagged with
> a list of trust stores its included in. The named trust stores are
> completely opaque to the server. These chains and names may not be trusted
> by any client nor approved by any server, they are issued solely by the CA
> as opaque labels. These chains sit on the server and will not be used
> unless a client connects with the right trust store label but obviously can
> easily be scanned for by anyone looking to check how pervasively deployed
> the alternate trust store is.
>
> Do you dispute any part of that? Most of what you wrote went off on a
> completely different tangent.
>
> Of course, whether this property (whether servers can usefully pre-deploy
> not-yet-added trust anchors), which trust expressions does not have, even
> matters boils to whether a root program would misinterpret availability in
> servers as a sign of CA trustworthiness, when those two are clearly
> unrelated to each other.
>
> Again, my primary concern here is not around the behavior of individual
> root stores, this is not relevant to the concern I'm trying to communicate
> to you. I know folks from all of the major root stores have great faith in
> their judgement and technical acumen.
>
> My concern is that Trust Expressions upsets a fragile power balance which
> exists outside of the individual root stores. There is an eternal war
> between governments pushing to take control of root stores and the security
> community pushing back. This battle happens in parliaments and governments,
> between lawmakers and officials, not within root stores and their
> individual judgement largely does not matter to this war. The major
> advantages we as the security community have today are that:
>
> a) These attempts to take control for surveillance are nakedly obvious
> to the local electorate because crappy domestic roots have no legitimate
> purpose because they can never achieve any real adoption.
>
> b) If a root store were to bow to pressure and let in a root CA used
> for interception, every other country has an interest in preventing that.
> An international WebPKI means that we are either all secure, or all
> insecure, together.
>
> Trust Expressions, though intended to solve completely different problems,
> will accidentally eradicate both of these advantages. Firstly, it provides
> a nice on ramp for a new domestic trust store, mostly through the
> negotiation aspect but also through the CA pre-distribution. Secondly, by
> enabling fragmentation along geographic boundaries whilst maintaining
> availability of websites. Without Trust Expressions, we cannot balkanize
> TLS. With Trust Expressions, we can and we know people who want to (not
> anyone in this thread).
>
> If you still do not understand this wider context within which all of our
> work sits, I do not think further discussion between you and I is going to
> help matters.
>
> I would suggest we focus our discussion on the use cases of Trust
> Expressions and how exactly it would work in practice - these concerns I
> shared earlier in the thread are solely technical and operational and you
> and I might be able to make better progress towards a common understanding.
>
> Best,
> Dennis
>

Hi Dennis,

I’m replying in this separate thread to respond to some of your comments
and responses around the risks related to Trust Expressions so that we can
keep the primary thread focused on the technical matters. First, let’s
agree on the facts of what Trust Expressions does. Trust Expressions makes
it easier to deploy new CAs. Specifically, it makes it easier for servers
to use certificate chains issued by new CAs.

Trust Expressions does not enable the use/adoption/deployment of new CAs
regardless of trust. It only does so for trusted CAs, whereby trusted I
mean the CA is in a client’s root store. David Benjamin explains [1] this
in detail in his latest reply. If you permit me to summarize what’s already
been said on the thread: At the time of certificate issuance, the CA
provides the web server with metadata (in the form of a
TrustStoreInclusionList, section 5.1 [2]) indicating which trust stores
contain the root for that certificate chain. When responding to a
ClientHello that contains the trust_expressions TLS extension, the server
will only use Trust Expressions if it has a certificate chain that at
issuance time matched one of the client’s TrustStores. Trust Expressions
only enables the deployment of certs from a new CA if that CA is trusted by
a client when the CA sends the subscriber the TrustStoreInclusionList
alongside the certificate chain. The client has to trust the CA before
Trust Expressions enables use of a new CA, hence it only enables new CAs
that are trusted. If the CA makes up its own trust store label to use for
deployment, clients would have to be compelled to advertise that trust
store label for this “pre-

[TLS]Re: TLS Trust Expressions risks

2024-05-24 Thread Nick Harper
On Fri, May 24, 2024 at 2:27 PM Brendan McMillion <
brendanmcmill...@gmail.com> wrote:

> In your latest message [5], I understand the context of governments
>> pushing for inclusion of certain roots with varying degrees of legitimacy.
>> I don’t see the on-ramp for CA pre-distribution being meaningfully
>> different with Trust Expressions compared to certificate_authorities.
>>
>
> Sorry, I meant to address this point as well. The difference between TE
> and the certificate_authorities extension, is that there's less widespread
> server support for the latter. You might compel a browser to bifurcate and
> advertise the certificate_authorities extension, but pushing out
> server-side support would be a substantial challenge. Not speaking for
> Google, but I believe their intention /is/ to put in the substantial work
> to make server-side TE support ubiquitous, such that it would be a minor
> ACME config change
>

The degree of server support is an important consideration. Even with
ubiquitous server-side TE support and servers configured with both a
ubiquitous chain and a government-issued chain, it seems to me this
government push for use of their CA requires a change to server TLS stacks
to prefer the government CA chain since both will match the client's
advertised trust stores. Mandating this server behavior change seems to me
like a heavier lift than just a minor config change. I don't have a good
sense of how it compares to the difficulty of configuring a server stack to
use the certificate_authorities extension. It appears that at least OpenSSL
has support for the certificate_authorities extension (
https://github.com/openssl/openssl/issues/13453), though the application
using OpenSSL needs to implement the certificate selection. With TE, I
imagine that certificate selection will happen inside the TLS stack or a TE
library closely tied to the TLS stack.
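
A minimal sketch of what that in-stack selection might look like, assuming a
simplified representation of the draft's TrustStoreInclusionList metadata
(section 5.1) as a set of trust store labels per chain; this is illustrative
on my part, not the draft's wire format:

```python
# Hypothetical sketch of server-side Trust Expressions chain selection.
# The draft says "choose any" matching chain; this sketch returns the first
# match, which illustrates the under-specification discussed in this thread.

def select_chain(chains, client_trust_stores):
    """chains: list of (chain, inclusion_labels) pairs, where
    inclusion_labels is the set of trust store labels the CA reported at
    issuance time. client_trust_stores: labels from the client's
    trust_expressions extension."""
    advertised = set(client_trust_stores)
    for chain, inclusion_labels in chains:
        if advertised & set(inclusion_labels):
            return chain  # "choose any": first match wins here
    return None  # fall back to non-TE certificate selection

# Example: a server holding a ubiquitous chain and a government chain only
# serves the government chain to clients advertising that trust store.
chains = [
    (b"ubiquitous-chain", {"mozilla-v1", "chrome-v1"}),
    (b"government-chain", {"ebonia-v1"}),
]
assert select_chain(chains, ["chrome-v1"]) == b"ubiquitous-chain"
assert select_chain(chains, ["ebonia-v1"]) == b"government-chain"
assert select_chain(chains, ["unknown-v1"]) is None
```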

I wonder if there are ways to make it harder for a server to choose the
"bad" cert and easier to choose the "good" cert, but this seems like a
social/political problem rather than a technical one.

On Fri, May 24, 2024 at 2:46 PM Watson Ladd  wrote:

> To be clear, in Denis's scenario Ebonia requires all servers to obtain
> a cert from Honest Ahmed's
> (https://bugzilla.mozilla.org/show_bug.cgi?id=647959) Ebonian Secure
> CA. Server operators who complain that this will break clients are
> told that it will have a trust expression that currently isn't used,
> but government inspectors will use it to see if the cert is installed.
> Then in step 2 they use the number of certs issued to bolster the
> argument for inclusion. I don't see how Trust Expressions isn't making
> this scenario easier.
>

Sure, the Ebonian government could mandate that all servers get a cert from
Honest Achmed, and provide a specific Ebonia trust store label for the
servers to match against. However, the only time the server would use that
cert chain is when the government inspectors send the Ebonian trust store
label to check if the cert is installed. In step 2 when Ebonia uses the
number of certs from Honest Achmed as part of their argument, it matters
what that count is. Is it the number of certs issued, number of certs
servers are "willing to use" (that's not very well defined), number of
certs that servers are actually using? Of course Ebonia will choose
whatever suits them best, but it also depends where that number came from
and how it was obtained. For example, Ebonia can get a large number of
certs issued by Honest Achmed without Trust Expressions by having Honest
Achmed generate a cert for every cert it sees in CT logs (with the Honest
Achmed leaf using the same public key as in the logged leaf certs so that
they're technically usable by the server). I imagine if Ebonia tried to use
that count of certificates issued as part of their argument for adoption,
there would be outcry about how that volume of cert issuance is
manufactured and meaningless. In the scenario where certs are issued and
delivered to servers, but servers only use them in response to the
government inspectors (because those are the only clients that match), this
volume of cert issuance seems to me similarly manufactured and meaningless.

Maybe Trust Expressions makes that scenario ever so slightly easier, but
this step requires manufacturing false demand/use, which would be fairly
easy to see occurring and to discount when arguing against that claim.
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: TLS Trust Expressions risks

2024-05-24 Thread Nick Harper
On Fri, May 24, 2024 at 4:15 PM Brendan McMillion <
brendanmcmill...@gmail.com> wrote:

> The part of the spec you quoted says: if multiple certs match, choose any.
> When TE is rendered in actual code, why do you assume that there will be no
> configurable or easily-gameable way to make sure the government CA
> always wins?
>

I'm not assuming there will be no configurable or easily-gameable way to do
this - I don't know what exactly that will look like in implementations.
I'm asserting that TE alone as currently specified is insufficient for this
attack, because TE says "choose any" and the attack needs to choose a
specific one.


[TLS]Re: Curve-popularity data?

2024-06-03 Thread Nick Harper
On Mon, Jun 3, 2024 at 3:02 PM Peter Gutmann 
wrote:

> Filippo Valsorda  writes:
>
> >The most important performance consideration in TLS is avoiding Hello
> Retry
> >Request round-trips due to the server supporting none of the client's key
> >shares.
>
> This is already a huge problem with Chrome because it doesn't support any
> MTI
> curves in its client hello, which means it triggers a hello retry on
> *every*
> *single* *connect* to a compliant implementation.
>

RFC 8446 section 9.1:

> A TLS-compliant application MUST support key exchange with secp256r1
> (NIST P-256) and SHOULD support key exchange with X25519 [RFC7748].

Section 9.3:

>   -  If containing a "supported_groups" extension, it MUST also contain
>  a "key_share" extension, and vice versa.  An empty
>  KeyShare.client_shares vector is permitted.

Looking at a ClientHello from Chrome, I see secp256r1 in the
supported_groups extension. By my understanding of the RFC, this
ClientHello meets the MTI requirements in RFC 8446. I see no requirement in
section 9 nor in section 4.2.8 requiring MTI curves be present in the
key_share extension if that extension is non-empty. (The RFC is explicit
about permitting the extension to be empty.)
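
That reading can be expressed as a small compliance check (my own sketch,
using code points from the IANA TLS Supported Groups registry; not normative
text from the RFC):

```python
# Sketch: check the groups-related MTI and consistency rules of RFC 8446
# for a ClientHello. Code points are from the IANA TLS Supported Groups
# registry.
SECP256R1 = 0x0017
X25519 = 0x001D

def check_groups(supported_groups, key_share_groups):
    """supported_groups: code points in the supported_groups extension.
    key_share_groups: groups for which key_share carries a share
    (may be empty per RFC 8446 section 9.3)."""
    errors = []
    if SECP256R1 not in supported_groups:
        errors.append("MUST support secp256r1 (section 9.1)")
    # Section 4.2.8: each KeyShareEntry's group must also appear in
    # supported_groups. Nothing requires MTI groups in key_share.
    for g in key_share_groups:
        if g not in supported_groups:
            errors.append(f"key_share group {g:#06x} not in supported_groups")
    return errors

# A Chrome-style hello: secp256r1 advertised in supported_groups, but only
# an X25519 key share. This passes the checks above.
assert check_groups([X25519, SECP256R1], [X25519]) == []
```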

>
> This will also heavily skew any statistics, because Chrome's noncompliant
> behaviour will show up almost everywhere.  So I'm not sure that a
> popularity
> poll on curves has much meaning.
>
> Peter.
>


[TLS]Re: HRR support (was something else)

2024-06-06 Thread Nick Harper
On Wed, Jun 5, 2024 at 6:25 AM Peter Gutmann 
wrote:

> Martin Thomson  writes:
>
> >Are you saying that there are TLS 1.3 implementations out there that don't
> >send HRR when they should?
>
> There are embedded TLS 1.3 implementations [*] that, presumably for space/
> complexity reasons and possibly also for attack surface reduction, only
> support the MTI algorithms (AES, SHA-2, P256) and don't do HRR.
>

Those implementations are not compliant with RFC 8446. Section 4.1.1
requires that a server respond with HRR if it selects an (EC)DHE group and
the client didn't offer a compatible key_share in the initial ClientHello.
(Likewise, section 4.1.3 requires that clients support HRR.)
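
The section 4.1.1 requirement amounts to roughly the following server-side
decision (a deliberately simplified sketch of mine; real stacks also weigh
cipher suites, certificates, and limits on repeated HRRs):

```python
# Sketch of the RFC 8446 section 4.1.1 decision: if the server selects an
# (EC)DHE group that the client listed in supported_groups but supplied no
# key_share for, the server MUST respond with HelloRetryRequest, not abort.

def respond(server_preferences, client_supported_groups, client_key_shares):
    for group in server_preferences:
        if group in client_supported_groups:
            if group in client_key_shares:
                return ("ServerHello", group)
            return ("HelloRetryRequest", group)  # required by section 4.1.1
    return ("handshake_failure", None)  # no mutually supported group

# An MTI-only server (P-256) talking to a client that advertised P-256 but
# only sent an X25519 share must send HRR rather than fail the handshake.
assert respond(["P-256"], ["X25519", "P-256"], ["X25519"]) == \
    ("HelloRetryRequest", "P-256")
```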


[TLS]Re: TLS trust expressions and certificate_authorities

2024-06-11 Thread Nick Harper
On Tue, Jun 11, 2024 at 3:25 AM Dennis Jackson  wrote:

> I think the above captures the main thrust of your argument in this
> thread, but it seems like quite a flawed analysis. If T.E. does not offer
> any new capabilities over certificate_authorities, then there is no point
> in standardizing it at all. Conversely, arguments that T.E. is a much more
> effective mechanism for deploying trust negotiation than
> certificate_authorities undermines the claim that T.E. doesn't introduce
> new risks that merit discussion.
>
This is a false dichotomy. TE offers an incremental improvement on
certificate_authorities.
>
> In terms of the differences between the existing certificate_authorities
> extension and Trust Expressions, I want to enumerate a few:
>
> Firstly, certificate_authorities is impractical at any kind of scale
> because it requires listing the distinguished name of every CA in the
> ClientHello (as you noted). This makes the size blowup a huge impediment
> to actually using it for anything other than in a private PKI setting e.g.
> for indicating which client_certs a server would like to see.
>
> Secondly, certificate_authorities is very poorly supported today. TLS
> libraries typically ignore it e.g. OpenSSL requires custom callbacks to
> integrate it [2] - I couldn't find anything actually calling this function.
> Neither BoringSSL nor NSS support it in ClientHellos as far as I can tell.
>
> Thirdly, certificate_authorities doesn't have any of the machinery
> necessary to orchestrate deploying it. Trust Expressions envisions ACME /
> TLS Servers implementations and CAs cooperating to distribute these trust
> labels to subscribers without requiring them to do any configuration
> themselves.
>
> Trust Expressions proposes to solve all of these drawbacks with
> certificate_authorities. The first is achieved by replacing the long list
> of distinguished names with a single identifier. The second is to ship
> support across servers and clients and make sure it is well exercised and
> indeed required. The third is to ensure that CAs can deliver multiple
> certificate chains to clients and that clients can transparently select
> between them based on a trust label.
>
> Consequently, T.E. does meaningfully change the calculus over
> certificate_authorities and so there are number of active threads
> discussing the risks of enabling trust negotiation and evaluating how it
> can be abused.
>
If Trust Expressions does meaningfully change the calculus compared to
certificate_authorities, it does it in a way that lessens risk. The
certificate_authorities extension doesn't scale to support the legitimate use
case of trust negotiation/advertisement that Trust Expressions supports,
but this problem doesn't exist for certificate_authorities advertising a
single government CA. In your first example of how certificate_authorities
differs from Trust Expressions, you've given an example of how Trust
Expressions is less risky than certificate_authorities.

The complexity of deploying certificate_authorities for the government CA
"risky" use case is much less than it is for Trust Expressions. The "risky"
use case requires clients advertise the name of the CA, and it requires
servers to be able to match a name in the certificate_authorities extension
against one of its multiple certificates. This deployment has no machinery
with CAs, ACME servers, or root programs publishing manifests. When you say
certificate_authorities doesn't have any of the machinery necessary, that's
because it doesn't need any such machinery, as Devon explained in point 4.
In the "risky" use case, Trust Expressions requires the government to
implement or compel more actions than it would with
certificate_authorities. Starting with the clients, it would need to compel
root programs to manage and publish an additional trust store manifest (or
manage its own trust store manifest and compel advertisement of that as
part of compelling trust). It would also need to have its CA (and the CA's
ACME server) support the government trust store in its
CertificatePropertyList. It looks like there's a lot more compulsion
involved in this government-forced trust use case when the government uses
Trust Expressions instead of certificate_authorities.


[TLS]Re: TLS trust expressions and certificate_authorities

2024-06-12 Thread Nick Harper
On Wed, Jun 12, 2024 at 3:17 AM Dennis Jackson 
wrote:

> You can't argue that T.E. contains the functionality of
> certificate_authorities as a subset, then conclude that having additional
> functionalities makes it less risky. You would need to argue the exact
> opposite, that T.E. doesn't contain the bad functionalities of
> certificate_authorities. The risk associated with abuse of a feature is not
> in any way diluted by tacking on good use cases.
>
I'm not arguing that TE is a superset of certificate_authorities. I'm
arguing that it's an incremental improvement over certificate_authorities.
That is to say, certificate_authorities is a way for a relying party to
indicate to a subscriber which CAs it trusts, and TE is another way to do
the same thing. TE is an incremental improvement because it's solving the
same problem but making different tradeoffs. To deploy the
certificate_authorities extension, no extra machinery is needed past what's
in the certificates, but that comes at a cost of a large number of bytes
sent by the relying party. TE optimizes for size, at the cost of additional
complexity and machinery involving additional parties.

For the abuse scenario, TE makes it no easier than certificate_authorities
(the size of advertising the single malicious CA isn't a concern, whereas
it is a problem when it's a browser's entire trust store that's
advertised), and TE adds additional deployment complexity compared to
certificate_authorities, which lessens the risk.

The takeaway here is that the risks associated with the abuse of Trust
Expressions also exist with certificate_authorities.
>
> I wonder what such a trust store manifest would look like... [1] [2].
> There's at least one large player out there with a list of CAs ready to go
> and all the necessary machinery in place.
>
Ready to go and do what?!

If you're talking about the EU eIDAS QWAC trust list, those CAs were
already trusted by browsers before the eIDAS regulations took effect, and
eIDAS allows for their distrust and removal. Already, one CA [1] on that
list is being distrusted by multiple [2] browsers [3]. Even if the EU has a
published list of CAs that they could turn into a trust store manifest,
this is a distraction from the point that with TE, abuse requires the
cooperation (or compulsion) of more parties than with
certificate_authorities.

1: https://eidas.ec.europa.eu/efda/tl-browser/#/screen/tl/AT/5/25
2:
https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/XpknYMPO8dI/m/JBNFg3aVAwAJ
3:
https://groups.google.com/a/ccadb.org/g/public/c/wRs-zec8w7k/m/G_9QprJ2AQAJ


[TLS]Re: Trust Anchor Negotiation Surveillance Concerns and Risks

2024-07-19 Thread Nick Harper
The scenario where more than one party has the private keys is described in
scenario 6 [1]. The analysis of that scenario is that trust anchor
negotiation has no effect on the surveillant's ability to carry out their
goals.

1:
https://github.com/davidben/tls-trust-expressions/blob/main/surveillance-and-trust-anchor-negotiation.md#scenario-6-government-mandates-escrow-of-tls-private-keys-and-secrets

On Fri, Jul 19, 2024 at 7:06 PM Rob Sayre  wrote:

> Isn’t the most obvious issue that more than one party have the private
> keys?
>
> thanks,
> Rob
>
> On Fri, Jul 19, 2024 at 18:29 Devon O'Brien wrote:
>
>> Hi all, We’ve added a document that attempts to summarize, and offer an
>> initial analysis of, several of the scenarios that have been raised in
>> on-list discussions related to the possibilities that Trust Expressions (or
>> more broadly, Trust Anchor Negotiation) could be used to enable
>> surveillance, or to make surveillance easier to achieve than with existing
>> solutions.
>>
>> We’ve been adding to this document for some time, and while there is
>> overlap with the documents that Dennis has recently shared, it is not a
>> response to them, as it was nearly complete by the time they were posted.
>> Our goal is for this analysis to be complete and accurate, so we will
>> incorporate additional scenarios, arguments, and analysis over time based
>> on the ensuing discussion.
>>
>>
>> https://github.com/davidben/tls-trust-expressions/blob/main/surveillance-and-trust-anchor-negotiation.md
>>
>> As with any of the other documents in the repository, we encourage you to
>> ask on list, or file a github issue if you feel we have missed something or
>> that our analysis is incorrect
>>
>> We look forward to the WGs comments and hope to see those coming to
>> Vancouver next week.
>>
>> - Devon, Bob, David
>>
>


[TLS]Re: Trust Anchor Negotiation Surveillance Concerns and Risks

2024-07-19 Thread Nick Harper
On Fri, Jul 19, 2024 at 8:58 PM Salz, Rich  wrote:

>
> - I've read it before. I think the main issue is that it says "trusted" a
>   lot.
>
>
>
> Yeah, kinda snippy but not necessarily wrong.
>
>
>
> I’m a little skeptical of approaches that solve an entire problem space
> with one architecture. I’m more skeptical of enough people having the
> ability to read and understand the semantics of several pages of JSON
> object descriptions. I know I got MEGO[1] a couple of times while reading
> it.
>
>
>
> Can we simplify things and solve just one problem?
>

From my perspective, this draft does solve just one problem: how a server
chooses a certificate to use that it knows the client will trust.

I had a similar reaction the first time I read the Trust Expressions draft.
Trust Anchor IDs (
https://www.ietf.org/archive/id/draft-beck-tls-trust-anchor-ids-00.html) is
a simpler to understand mechanism that solves the same problem in a
different way.

>
>
> For example, in some off-line discuissions others have mentioned that with
> PQ signatures being so big, there are policy decisions that clients might
> want to enforce – do you need SCT’s? Do you want OCSP stapling? Maybe it
> will be worthwhile to just think about what kind of hybrid/PQ policies clients
> will want to express?
>
>
>
> [1] https://www.collinsdictionary.com/dictionary/english/mego
>
>
>


[TLS] Re: Adoption call for TLS 1.2 Update for Long-term Support

2024-11-05 Thread Nick Harper
I understand the stated goal of this draft to be to provide a way for
hard-to-update endpoints to keep using TLS 1.2 in a secure way. The idea of
a document that describes how to safely deploy TLS 1.2 sounds like a good
idea, e.g. "use only these cipher suites, require EMS and RI, etc". This
draft is not that.

This draft makes changes to the TLS handshake protocol, which undermines
the goal of supporting hard-to-update endpoints. The two changes made to
the protocol are also addressed by RFC 8446. If endpoints need to be
updated to support TLS-LTS, it would make more sense to update them to
support TLS 1.3 than TLS-LTS.

The rationale section (3.7) of the draft presents two reasons for using
TLS-LTS over TLS 1.3. The first is the slow deployment cadence of a new
protocol. LTS requires a change to the protocol and deployment of that new
change, no different from 1.3. The second reason is fear of the unknown in
1.3: "TLS 1.3 is an almost entirely new protocol. As such, it rolls back
the 20 years of experience that we have with all the things that can go
wrong in TLS". The 20 years of all the things that can go wrong in TLS were
due to unsound cryptographic decisions. The research and analysis that
found those 20 years of issues was applied to the design of 1.3 to avoid
making the same mistakes. 1.3 doesn't roll back that experience, and we now
have over 8 years of experience with 1.3.

I do not support adoption of the draft in its current form. If the draft made no
changes to the TLS 1.2 protocol and were deployable only through
configuration changes (e.g. a fixed list of cipher suites and extensions),
I would probably support it.
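For illustration, here is roughly what such a configuration-only hardening of TLS 1.2 could look like using Python's ssl module. The cipher list is a sketch of "forward-secret AEAD suites only", not a recommendation from the draft or any other document:

```python
import ssl

# Configuration-only sketch: pin the protocol to TLS 1.2 and restrict
# the cipher list to ECDHE + AES-GCM suites. No changes to the TLS
# handshake protocol are required on either endpoint.
# The exact cipher list below is illustrative, not authoritative.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers(
    "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:"
    "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
)
```

Requirements such as extended master secret and renegotiation indication depend on the underlying TLS library's defaults rather than on anything settable here, which is exactly the kind of deployment guidance a recommendations-only document could spell out.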

On Tue, Nov 5, 2024 at 11:02 AM Salz, Rich  wrote:

> I strongly support adoption.
>
> I do not understand why anyone would be opposed to the IETF making
> deployment recommendations. I can understand why someone might be bothered
> by the implication that *THIS ONE WAY* is the only way to get long-term
> support, especially if it's seen to contradict our encouragement of TLS
> 1.3. But that is an editorial issue that can be easily fixed.
>
> I would like to see this adopted, a short change cycle, and then advanced
> in the same cluster with our TLS 1.2 is frozen document.
>
>


[TLS] Re: Trust Anchor IDs and PQ

2025-02-04 Thread Nick Harper
On Sat, Feb 1, 2025 at 10:02 AM Eric Rescorla  wrote:

> Starting a new thread to keep it off the adoption call thread.
>
> I'm still forming my opinion on this topic. To that end, perhaps it's
> most useful to focus in on the post-quantum case, as I think that's
> the one that the WG finds most compelling. This message tries to work
> through that case and the impact of TAI.
>
> I apologize in advance for the length of this message, but I wanted to
> show my thinking, as well as make it easier to pinpoint where I may
> have gone wrong if people disagree with this analysis.
>
>
> CURRENT SETUP
> Here's what I take as the setting now:
>
> 1. We have a set of existing CAs, numbered, 1, 2, 3...
> 2. CA_i has a trust anchor T_i which is embedded in clients and then
>used to sign an intermediate certificate I_i.
> 3. Servers have end-entity certificates signed by intermediates,
>so we can denote server s's certificate signed by CA i as
>EE_s_i. The chain for this certificate is (proceeding from the
>root): T_i -> I_i -> EE_s_i
>
> These all use traditional algorithms (assume there's just one
> traditional algorithm for simplicity).
>
>
> ADDING PQ
> When the CA wants to roll out PQ certificates, the following happens.
>
> 1. It generates a new separate PQ trust hierarchy, that looks like:
>Tp_i -> Ip_i -> EEp_s_i.
> 2. It cross-signs its own PQ trust anchor with its traditional trust
>anchor.
>
> So abusing notation a bit, a server would have two certificate chains:
>
> - Traditional: T_i -> I_i -> EE_s_i
> - PQ:  T_i -> Tp_i -> Ip_i -> EEp_s_i
>
> Note that I'm assuming that there's just one CA, but of course
> there could be two CAs, in which case the chains will be entirely
> distinct:
>
> - Traditional: T_i -> I_i -> EE_s_i
> - PQ:  T_j -> Tp_j -> Ip_j -> EEp_s_j
>
> This actually doesn't matter (I think) for the purposes of this
> analysis because the server can only send one EE cert.
>
>
> CERTIFICATE CHAIN NEGOTIATION
> When the client connects, it signals which algorithms it supports in
> signature_algorithms. The server then selects either the traditional
> chain or the PQ chain and sends it to the client depending on the
> algorithm. This is how we've done previous transitions so there
> shouldn't be anything new here.
>
> The entire logic above is rooted in trusting whatever traditional
> algorithm is in T_i. But the reason we want to deploy PQ certificates
> is not for efficiency (as with EC) but because we want to stop
> trusting the traditional algorithms. We do that by a two-step process
> of:
>
> 1. Clients embed Tp_i in their trust list.
> 2. At some point in the (probably distant) future, they just deprecate
>support for existing traditional trust anchors.
>
> This means that (again simplifying) there are at least four kinds of
> clients.
>
> 1. Trust T_i. No PQ support.
> 2. Trust T_i. Traditional and PQ support.
> 3. Trust T_i and Tp_i. Traditional and PQ support.
> 4. Trust Tp_i. No traditional support.
>
> However, the server only gets the "signature_algorithms" extension,
> which looks like so:
>
>Case  Client Algorithms   Trust Anchors  signature_algorithms
>----  -----------------   -------------  --------------------
> 1.   Traditional         T_i            traditional
> 2.   Traditional + PQ    T_i            traditional + pq
> 3.   Traditional + PQ    T_i + Tp_i     traditional + pq
> 4.   PQ                  Tp_i           pq
>
>
> Cases (1) and (4) are straightforward, because the server only has one
> option. However, the server can't distinguish (2) and (3). There are
> two possibilities here:
>
> * The server wants to use a traditional certificate chain (e.g.,
>   for performance reasons). In this case, there isn't an issue,
>   because it can just send the traditional chain.
>
> * The server wants to use a PQ chain. In this case, because it
>   can't distinguish (2) and (3), it has to send the cross-signed Tp_i,
>   even though the client may already have it.
>
> On the more global scale, the server has no way of measuring how many
> clients trust Tp_i, and so isn't able to determine when it's largely
> safe to unilaterally elide T_i when using the PQ chain. Note that the
> server *can* determine when it's safe to stop presenting a traditional
> EE cert at all by measuring the rate at which clients offer PQ
> algorithms in signature_algorithms, because those clients are either
> type (2) or type (3) and will in any case accept the longer chain.
>
> As far as I can tell, none of this is relevant to the question of
> security against quantum computers, because what provides that
> property is that clients refuse to accept traditional algorithms at
> all (type (4)), which is easily determined from signature_algorithms.
>
>
> TRUST ANCHOR IDENTIFIERS
> As far as I can tell, TAI changes the situation in two main ways:
>
> 1. It al

[TLS] Re: Adoption Call for Trust Anchor IDs

2025-02-05 Thread Nick Harper
It is silly that in today’s world, we consider it good enough that a server
can send a client an end entity cert and a grab bag of intermediates and
cross signs and tell the client “I hope there’s enough material here for
you to build a path to something that you trust”. If someone proposed a new
system to this working group that relied on a server sending a grab bag of
extra certs, without any assurance that a path to something the client
trusts exists in all that goop, and wasting 10s of KB of bandwidth in the
handshake, I would oppose adoption and wonder how someone would think
that’s an acceptable design and what sort of unstated constraints led to
that design.

This is the system we have today with X.509. We can and should do better.
Moving past X.509 will be a long process, but it is something that I and
others [1] think we should do. The syntax of X.509 isn’t the only problem
though - X.509’s system of path building and cross-signs is what got us to
this point of throwing a grab bag of certs over the wall and praying that
it works. To fix this problem, we need trust anchor negotiation.

If this working group rejects trust anchor negotiation, it is saying that
this is an acceptable design. I would be disappointed to see the working
group that created TLS 1.3 a decade ago decide now to reject trust anchor
negotiation in favor of X.509’s very rudimentary trust anchor agility. We
agreed at the interim that we want to solve this problem, yet people on
this thread opposing adoption of draft-beck-tls-trust-anchor-ids are
repeating the same arguments against trust anchor negotiation that were
presented at the interim without presenting new information.



On the technical side, most arguments against trust anchor negotiation
center around cross-signs. Many people have pointed out [2][3][4][5] that
we have reached the limit of what we are capable of doing with cross-signs.
I give more weight to the experience of server operators than to the
arguments in draft-jackson-tls-trust-is-nonnegotiable when it comes to the
capability of cross-signs. Cross-signs (and path building) can only do a
strict subset of
what a trust negotiation mechanism like TAI can do. For the specific use
case of migrating to PQ, a classical/PQ cross-sign is a waste of bandwidth
that provides no security value [6] and is a bad design for the migration.

Another argument raised against trust anchor negotiation is that it shifts
the burden of compatibility work from root programs and CAs to site
operators and end users. This assumption is predicated on a large amount of
root store divergence, site operators using TAI in a non-automated fashion,
or a mistaken assumption that TAI’s success requires mass adoption by
websites. Wide deployment of TAI is not a requirement for TAI’s adoption by
the working group, and for the use case of divergent trust stores, TAI is
most useful for the site operators who have found through experience the
limits of cross-signs and are already managing and deploying multiple
distinct certificate chains to serve their diverse client populations. In
many cases, servers doing this need to use TLS fingerprinting to identify
clients, so TAI reduces the burden for these server operators. For server
operators who currently get a cert from a single CA, deployment of TAI
should change nothing for them. To (mis)quote Obama, “If you like your
[cert chain], you can keep it”. As long as our RSA X.509 PKI continues to
exist and be used on the web, I don’t see a high risk of divergence between
root stores. The same roots that have existed since SSL was first used
on the web will continue to be ubiquitous (or new roots cross-signed by
them), and site operators can choose to get certs from those CA operators.

Opponents of trust anchor negotiation have described two broad categories
of risk that it could introduce. One is the risk of malicious root store
behavior [7], and the other is the risk of losing common value from root
program intersection [8]. Out of respect for the chairs’ request to keep
the discussion focused on technical issues [9], my only statement on
malicious root programs is that whether this is a risk depends on the
hypothetical non-technical behavior of various entities outside this
working group, and to opine on whether those risks exist would be
speculation and improper for this discussion thread.

The common value of root program intersection comes in two forms. One is
when root programs collectively agree on requirements for CAs and browsers,
e.g. through the CA/Browser Forum, and set the floor for requirements CAs
must meet. I assume this is where the tension between root programs
mentioned in [8] comes from. As stated in [8], when security advances
happen here, the benefits affect everyone. The other form of common value
is when a single root program takes unilateral action to impose new
requirements on CAs in its program. These unilateral actions have little
effect on the security for users of other root programs’ br