Re: [TLS] Solving HRR woes in ECH

2021-03-26 Thread Ben Schwartz
This seems like a reasonable suggestion to me, so long as the value is
purely a "hint", as you seem to be proposing.  I would suggest structuring
it as an ECHConfig extension.  This would avoid the need for multiple
points of integration between TLS and DNS, support the use of HRR hints in
other ECH use cases that don't involve DNS, and help to exercise the
ECHConfig extension mechanism.
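Structured that way, the hint would ride inside the ECHConfig's existing extension list. A minimal sketch of what the wire encoding could look like, using TLS-style u16 vectors and a made-up private-use codepoint (nothing like this is defined by the ECH draft; the type value and semantics are purely illustrative):

```python
import struct

# Hypothetical ECHConfig extension carrying an HRR-avoidance hint:
# a u16 extension type, a u16-length-prefixed body holding a
# u16-length-prefixed list of NamedGroup codepoints (RFC 8446 registry).
HINT_EXT_TYPE = 0xFF02  # placeholder from the private-use range

def encode_group_hint_extension(groups):
    """Encode a list of NamedGroup codepoints as an ECHConfig extension."""
    group_list = b"".join(struct.pack("!H", g) for g in groups)
    body = struct.pack("!H", len(group_list)) + group_list
    return struct.pack("!HH", HINT_EXT_TYPE, len(body)) + body

def decode_group_hint_extension(data):
    """Parse the extension back into (type, [groups])."""
    ext_type, body_len = struct.unpack("!HH", data[:4])
    (list_len,) = struct.unpack("!H", data[4:6])
    groups = [struct.unpack("!H", data[6 + i:8 + i])[0]
              for i in range(0, list_len, 2)]
    return ext_type, groups

# x25519 = 0x001d, secp256r1 = 0x0017
encoded = encode_group_hint_extension([0x001D, 0x0017])
assert decode_group_hint_extension(encoded) == (HINT_EXT_TYPE, [0x001D, 0x0017])
```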

On Thu, Mar 25, 2021 at 9:28 PM Nick Sullivan wrote:

> Hi Chris,
>
> HRR in ECH does seem challenging. This may be tangential to the PR you
> linked, but there may be a way to reduce the likelihood of HRR by moving
> even more of the handshake negotiation into DNS. The HTTPS RR is already
> used for some types of negotiation (ALPN and ECH key), so why can't it be
> extended further to advertise to the client what the server is willing to
> support for cryptographic negotiations?
>
> For example, the HTTPS record could additionally contain the server's
> supported_groups and cipher_suites. With this information, a
> client could know which key_share extensions a server is likely to accept
> and adjust its ClientHello accordingly. A client that typically sends two
> key_shares (P256 and x25519) for maximal compatibility could then reduce
> the size of its ClientHello (no need to send redundant key_shares) or even
> prevent an HRR flow altogether in the case that the default key_shares or
> cipher_suites are not supported by the server.
>
> This tweak wouldn't remove the need for HRR completely -- it could be
> necessary when changing server configuration, for example -- but it could
> remove the need for HRR in the common case.
>
> Nick
>
> On Thu, Mar 25, 2021 at 8:05 PM Christopher Patton wrote:
>
>> Hi all,
>>
>> One of the open issues for ECH is how it interacts with HelloRetryRequest
>> (HRR). The central difficulty is that a client may advertise different
>> preferences for HRR-sensitive parameters in the ClientHelloInner and
>> ClientHelloOuter. And because the HRR path has no explicit signal of which
>> ClientHello was used, the client may not know how to respond.
>> The following PR solves this problem by adding to HRR an explicit signal of
>> which ClientHello was used:
>> https://github.com/tlswg/draft-ietf-tls-esni/pull/407
>>
>> The design was originally proposed by David Benjamin in the issue
>> referenced by the PR. Along the way, it also solves a handful of other HRR
>> issues that have been identified.
>>
>> One consequence of this particular solution is that real ECH usage
>> "sticks out" if the server responds with an HRR. In particular, signaling
>> which ClientHello was used also signals whether ECH was accepted. However,
>> the solution is designed to mitigate "don't stick out" attacks that attempt
>> to trigger the HRR codepath by fiddling with bits on the wire. The
>> distinguisher only arises when HRR happens organically.
>>
>> Feedback is appreciated!
>>
>> Best,
>> Chris P.
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>




Re: [TLS] Solving HRR woes in ECH

2021-03-26 Thread Stephen Farrell


Hiya,

On 26/03/2021 13:44, Ben Schwartz wrote:



(I'm not stating an opinion on the PR yet but...) If there
is to be some new data included in SVCB/HTTPS RR values then
that ought be structured bearing in mind who produces which
bits of data. An ECHConfig is a binary blob mostly produced
by the client-facing server, whereas TLS parameters for the
backend server are not produced at the same place. Including
the latter as an ECHConfig.extension is not therefore a good
design IMO. Justifying those (IMO:-) unnecessary ECHConfig
extensions is also not a goal.

Information about the backend's TLS preferences, if published
in the DNS, ought be outside the ech value in HTTPS RRs. If
we wanted to publish information about the client-facing
server's TLS preferences in the backend's zone file, then
that could be put into the ECHConfig all right. (It's a pity
that we didn't split out the ECHConfigs from different
client-facing servers in SVCB/HTTPS to make all that easier
isn't it?)

Cheers,
S.





Re: [TLS] Solving HRR woes in ECH

2021-03-26 Thread Christopher Patton
I really like this idea, but I don't see it as a solution to ECH's HRR
woes. Nick's idea boils down to providing a recipe for how to construct the
CHOuter, but AFAICT, there's nothing in the TLS or HTTPS-RR specs that
requires the client to follow this recipe. We would still need a way of
reconciling differences in preferences between CHInner and CHOuter.

I think we should pursue using HTTPS-RR this way independently of ECH. It's
not just useful for ECH, after all. All connections would benefit from
knowing the server's preferences in advance of the ClientHello.
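As a sketch of how a client might consume such a hint (the hint's source and semantics here are hypothetical, not anything specified for HTTPS-RR; group codepoints are from the RFC 8446 NamedGroup registry). Because the hint is unauthenticated DNS data, it may only narrow which shares the client offers, never which groups it is willing to accept:

```python
# NamedGroup codepoints (RFC 8446): x25519, secp256r1, secp384r1
X25519, SECP256R1, SECP384R1 = 0x001D, 0x0017, 0x0018

CLIENT_GROUP_PREFERENCE = [X25519, SECP256R1, SECP384R1]
DEFAULT_KEY_SHARES = [X25519, SECP256R1]  # shares sent absent any hint

def key_shares_to_send(dns_hint=None):
    """Pick which groups to generate key_shares for, given an optional
    supported-groups hint from DNS."""
    if dns_hint is None:
        return DEFAULT_KEY_SHARES
    hinted = [g for g in CLIENT_GROUP_PREFERENCE if g in dns_hint]
    # If the hint matches nothing we support, ignore it rather than
    # send no key_share at all (the hint may be stale or spoofed).
    return hinted[:1] if hinted else DEFAULT_KEY_SHARES

# Server hints it only takes P-256: one share instead of two, no HRR.
assert key_shares_to_send([SECP256R1]) == [SECP256R1]
# No hint: the usual belt-and-braces pair.
assert key_shares_to_send(None) == [X25519, SECP256R1]
```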

Chris P.


Re: [TLS] Solving HRR woes in ECH

2021-03-26 Thread Eric Rescorla
This is more complicated than I would have liked, but I also don't see how
to simplify it. As of now, I think it's the best we can do.

-Ekr




Re: [TLS] Solving HRR woes in ECH

2021-03-26 Thread Eric Rescorla
On Fri, Mar 26, 2021 at 9:30 AM Christopher Patton wrote:

> I really like this idea, but I don't see it as a solution to ECH's HRR
> woes. Nick's idea boils down to providing a recipe for how to construct the
> CHOuter, but AFAICT, there's nothing in the TLS or HTTPS-RR specs that
> requires the client to follow this recipe. We would still need a way of
> reconciling differences in preferences between CHInner and CHOuter.
>

Note that this might also be of value without ECH.

-Ekr



[TLS] key_share hints in DNS

2021-03-26 Thread David Benjamin
(Switching the subject line because the key share hints idea seems
orthogonal to the ECH HRR issue.)

I agree with Stephen that, if we do the key share hint idea, it should be
separate from the ECHConfigList. In addition to a mismatch in describing
client-facing vs. backend servers, there are also multiple ECHConfigs. This
is to manage different ECH versions, key types, sets of critical
extensions, etc. Adding things that aren't correlated with ECH information
would interact awkwardly.

There's also a downgrade nuisance to work through, depending on how the
server interprets key_share vs. supported_groups. Information from DNS is
not authenticated. If your server selects the group based exclusively on
supported_groups, and then only looks at key_share *after* the group
selection is set, unauthenticated key_share hints are fine. Attacker
influence on key_share won't translate to a downgrade attack.

If, however, the server incorporates key_shares into group selection, this
is not okay. An attacker could inject a hint for weaker key shares, and the
server may select that instead. I remember this coming up for TLS 1.3, and
I think we ended up allowing either flow, which is why HelloRetryRequest
retains the handshake transcript. Moreover, the client doesn't know how the
server will interpret the key_share list, but it's the client that decides
whether to honor the hint.
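The distinction between the two server behaviors can be contrasted in a small sketch (group codepoints from RFC 8446; the selection logic is illustrative, not any particular stack's). A server that fixes the group from supported_groups alone, consulting key_share only afterwards, limits a tampered DNS hint to costing an extra HRR rather than a weaker group:

```python
# NamedGroup codepoints (RFC 8446): x25519, secp256r1, ffdhe2048
X25519, SECP256R1, FFDHE2048 = 0x001D, 0x0017, 0x0100

SERVER_PREFERENCE = [X25519, SECP256R1, FFDHE2048]

def select_group_safe(client_supported_groups, client_key_shares):
    """Downgrade-safe selection: key_share never influences which
    group wins, only whether an HRR round trip is needed."""
    for group in SERVER_PREFERENCE:
        if group in client_supported_groups:
            # Group selection is now fixed; key_share only determines
            # whether we can proceed directly or must send HRR.
            needs_hrr = group not in client_key_shares
            return group, needs_hrr
    raise ValueError("no group in common")

# An attacker-influenced hint made the client send only an FFDHE share:
# the server still selects x25519 and answers with HRR, not a downgrade.
assert select_group_safe({X25519, SECP256R1, FFDHE2048}, {FFDHE2048}) == (X25519, True)
```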

And, yeah, it wouldn't solve ECH's HRR problems because of synchronization
issues and cookies. Also, I don't think this is right:

> Nick's idea boils down to providing a recipe for how to construct the
CHOuter, [...]

It would most likely be a hint towards CHInner, since that's the "actual"
ClientHello, and the aim of such a feature would be to reduce HRRs. This
really seems just orthogonal to ECH.

David


Re: [TLS] Transport Issues in DTLS 1.3

2021-03-26 Thread Eric Rescorla
Hi folks,

This is a combined response to Martin Duke and to Mark Allman.

Before I respond in detail I'd like to level set a bit.

First, DTLS does not provide a generic reliable bulk data transmission
capability. Rather, it provides an unreliable channel (a la UDP).
That channel is set up with a handshake protocol and DTLS provides
reliability for that protocol. However, that protocol is run
infrequently and generally involves relatively small amounts
(typically << 10KB) of data being sent. This means that we have rather
more latitude in terms of how aggressively we retransmit because
it only applies to a small fraction of the traffic.

Second, DTLS 1.2 is already widely deployed. It uses a simple "wait
for the timer to expire and retransmit everything" approach, with the
timer being doubled on each retransmission. This doesn't always
provide ideal results, but also has not caused the network to
collapse. I don't know much about how things are deployed in the IoT
setting (paging Hannes Tschofenig) but at least in the WebRTC context,
we have found the 1000ms guidance to be unduly long (as a practical
matter, video conferencing just won't work with delays over
100-200ms). Firefox uses 50ms and AIUI Chrome uses a value derived
from the ICE handshake (which is probably better because there
are certainly times where 50ms is too short).



Martin Duke's Comments:

> In Sec 5.8.2, it is a significant change from DTLS 1.2 that the
> initial timeout is dropping from 1 sec to 100ms, and this is worthy of
> some discussion. This violation of RFC8961 ought to be explored
> further. For a client first flight of one packet, it seems
> unobjectionable. However, I'm less comfortable with a potentially
> large server first flight, or a client second flight, likely leading
> to a large spurious retransmission. With large flights, not only is a
> short timeout more dangerous, but you are more likely to get an ACK in
> the event of some loss that allows you to shortcut the timer anyway
> (i.e. the cost of long timeout is smaller)

You seem to be implicitly assuming that there is individual packet
loss rather than burst loss. If the entire flight is lost, you want to
just fall back to retransmitting.


> Relatedly, in section 5.8.3 there is no specific recommendation for a
> maximum flight size at all. I would think that applications SHOULD
> have no more than 10 datagrams outstanding unless it has some OOB
> evidence of available bandwidth on the channel, in keeping with de
> facto transport best practice.

I agree that this is a reasonable change.


> Finally, I am somewhat concerned that the lack of any window reduction
> might perform poorly in constrained environments.

I'm skeptical that this is actually the case. As a practical matter,
TLS flights rarely exceed 5 packets. For instance, Fastly's data on
QUIC [0] indicates that the server's first flight (the biggest flight
in the TLS 1.3 handshake) is less than 5 packets for the vast majority
of handshakes, even without certificate compression. Given that
constrained environments have more incentive to reduce bandwidth, I
would expect them to typically be smaller, either via using smaller
certificates or using some of the existing techniques for reducing
handshake size such as cert compression or cached info.



> Granted, doubling
> the timeout will reduce the rate, but when retransmission is
> ack-driven there is essentially no reduction of sending rate in
> response to loss.

I don't believe this is correct. Recall that unlike TCP, there's
generally no buffer of queued packets waiting to be transmitted.
Rather, there is a fixed flight of data which must be delivered.  With
one exceptional case [1], an ACK will reflect that some but not all of
the data was delivered and processed; when retransmitting, the
sender will only retransmit the un-ACKed packets, which naturally
reduces the sending rate. Given the quite small flights in play
here, that reduction is likely to be quite substantial. For instance,
if there are three packets and 1 is ACKed, then there will
be a reduction of 1/3.
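A toy illustration of that arithmetic, with records identified by sequence number:

```python
# ACK-driven retransmission: the retransmitted flight is the original
# flight minus whatever the ACK covered, so each partial loss naturally
# shrinks the amount resent.
def to_retransmit(flight, acked):
    """Return the records from `flight` not yet acknowledged."""
    return [seq for seq in flight if seq not in acked]

flight = [0, 1, 2]                        # three records in the flight
resend = to_retransmit(flight, acked={1})
assert resend == [0, 2]                   # 1/3 fewer records resent
```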


> I want to emphasize that I am not looking to fully recreate TCP here;
> some bounds on this behavior would likely be satisfactory.
>
> Here is an example of something that I think would be workable. It is
> meant to be a starting point for discussion. I've asked for some input
> from the experts in this area who may feel differently.
>
> - In general, the initial timeout is 100ms.
> - The timeout backoff is not reset after successful delivery. This
>   allows the "discovery" in bullet 1 to be safely applied to larger
>   flights.

Note that the timeout is actually only reset after successful loss-free
delivery of a flight:

   Implementations SHOULD retain the current timer value until a
   message is transmitted and acknowledged without having to
   be retransmitted, at which time the value may be
   reset to the initial value.
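The retained-timer rule, together with the doubling backoff, can be sketched as follows (the 100 ms initial value is the draft's proposal under discussion, not settled guidance):

```python
INITIAL_TIMEOUT_MS = 100  # DTLS 1.3 draft starting value under discussion

class RetransmitTimer:
    """Double the timeout on each retransmission; reset to the initial
    value only once a flight is acknowledged without any retransmission."""

    def __init__(self):
        self.timeout = INITIAL_TIMEOUT_MS
        self.retransmitted = False

    def on_timeout(self):
        """Flight timer fired: retransmit and back off."""
        self.timeout *= 2
        self.retransmitted = True

    def on_flight_acked(self):
        """Flight fully acknowledged; reset only if it went out clean."""
        if not self.retransmitted:
            self.timeout = INITIAL_TIMEOUT_MS
        self.retransmitted = False

t = RetransmitTimer()
t.on_timeout(); t.on_timeout()   # two timeouts: 100 -> 200 -> 400 ms
assert t.timeout == 400
t.on_flight_acked()              # flight needed retransmission: keep 400
assert t.timeout == 400
t.on_flight_acked()              # next flight delivered clean: reset
assert t.timeout == 100
```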

There seems to be some confusion here (perhaps due to

Re: [TLS] Transport Issues in DTLS 1.3

2021-03-26 Thread Eric Rescorla