Hi Adam,

Thanks for the review.  You picked up on something that was a little
sloppy there.

PR: https://github.com/tlswg/tls-record-limit/pull/19

On Wed, Apr 4, 2018 at 3:58 PM, Adam Roach <a...@nostrum.com> wrote:
> Adam Roach has entered the following ballot position for
> §4:
>
>>  MUST NOT send a value higher than the protocol-defined maximum record
>>  size unless explicitly allowed by such a future version or extension.
>
> Presumably, recipients MUST gracefully accept values higher than the maximum
> record size?  That is implied by this text (and the text that follows), but
> given how TLS frequently aborts connections at the first sign of any
> irregularity, it's probably worth saying explicitly.

I thought that the following text was doing precisely that:

> A server MUST NOT enforce this restriction; a client might advertise a higher 
> limit that is enabled by an extension or version the server does not 
> understand.

A client can enforce the restriction, because it knows the entire set
of possible extensions that determine the maximum size.  I'll concede
that it's a little sloppy in that it doesn't explicitly say whether a
client is expected to police the value.  Is that a change you would
like to see?  As in:

> A client MAY abort the handshake with an illegal_parameter alert if the 
> record_size_limit extension includes a value greater than the maximum record 
> size permitted by the negotiated protocol version and extensions.

Note that this wasn't made a MUST because, if there is an extension
that raises the limit, the client would have to make a second pass
over the extensions, which is awkward.
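To make the shape of that check concrete, here's a minimal sketch of the client-side validation being discussed. All names are illustrative (not from any real TLS stack), and it assumes the draft's bounds: values below 64 are forbidden, and the TLS 1.3 maximum is 2^14 + 1 because the limit counts the content-type octet.

```python
# Illustrative sketch only; constants and names are assumptions, not spec text.
TLS13_MAX_PLAINTEXT = 2**14 + 1  # TLS 1.3 limit includes the content-type octet
TLS12_MAX_PLAINTEXT = 2**14

def check_record_size_limit(value, tls13, extensions_raise_limit=False):
    """Return True if the peer's record_size_limit value is acceptable.

    A client that knows no negotiated extension raises the limit MAY
    treat an over-large value as illegal_parameter; here we just flag it.
    The awkward part mentioned above: deciding extensions_raise_limit
    can require a second pass over the extension list.
    """
    if value < 64:
        return False  # values smaller than 64 are forbidden
    if extensions_raise_limit:
        return True  # some negotiated extension permits a larger value
    limit = TLS13_MAX_PLAINTEXT if tls13 else TLS12_MAX_PLAINTEXT
    return value <= limit
```

A server, per the quoted text, would skip this check entirely rather than enforce it.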

> §4:
>
>>  a DTLS endpoint that
>>  receives a record larger than its advertised limit MAY either
>>  generate a fatal "record_overflow" alert or discard the record.
>
> I'm concerned about the interaction between the option to discard the record 
> and
> protocols that perform retransmission of lost packets over DTLS (e.g., 
> proposals
> such as draft-rescorla-quic-over-dtls). In the case that an oversized packet 
> is
> simply discarded, retransmissions of that (presumably still oversized) packet
> will take a while to time out (I'm not particularly well-versed in QUIC, but
> assume it has characteristics similar to TCP's ~nine-minute timeout), which
> would result in really bad user experiences.  Is there rationale for this 
> optionality?
> It would seem to be cleaner if the response were simply to always send a fatal
> error.

The problem is that you only want to abort if you decrypt the record.
DTLS doesn't kill connections if it receives junk.  In this case, you
would only want to kill the connection if the record is authentic.
But if you have a record size limit, it's usually because you don't
want to decrypt big records.  So you probably won't bother even trying
to decrypt the big packet, even if you could.
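The receive-path logic described above can be sketched roughly as follows. This is a simplified illustration under assumed names (the overhead constant and decrypt callback are hypothetical), not an implementation of any real DTLS stack:

```python
# Illustrative sketch of the DTLS receive path discussed above.
AEAD_OVERHEAD = 16  # e.g., an AES-GCM tag; assumed for the size check

def handle_dtls_record(ciphertext, advertised_limit, try_decrypt):
    """Return plaintext, or None if the record is dropped.

    Oversized records are dropped before decryption: sending a fatal
    alert would require authenticating the record first, and the whole
    point of a small limit is to avoid processing big records at all.
    To the sender, the drop is indistinguishable from packet loss.
    """
    if len(ciphertext) > advertised_limit + AEAD_OVERHEAD:
        return None  # silently discard; never decrypted, so never aborted
    plaintext = try_decrypt(ciphertext)
    if plaintext is None:
        return None  # authentication failed: DTLS ignores junk, no alert
    return plaintext
```

Only a record that both fits the limit and authenticates gets delivered; everything else looks like loss.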

So yes, discard ends up looking like packet loss, akin to the loss you
get when a path doesn't support the MTU.  Except that you don't have
any hope of receiving feedback (to the extent that ICMP is even
available to DTLS anyway...).

Given that it's a protocol error, I'm not all that enthusiastic about
fixing the problem.  Even writing it down seems wrong.  "If an
endpoint violates its negotiated constraints, then it can induce
denial of service on itself."  There's only so much defensive design
we can build in.

FWIW, QUIC has the exact same mechanism built in, but you don't
retransmit packets verbatim, so it could rectify itself if it were a
one-off.  More likely, the problem would compound as more outstanding
data becomes available to send in every subsequent packet,
perpetuating the use of over-large packets.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
