On Fri, Mar 17, 2017 at 11:21:12AM +0000, Peter Gutmann wrote:
> However, this then leads to a problem where it doesn't actually solve
> the constrained-client/server issue, if a client asks for 2K max
> record size and the server responds with a 4K hello then it's going to
> break the client even if later application_data records are only 2K.
> So it would need to apply to every record type, not just
> application_data.
Hello,

I had tried to raise the same issues here a few months ago. The
max_fragment_length extension, as currently defined in RFC 6066, has
the following issues:

- It is client-driven:
  ** The server cannot send the extension unless the client has sent
     it first.
  ** Even if the client sent the extension, the only option for the
     server is to respond with an extension advertising the very same
     length. The server has no way to negotiate a smaller maximum
     fragment length.

- "Big" clients (Web browsers) don't support it and have no incentive
  to do so, since they, as clients, can perfectly well use huge
  records, whose memory cost is negligible compared to the dozens of
  megabytes they eat up just for starting up.

- The extension mandates the same size constraint in both directions.
  A constrained implementation may have two separate buffers for
  sending and receiving, and these buffers need not have the same
  size. In fact, in some specific situations, records larger than the
  output buffer may be sent (the sender must know in advance how many
  bytes it will send, but it can encrypt and MAC "on the fly").

Fragmentation of messages is another issue, which is correlated but
still distinct. Note for instance that it is customary, in the case of
TLS 1.0 with a CBC-based cipher suite, to fragment _all_ records
(application data records, at least) as part of a protection against
BEAST-like attacks. Also, having very small buffers does not
necessarily prevent processing larger handshake messages, or even
larger unencrypted records. Here I may point at my own SSL
implementation (www.bearssl.org) that can do both: it supports
unencrypted records that are larger than its input buffer, and it
supports huge handshake messages. It can actually perform rudimentary
X.509 path validation even with multi-megabyte certificates, while
keeping to a few kilobytes of RAM and no dynamic allocation.

Now that does not mean that a "don't fragment" flag has no value.
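To make the encoding constraint concrete: RFC 6066 carries a single
one-byte code point selecting one of four sizes, applied identically
to both directions, and the server's only legal move is to echo the
client's code point unchanged (or omit the extension). A minimal
sketch of that logic in C; the helper names are mine, not taken from
any real implementation:

```c
#include <stddef.h>

/* RFC 6066 MaxFragmentLength code points: 1 -> 2^9, 2 -> 2^10,
   3 -> 2^11, 4 -> 2^12 bytes. Any other value is illegal and must
   abort the handshake. Returns 0 for an unrecognized code. */
static size_t mfl_code_to_bytes(unsigned char code)
{
	return (code >= 1 && code <= 4) ? (size_t)1 << (8 + code) : 0;
}

/* The only server behaviour RFC 6066 allows when it does answer:
   echo the client's code point verbatim. There is no way for the
   server to ask for a smaller (or otherwise different) length,
   and no way to negotiate the two directions separately. */
static int server_response_is_valid(unsigned char client_code,
	unsigned char server_code)
{
	return mfl_code_to_bytes(client_code) != 0
		&& server_code == client_code;
}
```

Note in particular that a server answering a "2" (1024-byte) request
with a "1" (512-byte) code is simply invalid under the current text,
even though a smaller limit is exactly what a constrained server
would want.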
Indeed, streamed processing of messages is not easy to implement (I
know, since I did it), and having some guarantees on non-fragmentation
may help some implementations that are very constrained in ROM size
and must stick to the simplest possible code. But it still is a
distinct thing. Moreover, the maximum handshake message length need
not be the same as the maximum record length. For instance, OpenSSL
tends to enforce a maximum 64 kB size on handshake messages. Maybe we
need a "maximum handshake message length" extension.

In order to "fix" RFC 6066, the following would be needed, in my
opinion:

- Allow the server to send the extension even if the client did not
  send it.

- Allow the server to mandate fragment lengths smaller than the value
  sent by the client (a client not sending the extension would be
  assumed to have implicitly sent an extension with a 16384-byte max
  fragment length).

- Preferably, change the encoding to allow for _two_ lengths, one for
  each direction, negotiated separately.

- Preferably, write down in TLS 1.3 that supporting the extension is
  mandatory. Otherwise, chances are that Web browsers won't implement
  it anyway.

I can prototype things in BearSSL (both client and server).


	--Thomas Pornin

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls