Imagine the following scenario, where the server and client have this repeated communication N times per day:

   client                  server
      -------- X -------->
      <------- Y ---------

The client puts in X a message A of 1 byte or B of 1024 bytes, and pads it to the maximum TLS record size. The server replies with the message "ok" (the same every time), padded to the maximum size, just after it reads X. However, TLS 1.3 determines the content length by iterating over all the padding bytes, so there is a timing leak observable in the time difference between the server receiving X and sending Y. As an adversary I could therefore take enough measurements to distinguish whether X carries the value A or B. While I'd expect these iterations to be unmeasurable on desktop or server hardware, I am not sure about the situation on low-end IoT hardware.
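To make the leak concrete, here is a minimal sketch in C of the kind of record processing I have in mind (function and variable names are made up, not taken from any particular implementation): the first routine stops scanning at the first non-zero byte from the end, so its running time depends on the padding length; the second always walks the whole record.

#include <stddef.h>
#include <stdint.h>

/*
 * Variable-time padding removal (the problematic pattern): scan backwards
 * from the end of the decrypted TLSInnerPlaintext and stop at the first
 * non-zero byte (the content type).  The number of loop iterations -- and
 * hence the time between receiving X and sending Y -- depends on how much
 * padding the record carries.
 */
static size_t inner_plaintext_len_naive(const uint8_t *pt, size_t len)
{
    while (len > 0 && pt[len - 1] == 0)
        len--;
    return len;           /* length up to and including the content-type byte */
}

/*
 * Constant-time alternative: always walk the entire record and track the
 * position of the last non-zero byte using masks instead of data-dependent
 * branches, so the running time depends only on the record length.
 */
static size_t inner_plaintext_len_ct(const uint8_t *pt, size_t len)
{
    size_t last_nonzero = 0;                  /* 0 == no content type found */

    for (size_t i = 0; i < len; i++) {
        uint32_t v = pt[i];
        uint32_t is_nonzero = (v | ((uint32_t)0 - v)) >> 31;   /* 1 if v != 0 */
        size_t mask = (size_t)0 - (size_t)is_nonzero;          /* all-ones or all-zeros */
        last_nonzero = (mask & (i + 1)) | (~mask & last_nonzero);
    }
    return last_nonzero;  /* 0 must be treated as a decode error */
}

(Whether the second version stays branch-free after compilation is of course compiler-dependent; it is only meant to illustrate the difference between the two approaches.)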
Is the design choice of having padding removal depend on the padding length intentional? There is mention of possible timing channels in: https://tools.ietf.org/html/draft-ietf-tls-tls13-21#appendix-E.3 However, I don't quite understand how this section is intended to be read. Take for example the sentence: "Because the padding is encrypted alongside the actual content, an attacker cannot directly determine the length of the padding, but may be able to measure it indirectly by the use of timing channels exposed during record processing". What is its intention? Is it to acknowledge the timing leak above? Shouldn't there instead be guidance in the 'Implementation Pitfalls' section on how to remove padding in a way that introduces no timing leaks? (The timing leak here is not in the crypto algorithms, but in TLS itself.) Ideally, TLS 1.3 itself shouldn't rely on data-size-dependent calculations such as the one described here.

regards,
Nikos

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls