Hi Bryan,

> -----Original Message-----
> From: Bryan A Ford [mailto:brynosau...@gmail.com]
> Sent: Thursday, 3 December 2015 10:51
> To: GUBALLA, JENS (JENS); Fabrice Gautier
> Cc: tls@ietf.org
> Subject: Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?
>
> Hi Jens,
>
> On 12/2/15 11:47 AM, GUBALLA, JENS (JENS) wrote:
>
> >> Fortunately the solution is fairly simple: the receiver simply pre-computes and keeps in a small hash table the encrypted sequence numbers of all packets with sequence numbers between H-W and H+W, where H is the highest sequence number correctly received so far (the horizon) and W is the anti-replay window size as specified in 4.1.2.5 of RFC 4347, which typically should be 32 or 64 according to the RFC. The receiver can precompute all these encryptions because in my proposal TLS headers are encrypted with a stream cipher (or the AEAD operating as a stream cipher), so it's just a matter of producing the correct cipherstream bytes and XORing them with the uint48 sequence number.
> >>
> >> Whenever the receiver gets a datagram, it looks up the encrypted sequence number in the hash table, drops it on the floor if it's not present, and if it's present the receiver gets the decrypted sequence number from the hash table and uses that in the AEAD decryption and integrity check. In the low-probability event of a hash-table collision (i.e., two uint48 sequence numbers encrypting to the same 48-bit ciphertext in a single 129-datagram window), the receiver can trial-decrypt with both (or all) sequence numbers in that colliding hash table entry. Or the receiver can keep it even simpler and just drop all but one colliding entry, introducing a pretty low probability of occasional "false packet drops."
> >>
> >> The hash table is pretty trivial to maintain efficiently as well: e.g., whenever the horizon H moves forward by delta D, remove the first D entries from the current window and precompute another D encrypted sequence numbers (where D will most often be 1). In the simple design that doesn't bother dealing with hash table collisions (e.g., that allows each hash table entry to contain only one value), perhaps don't even bother clearing/removing old entries; just gradually overwrite them with new ones as H moves forward.
>
> > [JG] In case there is a packet loss of at least W subsequent DTLS records: how can the receiver then ever adjust its hash table? Wouldn't that mean that no records at all would be accepted anymore?
>
> Excellent question - I had intended to discuss that in my original post but in the end forgot to include it.
>
> Indeed, with this approach as it stands, if every packet within a full window of W consecutive packets fails to reach the receiver, then the receiver has no way to resynchronize and the connection will simply fail. In congestion-controlled protocols like TCP (or DCCP) that do exponential backoff when they detect many consecutive losses, the protocol may be more likely simply to hard-timeout than to reach the W-packet resynchronization limit. But admittedly many UDP-based protocols aren't (or are rather weakly) congestion-controlled, so this may be more of a problem for them.
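To make the bookkeeping quoted above concrete (precompute the encryptions of all sequence numbers in [H-W, H+W], look up the 6 encrypted header bytes of each incoming datagram, slide the table as H advances), here is a minimal receiver-side sketch in Python. It is only an illustration: the 6-byte header pad is modeled as a hypothetical keyed PRF (HMAC-SHA-256 truncated to 48 bits) rather than whatever cipherstream the proposal actually derives from the record cipher/AEAD, and the names header_pad, encrypt_seq and SeqWindow are invented for the example. The property it shows is that the pad depends only on the record's own sequence number, so losing or reordering other datagrams never shifts the cipherstream.

import hmac
import hashlib

SEQ_LEN = 6  # a DTLS sequence number is a uint48, i.e. 6 bytes


def header_pad(key, seq):
    # Hypothetical stand-in for the per-record header cipherstream:
    # HMAC-SHA-256 of the sequence number, truncated to 48 bits.
    return hmac.new(key, seq.to_bytes(SEQ_LEN, "big"), hashlib.sha256).digest()[:SEQ_LEN]


def encrypt_seq(key, seq):
    # XOR the uint48 sequence number with its 6-byte pad.
    raw = seq.to_bytes(SEQ_LEN, "big")
    return bytes(a ^ b for a, b in zip(raw, header_pad(key, seq)))


class SeqWindow:
    # Receiver-side table of the encrypted sequence numbers in [H-W, H+W].

    def __init__(self, key, window=64):
        self.key = key
        self.window = window
        self.horizon = 0
        self.table = {}  # encrypted 6 bytes -> list of candidate plaintext seqs
        for seq in range(max(0, self.horizon - window), self.horizon + window + 1):
            self._add(seq)

    def _add(self, seq):
        self.table.setdefault(encrypt_seq(self.key, seq), []).append(seq)

    def lookup(self, encrypted):
        # Empty list: not in the window, so the datagram is dropped on the floor.
        # Two or more entries: the rare collision case; trial-decrypt each candidate.
        return self.table.get(encrypted, [])

    def advance(self, new_horizon):
        # Slide the window when the horizon H moves forward by some delta D:
        # remove the D oldest entries and precompute D new ones at the leading edge.
        if new_horizon <= self.horizon:
            return
        for seq in range(max(0, self.horizon - self.window),
                         max(0, new_horizon - self.window)):
            enc = encrypt_seq(self.key, seq)
            candidates = self.table.get(enc, [])
            if seq in candidates:
                candidates.remove(seq)
            if not candidates:
                self.table.pop(enc, None)
        for seq in range(self.horizon + self.window + 1,
                         new_horizon + self.window + 1):
            self._add(seq)
        self.horizon = new_horizon


# Example: a record is recognized as long as its sequence number lies within
# W of the horizon, no matter how many other records were lost in between.
key = b"\x00" * 32
win = SeqWindow(key, window=64)
assert 40 in win.lookup(encrypt_seq(key, 40))
win.advance(40)
assert 104 in win.lookup(encrypt_seq(key, 104))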
> It's probably the case that the "forward-looking window" should be allowed to have a different value from the "backward-looking window", and perhaps the "forward-looking window" should depend on RTT (e.g., measured maximum packets-in-flight).
>
> However, one way to eliminate this risk of permanent desynchronization, at the cost of a bit more complexity in the receiver implementation (though this needn't affect the protocol spec at all), is for the receiver's "forward-looking window" to consist not of W consecutive sequence numbers but of a sparse set of sequence numbers at exponentially-increasing distances. For example, if H is the current highest sequence number, include in the forward-looking cache the encrypted sequence numbers of the next multiple of 2^1 beyond H, the next multiple of 2^2, etc., for as many powers of two as needed to get sufficiently far out in the sequence number space that we're convinced there's no realistic chance of a run of total or near-total packet loss unless it really means the connection is dead anyway. :)

[JG] That would mean legitimate records would be dropped until an entry in the hash table matches. Thus this proposal would potentially degrade the service, right?
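To illustrate which sequence numbers such a sparse forward-looking cache would cover, here is a small sketch under the same caveats as the previous one (the function name sparse_resync_seqs and the max_exp cutoff are invented for the example, not anything specified in the proposal). The receiver would precompute and cache the encryption of each of these sequence numbers, exactly as for the regular window.

def sparse_resync_seqs(horizon, max_exp=20):
    # Beyond the normal window, also cache the next multiple of
    # 2^1, 2^2, ..., 2^max_exp strictly past the horizon H.
    seqs = set()
    for k in range(1, max_exp + 1):
        step = 1 << k
        seqs.add(((horizon // step) + 1) * step)
    return sorted(seqs)


# With H = 1000 this yields 1002, 1004, 1008, 1024, 2048, ..., 1048576:
# even a very long run of lost records eventually lands on a cached entry
# and the receiver can re-establish its horizon.
print(sparse_resync_seqs(1000))

As the [JG] comment above notes, the records between H and the first matching resynchronization point would still be dropped; the sparse cache only bounds how long that degradation can last before the receiver catches up again.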
Basically I fail to see how a stream cipher can be operated reliably on top of an unreliable transport protocol.

Best regards,
Jens

> Again, these are all considerations that need not affect the protocol but could be tuned by implementations (perhaps with some recommendations in the protocol spec).
>
> But regardless, all of this is just to make the case that header encryption is in principle just as feasible in DTLS as in TLS, and using fairly similar techniques; right now I think we should keep the main focus on whether to do it (at least) in TLS, for which it seems pretty easy to do it in at least two different ways I've proposed.
>
> B