Hi all,

Rekeying too often does not add any cryptographic security. Worse, unnecessarily frequent rekeying can create other cryptographic issues for the system: first, a key-collision risk when AES-128 is used, and second, at least theoretically, a multi-target (multi-key) attack risk.
Therefore, I would suggest not rekeying (as currently specified) more often than necessary. One good option to consider is a data-limit guidance sub-section under the Security Considerations section; users would then just follow the guidance to set their own data limit(s).

Quynh.

________________________________________
From: TLS <tls-boun...@ietf.org> on behalf of Dang, Quynh <quynh.d...@nist.gov>
Sent: Friday, December 18, 2015 10:49 AM
To: tls@ietf.org
Subject: Re: [TLS] Data volume limits

The collision probability of ciphertext blocks also depends on the size of the plaintext (the record size in a TLS implementation) in each call of the GCM encryption function. Let each plaintext consist of 2^x 128-bit blocks. TLS 1.3 uses a 96-bit IV.

If someone wants the collision probability to stay below 1/2^y, such as 1/2^24 or 1/2^32 (2^32 = 4,294,967,296 and 2^24 = 16,777,216), the total number of plaintext blocks encrypted under a given key must be 2^((96 + x - y)/2) or lower. So 2^((96 + x - y)/2) 128-bit blocks is the limit for achieving IND-* with GCM. If someone does not need the IND-* property, the above restriction is not needed.

Quynh.

________________________________________
From: TLS <tls-boun...@ietf.org> on behalf of Yoav Nir <ynir.i...@gmail.com>
Sent: Thursday, December 17, 2015 6:07 AM
To: Nikos Mavrogiannopoulos
Cc: tls@ietf.org; Simon Josefsson
Subject: Re: [TLS] Data volume limits

> On 17 Dec 2015, at 10:19 AM, Nikos Mavrogiannopoulos <n...@redhat.com> wrote:
>
> On Wed, 2015-12-16 at 09:57 -1000, Brian Smith wrote:
>
>> Therefore, I think we shouldn't add the rekeying mechanism as it is
>> unnecessary and it adds too much complexity.
>
> Any arbitrary limit for a TLS connection is almost guaranteed to cause
> problems in the future. We cannot predict whether 2^x should be
> sufficient for everyone, and I'm pretty sure this will prove to be a
> terrible mistake.
> TLS is already being used for VPNs, and transferring
> larger amounts of data in long-lived connections is a reality even
> today. The rekey today happens using the reauthentication mechanism,
> which has very complex semantics. Converting these to a simpler and
> predictable rekey mechanism would be an improvement.

Agreed. The alternative to having a rekey mechanism is to push the complexity to the application protocol, requiring it to be able to use more than one connection to transfer all the data, which may in turn require some sort of session layer to maintain state between connections. So unless we can guarantee or require that every algorithm we are going to use is good for some ridiculous amount of data (2^64 bytes may be enough), we need rekeying.

Yoav

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
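[Editor's note: a minimal sketch, not part of the thread, illustrating Quynh's data-limit formula above. With records of 2^x 128-bit blocks and a target collision probability of at most 1/2^y, the limit is 2^((96 + x - y)/2) total blocks per key. The parameter choices x = 10 (full-size 16 KiB TLS records) and y = 32 are assumptions picked for illustration.]

```python
def gcm_block_limit(x: int, y: int) -> int:
    """Max number of 128-bit blocks under one GCM key, per Quynh's bound.

    x: each record (plaintext) is 2**x 128-bit blocks.
    y: target collision probability is at most 1/2**y.
    Uses integer division; exact when (96 + x - y) is even.
    """
    exponent = (96 + x - y) // 2
    return 2 ** exponent

# Assumed example: full-size TLS records of 2^14 bytes = 2^10 blocks,
# so x = 10, with a target collision probability of 1/2^32 (y = 32).
blocks = gcm_block_limit(10, 32)   # 2^((96 + 10 - 32)/2) = 2^37 blocks
data_bytes = blocks * 16           # 2^41 bytes, i.e. 2 TiB per key

print(blocks)      # 137438953472  (2^37)
print(data_bytes)  # 2199023255552 (2^41)
```

Under these illustrative parameters, a single key could protect about 2 TiB before the stated bound is reached, which gives a sense of why the thread argues that very frequent rekeying is unnecessary.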