On Tuesday 15 December 2015 20:01:58 Bill Frantz wrote:
> So we have to trade off the risks of too much data vs. the risks
> of a complex rekey protocol vs. the risks of having the big data
> applications build new connections every 2**36 or so bytes.
> 
> If we don't have rekeying, then the big data applications are
> the only ones at risk. If we do, it may be a wedge which can
> compromise all users.

if the rekey doesn't allow the application to change authentication 
tokens (as it now stands), then rekeying is much more secure than 
renegotiation was in TLS <= 1.2

so if we include rekeying in TLS, I'd suggest setting its limit to 
something fairly low for big data transfers, that is, gigabytes, not 
terabytes; otherwise we'll be introducing code that is simply not tested 
for interoperability
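
to make the intent concrete, a rough sketch (Python, with a made-up 
send_key_update() hook and a made-up 4 GiB threshold, purely for 
illustration, not a real API):

    # rough sketch: trigger a rekey after a low, well-exercised byte count
    REKEY_LIMIT = 4 * 2**30          # 4 GiB -- "gigabytes, not terabytes"

    class RekeyingSender:
        def __init__(self, connection):
            self.conn = connection
            self.bytes_since_rekey = 0

        def send(self, data):
            self.conn.send(data)
            self.bytes_since_rekey += len(data)
            # rekey long before the 2**36-byte region so the code path
            # is exercised by ordinary large transfers
            if self.bytes_since_rekey >= REKEY_LIMIT:
                self.conn.send_key_update()   # hypothetical hook
                self.bytes_since_rekey = 0

with a limit like that, every sizeable transfer exercises the rekey code 
path, so interoperability bugs surface early instead of only on 
terabyte-scale connections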

(with AES-NI you can easily transfer gigabytes in just a few minutes)
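
back of the envelope, assuming roughly 1 GB/s of AES-GCM throughput on a 
single core with AES-NI (an assumed, conservative figure):

    # rough timing estimate, not a benchmark
    throughput = 10**9                    # ~1 GB/s with AES-NI, assumed
    print(4 * 2**30 / throughput)         # ~4 s of crypto for 4 GiB
    print(2**36 / throughput)             # ~69 s for the full 2**36 bytes

so even a gigabyte-scale limit will be reached routinely on fast links, 
which is exactly what gives the rekey code real-world test coverage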
-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
