On Thu, Jan 11, 2024, at 15:45, Christian Huitema wrote:

> Good for you. Not all implementations do that. It is hard for me to
> blame them, because the 10 seconds recommendation is justified for
> "clients on the Internet", and delays larger than 1 or maybe 2 seconds
> are quite rare on the "Earthian" Internet that we have today. Given that
> complexity is the enemy of security, I certainly understand why some
> implementations keep it simple.
I consider it necessary for reliability purposes. If you don't at least
try to account for RTT, you are skewing any anti-replay window you have.

> Yes, but the relation between delays and variance is rather complex. On
> the current Internet, a good bit of delay variance comes from congestion
> control protocols pushing ever more data until queues overflow -- but
> better congestion control or using modern AQM in front of the bottleneck
> can take care of that.

Keep in mind that many of these exchanges won't have had much data
exchanged on the connection at that point, so much of the variance from
self-induced congestion won't come into play. That's not to say you
won't be affected by other active connections or by what others are
doing, though. And of course, your space networks have other, more
interesting factors pushing delays in all directions.

> Plus, how exactly does one test this kind of "variable delay" code?

Time is just another input to your program, right?

> The more I think of it, the more I believe the solution may be some kind
> of plug-in in the TLS implementation. Let the application that requires
> the extra complexity deal with it, develop and test the adequate plug-in.

I don't think there is much need for active code, just configuration.
Maybe I'm missing something, though.

_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
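P.S. For readers joining mid-thread: the RTT-adjustment point above is the
kind of freshness check RFC 8446 (Section 8.3) describes for 0-RTT
anti-replay. A minimal sketch, with illustrative names and the 10-second
window under discussion as a placeholder:

```python
# Sketch of an RTT-adjusted 0-RTT freshness check, loosely following
# RFC 8446 Section 8.3. Names and the window value are illustrative,
# not taken from any particular TLS stack.

REPLAY_WINDOW_SECONDS = 10.0  # the "10 seconds recommendation"

def zero_rtt_is_fresh(adjusted_creation_time: float,
                      arrival_time: float,
                      estimated_rtt: float,
                      window: float = REPLAY_WINDOW_SECONDS) -> bool:
    # adjusted_creation_time: server's estimate of when the client sent
    # the ClientHello (ticket issue time + client-reported ticket age).
    # Without the estimated_rtt term, the window is skewed by the
    # one-way delay, which is the "skewing" point made above.
    expected_arrival_time = adjusted_creation_time + estimated_rtt
    return abs(expected_arrival_time - arrival_time) <= window
```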
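And on the testability point ("time is just another input"): one common
pattern is to inject the clock rather than call it directly, so simulated
delays become ordinary test inputs. A sketch under that assumption, with
hypothetical names:

```python
# Sketch: treating time as an input so delay-dependent logic can be unit
# tested with simulated delays. Names are illustrative only.
import time
from typing import Callable

class AntiReplayWindow:
    def __init__(self, window: float,
                 clock: Callable[[], float] = time.monotonic):
        self.window = window
        self.clock = clock  # injected: tests supply a fake clock

    def accept(self, claimed_send_time: float, estimated_rtt: float) -> bool:
        expected = claimed_send_time + estimated_rtt
        return abs(expected - self.clock()) <= self.window

# In a test, "now" is just a controllable variable:
fake_now = [1000.0]
w = AntiReplayWindow(window=10.0, clock=lambda: fake_now[0])
on_time = w.accept(claimed_send_time=995.0, estimated_rtt=5.0)
fake_now[0] = 1030.0  # simulate 30 s of extra delay
stale = w.accept(claimed_send_time=995.0, estimated_rtt=5.0)
```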