On 1/10/2024 7:00 PM, Martin Thomson wrote:
> On Thu, Jan 11, 2024, at 11:07, Christian Huitema wrote:
>> A first problem with this code is that implementations may or may not
>> have an estimate of the RTT when they are issuing the ticket. In
>> theory, the server could measure the RTT by comparing the send time of
>> the server's first flight with the arrival time of the client's second
>> flight, but that's extra complexity, and implementations may well skip
>> that measurement.
> This seems like a pretty easy thing to measure. It's what I implemented
> for NSS. I don't think that you should assume that people won't do that.
Good for you. Not all implementations do that. It is hard for me to
blame them, because the 10-second recommendation is justified for
"clients on the Internet", and delays larger than 1 or maybe 2 seconds
are quite rare on the "Earthian" Internet that we have today. Given that
complexity is the enemy of security, I certainly understand why some
implementations keep it simple.
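
To be concrete about what is being skipped, here is a minimal sketch of
that measurement in C. The structure and function names are invented for
illustration, not taken from NSS or any other stack:

    #include <stdint.h>
    #include <time.h>

    /* Per-connection timing state. Field names are invented for this
     * sketch, not taken from any particular TLS stack. */
    typedef struct {
        uint64_t first_flight_sent_us; /* ServerHello..Finished sent */
        uint64_t handshake_rtt_us;     /* 0 until the client's flight arrives */
    } conn_timing_t;

    static uint64_t now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
    }

    /* Call when the server's first flight is handed to the transport. */
    void on_server_flight_sent(conn_timing_t *t)
    {
        t->first_flight_sent_us = now_us();
    }

    /* Call when the client's second flight (Finished) arrives. The
     * difference is an upper bound on the path RTT, since it also
     * includes the client's processing time; treat it as an estimate. */
    void on_client_flight_received(conn_timing_t *t)
    {
        if (t->first_flight_sent_us != 0 && t->handshake_rtt_us == 0)
            t->handshake_rtt_us = now_us() - t->first_flight_sent_us;
    }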
>> In theory, the client could compensate its ticket_age to include the
>> RTT, but that's risky. If the client is connecting to a server
>> implementation that does adjust the ticket creation time, the adjusted
>> client time will be found to be later than the current time at the
>> server. Servers may well test for that, reject the ticket, and fall
>> back to 1-RTT.
> Compensating for the RTT estimate is possible (again, I can attest to
> this being easy to implement), but the problem is that you also need to
> account for a larger RTT variance when the RTT increases. That is not
> so easy, because the more obvious mechanisms for tracking anti-replay
> need to be globally configured, not driven by per-connection
> parameters. If you have clients with 1 s RTTs coexisting with clients
> whose RTTs are measured in minutes, the configuration needs to be
> different.
> See
> https://searchfox.org/mozilla-central/rev/b1a029fadaaabb333d8139f9ec3924a20c0c941f/security/nss/lib/ssl/sslexp.h#187-194
> (which doesn't talk about RTT variance, but it should).
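
To make the quoted suggestion concrete, here is a sketch in C of a
freshness check that shifts the ticket creation time by half the RTT
measured at issue time and widens the acceptance window as that RTT
grows. All names are invented, and the factor of 4 is an arbitrary
illustration, not a recommendation:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the RFC 8446 section 8.3 freshness check with an
     * RTT-dependent allowance. The obfuscated ticket age from the wire
     * is assumed to have been de-obfuscated already (ticket_age_add
     * subtracted). */
    typedef struct {
        uint64_t ticket_issued_ms; /* server clock when the ticket was sent */
        uint64_t rtt_at_issue_ms;  /* handshake RTT measured at issue time */
    } ticket_info_t;

    bool ticket_age_is_fresh(const ticket_info_t *ti,
                             uint64_t client_ticket_age_ms,
                             uint64_t now_ms,
                             uint64_t base_window_ms) /* e.g. 10000 */
    {
        /* The ticket left the server about half an RTT before the client
         * saw it, so shift the creation time forward by that estimate. */
        uint64_t adjusted_creation = ti->ticket_issued_ms + ti->rtt_at_issue_ms / 2;
        uint64_t expected_arrival  = adjusted_creation + client_ticket_age_ms;

        /* Widen the window as the RTT grows, on the theory that a path
         * with a multi-second RTT plausibly has multi-second jitter too.
         * The factor of 4 is an arbitrary illustration. */
        uint64_t window = base_window_ms + 4 * ti->rtt_at_issue_ms;

        uint64_t skew = expected_arrival > now_ms ? expected_arrival - now_ms
                                                  : now_ms - expected_arrival;
        return skew <= window;
    }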
Yes, but the relation between delays and variance is rather complex, and
a simple scaling like the one above only goes so far. On the current
Internet, a good bit of delay variance comes from congestion control
protocols pushing ever more data until queues overflow -- but better
congestion control, or modern AQM in front of the bottleneck, can take
care of that. In space, I am more concerned about delay variation over
time, since everything is either in some orbit or falling through space
at considerable speed. Plus, I see dreams of building networks of space
relays, which implies variable paths between two points. So the RTT at
the time the ticket is issued may differ quite a bit from the RTT when
the ticket is used.

And then, as you point out, a larger tolerance means loosening the
freshness test a lot. It might be much better to rely on Bloom filters
or similar solutions; see the sketch below.
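
As a concrete example of that alternative, here is a minimal Bloom-filter
sketch for 0-RTT anti-replay, keyed on something unique in the
ClientHello such as the PSK binder. It deliberately omits what a real
deployment needs most: rotating filters across time windows and sizing
them for the expected connection rate:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Minimal Bloom filter for 0-RTT anti-replay. The caller
     * zero-initializes the structure. */
    #define BLOOM_BITS   (1u << 20)   /* 1 Mbit, i.e. 128 KB of state */
    #define BLOOM_HASHES 4

    typedef struct {
        uint8_t bits[BLOOM_BITS / 8];
    } bloom_t;

    /* FNV-1a, salted per hash index; any decent hash works here. */
    static uint32_t bloom_hash(const uint8_t *key, size_t len, uint32_t salt)
    {
        uint32_t h = 2166136261u ^ salt;
        for (size_t i = 0; i < len; i++) {
            h ^= key[i];
            h *= 16777619u;
        }
        return h % BLOOM_BITS;
    }

    /* Returns true if the key was possibly seen before, in which case
     * the server rejects 0-RTT and falls back to 1-RTT. Records the key
     * as a side effect. False positives only cause a spurious fallback,
     * never an accepted replay. */
    bool bloom_check_and_insert(bloom_t *b, const uint8_t *key, size_t len)
    {
        bool seen = true;
        for (uint32_t i = 0; i < BLOOM_HASHES; i++) {
            uint32_t bit = bloom_hash(key, len, i);
            if (!(b->bits[bit / 8] & (1u << (bit % 8))))
                seen = false;
            b->bits[bit / 8] |= (uint8_t)(1u << (bit % 8));
        }
        return seen;
    }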
>> Plus, how exactly does one test this kind of "variable delay" code?
> If you know about a population of clients with high variance, you can
> build allowances in, but I don't have a general solution. However,
> allowing for that variance comes with a cost in terms of state space,
> especially if you have to allow for high connection rates.
The more I think of it, the more I believe the solution may be some kind
of plug-in for the TLS implementation. Let the application that requires
the extra complexity deal with it, and develop and test an adequate
plug-in.
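
Something along these lines, sketched with an invented hook interface
that no existing stack exposes in this form:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of a validation hook an application could register with its
     * TLS library, moving the "variable delay" policy out of the stack.
     * The interface is invented purely for illustration. */
    typedef bool (*ticket_age_validator_t)(
        void *app_ctx,
        uint64_t expected_arrival_ms, /* from the freshness computation */
        uint64_t actual_arrival_ms,
        uint64_t rtt_at_issue_ms);

    typedef struct {
        ticket_age_validator_t validate; /* NULL means use the default check */
        void *app_ctx;
    } tls_server_hooks_t;

    /* Example policy for a deep-space deployment: tolerate skew up to
     * one RTT, and let an exact-match anti-replay store (not shown)
     * carry the real security burden instead of the clock. */
    static bool space_validator(void *ctx, uint64_t expected_ms,
                                uint64_t actual_ms, uint64_t rtt_ms)
    {
        (void)ctx;
        uint64_t skew = expected_ms > actual_ms ? expected_ms - actual_ms
                                                : actual_ms - expected_ms;
        return skew <= rtt_ms;
    }

The library keeps the mechanism (computing the ages and the skew); the
application supplies, and tests, the policy.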
-- Christian Huitema