On 1/3/21 4:22 PM, Mark Delany wrote:
> Creating quiescent sockets has certainly been discussed in the context
> of RSS, where you might want to server-notify a large number of
> long-held client connections very infrequently.
>
> While a kernel could quiesce a TCP socket down to maybe 100 bytes or so
> (endpoint tuples, sequence numbers, window sizes, and a few other odds
> and sods), a big residual cost is application state - in particular TLS
> state.
> Even with a participating application, quiescing in-memory state to
> something less than, say, 1 KB is probably hard, but might be doable
> with a participating TLS library. If so, a million quiescent
> connections could conceivably be stashed in a couple of GB of memory.
> And of course, if you're prepared to wear a disk read to recover
> quiescent state, your in-memory cost could be less than 100 bytes,
> allowing many millions of quiescent connections per server.
> Having said all that, as far as I understand it, none of the
> DNS-over-TCP systems imply centralization; that's just how a few
> applications have chosen to deploy. We deploy DoH to a private,
> self-managed server pool which consumes a measly 10-20 concurrent TCP
> sessions.
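The back-of-envelope numbers above can be checked with a quick sketch. The field layout is hypothetical (a plausible minimal set of per-connection TCP bookkeeping, not an actual kernel structure), as is the 1 KB TLS figure:

```python
import struct

# Hypothetical minimal residual state for a quiescent TCP socket:
# endpoint 4-tuple plus sequence/window bookkeeping, packed with no
# padding in network byte order.
TCP_FMT = "!16s16sHHIIIIH"  # local/remote IP (IPv6-sized), local/remote
                            # port, snd_nxt, rcv_nxt, snd_wnd, rcv_wnd, mss
tcp_bytes = struct.calcsize(TCP_FMT)
print(tcp_bytes)            # 54 -- comfortably within the ~100-byte estimate

# Add an optimistic 1 KB of quiesced TLS session state and scale up.
tls_bytes = 1024
conns = 1_000_000
total_gib = conns * (tcp_bytes + tls_bytes) / 2**30
print(round(total_gib, 2))  # ~1.0 GiB for a million quiescent connections
```

So a million quiescent connections at roughly 1 KB apiece does indeed land in the single-digit-GB range, consistent with the estimate above.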
I was thinking more in the original context of this thread, w.r.t.
potential distribution of emergency alerts. That could, if
semi-centralized, easily result in hundreds of millions of connections
to juggle across a single service just for the USA. While it presumably
wouldn't be quite that centralized, it's a sizable problem to manage.
Obviously you could distribute it out à la the CDN model that the
content providers use, but then you're potentially devoting a sizable
chunk of hardware resources to something that really doesn't otherwise
require it.
The nice thing is that such emergency alerts don't require
confidentiality and can relatively easily bear in-band,
application-level authentication (in fact, that seems preferable to
relying only on session-level authentication). That means you could
easily carry them over plain HTTP or similar, which removes the TLS
overhead you mention.
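As a sketch of what in-band authentication could look like, the alert body carries its own authenticator and so can travel over plain HTTP. All names here are illustrative, and the HMAC shared key is only a stand-in to keep the example in the standard library; a real broadcast system would presumably use public-key signatures so that receivers hold no secret:

```python
import hashlib
import hmac
import json

# Illustrative shared key -- a real deployment would use asymmetric
# signatures rather than a secret distributed to every receiver.
SHARED_KEY = b"demo-key-not-for-production"

def sign_alert(alert: dict) -> dict:
    """Wrap an alert with an in-band MAC over its canonical JSON form."""
    body = json.dumps(alert, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"alert": alert, "mac": tag}

def verify_alert(msg: dict) -> bool:
    """Recompute the MAC and compare in constant time."""
    body = json.dumps(msg["alert"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["mac"])

msg = sign_alert({"type": "tornado-warning", "area": "example-county"})
print(verify_alert(msg))  # True
```

Since the authenticator travels inside the message rather than at the session layer, any cache or relay can pass alerts along verbatim without needing to terminate TLS.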
Several GB of RAM is nothing for a modern server, of course. It sounds
like you'd probably run into other scaling issues before you hit the
memory limits of juggling legitimate TCP connection state.
--
Brandon Martin