On Tue, 18 Oct 2022, Valery Smyslov wrote:
implementation with, say, 10 CPUs. Does it make any difference for this
implementation if it receives CPU_QUEUES with 100 or with 1000? It seems
to me that in both cases it will follow its own local policy for limiting
the number of per-CPU SAs, most probably capping it to 10.
That would be a mistake. You always want to allow a few more than the
CPUs you have. The maximum is mostly to protect against DoS attacks.
How does it protect against DoS attacks? Can you elaborate?
Requesting to install 1 million Child SA's until the remote server falls over.
Perhaps less extremely, to contain the number of resources a sysadmin
allocates to a specific "multi CPU" tunnel.
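The cap described above can be sketched roughly as follows. This is purely illustrative; the function names, the headroom heuristic, and the admin_limit default are all my own assumptions, not anything from the draft or an implementation:

```python
# Hypothetical sketch of a responder-side cap on per-CPU Child SAs:
# allow some headroom above the local CPU count (for peers with more
# CPUs), but bound the total by local policy so a peer cannot request
# an unbounded number of SAs (the DoS concern above).

def max_child_sas(local_cpus: int, admin_limit: int = 64) -> int:
    """Allow a few SAs more than local CPUs, capped by admin policy."""
    headroom = local_cpus + max(2, local_cpus // 4)  # a few spares
    return min(headroom, admin_limit)

def accept_new_child_sa(installed: int, local_cpus: int) -> bool:
    # Once the cap is reached, reject (e.g. with TS_MAX_QUEUE).
    return installed < max_child_sas(local_cpus)
```

So a 10-CPU host would accept a handful more than 10 per-CPU SAs, while the policy limit keeps a misbehaving peer from installing millions.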
If you only have 10 CPUs, but the other end has 50, there shouldn't
be much issue installing 50 SA's. Not sure if we said so in the draft,
I'm not so sure. For example, if you use HSM, you are limited by its
capabilities.
Sure. Maybe put the HSM on the fallback SA and not on the per-CPU SAs if
you don't have an option to use it for all.
but you could even omit installing 40 of the outgoing SA's since you
would never be expected to use them anyway. But you have to install all
50 incoming ones because the peer might use them.
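The asymmetry described above can be captured in a couple of lines. A sketch under stated assumptions (the function name and tuple convention are mine):

```python
# Hedged sketch of the asymmetric install: a 10-CPU host talking to a
# peer advertising 50 queues only needs ~10 outbound SAs (it will never
# steer traffic to more), but should install all 50 inbound SAs because
# the peer may transmit on any of them.

def sas_to_install(local_cpus: int, peer_queues: int) -> tuple:
    """Return (outbound, inbound) SA counts; illustrative only."""
    outbound = min(local_cpus, peer_queues)  # no more than we can use
    inbound = peer_queues                    # peer may send on any of its SAs
    return outbound, inbound
```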
And what should be done in situations where you are unable to install all 50
(for any reason)? And how is it expected to deal with situations when the
number of CPUs changes over the lifetime of the IKE SA? As far as I
understand, some modern systems allow adding or removing CPUs on the fly.
Right. All of these are reasons not to build in limitations too tightly,
and the notify conveys roughly what each end is willing to put into this
connection, though there might be slight changes, temporary or not. We feel
it is still better to convey what both sides consider "ideal", over just
sending CREATE_CHILD_SAs and getting them to fail.
The use case is clear. And the idea to have per-CPU SAs is clear too.
The problem (my problem) is the way how it is achieved.
I understand.
I don't think this logic is credible in real life, but even in this case
there is already a mechanism that allows limiting the number of
per-CPU SAs - the TS_MAX_QUEUE notify.
So why do we need CPU_QUEUES?
TS_MAX_QUEUE is conveying an irrecoverable error condition. It should
never happen.
That's not what the draft says:
The responder may at any time reject
additional Child SAs by returning TS_MAX_QUEUE.
So, my reading is that this notify can be sent at any time if the peer
is not willing to create more per-CPU SAs. And sending this notify
doesn't cause deletion of the IKE SA and all its Child SAs (that is my guess).
Right. It is still something expected. Say there were 4 CPUs and we
committed to 4, but now one CPU was removed. So TS_MAX_QUEUE tells the
peer not to try for the 4th one; for whatever reason we cannot do it
anymore. This prevents that peer from retrying every second.
Perhaps it will still try every hour. Or perhaps it will let the peer
run an additional one once it gets that 4th CPU back.
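The backoff behaviour described here can be sketched as a tiny state machine. All names and the concrete retry intervals are my own illustration of the intent, not anything specified by the draft:

```python
# Illustrative sketch: after TS_MAX_QUEUE, stop hammering the peer with
# CREATE_CHILD_SA retries for the extra per-CPU SA and fall back to a
# slow retry; a CPU hotplug event may justify trying again sooner.

class PerCpuSaRetry:
    """Track retry pacing for one additional per-CPU Child SA."""

    FAST_RETRY = 1      # seconds; normal pacing (assumed value)
    SLOW_RETRY = 3600   # seconds; e.g. hourly after TS_MAX_QUEUE

    def __init__(self):
        self.blocked = False

    def on_ts_max_queue(self):
        self.blocked = True    # peer said: capacity exhausted for now

    def on_cpu_hotplug(self):
        self.blocked = False   # capacity may be back; resume normal pacing

    def retry_delay(self) -> int:
        return self.SLOW_RETRY if self.blocked else self.FAST_RETRY
```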
The common case will be both peers present what they want and (within
reason) will establish. No failed CREATE_CHILD_SAs happen unless there
was an unexpected change on one of the peers. Whereas in your proposal,
there will be failed CREATE_CHILD_SAs as part of the normal process.
If my reading is wrong and this is a fatal error (or what do you mean by
"irrecoverable"?), then the protocol is worse than I thought for devices
that for any reason cannot afford installing an unlimited number of SAs
(e.g. if they use an HSM with limited memory). In this case they cannot
even tell the peer that they have limited resources.
I meant fatal as "do not attempt to do this again, I am out of
resources". Maybe you call that more a temporary error. What I was
trying to convey is that the error means "resources all in use, don't
keep trying this for now". If you feel that is a "temporary" error,
that's fine with me. As long as the peer wouldn't keep trying this
for other CPUs but is smart enough to realize this one failure means
not to keep pounding the peer with more CREATE_CHILD_SA requests.
Whereas CPU_QUEUES tells you how many per-CPU Child SAs
you can do. This is meant to reduce the number of in-flight CREATE_CHILD_SAs
that will never become successful.
It seems to me that it's enough to have one CREATE_CHILD_SA with the proper
error notify to indicate that the peer is unwilling to create more SAs.
I'm not sure this is a big saving.
Note that you don't have to bring up CPUs on-demand via ACQUIREs. You
can also fire off all of them at once (after the initial child sa is
established). With CPU_QUEUES, you know whether to send 5 or 10 of
these. With your method, you have to bring them up one by one to see
if you can bring up more or not.
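The contrast drawn above can be made concrete with two small functions. This is a sketch under my own naming; `try_one_sa` stands in for a hypothetical "attempt one CREATE_CHILD_SA and report success" callback:

```python
# Sketch of the two bring-up strategies discussed above.
# With CPU_QUEUES: the target count is known up front, so all
# CREATE_CHILD_SA requests can be fired in one burst.
# Without it: probe one SA at a time until the peer refuses one.

def batch_bring_up(local_cpus: int, peer_cpu_queues: int) -> int:
    """Number of per-CPU SAs to request at once, given the peer's notify."""
    return min(local_cpus, peer_cpu_queues)

def probe_bring_up(try_one_sa, max_attempts: int) -> int:
    """One-by-one probing: stop at the first failure.

    try_one_sa() -> bool is a hypothetical callback that attempts a
    single CREATE_CHILD_SA exchange and reports whether it succeeded.
    """
    established = 0
    for _ in range(max_attempts):
        if not try_one_sa():
            break
        established += 1
    return established
```

The batch variant needs one round of exchanges; the probing variant serializes them and only learns the limit by failing.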
I'm also not convinced that CPU_QUEUE_INFO is really needed; it mostly
exists for debugging purposes (again, if we get rid of the Fallback SA).
And I don't think we need a new error notify TS_MAX_QUEUE; I believe
TS_UNACCEPTABLE can be used instead.
We did it to distinguish between "too many of the same Child SA" versus
other errors in cases of multiple subnets / Child SAs under the same IKE
peer. Rethinking it, I am no longer able to reproduce why we thought it
was required :)
I believe TS_UNACCEPTABLE is well suited for this purpose. You know for
sure that the TS itself is OK, since you have already installed SA(s) with
the same TS; it is not a fatal error notify, it is standardized in RFC 7296,
and it does not prevent creating SAs with other TS.
If the peers have two connections:
conn one
    [stuff]
    leftsubnet=10.0.1.0/24
    rightsubnet=10.0.2.0/24
conn two
    [same stuff so shared IKE SA with conn one]
    leftsubnet=192.0.1.0/24
    rightsubnet=192.0.2.0/24
If you put up 10 copies of conn one, and then start doing the first
(so fallback SA) of conn two and get TS_UNACCEPTABLE, what error
condition did you hit? Does the peer only accept 10 connections per
IKE peer, or did conn two have subnet parameters that didn't match the
peer?
The idea of the fallback SA is that you always have at least one child
SA guaranteed to be up that can encrypt and send a packet. It can be
installed to not be per-CPU. It's a guarantee that you will never need
to wait for (and cache?) 1 RTT's worth of packets, which can be a lot
of packets. You don't want dynamic resteering. Just have the fallback
SA "be ready" in case there is no per-CPU SA.
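The outbound lookup this enables is deliberately trivial. A minimal sketch, with the data structures invented here for illustration:

```python
# Minimal sketch of the outbound SA selection the Fallback SA enables:
# use the SA bound to the sending CPU if one exists; otherwise fall back
# to the always-present fallback SA instead of queueing the packet for a
# round trip while a per-CPU SA is negotiated.

def select_outbound_sa(per_cpu_sas: dict, fallback_sa, cpu_id: int):
    """per_cpu_sas maps cpu_id -> SA handle; fallback_sa is always up."""
    return per_cpu_sas.get(cpu_id, fallback_sa)
```

No dynamic re-steering, no caching of a steering decision, no error path when a per-CPU SA is missing: the fallback is simply the default.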
The drawback of the Fallback SA is that it needs special processing.
Normally we delete SAs when they are idle for a long time to conserve
resources, but the draft says this must not be done with the Fallback SA.
Yes. But honestly that is a pretty tiny code change in IKEv2. Again, I
will let Steffen and Antony talk about performance, but I think
"re-steering" packets is hugely expensive and slow, especially if it
needs to be done dynamically, e.g. first you have to determine which CPU
to steer it to, and then steer the packet. Then maybe remember this
choice for a while because you cannot do this lookup for each packet.
Then if that SA dies and you need to find another one, that's a whole
other error path you need to traverse.
I think it depends. I'd like to see optimization efforts influence
the protocol as little as possible. Ideally this should be a local matter
for implementations. This would allow them to interoperate
with unsupporting implementations (and even to benefit from
multiple SAs in these situations).
Those that don't support this don't see notifies? Or do you mean to
somehow install multiple SA's for the same thing on "unsupported"
systems?
Yes. The idea is that if one peer supports per-CPU SAs and the
other doesn't, they would still be able to communicate and have multiple SAs.
There is no way for you to know in advance if the peer will send you a
delete for the older Child SA when establishing the new Child SA, thus
defeating your purpose. I know RFC 7296 says you can have multiple
identical Child SAs, but in practice a bunch of software just assumes
these are the same client reconnecting and the previous Child SA state
was lost. This proposed mechanism therefore wants to explicitly state
"we are going to do multiple identical SAs, can you support me".
For example, if the supporting system has several weak CPUs,
while the unsupporting one has a much more powerful CPU,
then multiple SAs will help to improve performance -
the supporting system will distribute load over its weak CPUs,
while for the unsupporting one the load will be small enough even for a
single CPU.
But you simply don't know if the duplicate SA is going to lead to a
deletion of the older SA.
The problem currently is that when an identical Child SA
is successfully negotiated, implementations differ in what they do.
Some allow this, some delete the older one. The goal of this draft
is to make the desire for multiple identical Child SAs very explicit.
RFC 7296 explicitly allows multiple Child SAs with identical selectors,
so if implementations immediately delete them, then they are either broken
or have reasons to do it (e.g. they have no resources).
Broken or not, they are not going to get "fixed". It was a design
choice.
Paul
_______________________________________________
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec