Éric Vyncke has entered the following ballot position for draft-ietf-ipsecme-multi-sa-performance-08: No Objection
When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.)

Please refer to https://www.ietf.org/about/groups/iesg/statements/handling-ballot-positions/ for more information about how to handle DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-ipsecme-multi-sa-performance/

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

# Éric Vyncke, INT AD, comments for draft-ietf-ipsecme-multi-sa-performance-08

Thank you for the work put into this document.

Please find below some non-blocking COMMENT points (but replies would be appreciated even if only for my own education), and some nits.

Special thanks to Tero Kivinen for the shepherd's detailed write-up, including the WG consensus (and the 'competing' draft) and the justification of the intended status.

Other thanks to Tim Winters, the Internet directorate reviewer (at my request); please consider this int-dir review:
https://datatracker.ietf.org/doc/review-ietf-ipsecme-multi-sa-performance-08-intdir-telechat-winters-2024-04-30/

I hope that this review helps to improve the document,

Regards,

-éric

# COMMENTS (non-blocking)

## Unbalanced?

It has been a long time since I worked with IPsec, but I have a small concern about this proposal: one peer will use its own selector/SADB to select a child SA, and the associated SPI, based on *local* state (e.g., CPU utilization), but the selected SA/SPI may land on a heavily loaded CPU at the remote peer. If this is an issue, then should it be documented somewhere in this document (the remote peer can, of course, shuffle the SAs among its CPUs)? See the sketch at the end of this message.

## Section 1

I am intrigued by the lack of linearity in the implementation result: 1 CPU -> 5 Gbps, but 30 CPUs -> only 60 Gbps (i.e., 2 Gbps per CPU; the arithmetic is spelled out at the end of this message). An explanation would be appreciated, as well as a reference to the implementation itself. Does the number of cores per CPU have any impact?

`PFP is also not widely implemented.` A reference would be welcome, as this statement does not appear to be substantiated (I do not dispute its validity).

# NITS (non-blocking / cosmetic)

## Section 4

s/a 1RTT delay/a 1 RTT delay/
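As a footnote to the "Unbalanced?" comment above, here is a minimal sketch of the asymmetry in question. It is purely illustrative and not taken from the draft: the names `sa_bundle`, `per_cpu_sa`, `fallback_sa`, and `select_outbound_sa` are hypothetical. It shows outbound child SA selection driven only by the sender's local CPU id; the chosen SPI then determines which SA state (and hence, potentially, which CPU) handles the packet on the receiver, regardless of the receiver's load.

```c
#define _GNU_SOURCE
#include <sched.h>   /* sched_getcpu() */
#include <stdio.h>

#define MAX_CPUS 64

/* Hypothetical per-CPU child SA bundle (illustrative, not from the
 * draft): one fallback SA plus optional SAs bound to specific CPUs. */
struct child_sa { unsigned int spi; };

struct sa_bundle {
    struct child_sa *fallback_sa;            /* always present       */
    struct child_sa *per_cpu_sa[MAX_CPUS];   /* NULL if not present  */
};

/* Outbound selection uses ONLY the sender's local CPU id; nothing
 * here reflects how loaded the remote peer's CPUs are. */
static struct child_sa *select_outbound_sa(struct sa_bundle *b)
{
    int cpu = sched_getcpu();
    if (cpu >= 0 && cpu < MAX_CPUS && b->per_cpu_sa[cpu] != NULL)
        return b->per_cpu_sa[cpu];
    return b->fallback_sa;   /* no per-CPU SA negotiated yet */
}

int main(void)
{
    struct child_sa fallback = { .spi = 0x1000 };
    struct child_sa cpu0_sa  = { .spi = 0x2000 };
    struct sa_bundle b = { .fallback_sa = &fallback };
    b.per_cpu_sa[0] = &cpu0_sa;  /* pretend CPU 0 has its own SA */
    printf("selected SPI: 0x%x\n", select_outbound_sa(&b)->spi);
    return 0;
}
```

If this reading is right, any rebalancing has to happen on the receiving side, which is the point of the comment.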
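And the arithmetic behind the Section 1 comment, using only the figures quoted there:

```latex
% Per-CPU throughput drops from 5 Gbps (1 CPU) to 60/30 = 2 Gbps (30 CPUs):
\[
\text{efficiency} = \frac{60\ \text{Gbps} / 30\ \text{CPUs}}{5\ \text{Gbps} / 1\ \text{CPU}}
                  = \frac{2}{5} = 40\%,
\]
% whereas linear scaling would have predicted
\[
30 \times 5\ \text{Gbps} = 150\ \text{Gbps}.
\]
```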