Hi Ciara,

> -----Original Message-----
> From: Power, Ciara <ciara.po...@intel.com>
> Sent: Thursday, August 25, 2022 3:29 PM
> To: Zhang, Roy Fan <roy.fan.zh...@intel.com>; De Lara Guarch, Pablo
>  <pablo.de.lara.gua...@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai...@intel.com>; Power, Ciara
>  <ciara.po...@intel.com>
> Subject: [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support
>
> The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
> algorithms using the JOB API.
> This support was added to AESNI_MB PMD previously, but the SGL
> feature flags could not be added due to no SGL support for other
> algorithms.
>
> This patch adds a workaround SGL approach for other algorithms
> using the JOB API. The segmented input buffers are copied into a
> linear buffer, which is passed as a single job to intel-ipsec-mb.
> The job is processed, and on return, the linear buffer is split into
> the original destination segments.
>
> Existing AESNI_MB testcases are passing with these feature flags added.
>
> Signed-off-by: Ciara Power <ciara.po...@intel.com>
>
> ---
> v2:
>   - Small improvements when copying segments to linear buffer.
>   - Added documentation changes.
> ---
>  doc/guides/cryptodevs/aesni_mb.rst          |   1 -
>  doc/guides/cryptodevs/features/aesni_mb.ini |   4 +
>  doc/guides/rel_notes/release_22_11.rst      |   4 +
>  drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 191 ++++++++++++++++----
>  4 files changed, 166 insertions(+), 34 deletions(-)
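For anyone following the thread, the linearize/process/scatter flow the
commit message describes amounts to something like the sketch below. To be
clear, this is not the patch's actual code: the helper names sgl_gather()
and sgl_scatter() are made up for illustration, and data offsets and the
auth tag handling are omitted.

#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/*
 * Gather the payload of a segmented mbuf chain into one contiguous
 * buffer so it can be submitted to intel-ipsec-mb as a single job.
 * Returns the linear buffer (caller frees with rte_free()), or NULL
 * on allocation failure.
 */
static uint8_t *
sgl_gather(struct rte_mbuf *m_src, uint64_t total_len)
{
	uint8_t *linear_buf = rte_zmalloc(NULL, total_len, 0);
	uint64_t lb_offset = 0;

	if (linear_buf == NULL)
		return NULL;

	/* Walk the segment chain, copying at most total_len bytes. */
	for (; m_src != NULL && lb_offset < total_len; m_src = m_src->next) {
		uint64_t len = RTE_MIN((uint64_t)rte_pktmbuf_data_len(m_src),
				total_len - lb_offset);

		rte_memcpy(linear_buf + lb_offset,
				rte_pktmbuf_mtod(m_src, uint8_t *), len);
		lb_offset += len;
	}
	return linear_buf;
}

/*
 * After the job completes, scatter the processed linear buffer back
 * into the (possibly segmented) destination chain.
 */
static void
sgl_scatter(struct rte_mbuf *m_dst, const uint8_t *linear_buf,
		uint64_t total_len)
{
	uint64_t lb_offset = 0;

	for (; m_dst != NULL && lb_offset < total_len; m_dst = m_dst->next) {
		uint64_t len = RTE_MIN((uint64_t)rte_pktmbuf_data_len(m_dst),
				total_len - lb_offset);

		rte_memcpy(rte_pktmbuf_mtod(m_dst, uint8_t *),
				linear_buf + lb_offset, len);
		lb_offset += len;
	}
}

With that picture in mind, a few comments on the patch itself below.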
...
> +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
...
>
> +static int
> +handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
> +		struct aesni_mb_session *session)
> +{
> +	uint64_t cipher_len, auth_len;
> +	uint8_t *src, *linear_buf = NULL;
> +	int total_len;
> +	int lb_offset = 0;

Suggest using unsigned types here (probably uint64_t for total_len, as it
takes its value from the max of cipher_len and auth_len).

> +	struct rte_mbuf *src_seg;
> +	uint16_t src_len;
> +
> +	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
> +			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
> +		cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
> +				(job->cipher_start_src_offset_in_bits >> 3);
> +	else
> +		cipher_len = job->msg_len_to_cipher_in_bytes +
> +				job->cipher_start_src_offset_in_bytes;
> +
> +	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
> +			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
> +		auth_len = (job->msg_len_to_hash_in_bits >> 3) +
> +				job->hash_start_src_offset_in_bytes;
> +	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
> +		auth_len = job->u.GCM.aad_len_in_bytes;
> +	else
> +		auth_len = job->msg_len_to_hash_in_bytes +
> +				job->hash_start_src_offset_in_bytes;
> +
> +	total_len = RTE_MAX(auth_len, cipher_len);
> +	linear_buf = rte_zmalloc(NULL,
> +			total_len + job->auth_tag_output_len_in_bytes, 0);
> +	if (linear_buf == NULL) {
> +		IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
> +		return -1;
> +	}
> +
...
> +static void
> +post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
> +		struct aesni_mb_session *sess, uint8_t *linear_buf)
> +{
> +	int lb_offset = 0;
> +	struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
> +			op->sym->m_src : op->sym->m_dst;
> +	uint16_t total_len, dst_len;
> +	uint64_t cipher_len, auth_len;
> +	uint8_t *dst;
> +
> +	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
> +			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
> +		cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
> +				(job->cipher_start_src_offset_in_bits >> 3);
> +	else
> +		cipher_len = job->msg_len_to_cipher_in_bytes +
> +				job->cipher_start_src_offset_in_bytes;
> +
> +	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
> +			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
> +		auth_len = (job->msg_len_to_hash_in_bits >> 3) +
> +				job->hash_start_src_offset_in_bytes;
> +	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
> +		auth_len = job->u.GCM.aad_len_in_bytes;
> +	else
> +		auth_len = job->msg_len_to_hash_in_bytes +
> +				job->hash_start_src_offset_in_bytes;
> +

The length calculation above is the same as the code in handle_sgl_linear().
Maybe you can move it into a separate helper function and remove the
duplication, e.g. something like the sketch below.
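To make the suggestion concrete, the shared helper could look roughly like
this (untested sketch, and the function name is only a placeholder; the
IMB_JOB fields are the ones already used in the quoted code above):

/*
 * Compute the cipher and auth lengths for a job once, so that
 * handle_sgl_linear() and post_process_sgl_linear() can share the logic.
 */
static void
sgl_linear_cipher_auth_len(IMB_JOB *job, uint64_t *cipher_len,
		uint64_t *auth_len)
{
	/* Bit-length ciphers report offsets/lengths in bits. */
	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
		*cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
				(job->cipher_start_src_offset_in_bits >> 3);
	else
		*cipher_len = job->msg_len_to_cipher_in_bytes +
				job->cipher_start_src_offset_in_bytes;

	/* Same idea for the auth side, with the GMAC special case. */
	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
		*auth_len = (job->msg_len_to_hash_in_bits >> 3) +
				job->hash_start_src_offset_in_bytes;
	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
		*auth_len = job->u.GCM.aad_len_in_bytes;
	else
		*auth_len = job->msg_len_to_hash_in_bytes +
				job->hash_start_src_offset_in_bytes;
}

Both functions would then start with a single call:
sgl_linear_cipher_auth_len(job, &cipher_len, &auth_len);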