> -----Original Message-----
> From: Shani Peretz <shper...@nvidia.com>
> Sent: Wednesday, 26 March 2025 9:14
> To: Akhil Goyal <gak...@marvell.com>; dev@dpdk.org
> Cc: Suanming Mou <suanmi...@nvidia.com>; sta...@dpdk.org; Brian
> Dooley <brian.doo...@intel.com>; Pablo de Lara
> <pablo.de.lara.gua...@intel.com>
> Subject: RE: [EXTERNAL] [PATCH] app/crypto-perf: fix aad offset alignment
> 
> External email: Use caution opening links or attachments
> 
> 
> > -----Original Message-----
> > From: Akhil Goyal <gak...@marvell.com>
> > Sent: Monday, 17 March 2025 12:23
> > To: Shani Peretz <shper...@nvidia.com>; dev@dpdk.org
> > Cc: Suanming Mou <suanmi...@nvidia.com>; sta...@dpdk.org; Brian
> Dooley
> > <brian.doo...@intel.com>; Pablo de Lara
> > <pablo.de.lara.gua...@intel.com>
> > Subject: RE: [EXTERNAL] [PATCH] app/crypto-perf: fix aad offset
> > alignment
> >
> >
> >
> > Hi,
> > > The AAD offset in the AES-GCM crypto test was calculated by aligning
> > > the IV length up to a 16-byte boundary, which is only required for AES-CCM.
> >
> > Agreed that CCM has a requirement for 16B alignment.
> > But for GCM, does it break any protocol? Can we not align to byte
> > boundary for performance?
> > This is a performance application whose main focus is getting the
> > best throughput.
> > Did you check whether there is any performance degradation?
> >
> > >
> > > The patch corrects the AAD offset calculation in the AES-GCM algorithm tests.
> > >
> > > Fixes: 0b242422d385 ("app/crypto-perf: set AAD after the crypto
> > > operation")
> > > Cc: sta...@dpdk.org
> > >
> > > Signed-off-by: Shani Peretz <shper...@nvidia.com>
> > > ---
> > >  app/test-crypto-perf/cperf_ops.c | 4 +++-
> > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
> > > index 6d5f510220..f9be51e17f 100644
> > > --- a/app/test-crypto-perf/cperf_ops.c
> > > +++ b/app/test-crypto-perf/cperf_ops.c
> > > @@ -688,7 +688,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
> > >       uint16_t i;
> > >       /* AAD is placed after the IV */
> > >       uint16_t aad_offset = iv_offset +
> > > -                     RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
> > > +                     ((options->aead_algo == RTE_CRYPTO_AEAD_AES_CCM) ?
> > > +                     RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16) :
> > > +                     test_vector->aead_iv.length);
> > >
> > >       for (i = 0; i < nb_ops; i++) {
> > >               struct rte_crypto_sym_op *sym_op = ops[i]->sym;
> > > --
> > > 2.25.1
> 
> I checked the throughput test and have not noticed any degradation
> compared to upstream. I can share the results if needed.
> Note that regardless of performance, the patch fixes several segmentation
> faults in the test.
> (The problem is that we allocate crypto_op_private_size bytes without
> alignment, but then access the buffer as if it were 16-byte aligned.)

Hey, is there anything else I can do to move this fix forward?
