Hi Abhi,

> -----Original Message-----
> From: Gujjar, Abhinandan S <abhinandan.guj...@intel.com>
> Sent: Tuesday, December 6, 2022 9:56 PM
> To: Kundapura, Ganapati <ganapati.kundap...@intel.com>; dev@dpdk.org;
> jer...@marvell.com; Naga Harish K, S V <s.v.naga.haris...@intel.com>
> Cc: Jayatheerthan, Jay <jay.jayatheert...@intel.com>
> Subject: RE: [PATCH v2 4/5] eventdev/crypto: fix overflow in circular buffer
> 
> 
> 
> > -----Original Message-----
> > From: Kundapura, Ganapati <ganapati.kundap...@intel.com>
> > Sent: Thursday, December 1, 2022 12:17 PM
> > To: dev@dpdk.org; jer...@marvell.com; Naga Harish K, S V
> > <s.v.naga.haris...@intel.com>; Gujjar, Abhinandan S
> > <abhinandan.guj...@intel.com>
> > Cc: Jayatheerthan, Jay <jay.jayatheert...@intel.com>
> > Subject: [PATCH v2 4/5] eventdev/crypto: fix overflow in circular
> > buffer
> >
> > Crypto adapter checks CPM backpressure once in enq_run(). This leads
> > to a buffer overflow if some ops fail to flush to the cryptodev.
> The adapter is agnostic to the hardware; replace "CPM" with "crypto device".
> 
> Rephrase the commit message by adding:
> In case of crypto enqueue failures, even though the backpressure flag is
> set to stop further dequeues from the eventdev, the current logic does not
> stop dequeuing events until max_nb events have been processed. This is
> fixed by checking backpressure just before dequeuing events from the
> event device.
> 
Updated in V3.
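
To illustrate the intent, here is a small standalone sketch of the pattern
(all names, e.g. fake_adapter and flush(), and the watermark logic are made
up for illustration; this is not the adapter's actual code):

#include <stdbool.h>
#include <stdio.h>

#define BUF_SIZE   16  /* capacity of the circular buffer */
#define BATCH_SIZE 4   /* events dequeued per poll */
#define MAX_ENQ    32  /* max events processed per enq_run() call */

struct fake_adapter {
    int  buffered;               /* ops currently held in the buffer */
    bool stop_enq_to_cryptodev;  /* backpressure flag */
};

/* Worst case: the crypto device drains nothing, the flag stays set. */
static void flush(struct fake_adapter *a)
{
    (void)a;
}

static void enq_run(struct fake_adapter *a)
{
    int nb_enq;

    for (nb_enq = 0; nb_enq < MAX_ENQ; nb_enq += BATCH_SIZE) {
        /* The fix: check backpressure on every iteration. */
        if (a->stop_enq_to_cryptodev) {
            flush(a);
            if (a->stop_enq_to_cryptodev)
                break; /* still backpressured, stop dequeuing */
        }

        /* "Dequeue" a batch from the event device and buffer it. */
        a->buffered += BATCH_SIZE;

        /* The device pushes back at the high watermark. */
        if (a->buffered >= BUF_SIZE - BATCH_SIZE)
            a->stop_enq_to_cryptodev = true;
    }

    printf("buffered=%d (capacity %d)\n", a->buffered, BUF_SIZE);
}

int main(void)
{
    struct fake_adapter a = { 0, false };

    enq_run(&a);
    return 0;
}

With the flag re-evaluated before each dequeue burst, at most one extra
batch lands in the buffer after the device pushes back (buffered stops at
12 of 16 here); with the check done only once before the loop, buffered
would grow to MAX_ENQ (32) and overflow the buffer.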
> >
> > Checked CPM backpressure for every iteration in enq_run()
> >
> > Fixes: 7901eac3409a ("eventdev: add crypto adapter implementation")
> >
> > Signed-off-by: Ganapati Kundapura <ganapati.kundap...@intel.com>
> > ---
> > v2:
> > * Updated subject line in commit message
> >
> > diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
> > index 72deedd..1d39c5b 100644
> > --- a/lib/eventdev/rte_event_crypto_adapter.c
> > +++ b/lib/eventdev/rte_event_crypto_adapter.c
> > @@ -573,14 +573,15 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
> >     if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
> >             return 0;
> >
> > -   if (unlikely(adapter->stop_enq_to_cryptodev)) {
> > -           nb_enqueued += eca_crypto_enq_flush(adapter);
> > +   for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
> >
> > -           if (unlikely(adapter->stop_enq_to_cryptodev))
> > -                   goto skip_event_dequeue_burst;
> > -   }
> > +           if (unlikely(adapter->stop_enq_to_cryptodev)) {
> > +                   nb_enqueued += eca_crypto_enq_flush(adapter);
> > +
> > +                   if (unlikely(adapter->stop_enq_to_cryptodev))
> > +                           break;
> > +           }
> >
> > -   for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
> >             stats->event_poll_count++;
> >             n = rte_event_dequeue_burst(event_dev_id,
> >                                         event_port_id, ev, BATCH_SIZE, 0);
> > @@ -591,8 +592,6 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
> >             nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
> >     }
> >
> > -skip_event_dequeue_burst:
> > -
> >     if ((++adapter->transmit_loop_count &
> >             (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
> >             nb_enqueued += eca_crypto_enq_flush(adapter);
> > --
> > 2.6.4
