Hi Akhil

> -----Original Message-----
... 
> > +
> > +Either enqueue functions will not command the crypto device to start
> > processing
> > +until ``rte_cryptodev_dp_submit_done`` function is called. Before then
> the user
> > +shall expect the driver only stores the necessory context data in the
> > +``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If
> the
> > user
> > +wants to abandon the submitted operations, simply call
> > +``rte_cryptodev_dp_configure_service`` function instead with the
> parameter
> > +``is_update`` set to 0. The driver will recover the service context data to
> > +the previous state.
> 
> Can you explain a use case where this is actually being used? This looks fancy
> but
> Do we have this type of requirement in any protocol stacks/specifications?
> I believe it to be an extra burden on the application writer if it is not a
> protocol requirement.
> 

I missed responding to this one.
The requirement comes from coping with the VPP crypto framework.

The reason for this feature is to fill a gap between the cryptodev enqueue
and dequeue operations.
If the user application/library clusters multiple crypto ops into a burst
with an approach similar to "rte_crypto_sym_vec" (such as VPP's
vnet_crypto_async_frame_t), the application requires that all ops in the
burst are enqueued and dequeued as a whole, or not at all.
It is very slow to achieve this today with
rte_cryptodev_enqueue/dequeue_burst, as the user has no precise control
over how many ops get enqueued/dequeued. For example, suppose I want to
enqueue a "rte_crypto_sym_vec" buffer containing 32 descriptors, storing
the "rte_crypto_sym_vec" pointer as opaque data at enqueue time, but
rte_cryptodev_enqueue_burst returns 31: I have no option but to cache the
1 remaining job for the next enqueue attempt (or manually check the
inflight count on every enqueue). Likewise during dequeue, since the
number "32" is stored inside rte_crypto_sym_vec.num, I have no way to know
in advance how many ops to dequeue; I can only blindly dequeue them, store
them in a software ring, parse the dequeue count from the retrieved opaque
data, and check the ring count against that count.

With the new API this goal is easy to achieve. For a HW crypto PMD the 
implementation is relatively simple: we only need to keep a shadow copy of 
the queue pair data in ``rte_crypto_dp_service_ctx`` and update it during 
enqueue/dequeue. When "enqueue/dequeue_done" is called, the queue is 
kicked to start processing the jobs already set in it, and the shadow copy 
is merged into the driver-maintained queue data.

Regards,
Fan
