> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:honnappa.nagaraha...@arm.com]
> Sent: Tuesday, October 1, 2019 7:29 AM
> To: honnappa.nagaraha...@arm.com; Ananyev, Konstantin
> <konstantin.anan...@intel.com>; step...@networkplumber.org;
> paul...@linux.ibm.com
> Cc: Wang, Yipeng1 <yipeng1.w...@intel.com>; Medvedkin, Vladimir
> <vladimir.medved...@intel.com>; ruifeng.w...@arm.com;
> dharmik.thak...@arm.com; dev@dpdk.org; n...@arm.com
> Subject: [PATCH v3 1/3] lib/ring: add peek API
>
> From: Ruifeng Wang <ruifeng.w...@arm.com>
>
> The peek API allows fetching the next available object in the ring
> without dequeuing it. This helps in scenarios where dequeuing an
> object depends on its value.
>
> Signed-off-by: Dharmik Thakkar <dharmik.thak...@arm.com>
> Signed-off-by: Ruifeng Wang <ruifeng.w...@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
> Reviewed-by: Gavin Hu <gavin...@arm.com>
> ---
> lib/librte_ring/rte_ring.h | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> index 2a9f768a1..d3d0d5e18 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -953,6 +953,36 @@ rte_ring_dequeue_burst(struct rte_ring *r, void
> **obj_table,
> r->cons.single, available);
> }
>
> +/**
> + * Peek one object from a ring.
> + *
> + * The peek API allows fetching the next available object in the ring
> + * without dequeuing it. This API is not multi-thread safe with respect
> + * to other consumer threads.
> + *
> + * @param r
> + * A pointer to the ring structure.
> + * @param obj_p
> + * A pointer to a void * pointer (object) that will be filled.
> + * @return
> + * - 0: Success, object available
> + * - -ENOENT: Not enough entries in the ring.
> + */
> +__rte_experimental
> +static __rte_always_inline int
> +rte_ring_peek(struct rte_ring *r, void **obj_p)
As it is not MT safe, I think we need _sc_ in the name,
to follow the naming convention of the other rte_ring functions
(rte_ring_sc_peek() or so).
As a better alternative, what do you think about introducing
serialized versions of the DPDK rte_ring dequeue functions?
Something like this:
/* same as the original ring dequeue, but:
 * 1) move cons.head only if cons.head == cons.tail
 * 2) don't update cons.tail
 */
unsigned int
rte_ring_serial_dequeue_bulk(struct rte_ring *r, void **obj_table,
	unsigned int n, unsigned int *available);

/* sets both cons.head and cons.tail to cons.head + num */
void rte_ring_serial_dequeue_finish(struct rte_ring *r, uint32_t num);

/* resets cons.head to the cons.tail value */
void rte_ring_serial_dequeue_abort(struct rte_ring *r);
Then your dq_reclaim cycle function will look like this:

const uint32_t nb_elt = dq->elt_size / 8 + 1;
uint32_t avl, n;
uintptr_t elt[nb_elt];
...
do {
	/* read the next element from the queue */
	n = rte_ring_serial_dequeue_bulk(dq->r, elt, nb_elt, &avl);
	if (n == 0)
		break;

	/* wrong period, keep the element in the queue */
	if (rte_rcu_qsbr_check(dq->v, elt[0]) != 1) {
		rte_ring_serial_dequeue_abort(dq->r);
		break;
	}

	/* can reclaim, remove the element from the queue */
	rte_ring_serial_dequeue_finish(dq->r, nb_elt);

	/* call the reclaim function */
	dq->f(dq->p, elt);
} while (avl >= nb_elt);
That way, I think even rte_rcu_qsbr_dq_reclaim() can be MT safe,
as long as the actual reclamation callback itself is MT safe, of course.
> +{
> + uint32_t prod_tail = r->prod.tail;
> + uint32_t cons_head = r->cons.head;
> + uint32_t count = (prod_tail - cons_head) & r->mask;
> + unsigned int n = 1;
> + if (count) {
> + DEQUEUE_PTRS(r, &r[1], cons_head, obj_p, n, void *);
> + return 0;
> + }
> + return -ENOENT;
> +}
> +
> #ifdef __cplusplus
> }
> #endif
> --
> 2.17.1