This new entry point allows applications to guarantee that all work already pushed to the `call_rcu()` queues has completed before, for instance, tearing themselves down.
Signed-off-by: Jérémie Galarneau <jeremie.galarn...@efficios.com>
---
 doc/rcu-api.md       | 23 ++++++++++++++++++++---
 urcu-call-rcu-impl.h | 36 ++++++++++++++++++++++++++++++++++++
 urcu-call-rcu.h      |  1 +
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/doc/rcu-api.md b/doc/rcu-api.md
index ea316d1..798dd6f 100644
--- a/doc/rcu-api.md
+++ b/doc/rcu-api.md
@@ -99,9 +99,26 @@ void rcu_barrier(void);
 Wait for all `call_rcu()` work initiated prior to `rcu_barrier()` by
 _any_ thread on the system to have completed before `rcu_barrier()`
 returns. `rcu_barrier()` should never be called from a `call_rcu()`
-thread. This function can be used, for instance, to ensure that
-all memory reclaim involving a shared object has completed
-before allowing `dlclose()` of this shared object to complete.
+thread.
+
+
+```c
+void rcu_barrier_finalize(void);
+```
+
+Wait for all `call_rcu()` work initiated by _any_ thread prior to
+`rcu_barrier_finalize()`, along with all `call_rcu()` work chained
+by these `call_rcu()` callbacks, to have completed before returning.
+
+`rcu_barrier_finalize()` should never be called from a `call_rcu()`
+thread. This function can be used, for instance, to ensure that all
+memory reclamation involving a shared object has completed before
+allowing `dlclose()` of this shared object to complete.
+
+`rcu_barrier_finalize()` should not be used if the application does
+not bound its chaining of `call_rcu()`. Otherwise,
+`rcu_barrier_finalize()` may never finish since the execution of
+`call_rcu()` callbacks may keep generating new work indefinitely.
 
 
 ```c
diff --git a/urcu-call-rcu-impl.h b/urcu-call-rcu-impl.h
index 65f63ee..e3a3b4e 100644
--- a/urcu-call-rcu-impl.h
+++ b/urcu-call-rcu-impl.h
@@ -901,6 +901,42 @@ online:
 }
 
 /*
+ * Wait for all in-flight call_rcu callbacks, and whichever call_rcu callbacks
+ * they might in turn add, to complete execution before returning.
+ *
+ * It may also complete more work than necessary since other threads could also
+ * be inserting new work items in the work queue.
+ */
+void rcu_barrier_finalize(void)
+{
+	for (;;) {
+		int work_left = 0;
+		struct call_rcu_data *crdp;
+		/*
+		 * Callbacks executed during rcu_barrier() might insert new work
+		 * in the call_rcu queues. Therefore, we have to loop until all
+		 * queues are observed to be empty.
+		 */
+		rcu_barrier();
+		/* Protect access to call_rcu_data_list */
+		call_rcu_lock(&call_rcu_mutex);
+		cds_list_for_each_entry(crdp, &call_rcu_data_list, list) {
+			if (!cds_wfcq_empty(&crdp->cbs_head, &crdp->cbs_tail)) {
+				work_left = 1;
+				break;
+			}
+		}
+		call_rcu_unlock(&call_rcu_mutex);
+		/*
+		 * Ensure that newly issued work items are also completed before
+		 * returning.
+		 */
+		if (!work_left)
+			return;
+	}
+}
+
+/*
  * Acquire the call_rcu_mutex in order to ensure that the child sees
  * all of the call_rcu() data structures in a consistent state. Ensure
  * that all call_rcu threads are in a quiescent state across fork.
diff --git a/urcu-call-rcu.h b/urcu-call-rcu.h
index 339ebac..7c04a4a 100644
--- a/urcu-call-rcu.h
+++ b/urcu-call-rcu.h
@@ -93,6 +93,7 @@ void call_rcu_after_fork_parent(void);
 void call_rcu_after_fork_child(void);
 
 void rcu_barrier(void);
+void rcu_barrier_finalize(void);
 
 #ifdef __cplusplus
 }
-- 
2.6.4

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev