When a reference counted object is also RCU protected, the deletion
of the object's memory is always postponed.  This allows
memory_order_relaxed to be used for unreferencing as well, as RCU
quiescing provides a full memory barrier (it has to, or otherwise
there could be lingering accesses to objects after they are
recycled).
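
For example, last-reference handling can look like the following
minimal sketch ('struct obj', obj_destroy() and obj_unref() are
hypothetical names, not part of this patch):

    /* Hypothetical refcounted, RCU protected type. */
    struct obj {
        struct ovs_refcount ref_cnt;
        char *data;
    };

    static void
    obj_destroy(struct obj *obj)
    {
        /* Runs only after every thread that may still have been
         * accessing 'obj' has quiesced, so plain frees are safe. */
        free(obj->data);
        free(obj);
    }

    void
    obj_unref(struct obj *obj)
    {
        if (obj && ovs_refcount_unref_relaxed(&obj->ref_cnt) == 1) {
            /* Last reference gone: postpone freeing.  The quiescing
             * that must happen before obj_destroy() runs provides
             * the full memory barrier, so relaxed ordering in the
             * unref is sufficient. */
            ovsrcu_postpone(obj_destroy, obj);
        }
    }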

Also, when access to the reference counted object is protected by a
mutex or other lock, the locking primitives provide the required
memory barrier functionality (see the second example in the
ovs_refcount_unref_relaxed() comment below).

Also, add ovs_refcount_try_ref_rcu(), which takes a reference only if
the refcount is non-zero and returns true if a reference was taken,
false otherwise.  This can be used in combined RCU/refcount scenarios
where we have an RCU-protected reference to a refcounted object that
may be unref'ed at any time.  If ovs_refcount_try_ref_rcu() fails,
the object may still be safely used until the current thread
quiesces, as sketched below.
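
For illustration, a combined RCU/refcount lookup might look like the
following sketch ('entryp', 'struct entry' and use_entry() are
hypothetical names, not part of this patch):

    /* RCU-protected pointer to a refcounted 'struct entry'. */
    static OVSRCU_TYPE(struct entry *) entryp;

    void
    entry_use_if_alive(void)
    {
        struct entry *entry = ovsrcu_get(struct entry *, &entryp);

        if (entry && ovs_refcount_try_ref_rcu(&entry->ref_cnt)) {
            /* Reference taken: 'entry' remains valid even after
             * this thread quiesces, until we unref it. */
            use_entry(entry);
            ovs_refcount_unref_relaxed(&entry->ref_cnt);
        } else if (entry) {
            /* Refcount already reached zero: 'entry' is scheduled
             * for destruction, but may still be read safely until
             * this thread quiesces. */
        }
    }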

Signed-off-by: Jarno Rajahalme <jrajaha...@nicira.com>
---
v2: Added ovs_refcount_unref_relaxed(),
    added '_rcu' to the name of ovs_refcount_try_ref_rcu().

 lib/ovs-atomic.h |   74 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/ovs-rcu.c    |    5 +++-
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/lib/ovs-atomic.h b/lib/ovs-atomic.h
index 95142f5..663a480 100644
--- a/lib/ovs-atomic.h
+++ b/lib/ovs-atomic.h
@@ -400,4 +400,78 @@ ovs_refcount_read(const struct ovs_refcount *refcount_)
     return count;
 }
 
+/* Increments 'refcount', but only if it is non-zero.
+ *
+ * This may only be called on an object which is RCU protected during
+ * this call.  This implies that its possible destruction is
+ * postponed until all current RCU threads quiesce.
+ *
+ * Returns false if the refcount was zero.  In this case the object may
+ * be safely accessed until the current thread quiesces, but no additional
+ * references to the object may be taken.
+ *
+ * Does not provide a memory barrier, as the calling thread must have
+ * RCU protected access to the object already.
+ *
+ * It is critical that we never increment a zero refcount to a
+ * non-zero value, as whenever a refcount reaches the zero value, the
+ * protected object may be irrevocably scheduled for deletion. */
+static inline bool
+ovs_refcount_try_ref_rcu(struct ovs_refcount *refcount)
+{
+    unsigned int count;
+
+    atomic_read_explicit(&refcount->count, &count, memory_order_relaxed);
+    do {
+        if (count == 0) {
+            return false;
+        }
+    } while (!atomic_compare_exchange_weak_explicit(&refcount->count, &count,
+                                                    count + 1,
+                                                    memory_order_relaxed,
+                                                    memory_order_relaxed));
+    return true;
+}
+
+/* Decrements 'refcount' and returns the previous reference count.  To
+ * be used only when a memory barrier is already provided for the
+ * protected object independently.
+ *
+ * For example:
+ *
+ * if (ovs_refcount_unref_relaxed(&object->ref_cnt) == 1) {
+ *     // Schedule uninitialization and freeing of the object:
+ *     ovsrcu_postpone(destructor_function, object);
+ * }
+ *
+ * Here RCU quiescing already provides a full memory barrier, so no
+ * additional barriers are needed.
+ *
+ * Or:
+ *
+ * if (stp && ovs_refcount_unref_relaxed(&stp->ref_cnt) == 1) {
+ *     ovs_mutex_lock(&mutex);
+ *     list_remove(&stp->node);
+ *     ovs_mutex_unlock(&mutex);
+ *     free(stp->name);
+ *     free(stp);
+ * }
+ *
+ * Here a mutex is used to guard access to all of 'stp' apart from
+ * 'ref_cnt'.  Hence all changes to 'stp' by other threads must be
+ * visible when we get the mutex, and no access after the unlock can
+ * be reordered to happen prior to the lock operation.  No additional
+ * barriers are needed here.
+ */
+static inline unsigned int
+ovs_refcount_unref_relaxed(struct ovs_refcount *refcount)
+{
+    unsigned int old_refcount;
+
+    atomic_sub_explicit(&refcount->count, 1, &old_refcount,
+                        memory_order_relaxed);
+    ovs_assert(old_refcount > 0);
+    return old_refcount;
+}
+
 #endif /* ovs-atomic.h */
diff --git a/lib/ovs-rcu.c b/lib/ovs-rcu.c
index 62fe614..050a2ef 100644
--- a/lib/ovs-rcu.c
+++ b/lib/ovs-rcu.c
@@ -132,7 +132,10 @@ ovsrcu_quiesce_start(void)
 }
 
 /* Indicates a momentary quiescent state.  See "Details" near the top of
- * ovs-rcu.h. */
+ * ovs-rcu.h.
+ *
+ * Provides a full memory barrier via seq_change().
+ */
 void
 ovsrcu_quiesce(void)
 {
-- 
1.7.10.4
