On May 23, 2014, at 10:23 AM, Ben Pfaff <[email protected]> wrote:
> On Thu, May 22, 2014 at 05:37:41PM -0700, Jarno Rajahalme wrote:
>> Add cmap_replace() and cmap_first(), as well as CMAP_FOR_EACH_SAFE and
>> CMAP_FOR_EACH_CONTINUE, to make porting existing hmap-using code a bit
>> easier.
>>
>> CMAP_FOR_EACH_SAFE is useful in RCU postponed destructors, when it is
>> known that additional postponing is not needed.
>>
>> Signed-off-by: Jarno Rajahalme <[email protected]>
>
> I am not sure that ovsrcu_init() may be used in cmap_replace__() here.
> I believe that we need to make sure that the store to replacement->next
> becomes visible before the store to b->nodes[slot]. I am not sure that
> a single release fence is sufficient, because I believe that the write
> to replacement->next could still get reordered after the write to
> b->nodes[slot]. After all, both are on the same side of the release
> fence:
>
>     /* The pointer to 'node' is changed to point to 'replacement',
>      * which is the next node if no replacement node is given. */
>     if (!replacement) {
>         replacement = cmap_node_next_protected(node);
>     } else {
>         /* 'replacement' takes the position of 'node' in the list. */
>         ovsrcu_init(&replacement->next, cmap_node_next_protected(node));
>     }
>
>     if (b->nodes[slot] == node) {
>         b->nodes[slot] = replacement;
>         atomic_thread_fence(memory_order_release);
>
It seems to me that moving the fence up one line would solve the problem, no?
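
For clarity, here is a minimal C11 sketch (not the cmap code itself; the
struct and function names are made up) of the publish pattern the moved
fence is meant to guarantee: the plain store to the replacement's 'next'
must become visible before the store that publishes the replacement.

    #include <stdatomic.h>

    struct node {
        struct node *next;      /* Must be initialized before publication. */
        int data;
    };

    /* Splices 'replacement' in place of 'victim' at the list head '*headp'.
     * The release fence orders the plain store to replacement->next before
     * the publishing store; readers pair with an acquire load or fence. */
    static void
    publish_replacement(_Atomic(struct node *) *headp,
                        struct node *replacement, struct node *victim)
    {
        replacement->next = victim->next;
        atomic_thread_fence(memory_order_release);
        atomic_store_explicit(headp, replacement, memory_order_relaxed);
    }
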
> The purpose of CMAP_FOR_EACH_SAFE in cmap is different enough from the
> typical use to deserve an explanatory comment. (At least, the comment
> claiming that there is no CMAP_FOR_EACH_SAFE macro needs to be fixed.)
>
How about this incremental:
---
lib/cmap.c | 2 +-
lib/cmap.h | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/cmap.c b/lib/cmap.c
index 6c23a62..ed596a8 100644
--- a/lib/cmap.c
+++ b/lib/cmap.c
@@ -669,8 +669,8 @@ cmap_replace__(struct cmap_impl *impl, struct cmap_node *node,
     }
 
     if (b->nodes[slot] == node) {
-        b->nodes[slot] = replacement;
         atomic_thread_fence(memory_order_release);
+        b->nodes[slot] = replacement;
     } else {
         struct cmap_node *iter = b->nodes[slot];
         for (;;) {
diff --git a/lib/cmap.h b/lib/cmap.h
index 2ca23ef..2292a75 100644
--- a/lib/cmap.h
+++ b/lib/cmap.h
@@ -163,9 +163,9 @@ struct cmap_node *cmap_find_protected(const struct cmap *, uint32_t hash);
  *     ...operate on my_node...
  * }
  *
- * There is no CMAP_FOR_EACH_SAFE variant because it would be rarely useful:
- * usually destruction of an element has to wait for an RCU grace period to
- * expire.
+ * The CMAP_FOR_EACH_SAFE variant is useful only in deallocation code that
+ * already executes in an RCU-postponed context, where the grace period is
+ * known to have expired already.
  */
 #define CMAP_FOR_EACH(NODE, MEMBER, CURSOR, CMAP) \
     for ((cmap_cursor_init(CURSOR, CMAP), \
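
To illustrate the use described in the new comment, here is a rough sketch
of a postponed destructor. The CMAP_FOR_EACH_SAFE argument list below is an
assumption (CMAP_FOR_EACH's arguments plus a NEXT node pointer, mirroring
HMAP_FOR_EACH_SAFE), so the macro in the patch may differ, and 'struct
flow_entry' and 'flow_table_destroy_cb' are made-up names:

    #include <stdlib.h>
    #include "cmap.h"   /* OVS header providing cmap and iteration macros. */

    struct flow_entry {
        struct cmap_node cmap_node;   /* Node within the cmap. */
        /* ...per-flow data... */
    };

    /* Scheduled earlier with ovsrcu_postpone(), so by the time this runs
     * the RCU grace period has already expired and no reader can still
     * hold a pointer into 'table'. */
    static void
    flow_table_destroy_cb(struct cmap *table)
    {
        struct flow_entry *entry, *next;
        struct cmap_cursor cursor;

        /* SAFE fetches the next node before 'entry' is freed, so freeing
         * immediately (without another postpone) is fine here. */
        CMAP_FOR_EACH_SAFE (entry, next, cmap_node, &cursor, table) {
            free(entry);
        }
        cmap_destroy(table);
        free(table);   /* Assuming 'table' itself was heap-allocated. */
    }
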
Jarno