On 2023-03-21 06:21, Ondřej Surý wrote:
On 20. 3. 2023, at 19:37, Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
wrote:
On 2023-03-17 17:37, Ondřej Surý via lttng-dev wrote:
FIXME: This is an experiment that adds an explicit memory barrier in
free_completion() in workqueue.c, so ThreadSanitizer knows it is OK to
free the resources.
Signed-off-by: Ondřej Surý <ond...@sury.org>
---
src/workqueue.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/workqueue.c b/src/workqueue.c
index 1039d72..f21907f 100644
--- a/src/workqueue.c
+++ b/src/workqueue.c
@@ -377,6 +377,7 @@ void free_completion(struct urcu_ref *ref)
struct urcu_workqueue_completion *completion;
completion = caa_container_of(ref, struct urcu_workqueue_completion, ref);
+ assert(!urcu_ref_get_unless_zero(&completion->ref));
Perhaps what we really want here is an ANNOTATE_UNPUBLISH_MEMORY_RANGE() of
some sort?
I guess?
My experience with TSAN tells me that you need some kind of memory barrier
when using acquire-release semantics. When you do:
	if (__atomic_sub_fetch(&obj->ref, 1, __ATOMIC_RELEASE) == 0) {
		/* an __ATOMIC_ACQUIRE barrier is needed here */
		free(obj);
	}
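
(For reference, the barrier that comment refers to is classically written as a
standalone acquire fence on the zero path. This is only a minimal sketch using
the GCC atomic builtins, not code taken from BIND 9 or liburcu:

	if (__atomic_sub_fetch(&obj->ref, 1, __ATOMIC_RELEASE) == 0) {
		/* Make every thread's prior writes to obj visible
		 * before tearing the object down. */
		__atomic_thread_fence(__ATOMIC_ACQUIRE);
		free(obj);
	}

TSAN historically does not model standalone fences well, though, which is why
we fold the acquire into the decrement itself, as below.)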
We ended up using the following code in BIND 9:
	if (__atomic_sub_fetch(&obj->ref, 1, __ATOMIC_ACQ_REL) == 0) {
		free(obj);
	}
So I am guessing that after the change of uatomic_sub_return() to
__ATOMIC_ACQ_REL, this patch should no longer be needed.
Actually we want __ATOMIC_SEQ_CST, which is even stronger than ACQ_REL.
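
Expressed with the builtins above, that would be something along these lines
(just a sketch of the ordering, not the actual uatomic_sub_return()
implementation in liburcu):

	if (__atomic_sub_fetch(&obj->ref, 1, __ATOMIC_SEQ_CST) == 0) {
		/* The decrement is fully ordered, so the teardown cannot
		 * be reordered with accesses before or after it. */
		free(obj);
	}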
Thanks,
Mathieu
Ondrej
--
Ondřej Surý (He/Him)
ond...@sury.org
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev