Hi Mathieu,
On 4/16/21 4:52 PM, Mathieu Desnoyers via lttng-dev wrote:
Hi Paul, Will, Peter,
I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
is able to break rcu_dereference. This seems to be taken care of by
arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
In the liburcu user-space library, we have this comment near rcu_dereference() in include/urcu/static/pointer.h:
 * The compiler memory barrier in CMM_LOAD_SHARED() ensures that value-speculative
 * optimizations (e.g. VSS: Value Speculation Scheduling) do not perform the
 * data read before the pointer read by speculating the value of the pointer.
 * Correct ordering is ensured because the pointer is read as a volatile access.
 * This acts as a global side-effect operation, which forbids reordering of
 * dependent memory operations. Note that such concern about dependency-breaking
 * optimizations will eventually be taken care of by the "memory_order_consume"
 * addition to the forthcoming C++ standard.
(note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was introduced in
liburcu as a public API before READ_ONCE() existed in the Linux kernel)
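To make the hazard concrete, here is a minimal sketch of the publish/read pattern rcu_dereference() protects (struct foo, gp, publish() and reader() are illustrative, not liburcu API):

#include <urcu.h>

struct foo {
	int data;
};

static struct foo *gp;	/* shared; NULL until published */

/* Writer: initialize the object, then publish the pointer. */
static void publish(struct foo *p)
{
	p->data = 42;
	rcu_assign_pointer(gp, p);
}

/* Reader: the p->data load must not be speculated before the gp
 * load; rcu_dereference() relies on the address dependency between
 * the two for ordering. A dependency-breaking optimization (e.g.
 * under LTO) could let the reader observe pre-initialization data. */
static int reader(void)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		ret = p->data;
	rcu_read_unlock();
	return ret;
}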
This is not directly on topic, but what do you think of porting userspace RCU to the C11/C++11 memory model and the GCC/LLVM atomic builtins (__atomic_store(), etc.) rather than rolling your own? Tools like ThreadSanitizer would then understand what userspace RCU is doing, and so would the compiler. More developers would understand it too!
From a code organization viewpoint, going down this path would presumably mean
directly using GCC/LLVM atomic support when available, and falling back on
something like the current uatomic to emulate them for older compilers.
Some parts of uatomic have pretty clear equivalents (see below), but not all, so
the conversion could be quite tricky.
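For instance, a rough sketch of what one of the clearer mappings might look like (uatomic_add_return() and uatomic_cmpxchg() are the existing liburcu primitives; the seq_cst orderings are my assumption, to approximate their documented full-barrier behaviour, and uatomic_cmpxchg_ul() is a hypothetical typed helper rather than the real type-generic macro):

/* Hypothetical port of two uatomic primitives to the builtins. */
#define uatomic_add_return(addr, v) \
	__atomic_add_fetch((addr), (v), __ATOMIC_SEQ_CST)

static inline unsigned long
uatomic_cmpxchg_ul(unsigned long *addr, unsigned long old,
		   unsigned long newval)
{
	/* On failure, 'old' is updated to the value observed, so
	 * returning it gives the usual cmpxchg convention. */
	__atomic_compare_exchange_n(addr, &old, newval, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return old;
}

The type-generic macros and the ops with no direct builtin are where the emulation layer would have to step in.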
Peter tells me "memory_order_consume" is not something which can be used today.
This is a pity, because it seems to have been invented with rcu_dereference in mind.
Any information on its status at the C/C++ standard level and implementation-wise?
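For reference, here is roughly what the consume-based reader was meant to look like (a C11 <stdatomic.h> sketch; struct foo and gp are illustrative). As far as I know, current GCC and Clang releases simply promote memory_order_consume to memory_order_acquire, so it buys nothing over an acquire load today:

#include <stdatomic.h>

struct foo { int data; };
static _Atomic(struct foo *) gp;

static int reader(void)
{
	/* Intended to order the p->data load through the address
	 * dependency alone, with no barrier even on weakly-ordered
	 * CPUs; compilers currently emit an acquire load instead. */
	struct foo *p = atomic_load_explicit(&gp, memory_order_consume);

	return p ? p->data : -1;
}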
Pragmatically speaking, what should we change in liburcu to ensure we don't generate broken code when LTO is enabled? I suspect there are a few options here:
1) Fail to build if LTO is enabled,
2) Generate slower code for rcu_dereference, either on all architectures or
   only on weakly-ordered architectures,
3) Generate different code depending on whether LTO is enabled or not (a
   sketch follows this list). AFAIU this would only work if every compile
   unit is aware that it will end up being optimized with LTO. Not sure how
   this could be done in the context of user-space.
4) [ Insert better idea here. ]
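For (3), a hypothetical sketch, assuming the compiler advertises LTO per translation unit (GCC predefines __LTO__ under -flto; I have not checked other toolchains, and _rcu_dereference() stands in for whatever the real definition site would be):

#ifdef __LTO__
/* Under LTO the volatile access alone may not stop dependency-
 * breaking optimizations across compile units, so fall back to
 * an acquire load. */
#define _rcu_dereference(p)	__atomic_load_n(&(p), __ATOMIC_ACQUIRE)
#else
#define _rcu_dereference(p)	_CMM_LOAD_SHARED(p)
#endif

The gap is exactly the one noted in (3): any object built without -flto, including prebuilt binaries linking against liburcu, would still take the unprotected path.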
Thoughts?
Best wishes, Duncan.
PS: We are experimentally running with the following patch, as it already makes ThreadSanitizer a lot happier:
--- a/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
+++ b/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
@@ -26,34 +26,45 @@
* Identify a shared load. A cmm_smp_rmc() or cmm_smp_mc() should come
* before the load.
*/
-#define _CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)
+#define _CMM_LOAD_SHARED(p)					\
+	__extension__						\
+	({							\
+		__typeof__(p) _v;				\
+		__atomic_load(&(p), &_v, __ATOMIC_RELAXED);	\
+		_v;						\
+	})
/*
* Load a data from shared memory, doing a cache flush if required.
*/
-#define CMM_LOAD_SHARED(p)			\
-	__extension__				\
-	({					\
-		cmm_smp_rmc();			\
-		_CMM_LOAD_SHARED(p);		\
+#define CMM_LOAD_SHARED(p)					\
+	__extension__						\
+	({							\
+		__typeof__(p) _v;				\
+		__atomic_load(&(p), &_v, __ATOMIC_ACQUIRE);	\
+		_v;						\
})
/*
* Identify a shared store. A cmm_smp_wmc() or cmm_smp_mc() should
* follow the store.
*/
-#define _CMM_STORE_SHARED(x, v)	__extension__ ({ CMM_ACCESS_ONCE(x) = (v); })
+#define _CMM_STORE_SHARED(x, v)				\
+	__extension__						\
+	({							\
+		__typeof__(x) _w = (v);				\
+		__atomic_store(&(x), &_w, __ATOMIC_RELAXED);	\
+		_w;						\
+	})
/*
* Store v into x, where x is located in shared memory. Performs the
* required cache flush after writing. Returns v.
*/
-#define CMM_STORE_SHARED(x, v)					\
-	__extension__						\
-	({							\
-		__typeof__(x) _v = _CMM_STORE_SHARED(x, v);	\
-		cmm_smp_wmc();					\
-		_v = _v;	/* Work around clang "unused result" */ \
+#define CMM_STORE_SHARED(x, v)					\
+	__extension__						\
+	({							\
+		__typeof__(x) _w = (v);				\
+		__atomic_store(&(x), &_w, __ATOMIC_RELEASE);	\
+		_w;	/* Match documented "Returns v" semantics */ \
})
#endif /* _URCU_SYSTEM_H */