https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78158
--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
That is clearly due to bugs in tsan.c, introduced in my PR55439 r194133
changes.  At -O0 the code is not optimized, so we end up with e.g.

  __m_6 = 5;
  __b_7 = std::operator& (__m_6, 65535);
  __m.10_8 = (int) __m_6;
  _9 = &this_5->_M_i;
  _10 = __atomic_load_1 (_9, __m.10_8);

or

  __m_12 = __m_7(D);
  _13 = std::__cmpexch_failure_order (__m_12);
  _14 = (int) __i2_11;
  this_15 = this_9;
  __i1_16 = __i1_10;
  __i2_17 = (__int_type) _14;
  __m1_18 = __m_12;
  __m2_19 = _13;
  __b2_20 = std::operator& (__m2_19, 65535);
  __b1_21 = std::operator& (__m1_18, 65535);
  __m2.16_22 = (int) __m2_19;
  __m1.17_23 = (int) __m1_18;
  _24 = (int) __i2_17;
  _25 = &this_15->_M_i;
  _26 = __atomic_compare_exchange_1 (_25, __i1_16, _24, 0, __m1.17_23, __m2.16_22);

and then we run into this in instrument_builtin_call:

    case check_last:
    case fetch_op:
      last_arg = gimple_call_arg (stmt, num - 1);
      if (!tree_fits_uhwi_p (last_arg)
          || memmodel_base (tree_to_uhwi (last_arg)) >= MEMMODEL_LAST)
        return;   // <<<<======== HERE
...
    case weak_cas:
      if (!integer_nonzerop (gimple_call_arg (stmt, 3)))
        continue;   // <<<<======== HERE
      /* FALLTHRU */
    case strong_cas:
      gcc_assert (num == 6);
      for (j = 0; j < 6; j++)
        args[j] = gimple_call_arg (stmt, j);
      if (!tree_fits_uhwi_p (args[4])
          || memmodel_base (tree_to_uhwi (args[4])) >= MEMMODEL_LAST)
        return;   // <<<<======== HERE
      if (!tree_fits_uhwi_p (args[5])
          || memmodel_base (tree_to_uhwi (args[5])) >= MEMMODEL_LAST)
        return;   // <<<<======== HERE
...
      if (!tree_fits_uhwi_p (last_arg)
          || memmodel_base (tree_to_uhwi (last_arg)) >= MEMMODEL_LAST)
        return;   // <<<<======== HERE

If last_arg or gimple_call_arg (stmt, 3) is not an INTEGER_CST but an
SSA_NAME, we could in some very limited cases walk the SSA_NAME definition
statements and try to figure out a constant (see the sketch at the end of
this comment), but as the snippets above show, that does not really help
here: the memory model flows through calls such as std::operator& and
std::__cmpexch_failure_order, which are not inlined at -O0.

Thus I think that when we can't check the memory model value at compile
time, we need to emit a runtime check that does the same thing, and use
the tsan builtins if the runtime test(s) pass, otherwise fall back to the
__atomic_* builtins.
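To make that concrete, here is a minimal sketch (not an actual patch) of
the C-level shape of what the pass could emit for the __atomic_load_1 call
from the first dump above.  instrumented_load_1 is a hypothetical name,
the 0xffff mask mirrors the masking memmodel_base does, and the
MEMMODEL_LAST define stands in for the constant bound (6 in GCC's internal
memmodel enum) that the pass would fold in:

#include <sanitizer/tsan_interface_atomic.h>

#define MEMMODEL_LAST 6   /* bound on valid base models, folded in by the pass */

/* Hypothetical C equivalent of the GIMPLE the pass could emit when the
   memory model __m is only known at runtime.  */
static char
instrumented_load_1 (const volatile char *ptr, int __m)
{
  /* Runtime version of the memmodel_base () >= MEMMODEL_LAST test.  */
  if ((__m & 0xffff) < MEMMODEL_LAST)
    return __tsan_atomic8_load (ptr, (__tsan_memory_order) (__m & 0xffff));
  else
    /* Invalid/unknown model: keep the uninstrumented atomic.  */
    return __atomic_load_1 (ptr, __m);
}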
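And for completeness, the very limited SSA walk mentioned above might look
something like the following (a sketch only, against the GCC internals
tsan.c already includes; resolve_memmodel is a hypothetical helper).  It
would resolve the first dump, where __m_6 = 5, but in the second dump it
hits the __m_7(D) parameter and the std::__cmpexch_failure_order call and
has to give up, which is why it is not really useful at -O0:

/* Try to reduce a memory model argument to an INTEGER_CST by walking
   trivial copy/conversion assignments.  Returns NULL_TREE as soon as the
   chain hits anything else, e.g. a call or a function parameter.  */
static tree
resolve_memmodel (tree arg)
{
  while (TREE_CODE (arg) == SSA_NAME)
    {
      gimple *def = SSA_NAME_DEF_STMT (arg);
      if (!is_gimple_assign (def))
        return NULL_TREE;
      enum tree_code code = gimple_assign_rhs_code (def);
      if (code != SSA_NAME && code != INTEGER_CST && !CONVERT_EXPR_CODE_P (code))
        return NULL_TREE;
      arg = gimple_assign_rhs1 (def);
    }
  return TREE_CODE (arg) == INTEGER_CST ? arg : NULL_TREE;
}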