Issue 161372
Summary Remove unused atomic loads without ordering requirements
Labels backend:AArch64, backend:X86, clang:codegen
Assignees
Reporter tmatheson-arm
    https://godbolt.org/z/TYrzds3dT

```c++
#include <atomic>
#include <cstdint>

std::atomic<uint64_t> x;
uint64_t y;

void atomic() {
    uint64_t unused = x.load(std::memory_order_relaxed);
}

void nonatomic() {
    uint64_t unused = y;
}
```

For `atomic()`, the load is emitted as `load atomic i64, ptr @x monotonic`, whereas for `nonatomic()` the dead load is already optimised out. The atomic load has no ordering constraints and its result is unused, so on AArch64 (and other targets) this leaves an unnecessary load in the generated code:
```asm
atomic():
        adrp    x8, x
        ldr     xzr, [x8, :lo12:x]
        ret

nonatomic():
        ret
```

_______________________________________________
llvm-bugs mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-bugs
