On Tue, Feb 28, 2017 at 12:29:50PM +0000, Kyrill Tkachov wrote:
> Hi all,
>
> For the testcase in this patch we currently generate:
> foo:
>         mov     w1, 0
>         ldaxr   w2, [x0]
>         cmp     w2, 3
>         bne     .L2
>         stxr    w3, w1, [x0]
>         cmp     w3, 0
> .L2:
>         cset    w0, eq
>         ret
>
> Note that the STXR could have been storing the WZR register instead of
> moving zero into w1.
> This is due to overly strict predicates and constraints in the store
> exclusive pattern and the atomic compare exchange expanders and splitters.
> This simple patch fixes that in the patterns concerned and with it we can
> generate:
> foo:
>         ldaxr   w1, [x0]
>         cmp     w1, 3
>         bne     .L2
>         stxr    w2, wzr, [x0]
>         cmp     w2, 0
> .L2:
>         cset    w0, eq
>         ret
>
> Bootstrapped and tested on aarch64-none-linux-gnu.
> Ok for GCC 8?
OK.  Thanks, James

> 2017-02-28  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>
>
>     * config/aarch64/atomics.md (atomic_compare_and_swap<mode> expander):
>     Use aarch64_reg_or_zero predicate for operand 4.
>     (aarch64_compare_and_swap<mode> define_insn_and_split):
>     Use aarch64_reg_or_zero predicate for operand 3.  Add 'Z' constraint.
>     (aarch64_store_exclusive<mode>): Likewise for operand 2.
>
> 2017-02-28  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>
>
>     * gcc.target/aarch64/atomic_cmp_exchange_zero_reg_1.c: New test.