Ping.
Thanks,
Kyrill
On 08/05/17 11:59, Kyrill Tkachov wrote:
Ping.
Thanks,
Kyrill
On 24/04/17 10:37, Kyrill Tkachov wrote:
Pinging this back into context so that I don't forget about it...
https://gcc.gnu.org/ml/gcc-patches/2017-02/msg01648.html
Thanks,
Kyrill
On 28/02/17 12:29, Kyrill Tkachov wrote:
Hi all,
For the testcase in this patch we currently generate:
foo:
        mov     w1, 0
        ldaxr   w2, [x0]
        cmp     w2, 3
        bne     .L2
        stxr    w3, w1, [x0]
        cmp     w3, 0
.L2:
        cset    w0, eq
        ret
Note that the STXR could have been storing the WZR register instead of moving
zero into w1. This is due to overly strict predicates and constraints in the
store-exclusive pattern and the atomic compare-exchange expanders and splitters.
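For reference, a minimal sketch of the kind of source that produces such a
sequence is below. The memory orders and the weak flag are my assumptions,
chosen to be consistent with the ldaxr/stxr pair and the lack of a retry loop
shown above; the actual testcase in the patch may differ.

/* Hedged sketch, not necessarily the committed testcase.  */
int
foo (int *a)
{
  int expected = 3;
  /* The desired value is constant zero, so the store-exclusive could use
     wzr directly instead of first moving 0 into a scratch register.  */
  return __atomic_compare_exchange_n (a, &expected, 0, /* weak */ 1,
                                      __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}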
This simple patch fixes that in the patterns concerned, and with it we can generate:
foo:
        ldaxr   w1, [x0]
        cmp     w1, 3
        bne     .L2
        stxr    w2, wzr, [x0]
        cmp     w2, 0
.L2:
        cset    w0, eq
        ret
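(For contrast, and purely as my own illustration rather than part of the patch:
the zero register only helps when the stored value is constant zero. With a
nonzero constant a mov into a scratch register before the STXR is still expected.)

int
bar (int *a)
{
  int expected = 3;
  /* Nonzero desired value: wzr does not apply, so a scratch register
     still has to be loaded with the immediate before the STXR.  */
  return __atomic_compare_exchange_n (a, &expected, 5, 1,
                                      __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}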
Bootstrapped and tested on aarch64-none-linux-gnu.
Ok for GCC 8?
Thanks,
Kyrill
2017-02-28  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>

	* config/aarch64/atomics.md (atomic_compare_and_swap<mode> expander):
	Use aarch64_reg_or_zero predicate for operand 4.
	(aarch64_compare_and_swap<mode> define_insn_and_split):
	Use aarch64_reg_or_zero predicate for operand 3.  Add 'Z' constraint.
	(aarch64_store_exclusive<mode>): Likewise for operand 2.

2017-02-28  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>

	* gcc.target/aarch64/atomic_cmp_exchange_zero_reg_1.c: New test.