On Wed, Dec 10, 2014 at 10:18 AM, Andrew Pinski <pins...@gmail.com> wrote:
> Hi,
>   As mentioned in
> https://gcc.gnu.org/ml/gcc-patches/2014-12/msg00609.html, the
> load/store pair peepholes currently accept volatile mems, which can
> cause wrong code because the architecture does not define which part
> of the pair happens first.
>
> This patch disables the peephole for volatile mems and adds two
> testcases checking that volatile loads are not converted into a load
> pair (I could add the same for store pairs if needed).  In the second
> testcase, only f3 does not get converted to a load pair, even though
> the order of the loads is different.
>
> OK?  Bootstrapped and tested on aarch64-linux-gnu with no regressions.
>
> Thanks,
> Andrew Pinski
>
> ChangeLog:
> * config/aarch64/aarch64.c (aarch64_operands_ok_for_ldpstp): Reject
> volatile mems.
> (aarch64_operands_adjust_ok_for_ldpstp): Likewise.
>
> testsuite/ChangeLog:
> * gcc.target/aarch64/volatileloadpair-1.c: New testcase.
> * gcc.target/aarch64/volatileloadpair-2.c: New testcase.
> @@ -10702,6 +10706,11 @@ aarch64_operands_adjust_ok_for_ldpstp (r
>    if (!MEM_P (mem_1) || aarch64_mem_pair_operand (mem_1, mode))
>      return false;
>
> +  /* The mems cannot be volatile.  */
> +  if (MEM_VOLATILE_P (mem_1) || MEM_VOLATILE_P (mem_2)
> +      || MEM_VOLATILE_P (mem_3) || MEM_VOLATILE_P (mem_4))
> +    return false;
> +

I realized that "!MEM_P (mem_1)" can never be true here.  Since we are
fixing this code anyway, could you please remove the MEM_P check and
put the volatile checks before the "aarch64_mem_pair_operand" call?
That call is more expensive, so the cheap volatile checks should run
first.

Thanks,
bin