https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65449

--- Comment #2 from ma.jiang at zte dot com.cn ---
(In reply to Bernd Edlinger from comment #1)
> Hi Richard,
> 
> the invalid/different code for -O2 -fstrict-volatile-bitfields will be
> fixed with my proposed patch, because the misalignedness of mm should
> be visible at -O2 and prevent the strict_volatile bitfields path to be
> entered.
> 
> Could you give your OK to the latest version?
> see https://gcc.gnu.org/ml/gcc-patches/2015-03/msg00817.html
> 
> Thanks
> Bernd.

Hi Bernd,
   Your patch does fix the unaligned stw problem, but there are still 4 lbz
instructions for "*((volatile int *)mm)=4;" after your fix. I originally
thought they were caused by -fstrict-volatile-bitfields. After a more careful
check, I found that they are caused by "temp = force_reg (mode, op0);" in
store_fixed_bit_field_1. The non-volatile "*((int *)mm)=4;" produces no lbz
instructions, but it still produces a useless load during RTL expansion (see
the insn dump below, after the sketch).
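
A reduced test along these lines shows what I mean (the declarations of mt and
mm are my reconstruction from the RTL dump below; only the store statement is
quoted verbatim from above):

  char mt[8];

  void
  f (void)
  {
    char *mm = &mt[1];            /* misaligned for a 4-byte int */
    *((volatile int *) mm) = 4;   /* on a strict-alignment target such as
                                     powerpc this still expands with 4 lbz
                                     loads of the bytes it is about to
                                     overwrite */
  }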

(insn 7 6 8 2 (set (reg:QI 157)
        (mem/c:QI (plus:SI (reg/f:SI 155)
                (const_int 1 [0x1])) [1 MEM[(int *)&mt + 1B]+0 S1 A8])) nt.c:5 489 {*movqi_internal}
     (nil))
These loads will be eliminated in fwprop1, as they are meaningless. However,
after adding volatile to the memory mm, the fwprop1 pass cannot delete these
loads, since volatile loads should not be removed.
  So, I think we should first strip the volatile flag from op0 before calling
force_reg (mode, op0). I have tried adding "rtx op1 = copy_rtx (op0);
MEM_VOLATILE_P (op1) = 0;" just before force_reg, and it does remove those lbz
instructions.
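
Concretely, the experiment looks roughly like this in store_fixed_bit_field_1
(expmed.c); this is a sketch, not the exact diff, and note that the copy op1
also has to be the operand passed to force_reg, otherwise clearing the flag on
it has no effect:

  /* Load the destination through a non-volatile copy of op0, so that
     later passes such as fwprop1 are allowed to delete this read when
     it turns out to be unused.  */
  rtx op1 = copy_rtx (op0);
  MEM_VOLATILE_P (op1) = 0;
  temp = force_reg (mode, op1);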
