Hi,

On Mon, 30 Mar 2015 12:33:47, Richard Biener wrote:
>
> On Mon, Mar 16, 2015 at 11:53 AM, Bernd Edlinger
> <bernd.edlin...@hotmail.de> wrote:
>>
>> Hi,
>>
>>
>> when looking at the m68k I realized the following, which is
>> a general problem...
>>
>> If the alignment of the structure is less than sizeof(field), the
>> strict volatile bitfields code may read beyond the end of the
>> structure!
>>
>> Consider this example:
>>
>> struct s
>> {
>> char x : 8;
>> volatile unsigned int y : 31;
>> volatile unsigned int z : 1;
>> } __attribute__((packed));
>>
>> struct s global;
>>
>>
>> Here we have sizeof(struct s) == 5 and alignment(global) == 1.
>> However, when we access global.z we read a 32-bit word
>> at offset 4, which touches 3 bytes that are not safe to access.
>>
>> Something like that never happens with -fno-strict-volatile-bitfields,
>> because IIRC, with the only exception of the simple_mem_bitfield_p code path,
>> there is never an access mode used that is larger than MEM_ALIGN(x).
>
> Are you sure? There is still PR36043 ...
>

I meant extract_bit_field/store_bit_field: anything can happen if you
use emit_move_insn directly, but extract_bit_field should not do that.
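
To make the out-of-bounds access concrete, here is a sketch of what the
strict-volatile-bitfields path effectively does for global.z in the
example above (hand-written C; the function name is made up, and the
bit position assumes little-endian bitfield layout):

unsigned int
read_z (void)
{
  /* z lives in byte 4; the strict-volatile-bitfields path reads the
     whole 32-bit word at offset 4, i.e. bytes 4..7, although
     sizeof (struct s) == 5.  */
  volatile unsigned int *word
    = (volatile unsigned int *) ((char *) &global + 4);
  /* Extract the single bit; the shift assumes little-endian layout.  */
  return (*word >> 7) & 1;
}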

>> In this example, if I want to use the packed attribute,
>> I also have to use the aligned(4) attribute; this satisfies the
>> check "MEM_ALIGN (op0) < modesize", which is IMO always necessary
>> for strict volatile bitfields, not only on STRICT_ALIGNMENT targets.
>
> But your patch still somehow special-cases them.

Yes. My original intention was to bypass the strict-volatile-bitfields
code path altogether, for everything that it cannot safely handle.

That would mean:

--- gcc/expmed.c        (revision 221427)
+++ gcc/expmed.c        (working copy)
@@ -472,11 +472,16 @@ strict_volatile_bitfield_p (rtx op0, unsigned HOST
     return false;

   /* Check for cases of unaligned fields that must be split.  */
-  if (bitnum % BITS_PER_UNIT + bitsize > modesize
-      || (STRICT_ALIGNMENT
-         && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
+  if (bitnum % modesize + bitsize > modesize)
     return false;

+  /* The memory must be sufficiently aligned for a MODESIZE access.
+     This condition guarantees that the memory access will not
+     touch anything after the end of the structure.  */
+  if (MEM_ALIGN (op0) < modesize)
+    return false;
+
   /* Check for cases where the C++ memory model applies.  */
   if (bitregion_end != 0
       && (bitnum - bitnum % modesize < bitregion_start


and _removing_ that:

@@ -988,6 +995,16 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_I

          str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
                                          &bitnum);
+         if (!STRICT_ALIGNMENT
+             && bitnum + bitsize > GET_MODE_BITSIZE (fieldmode))
+           {
+             unsigned HOST_WIDE_INT offset;
+             offset = (bitnum + bitsize + BITS_PER_UNIT - 1
+                       - GET_MODE_BITSIZE (fieldmode)) / BITS_PER_UNIT;
+             str_rtx = adjust_bitfield_address (str_rtx, fieldmode, offset);
+             bitnum -= offset * BITS_PER_UNIT;
+           }
+         gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (fieldmode));
          temp = copy_to_reg (str_rtx);
          if (!store_bit_field_1 (temp, bitsize, bitnum, 0, 0,
                                  fieldmode, value, true))

and _removing_ that too:

        }

       str_rtx = narrow_bit_field_mem (str_rtx, mode1, bitsize, bitnum,
                                      &bitnum);
+      if (!STRICT_ALIGNMENT
+         && bitnum + bitsize > GET_MODE_BITSIZE (mode1))
+       {
+         unsigned HOST_WIDE_INT offset;
+         offset = (bitnum + bitsize + BITS_PER_UNIT - 1
+                   - GET_MODE_BITSIZE (mode1)) / BITS_PER_UNIT;
+         str_rtx = adjust_bitfield_address (str_rtx, mode1, offset);
+         bitnum -= offset * BITS_PER_UNIT;
+       }
+      gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (mode1));
       str_rtx = copy_to_reg (str_rtx);
     }


I would keep just the added assertions in the expansion path.
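
For completeness, with the MEM_ALIGN check from the first hunk, the
example struct would have to be declared like this to still get the
single SImode access (a sketch; the size/alignment numbers in the
comment are my reading of the GCC attribute semantics):

struct s
{
  char x : 8;
  volatile unsigned int y : 31;
  volatile unsigned int z : 1;
} __attribute__((packed, aligned(4)));

/* Now sizeof (struct s) == 8 and the alignment is 4, so the SImode
   access to z at offset 4 stays inside the object, and
   strict_volatile_bitfield_p can accept it.  */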

However, I had added those hunks to do what you requested in a previous e-mail:
>>
>> every access is split into 4 QImode accesses, but that is as
>> expected, because the structure is byte aligned.
>  
> No, it is not expected because the CPU can handle unaligned SImode
> reads/writes just fine (even if not as an atomic operation).
> The C++ memory model allows an SImode read to s.y (-fstrict-volatile-bitfields
> would, as well, but the CPU doesn't guarantee atomic operation here).
>  
> Richard.

... but now I am not too happy with it either.
What we need for the "device register" accesses is just a predictable
access at a predictable offset, and it had better be aligned to
MODE_BITSIZE; what the CPU can handle is irrelevant for device registers.

Furthermore, if we have a structure like { int x:16; int y:16; } on a
target where GET_MODE_ALIGNMENT (SImode) == 16, it would be possible to
access y as SImode at offset 0 or at offset 2; both would be equally
valid.
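
Spelled out (a sketch; assuming little-endian byte order,
GET_MODE_ALIGNMENT (SImode) == 16, and that the struct sits inside a
larger object so the access at offset 2 stays in bounds):

struct r { int x : 16; int y : 16; };   /* sizeof == 4 */

/* Either SImode access contains all of y (sign extension omitted
   for brevity): */
unsigned int
read_y_at_0 (volatile struct r *p)
{
  return *(volatile unsigned int *) p >> 16;   /* bytes 0..3 */
}

unsigned int
read_y_at_2 (volatile struct r *p)
{
  return *(volatile unsigned int *) ((char *) p + 2) & 0xffff;   /* bytes 2..5 */
}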

The device-register use case cannot work with that uncertainty at all.



>
>> On a target that has BIGGEST_ALIGNMENT < BITS_PER_WORD,
>> to use strict volatile bitfields you have to add
>> __attribute__((aligned(4))) to the structure.
>>
>> I had to do that on the pr23623.c test case to make it pass on m68k,
>> for instance.
>>
>>
>> I have attached the updated patch. As explained before, the check
>> MEM_ALIGN (op0) < modesize should always be done in 
>> strict_volatile_bitfield_p.
>>
>> For the targets that usually enable -fstrict-volatile-bitfields nothing
>> changes, except when we use "packed" on the structure; then we also
>> need to add an aligned(4) attribute. For m68k, where the natural
>> alignment of any structure is <= 2, we need to force aligned(4) if we
>> want to ensure the access is in SImode.
>>
>> Boot-strapped and reg-tested on x86_64-linux-gnu.
>> OK for trunk?
>
> So - shouldn't the check be
>
> if (MEM_ALIGN (op0) < GET_MODE_ALIGNMENT (fieldmode))
> return false;
>
> instead? And looking at
>
> /* Check for cases of unaligned fields that must be split. */
> - if (bitnum % BITS_PER_UNIT + bitsize > modesize
> - || (STRICT_ALIGNMENT
> - && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
> + if (bitnum % (STRICT_ALIGNMENT ? modesize : BITS_PER_UNIT)
> + + bitsize > modesize)
> return false;
>
> I wonder what the required semantics are - are they not that we need
> to access the whole "underlying" field with the access (the C++ memory
> model only requires we don't go beyond the field)? It seems that
> information isn't readily available here though. So the check checks
> that we can access the field with a single access using fieldmode.
> Which means (again),
>
> if (bitnum % (STRICT_ALIGNMENT ? GET_MODE_ALIGNMENT (fieldmode) :
> BITS_PER_UNIT)
> + bitsize > modesize)
>
> Also this means that for !STRICT_ALIGNMENT platforms the MEM_ALIGN check isn't
> sufficient, which is what the other hunks in the patch are about to fix?
>
> It seems at least the
>
> @@ -988,6 +995,16 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_I
>
> str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
> &bitnum);
> + if (!STRICT_ALIGNMENT
> + && bitnum + bitsize > GET_MODE_BITSIZE (fieldmode))
> + {
> + unsigned HOST_WIDE_INT offset;
> + offset = (bitnum + bitsize + BITS_PER_UNIT - 1
> + - GET_MODE_BITSIZE (fieldmode)) / BITS_PER_UNIT;
> + str_rtx = adjust_bitfield_address (str_rtx, fieldmode, offset);
> + bitnum -= offset * BITS_PER_UNIT;
> + }
> + gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (fieldmode));
>
> hunks could do with a comment.
>
> That said, I fail to realize how the issue is specific to
> strict-volatile bitfields?
>

Yes, and I think that's probably because I tried to do two things at
once:

1) fix the wrong code on both STRICT_ALIGNMENT and !STRICT_ALIGNMENT
targets;
2) enhance the !STRICT_ALIGNMENT case to use unaligned accesses where it
did not do so with -fno-strict-volatile-bitfields (see the sketch below).
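
To spell out 2) for the original example: on a !STRICT_ALIGNMENT target
the removed hunks would emit one unaligned SImode access instead of
four QImode accesses, roughly like this (a sketch; assuming
little-endian, and the function name is made up):

unsigned int
read_y (void)
{
  /* y occupies bits 8..38 of the packed struct, so one unaligned
     32-bit read at byte offset 1 covers it - fine on e.g. x86,
     but not atomic.  */
  volatile unsigned int *word
    = (volatile unsigned int *) ((char *) &global + 1);
  return *word & 0x7fffffff;
}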


> I also hope somebody else will also look at the patch - I'm not
> feeling like the appropriate person to review changes in this area
> (even if I did so in the past).
>

And I would not mind if someone else tried to fix this mess (even if I
did so in the past ;-)

Maybe I should just remove "2)" from the patch. Would you prefer that?


Thanks
Bernd.
