On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
<bernd.edlin...@hotmail.de> wrote:
> Hi Richard,
>
> I just had an idea for how to solve that recursion problem without
> completely ignoring the memory mode. I hope you are gonna like it.
>
> This time I even added a comment :-)

Ehm, ...

+      /* If MODE has no size (i.e. BLKmode), is larger than WORD_MODE,
+        or BITREGION_END is used, we can use WORD_MODE as the upper limit.
+        However, if the field is too unaligned to be fetched with a single
+        access, we also have to use WORD_MODE.  This can happen in Ada.  */
       if (GET_MODE_BITSIZE (mode) == 0
-         || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
+         || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
+         || bitregion_end != 0
+         || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
+         || (STRICT_ALIGNMENT
+             && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
+                > GET_MODE_BITSIZE (mode)))
        mode = word_mode;

If the field is unaligned, how does choosing a larger mode help?  We should
always be able to use multiple accesses with a smaller mode in this case.

So I'd buy the || bitregion_end != 0 part, but the rest ...?  That is,
OK if that alone fixes the bug.  Note that when bitregion_end == 0 the
access should be limited by the mode we pass to get_best_mode.

Btw, it should always be possible to return QImode here, so I wonder
how enlarging the mode fixes the underlying issue (maybe the bug
is really elsewhere?).

Thanks,
Richard.

> Ok for trunk after bootstrap and regression testing?
>
>
> Bernd.
>
> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>
>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>> <bernd.edlin...@hotmail.de> wrote:
>>> Hi,
>>>
>>> This is my proposal for ultimately getting rid of the nasty
>>> store_fixed_bit_field recursion.
>>>
>>> IMHO, the root of the recursion trouble is here:
>>>
>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>
>>> if (MEM_P (op0))
>>> {
>>> mode = GET_MODE (op0);
>>> if (GET_MODE_BITSIZE (mode) == 0
>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>> mode = word_mode;
>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>
>>>
>>> But, because we now always have bitregion_start and bitregion_end to limit
>>> the access size, it is no longer necessary to restrict the largest mode
>>> that get_best_mode may return.
>>
>> Note that this is not true, as get_bit_range itself may end up giving
>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>> what you are saying here? That is, does a
>>
>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>
>> before the get_best_mode call work for you?
>>
>>> This patch is very similar to the previous patch, which split up
>>> extract_fixed_bit_field.
>>>
>>> This time, I just split up store_fixed_bit_field and use
>>> store_fixed_bit_field_1 to force the strict-volatile-bitfield mode if
>>> necessary, and let get_best_mode find a mode that can be used to access
>>> the field, which is no longer impacted by the memory context's selected
>>> mode in this case.
>>>
>>> I tried this patch with an ARM cross compiler and a large eCos application
>>> to see if anything changes in the generated code; 2 MB of code stays
>>> binary-identical, which is a good sign.
>>>
>>> I added the new Ada test case and the test case from PR59134, which no
>>> longer reproduces after my previous C++ memory model patch, but that "fix"
>>> was more or less by chance.
>>>
>>>
>>> Bootstrap on x86_64-pc-linux-gnu (languages=all,ada,go) and regression
>>> tests still running.
>>>
>>>
>>> Ok for trunk (when the tests succeed)?
>>>
>>>
>>> Thanks
>>> Bernd.
