On Mon, Jun 4, 2012 at 9:07 PM, Marc Glisse <marc.gli...@inria.fr> wrote:
> On Mon, 4 Jun 2012, Florian Weimer wrote:
>
>> On 06/01/2012 01:34 PM, Jakub Jelinek wrote:
>>>
>>> Have you looked at the assembly differences with this in?
>>
>>
>> It's not great.
>>
>> Here's an example:
>>
>> #include <vector>
>>
>> void
>> write(std::vector<float>& blob, unsigned n, float v1, float v2, float v3,
>>       float v4)
>> {
>>   blob[n] = v1;
>>   blob[n + 1] = v2;
>>   blob[n + 2] = v3;
>>   blob[n + 3] = v4;
>> }
>
>
> It would be great if it ended up testing only n and n+3.
> __attribute__((__noreturn__)) is not quite strong enough to allow this
> optimization; it would require something like __attribute__((__crashing__))
> to let the compiler know that if the function is called, you don't care
> what happens to blob.  And possibly the use of a signed n, so that n + 3
> cannot wrap around.
>
> Note that even when the optimization would be legal, gcc seems to have a few
> difficulties:
>
> extern "C" void fail() __attribute__((__noreturn__));
>
> void write(signed m, signed n)
> {
>   if ((n + 3) > m) fail();
>   if ((n + 2) > m) fail();
>   if ((n + 1) > m) fail();
>   if (n > m) fail();
> }
>
> keeps 3 of the 4 tests, where a single test of (n + 3) > m would suffice.

Well, the issue is that we'd first need to commonize the fail () calls
(which we do now), but even then VRP fails to simplify the comparisons
against the symbolic ranges; it's not very good at that.
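
For reference, here is a sketch of the fully folded form VRP would
ideally produce (legal here because fail () is noreturn and signed
overflow is undefined, so n <= n+1 <= n+2 <= n+3):

extern "C" void fail() __attribute__((__noreturn__));

/* Hand-folded equivalent of Marc's example: once (n + 3) <= m is
   known to hold, the checks on n + 2, n + 1 and n are all dead.  */
void write(signed m, signed n)
{
  if ((n + 3) > m) fail();
}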

And that would only be at -O1.  Note that such range checks will also
defeat most, if not all, loop optimizations.  So C++ code using
std::vector in compute-intensive parts would be severely pessimized.
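
As a small illustration (hypothetical code, not from the thread): a
checked access such as std::vector::at carries the same kind of
per-element test a fortified operator[] would, and the extra failure
path generally blocks the loop optimizations the unchecked version
would get:

#include <cstddef>
#include <vector>

/* The bounds check (and its potentially-throwing failure path) inside
   the loop body generally prevents the vectorizer from treating this
   like the plain operator[] loop.  */
void scale(std::vector<float>& v, float f)
{
  for (std::size_t i = 0; i < v.size(); ++i)
    v.at(i) *= f;
}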

So, I don't think fortifying libstdc++ is a good idea at all.

Richard.

> --
> Marc Glisse
