Wilco Dijkstra <wilco.dijks...@arm.com> writes:
> Hi Richard,
>
>>> No - the testcases fail with that.
>>
>> Hmm, OK.  Could you give more details?  What does the motivating case
>> actually look like?
>
> Well, it's now a very long time since I first posted this patch, but the
> failure was in SPEC. It did something like &array[0xffffff000 - x],
> presumably after some optimization with specific options I was using at
> the time. The exact details don't matter since I've got minimal testcases.
>
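
For concreteness, a reproducer in that spirit (illustrative only, not
the actual minimal testcase) might be:

  /* Hypothetical example: 'array' is extern, so the compiler doesn't
     know its size, and the constant offset lies far outside any
     plausible object.  */
  extern int array[];

  int *
  f (long x)
  {
    /* The address is array + 4 * (0xffffff000 - x); the large constant
       part must not be folded into the symbol relocation.  */
    return &array[0xffffff000 - x];
  }
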
>> One of the reasons this is a hard patch to review is that it doesn't
>> include a testcase for the kind of situation it's trying to fix.
>
> There is a very simple testcase which fails before my patch and passes with it.
>
>>> It also reduces code quality by not allowing commonly used offsets as
>>> part of the symbol relocation.
>>
>> Which kinds of cases specifically?  Although I'm guessing the problem is
>> when indexing into external arrays of unknown size.
>
> Well, there are 41,000 uses in SPEC2017 that fail the offset_within_block_p
> test but pass my range test. There are 3 cases where the reverse is true
> (a huge offset: 17694720).
>
> Overall my range test allows 99.99% of the offsets, so we can safely conclude
> my patch doesn't regress any existing code.
>
>> So IMO we should be able to assume that the start and end + 1 addresses
>> of any referenced object are within reach.  For section anchors, we can
>> extend that to the containing anchor block, which is really just a
>> super-sized object.
>
> This isn't about section anchors - in many cases the array is an extern.

Sure, the "extern array of unknown size" case isn't about section anchors.
But this part of my message (snipped above) was about the other case
(objects of known size), and applied to individual objects as well as
section anchors.

What I was trying to say is: yes, we need better offsets for references
to extern objects of unknown size.  But your patch does that by reducing
the offset range for all objects, including ones with known sizes, for
which the documented ranges should be safe.

I was trying to explain why I don't think we need to reduce the range
in that case too.  If offset_within_block_p returns true then any offset
should be OK (the aggressive interpretation), or at least the original
documented ranges should be OK.  I think we only need the smaller range
when offset_within_block_p returns false.
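
To make that concrete, here is a sketch of the rule I have in mind.
It is plain C with made-up names and illustrative ranges, not the
actual aarch64_classify_symbol or offset_within_block_p code:

  #include <stdbool.h>
  #include <stdint.h>

  /* KNOWN_SIZE < 0 stands for "object size unknown" (e.g. an extern
     array), i.e. an offset_within_block_p-style test cannot succeed.  */
  static bool
  symbol_offset_ok_p (int64_t offset, int64_t known_size)
  {
    /* Offset provably within the object (or its containing anchor
       block): any offset is OK under the aggressive interpretation;
       at minimum the documented code-model ranges are safe.  */
    if (known_size >= 0 && offset >= 0 && offset <= known_size)
      return true;

    /* Size unknown: only accept a conservative range that keeps
       sym + offset close to sym.  +/-1MB here is purely illustrative.  */
    return offset >= -0x100000 && offset <= 0x100000;
  }

That is, the smaller range would only apply on the path where the
offset_within_block_p-style test fails.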

>> The question then is what to do about symbols whose size isn't known.
>> And I agree that unconditionally applying the full code-model offset
>> range is too aggressive in that case.
>
> That's very common indeed, so we need to apply some kind of reasonable
> range check.
>
>> Well, for one thing, if code quality isn't affected by using +/-64k
>> for the tiny model, do we have any evidence that we need a larger offset
>> for the other code models?
>
> Certainly for SPEC +-64KB is too small, but SPEC won't build in the tiny
> code model. For the tiny code model the emphasis should be on ensuring
> that code that fits builds correctly, rather than on optimizing it to the
> max and getting relocations that are out of range... So I believe it is
> reasonable to use a more conservative range in the tiny model.
>
>> But more importantly, we can't say definitively that code quality isn't
>> affected, only that it wasn't affected for the cases we've looked at.
>> People could patch the compiler if the new ranges turn out not to strike
>> the right balance for their use cases, but not everyone wants to do that.
>
> Well, if we're worried about code quality then the offset-within-block
> approach affects it the most. Offsets larger than 1MB are extremely rare,
> so the chance that there will ever be a request for a larger range is
> essentially zero.
>
>> Maybe we need a new command-line option.
>
> That's way overkill... All this analysis is overcomplicating what is
> really a very basic problem with a very simple solution.

Well, I'm not going to object if another maintainer is happy to approve
the patch as-is.

Richard
