Wilco Dijkstra <wilco.dijks...@arm.com> writes:
> Hi Ramana,
>
>> -Generate code for the large code model.  This makes no assumptions about
>> -addresses and sizes of sections.  Programs can be statically linked only.  The
>> +Generate code for the large code model.  This allows large .bss and .data
>> +sections, however .text and .rodata must still be < 2GiB in total.  Position
>> +independent code or statically linked libraries are not supported.  The
>>  @option{-mcmodel=large} option is incompatible with @option{-mabi=ilp32},
>> -@option{-fpic} and @option{-fPIC}.
>> +@option{-fpie}, @option{-fPIE}, @option{-fpic}, @option{-fPIC},
>> +@option{-static}, @option{-moutline-atomics}.
>
>> What's the issue with -static ?
>
> Basically the small and large model are fundamentally incompatible. The
> infamous "dumb linker" approach means it doesn't try to sort sections, so
> an ADRP relocation will be out of reach if its data is placed after a huge
> array. Static linking with GLIBC or enabling outline atomics are in fact
> the same issue.

This is probably a daft question, but why does the incompatibility
only affect static linking?  Couldn't the same problem affect dynamic
executables if they mix medium- and large-model code in an unfortunate way?

E.g.:

extern int x[];
int *f() { return x; }

produces:

f:
        adrp    x0, x
        add     x0, x0, :lo12:x
        ret

and the ADRP would be out of range if x were linked after a huge array
from an -mcmodel=large translation unit (ADRP can only reach +/-4GiB).
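
(For concreteness, a hypothetical huge.c for the -mcmodel=large side of
that link; the 5GiB size is arbitrary, just anything beyond that reach:)

/* huge.c -- imagine this compiled as -mcmodel=large */
char huge[5ull << 30];   /* 5GiB of zero-initialised data */
int x[1];                /* if the linker emits x after huge, the ADRP
                            in f is more than 4GiB away and overflows */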

But isn't the problem in both cases more that -mcmodel=medium doesn't
support large .data sections (which is why -mcmodel=large exists),
rather than that the models are incompatible?  It should still be safe
to link -mcmodel=large code with -mcmodel=medium code if the linked
object fits the -mcmodel=medium restrictions.  (Which isn't as daft
as it sounds.  Code that wants to be compatible with both models
could be compiled with -mcmodel=large.)

To put it another way, I'd assumed that the distinction between code models
was based on the sizes of things in the linked code, so that including
-mcmodel=medium code in a link would be invalid if the linked code were
too big.  It sounds like you're treating the distinction between code
models as being based on the sizes of things in or referenced by each
individual translation unit.  Is that right?

I can see how the translation-unit-based interpretation would be useful
if we can make it work.  But...

> As I proposed for the medium model, the solution is to place large arrays
> into specially named sections so that they are ordered last. That means
> all "small" data referenced via ADRP will be in range, and thus using
> outline atomics or mixing small/medium model just work. The large model
> will have to do the same.

...I suppose this wouldn't cope with an aggregation of many data
structures that each happen to be just below whatever limit we pick,
so that no single object counts as "large" but the total still
overflows the ADRP range.  It sounds like
something that would improve the situation but wouldn't flip a switch
from "incompatible" to "compatible".

Thanks,
Richard
