Hi Richard,

>> Basically the small and large model are fundamentally incompatible. The
>> infamous "dumb linker" approach means it doesn't try to sort sections,
>> so an ADRP relocation will be out of reach if its data is placed after a
>> huge array. Static linking with GLIBC or enabling outline atomics are in
>> fact the same issue.
>
> This is probably a daft question, but why does the incompatibility only
> affect static linking? Couldn't the same problem affect dynamic
> executables if the code mixes medium and large code in an unfortunate
> way?
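To make "out of reach" concrete first, a rough sketch (the symbol name is
invented): in the small model the compiler addresses data with an adrp/add
pair, and adrp only reaches +/-4GB from the PC:

    adrp    x0, counter              // R_AARCH64_ADR_PREL_PG_HI21, +/-4GB range
    add     x0, x0, :lo12:counter    // low 12 bits of the address

If the linker lays out a multi-GB array from a large-model object between
this code and counter, that adrp relocation overflows and the link fails
with "relocation truncated to fit".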
Any mixing of large and small means you get the lowest common denominator.
Statically linking with a small model library (like outline atomics or
GLIBC) is exactly the same as linking in a small object. You effectively
end up with the size limits of the small model plus all the limitations of
the large model (no PIC or PIE). I can't decode the other paragraphs at
all - what exactly would you do differently in each compilation unit?

>> As I proposed for the medium model, the solution is to place large
>> arrays into specially named sections so that they are ordered last. That
>> means all "small" data referenced via ADRP will be in range, and thus
>> using outline atomics or mixing small/medium model just work. The large
>> model will have to do the same.
>
> ...I suppose this wouldn't cope with an aggregation of many data
> structures that happen to be just below whatever limit we pick. It sounds
> like something that would improve the situation but wouldn't flip a
> switch from "incompatible" to "compatible".

It would change from being incompatible to something that just works out of
the box. You can always break any limit with specially crafted programs -
even in cases where the claim is 'unlimited' (e.g. number of switch cases,
basic blocks, block size, function size, object size etc). However the
actual goal is to make real programs work well. Today large + small <<
small, but we do get medium + small = medium. (A rough sketch of the
section placement follows below.)

Cheers,
Wilco
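To make the section placement concrete, a rough sketch of the intended
output order (the .ldata/.lbss names are only an assumption here, borrowed
from the x86-64 medium model - the point is the ordering, not the names):

    /* Small data first: everything referenced via adrp stays within
       +/-4GB of the code.  */
    .data   : { *(.data .data.*) }
    .bss    : { *(.bss .bss.*) }

    /* Arrays above the size threshold are emitted into separately named
       sections which are always placed last, so they can never push
       small data out of adrp range.  */
    .ldata  : { *(.ldata .ldata.*) }
    .lbss   : { *(.lbss .lbss.*) }

With that ordering a small-model object linked into a large- or
medium-model binary keeps all of its adrp-addressed data in range; only
the large arrays themselves need the longer addressing sequences.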