> On 8 Nov 2016, at 08:18, Mike Stump <mikest...@comcast.net> wrote:
>
> On Nov 7, 2016, at 6:33 PM, Iain Sandoe <iain_san...@mentor.com> wrote:
>>
>> a) right now, we need to know the target linker version - while it’s not
>> impossible to try and conjure up some test to see if a linker we can run
>> supports coalesced sections or not, the configury code and complexity needed
>> to support that would exceed what I’m proposing at present (and still would
>> not cover the native and canadian cases).
>
> A traditional canadian can run the host linker for the target on the build
> machine with --version (or whatever flag) and capture the version number. I
> don't know what setup you have engineered for, since you didn't say. First
> question, can you run the host linker for the target on the build machine?
> If so, you can directly capture the output. The next question is, is it the
> same version as the version that would be used on the host?
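For reference, the configure-time probe described above might look something like the minimal sketch below — assuming Apple’s ld reports its version in the usual “PROJECT:ld64-NNN” form for `ld -v`; the variable names and the sample output string are illustrative, not taken from the actual patch:

```shell
# Sketch: run the linker that will be used for the target and capture its
# ld64 version number.  In a real configure test the output would come from
# something like: ${LD} -v 2>&1
# The "PROJECT:ld64-NNN.N" format (and the sample value) is an assumption
# about what Apple's ld prints.
sample_output='@(#)PROGRAM:ld  PROJECT:ld64-274.2'

# Extract "274.2" from "...ld64-274.2"
gcc_cv_ld64_version=$(echo "$sample_output" | sed -n 's/.*ld64-\([0-9.]*\).*/\1/p')

# Keep just the major component for simple feature tests
gcc_cv_ld64_major=$(echo "$gcc_cv_ld64_version" | cut -d. -f1)

echo "ld64 version: $gcc_cv_ld64_version (major $gcc_cv_ld64_major)"
```

This only works, of course, when the linker binary can actually be executed on the build machine — which is exactly the sticking point for the canadian case.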
I suppose that one could demand that - and require a build thus.

So I build the x86_64-darwin14 X powerpc-darwin9 cross, and then a native:
  build = x86_64-darwin14, host = target = powerpc-darwin9

If we demand that the same version linker is used for all of these, then
perhaps that could work.  It seems likely, though, that we’ll end up with
mis-configures and builds that are hard to support when done by non-expert
build folks.

>> I’m not debating the various solutions in your reply to Jeff - but honestly
>> I wonder how many of them are realistically in reach of the typical end-user
>> (I have done most of them at one stage or another, but I wonder how many
>> would be stopped dead by “first find and build ld64, which itself needs a
>> c++11 compiler and BTW needs you to build libLTO.dylib .. which needs you to
>> build at least LLVM itself").
>
> Package managers exist to solve that problem nicely, if someone wants a
> trivial solution. They have the ability to scoop up binaries and just copy
> them onto a machine, solving hard chicken/egg problems. Other possibilities
> are scripts that setup everything and release the scripts.

Yes, I’m working on at least the latter (I don’t have time to become a
package manager).

>
>> am I missing a point here?
>
> The answer to the two questions above. The answer to the question, what
> specific question do you want answered, and what is available to the build
> machine, specifically to answer that question?
>
> Also, you deflect on the os version to linker version number, but you never
> said what didn't actually work. What specifically doesn't work? This method
> is trivial and the mapping is direct and expressible in a few lines per
> version supported. I still maintain that the only limitation is you must
> choose exactly 1 version per config triplet; I don't see that as a problem.
> If it were, I didn't see you explain the problem. Even if it is, that
> problem is solved after the general problem that nothing works today.
> By having at least _a_ mapping, you generally solve the problem for most
> people, most of the time.

That *requires* that one configures arch-darwinNN .. and doesn’t allow for
arch-darwin (to mean anything other than build = host = target).  Really,
though, we should be able to build arch-darwin with config flags including
-mmacosx-version-min= so that the result is deployable on any darwin newer
than the specified minimum.  I actually do this for day-job toolchains; it’s
not purely hypothetical, since darwin toolchains are all supposed to be
“just works” cross ones.

> For example, if you target 10.0, there is no new features from an Xcode that
> was from 10.11.6 timeframe. The only way to get those features, would be to
> run a newer ld64, and, if you are doing that, then you likely have enough
> sources to run ld directly. And if you are running it directly, then you can
> just ask it what version it is.

If we can engineer things so that a suitable ld64 can be run at configure
time and its version discovered automatically, I’m with you 100% - but the
scenarios put forward seem very complex for typical folks.

What would you prefer to replace this patch with?

Iain
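P.S. For concreteness, the “one version per config triplet” fallback mapping discussed above could be expressed in a few lines of configury, along these lines.  This is a sketch only: the specific ld64 version numbers, the pattern coverage, and the cache-variable name are illustrative assumptions, not a vetted table:

```shell
# Hypothetical fallback for cross/canadian builds where no linker can be
# executed at configure time: map the darwinNN target triplet to an assumed
# default ld64 version.  All version numbers below are illustrative.
target=powerpc-apple-darwin9   # would normally come from $target in configure

case "$target" in
  *-darwin9*)      gcc_cv_ld64_version=85.2  ;;  # 10.5-era Xcode (assumed)
  *-darwin10*)     gcc_cv_ld64_version=97.7  ;;  # 10.6-era Xcode (assumed)
  *-darwin1[4-9]*) gcc_cv_ld64_version=253.3 ;;  # later SDKs (assumed)
  *)               gcc_cv_ld64_version=unknown ;;
esac

echo "defaulting ld64 version for $target: $gcc_cv_ld64_version"
```

The limitation is exactly the one stated above: each triplet gets precisely one assumed linker version, which breaks down once people mix newer ld64 binaries with older OS triplets.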