> On 7 Nov 2016, at 13:53, Mike Stump <m...@mrs.kithrup.com> wrote:
> 
> On Nov 7, 2016, at 10:40 AM, Jeff Law <l...@redhat.com> wrote:
>> 
>> On 11/07/2016 10:48 AM, Mike Stump wrote:
>>> On Nov 6, 2016, at 11:39 AM, Iain Sandoe <iain_san...@mentor.com> wrote:
>>>> This is an initial patch in a series that converts Darwin's configury to 
>>>> detect ld64 features, rather than the current process of hard-coding them 
>>>> on target system version.
>>> 
>>> So, I really do hate to ask, but does this have to be a config option?  
>>> Normally, we'd just have configure examine things by itself.  For canadian 
>>> crosses, there should be enough state present to key off of directly, 
>>> especially if they are wired up to work.
>>> 
>>> I'd rather have the thing that doesn't just work without that config flag, 
>>> just work.  I'd like to think I can figure out how to make it just work, if 
>>> given an idea of what doesn't actually work.
>>> 
>>> Essentially, you do the operation that doesn't work, detect that it failed, 
>>> and then you know it didn't work.
>>> 
>> But how is that supposed to work in a cross environment when he can't 
>> directly query the linker's behavior?
> 
> :-)  So, the two most obvious solutions would be that the programs that need 
> to exist for a build are portable and ported to run on a system the build 
> can use, or one can have a forwarding stub from a system the build uses to a 
> machine that can host the less-portable software.  I've done both before; 
> both work fine.  Portable software can also include things like simulators, 
> to run software under simulation on the local machine (or on a machine the 
> forwarding stub links to).  I've done that as well.  For example, I've done 
> native bootstraps of gcc on my non-bootstrappable cross compiler by running 
> everything under GNU sim for bootstrap and enhancing the GNU sim stubs to 
> include a few more system calls that bootstrap uses.  :-)  read/write 
> already work; one just needs readdir and a few others.

This is pretty “black belt” stuff - I don’t see most of our users wanting to 
dive this deeply …
> 
> Also, for darwin, in some cases, we can actually run the target or host 
> programs on the build machine directly.

I have that (at least weakly) in the patch as posted - I’m reluctant to add 
more smarts to cover more cases unless it’s proven useful.
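
For concreteness, the sort of direct query involved - a minimal sketch only, 
assuming ld64 reports itself as “PROJECT:ld64-NNN” on “ld -v” (the variable 
names and sed expressions here are illustrative, not the patch’s actual code):

    # Probe the linker directly when we can execute it on the build machine.
    if $gcc_cv_ld -v 2>&1 </dev/null | grep ld64- > /dev/null 2>&1; then
      # Extract e.g. "264.3.102" from "@(#)PROGRAM:ld  PROJECT:ld64-264.3.102".
      gcc_cv_ld64_version=`$gcc_cv_ld -v 2>&1 </dev/null \
        | sed -n -e 's/.*ld64-\([0-9.]*\).*/\1/p' | head -1`
    fi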

> 
>> In an ideal world we could trivially query the linker's behavior prior to 
>> invocation.  But we don't have that kind of infrastructure in place.
> 
> There are cases that just work.  If you have a forwarding stub for the cross, 
> then you can just run it as usual.  If you have a BINFMT style simulator on 
> the local machine, again, you can just run it.  And on darwin, there are 
> cases where you can run target and/or host programs on the build machine 
> directly.
> 
> For darwin, I can't tell if he wants the runtime property of the target 
> system for programs that will be linked on it, or a behavior of the local 
> linker that will do the deed.  For the local linker, that can be queried 
> directly.  For the target system, we can know its behavior by knowing what 
> the target is.  We already know what the target is from the 
> macosx-version-min flag, which embodies the dynamic linker.  Also, for any 
> specific version of macosx, we can have a table of what version of ld64 it 
> has, by fiat.  We can say: if you want to target such a system, you should 
> use the latest Xcode that supported that system.  This can reduce 
> complexities and simplify our lives.

.. and produce the situation where we can never have a C++11 compiler on 
powerpc-darwin9, because the “required ld64” doesn’t support it (OK, maybe we 
don’t care about that).  But supposing we can now have symbol aliases with 
modern ld64 (I think we can) - would we want to prevent a 10.6 system from 
using that?

So I am strongly against fixing the capability of the toolchain on an 
assumption about externally-available tools predicated on the system revision.

The intent of my patch is to move away from that, to a situation where we use 
configuration tests to determine the capability from the tools themselves 
[when we can run them], and on the basis of their version(s) when we are 
configuring in a cross scenario.
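
Concretely, the shape of the logic is something like this (a sketch only; the 
variable and option names are illustrative, not necessarily the ones in the 
patch):

    # Native (or otherwise runnable) linker: ask the tool itself.
    if test x"$build" = x"$host"; then
      gcc_cv_ld64_version=`$gcc_cv_ld -v 2>&1 </dev/null \
        | sed -n -e 's/.*ld64-\([0-9.]*\).*/\1/p' | head -1`
    else
      # Cross scenario: fall back to a version declared at configure time.
      gcc_cv_ld64_version="$with_ld64_version"
    fi
    # Key individual features off whichever version we ended up with.
    gcc_cv_ld64_major=`echo "$gcc_cv_ld64_version" | sed -e 's/\..*//'`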

>> ISTM the way to go is to have a configure test to try and DTRT automatically 
>> for native builds and a flag to set for crosses (or potentially override the 
>> configure test).
> 
> 
> Sure, if it can't be known.
> 
> For example, if you have the target include directory, you don't need flags 
> for questions that can be answered by the target headers.  Ditto the 
> libraries.  My question is: what is the specific question we are asking?  
> Additionally, answering things on the basis of version numbers isn't quite 
> in the GNU spirit.  I'm not opposed to it, but it is slightly better to pose 
> the actual question if possible.

Actually, there’s a bunch of configury in GCC that picks up the version of 
binutils components when it can (an in-tree build) and makes decisions, at 
least as a fall-back, on that (so there is precedent).  We can’t do that for 
ld64 (there’s no equivalent in-tree build), but we _can_ tell configure when 
we know.
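
In generated-configure terms, that is no more than accepting a declared value 
when detection isn’t possible - something like the following (the option name 
is illustrative):

    # Handle a hypothetical --with-ld64-version=NNN.NN for cross configs.
    if test "${with_ld64_version+set}" = set; then
      gcc_cv_ld64_version="$with_ld64_version"
    fi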

> In complex canadian cross scenarios, we might well want to grab the source to 
> ld64 and compile it up, just as we would any other software for canadian 
> environments.

This is OK for professional toolchain folks, but it’s a complex set of 
operations compared with “obtaining the relevant installation for the desired 
host”.

As a heads-up, the situation is even worse for the assembler, FWIW - we now 
have a bunch of different behaviours dependent on whether the underlying tool 
is “cctools” or “clang”, and the version ranges of cctools and clang overlap.
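
If we end up probing that too, it would have to be something like the 
hypothetical sketch below - note the assumption (unverified here) that each 
assembler identifies itself usefully in its version output:

    # Hypothetical: classify the assembler by its version banner.
    as_ident=`$gcc_cv_as -v -o /dev/null </dev/null 2>&1 | head -1`
    case "$as_ident" in
      *clang*)   darwin_as_flavour=clang   ;;
      *cctools*) darwin_as_flavour=cctools ;;
      *)         darwin_as_flavour=unknown ;;
    esac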

cheers,
Iain
