Jeremy Huntwork wrote:
> On 3/8/12 4:24 PM, Bruce Dubbs wrote:
>> Jeremy Huntwork wrote:
>>> On 3/2/12 11:10 AM, Bruce Dubbs wrote:
>>>> Yes, I saw that. Reviewing.
>>>
>>> How is that coming along?
>>
>> Not well, sorry. I've got some personal things going on right now
>> and can't get to it. I'll look as soon as I get some time.
>
> Has anyone else had a chance to try out the build fully and compare?
> I'm waiting to hear more of a consensus from others who have tested
> it before I drop this in, although I'm confident it's sound.
This may have been covered in this thread already, but I don't recall anymore -- did you do an ICA run with this change? I'm a little concerned about changing the core toolchain around, actually, and I think the minimum testing should include an ICA run, showing that no more differences are introduced than in the current book. (Having fewer differences would obviously be better, I think.)

I don't have a ton of experience with changes to the toolchain, except in a couple of cases inside glibc (some of the bugs we've found). I don't have a good grasp on how gcc actually operates (with respect to paths) in the patched mode in the current book, versus with sysroot. It *almost* looks like sysroot is the equivalent of a DESTDIR install, where all the libs are installed into <sysroot prefix>/whatever/path but are asked for as /whatever/path at runtime (e.g. when ld.so is looking for DT_NEEDED entries, or in DT_RPATH entries in the libs themselves, or when the new compiler is run and it tries to find includes) -- is that accurate, or not really?

(I assume your home directory on quantum still has the current proposed diffs?)

<possibly not-well-founded opinions below...>

If upstream does endorse sysroot as the way all cross compilers should be built, I'm not sure I buy that argument. (But that might just be because it's different. :-/) I'm also not sure whether they actually do; I've heard both this and its opposite asserted.

If I understand what it is, then I think sysroot is a good idea for real cross compilation, where the host can't execute binaries built for the target, because you'd need a different ld.so. But we're not doing that, either, except possibly when converting from a 32-bit system under a 32-bit kernel to a 64-bit system (either multilib or pure64); the old kernel won't load the new binaries.

It seems that Greg never got the time to comment any more thoroughly on the modifications, either.
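For what it's worth, the DESTDIR analogy I have in mind can be sketched in a few lines of shell. Everything below is invented for illustration (the demo directory and library name are not from the book); the real knobs are gcc's --with-sysroot configure option and its -print-sysroot flag, which reports the configured value:

```shell
#!/bin/sh
# Toy illustration of the sysroot-as-DESTDIR idea (all names invented).
# With DESTDIR, files are staged under a prefix at install time but are
# referred to by their final path (/usr/lib/...) ever after. A sysroot
# compiler does the mirror image at lookup time: asked for
# /usr/lib/libdemo.so, it opens <sysroot>/usr/lib/libdemo.so instead.
SYSROOT=/tmp/sysroot-demo
mkdir -p "$SYSROOT/usr/lib"
echo "pretend this is libdemo.so" > "$SYSROOT/usr/lib/libdemo.so"

# What a sysroot-aware tool effectively does with a target path:
TARGET_PATH=/usr/lib/libdemo.so
cat "${SYSROOT}${TARGET_PATH}"

# The real equivalents (not run here):
#   ../gcc-x.y.z/configure --target=$LFS_TGT --with-sysroot=$LFS ...
#   $LFS_TGT-gcc -print-sysroot    # reports the configured sysroot
```

Note the analogy only covers the compile-time lookup half: on the finished system nothing rewrites anything, and ld.so still sees plain /usr/lib paths in DT_NEEDED/DT_RPATH.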
I'd kinda like to hear what he has to say, perhaps in shortened form somewhere, but with at least a few links to other discussions, if there are any. Though I suspect I won't necessarily agree with him; we'll see. It would still be interesting to hear why the current setup (with the reversion) is better.

Unfortunately I'm still limping along on this glibc-2.10.1 system, built as a crazy amalgamation of clfs-multilib, lfs-svn-2009-08-ish, and Greg's 2009-ish compile-Linux-from-source scripts. I haven't had time to investigate rebuilding it. :-/ Maybe after I get some newer hardware; it should run OK until then, and more CPU cores will let me run builds with higher parallelism.
-- 
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page