Eric writes:
> > So your target audience is "people who use newlib, use the uberbaum
> > build, and who work on multiple gcc trees", right? Seems
> > like such a small audience it's not likely to be widely used,
> > but that's just my impression.
>
> I agree, unfortunately. Really, if you don't want to have a single
> tree it's more effort than it's worth - either that, or split it up into
> "build and install binutils" and then "build and install gcc" (using
> --with-newlib=...). Easily scriptable, but I'm pretty sure that's not
> what Doug wants.
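For what it's worth, the split build Eric describes might look roughly like the sketch below. This is only an illustration of the two-stage approach, not a recommended recipe; the install prefix, target triple, and source-tree layout are all hypothetical placeholders, and real builds will want more configure options.

```shell
#!/bin/sh
# Sketch of the two-stage cross build: binutils first, then gcc.
# PREFIX, TARGET, and the ../binutils and ../gcc source paths are
# hypothetical placeholders - adjust for your own trees.
set -e
PREFIX=$HOME/cross          # hypothetical install prefix
TARGET=arm-elf              # hypothetical target triple

# Stage 1: build and install binutils into the prefix.
mkdir -p build-binutils
(cd build-binutils &&
 ../binutils/configure --target=$TARGET --prefix=$PREFIX &&
 make && make install)

# Stage 2: build and install gcc, telling it to assume newlib.
mkdir -p build-gcc
(cd build-gcc &&
 ../gcc/configure --target=$TARGET --prefix=$PREFIX --with-newlib &&
 make && make install)
```

The point of the complaint above is exactly that this works but forces an install step between the two stages, which is what a combined ("uberbaum") tree avoids.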
You are correct; it's not quite what I'm looking for. The purpose of doing this is to speed up the development process.

The existing scripts that take _existing_ and _tested_ releases are fine by me; I don't care much about that aspect. Such scripts are generally run just once and that's it (once the bugs in the scripts are worked out, which is separate from bugs in the tools). And scenarios where the majority of the hacking is on one particular piece aren't really what I'm interested in either - there, most of the hacking is in one package, so often "make" followed by "make check" will continue to "just work".

Consider a scenario where the C++ folks had to do "make install" of the backend before they could link cc1plus and run DejaGnu, and suppose this was standard operating procedure. [It's not a real scenario, of course, but it does have the right flavor of the problem I wish to solve.]

It's the day-to-day development of a fresh port that I want to speed up. If I've gone through a run of "make check-gcc" and fixed some random bugs, with fixes in any or all of libgloss, bfd, or gcc, for example, I'd prefer it if I could just type "make" and then "make check-gcc" again.

Note that newlib isn't necessarily the libc in use here. It could be any libc, though the context is cross-compilation.
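To make the desired inner loop concrete, this is the whole workflow being asked for, run from the top of a combined tree. It's a sketch of the goal, not of anything that works today; it assumes a build directory where all the packages (bfd, libgloss, gcc, etc.) are configured together so that dependency order is handled by the top-level Makefile.

```shell
# Hypothetical inner loop after fixing bugs in libgloss, bfd, or gcc:
# rebuild whatever changed, in dependency order, with no intervening
# "make install", then re-run the gcc testsuite.
make            # top-level make of the combined tree
make check-gcc  # re-run the gcc DejaGnu tests against the fresh build
```

Anything more than these two commands (installing binutils, reinstalling the libc, re-running configure) is the overhead this thread is trying to eliminate.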