Thanks to everyone for all the help. I'm looking more at the development process than the distribution process, which means different issues matter most to me. The big issue is that I've got lots of programs which can be visualised as having "conventional" dependencies, with a twist: suppose executable "foo" depends upon "colourSegmentation.o". If the target processor has SSE3 instructions then, IF there's a processor-optimised segmentation.c in the SSE3 directory, compile and link against that; IF it doesn't exist, compile and link against the version in the GENERIC_C directory.

I don't think maintaining separate makefiles that are manually kept up to date is going to be reliable in the longer term as new processor-optimised code gets written. So I think I'll follow the general advice: maintain a single hand-written makefile that describes the non-processor-specific dependencies, and then try some homebrew script to automatically infer and add the appropriate object-file paths to each processor-capability makefile, depending on what's available for each processor-capability set. (This is probably not a common problem.)
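Roughly what I have in mind for one of those generated fragments is something like the untested GNU-make sketch below. The SSE3/ and GENERIC_C/ directories and colourSegmentation.c come from my example above; the SSE3_SEG_SRC and SEG_SRC variable names are just made up for illustration:

    # Sketch only: does a processor-specific copy of the source exist?
    # $(wildcard) expands to nothing if the file isn't there.
    SSE3_SEG_SRC := $(wildcard SSE3/colourSegmentation.c)

    # Use the SSE3 version if it exists, otherwise fall back to generic C.
    SEG_SRC := $(if $(SSE3_SEG_SRC),$(SSE3_SEG_SRC),GENERIC_C/colourSegmentation.c)

    colourSegmentation.o: $(SEG_SRC)
    	$(CC) $(CFLAGS) -c -o $@ $<

    foo: foo.o colourSegmentation.o
    	$(CC) $(LDFLAGS) -o $@ $^

(Recipe lines need real tabs, as usual.) An alternative would be "vpath %.c SSE3 GENERIC_C" with a pattern rule, since vpath searches the directories in order, but the explicit $(wildcard) test makes it easier to see which file got picked when something goes wrong -- which matters to me, as below.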
> I recommend mk from Plan 9, the syntax is clean and clearly defined
> (not the "is it BSD make, is it GNU make, or is it some archaic Unix
> make?" problem). I found that all meta build systems suck in one way
> or another -- some do a good job at first glance, like scons, but
> they all hide what they really do and in the end it's like trying to
> understand configure scripts if something goes wrong. make or mk are
> better choices in this regard.

Yeah. I don't mind powerful languages for doing stuff "automatically"; the problem is systems that aren't designed to be easily debuggable when they go wrong.

--
cheers, dave tweed __________________________
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot