On 24/02/2014 01:12, Mick wrote:
> On Sunday 23 Feb 2014 22:32:32 Alan McKinnon wrote:
>> On 23/02/2014 20:18, Canek Peláez Valdés wrote:
>>> I don't think forking would attract many developers. Writing something
>>> new trying to follow "the *nix design principles", but being modern and
>>> with the same features (all of them optional, of course) of systemd
>>> will have more chances; although I think it will fail because most of
>>> the people that can code "better" actually like the systemd design,
>>> and would prefer to contribute to it.
>>>
>>> And if you found enough of these mythical good coders, good luck
>>> defining what "the *nix design principles" means.
>>
>> I've been wondering about this concept of "the *nix design principles"...
>
> Well, I'm no authority on this since I can't code, but here's a starter for
> 10:
>
> http://www.faqs.org/docs/artu/ch01s06.html
>
> http://people.fas.harvard.edu/~lib113/reference/unix/co-unix4.html
I really like documents like this, all airy-fairy and giving the impression that the whole design was worked out nicely in advance. It wasn't. The doc even quotes this fellow, who had nothing to do with the doc itself:

"Those who don't understand UNIX are doomed to reinvent it, poorly."
    --Henry Spencer

Let me tell you how Unix was designed, how the whole thing took shape once K&R had gotten C pretty much stabilized. It is most apparent in I/O error handling in early designs, and it goes like this:

We don't do error handling. We don't even try to deal with it at the point it occurred; we just chuck it back up the stack, essentially sending the message "stuff it, I'm not dealing with this. You called me, you fix it."

Doesn't sound like good design, does it? Sounds more like "do whatever you think you can get away with". Good design in this area gives you something conceptually along the lines of try...catch...finally (with possibly some work done to avoid throwing another exception in the finally). Unix error "design" does this:

    exit <some arb number>

and an error message is in $@ if you feel like looking for it.

Strangely, this approach is exactly why Unix took off and got such widespread adoption throughout the 70s. An engineer will understand that a well-thought-out design that is theoretically correct requires an underlying platform that is consistent. In the 70s, hardware consistency was a joke - every installation was different. Consistent error handling would severely limit the arches this new OS could run on. By taking a "stuff it, you deal with it coz I'm not!" approach, the handling was fobbed off to a higher layer that was a) not really able to deal with it, and b) at least in a position to try *something*. By ripping out the theoretical-correctness aspects, devs were left with something that actually could compile and run.
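To make that concrete, here's a minimal sketch of the convention I'm describing (pure illustration, no real project involved) - in shell terms, the failing command just exits with some nonzero number, $? holds it, and the "handling" is whatever the caller bothers to do with it:

```shell
#!/bin/sh
# The Unix convention sketched above: a failing command exits with an
# arbitrary nonzero status; the "error handling" is whatever the caller
# bothers to do with $? afterwards.
false                    # the callee "fails": exit status 1
status=$?
if [ "$status" -ne 0 ]; then
    # The caller's entire "handling": note it and carry on (or pass
    # its own nonzero status further up the stack).
    echo "callee bailed with status $status - you called me, you fix it"
fi
```

Nothing forces the caller to check $? at all; ignoring the failure entirely is just as valid by this "design", which is rather the point.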
You had to bolt on your own fancy bits to make it reliable, but eventually, over time, these things too stabilized into a consistent pattern (mostly by hardware vendors going bankrupt and their stuff leaving the playing field).

And so we come to what "Unix design" probably really is:

"You do what you have to to get the job done, the simpler the better, but I'm not *really* gonna hold you to that."

I still don't like what Lennart has done with this project, but I also fail to see what design principle he has violated.

--
Alan McKinnon
alan.mckin...@gmail.com