On Tue, 01 Mar 2016 01:02:30 +0700 Robert Elz <k...@munnari.oz.au> wrote:
> But in real life, when we aren't writing stuff like "false && false"
> or "false && true", when someone writes a command like
>
>     $x_option && command implementing -x
>
> they aren't asserting that "${x_option}" is supposed to always be
> true, and that the script should abort if that is not the case,
> what they are doing is making a one line version of
>
>     if $x_option
>     then
>         command implementing -x
>     fi

Robert, thank you for your detailed and illuminating replies. Only after
reading the above did I finally understand why X && Y is special under
-e. I use it myself -- doesn't everyone? -- and never thought it should
cause the script to terminate.

You gave two other unfortunate examples: the effect of a subshell, and
what to do with the build_and_run example. I doubt the wisdom of
following Posix here, given "how useless it is to attempt to rely upon
-e". If I may suggest, do you think we could craft a simpler, better
rule instead?

In general I expect scripts under -e to behave as though in a Makefile
(with the -j1 option). IMO that provides (or should provide) the best
guidance: evaluate each statement's status *as if* in a subshell. For
example, your "unexpected" result is completely ordinary under make:

    $ make
    if false; true; then echo fail; fi
    fail

Similarly, we want

>     build_and_run command-that-should-not-fail ||
>         mail -s help kre </dev/null

to evaluate as

    (build_and_run command-that-should-not-fail) ||
        mail -s help kre </dev/null

(except of course for the side effects of build_and_run). Afaict the &&
"longcircuit" is the only desirable exception. Is it? If so, would a
good policy be

  1. Evaluate the exit status of each pipeline *as if* in a subshell
     (like make), except
  2. For X && Y constructions, evaluate as if it were "if X; then Y; fi".

You're doing the work; I don't want to muddy the waters. It just seems
to me that when Posix is providing lousy guidance, innovation is better
than emulation.

--jkl
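
P.S. To make rule 1 concrete, here is a small self-contained sketch.
The body of build_and_run is just a stand-in I made up for this note
(one step that must not fail, then the run step), and I've replaced the
mail command with an echo so the script is harmless to run:

    #!/bin/sh
    set -e

    # Stand-in for the real build_and_run.
    build_and_run() {
        false                       # the step that should not fail -- but does
        echo "running $1"
    }

    # Today (bash, dash): because the function call sits on the left of
    # ||, -e is ignored inside the whole function body, so the false
    # above is silently skipped, "running ..." is printed, and the ||
    # branch never fires.
    #
    # Under rule 1 the left-hand side would be judged as if it were
    #     (build_and_run command-that-should-not-fail)
    # with -e still in force inside, so the failure would end that
    # (implied) subshell and the || branch would fire instead.
    build_and_run command-that-should-not-fail ||
        echo "this is where 'mail -s help kre </dev/null' would run"

Rule 2, by contrast, just blesses what every shell already does: with
x_option=false, "$x_option && command" skips the command and carries
on, exactly as in your if/then/fi expansion.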