Greg Wooledge wrote:
On Thu, Aug 11, 2011 at 11:56:10PM -0700, Linda Walsh wrote:
**Exception**
declare -i a
a=0
--
As a is declared to be an integer, its value is evaluated arithmetically
at assignment time. a=0 is an integer expression that doesn't set
$? to 1.
Neither should:
((a=0))
a=0 is an assignment. Assignments always return 0.
((a=0)) is an arithmetic command. It has the *side effect* of putting the
value 0 into the variable a, but its primary goal is to evaluate the
mathematical expression inside of the parens, in the same way that C does,
and return 0 ("true") or 1 ("false").
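A quick sketch of the difference, assuming typical bash behavior at a prompt:

    a=0; echo $?        # plain assignment: exit status is always 0
    ((a=0)); echo $?    # arithmetic command, expression value 0 -> exit status 1
    ((a=5)); echo $?    # expression value 5 (non-zero) -> exit status 0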
In C, if you write the expression a=0, for example:

if (a=0) {

then the value of the whole expression is the assigned value, 0, which is
false, so the body of the if is never executed; the assignment into a
happens only as a side effect.
Granted, the C code shown above is most typically a mistake (a==0 is what
is usually wanted), and in fact many compilers will issue a warning if
they see it.
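The same = vs == trap exists inside bash's ((...)); a sketch using the same
variable a:

    a=1
    if ((a=0)); then echo "never runs"; fi   # '=' assigns; expression value is 0, so false
    if ((a==0)); then echo "a is zero"; fi   # '==' compares; a is now 0, so true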
Back to bash,
----
If I write a==0 on the bash command line, it will generate an error.
a=0 does not. 'Bash' knows the difference between an assignment and an
equality test in math.
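A sketch of that difference, assuming the declare -i a from the earlier
example is still in effect (without the integer attribute, a==0 is quietly
treated as assigning the string "=0" to a rather than raising an error):

    declare -i a
    a==0    # the value "=0" is evaluated arithmetically because of -i: syntax error
    a=0     # ordinary integer assignment: succeeds, $? is 0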
Maybe that's the 'compatible out' we are, perhaps, all looking for.
I'd be fine with a restriction that one must do an assignment of the end
result for it NOT to be affected by '-e'.
I.e. if I do:
imadev:~$ a=0; ((a==1234)); echo $?
1
----
That makes sense to me -- since you did ((a==1234)), there is no 'if'
or && following it that would catch the value or make bash think the value
has been caught. Your use of $? afterwards is in the next statement, which
bash wouldn't see.
But in the case of ((res=a==1234)), I'd expect $? to be 1, BUT NOT to
trigger a -e exception.
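For concreteness, a sketch of that case under bash's current behavior (the
very behavior at issue here): the expression's value is 0, so the arithmetic
command returns status 1, and -e treats that as fatal:

    set -e
    a=0
    ((res = a==1234))   # a==1234 is 0, assigned to res; value 0 -> status 1 -> -e exits here
    echo "res=$res"     # under the proposed restriction this would still be reached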
AFAIK, that's what I would have *expected* before -- though I never write
code like ((a==123)); ... just to see if it died, so maybe it wouldn't have.
I might have complained about that, but since ((...)) was an extension and
didn't have previously defined behavior, I might not have cared as much --
unlike bash now, which does have previously defined behavior that is being
broken.
to ensure that all your *ERRORS*
are caught by the script, and not allowed to execute 'unhandled'.
I do not believe the intent of set -e was ever to catch *programmer*
errors. It was intended to catch failing commands that would cascade
into additional failures if unhandled -- for example,
cd /foo
rm -rf bar
If the cd command fails, we most certainly *don't* want to execute the
rm command, because we'd be removing the wrong thing. set -e allows
a lazy programmer to skip adding a check for cd's failure. A wiser
programmer would skip the set -e (knowing how fallible and cantankerous
it is), and check cd's success directly:
cd /foo || exit
rm -rf bar
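A slightly fuller sketch of that defensive pattern (the message and exit
status here are just one choice, not the only one):

    cd /foo || { echo "cd /foo failed" >&2; exit 1; }
    rm -rf bar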
Right, and as you are developing, don't you write your first code as
cd /foo && rm -fr bar?
If the 'rm' fails, I'd expect it to die...
Some things... mkfs... won't have an error check in a first script that
comes from doing it interactively -- bash line re-edits that are later
saved as a script. Scripts born that way are just 'batch jobs' that need
to be turned into full-on, error-checked scripts.
It's in that development phase -- when you go from a 'batch job' to a
program-script -- that -e is helpful for getting the bugs out, NOT during
production. THOUGH, running with -eu on all the time is a way of enforcing
a small level of strictness that can catch errors in "production code"
(only because it should never be triggered) -- i.e. if a script exits that
way, it's a 'bug' that needs to be fixed.
That's why it's a *development* aid; things aren't perfect in development.
It's when things are done that the code should run with no errors (like
that's always true! Ha!)
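For what it's worth, a sketch of what "running with -eu on all the time"
might look like as a script preamble; the ERR trap is an extra touch I'm
assuming, not something -eu itself requires:

    #!/bin/bash
    set -eu    # abort on an unhandled failure (-e) or an unset variable (-u)
    trap 'echo "unexpected failure near line $LINENO" >&2' ERR   # report before -e exits

    cd /foo
    rm -rf bar   # never reached if the cd above fails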