Date: Tue, 15 Mar 2022 18:57:08 +0100
From: Edgar Fuß <e...@math.uni-bonn.de>
Message-ID: <yjdtdj4q78t20...@trav.math.uni-bonn.de>
| Is it on purpose that sh's (at least, NetBSD-8's sh's)
| built-in printf doesn't give a non-zero rc if the underlying
| write(2) fails (with EPIPE, in my case)?

On purpose? Not sure, but historically the vast majority of processes
don't check for write errors on stdout; common usage is just

	printf(whatever); exit(0);

(or falling off the end of, or returning from, main(), which is the same
thing). Checking the status from printf doesn't help: in the cases where
write errors can reasonably occur, the output is buffered, and unless
there's a lot of it, nothing gets written until the implicit fflush()
done by exit(). By that time the exit status has already been
determined. To handle write errors, processes would (all) need to
explicitly do fflush(stdout), check for errors, write a message to
stderr (and hope that one works), and exit non-zero if there is one.

| It turns out that this
|
|	{ sleep 1; printf "Hallo" || echo "ERROR">&2; } | echo Foo

Note that in this test you're not really testing for write errors: the
write will normally generate SIGPIPE, which will kill the sender, which
in this case is a sub-shell running the left side. That shell dies, so
there is nothing left to run the echo command. [After I composed this
reply, I saw that you discovered this for yourself in your later
message.]

| doesn't print "ERROR" with both sh and bash, while it does with ksh.

ksh (our ksh anyway) doesn't have a builtin printf. So...

| Replacing printf with echo changes nothing, while using the /usr/bin
| (or /bin, in case of echo) form does what I would expect.

That's because when it is a separate process, it is just that process
that gets the SIGPIPE; the waiting shell detects the "killed by a
signal" status, which is not "true", so it runs the || code. If you want
to test for write errors this way, you need instead

	{ trap '' PIPE; sleep 1; printf "Hallo" || echo "ERROR">&2; } | echo Foo

(you could alternatively ignore SIGPIPE in the shell from which you're
running this command).
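As a side note, the reason the shell runs the || branch for the external
printf is visible in how shells encode "killed by a signal": the status
becomes 128 plus the signal number, and SIGPIPE is signal 13. A minimal
sketch (assuming a POSIX-conforming sh; the encoding can differ in
non-POSIX shells):

```shell
# A child shell sends SIGPIPE to itself and dies from it; the parent
# then sees exit status 128 + 13 = 141, which is non-zero, so any
# "cmd || handler" construct would run the handler.
sh -c 'kill -s PIPE $$'
echo "status: $?"
```

Running this should print "status: 141" on POSIX shells, which is
exactly the non-"true" status that makes || fire for /usr/bin/printf.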
If you do that, you'd discover that (in -8 or -9) the builtin echo in sh
actually does report the error (and then every echo command from that
exact same shell, or subshells of it, forever after will also report an
error, whether or not there is one; that's a bug that was reported and
fixed a while ago), but /bin/echo and both the builtin and /usr/bin
printf don't. That is, the builtin echo actually checks for the error
(and then leaves the error state stuck on, forever; but echo is the only
thing in sh that ever checks it, so just echo keeps reporting errors).

Also, just FYI: since processes on the left side of a pipe being killed
by SIGPIPE is just a normal part of life, shells don't treat this much
differently than them simply exiting (unlike being killed by many other
signals; SIGINT death is another one that gets more or less ignored:
still a non-zero exit status, for anything that checks, but no noise
blathered on the terminal).

This issue was brought up on the austin-group list (POSIX) by someone
doing testing on linux, with output from commands being sent to
/dev/full (yes, a special file whose only purpose, as far as I can tell,
is to return ENOSPC to any write request). I believe that in glibc these
days there is some horrible hack so that programs don't need to be
modified from the paradigm above, but write errors on stdout can still
be detected. I know very little (after time has elapsed, even less than
the minuscule amount I once did) about how it is done; I just remember
that it sounded like a very bad idea.

There was nothing explicit in POSIX about this, but some of the austin
group people feel that it is a travesty that any command can ever fail
to do what it is supposed to do and still exit(0) under any
circumstances.
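For anyone who wants to reproduce the /dev/full experiment mentioned
above, a sketch (assuming a system that provides /dev/full at all; it is
a Linux device and not universally present, and the guard below skips
the test where it is missing):

```shell
# /dev/full fails every write with ENOSPC, so it exercises a utility's
# write-error path without needing a genuinely full filesystem.
if [ -e /dev/full ]; then
    /bin/echo hello > /dev/full
    echo "external echo status: $?"
else
    echo "no /dev/full on this system"
fi
```

On systems where the external echo checks for write errors (GNU
coreutils does, and so do the current NetBSD versions discussed below),
the reported status is non-zero; an echo that never checks would happily
report 0 here.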
This despite it being how things have always worked, since forever (or
at least since stdio, and the "portable i/o" library which preceded it);
and in the cases where it actually matters, rather than contrived tests,
a write error generally means something serious is wrong (overflowed
filesystem, broken hardware, ...) which is very hard to miss, and more
noise from programs doesn't really help, IMO.

Anyway, in the next version POSIX is planning on making it mandatory for
lots of commands to check for write errors on stdout, and if one is
found, to exit with a status indicating an error (which also mandates a
message on stderr from almost all utilities). I don't much like this,
but my arguments fell on deaf ears; so, despite my misgivings, in
current both versions of both echo and printf (the builtin ones in
/bin/sh and the ones in the filesystem) do check for errors and report
them (and exit non-zero) if they happen. None of the many other
utilities (built into sh, and otherwise) has been changed, except I
think maybe the built-in pwd in sh (not sure if I did that one or not
now). pwd received a lot of attention because pwd was the command
(pwd >/dev/full) which started all of this.

kre
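The behaviour POSIX is heading toward can be approximated today in
scripts, for shells whose printf doesn't (or can't usefully) report the
failure itself. A sketch, with a hypothetical helper name (`emit`) of my
own choosing:

```shell
# Hypothetical wrapper: print a line, and on a failed stdout write do
# what the planned POSIX wording asks of utilities: a diagnostic on
# stderr and a non-zero exit.
emit() {
    printf '%s\n' "$*" || { echo "emit: write error on stdout" >&2; exit 1; }
}

emit "Hallo"
```

Note the caveat from earlier in this message still applies: with a
buffering stdio implementation this only catches errors the builtin
actually sees at write time, and a SIGPIPE death bypasses the || branch
entirely unless the signal is ignored first (trap '' PIPE).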