Bruce Korb wrote:
> On Mon, Aug 30, 2010 at 9:59 AM, Jim Meyering <j...@meyering.net> wrote:
>> Bruce Korb wrote:
>>> Hi Jim,
>>>
>>> On Mon, Aug 30, 2010 at 9:26 AM, Jim Meyering <j...@meyering.net> wrote:
>>>> I don't like the length of "ignore_value" either, but think of that
>>>> as a feature, not a problem.  It's more of an auto-regulator: If I'm
>>>> ignoring so many "important" return values that the "ignore_value"
>>>> uses impair readability, then I take that as an indication that I'm
>>>> doing something wrong.
>>>
>>> Theoretically, that sounds right.  In "real life" I develop a lot of
>>> daemon software on appliances that really cannot go down.  If some
>>> FILE* descriptor becomes error prone, I trigger some alert somewhere,
>>> but I may as well keep trying because it might get better.  I can't
>>> just quit because I'm unhappy with some FILE* descriptor.  Maybe
>>> extend the prefix a bit:
>>>
>>>   void_fwrite()
>>>
>>> for example?  Or even:
>>>
>>>   ignore_value_fwrite()
>>>
>>> In any case, it is the wrapping of the function call that I see as
>>> comprehension clutter.  Much more so than the length of the prefix.
>>
>> Hi Bruce,
>>
>> Can your daemon software really afford to totally ignore
>> write failure?  If you're ignoring fwrite's return value,
>> you can't even *detect* a write failure, much less act on it.
>
> Writing status to a private (not-syslog) file.  If there's a problem,
> someone ought to be informed.  As mentioned in various places,
> checking the return of fwrite won't ensure you've captured a problem.
> If you catch something, you have some problem, but if you have
> a problem, you won't necessarily catch it.  So you have to fflush
> and then check error status.  And then emit an event.  (That works,
> or the system falls over.)  If there were an error at one log point
> that wasn't noticed, it'd get caught within a second or two anyway.
> Nobody's going to initiate a manual intervention within seconds
> anyway.  So, the probability of missing a problem for "too long"
> is sufficiently close to zero that it isn't worth worrying over at
> every opportunity (every fwrite call).
>
> The particular code in question was not a daemon, but my autogen
> thingey.  The code in question emitted a shell script.  If the output
> is cut short, the user needs to figure out why, but I ought to check
> ferror when I'm all done.  In this context, I'd have used xfwrite()
> instead of explicitly casting it to void.
>
>> One case in which not reporting a write error makes sense is when
>> the error arises in failing to write to a log, and that (logging) is
>> the only way to indicate failure (i.e. in a daemon).  But not even
>> to detect it?  Hence never to report a write error...  In what
>> circumstance can that be OK?
>
> As I said, checking ferror() at the end of a write-to-log session and
> then triggering an event.  "Good enough" even if not excruciatingly
> perfect.
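
In concrete terms, that flush-then-check approach might look something
like the sketch below.  (notify_operator is a hypothetical alert hook
standing in for whatever event mechanism the appliance uses; it is not
part of any real API.)

    #include <stdio.h>

    extern void notify_operator (char const *msg);  /* hypothetical hook */

    static void
    log_flush_and_check (FILE *log)
    {
      /* fwrite return values were ignored above; fflush pushes any
         buffered data to the OS so that a deferred write failure
         shows up in the stream's error indicator.  */
      if (fflush (log) != 0 || ferror (log))
        {
          notify_operator ("log write failure");
          clearerr (log);  /* keep trying; the condition may clear */
        }
    }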
Checking ferror is good enough as long as you rely only on fwrite, and
not, say, *printf.  Some *printf failures are specified such that they
are not detectable via ferror (though, of course, the standard does not
say that outright).

> Anyway, even ignore_value_fwrite() is a readability win over
> ignore_value(fwrite()), but I'd prefer either void_fwrite() or
> yfwrite() -- but I'm now leaning more toward void_fwrite() at this
> point.

  #define void_fwrite(a, b, c, d) (ignore_value (fwrite (a, b, c, d)))
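
For illustration, here is how that might fit together, with a
simplified stand-in for gnulib's ignore_value (the real one lives in
ignore-value.h).  The statement-expression form is what actually
silences GCC's __attribute__ ((warn_unused_result)) warning; a bare
(void) cast does not.

    #include <stdio.h>

    #if defined __GNUC__
    # define ignore_value(x) \
        (__extension__ ({ __typeof__ (x) __x = (x); (void) __x; }))
    #else
    # define ignore_value(x) ((void) (x))
    #endif

    #define void_fwrite(a, b, c, d) (ignore_value (fwrite (a, b, c, d)))

    int
    main (void)
    {
      static char const msg[] = "status: ok\n";
      /* Deliberately discard fwrite's return value; rely on an
         ferror check at the end of the session instead.  */
      void_fwrite (msg, 1, sizeof msg - 1, stdout);
      return ferror (stdout) ? 1 : 0;
    }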