Hi everybody,

If you build with -D_FORTIFY_SOURCE=2 (or 1), some additional checks, both at compile time and at run time, are performed that are supposed to catch buffer overflows. Debian (and some other distributions) use this option by default when packaging software. And since I generally build my code with -Werror, I cannot simply ignore the warnings triggered by _FORTIFY_SOURCE when packaging my software.
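As far as I can tell, the mechanism behind these warnings is that, with fortification enabled, glibc marks a number of I/O functions (fscanf, fread, write, ...) with GCC's warn_unused_result attribute, so ignoring their return value produces -Wunused-result, which -Werror then turns into a hard error. A minimal stand-alone sketch of that effect (read_one_int is just a made-up stand-in, and the exact set of annotated glibc functions may differ):

    #include <stdio.h>

    /* stand-in for a fortified libc function: the attribute is roughly
     * what -D_FORTIFY_SOURCE adds to the real declarations */
    __attribute__((warn_unused_result))
    static int read_one_int(FILE *fh, int *out)
    {
        return fscanf(fh, "%d", out);
    }

    int main(void)
    {
        int value = -1;
        /* GCC warns here that the return value is ignored
         * (-Wunused-result); note that a plain (void) cast does not
         * silence this particular warning in GCC */
        read_one_int(stdin, &value);
        printf("value: %d\n", value);
        return 0;
    }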
First I thought the additional compiler warnings were a good thing, and that it might be good practice to simply adjust my code. But after looking into them, it turned out that most (if not all) of the unused-result warnings triggered by _FORTIFY_SOURCE are, IMO, ridiculous, suggest less maintainable code, and are unrelated to buffer overflows. One example:

    int value = -1;
    fscanf(fh, "%d", &value);

This causes a warning; apparently you are supposed to handle the return value of fscanf, for no good reason whatsoever. The output variable is initialized with a value indicating an invalid result, so the behavior is exactly the same as that of the more verbose alternative _FORTIFY_SOURCE is suggesting (both variants are repeated as a complete, compilable sketch at the end of this mail):

    int value;
    if (fscanf(fh, "%d", &value) != 1)
        value = -1;

With _FORTIFY_SOURCE you get a warning every time you read or write some data and don't explicitly handle the return value of the respective stdlib function. But I generally try to keep my code as simple as possible (which also reduces potential bugs), and most of the time all I could do if an I/O function fails would be to log an error message. I'd rather not bloat my code with unnecessary error handling that is only triggered in rather theoretical scenarios, unless it has security implications. What could possibly go wrong if I keep writing to stdout after it has been closed by the parent process, for example? Or why should I have to check whether all bytes have been read into an (initialized) buffer before passing it to a parser that must be able to deal with invalid input anyway?

Apart from the fact that addressing all of these warnings would significantly bloat my code, I also feel it would make my code less robust, by adding error-handling code that won't (or can't easily) be triggered during testing. And what does all of this have to do with buffer overflows, anyway?
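For reference, here is the comparison from above as a complete, compilable sketch (the function names are made up for illustration; both variants leave the caller with -1 whenever fscanf does not match a number):

    #include <stdio.h>

    /* Variant 1: pre-initialize with the "invalid" value and ignore the
     * return value; this is what _FORTIFY_SOURCE warns about. */
    static int read_value_short(FILE *fh)
    {
        int value = -1;
        fscanf(fh, "%d", &value);
        return value;
    }

    /* Variant 2: the more verbose form the warning pushes you towards. */
    static int read_value_verbose(FILE *fh)
    {
        int value;
        if (fscanf(fh, "%d", &value) != 1)
            value = -1;
        return value;
    }

If fscanf fails, it returns 0 or EOF and leaves the variable untouched, so the first variant keeps the -1 it started with and the second assigns it explicitly; the result is the same, which is why I don't see what the extra branch buys me.

Sebastian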