On Sunday 13 December 2009, Zach Welch wrote:
> On Sun, 2009-12-13 at 15:49 -0800, David Brownell wrote:
> [snip]
> > > > Note that MISRA is not universally lauded. As I understand, some of
> > > > its practices are contrary to other widely adopted coding policies.
> > >
> > > Sure. Some points are a laugh like "a null pointer should not be
> > > dereferenced".
> >
> > That one touches on interesting system philosophy though. Should
> > one guard _every_ pointer access with an "if (ptr == NULL)" check?
>
> Only if the NULL can be a result of normal run-time operation. Memory
> allocation failures should be expected in normal run-time operations;
> even though they are undesirable, the program must always handle them.
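(To make that distinction concrete, here's a minimal sketch -- the names
are invented for illustration. The first routine gets NULL from the
allocator as a normal run-time outcome and must check for it; the second
documents a non-NULL parameter and doesn't re-check it:)

#include <stdlib.h>
#include <string.h>

/*
 * NULL is a normal run-time outcome here:  the allocator can fail,
 * so the check is mandatory and the failure is reported to callers.
 */
char *copy_name(const char *name)
{
	size_t len = strlen(name) + 1;
	char *copy = malloc(len);

	if (copy == NULL)
		return NULL;	/* expected failure mode; caller handles it */
	memcpy(copy, name, len);
	return copy;
}

/*
 * Here the spec says "name must not be NULL", so no defensive check:
 * passing NULL is a caller bug, and it fast-fails inside strlen().
 */
size_t name_length(const char *name)
{
	return strlen(name);
}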
Yes. Cases where NULL is valid should get such checks. But that's
self-evident, and not the interesting case.

> In library APIs where it's impossible to determine such pre-conditions,
> they must be checked... or we have a buggy library.

And that's the issue I was highlighting. By calling that a "buggy"
library you are extending its specification. It's possible to define
APIs that way ... but it's fairly widely understood to be costly,
enough so that such definitions are unusual.

Such spec extensions enlarge the scope of what must be tested, hurt
performance in several ways, and increase system complexity. In some
cases those tradeoffs are worth making; not often.

I admit that when I first came across it, the "don't check for NULL"
philosophy seemed pretty wrong. But quite a few years later, I can see
that it's quite effective. The "it's wrong" argument operates at the
micro scale; the "it's right" argument at the macro scale. For every
small micro win, there are many more large macro wins.

> > I'm personally used to the answer to that being "no", and being
> > able to rely on inputs being non-NULL ... unless they're explicitly
> > stated as allowing NULL.

If you pass a NULL where the function spec requires that the parameter
not be NULL, there is only one action which is in all cases clearly
incorrect: passing the NULL in the first place.

Anything else comes down to a choice of policies about how to handle
such run-time errors. "Implementation choices." One choice being:

> If the NULL can derive _only_ from programming error, then the assert()
> macro should be used instead.

Note that the difference between an assert() failure and what happens
when you actually indirect through a NULL pointer (on sane systems!)
is *ONLY* in which diagnostic is emitted ... SIGSEGV or nothing-at-all,
vs "assertion failure".

> > I'm not sure what you mean to imply by that NULL example, but the
> > two philosophies I'm aware of are:
> >
> >  - Having largely superfluous NULL checks obfuscates the code and
> >    makes it less clear. If anyone wrongly passes a NULL, we get a
> >    nice fast-fail that's easily tracked down and fixed. Plus, there
> >    is a *LOT* of code (and runtime cycles) wasted on such checks.
>
> This model falls down in embedded devices and under heavy load with
> dynamic memory allocation. In neither case will it be remotely easy for
> the user of the device or application to produce a viable crash report.

That "dynamic memory allocation" scenario presupposes bugs of the
"didn't check for allocation failure" kind ... which happen to be bugs
that static analysis tools are pretty good at surfacing. Or even GCC,
with the warn_unused_result machinery. So that's a non-issue in any
reasonable development model.

(Worth checking that OpenOCD is kicking in that function attribute ...
also the "nonnull" one. I suspect that it isn't. A sketch of what such
declarations might look like follows below.)

The "deeply embedded devices" scenario was the second one I called out.
It doesn't actually apply to all such devices, but most places where
that policy is used are ROM-based (and have CPU and other resources to
waste on such checks).

> If this philosophy has a religious following, it comes with ceremonies
> that have the same effect on its practitioners as does Russian Roulette.

Not true. It's pretty common in operating systems ... where the
overhead of the superfluous testing is actively detrimental. I've seen
benchmarks showing library performance improved by over 10% on common
tasks, just by getting rid of such crap. Ditto device drivers and other
key system infrastructure.
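To make the assert()-versus-bare-dereference point above concrete, a
small sketch (invented names; the behavior described is what you get on
typical hosted systems):

#include <assert.h>

struct config {
	int speed_khz;
};

/*
 * A caller bug (NULL passed where the spec says non-NULL) stops the
 * program either way; only the diagnostic differs:  an "Assertion
 * failed ..." message versus a SIGSEGV.
 */
int get_speed_checked(const struct config *c)
{
	assert(c != NULL);
	return c->speed_khz;
}

int get_speed_unchecked(const struct config *c)
{
	return c->speed_khz;
}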
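And here's what enabling those attributes might look like -- these are
hypothetical declarations for illustration, not copied from OpenOCD's
actual headers:

#include <stddef.h>

struct buffer;			/* opaque, for illustration */

/* GCC warns when a caller ignores this return value, which catches
 * "didn't check for allocation failure" at compile time. */
__attribute__((warn_unused_result))
struct buffer *buffer_alloc(size_t size);

/* GCC warns when a NULL constant is passed for arguments 1 or 2, and
 * may optimize on the assumption they are never NULL at run time. */
__attribute__((nonnull(1, 2)))
int buffer_append(struct buffer *buf, const void *data, size_t len);

Both warnings come with stock GCC (-Wall territory), and the attributes
themselves add no run-time cost.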
Coming back to the performance point: it turns out that the icache
impact of such superfluous checking is much worse than you'd think.
Ditto code density, and thus paging rates. And the overheads compound
as calls pass through each layer: when six or ten different components
are each wasting 10% of their time on such defensive programming, the
result isn't just a 10% overall slowdown.

You're running Linux right now, yes? That's one place using such
policies. And there are no "ceremonies" ... if someone gets a null
pointer exception, it's pretty easily -- and *VERY* quickly -- fixed.
And the developers are well trained to avoid creating such bugs in the
first place.

In contrast, I've seen that on systems which test for NULLs all over
the place, some fairly rude systematic bugs never get fixed, because
all those low-level checks ameliorate the fault modes. And in fact just
trying to fix them is rarely well received, since those "it's always
safe to do stupid crap" assumptions are *everywhere* in the code.

- Dave