On Sunday 13 December 2009, David Brownell wrote:
> I admit that when I first came across it, the "don't check for NULL"
> philosophy seemed pretty wrong.  But, quite a few years later on, now
> I see that it's quite effective.  The "it's wrong" argument is on micro
> scales.  The "it's right" one is macro scale.  For every small micro win,
> there are many more large macro wins.
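
To make that micro/macro trade concrete, here is a minimal C
sketch; the struct and function names are invented for
illustration, and are not from OpenOCD:

#include <stdio.h>

/* Hypothetical handle type, for illustration only. */
struct tap {
    int enabled;
};

/* The micro "win": the defensive check hides the caller's bug,
 * which then surfaces far away as "nothing happened". */
static int tap_enable_defensive(struct tap *t)
{
    if (t == NULL)
        return -1;    /* quietly swallowed by most callers */
    t->enabled = 1;
    return 0;
}

/* The macro win: a NULL argument is a caller bug, so just
 * dereference.  The immediate SIGSEGV points straight at the
 * bad call instead of letting the bug hide. */
static int tap_enable(struct tap *t)
{
    t->enabled = 1;    /* faults right here on a NULL caller bug */
    return 0;
}

int main(void)
{
    struct tap t = { 0 };
    tap_enable_defensive(&t);
    tap_enable(&t);
    printf("enabled: %d\n", t.enabled);
    return 0;
}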

And two similar situations with somewhat counter-intuitive
results, interesting (IMO) but not directly relevant here:

 - A professor of mine once collected scenarios where you could
   come up with highly accurate algorithms ... but if you were
   willing to accept a small, bounded chance of error and *STILL*
   fail less often than the hardware used to run the algorithm (!!),
   you could get significantly better performance:  O(N) vs O(N^4),
   etc.  (One classic algorithm in that family is sketched right
   after this list.)

 - I recently scanned a document about FPGA usage in space
   systems.  It turns out they needed to make VHDL/Verilog
   compilers avoid certain optimizations ... for example, if you
   code redundant or voting schemes to cope with alpha particles
   inducing errors, you really don't want the compiler to optimize
   out that redundancy and increase your failure rates!  (The
   second sketch below shows the same hazard in C.)
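
To put a concrete face on the first item, here is a sketch of one
classic algorithm in that family (my example, not necessarily one
the professor had in mind): Freivalds' randomized check that a
matrix product A*B was computed correctly.  Recomputing A*B to
compare against the claimed result C costs O(N^3); each randomized
round costs only O(N^2), and k independent rounds drive the chance
of accepting a wrong C below 2^-k, which for a few dozen rounds is
far below hardware failure rates.  The exponents differ from the
O(N) vs O(N^4) above, but the shape of the trade is the same:

#include <stdio.h>
#include <stdlib.h>

#define N 3

/* One round of Freivalds' check: pick a random 0/1 vector r and
 * test A*(B*r) == C*r in O(N^2) time.  A wrong C survives a
 * single round with probability at most 1/2. */
static int freivalds_round(int A[N][N], int B[N][N], int C[N][N])
{
    int r[N], Br[N], ABr[N], Cr[N];

    for (int i = 0; i < N; i++)
        r[i] = rand() & 1;

    for (int i = 0; i < N; i++) {
        Br[i] = Cr[i] = 0;
        for (int j = 0; j < N; j++) {
            Br[i] += B[i][j] * r[j];
            Cr[i] += C[i][j] * r[j];
        }
    }
    for (int i = 0; i < N; i++) {
        ABr[i] = 0;
        for (int j = 0; j < N; j++)
            ABr[i] += A[i][j] * Br[j];
    }
    for (int i = 0; i < N; i++)
        if (ABr[i] != Cr[i])
            return 0;    /* definitely A*B != C */
    return 1;            /* probably A*B == C */
}

int main(void)
{
    int A[N][N]    = {{1,2,0},{0,1,0},{0,0,1}};
    int B[N][N]    = {{1,0,0},{0,1,0},{0,0,1}};  /* identity */
    int good[N][N] = {{1,2,0},{0,1,0},{0,0,1}};  /* == A*B */
    int bad[N][N]  = {{1,2,0},{0,1,1},{0,0,1}};  /* != A*B */
    int ok;

    /* 40 rounds: error probability <= 2^-40. */
    ok = 1;
    for (int k = 0; k < 40; k++)
        ok &= freivalds_round(A, B, good);
    printf("correct product accepted: %d\n", ok);

    ok = 1;
    for (int k = 0; k < 40; k++)
        ok &= freivalds_round(A, B, bad);
    printf("wrong product accepted:   %d\n", ok);
    return 0;
}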
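
And for the second item, the same hazard exists in plain C: without
the "volatile" qualifier, a compiler may legally fold three
"redundant" reads of the same location into one, deleting exactly
the redundancy a voting scheme depends on.  A minimal sketch (the
2-of-3 voting idiom is standard; the register is faked here so it
runs on a desktop):

#include <stdint.h>
#include <stdio.h>

/* Bitwise 2-of-3 majority vote: any single corrupted sample is
 * outvoted by the other two. */
static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

/* Sample a register three times and vote.  The volatile qualifier
 * is what keeps the compiler from collapsing the three reads into
 * one, the software analogue of the synthesis problem above. */
static uint32_t read_voted(volatile uint32_t *reg)
{
    uint32_t a = *reg;
    uint32_t b = *reg;
    uint32_t c = *reg;
    return vote3(a, b, c);
}

int main(void)
{
    uint32_t fake_reg = 0xA5;   /* stands in for a memory-mapped register */

    printf("voted read:   %#x\n", read_voted(&fake_reg));
    /* A single upset in one sample is outvoted: */
    printf("one bit flip: %#x\n", vote3(0xA5, 0xA5 ^ 0x10, 0xA5));
    return 0;
}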

One common thread there is needing a handle on just how common an
error actually is before choosing a strategy for dealing with it.

- Dave