Rustom Mody <rustompm...@gmail.com>:
> The field of numerical analysis came into existence only because this
> fact multiplied by the fact that computers do their (inaccurate ≠
> inexact) computations billions of times faster than we do makes
> significance a very significant problem!
A couple of related anecdotes involving integer errors.

1. I worked on a (video) product that had to execute a piece of code
every 7 µs or so. A key requirement was that the beat must not drift
far from the ideal over time. At first I thought the traditional
nanosecond resolution would be sufficient for the purpose, but then I
made a calculation:

    maximum rounding error = 0.5 ns / 7 µs ≈ 70 µs/s ≈ 6 s/day

That's why I decided to calculate the interval down to a femtosecond,
which kept the error well within our tolerance.

2. After running the LXDE GUI on my 32-bit Linux environment for some
time, the CPU utilization monitor showed the CPU mysteriously doing
work 100% of the time. The only way out was to reboot the machine.
After a few months and a few reboots, I investigated the situation
more carefully.

It turned out LXDE's CPU meter was reading jiffy counts from a textual
/proc file with scanf("%ld"). Jiffies start from 0 at boot time and
increment every millisecond, so the maximum signed 32-bit integer is
reached in less than 25 days. When scanf("%ld") overflows, it sets the
value to LONG_MAX. That effectively meant time stopped going forward
and all rate calculations shot through the roof.

This problem would not have occurred if the C standard consistently
specified modulo arithmetic for integer operations. The solution was
to use scanf("%lld") instead.

Marko
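The post doesn't show how the femtosecond bookkeeping in the first
anecdote was done, so the following is only a sketch of one way it
could work in C: split the ideal period into whole nanoseconds plus a
femtosecond remainder and carry the remainder explicitly, so the
sub-nanosecond rounding never accumulates into drift. The PERIOD_NS
and PERIOD_REM_FS values are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical period: the post only says "every 7 µs or so". */
    #define PERIOD_NS        7128ULL     /* whole-nanosecond part of the period       */
    #define PERIOD_REM_FS  456789ULL     /* sub-nanosecond remainder, in femtoseconds */
    #define FS_PER_NS     1000000ULL

    static uint64_t deadline_ns;         /* next ideal tick, in nanoseconds           */
    static uint64_t frac_fs;             /* accumulated sub-ns part, in femtoseconds  */

    /* Advance to the next ideal tick and return its deadline in nanoseconds. */
    static uint64_t next_deadline(void)
    {
        deadline_ns += PERIOD_NS;
        frac_fs     += PERIOD_REM_FS;
        if (frac_fs >= FS_PER_NS) {      /* carry a whole nanosecond */
            frac_fs     -= FS_PER_NS;
            deadline_ns += 1;
        }
        return deadline_ns;
    }

    int main(void)
    {
        /* The nanosecond deadlines stay within 1 ns of the ideal timeline
         * indefinitely, instead of drifting by ~6 s/day. */
        for (int i = 0; i < 5; i++)
            printf("tick %d at %llu ns\n", i, (unsigned long long)next_deadline());
        return 0;
    }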
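To make the second anecdote concrete, here is a small standalone
illustration (not the actual LXDE source) of the overflow and the
"%lld" fix. It assumes glibc's behavior of clamping an overflowing
"%ld" conversion to LONG_MAX, as described above.

    #include <stdio.h>

    int main(void)
    {
        const char *line = "2147500000";   /* a jiffy count just past 2^31 - 1 */

        long      narrow;
        long long wide;

        sscanf(line, "%ld",  &narrow);     /* clamps to LONG_MAX where long is 32 bits */
        sscanf(line, "%lld", &wide);       /* the fix: a 64-bit destination            */

        printf("%%ld  read %ld\n",  narrow);
        printf("%%lld read %lld\n", wide);
        return 0;
    }

On a 32-bit system the first conversion sticks at 2147483647 once the
uptime passes ~25 days, which is exactly the "time stands still"
symptom; the second keeps counting.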