On 02/02/16 23:00, John Snow wrote:
>
>
> On 02/02/2016 04:47 PM, Laszlo Ersek wrote:
>> On 02/02/16 21:03, John Snow wrote:
>>> Recently, qemu iotest 013 has started to fail for me:
>>>
>>> Fedora release 22 (Twenty Two)
>>>
>>> 3.5.0-9.fc22
>>> clang version 3.5.0 (tags/RELEASE_350/final)
>>> Target: x86_64-redhat-linux-gnu
>>> Thread model: posix
>>>
>>>
>>> +4 KiB/home/jsnow/src/qemu/qemu-io-cmds.c:230:18: runtime error:
>>> division by zero
>>>
>>>
>>> The problem is that in the print report for read_f, t2 and t1 can
>>> actually be the same exact timestamp, and tdiv will try to divide by 0.0.
>>>
>>> Normally this is not a problem as this is defined to be INFINITY in C99
>>> Annex F.
>>>
>>> Clang, however, has once again decided to take the pedantic road and
>>> state that Annex F is optional, and therefore division by 0.0 is
>>> actually undefined when using -fsanitize=undefined.
>>>
>>> Groan.
>>>
>>> Two workarounds:
>>>
>>> (1) Modify the tdiv() function to just return INFINITY manually if the
>>> timestamp provided is 0
>>>
>>> (2) Modify tester scripts to also use -fno-sanitize=float-divide-by-zero
>>>
>>>
>>> I prepared a patch to do the first workaround [1] so I could test
>>> patches with clang in peace as I need to test my pull requests under
>>> clang to make sure I don't break OSX, but it seems so absurd to have to
>>> do this, so I have copied our resident language lawyers (and language
>>> pragmatists) so that they can have a say.
>>>
>>> Relevant upstream BZ: https://llvm.org/bugs/show_bug.cgi?id=17000
>>>
>>> --js
>>>
>>> [1]
>>> https://github.com/jnsnow/qemu/commit/af93977dd2bc7ea936b8064c41c5a0f9d25ae2d1
>>>
>>
>> Apologies in advance for the knee-jerk reaction:
>>
>> I don't use double, ever. The last time I did anything resembling
>> numerical analysis was in college (now gracefully veiled by time).
>>
>> If I need decimals after the point, I opt for fixed point math, done
>> with integers. Surely uint64_t suffices for the purposes of
>> "qemu-io-cmds.c"; it just forces the programmer to think about those
>> issues explicitly that "double" promises, but fails, to solve.
>>
>> I doubt microsecond resolution is necessary here, but even if it is, I'd
>> assume that approx. 584,942 years sufficed as an upper limit on time
>> differences.
>>
>
> Microsecond precision appears to not be good /enough/, where two
> subsequent reads return the same microsecond value.
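
(For reference, workaround (1) amounts to something like the sketch
below. The tdiv() signature is only approximated from the description
above; the real function lives in qemu-io-cmds.c and may differ.)

    /* Workaround (1) sketch: make the zero-divisor case explicit instead
     * of relying on C99 Annex F semantics, so that
     * -fsanitize=float-divide-by-zero has nothing to complain about. */
    #include <math.h>       /* INFINITY */
    #include <stdio.h>
    #include <sys/time.h>   /* struct timeval */

    static double tdiv(double dividend, struct timeval tv)
    {
        double divisor = (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;

        if (divisor == 0.0) {
            /* Same value Annex F would give us for x / 0.0 with x > 0. */
            return INFINITY;
        }
        return dividend / divisor;
    }

    int main(void)
    {
        struct timeval zero   = { 0, 0 };
        struct timeval one_ms = { 0, 1000 };

        printf("%f bytes/sec\n", tdiv(4096.0, zero));    /* inf */
        printf("%f bytes/sec\n", tdiv(4096.0, one_ms));  /* 4096000.000000 */
        return 0;
    }

(Workaround (2) would instead leave tdiv() alone and add
-fno-sanitize=float-divide-by-zero next to -fsanitize=undefined in
whatever flags the tester scripts hand to clang.)
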
Right, but "same timestamp with microsecond precision" is a problem independent of how you represent the difference of zero between them. With double, you are tempted to go ahead and divide with the difference. Maybe the quotient (in C) is infinity, maybe undefined behavior. Either way, we need to read the floating point stuff in the spec (which we otherwise never do), and argue with the compiler writers (which we sometimes cannot avoid, but we don't like it). With uint64_t, you cannot avoid thinking about "division by zero", so your code will be explicit about it. > >> To frobnicate the saying about regular expressions, "when people want to >> print decimals, they reach for floating point -- now they have two >> problems". >> > > Now I've got a third problem: no real input on if clang is correct to > whine or not. Indeed. That's my fault, but in this case I'm not ashamed of it. I can choose between (a) reading the standardese on floating point *plus* internalizing all of: What Every Computer Scientist Should Know About Floating-Point Arithmetic http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html and (b) not using float. I'm an honest & practical person :), I openly go for (b). That doesn't mean someone who mastered (a) shouldn't respond! Thanks Laszlo > >> Thanks and sorry :( >> Laszlo >>