https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #19 from rguenther at suse dot de <rguenther at suse dot de> ---
On Fri, 21 Feb 2020, vincent-gcc at vinc17 dot net wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806
> 
> --- Comment #15 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
> Note that there are very few ways to distinguish the sign of zero.
> The main one is division by zero. The others are:
> 
> * Conversion to a character string, e.g. via printf(). But in this
> case, if -fno-signed-zeros is used, whether "0" or "-0" is output
> (even in a way that seems to be inconsistent) doesn't matter since
> the user does not care about the sign of 0, i.e. "0" and "-0" are
> regarded as equivalent (IIRC, this would be a bit like NaN, which
> has a sign bit in IEEE 754, but the output does not need to match
> its sign bit).
> 
> * Memory analysis. Again, the sign does not matter, but for instance,
> reading an object twice as a byte sequence while the object has not
> been changed by the code must give the same result. I doubt that
> this is affected by optimization.
> 
> * copysign(). The C standard is clear: "On implementations that
> represent a signed zero but do not treat negative zero consistently
> in arithmetic operations, the copysign functions regard the sign of
> zero as positive." Thus with -fno-signed-zeros, the sign of zero
> must be regarded as positive with this function. If GCC chooses to
> deviate from the standard here, this needs to be documented.
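
For concreteness, a minimal C sketch of these three observations,
assuming IEEE 754 doubles and compiling without -fno-signed-zeros
(link with -lm for copysign):

  #include <math.h>
  #include <stdio.h>

  int main (void)
  {
    double pz = 0.0, nz = -0.0;

    /* Division by zero: the sign of the zero selects the infinity.  */
    printf ("1/+0 = %g, 1/-0 = %g\n", 1.0 / pz, 1.0 / nz); /* inf, -inf */

    /* Conversion to a character string: %g prints the sign bit.  */
    printf ("+0 -> %g, -0 -> %g\n", pz, nz);                /* 0, -0 */

    /* copysign copies the sign bit of its second argument.  */
    printf ("copysign(1, -0) = %g\n", copysign (1.0, nz));  /* -1 */

    return 0;
  }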

I'm sure GCC doesn't adhere to this (it also relies on the system's
math library, which doesn't "see" whether -fno-signed-zeros is in
effect). We'd need to special-case -0.0 at runtime for copysign(x, y),
which would be quite wasteful since -fno-signed-zeros is used for
performance...
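
What the standard's wording would require is something like the
following sketch (the helper name and the open-coding are
hypothetical, not what GCC emits):

  #include <math.h>

  /* Sketch: a copysign that regards the sign of a zero y as positive,
     as the standard asks of implementations that do not treat -0.0
     consistently in arithmetic.  The comparison and branch on every
     call is the runtime cost mentioned above.  */
  static double
  copysign_pos_zero (double x, double y)
  {
    if (y == 0.0)            /* true for both +0.0 and -0.0 */
      return fabs (x);       /* regard the sign of zero as positive */
    return copysign (x, y);  /* nonzero y: the usual copysign */
  }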
