On Tuesday, September 13, 2016 at 10:07:49 AM UTC-4, Fengyang Wang wrote:
>
> This is an intuitive explanation, but the mathematics of IEEE floating
> points seem to be designed so that 0.0 represents a "really small positive
> number" and -0.0 represents "exact zero" or at least "an even smaller
> really small negative number"; hence -0.0 + 0.0 = 0.0. I never understood
> this either.
>
For one thing, the signed zero preserves the identity 1/(1/x) == x even when x is +Inf or -Inf, since 1/-Inf is -0.0 and 1/-0.0 is -Inf. More generally, when a computation underflows (the result becomes too small to be represented), you lose the value but you don't lose the sign. The sign of zero is also useful when evaluating complex-valued functions that have a branch cut along the real axis, because it tells you which side of the cut you are on (see the classic paper "Much Ado About Nothing's Sign Bit" <https://people.freebsd.org/~das/kahan86branch.pdf> by Kahan).
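
To make that concrete, here is a quick sketch of what you should see in Julia (assuming standard IEEE-754 double-precision semantics; the sqrt lines illustrate the branch-cut point):

    1/(1/-Inf)            # -Inf: 1/-Inf underflows to -0.0, and 1/-0.0 recovers -Inf
    1/(1/Inf)             # +Inf: same identity with an ordinary +0.0
    sqrt(-1.0 + 0.0im)    # 0.0 + 1.0im: approaching the real axis from above the cut
    sqrt(-1.0 - 0.0im)    # 0.0 - 1.0im: approaching from below, selected by the sign of the zero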
