That's an intuitive explanation, but the arithmetic of IEEE floating 
point seems designed so that 0.0 represents a "really small positive 
number" and -0.0 represents "exact zero", or at least "an even smaller 
really small negative number"; hence -0.0 + 0.0 = 0.0. I never understood 
this either.
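
For what it's worth, the signed-zero behaviour is easy to poke at in the 
REPL (these results follow from IEEE 754 double semantics; note that 
Julia's hash is documented to be consistent with isequal, which, unlike 
==, distinguishes the two zeros -- which is presumably why the hashes 
below differ):

julia> -0.0 + 0.0
0.0

julia> 0.0 == -0.0
true

julia> isequal(0.0, -0.0)
false

julia> (1/0.0, 1/-0.0)
(Inf, -Inf)

julia> signbit(-0.0)
true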

On Saturday, July 9, 2016 at 7:40:58 AM UTC-4, Tom Breloff wrote:
>
> Yes. They are different numbers. In a way, negative zero represents "a 
> really small negative number" that can't be represented exactly using 
> floating point. 
>
> On Saturday, July 9, 2016, Davide Lasagna <lasagn...@gmail.com> wrote:
>
>> Hi, 
>>
>> I have just been bitten by a function hashing a custom type containing a 
>> vector of floats. It turns out that hashing positive and negative floating 
>> point zeros returns different hashes. 
>>
>> Demo:
>> julia> hash(-0.0)
>> 0x3be7d0f7780de548
>>
>> julia> hash(0.0)
>> 0x77cfa1eef01bca90
>>
>> julia> hash(0)
>> 0x77cfa1eef01bca90
>>
>> Is this expected behaviour?
>>
>>
