I read that a common way to demonstrate that floating-point numbers suffer
from approximation problems is to calculate this:

0.3 - 0.1 * 3

which should produce 0, but in Java, Python, and JavaScript, for example,
it produces -5.551115123125783e-17.

Surprisingly (or not ;) ), Go produces the correct 0 result! I wonder why
this is so. Is some higher precision being used compared to these other
languages? Or is there some extra correcting logic behind the scenes?

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
