On Tuesday 12 July 2016 08:17, Ethan Furman wrote:

> So, so far there is no explanation of why leading zeroes make a number
> more precise.
Obviously it doesn't, just as trailing zeroes don't make a number more precise. Precision in the sense used by scientists is a property of how the measurement was done, not of the number itself. We can arbitrarily add zeroes to the end of 0.5 until the cows come home, and it's still just a half, no more, no less.

But scientists have a useful convention that you can indicate the measurement precision by showing trailing zeroes:

0.5 is short-hand for 0.5 ± 0.05;
0.50 is short-hand for 0.5 ± 0.005;
0.5000 is short-hand for 0.5 ± 0.00005;

etc. That's just a convention, although it's a useful one. But it's not the only useful convention:

01 is a byte with value 1;
0001 is a double-byte quantity (short int?) with value 1;
00000001 is a 32-bit quantity with value 1;

etc. We say that these examples differ in their "number of digits" instead of "number of decimal places", so the difference doesn't quite map to the scientist's meaning of the word "precision". But why should we let that stop us from using the "precision" field of a format string to represent the number of digits?

If not, then what are the alternatives? Using str.format, how would you get the same output as this?

py> "%8.4d" % 25
'    0025'

-- 
Steve
-- 
https://mail.python.org/mailman/listinfo/python-list
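[For comparison, one possible str.format spelling, offered only as a sketch: if I recall correctly, the 'd' type rejects a precision outright, so the nearest equivalent seems to need two nested formatting operations, which rather makes the point that the precision field is doing useful work in the %-style version.]

py> "{:8.4d}".format(25)
Traceback (most recent call last):
  ...
ValueError: Precision not allowed in integer format specifier
py> "{:>8}".format("{:04d}".format(25))
'    0025'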