Christoph Zwerschke wrote:
> nikie wrote:
> > Let's look at two different examples. Consider the following C# code:
> >
> >     static decimal test() {
> >         decimal x = 10001;
> >         x /= 100;
> >         x -= 100;
> >         return x;
> >     }
> >
> > It returns "0.01", as you would expect.
>
> Yes, I would expect that because I have defined x as decimal, not int.
>
> > Now, consider the Python equivalent:
> >
> >     def test():
> >         x = 10001
> >         x /= 100
> >         x -= 100
> >         return x
>
> No, that's not the Python equivalent. The equivalent of the line
>
>     decimal x = 10001
>
> in Python would be
>
>     x = 10001.0
>
> or even:
>
>     from decimal import Decimal
>     x = Decimal(10001)

Hm, then I probably didn't get your original point: I thought your
argument was that a dynamically typed language was "safer" because it
would choose the "right" type (in your example, an arbitrary-precision
integer) automatically. As you can see from the above sample, it
sometimes picks the "wrong" type, too. Now you tell me that this doesn't
count, because I should have told Python what type to use. But shouldn't
that apply to the Java binary-search example, too? I mean, you could
have told Java to use a 64-bit or arbitrary-length integer type instead
of a 32-bit integer (which would actually be equivalent to the Python
code), so it would do the same thing as the Python binary search
implementation.

> ...
> By the way, the equivalent Python code to your C# program gives the
> very same result on my machine:
>
> >>> x = 10001.0; x /= 100; x -= 100; print x
> 0.01

Try entering "x" in the interpreter, and read up on the difference
between str() and repr().

> > Even if you used "from __future__ import division", it would actually
> > return "0.010000000000005116", which, depending on the context, may
> > still be an intolerable error.
>
> With from __future__ import division, I also get 0.01 printed. Anyway,
> if there are small discrepancies, these have nothing to do with Python
> but rather with the underlying floating-point hardware and C library,
> the way you print the value, and the fact that 0.01 cannot, in
> principle, be stored exactly as a float (nor as a C# decimal), only as
> a Python Decimal.

This is OT, but what makes you think a C# decimal can't store 0.01?

> > I can even think of an example where C's (and Java's) bounded ints
> > are the right choice, while Python's arbitrary-precision math isn't:
> > assume you get two 32-bit integers containing two time values (or
> > values from an incremental encoder, or counter values). How do you
> > find out how many timer ticks (or increments, or counts) have
> > occurred between those two values, and which one was earlier? In C,
> > you can just write:
> >
> >     long Distance(long t1, long t0) { return t1-t0; }
> >
> > And all the wraparound cases will be handled correctly (assuming
> > there have been fewer than 2^31 timer ticks between the two values).
> > "Distance" will return a positive value if t1 was measured after t0,
> > and a negative value otherwise, even if there was a wraparound in
> > between. Try the same in Python and tell me which version is simpler!
>
> First of all, the whole problem only arises because you are using a
> statically typed counter ;-) And it is only easy in C when your counter
> has 32 bits. But what about a 24-bit counter?

Easy: multiply it by 256 and it's a 32-bit counter ;-) Fortunately,
24-bit counters are quite rare. 16-bit and 32-bit counters, on the other
hand, are quite common, especially when you're working close to the
hardware (where C is at home). All I wanted to point out is that bounded
integers do have their advantages, because some people in this thread
apparently have never stumbled over them.
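
For comparison, here is a minimal Python sketch of the same wraparound
trick, assuming the counter width is known up front (the name "distance"
and the "bits" parameter are mine, not from the C version):

    def distance(t1, t0, bits=32):
        # Signed distance between two wrapping counter values, assuming
        # fewer than 2**(bits-1) ticks elapsed between the two readings.
        mask = (1 << bits) - 1
        d = (t1 - t0) & mask        # unsigned difference modulo 2**bits
        if d >= 1 << (bits - 1):    # reinterpret as signed two's complement
            d -= 1 << bits
        return d

distance(2, 0xFFFFFFFF) gives 3, correctly crossing the 32-bit
wraparound, and the 24-bit case is just distance(t1, t0, bits=24)
instead of multiplying by 256.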
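
And to make the str()/repr() point above concrete, a quick interactive
sketch (assuming CPython 2.x with IEEE-754 doubles; the exact trailing
digits may vary by platform, and the decimal module needs Python 2.4+):

    >>> x = 10001.0; x /= 100; x -= 100
    >>> print x      # print uses str(), which rounds to 12 significant digits
    0.01
    >>> x            # the prompt uses repr(), which exposes the float error
    0.010000000000005116
    >>> from decimal import Decimal
    >>> print Decimal(10001) / Decimal(100) - Decimal(100)   # exact
    0.01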