On 11/11/2013 01:15, Ned Batchelder wrote:
On Friday, November 8, 2013 12:48:04 PM UTC-5, Pascal Bit wrote:
Here's the code:
from random import random
from time import clock

s = clock()
for i in (1, 2, 3, 6, 8):
    M = 0
    N = 10**i
    for n in xrange(N):
        r = random()
        if 0.5 < r < 0.6:   # count samples falling in (0.5, 0.6)
            M += 1
    k = (N, float(M)/N)     # fraction of hits for this sample size
print(clock() - s)
Running on Windows 7, Python 2.7 32-bit, it takes around 30 seconds on average.
Running on Xubuntu 32-bit, in VMware on that same Windows 7 machine: 20 seconds!
The code runs faster in the VM than on the host machine itself...
Python is in this case 1.5 times faster...
I don't understand.
What causes this?
The docs for time.clock() make it clear that its meaning on Windows and Unix is
different:
"On Unix, return the current processor time as a floating point number expressed in
seconds."
"On Windows, this function returns wall-clock seconds elapsed since the first call
to this function..."
Try the experiment again with time.time() instead.
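For instance, a minimal rewrite of the timing harness with time.time(), which
measures wall-clock seconds on both platforms, might look like this (the same
Python 2 loop, only the clock swapped):

from random import random
from time import time

s = time()
for i in (1, 2, 3, 6, 8):
    M = 0
    N = 10**i
    for n in xrange(N):
        r = random()
        if 0.5 < r < 0.6:
            M += 1
    k = (N, float(M)/N)
print(time() - s)   # wall-clock elapsed time on Windows and Unix alike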
--Ned.
See http://www.python.org/dev/peps/pep-0418/ for some related reading about
Python time functions.
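As a rough illustration of the wall-clock versus CPU-time distinction PEP 418
is concerned with, on Python 3.3+ (where the PEP's clocks landed) the same
stretch of code can be measured with both time.perf_counter() and
time.process_time(); the workload below is just an arbitrary example:

import time

start_wall = time.perf_counter()   # high-resolution wall-clock timer
start_cpu = time.process_time()    # CPU time consumed by this process

total = sum(i * i for i in range(10 ** 6))  # CPU-bound work
time.sleep(1.0)                              # wall-clock passes, almost no CPU used

print("wall:", time.perf_counter() - start_wall)  # ~1 second plus the loop
print("cpu: ", time.process_time() - start_cpu)   # roughly just the loop's CPU time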
--
Python is the second best programming language in the world.
But the best has yet to be invented. -- Christian Tismer
Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list