On 12/13/11 2:34 PM, William Stein wrote:
Hi,
I was just looking at some timings for trac 12149, and it occurred to
me that our "timeit" command may be fine for programmers, but as
mathematicians surely we want something that gives a better measure of
the distribution of timings. Wouldn't it be nice to get both the mean
and standard deviation of the trials, rather than just the mean?
I'm guessing I'm not the first person to think of this, so maybe one
of the people reading this has already written some code for this?
I'm curious what people would think about the timeit command being
improved a bit as follows:
(1) By default it returns/displays the mean and standard deviation.
This way we can more easily compare timings between different runs or
different code. If the standard deviation were, say, 10 ms, and two
timings differ by 2 ms, we know the difference means little.
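Point (1) could be sketched with nothing but the standard library; the
function name timeit_stats is just illustrative, not a proposed Sage API:

```python
# Minimal sketch: run a statement many times and report the mean and
# standard deviation of the per-loop times, instead of a single number.
import timeit
import statistics

def timeit_stats(stmt, setup="pass", repeat=30, number=1000):
    """Return (mean, stdev) of per-loop wall times for `stmt`."""
    # timeit.repeat returns `repeat` totals, each over `number` loops
    raw = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    per_loop = [t / number for t in raw]
    return statistics.mean(per_loop), statistics.stdev(per_loop)

mean, sd = timeit_stats("sum(range(100))")
print(f"{mean * 1e6:.2f} us +/- {sd * 1e6:.2f} us per loop")
```

With the standard deviation in hand, two runs whose means differ by much
less than the spread can be dismissed at a glance.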
(2) Timeit would return an object that prints itself as in (1), but
that also has a plot_histogram method, returning a histogram plot of
all the timings from the runs. There would also be methods on this
object to return the timing as a float in seconds, etc. It could also
give access to both wall and CPU times, since both are very relevant
for Sage (which sometimes uses pexpect).
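A possible shape for the result object in (2), assuming the raw samples
are already collected; the class and method names (TimeitResult,
seconds, plot_histogram) are hypothetical, not existing Sage API, and
plot_histogram here only bins the data rather than drawing anything:

```python
# Hypothetical result object: prints mean +/- stdev, keeps raw samples,
# and exposes wall and CPU times separately.
import statistics

class TimeitResult:
    def __init__(self, wall_samples, cpu_samples):
        self.wall_samples = list(wall_samples)  # wall-clock times, seconds
        self.cpu_samples = list(cpu_samples)    # CPU times, seconds

    def seconds(self):
        """Mean wall time as a plain float, in seconds."""
        return statistics.mean(self.wall_samples)

    def cpu_seconds(self):
        """Mean CPU time as a plain float, in seconds."""
        return statistics.mean(self.cpu_samples)

    def __repr__(self):
        m = statistics.mean(self.wall_samples)
        s = statistics.stdev(self.wall_samples)
        n = len(self.wall_samples)
        return f"{m * 1000:.3f} ms +/- {s * 1000:.3f} ms (wall, {n} runs)"

    def plot_histogram(self, bins=10):
        """Bin the wall times; a real version would hand (counts, edges)
        to a plotting routine and return the plot."""
        lo, hi = min(self.wall_samples), max(self.wall_samples)
        width = (hi - lo) / bins or 1.0
        counts = [0] * bins
        for t in self.wall_samples:
            counts[min(int((t - lo) / width), bins - 1)] += 1
        edges = [lo + k * width for k in range(bins + 1)]
        return counts, edges

r = TimeitResult([0.010, 0.011, 0.012, 0.010],
                 [0.009, 0.010, 0.011, 0.009])
print(r)
print(r.seconds())
```

Keeping the raw samples on the object is what makes the histogram and
any later statistical comparison possible.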
Thoughts? Are there any experts out there in code benchmarking done
from a more mathematically sophisticated perspective than just one
number? It's entirely possible I'm making some stupid mistake in
suggesting the above.
How about a method to compare itself to another timeit object and tell
whether there is a statistically significant difference at a given
p-value? We could even embed this in "<" and ">" comparison logic.
Jason
--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org