People have hit on some of the issues, e.g.
(a) the algorithm and its implementation must be deterministic (no
calls to "random")
(b) the computation might run slow because of multitasking (time
slices scheduled to other processes)
 but there are also other, often quite critical, issues affecting
timing, like:

(c) cache loading / memory interference.  The second or subsequent
runs may be way faster because all the data and program are now in
cache. Is this fair?

(d) garbage collection (assuming your language uses one).  A run may
be slow because it happens to include GC processing.

People timing Lisp programs sometimes induce a GC just before a
timing run.
Of course, if that GC itself causes an additional allocation of
memory, the next GC is different, etc.

I assume there are ways of counting instructions executed (or
bytecodes interpreted) that may be useful in comparing algorithms
that are essentially running in the same environment.  This does not
mean that the relative speeds will be the same in other environments.
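In CPython (3.7 or later) one such count can be obtained from the
tracing hooks by requesting per-opcode events.  The sketch below is
only illustrative; the helper name "count_opcodes" is mine, not an
existing API, and it counts interpreted bytecodes in Python frames,
not machine instructions (for those, tools like valgrind's callgrind
or perf are the usual route):

import sys

def count_opcodes(func, *args, **kwargs):
    # Count bytecode instructions executed in Python frames while func runs.
    counter = [0]

    def tracer(frame, event, arg):
        frame.f_trace_opcodes = True   # request per-opcode events (CPython 3.7+)
        if event == "opcode":
            counter[0] += 1
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return result, counter[0]

_, n = count_opcodes(lambda: sum(k * k for k in range(100)))
print(n, "bytecodes executed")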

RJF
