Hi,

I'm wondering if anyone could make a tutorial about benchmarking Sage
methods. I am personally not much of a speed fanatic, but when I am
refactoring other people's code I try to make sure it doesn't slow down
as a result, and that requires me to run timings. I'm aware of the
basics of %time and %timeit, but here are some questions I haven't seen
answered directly:

- How can I usefully measure the runtime of a method that involves
subprocesses (GAP, symmetrica, whatever)? I don't think the time spent
in them appears in the output of %timeit. (See Sketch 1 below.)

- Why does %time often give a longer time (both CPU and wall) than
%timeit, even when there is no output to write?

- How do %time and %timeit deal with the cache? Am I seeing it right
that the code timed by %timeit has no effect on the state (it gets
executed in a kind of sandbox?), while code timed by %time is processed
like any other instruction? (Sketch 2 below is the situation I have in
mind.)

- What is the most accurate way to measure changes to startup time,
other than starting Sage many times and measuring wall times (Sketch 3
below)?

- How can I find out how much a change in some class has slowed down or
sped up methods in other classes? I could probably run doctests, but
I'm not sure how realistic they are (some of them might even be
nondeterministic, causing further noise), and I would probably have to
write scripts to process the output; Sketch 4 below is the kind of
ad-hoc alternative I have in mind. Are there good tools that predict
the most likely places where a code change will have performance
consequences?

- Memory is another issue: how can I tell whether my cache is in danger
of blowing up? (Sketch 5 below.)
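
Sketch 1 -- what I mean by timing across subprocesses. This is only a
guess on my part: I believe cputime(subprocesses=True) asks the spawned
interfaces (like GAP) for their CPU time, but I doubt it can see an
in-process library like symmetrica, where presumably only the wall time
helps. The GAP computation below is just a placeholder for the method
under test.

    t_cpu  = cputime(subprocesses=True)
    t_wall = walltime()
    G = gap(SymmetricGroup(6))          # placeholder: some GAP-heavy work
    cc = G.ConjugacyClasses()
    print("cpu: %s   wall: %s" % (cputime(t_cpu), walltime(t_wall)))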
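
Sketch 2 -- the caching situation I have in mind, with a deliberately
cached toy function (the workload is arbitrary):

    sage: @cached_function
    ....: def f(n):
    ....:     return sum(1 for p in Partitions(n))   # arbitrary work
    sage: %time f(30)     # first call does the real work and fills the cache
    sage: %time f(30)     # second call should only time the cache lookup
    sage: %timeit f(30)   # do any of these loops ever see the uncached cost?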
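
Sketch 3 -- the naive startup-time measurement I would like to improve
on. Assumptions: a "sage" executable on the PATH, and that
sage -c 'None' does nothing beyond starting up and quitting; everything
here is wall time and therefore noisy.

    import os, subprocess, time
    devnull = open(os.devnull, "w")
    samples = []
    for _ in range(10):
        t0 = time.time()
        subprocess.call(["sage", "-c", "None"], stdout=devnull, stderr=devnull)
        samples.append(time.time() - t0)
    devnull.close()
    print("min %.2fs   mean %.2fs" % (min(samples), sum(samples) / len(samples)))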
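
Sketch 4 -- the ad-hoc alternative to doctest timings mentioned above:
a small list of representative expressions, timed with timeit() before
and after the change. The expressions are placeholders, and I am
trusting that timeit(..., seconds=True) returns a plain float.

    exprs = [
        "Partitions(40).cardinality()",
        "SymmetricGroup(6).character_table()",
        "Compositions(12).list()",
    ]
    for e in exprs:
        print("%-40s %s" % (e, timeit(e, seconds=True)))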
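
Sketch 5 -- watching the cache from the outside. Assumption:
get_memory_usage() reports the size of the whole Sage process (in MB on
most platforms), so a growing cache shows up as process growth; f is
the cached function from Sketch 2.

    m0 = get_memory_usage()
    for n in range(40):
        f(n)                  # fill the cache with 40 entries
    print("roughly %s MB of growth" % (get_memory_usage() - m0))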

  Thanks a lot,
  Darij
