On 8/10/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> This doesn't sound like a good idea to me, if for no other reason than
> the fact that either FASTCPU will need to change over time, which will
> require updating the time it is supposed to take for all the tests to
> run, or the tests will have to be run on a specific machine to really
> have any meaning.

It also had the feel of something that "seems like a good idea to me... but
actually sucks in retrospect."  OK, so let's table that proposal.

> Here is a high level description of another possible idea, ignoring most
> implementation details for a moment. To test the speed of my SAGE
> installation, I simply run a function benchmark(). This runs lots of
> test code, and probably takes at least an hour to run, and returns a
> Benchmark object, say, which contains lots of information about how long
> various tests took to run.

I think this is a great idea in theory, but given the very limited resources
of SAGE development (and that everything is voluntary), it is unlikely to
work in practice.  One problem, which I think Martin mentioned an hour ago,
is that we've tried to do something like this literally 4 or 5 times before,
and it's incredibly hard work to actually implement it -- and maintain it --
and the coverage of the resulting code is often poor.  I mean, if you think
about what you'll put in your benchmark class, it'll probably be totally
different from what I would put in (except we would both benchmark
number_of_partitions ;-).   Of course, if a few people had a full-time job
for a few months making lots of different benchmark code, they could produce
something really, really useful.
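
Just to make the shape of the idea concrete, here is a rough sketch of what
a Benchmark class and benchmark() function could look like.  The class name,
the choice of tests, and the output format are all placeholder assumptions
on my part, not a proposal for the actual interface (number_of_partitions
only exists inside SAGE, of course):

import time, platform

class Benchmark:
    # Hypothetical container for named timings -- a sketch, not SAGE code.
    def __init__(self, version, timings):
        self.version = version            # e.g. '2.7.3' (supplied by caller)
        self.processor = platform.processor()
        self.timings = timings            # dict mapping test name -> seconds

    def compare(self, other):
        # Print the tests common to both runs, side by side.
        print("%-25s %8s %8s %10s" % ("testname", "time1", "time2", "difference"))
        for name in sorted(set(self.timings) & set(other.timings)):
            t1, t2 = self.timings[name], other.timings[name]
            print("%-25s %8.3f %8.3f %+10.3f" % (name, t1, t2, t1 - t2))

def benchmark(version='unknown'):
    # Each entry is a zero-argument function to time; number_of_partitions
    # is the one test we would all agree on, the rest is up for debate.
    tests = {'number_of_partitions': lambda: number_of_partitions(10**6)}
    timings = {}
    for name, f in tests.items():
        start = time.time()
        f()
        timings[name] = time.time() - start
    return Benchmark(version, timings)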

> Now if I want to compare the speed to a different SAGE installation, I
> can load a Benchmark instance from disk, as in:
>
> sage: b1 = benchmark()
> sage: b2 = load('sage2.6.4-benchmark')
> sage: b1
> (Benchmark instance created with SAGE version:2.7.3; branch: sage-main;  
> processor: something-or-other)
> sage: b2
> (Benchmark instance created with SAGE version:2.6.4; branch: sage-main;  
> processor: something-or-other)
> sage: b1.compare(b2)
> --The following tests ran faster under version 2.7.3 ...
> [some information about tests and timings.]
> [...]
> --The following tests ran faster under version 2.6.4 ...
> [more info]
>
> or maybe instead
>
> sage: b1.compare(b2)
> testname        time1   time2   difference
> ...             ...     ...     ...
> ...             ...     ...     ...
> (etc)
>
> An automated test could then be written to pick up on things that
> significantly slow down between releases. For example, maybe when sage
> -test is run, it can be supplied with timing data from a previous run,
> and produce warnings if anything is slower than it used to be.

I think doing this, but with the doctests (as I think David Harvey suggested),
might work better, since they are already written and are regularly maintained.
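
As a very rough sketch of the comparison step -- assuming the doctest runner
could dump a dict mapping doctest name to seconds (which it can't do today),
and picking an arbitrary slowdown threshold:

SLOWDOWN_FACTOR = 1.5   # assumed threshold: complain if a test got 50% slower

def warn_on_regressions(old_timings, new_timings):
    # Both arguments are dicts mapping doctest name -> seconds.
    for name in sorted(set(old_timings) & set(new_timings)):
        old, new = old_timings[name], new_timings[name]
        if new > SLOWDOWN_FACTOR * old:
            print("WARNING: %s slowed down: %.2fs -> %.2fs" % (name, old, new))

Comparing against a previous release would then boil down to something like
warn_on_regressions(load('timings-2.6.4'), load('timings-2.7.3')), where the
filenames are obviously made up here.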

William
