On Aug 10, 2007, at 2:36 PM, David Harvey wrote:
> On Fri, 10 Aug 2007, William Stein wrote:
>
>>> An automated test could then be written to pick up on things that
>>> significantly slow down between releases. For example, maybe when
>>> sage
>>> -test is run, it can be supplied with timing data from a previous run,
>>> and produce warnings if anything
On Fri, 10 Aug 2007, William Stein wrote:
>> An automated test could then be written to pick up on things that
>> significantly slow down between releases. For example, maybe when sage
>> -test is run, it can be supplied with timing data from a previous run,
>> and produce warnings if anything
2007/8/10, Jonathan Bober <[EMAIL PROTECTED]>:
> Here is a high level description of another possible idea, ignoring most
> implementation details for a moment. To test the speed of my SAGE
> installation, I simply run a function benchmark(). This runs lots of
> test code, and probably takes at le
On 8/10/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> This doesn't sound like a good idea to me, if for no other reason than
> the fact that either FASTCPU will need to change over time, which will
> require updating the time it is supposed to take for all the tests to
> run, or the tests will ha
On Fri, 2007-08-10 at 12:52 -0700, William Stein wrote:
> Here's a crazy idea. We decide on an extension to the doctest syntax
> that states that a doctest fails if it takes over a certain amount of time
> to run on a machine with at least a 2GHz processor. For example,
>
> sage: 2 + 2 # take
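One way such an annotation might be enforced is sketched below. The `# max 0.5s` comment syntax and the helper function are hypothetical illustrations of the proposal, not real doctest syntax, and the "at least 2GHz" calibration is omitted:

```python
# Sketch of enforcing a per-doctest time budget declared in a trailing
# comment. BUDGET_RE, the "# max Ns" syntax, and run_timed_statement()
# are assumptions made for this example only.

import re
import time

BUDGET_RE = re.compile(r"#\s*max\s+([0-9.]+)s")

def run_timed_statement(source):
    """Execute one doctest source line; report whether it met its budget.

    Returns (elapsed_seconds, budget_seconds_or_None, passed).
    """
    m = BUDGET_RE.search(source)
    budget = float(m.group(1)) if m else None
    code = source.split("#")[0]      # strip the annotation comment
    start = time.time()
    exec(code, {})
    elapsed = time.time() - start
    passed = budget is None or elapsed <= budget
    return elapsed, budget, passed

elapsed, budget, ok = run_timed_statement("2 + 2  # max 0.5s")
```

A statement with no annotation keeps the current behavior: it always passes on timing grounds.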
On Aug 10, 4:28 pm, David Harvey <[EMAIL PROTECTED]> wrote:
> I'm unclear what is meant by "relative times" mentioned above. Does
> that mean (a) profile of function A relative to function B, or (b)
> profile of function A in version X vs in version Y?
A to B at a given version on a given machine.
On 8/10/07, David Harvey <[EMAIL PROTECTED]> wrote:
> > On 8/10/07, Jack Schmidt <[EMAIL PROTECTED]> wrote:
> >>
> >> You might look at how GAP does this. Its tst directory contains
> >> expected timings. One only compares relative times. GAP tests do not
> >> fail on a Pentium 75MHz, since GAP users employ a wide range of
> >> hardware. Surely other software has similar features.
On 8/10/07, Jack Schmidt <[EMAIL PROTECTED]> wrote:
>
> Something like that. Each test file contains a record of how long the
> entire file takes in units proportional to how long a specific test
> file takes. I believe the constant is currently chosen so that 1 unit
> corresponds to 1msec on a 1GHz Pentium-class processor.
On Aug 10, 2007, at 4:10 PM, William Stein wrote:
>
> On 8/10/07, Jack Schmidt <[EMAIL PROTECTED]> wrote:
>>
>> You might look at how GAP does this. Its tst directory contains
>> expected timings. One only compares relative times. GAP tests do not
>> fail on a Pentium 75MHz, since GAP users employ a wide range of
>> hardware. Surely other software has similar features.
Something like that. Each test file contains a record of how long the
entire file takes in units proportional to how long a specific test
file takes. I believe the constant is currently chosen so that 1 unit
corresponds to 1msec on a 1GHz Pentium-class processor.
The standard test suite runs th
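The relative-units scheme described here could look roughly like the following sketch. Everything is illustrative, not GAP's actual code; the 1000-unit reference constant is an arbitrary stand-in for the calibration described above:

```python
# Rough sketch of relative timing: run a fixed reference workload once
# to calibrate the current machine, then express any test's runtime in
# machine-relative units. Per the description, the real constant would
# be chosen so 1 unit is about 1 msec on a 1GHz Pentium-class machine.

import time

REFERENCE_UNITS = 1000.0  # units assigned to the reference workload

def reference_workload():
    """Fixed, deterministic work used to calibrate the machine."""
    total = 0
    for i in range(200000):
        total += i * i
    return total

def units_per_second():
    """How many abstract units this machine gets through per second."""
    start = time.time()
    reference_workload()
    elapsed = time.time() - start
    return REFERENCE_UNITS / max(elapsed, 1e-9)

def measure_in_units(func):
    """Time func() and report its cost in machine-relative units."""
    scale = units_per_second()
    start = time.time()
    func()
    elapsed = time.time() - start
    return elapsed * scale
```

A test file would then record its expected cost in units once, and a run on any machine compares the measured units against that recorded figure, so the same expected values work on a fast workstation and a slow laptop alike.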
On 8/10/07, Jack Schmidt <[EMAIL PROTECTED]> wrote:
>
> You might look at how GAP does this. Its tst directory contains
> expected timings. One only compares relative times. GAP tests do not
> fail on a Pentium 75MHz, since GAP users employ a wide range of
> hardware. Surely other software has similar features.
You might look at how GAP does this. Its tst directory contains
expected timings. One only compares relative times. GAP tests do not
fail on a Pentium 75MHz, since GAP users employ a wide range of
hardware. Surely other software has similar features.
On Aug 10, 3:07 pm, Martin Albrecht <[EMAIL PROTECTED]> wrote:
On 8/10/07, Martin Albrecht <[EMAIL PROTECTED]> wrote:
> > Yeah sure, I didn't mean to hijack the thread. The segfaults are
> > obviously more important than the slownesses. I think your proposal is
> > more likely to succeed if we don't get sidetracked :-)
>
> So let's open up a new thread then.