On Monday, February 29, 2016 at 3:16:37 PM UTC+5:30, Erik Bray wrote:
>
> Hi, 
>
> I apologize if I don't fully follow the context to this thread, but I 
> saw the mention of performance testing, in particular in the context 
> of regression tests, and I thought I would bring up a project Mike 
> Droettboom developed called airspeed velocity, which is a nice simple 
> package for tracking performance of tests over time: 
>
> http://asv.readthedocs.org/en/latest/ 
>
> One writes very simple tests (often just one or two liners) and asv 
> records the time taken to run that test and/or the memory usage (and 
> in principle other statistics could be plugged in).  Obviously the 
> benchmarks are very machine-dependent too, so each benchmark tries to 
> include metadata about the platform.  So the best way to use it is to 
> have some dedicated machines, whose hardware doesn't change that 
> often, run the benchmark suite continuously.  It outputs the 
> benchmarks to a JSON file that can be kept under version control and 
> used to update the generated benchmark history.  You can see a sample 
> of Astropy's benchmarks here: 
>
> http://droettboom.com/astropy-benchmark/ 
>
> It has already proven itself several times to be very useful at 
> catching noticeable performance regressions.  Might be worth a look 
> (disclaimer: I worked on this project a bit too, but I have no financial 
> incentive for promoting it, etc. :)) 
>
> Erik 
>
Thanks for your help. Actually, I had not thought about that side. The 
performance that asv tracks depends heavily on the hardware and other 
external factors, which is why I initially thought it was not related to 
regression testing. Thanks a lot. As you mentioned, there are some open 
questions about the hardware part (how to fit it into this project is 
something I will have to study first). :-)
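
For reference, here is a minimal sketch of the kind of benchmark file asv 
picks up (the file name, class names, and workloads below are my own 
illustration, not from Astropy's suite; asv times functions whose names 
start with time_ and measures the size of the object returned by mem_ 
functions):

    # benchmarks/benchmarks.py -- hypothetical minimal asv benchmark suite
    class TimeSuite:
        def setup(self):
            # runs before each benchmark in this class
            self.data = list(range(100000))

        def time_sort(self):
            # asv records the wall-clock time of this call
            sorted(self.data)

    class MemSuite:
        def mem_result(self):
            # asv records the size of the returned object
            return [0] * 100000

The suite is then run with "asv run", and "asv publish" generates an HTML 
report like the Astropy one linked above.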


On Monday, February 29, 2016 at 6:38:29 PM UTC+5:30, Jeroen Demeyer wrote:
>
> For SageMath, the "doing something useful" part should be to make this 
> part of the continuous integration tests such that patches which 
> significantly slow down tests are flagged. Ideally, with very few 
> false positives (I think that the test for startup time in the patchbot 
> has too many false positives). 

 
In that case I did not quite get the "significantly slow down" part. I 
think you mean comparing the time that a function/unit took before the 
patch with the time it takes after the patch? In that case we would need a 
database of baseline timings to compare the new durations against, or 
something similar to it. A rough sketch of that idea is below. :-)
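
To make that concrete, here is a rough sketch of the baseline-comparison 
idea (the file name, threshold, and helper functions are all my own 
assumptions, not an existing Sage or asv interface). Flagging only 
slowdowns above a fixed threshold, measured as a best-of-N time, should 
also help with the false positives Jeroen mentioned; note that asv itself 
has an "asv continuous" command that compares two commits side by side, 
which may already cover this use case:

    # regression_check.py -- hypothetical sketch, not Sage or asv code
    import json
    import time

    THRESHOLD = 1.20  # only flag slowdowns above 20% to limit false positives

    def best_time(func, repeats=5):
        """Return the best-of-N wall-clock time of func, in seconds."""
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            func()
            times.append(time.perf_counter() - start)
        return min(times)

    def check(name, func, baseline_file="baseline.json"):
        """Compare func's timing against the stored baseline, then update it."""
        with open(baseline_file) as f:
            baseline = json.load(f)
        new = best_time(func)
        old = baseline.get(name)
        if old is not None and new > old * THRESHOLD:
            print("REGRESSION: %s went from %.4fs to %.4fs" % (name, old, new))
        baseline[name] = new
        with open(baseline_file, "w") as f:
            json.dump(baseline, f, indent=2)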

