> On Jan 10, 2015, at 3:59 PM, Dustin J. Mitchell <dus...@v.igoro.us> wrote:
> 
> As someone partially responsible for the infrastructure Mozilla uses
> to do its performance benchmarking, I can say that it's *really* hard.
> Getting live operating systems to sit still and behave is a mess, and
> then *keeping* them still over months and years (while attending to
> necessary security upgrades, hardware migrations, and so on) is even
> worse.
> 
> One of the smarter things we've figured out how to do is to "phase in"
> potentially disruptive changes so that we can either see that there's
> no impact, or estimate a correction factor for comparing results
> before and after the change.
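
(For what it's worth, my naive reading of "estimate a correction factor" is 
something as simple as the sketch below; the numbers and names are entirely 
made up, just to illustrate the idea:)

    # Hypothetical: benchmark runs of *unchanged* code, taken before and
    # after an infrastructure change, used to scale later results so they
    # stay comparable with the old data.
    from statistics import mean

    before = [1.02, 0.99, 1.01, 1.00]   # seconds per run, old environment
    after = [1.13, 1.10, 1.12, 1.11]    # seconds per run, new environment

    correction = mean(before) / mean(after)

    def corrected(new_result):
        """Scale a post-change result back onto the pre-change baseline."""
        return new_result * correction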

In my unfortunately somewhat uninformed opinion, one thing that can really help 
is not to commit to long-term stability, but rather to just have a clearly 
documented log of operations performed on the monitoring cluster.  Twisted has 
far less intense performance-analysis requirements than Mozilla, I should hope, 
and a lot less data to deal with, so just the ability to see those operational 
events marked on the X axis of the benchmark graphs could be enough to tell 
contributors what's going on with performance deltas.
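
To be concrete about what I mean by "events on the X axis", something like the 
following would probably be enough (the log format and data here are invented, 
purely for illustration):

    # Hypothetical sketch: overlay a hand-maintained operations log on the
    # benchmark graph so contributors can correlate performance deltas with
    # changes made to the monitoring cluster.
    from datetime import date
    import matplotlib.pyplot as plt

    results = {                  # benchmark timings in seconds, by run date
        date(2015, 1, 1): 1.02,
        date(2015, 1, 5): 1.01,
        date(2015, 1, 9): 1.14,
    }
    ops_log = [                  # the "clearly documented log of operations"
        (date(2015, 1, 7), "kernel security upgrade on benchmark host"),
    ]

    fig, ax = plt.subplots()
    ax.plot(sorted(results), [results[d] for d in sorted(results)], marker="o")
    for when, what in ops_log:
        ax.axvline(when, linestyle="--", color="red")
        ax.annotate(what, (when, max(results.values())), rotation=90, fontsize=8)
    ax.set_ylabel("benchmark time (seconds)")
    fig.autofmt_xdate()
    plt.show()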

I should point out that the main reason we need a performance testing rig is 
not continuous performance monitoring over time, but rather, clear performance 
tracking of individual changes, ideally before they land.  One of the things 
I'm unhappy about with speed center (and a big reason it's basically 
unmaintained) is that it's very hard to tell it to build a branch and to get a 
good picture of the aggregate effect of that branch on the benchmarks.
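
Concretely, what I want from a rig is something more like the report sketched 
below: build trunk and the branch, run the same suite against both, and show 
the per-benchmark ratios plus one aggregate number.  (run_benchmarks() here is 
an imaginary stand-in for however the suite actually gets executed; the fake 
numbers are only there so the sketch runs.)

    # Hypothetical sketch of a pre-landing report: per-benchmark ratios of
    # branch vs. trunk, plus a geometric-mean aggregate.
    from math import exp, log

    def run_benchmarks(revision):
        """Stand-in for the real benchmark runner; returns {name: seconds}."""
        fake = {
            "trunk":     {"tcp_throughput": 1.00, "deferred_callback": 0.50},
            "my-branch": {"tcp_throughput": 1.03, "deferred_callback": 0.48},
        }
        return fake[revision]

    def compare(branch, trunk="trunk"):
        base = run_benchmarks(trunk)
        new = run_benchmarks(branch)
        ratios = {name: new[name] / base[name] for name in base}
        for name, ratio in sorted(ratios.items()):
            print("%-25s %+6.1f%%" % (name, (ratio - 1) * 100))
        aggregate = exp(sum(log(r) for r in ratios.values()) / len(ratios))
        print("geometric mean of ratios: %.3f" % (aggregate,))

    compare("my-branch")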

-glyph

