> > Probably the test machine was changed, a new version of OpenSSL or
> > pyOpenSSL, or something else?
> 
> One of those things.  There is no infrastructure in place for identifying
> events which impact the performance testing infrastructure.  The only performance

Yes, this is an important point: track changes in the infrastructure (everything
that might have an influence on results, but is outside the tested code).
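
A rough sketch of what I mean (the file name and fields are arbitrary): record an
environment fingerprint next to each benchmark run, so that a sudden shift in
results can be correlated with e.g. an OpenSSL or pyOpenSSL upgrade:

    import json, platform, ssl

    fingerprint = {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "openssl": ssl.OPENSSL_VERSION,
    }
    try:
        import OpenSSL  # pyOpenSSL, if installed
        fingerprint["pyopenssl"] = OpenSSL.__version__
    except ImportError:
        fingerprint["pyopenssl"] = None

    # store alongside the benchmark results of this run (file name is arbitrary)
    with open("environment.json", "w") as f:
        json.dump(fingerprint, f, indent=2)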

> testing environment is a very old mac mini still running Snow Leopard, which

omg;)

> > I'd say: the infrastructure aspects when doing performance tests do
> > matter. To the degree that performance results are of very limited value at
> > all, if the former aspects are not accounted for.
> 
> I don't think the results that we have presently are worth much at all.  My
> point was mostly that there is some infrastructure which is halfway usable,
> and so you don't have to start from scratch.  If you could take over this

You mean taking over the code "as is"

http://bazaar.launchpad.net/~twisted-dev/twisted-benchmarks/trunk/files

or the task in general (Twisted benchmarking)?

> project (I am pretty sure at this point there is nobody to take it over 
> *from*,

We are currently developing performance test infrastructure for Crossbar.io -
naturally, it is eating its own dog food: the infrastructure is based on
Crossbar.io and WAMP to orchestrate and wire up things in a distributed test setup.
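
Just to give a flavour of how that orchestration works (a toy sketch, not our
actual code; the procedure URI and result fields are made up): each test node runs
a WAMP component that registers a procedure, which the test controller then calls
to kick off a benchmark run and collect the numbers:

    from twisted.internet.defer import inlineCallbacks
    from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner

    class BenchWorker(ApplicationSession):

        @inlineCallbacks
        def onJoin(self, details):

            def run_benchmark(name, duration=10):
                # run the named benchmark locally for `duration` seconds
                # and return the measured numbers (dummy values here)
                return {"name": name, "throughput": 0.0, "latency_p99": 0.0}

            yield self.register(run_benchmark, u"com.example.bench.run")

    if __name__ == "__main__":
        runner = ApplicationRunner(u"ws://localhost:8080/ws", u"realm1")
        runner.run(BenchWorker)

The controller side is then simply session.call(u"com.example.bench.run", ...)
against whatever nodes have joined the realm.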

We could extend that to test at the Twisted(-only) level. I need to think about
how that fits into the "overall strategy", as the Crossbar.io perf. test stuff
isn't open-source.

The testing hardware above (Mac, no real network) is insufficient for what I need.
I'm thinking about buying and setting up 2 more boxes for Linux.

Regarding Codespeed (https://github.com/tobami/codespeed), which seems to be used
by speedcenter.twistedmatrix.com: I have issues here as well.

E.g. I need latency histograms, but this seems unsupported (benchmark results can
only have avg/min/max/stddev). For me, this isn't "nice to have", but essential.
Throughput is one thing; consistent low latency is a completely different matter.
The latter is much, much harder.
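
To make the point concrete (made-up numbers, not measurements): with a long tail,
the mean and stddev still look harmless while the high percentiles explode - and
the high percentiles are exactly what a histogram preserves and a single
avg/min/max/stddev row throws away:

    import random

    # fake latency samples in ms: mostly fast, plus a slow tail
    samples = [random.gauss(2.0, 0.3) for _ in range(9900)] + \
              [random.uniform(50.0, 200.0) for _ in range(100)]
    samples.sort()

    def percentile(sorted_samples, p):
        # p-th percentile (0..100) of an already sorted list
        idx = min(len(sorted_samples) - 1, int(len(sorted_samples) * p / 100.0))
        return sorted_samples[idx]

    mean = sum(samples) / len(samples)
    print("mean   %.2f ms" % mean)                       # looks harmless
    print("p50    %.2f ms" % percentile(samples, 50))
    print("p99    %.2f ms" % percentile(samples, 99))
    print("p99.9  %.2f ms" % percentile(samples, 99.9))  # the pain lives here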

But what is the "interface" between the test cases in "twisted-benchmarks" and
Codespeed?

This

https://github.com/tobami/codespeed#saving-data

seems to suggest that performance test results are HTTP POSTed as JSON to Codespeed.

And Codespeed is then only responsible for visualization and web hosting, right?
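
For reference, pushing a result could then look roughly like this (a sketch going
by the field names in the Codespeed README; the host, project and environment
names are invented):

    import json
    import requests

    CODESPEED_URL = "http://speedcenter.example.org"   # invented host

    results = [{
        "commitid": "4a1b2c3",          # revision being measured
        "branch": "default",
        "project": "Twisted",
        "executable": "cpython-2.7",
        "benchmark": "tcp_throughput",
        "environment": "mac-mini",
        "result_value": 4823.5,         # the measured number
        # optional fields:
        "min": 4700.0,
        "max": 4950.0,
        "std_dev": 55.2,
    }]

    r = requests.post(CODESPEED_URL + "/result/add/json/",
                      data={"json": json.dumps(results)})
    r.raise_for_status()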

I think we can find something better for that part.

> (And if you care a lot about performance in a particular environment you
> could set it up in that environment and get attention for it :)).

Yes, in particular that very last one is a factor to justify the effort ;) Anything
like having a promo logo or similar - that would be an argument to invest time and
material. I will seriously contemplate it .. need to align with strategy and
available time.

We already host FreeBSD buildslaves for both Twisted and PyPy. That might be
another synergy (hosting the latter on those same boxes).

> You should also have a look at the existing benchmark suite and potentially
> look at maintaining / expanding that as well.

I will try to integrate some of this into our upcoming perf. infrastructure.

/Tobias

> 
> Thoughts?
> 
> -glyph

_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
