On 19/06/15 11:52, Alan Griffiths wrote:
> On 19/06/15 11:22, Alexandros Frantzis wrote:
>> Hello all,
>>
>> There have recently been a few occasions where we wanted to experiment with
>> performance improvements, but lacked a way to easily measure their effect.
>> There have also been a few occasions where different implementations of the
>> (supposedly) same performance test came up with different results. In
>> light of these issues, it would be helpful to have a common performance
>> test framework, to make it easy to create and share performance test
>> scenarios.
>
> +1
...
> I'd say try the Python approach and see if we hit limits.

Is there any need to capture requirements? Or do we have a few "current"
test scenarios to try automating?
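
For instance (purely a sketch; run_scenario and the metrics handling
below are placeholders, not an existing API, though the demo binaries
do ship with Mir), a first Python scenario could be as simple as:

# Hypothetical sketch only: run_scenario is a placeholder, not part of
# any existing Mir tooling.
import subprocess
import time

def run_scenario(server_cmd, client_cmd, duration=10.0):
    """Run a server and a client side by side for a fixed duration."""
    server = subprocess.Popen(server_cmd)
    time.sleep(1.0)  # crude: give the server time to come up
    client = subprocess.Popen(client_cmd)
    time.sleep(duration)
    client.terminate()
    server.terminate()
    client.wait()
    server.wait()
    # A real framework would collect and report metrics here,
    # e.g. frames rendered by the client divided by duration.

if __name__ == "__main__":
    run_scenario(["mir_demo_server"], ["mir_demo_client_egltriangle"])

Even something that minimal would give us a common place to hang
metrics collection off, whatever shape the framework eventually takes.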
