On Thu, Jul 24, 2014 at 3:00 AM, Robert Carr <robert.c...@canonical.com> wrote:
> I'm a little skeptical of the idea that testing the full input stack in
> autopilot application tests is necessary or is a significant contributor to
> quality, I think it certainly confuses test scope which has it's own
> consequences. My preferred solutions in order:

Well, the whole point of high-level functional tests is to have as broad a
scope as possible. This approach has found bugs in mir input event handling
before: remember the bug about mir not always picking up input devices that
were created after the mir server started? That was found by autopilot, and
wouldn't have been had we been injecting input events into the applications
under test directly. On the unity7 / desktop side, we've found
window-stacking issues with autopilot this way as well. I think I have
enough data to say that "yes, having as broad a scope for functional / user
acceptance tests as possible helps find problems and increases quality".

Anyway, it sounds like there are several solutions here. I'll wait for
someone to tell me what the proposed fix is. If it requires a change to
autopilot-qt, I really need about two weeks' notice before it lands in
distro (if you think the mir release process is bad/slow, you should try
releasing autopilot :P ).

Cheers,

-- 
Thomi Richards
thomi.richa...@canonical.com
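P.S. For concreteness, here's a rough sketch (hypothetical, not autopilot's
actual backend, and assuming the python-evdev package plus permission to
create uinput devices) of what driving the whole stack looks like: the test
hot-plugs a brand-new kernel input device *after* the server is up and sends
a key press through it, so the event has to travel evdev -> mir's input
stack -> the toolkit -> the app, rather than being injected into the
application process directly.

    # Hypothetical full-stack input sketch; not autopilot's real API.
    import time
    from evdev import UInput, ecodes as e

    # Create a virtual keyboard *after* the server has started, exercising
    # device hot-plug handling as well as event delivery.
    ui = UInput(name='fake-test-keyboard')
    time.sleep(1)  # give the server a moment to notice the new device

    ui.write(e.EV_KEY, e.KEY_A, 1)  # key down
    ui.write(e.EV_KEY, e.KEY_A, 0)  # key up
    ui.syn()                        # flush the events to the kernel

    ui.close()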
-- 
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/mir-devel