Hi Jörg,

Jörg Jahnke schrieb:

Is this really the answer to Thorsten's older question? I guess not.

I was just looking at one tree in the wood. Unfortunately, this tree is in my way, so I cannot even enter the wood.

It is also not the answer to Mechtilde's initial question, which Stephan posted here, because there the issue was differences in functionality. With a failed build you get no functionality at all.

Correct :) So it is a precondition that buildbots are "as reliable as other build environments".



From the perspective of a QA member who wants to create a build that he can test, a failed build is certainly more than annoying. In a previous mail Gregor asked about the different tasks BuildBots were meant to work on. IMO the focus up to now has been more on testing the builds on many different platforms and on finding and fixing build problems for these platforms early. They currently perhaps have more of a developer focus than a QA focus. So the fact that you stumble over a build problem on a BuildBot does not mean that the BuildBot system is broken; it might as well mean that the current BuildBots are more useful for developers than for QA purposes.

Yes - and I think this was more Mechtilde's intention: we need to make people aware that community QA people would really like to do more testing (without bugging developers or Hamburg RE for test builds). With the current setup this is very hard to achieve.


But indeed we should think about adding that QA focus and installing some BuildBots that are as close as possible to the Hamburg RE environment where the milestone builds take place, so that the BuildBots can create builds with a higher reliability.

There are indeed two ways to get a better match of the builds:
- the Buildbots use the same build environment as Hamburg RE
- Hamburg RE uses the same build environment as the rest of the community (including the Buildbots)
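Either way, the first step is knowing exactly where the environments differ. As a rough illustration (this is only a sketch, not an existing tool, and the file names env-hamburg.txt and env-buildbot.txt are made up), a small script could compare recorded toolchain versions and configure switches from both sides and print only the settings that diverge:

#!/usr/bin/env python
# Sketch: compare two recorded build environments and report differences.
# Assumes each side dumps simple "key=value" lines (compiler version,
# configure switches, ...) into a text file beforehand.

import sys

def read_env(path):
    # Collect key=value pairs, ignoring blank or malformed lines.
    settings = {}
    for line in open(path):
        line = line.strip()
        if line and "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings

def compare(reference_file, candidate_file):
    reference = read_env(reference_file)
    candidate = read_env(candidate_file)
    for key in sorted(set(reference) | set(candidate)):
        ref = reference.get(key, "<missing>")
        cand = candidate.get(key, "<missing>")
        if ref != cand:
            print("%s: RE=%s  buildbot=%s" % (key, ref, cand))

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])

Run as e.g. "compare_env.py env-hamburg.txt env-buildbot.txt"; whatever shows up in the output is a known difference we could then try to eliminate first.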



IMO that would be very useful. We should ensure that we do not spend time and resources on adding BuildBots with an environment close to the Hamburg RE one, only to find out later that it was all wasted because the real problems were e.g. different window managers on the test machines or whatever.

The differences in the build environment cause some of the problems - the window manager indeed causes other problems, as do the configure settings, the CPU of the test machine, the mouse driver on the test machine ...

This whole thing is very complex - and it will be a complex task to analyze all of it. So we should lower the complexity by reducing known differences. If we then get quite good results, we can start raising the complexity again and see which differences in the build and testing environments we can cope with.


Best,

André

