On 8/15/13 12:19 PM, janI wrote:
> On Aug 15, 2013 11:14 AM, "Jürgen Schmidt" <jogischm...@gmail.com> wrote:
>>
>> On 8/14/13 8:30 PM, Rob Weir wrote:
>>> On Wed, Aug 14, 2013 at 1:55 PM, janI <j...@apache.org> wrote:
>>>> On 14 August 2013 19:36, Edwin Sharp <el...@mail-page.com> wrote:
>>>>
>>>>> Dear Rob
>>>>> The 4.0 release was too ambitious - we should advance in smaller
>>>>> steps.
>>>>> Nothing compares to general public testing - betas and release
>>>>> candidates should not be avoided.
>>>>> TestLink cases should be less comprehensive (in terms of feature
>>>>> coverage) and more stress-testing oriented.
>>>>> Regards,
>>>>> Edwin
>>>>>
>>>>> On Wed, Aug 14, 2013, at 19:59, Rob Weir wrote:
>>>>>> We're working now on AOO 4.0.1, to fix defects in AOO 4.0.0.  The
>>>>>> fact that we're doing this, and there are no arguments against it,
>>>>>> shows that we value quality.  I'd like to take this a step further,
>>>>>> and see what we can learn from the defects in AOO 4.0.0 and what we
>>>>>> can do going forward to improve.
>>>>>>
>>>>>> Quality, in the end, is a process, not a state of grace.  We improve
>>>>>> by working smarter, not working harder.  The goal should be to learn
>>>>>> and improve, as individuals and as a community.
>>>>>>
>>>>>> Every regression that made it into 4.0.0 was added there by a
>>>>>> programmer.  And the defect went undetected by testers.  This is not
>>>>>> about blame.  It just means that we're all human.  We know that.  We
>>>>>> all make mistakes.  I make mistakes.  A quality process is not about
>>>>>> becoming perfect, but about acknowledging that we make mistakes and
>>>>>> that certain formal and informal practices are needed to prevent and
>>>>>> detect these mistakes.
>>>>>>
>>>>>> But enough about generalities.  I'm hoping you'll join with me in
>>>>>> examining the 32 confirmed 4.0.0 regression defects and answering a
>>>>>> few questions:
>>>>>>
>>>>>> 1) What caused the bug?   What was the "root cause"?  Note:
>>>>>> "programmer error" is not really a cause.  We should ask what caused
>>>>>> the error.
>>>>>>
>>>>>> 2) What can we do to prevent bugs like this from being checked in?
>>>>>>
>>>>>> 3) Why wasn't the bug found during testing?  Was it not covered by
>>>>>> any existing test case?  Was a test case run but the defect was not
>>>>>> recognized?  Was the defect introduced into the software after the
>>>>>> tests had already been executed?
>>>>>>
>>>>>> 4) What can we do to ensure that bugs like this are caught during
>>>>>> testing?
>>>>>>
>>>>>> So 2 basic questions -- what went wrong, and how can we prevent it
>>>>>> in the future, looked at from the perspective of programmers and
>>>>>> testers.  If we can keep these questions in mind, and try to answer
>>>>>> them, we may be able to find some patterns that can lead to some
>>>>>> process changes for AOO 4.1.
>>>>>>
>>>>>> You can find the 4.0.0 regressions in Bugzilla here:
>>>>>>
>>>>>> https://issues.apache.org/ooo/buglist.cgi?cmdtype=dorem&remaction=run&namedcmd=400_regressions&sharer_id=248521&list_id=80834
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> -Rob
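As an aside, Bugzilla's buglist.cgi can usually export a saved search as CSV by appending `&ctype=csv`, which would make it easy to turn the regression list above into a per-bug worksheet for Rob's root-cause questions. A minimal sketch in Python; the CSV column names `bug_id` and `short_desc` are assumptions about this Bugzilla instance's export format:

```python
import csv
import io
import urllib.request

# Saved search from this thread; "&ctype=csv" asks Bugzilla for CSV output.
BUGLIST_CSV = ("https://issues.apache.org/ooo/buglist.cgi"
               "?cmdtype=dorem&remaction=run&namedcmd=400_regressions"
               "&sharer_id=248521&list_id=80834&ctype=csv")

def fetch_buglist(url=BUGLIST_CSV):
    """Download the buglist CSV (needs network access)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def worksheet_rows(csv_text):
    """Turn the CSV export into (bug_id, summary) pairs, ready to be
    annotated with root-cause and test-coverage answers per bug."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["bug_id"], row["short_desc"]) for row in reader]
```

Each pair could then be printed as a row with empty "root cause" and "test case" columns to fill in while walking through the 32 regressions.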
>>>>>
>>>>
>>>> I strongly believe that one of the things that went wrong is our
>>>> limited possibility to retest (due to resources), when I look at our
>>>> current manual
>>>
>>> I wonder about that as well.  That's one reason it would be good to
>>> know how many of the confirmed regressions were introduced late in the
>>> release process, and thus missed coverage in our full test pass.
>>>
>>>> test cases, a lot of those could be automated, e.g. with a simple UI
>>>> macro, which would enable us to run these test cases with every build.
>>>> It may sound like a dream, but where I come from we did that every
>>>> night, and it caught a lot of regression bugs and side effects.
>>>>
>>>
>>> This raises the question: is the functionality affected by the
>>> regressions covered by our test cases at all?  Or is it covered, but
>>> we didn't execute the tests?  Or did we execute them but not recognize
>>> the defect?  I don't know (yet).
>>>
>>>> A simple start is to request that every bug fix comes with at least
>>>> one test case (automated or manual).
>>>>
>>>
>>> Often there is, though this information lives in Bugzilla.  One thing
>>> we did on another (non-open-source) project was to mark defects in our
>>> bug-tracking system that should become test cases.  Not every bug
>>> needed that.  For example, a defect report to fix a misspelling in the
>>> UI would not lead to a new test case.  But many would.
>>
>> We have the automated test framework, which needs some more attention
>> and polishing. And of course the tests have to be improved to get
>> satisfying results.
>>
>> We have
>>
>> BVT - build verification test
>> FVT - functional verification test
>> PVT - performance verification test
>> SVT - system verification test
>>
>> But I have to confess that I still have limited knowledge about it.
> 
> I am aware that we have a limited automated framework, at least that's
> what I found and played with.
> 
> But it is not integrated into our build, or our buildbot. Testing in
> buildbot in particular gives better QA. A manually triggered automated
> test is not really an ideal solution.

+1, and I think that was the intended idea behind it: have it run on a
regular basis, ideally on the build bots. The work is not finished and
still has to be done, like so many other open work items.
If the tests run more stably and the results are in good shape, we can
probably include them in our build bots quite easily.
I know that hdu has some experience with this and can share some info.
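For what it's worth, wiring the tests into the build bots could start as small as one extra build step. A minimal sketch, assuming a buildbot master.cfg using the classic 0.8-style API, and a hypothetical `test/run_bvt.sh` wrapper around the existing BVT suite:

```python
# Sketch only: a buildbot build step that runs the automated BVT after the
# compile and fails the build on a regression.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()
# ... existing checkout / configure / compile steps go here ...
factory.addStep(ShellCommand(
    command=["bash", "test/run_bvt.sh"],  # hypothetical entry point
    description="running BVT",
    descriptionDone="BVT",
    haltOnFailure=True))  # a test failure turns the build red immediately
```

With something like this in place, every nightly build would exercise the automated test cases instead of waiting for a manually triggered run.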


> 
> Take a look at the manual test cases: a lot could be automated, which
> would free QA resources for more complex testing.

Sure, but again the work has to be done, and volunteers are welcome as
always. You probably have enough to do already, and the same goes for me
and others ...

Juergen

> 
> rgds
> jan I
>>
>> Juergen
>>
>>
>>>
>>> Regards,
>>>
>>> -Rob
>>>
>>>> rgds
>>>> jan I.
>>>>
>>>>
>>>>>>
>>>>>> ---------------------------------------------------------------------
>>>>>> To unsubscribe, e-mail: qa-unsubscr...@openoffice.apache.org
>>>>>> For additional commands, e-mail: qa-h...@openoffice.apache.org
>>>>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: dev-unsubscr...@openoffice.apache.org
>>> For additional commands, e-mail: dev-h...@openoffice.apache.org
>>>
>>
> 


