On Thu, Aug 15, 2013 at 8:29 AM, Jürgen Schmidt <[email protected]> wrote:
> On 8/15/13 1:33 PM, Rob Weir wrote:
>> On Thu, Aug 15, 2013 at 6:58 AM, Jürgen Schmidt <[email protected]> 
>> wrote:
>>> On 8/15/13 12:19 PM, janI wrote:
>>>> On Aug 15, 2013 11:14 AM, "Jürgen Schmidt" <[email protected]> wrote:
>>>>>
>>>>> On 8/14/13 8:30 PM, Rob Weir wrote:
>>>>>> On Wed, Aug 14, 2013 at 1:55 PM, janI <[email protected]> wrote:
>>>>>>> On 14 August 2013 19:36, Edwin Sharp <[email protected]> wrote:
>>>>>>>
>>>>>>>> Dear Rob
>>>>>>>> The 4.0 release was too ambitious - we should advance in smaller steps.
>>>>>>>> Nothing compares to general public testing - betas and release
>>>>>>>> candidates should not be avoided.
>>>>>>>> TestLink cases should be less comprehensive (in terms of feature
>>>>>>>> coverage) and more stress-testing oriented.
>>>>>>>> Regards,
>>>>>>>> Edwin
>>>>>>>>
>>>>>>>> On Wed, Aug 14, 2013, at 19:59, Rob Weir wrote:
>>>>>>>>> We're working now on AOO 4.0.1, to fix defects in AOO 4.0.0.  The fact
>>>>>>>>> that we're doing this, and there are no arguments against it, shows
>>>>>>>>> that we value quality.  I'd like to take this a step further, and see
>>>>>>>>> what we can learn from the defects in AOO 4.0.0 and what we can do
>>>>>>>>> going forward to improve.
>>>>>>>>>
>>>>>>>>> Quality, in the end, is a process, not a state of grace.  We improve
>>>>>>>>> by working smarter, not working harder.  The goal should be to learn
>>>>>>>>> and improve, as individuals and as a community.
>>>>>>>>>
>>>>>>>>> Every regression that made it into 4.0.0 was added there by a
>>>>>>>>> programmer.  And the defect went undetected by testers.  This is not
>>>>>>>>> about blame.  It just means that we're all human.  We know that.  We all
>>>>>>>>> make mistakes.  I make mistakes.  A quality process is not about
>>>>>>>>> becoming perfect, but about acknowledging that we make mistakes and
>>>>>>>>> that certain formal and informal practices are needed to prevent and
>>>>>>>>> detect these mistakes.
>>>>>>>>>
>>>>>>>>> But enough about generalities.  I'm hoping you'll join with me in
>>>>>>>>> examining the 32 confirmed 4.0.0 regression defects and answering a
>>>>>>>>> few questions:
>>>>>>>>>
>>>>>>>>> 1) What caused the bug?   What was the "root cause"?  Note:
>>>>>>>>> "programmer error" is not really a cause.  We should ask what caused
>>>>>>>>> the error.
>>>>>>>>>
>>>>>>>>> 2) What can we do to prevent bugs like this from being checked in?
>>>>>>>>>
>>>>>>>>> 3) Why wasn't the bug found during testing?  Was it not covered by any
>>>>>>>>> existing test case?  Was a test case run but the defect was not
>>>>>>>>> recognized?  Was the defect introduced into the software after the
>>>>>>>>> tests had already been executed?
>>>>>>>>>
>>>>>>>>> 4) What can we do to ensure that bugs like this are caught during
>>>>>>>>> testing?
>>>>>>>>>
>>>>>>>>> So two basic questions -- what went wrong and how can we prevent it in
>>>>>>>>> the future, looked at from the perspective of programmers and testers.
>>>>>>>>> If we can keep these questions in mind, and try to answer them, we may
>>>>>>>>> be able to find some patterns that can lead to some process changes
>>>>>>>>> for AOO 4.1.
>>>>>>>>>
>>>>>>>>> You can find the 4.0.0 regressions in Bugzilla here:
>>>>>>>>>
>>>>>>>>> https://issues.apache.org/ooo/buglist.cgi?cmdtype=dorem&remaction=run&namedcmd=400_regressions&sharer_id=248521&list_id=80834
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> -Rob
>>>>>>>>
>>>>>>>
>>>>>>> I strongly believe that one of the things that went wrong is our limited
>>>>>>> ability to retest (due to resources). When I look at our current manual
>>>>>>
>>>>>> I wonder about that as well.  That's one reason it would be good to
>>>>>> know how many of the confirmed regressions were introduced late in the
>>>>>> release process, and thus missed coverage in our full test pass.
>>>>>>
>>>>>>> test cases, a lot of those could be automated, e.g. with a simple UI
>>>>>>> macro; that would enable us to run these test cases with every build. It
>>>>>>> may sound like a dream, but where I come from we did that every night,
>>>>>>> and it caught a lot of regression bugs and side effects.
>>>>>>>
>>>>>>
>>>>>> This raises the question: is the functionality of the regressions
>>>>>> covered by our test cases?  Or are they covered but we didn't execute
>>>>>> them?  Or did we execute them but not recognize the defect?  I don't
>>>>>> know (yet).
>>>>>>
>>>>>>> A simple start is to request that every bug fix is issued with at least
>>>>>>> one test case (automated or manual).
>>>>>>>
>>>>>>
>>>>>> Often there is, though this information lives in Bugzilla.  One thing
>>>>>> we did on another (non-open-source) project was to mark defects in our
>>>>>> bug-tracking system that should become test cases.  Not every bug needed
>>>>>> that.  For example, a defect report to fix a misspelling in the UI
>>>>>> would not lead to a new test case.  But many would.
>>>>>
>>>>> We have the automated test framework, which needs some more attention and
>>>>> polishing. And of course the tests have to be improved to get satisfying
>>>>> results.
>>>>>
>>>>> We have
>>>>>
>>>>> BVT - build verification test
>>>>> FVT - functional verification test
>>>>> PVT - performance verification test
>>>>> SVT - system verification test
>>>>>
>>>>> But I have to confess that I still have only limited knowledge about it.
>>>>
>>>> I am aware that we have a limited automated framework, at least that's what
>>>> I found and played with.
>>>>
>>>> But it is not integrated into our build or our buildbot. Especially testing
>>>> in the buildbot gives better QA. A manually controlled automated test is
>>>> not really an ideal solution.
>>>
>>> +1, and I think that was the intended idea behind this: have it run on a
>>> regular basis, ideally on the build bots. The work is not finished and
>>> has to be done, like so many open work items.
>>> If the tests run more stably and the results are in good shape, we can
>>> probably quite easily include them in our build bots.
>>> I know that hdu has some experience with this and can share some info.
>>>
>>
>> A thought experiment:   If we ran the existing test automation on
>> 4.0.0, how many of the bugs that we're fixing in 4.0.1 do you think
>> would be detected?
>
> I don't think that we would have detected these problems.
>
> The performance issue could have been detected if the correct reference
> data had been used. But the related changes have been in the code for
> ~1 year. I don't know if we have a related test. But it is certainly a
> candidate that could have been found in time if we had a test and good
> reference data.
>

I think it is reasonable to expect that a performance verification
test would include a case that saves a large XLS file.  Of course, this
only helps if we actually run the test automation.
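
To make that concrete, here is a rough sketch (hypothetical, not taken
from our PVT suite; saveLargeXlsDocument() just stands in for whatever
the harness would really call, and the numbers are made up) of the basic
pattern such a test needs: time the operation, compare against recorded
reference data, and fail when it regresses beyond a tolerance.

  #include <chrono>
  #include <cstdio>
  #include <cstdlib>
  #include <thread>

  // Hypothetical stand-in for the real operation under test, e.g.
  // exporting a large spreadsheet to .xls through the document API.
  static void saveLargeXlsDocument()
  {
      std::this_thread::sleep_for(std::chrono::milliseconds(500));
  }

  int main()
  {
      using namespace std::chrono;

      // Reference data recorded from a known-good build, plus a tolerance
      // so the test does not flap on normal machine noise.  Both numbers
      // are assumptions for this sketch.
      const double fBaselineSeconds   = 0.6;
      const double fAllowedRegression = 1.5;  // fail if >50% slower

      const auto tStart = steady_clock::now();
      saveLargeXlsDocument();
      const auto tEnd = steady_clock::now();

      const double fElapsed = duration<double>(tEnd - tStart).count();
      std::printf("save took %.2f s (baseline %.2f s)\n",
                  fElapsed, fBaselineSeconds);

      // Only useful if the baseline is kept up to date and the test is
      // actually run for every build.
      return fElapsed <= fBaselineSeconds * fAllowedRegression
                 ? EXIT_SUCCESS : EXIT_FAILURE;
  }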

> None of the problems related to external extensions would have been detected.
>
> Some of these problems are only visible when you look at the screen;
> they are difficult to find.
>
> The copy/paste problem with Punjabi? Not easy to detect.
>

Right.

GUI-based automation tends to be broad but shallow.  It requires
special skills (Java programming in our case) to develop and maintain.
If run regularly, like with every build, it can catch catastrophic
errors almost immediately.  Automation has the benefit of costing
nothing to run (once automated) and of not making mistakes.  The
downside is that it does not find bugs it is not programmed to detect.

The old saying applies here:  "Every class of users finds a new class
of bugs".  There is no one perfect way of testing that finds all bugs.
The best approach is a mix of techniques.

One thing I wonder about: do we use test assertions in our code,
like the old C/C++ assert() macro or similar?  The blocking issue we
found in RC1 would likely have been found that way, for example.  The
nice thing about test assertions is that they can even help developers
find the bug before the code is checked in.  And unlike test automation,
a test assertion does not fall out of sync with the code or the UI,
since it is in the code.  It is more likely to be maintained.  Also,
adding a test assertion after a bug fix is easier than writing a GUI
test case.
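
For example (a minimal, self-contained sketch; the function and the
invariants are hypothetical, not from our code base), the pattern is
simply to state the invariant at the point where it must hold:

  #include <cassert>
  #include <vector>

  // Hypothetical helper: compute the height of a table row from its
  // cells.  The asserts document and enforce the invariants; in a debug
  // build they abort at the exact spot where the invariant breaks, and
  // in a release build (-DNDEBUG) they compile away to nothing.
  static int maxRowHeight(const std::vector<int>& rCellHeights)
  {
      assert(!rCellHeights.empty() && "a row must contain at least one cell");

      int nMax = 0;
      for (int nHeight : rCellHeights)
      {
          assert(nHeight >= 0 && "cell heights must not be negative");
          if (nHeight > nMax)
              nMax = nHeight;
      }
      return nMax;
  }

  int main()
  {
      const std::vector<int> aRow = { 12, 40, 25 };
      return maxRowHeight(aRow) == 40 ? 0 : 1;
  }

A developer who breaks such an invariant sees the failure in their own
debug build, before the change is ever checked in.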

Regards,

-Rob

> Juergen
>
>>
>> -Rob
>>
>>>
>>>>
>>>> Take a look at the manual test cases; a lot could be automated, freeing QA
>>>> resources for more complex testing.
>>>
>>> Sure, but again the work has to be done, and volunteers are welcome as
>>> always. You probably have enough to do, the same for me and others ...
>>>
>>> Juergen
>>>
>>>>
>>>> rgds
>>>> jan I
>>>>>
>>>>> Juergen
>>>>>
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> -Rob
>>>>>>
>>>>>>> rgds
>>>>>>> jan I.
>>>>>>>
>>>>>>>

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
