Starting with the honor system might be good.  It's not always easy to 
relate lines of code to functionality.  Also, just because a test hits a line 
of code doesn't mean that line is really tested.

Can't we just get people to put a check mark in a table on the wiki?

Darren

> On Oct 28, 2013, at 12:08 PM, Santhosh Edukulla 
> <santhosh.eduku...@citrix.com> wrote:
> 
> 1. It seems we already have code coverage numbers in Sonar, as below. It 
> currently shows only the numbers for unit tests.
> 
> https://analysis.apache.org/dashboard/index/100206
> 
> 2. The link below explains how to use it for both integration and 
> unit tests.
> 
> http://docs.codehaus.org/display/SONAR/Code+Coverage+by+Integration+Tests+for+Java+Project
> 
> 3. Several comparisons suggest it has better decision coverage support than 
> other coverage tools.
> 
> http://onlysoftware.wordpress.com/2012/12/19/code-coverage-tools-jacoco-cobertura-emma-comparison-in-sonar/
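
For reference, wiring JaCoCo into a Maven build so Sonar can pick up both unit and integration coverage looks roughly like the sketch below. This is an illustration only: the plugin version and report file names are assumptions, and the exact Sonar property names depend on the Sonar version in use.

```xml
<!-- Sketch: jacoco-maven-plugin producing separate coverage files for
     unit (surefire) and integration (failsafe) test runs. Sonar would
     then be pointed at jacoco.exec / jacoco-it.exec. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.9</version>
  <executions>
    <execution>
      <id>prepare-unit-tests</id>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <destFile>${project.build.directory}/jacoco.exec</destFile>
      </configuration>
    </execution>
    <execution>
      <id>prepare-integration-tests</id>
      <phase>pre-integration-test</phase>
      <goals><goal>prepare-agent-integration</goal></goals>
      <configuration>
        <destFile>${project.build.directory}/jacoco-it.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```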
> 
> Regards,
> Santhosh
> ________________________________________
> From: Laszlo Hornyak [laszlo.horn...@gmail.com]
> Sent: Monday, October 28, 2013 1:43 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Tiered Quality
> 
> Sonar already tracks the unit test coverage. It is also able to track
> integration test coverage, though this might be a bit more complicated
> in CS since not all hardware/software requirements are available in the
> Jenkins environment. That said, this would be a problem in any environment.
> 
> 
>> On Mon, Oct 28, 2013 at 5:53 AM, Prasanna Santhanam <t...@apache.org> wrote:
>> 
>> We need a way to check the coverage of (unit+integration) tests: how many
>> lines of code are hit on a deployed system for the component
>> donated/committed. We don't have that for existing tests, so
>> it is hard to judge whether a feature that comes with tests covers
>> enough of itself.
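
One rough way to get per-component numbers like this is to post-process the JaCoCo XML report produced by an instrumented run. A sketch, assuming JaCoCo's XML schema (`counter` elements with `type="LINE"` and `missed`/`covered` attributes); the package names in the sample are made up:

```python
# Sketch: summarize line coverage per package from a JaCoCo XML report.
# The counter layout follows JaCoCo's XML report schema; the sample
# report and package names below are illustrative assumptions.
import xml.etree.ElementTree as ET

def line_coverage_by_package(report_xml: str) -> dict:
    """Return {package_name: covered_fraction} from a JaCoCo XML report string."""
    root = ET.fromstring(report_xml)
    coverage = {}
    for package in root.iter("package"):
        counter = next(
            (c for c in package.findall("counter") if c.get("type") == "LINE"),
            None,
        )
        if counter is None:
            continue
        missed = int(counter.get("missed"))
        covered = int(counter.get("covered"))
        total = missed + covered
        coverage[package.get("name")] = covered / total if total else 0.0
    return coverage

# Tiny hand-made report to illustrate the shape of the output.
SAMPLE = """<report name="sample">
  <package name="com/cloud/storage">
    <counter type="LINE" missed="40" covered="60"/>
  </package>
  <package name="com/cloud/network">
    <counter type="LINE" missed="75" covered="25"/>
  </package>
</report>"""

if __name__ == "__main__":
    for pkg, frac in line_coverage_by_package(SAMPLE).items():
        print(f"{pkg}: {frac:.0%}")
```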
>> 
>>> On Sun, Oct 27, 2013 at 11:00:46PM +0100, Laszlo Hornyak wrote:
>>> Ok, makes sense, but that sounds like even more work :) Can you share the
>>> plan for how this will work?
>>> 
>>> 
>>> On Sun, Oct 27, 2013 at 7:54 PM, Darren Shepherd <
>>> darren.s.sheph...@gmail.com> wrote:
>>> 
>>>> I think it can't be at a component level because components are too
>>>> large. It needs to be at a feature or implementation level. For example,
>>>> live storage migration for Xen and live storage migration for KVM (don't
>>>> know if that's a real thing) would be two separate items.
>>>> 
>>>> Darren
>>>> 
>>>>> On Oct 27, 2013, at 10:57 AM, Laszlo Hornyak
>>>>> <laszlo.horn...@gmail.com> wrote:
>>>>> 
>>>>> I believe this will be very useful for users.
>>>>> As far as I understand, someone will have to qualify components. What
>>>>> will be the method for qualification? I do not think test coverage
>>>>> alone would be the right measure. But if you want to go deeper, then
>>>>> you need a bigger effort testing the components.
>>>>> 
>>>>> 
>>>>> 
>>>>> On Sun, Oct 27, 2013 at 4:51 PM, Darren Shepherd <
>>>>> darren.s.sheph...@gmail.com> wrote:
>>>>> 
>>>>>> I don't know if a similar thing has been talked about before, but I
>>>>>> thought I'd just throw this out there. The ultimate way to ensure
>>>>>> quality is to have unit test and integration test coverage on all
>>>>>> functionality. That way somebody authors some code, commits it to, for
>>>>>> example, 4.2, but then when we release 4.3, 4.4, etc. they aren't on
>>>>>> the hook to manually test the functionality with each release. The
>>>>>> obvious nature of a community project is that people come and go. If
>>>>>> a contributor wants to ensure the long-term viability of a
>>>>>> component, they should ensure that there are unit+integration tests.
>>>>>> 
>>>>>> Now, for whatever reason, good or bad, it's not always possible
>>>>>> to have full integration tests. I don't want to throw down the
>>>>>> gauntlet and say everything must have coverage, because that would
>>>>>> mean some useful code/feature would not get in just because coverage
>>>>>> wasn't possible at the time.
>>>>>> 
>>>>>> What I propose is that for every feature or function we place it in a
>>>>>> tier reflecting its quality (very similar to how OpenStack
>>>>>> qualifies their hypervisor integrations). Tier A means unit test and
>>>>>> integration test coverage gate the release. Tier B means unit test
>>>>>> coverage gates the release. Tier C means who knows, it compiled. We
>>>>>> can go through and classify the components, and then as a community we
>>>>>> can try to get as much into Tier A as possible.
>>>>>> 
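
A tiering rule like the one Darren describes could be mechanized in a release gate along these lines. This is a sketch only: the feature names, tier assignments, and the 80% threshold are made up for illustration, not an actual CloudStack policy.

```python
# Sketch: a release gate applying Tier A/B/C rules. Tier A requires
# unit + integration coverage, Tier B unit coverage only, Tier C nothing.
# All feature names and the threshold below are illustrative assumptions.
FEATURE_TIERS = {
    "live-storage-migration-xen": "A",   # unit + integration coverage gate
    "live-storage-migration-kvm": "B",   # unit coverage gates
    "experimental-widget": "C",          # who knows, it compiled
}

def gates_release(feature: str, unit_cov: float, it_cov: float,
                  threshold: float = 0.80) -> bool:
    """True if the feature meets the coverage bar its tier requires."""
    tier = FEATURE_TIERS.get(feature, "C")  # unknown features default to C
    if tier == "A":
        return unit_cov >= threshold and it_cov >= threshold
    if tier == "B":
        return unit_cov >= threshold
    return True  # Tier C: no coverage requirement

if __name__ == "__main__":
    print(gates_release("live-storage-migration-xen", 0.9, 0.5))  # False
    print(gates_release("live-storage-migration-kvm", 0.9, 0.0))  # True
```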
>>>>>> Darren
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> 
>>>>> EOF
>>> 
>>> 
>>> 
>>> --
>>> 
>>> EOF
>> 
>> --
>> Prasanna.,
>> 
>> ------------------------
>> Powered by BigRock.com
> 
> 
> --
> 
> EOF
