Yeah, I totally agree that if fixes need to be made in 4.2, then we should
be putting fixes into 4.2. My only concern was the lack of time between the
branch "settling down" and it going into the field.
As long as we have a sufficient amount of time for me to perform some
manual regression testing
On Tue, Aug 20, 2013 at 12:58 PM, Alex Huang wrote:
> On the rapid influx of fixes, I don't think that we should tell people to stop
> pushing fixes into 4.2, but we also want to minimize churn.
> Animesh and I were discussing this in person yesterday and I wonder if we
> should branch 4.2 temporarily (perhaps call it 4.2-forward), stabilize the 4.2
I understand that this thread should be driven to consensus so I will not
cut an RC today and wait for what the community decides.

There are a few more logistics that I would like to discuss on how we should
proceed after RC. I will start a different thread on that once there is
community consensus on the RC.

Thanks
Animesh
> Also, just to clarify a bit, what I'm mainly wondering about is if we
> can find a way to make sure our defects are not only trending downward
> once we get to a certain point in the release,
Yeah, I agree it's been tapering down, but 150 or so in the last week
doesn't really seem like a good place to end before building an RC.
On Mon, Aug 19, 2013 at 3:14 PM, Chiradeep Vittal <chiradeep.vit...@citrix.com> wrote:
> You can always look at the release dashboard
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12320942
You can always look at the release dashboard
https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12320942
This is exactly what has been happening.
Animesh has been sending this out fairly regularly.
On 8/19/13 2:01 PM, "Mike Tutkowski" wrote:
I think we need some kind of process in place to monitor the number of
critical and blocker defects over the course of the release and make sure
they're tapering off as we approach an RC build.
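One way to keep an eye on that trend is to query the same JIRA instance behind the release dashboard mentioned above. The sketch below is purely illustrative, not an existing tool: it assumes anonymous read access to issues.apache.org, and the JQL (project, fixVersion, priority names) is an assumption you would need to adjust.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative only: counts unresolved blocker/critical issues for a fix version
// via JIRA's REST search API. The JQL below is an assumption, not a saved filter.
public class BlockerTrend {
    public static void main(String[] args) throws Exception {
        String jql = "project = CLOUDSTACK AND fixVersion = '4.2.0' "
                + "AND priority in (Blocker, Critical) AND resolution = Unresolved";
        String url = "https://issues.apache.org/jira/rest/api/2/search?maxResults=0&jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8.name());

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // maxResults=0 returns no issue bodies but still includes a "total" field;
        // pull it out with a plain string search to keep this sketch dependency-free.
        String json = body.toString();
        int idx = json.indexOf("\"total\":");
        if (idx < 0) {
            System.err.println("Unexpected response: " + json);
            return;
        }
        String total = json.substring(idx + 8).split("[,}]")[0].trim();
        System.out.println("Open blocker/critical issues: " + total);
    }
}

Run daily, the printed count gives exactly the tapering signal being discussed here.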
My main comment about this release (this is the first CS release that I've
participated in) is that I've
"it is" meaning "premature"
On Mon, Aug 19, 2013 at 2:49 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
> I still believe it is, but the VMware "problem" was my fault - I must not
> have had nonoss on b/c it works now. :)
On Mon, Aug 19, 2013 at 2:44 PM, Chip Childers wrote:
On Mon, Aug 19, 2013 at 02:33:33PM -0600, Mike Tutkowski wrote:
Looks like we have serious issues as of today in 4.2 (I've noticed
DevCloud2 not working and VMware support is broken) (aside from plenty of
blocker defects still to be completed).
When do we start talking about how the 4.2 RC build is not going to happen
until the codebase calms down a bit? I hav
Good idea, Daan.
On Thu, Aug 15, 2013 at 2:17 PM, Daan Hoogland wrote:
Hugo has a local Jenkins that runs as code gets checked in at Apache.
You could have your SolidFire SAN base install be updated by a trigger
on commits in the central git at Apache's, and then run your
regression tests on it.
Hope that helps,
Daan
On Thu, Aug 15, 2013 at 9:58 PM, Mike Tutkowski wrote:
I plan to write automated system tests, but I don't think there's any way
for me to check them in and get use out of them because they require a
SolidFire SAN. In other words, I guess I will just have to keep them local
(as in just something I run).
So far, I have manual regression tests I run and
Hi Mike,
Make sure you write plenty of unit tests for whatever methods you need
for the features you are implementing. You cannot overdo it for a while.
Coverage is behind quite a bit and I am not a betting man, so I bet there
is going to be a 4.2.1.
In short: I share your concern.
Daan
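Daan's point is worth illustrating: even when a feature needs a real SAN, the pure helper methods around it can be unit tested on any machine. A minimal JUnit 4 sketch along those lines follows; CapacityUtils and gbToBytes are hypothetical names used only for illustration, not existing CloudStack classes.

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical helper under test -- not an existing CloudStack class.
class CapacityUtils {
    // Convert a user-supplied size in GB to bytes, rejecting non-positive input.
    static long gbToBytes(long gb) {
        if (gb <= 0) {
            throw new IllegalArgumentException("size in GB must be positive: " + gb);
        }
        return gb * 1024L * 1024L * 1024L;
    }
}

public class CapacityUtilsTest {
    @Test
    public void convertsWholeGigabytes() {
        assertEquals(1073741824L, CapacityUtils.gbToBytes(1));
        assertEquals(10L * 1073741824L, CapacityUtils.gbToBytes(10));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNonPositiveSizes() {
        CapacityUtils.gbToBytes(0);
    }
}

Tests like these need no SolidFire hardware, so a local Jenkins job (as suggested earlier in the thread) could run them on every commit.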
Mike,
I have a similar concern as well.
We have regression and BVT tests running daily on the 4.2 branch locally, besides
manual validation of the defects and impacted areas. Even though we would not
be able to catch all the issues, several regressions were caught and you can
see the defects by usi