[GitHub] cloudstack pull request: Update ConsoleProxyPasswordBasedEncryptor...

2015-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/10


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: Update ConsoleProxyPasswordBasedEncryptor...

2015-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/11




[GitHub] cloudstack pull request: removed unused static main in ConsoleProx...

2015-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/176




Re: Downloading pom from ceph.com fails

2015-04-17 Thread Gaurav Aradhye
Thanks.. build on master succeeds now.

Regards,
Gaurav Aradhye

On Apr 16, 2015, at 10:06 PM, Nux!  wrote:

> No need, Rohit did the master afaik.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Gaurav Aradhye" 
>> To: dev@cloudstack.apache.org
>> Sent: Thursday, 16 April, 2015 14:51:23
>> Subject: Re: Downloading pom from ceph.com fails
> 
>> Thanks nux! Should it be cherry picked to master also? I have observed
>> failure on master too.
>> On Apr 16, 2015 7:13 PM, "Nux!"  wrote:
>> 
>>> Ok, did a pull request for 4.4 branch.
>>> 
>>> In the meanwhile EL6 RPMs here:
>>> http://tmp.nux.ro/acs443/el6/
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro
>>> 
>>> - Original Message -
 From: "Ian Southam" 
 To: dev@cloudstack.apache.org
 Sent: Thursday, 16 April, 2015 14:01:30
 Subject: Re: Downloading pom from ceph.com fails
>>> 
 Hi,
 
 Probably a good idea to commit but I confess it is a “works on my
>>> laptop” change
 ;).
 
 —
 Ian
 
 On 16 Apr 2015, at 14:41, Gaurav Aradhye 
>>> wrote:
 
> Changed subject to not spam original post.
> 
> I encountered this issue in building latest master also. Ian, should
>>> this change
> be committed?
> 
> Regards,
> Gaurav Aradhye
> 
> On Apr 16, 2015, at 6:07 PM, Ian Southam 
>>> wrote:
> 
>> Change ceph.com to eu.ceph.com in ./plugins/hypervisors/kvm/pom.xml
>>> then it will
>> compile again.
>> 
>> —
>> Grts!
>> Ian
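[Editor's aside] Ian's fix above (switching ceph.com to eu.ceph.com in ./plugins/hypervisors/kvm/pom.xml) boils down to a one-string substitution; the failing http://ceph.com/maven repository URL appears later in this thread. A throwaway sketch of applying it, assuming the URL occurs literally in that pom (the actual committed patch may differ):

    from pathlib import Path

    # Hypothetical helper: point the KVM plugin's libvirt Maven lookup at the
    # eu.ceph.com mirror instead of the unreachable ceph.com one, as suggested
    # in the quoted mail above. Assumes the URL appears verbatim in the pom.
    pom = Path("plugins/hypervisors/kvm/pom.xml")
    pom.write_text(pom.read_text().replace("http://ceph.com/maven", "http://eu.ceph.com/maven"))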
>> 
>> On 16 Apr 2015, at 10:37, Nux!  wrote:
>> 
>>> Looks like there is a pom here
>>> http://repo1.maven.org/maven2/com/github/K0zka/libvirt/0.5.1/
>>> 
>>> What file needs to be modified to point the build process there?
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro
>>> 
>>> - Original Message -
 From: "Nux!" 
 To: dev@cloudstack.apache.org
 Sent: Thursday, 16 April, 2015 09:30:24
 Subject: Re: [VOTE] Apache Cloudstack 4.4.3
>>> 
 Ok, I can't even build it, it stops at:
 
 [INFO]
>>> 
 [INFO] Building Apache CloudStack Plugin - Hypervisor KVM 4.4.3
 [INFO]
>>> 
 Downloading:
 
>>> http://libvirt.org/maven2/org/libvirt/libvirt/0.5.1/libvirt-0.5.1.pom
 Downloading:
>>> http://ceph.com/maven/org/libvirt/libvirt/0.5.1/libvirt-0.5.1.pom
 
 
 Apparently those URLs do not work.
 
 --
 Sent from the Delta quadrant using Borg technology!
 
 Nux!
 www.nux.ro
 
 - Original Message -
> From: "Nux!" 
> To: dev@cloudstack.apache.org
> Sent: Thursday, 16 April, 2015 09:00:50
> Subject: Re: [VOTE] Apache Cloudstack 4.4.3
 
> https://dist.apache.org/repos/dist/dev/cloudstack/4.4/ does not
>>> exist.
> 
> I guess the valid one is either
> https://dist.apache.org/repos/dist/dev/cloudstack/4.4.3/ OR simply
> 
>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=snapshot;h=e9441d47867104505ef260c1857549f93df96aba;sf=tgz
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Daan Hoogland" 
>> To: "dev" 
>> Sent: Wednesday, 15 April, 2015 23:02:55
>> Subject: [VOTE] Apache Cloudstack 4.4.3
> 
>> Hi All,
>> 
>> I've created a 4.4.3 release, with the following artifacts up for
>>> a vote:
>> 
>> Git Branch and Commit SH:
>> 
>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.4
>> Commit: e9441d47867104505ef260c1857549f93df96aba
>> 
>> List of changes:
>> 
>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.4
>> https://issues.apache.org/jira/issues/?filter=12330007
>> 
>> Source release (checksums and signatures are available at the same
>> location):
>> https://dist.apache.org/repos/dist/dev/cloudstack/4.4/
>> 
>> PGP release keys (signed using 2048D/5AABEBEA):
>> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>> 
>> Vote will be open for 72 hours.
>> 
>> For sanity in tallying the vote, can PMC members please be sure to
>> indicate "(binding)" with their vote?
>> 
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  

RE: [DISCUSS] 4.6 release management

2015-04-17 Thread Raja Pullela
+1 for the "Some people (I'm part of them) are concerned about our current way of 
supporting and backporting fixes to multiple releases"
This should be a top priority along with keeping master stable - make sure BVTs 
are passing at 100% all the time.
Also if we can plan/target increasing test/BVT coverage, that will be super!

Thanks,
Raja
-Original Message-
From: Marcus [mailto:shadow...@gmail.com] 
Sent: Friday, April 17, 2015 4:35 AM
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS] 4.6 release management

"storage plugin involve changes on Hypervisor code"

I know this is just an example, but at least on the KVM side this is no longer 
true. Previously you had to implement a KVM-specific 'StorageAdaptor' that 
would run on the hypervisor, and register that with the agent code, but Mike 
and I added some reflection/annotation that allows for auto-detection of the 
adaptor upon agent startup, so storage plugins can be completely 
self-contained now. They don't even have to be a part of our code base.

There may be other parts of the code where we can do similar things to decouple 
if we can identify those points. Ideally, if someone has to modify core code 
to add their plugin it should only be because they are adding some new 
functionality *that core cloudstack needs to be aware of*, and that 
functionality should be added in a way that other plugins can also 
provide/implement it. Otherwise, they can always add new APIs specific to their 
appliance or product and leverage data from cloudstack's db, all via plugin. 
They can add new global/zone/cluster configs and UI tools via plugin as well.

On Thu, Apr 16, 2015 at 3:49 PM, Pierre-Luc Dion  wrote:
> Today during the CloudStack Days we did a round table about release
> management targeting the next 4.6 releases.
>
>
> Quick bullet points from the discussion:
>
> Ideas to change release planning:
>
>    - Plugin contribution is complicated because often a new plugin involves
>    changes to the core:
>       - e.g., a storage plugin involves changes to hypervisor code
>    - There is an idea of going to a 2-week release model, which could
>    introduce issues with the database schema.
>    - The database schema version should be different from the application
>    version.
>    - There is a will to enforce a git workflow in 4.6 and trigger a simulator
>    job on pull requests.
>    - Some people (I'm part of them) are concerned about our current way of
>    supporting and backporting fixes to multiple releases (4.3.x, 4.4.x,
>    4.5.x). But the current level of confidence in the latest release is low,
>    so that needs to be improved.
>
>
> So, the main message is that we'd like to improve the release
> velocity and release branch stability, so we would like to propose a
> few changes in the way we would add code to the 4.6 branch, as follows:
>
> - All new contributions to 4.6 would be through a pull request or merge
> request, which would trigger a simulator job; ideally, only if that
> passes would the PR be accepted and automatically merged. At this time,
> I think we pretty much have everything in place to do that. As a first
> step we would use
> simulator+marvin jobs, then improve test coverage from there.
>
> Please comment :-)


[GitHub] cloudstack pull request: removed unused static main in ConsoleProx...

2015-04-17 Thread karuturi
GitHub user karuturi opened a pull request:

https://github.com/apache/cloudstack/pull/176

removed unused static main in ConsoleProxyPasswordBasedEncryptor

This closes #11
This closes #10

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/karuturi/cloudstack cppbe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/176.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #176


commit c03228bf07a55fc91bd133298bdb84c6a71a74c2
Author: Rajani Karuturi 
Date:   2015-04-17T04:24:03Z

removed unused static main in ConsoleProxyPasswordBasedEncryptor

This closes #11
This closes #10






RE: OVM3 test failures

2015-04-17 Thread Raja Pullela
Cool, thank you for the update! Will check this out in the morning. 

-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] 
Sent: Friday, April 17, 2015 2:10 AM
To: Funs Kessen
Cc: Raja Pullela; dev@cloudstack.apache.org
Subject: Re: OVM3 test failures

Funs, so you were not jumping the gun. The problem seems to me to be in the 
jenkins job in the context of the marvin module. The tests from
ovm3 themselves do pass. I did some tinkering with jenkins.bac.o and will keep 
at it to get the red removed from the report page. I saw similar problems in 
jenkins.bac.o and am addressing those now. I will do the builds.a.o job next.

On Thu, Apr 16, 2015 at 3:08 PM, Funs Kessen  wrote:
> Daan,
>
> I can confirm that a fresh checkout of master and a build of marvin 
> after the apidoc works for me too.
>
> Cheers,
>
> Funs
>
>> On 16 Apr 2015, at 21:46, Funs Kessen  wrote:
>>
>> Daan,
>>
>> I did the same, but noticed it’s the cloudstack-marvin plugin, so am 
>> doing it again to figure out what’s going on there to see if I didn’t jump 
>> the gun with my comment.
>>
>> Cheers,
>>
>> Funs
>>
>>> On 16 Apr 2015, at 21:35, Daan Hoogland  wrote:
>>>
>>> Funs,Raja,
>>>
>>> I did some more investigation. The issue indeed has no relation to the 
>>> change mentioned, but neither to the latest commits. It 
>>> runs on my laptop (TM), so I suspect a problem on the jenkins slave 
>>> or a false assumption about the slaves in the tests. We'll need to 
>>> investigate further. I looked at other marvin jobs. At first glance 
>>> they all seem to fail. (4.4, 4.5 etc.)
>>>
>>> On Thu, Apr 16, 2015 at 1:59 PM, Funs Kessen  wrote:
 Hi Raja,

 It seems there is no relation between what Daan was talking about and this 
 problem as far as I can see.

 If you revert your last commit I suspect that everything is ok, so we need 
 to figure out what your commit triggers that causes it to fail, I guess, as 
 prior to that things seemed to work (tm).

 Cheers

 Funs

 PS: Are you at CloudStack Days in Austin by any chance? Then I could help you 
 figure out what's going on.

> On 16 Apr 2015, at 20:38, Raja Pullela  wrote:
>
> the builds are failing due to ovm3 audit failures.
> https://builds.apache.org/job/cloudstack-marvin/1723/console
>
> Daan, not sure if the change you were planning to do will address this 
> issue?
>
> Raj
> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> Sent: Friday, March 27, 2015 7:47 PM
> To: Roger Crerie
> Cc: dev@cloudstack.apache.org; Funs Kessen
> Subject: Re: OVM3 test failures
>
> Shame on me, oh the public humiliation. will fix and submit, 
> thanks
>
> On Fri, Mar 27, 2015 at 3:07 PM, Roger Crerie  
> wrote:
>> Tests passed after I fixed the audit failures.  Good on you :).
>>
>> Roger
>>
>> -Original Message-
>> From: Roger Crerie [mailto:roger.cre...@hds.com]
>> Sent: Friday, March 27, 2015 9:51 AM
>> To: Daan Hoogland
>> Cc: Funs Kessen; dev
>> Subject: RE: OVM3 test failures
>>
>> I'm getting audit failures now.  See attached text file.  I'll fix them 
>> in code and run again but wanted to alert you to this.
>>
>> Roger
>>
>> -Original Message-
>> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
>> Sent: Friday, March 27, 2015 9:48 AM
>> To: Roger Crerie
>> Cc: Funs Kessen; dev
>> Subject: Re: OVM3 test failures
>>
>> You should be able to do 'git pull' in your working dir, from the shell.
>>
>> On Fri, Mar 27, 2015 at 2:38 PM, Roger Crerie  
>> wrote:
>>> Having never pulled anything from cloudstack but the master how would I 
>>> go about getting this fix?
>>>
>>> Roger
>>>
>>> -Original Message-
>>> From: Funs Kessen [mailto:fozzielumpk...@gmail.com] On Behalf Of 
>>> Funs Kessen
>>> Sent: Friday, March 27, 2015 7:21 AM
>>> To: Daan Hoogland
>>> Cc: Roger Crerie; dev
>>> Subject: Re: OVM3 test failures
>>>
>>> Nah you are smart enough, it’s just that I was lazy and did a search 
>>> and replace and then noticed I broke something and fixed it with 
>>> another commit.
>>>
>>> thanks!
>>>
 On 27 Mar 2015, at 12:12, Daan Hoogland  
 wrote:

 /me not being smart enough to use pull requests.

 I am pulling it now.

 On Fri, Mar 27, 2015 at 12:04 PM, Funs Kessen  wrote:
> Hi Daan,
>
>> On 27 Mar 2015, at 11:58, Daan Hoogland  
>> wrote:
>>
>> Funs, I commented on it, I think it has a typo in it.
>> Roger, can you apply that patch and test (after looking at my 
>> comment)?
>
> I replied to you, that’s why the pull request contains two 
> commit I

Build failed in Jenkins: simulator-singlerun #1119

2015-04-17 Thread jenkins
See 

Changes:

[Rajani Karuturi] removed unused static main in 
ConsoleProxyPasswordBasedEncryptor

--
[...truncated 10480 lines...]
> Initializing database=cloudbridge with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `cloudbridge`
> Running query: create database `cloudbridge`
> Running query: GRANT ALL ON cloudbridge.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON cloudbridge.* to 'cloud'@`%` 
identified by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.6.0-SNAPSHOT/cloud-developer-4.6.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 2:02.280s
[INFO] Finished at: Fri Apr 17 03:07:36 EDT 2015
[INFO] Final Memory: 48M/217M
[INFO] 
[WARNING] The requested profile "simulator" could not be activated because it 
does not exist.
[simulator-singlerun] $ mvn -P developer -pl developer -Ddeploydb-simulator
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache CloudStack Developer Mode 4.6.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] --- properties-maven-plugin:1.0-alpha-2:read-project-properties 
(default) @ cloud-developer ---
[WARNING] Ignoring missing properties file: 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.3:process (default) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (default) @ cloud-developer ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] >>> exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer >>>
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] <<< exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer <<<
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer ---
log4j:WARN No appenders could be found for logger 
(org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
==

Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Sebastien Goasguen

> On Apr 17, 2015, at 12:49 AM, Pierre-Luc Dion  wrote:
> 
> Today during the CloudStack Days we did a round table about release
> management targeting the next 4.6 releases.
> 
> 
> Quick bullet points from the discussion:
> 
> Ideas to change release planning:
> 
>   - Plugin contribution is complicated because often a new plugin involves
>   changes to the core:
>      - e.g., a storage plugin involves changes to hypervisor code
>   - There is an idea of going to a 2-week release model, which could
>   introduce issues with the database schema.
>   - The database schema version should be different from the application
>   version.
>   - There is a will to enforce a git workflow in 4.6 and trigger a simulator
>   job on pull requests.
>   - Some people (I'm part of them) are concerned about our current way of
>   supporting and backporting fixes to multiple releases (4.3.x, 4.4.x,
>   4.5.x). But the current level of confidence in the latest release is low,
>   so that needs to be improved.
> 
> 
> So, the main message is that we'd like to improve the release velocity, and
> release branch stability, so we would like to propose a few changes in the
> way we would add code to the 4.6 branch, as follows:
> 
> - All new contributions to 4.6 would be through a pull request or merge request,
> which would trigger a simulator job; ideally, only if that passes would the PR
> be accepted and automatically merged. At this time, I think we pretty much
> have everything in place to do that. As a first step we would use
> simulator+marvin jobs, then improve test coverage from there.

+1

We do need to realize what this means and be all fine with it.

It means that if someone who is not the RM directly commits to the release branch, 
the commit will be reverted.
And that from the beginning of the branching…

IMHO, this would be a good step, but I don’t think it goes far enough.

This still uses a paradigm where a release is made from a release branch that 
was started from an unstable development branch.
Hence you still need *extensive* QA.

If we truly want to release faster, we need to release from the same QA’d 
branch time after time… a release needs to be based on a previous release.

Basically, we need a rolling release cycle. That will have the added benefit of 
not leaving releases behind and not having to focus on backporting.

> 
> Please comment :-)



Re: Review Request 21435: Fix for https://issues.apache.org/jira/browse/CLOUDSTACK-6610

2015-04-17 Thread Sebastien Goasguen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21435/#review80438
---

Ship it!


testing, discard

- Sebastien Goasguen


On May 14, 2014, 2:33 p.m., Gustavo Nery wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/21435/
> ---
> 
> (Updated May 14, 2014, 2:33 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The HypervisorTemplateAdapter publishes the event on the event bus with 
> the entityType and entityUUID as null. Without this information, the listener 
> on the event bus can't get the template that was deleted.
> 
> Just modified HypervisorTemplateAdapter.java to pass the right params to 
> the UsageEventUtils.publishUsageEvent method.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/template/HypervisorTemplateAdapter.java 71eac66 
> 
> Diff: https://reviews.apache.org/r/21435/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Gustavo Nery
> 
>



Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Sebastien Goasguen

> On Apr 17, 2015, at 6:26 AM, Raja Pullela  wrote:
> 
> +1 for the "Some people (I'm part of them) are concerned about our current way 
> of supporting and backporting fixes to multiple releases"
> This should be a top priority along with keeping master stable - make sure 
> BVTs are passing at 100% all the time.

Raja, which BVT are you talking about? AFAIK, all current tests run on all 
commits through Travis.

> Also if we can plan/target increasing test/BVT coverage, that will be super!
> 
> Thanks,
> Raja
> -Original Message-
> From: Marcus [mailto:shadow...@gmail.com] 
> Sent: Friday, April 17, 2015 4:35 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] 4.6 release management
> 
> "storage plugin involve changes on Hypervisor code"
> 
> I know this is just an example, but at least on the KVM side this is no longer 
> true. Previously you had to implement a KVM-specific 'StorageAdaptor' that 
> would run on the hypervisor, and register that with the agent code, but Mike 
> and I added some reflection/annotation that allows for auto-detection of the 
> adaptor upon agent startup, so storage plugins can be completely 
> self-contained now. They don't even have to be a part of our code base.
> 
> There may be other parts of the code where we can do similar things to 
> decouple if we can identify those points. Ideally, if someone has to modify 
> core code to add their plugin it should only be because they are adding some 
> new functionality *that core cloudstack needs to be aware of*, and that 
> functionality should be added in a way that other plugins can also 
> provide/implement it. Otherwise, they can always add new APIs specific to 
> their appliance or product and leverage data from cloudstack's db, all via 
> plugin. They can add new global/zone/cluster configs and UI tools via plugin 
> as well.
> 
> On Thu, Apr 16, 2015 at 3:49 PM, Pierre-Luc Dion  wrote:
>> Today during the CloudStack Days we did a round table about release 
>> management targeting the next 4.6 releases.
>> 
>> 
>> Quick bullet points from the discussion:
>> 
>> Ideas to change release planning:
>> 
>>   - Plugin contribution is complicated because often a new plugin involves
>>   changes to the core:
>>      - e.g., a storage plugin involves changes to hypervisor code
>>   - There is an idea of going to a 2-week release model, which could
>>   introduce issues with the database schema.
>>   - The database schema version should be different from the application
>>   version.
>>   - There is a will to enforce a git workflow in 4.6 and trigger a simulator
>>   job on pull requests.
>>   - Some people (I'm part of them) are concerned about our current way of
>>   supporting and backporting fixes to multiple releases (4.3.x, 4.4.x,
>>   4.5.x). But the current level of confidence in the latest release is low,
>>   so that needs to be improved.
>> 
>> 
>> So, the main message is that we'd like to improve the release 
>> velocity and release branch stability, so we would like to propose a 
>> few changes in the way we would add code to the 4.6 branch, as follows:
>> 
>> - All new contributions to 4.6 would be through a pull request or merge 
>> request, which would trigger a simulator job; ideally, only if that 
>> passes would the PR be accepted and automatically merged. At this time, 
>> I think we pretty much have everything in place to do that. As a first 
>> step we would use
>> simulator+marvin jobs, then improve test coverage from there.
>> 
>> Please comment :-)



[ANNOUNCE] No More Review Board (RB)

2015-04-17 Thread Sebastien Goasguen
Morning All,

After discussion [1], we decided to stop using Review Board to receive 
contributions from non-committers.
Starting now, we accept contributions only through GitHub pull requests (even 
though you can also attach a patch to a JIRA ticket).

A large number of projects use GitHub and we believe it is easier to submit new 
contributions that way, as well as to enter reviews and give the patches more 
visibility.

The website contribution instructions have been updated [2] and a complete 
step-by-step guide is available [3].

Cheers,

PS: The left-over reviews in RB are still visible but are in a discarded state.

-Sebastien

[1] http://markmail.org/thread/vufdbvquiijsbrgz
[2] http://cloudstack.apache.org/developers.html
[3] https://github.com/apache/cloudstack/blob/master/CONTRIBUTING.md

[GitHub] cloudstack pull request: CLOUDSTACK-8366 Add more Test cases for v...

2015-04-17 Thread abhinavroy02
Github user abhinavroy02 commented on the pull request:

https://github.com/apache/cloudstack/pull/147#issuecomment-93946446
  
The new pull request has changes in 3 files:
1. testpath_storage_migration.py
In this file I have added tests for migration within and across VMFS- and 
NFS-type primary storages,
support for both Windows and Linux VMs,
host maintenance and storage maintenance tests,
and some negative tests, as well as tests for migration across cluster-wide and 
zone-wide storages.

2. base.py
This file has two new getState wrappers, one each for the StoragePool class 
and the Host class.

3. utils.py
This file has a modified function, restart_mgmt_server. We can use this 
function if our test requires 
a management server restart.
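As a usage illustration only (the helper below is hypothetical, not the actual Marvin API; getState and restart_mgmt_server are only described, not shown, in the comment above), a test could poll such a state wrapper instead of hard-coding sleeps:

    import time

    # Hypothetical sketch: poll a state wrapper such as the new getState
    # helpers (one for the StoragePool class, one for the Host class, per the
    # comment above) until the resource reaches the wanted state or a timeout
    # expires. get_state is any callable returning the current state string;
    # it is not the real Marvin signature.
    def wait_for_state(get_state, wanted, timeout=300, interval=10):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_state() == wanted:
                return True
            time.sleep(interval)
        return False

    # e.g. (hypothetical): wait_for_state(lambda: Host.getState(apiclient, host_id), "Up")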




[GitHub] cloudstack pull request: modified travis.yml added sudo: required

2015-04-17 Thread karuturi
GitHub user karuturi opened a pull request:

https://github.com/apache/cloudstack/pull/177

modified travis.yml added sudo: required

Travis by default uses container-based infra, which does not allow sudo.

See http://docs.travis-ci.com/user/workers/container-based-infrastructure/
for details.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/karuturi/cloudstack travis

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/177.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #177


commit cb1695b5c40486f74419b56917c274929786ed8f
Author: Rajani Karuturi 
Date:   2015-04-17T06:12:18Z

modified travis.yml added sudo: required

Travis by default uses container-based infra, which does not allow sudo.

See http://docs.travis-ci.com/user/workers/container-based-infrastructure/
for details.






[GitHub] cloudstack pull request: RFC: improve iptables persistent on VR

2015-04-17 Thread resmo
GitHub user resmo opened a pull request:

https://github.com/apache/cloudstack/pull/178

RFC: improve iptables persistent on VR

Iptables rules are loaded by the `iptables-persistent` service during boot.  
So the first try was to save them where `iptables-persistent` reads them: 
/etc/iptables/rules.v4 / .v6.

The problem was that the `cloud-early-config` service resets 
/etc/iptables/rules.v4 / .v6 to the setup state. So even if you saved the iptables 
rules, they would be overwritten during boot.

That is why a fix was made in 2fad87d to work around the problem.

I reverted the workaround and made sure /etc/iptables/rules.v4 / .v6 won't 
get overwritten by `cloud-early-config`.

Signed-off-by: Rene Moser 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/resmo/cloudstack fix/iptables-persistent

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/178.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #178


commit 1740c15b6b212318c8dccd0db87c273d845883ef
Author: Rene Moser 
Date:   2015-04-17T09:17:11Z

Revert "Make the routers persistent"

This reverts commit 2fad87d3f3fec380ba5d595ee95f5caa88b37ee8.

commit cc2fc0a63fd496b816a3540463903ba21988f9d6
Author: Rene Moser 
Date:   2015-04-17T09:37:43Z

make iptables persistent on VR

Iptables rules are loaded by the `iptables-persistent` service during boot.  
So the first try was to save them where `iptables-persistent` reads them: 
/etc/iptables/rules.v4 / .v6.

The problem was that the `cloud-early-config` service resets 
/etc/iptables/rules.v4 / .v6 to the setup state. So even if you saved the iptables 
rules, they would be overwritten during boot.

That is why a fix was made in 2fad87d3f3fec380ba5d595ee95f5caa88b37ee8 to 
work around the problem.

I reverted the workaround and made sure /etc/iptables/rules.v4 / .v6 won't 
get overwritten by `cloud-early-config`.

Signed-off-by: Rene Moser 






Build failed in Jenkins: simulator-singlerun #1120

2015-04-17 Thread jenkins
See 

--
[...truncated 10502 lines...]
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
> WARNING: Provided file does not exist: 

> Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `simulator`
> Running query: create database `simulator`
> Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.6.0-SNAPSHOT/cloud-developer-4.6.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 17.136s
[INFO] Finished at: Fri Apr 17 05:59:08 EDT 2015
[INFO] Final Memory: 45M/207M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson8225810946323218159.sh
+ jps -l
+ grep -q Launcher
+ rm -f xunit.xml
+ echo ''
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=17463
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
+ '[' 1 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=2
+ '[' 2 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=3
+ '[' 3 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=4
+ '[' 4 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=5
+ '[' 5 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=6
+ '[' 6 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=7
+ '[' 7 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=8
+ '[' 8 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=9
+ '[' 9 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=10
+ '[' 10 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=11
+ '[' 11 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=12
+ '[' 12 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=13
+ '[' 13 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=14
+ '[' 14 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=15
+ '[' 15 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=16
+ '[' 16 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=17
+ '[' 17 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=18
+ '[' 18 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=19
+ '[' 19 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=20
+ '[' 20 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=21
+ '[' 21 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=22
+ '[' 22 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-outpu

Re: [ANNOUNCE] New committer: Gaurav Nandkumar Aradhye

2015-04-17 Thread Gaurav Aradhye
Thanks Giles, Nux!

Regards,
Gaurav Aradhye

On Apr 16, 2015, at 6:49 PM, Nux!  wrote:

> Congrats :)
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Daan Hoogland" 
>> To: "dev" 
>> Sent: Monday, 13 April, 2015 20:43:30
>> Subject: [ANNOUNCE] New committer: Gaurav Nandkumar Aradhye
> 
>> The Project Management Committee (PMC) for Apache CloudStack
>> has asked Gaurav Aradhye to become a committer and we are pleased to
>> announce that they have accepted.
>> 
>> Being a committer allows many contributors to contribute more
>> autonomously. For developers, it makes it easier to submit changes and
>> eliminates the need to have contributions reviewed via the patch
>> submission process. Whether contributions are development-related or
>> otherwise, it is a recognition of a contributor's participation in the
>> project and commitment to the project and the Apache Way.
>> 
>> Please join me in congratulating Gaurav
>> 
>> --
>> Daan
>> on behalf of the CloudStack PMC



[GitHub] cloudstack pull request: modified travis.yml added sudo: required

2015-04-17 Thread imduffy15
Github user imduffy15 commented on the pull request:

https://github.com/apache/cloudstack/pull/177#issuecomment-93969318
  
Looks good to me @karuturi 




[GitHub] cloudstack pull request: modified travis.yml added sudo: required

2015-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/177




[GitHub] cloudstack pull request: modified travis.yml added sudo: required

2015-04-17 Thread karuturi
Github user karuturi commented on the pull request:

https://github.com/apache/cloudstack/pull/177#issuecomment-93970920
  
Thanks @imduffy15. merged it.




Build failed in Jenkins: simulator-singlerun #1122

2015-04-17 Thread jenkins
See 

--
[...truncated 10795 lines...]
from marvin.lib.common import (get_domain,
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 25, in 
from marvin.lib.common import *
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 24, in 
from marvin.lib.common import (get_zone,
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 34, in 
from marvin.lib.common import (get_domain,
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 21, in 
from marvin.lib.common import setNonContiguousVlanIds, get_zone
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 24, in 
from marvin.lib.common import *
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 23, in 
from marvin.lib.common import *
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_https_context = ssl._create_unverified_context
AttributeError: 'module' object has no attribute '_create_unverified_context'
Traceback (most recent call last):
  File 
"
 line 25, in 
from marvin.lib.common import *
  File 
"
 line 91, in 
from marvin.lib.vcenter import Vcenter
  File 
"
 line 22, in 
ssl._create_default_
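The repeated AttributeError above comes from marvin/lib/vcenter.py assigning ssl._create_unverified_context to ssl._create_default_https_context; that private helper was only added in Python 2.7.9, so older interpreters fail at import time. A minimal defensive sketch, assuming the assignment can simply be skipped where the helper is absent (not necessarily the fix that was applied):

    import ssl

    # Only disable certificate verification when the interpreter actually
    # provides the private helper; on Pythons older than 2.7.9 the attribute
    # is missing, which is exactly the AttributeError seen in the log above.
    if hasattr(ssl, "_create_unverified_context"):
        ssl._create_default_https_context = ssl._create_unverified_context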

Re: [VOTE] Apache Cloudstack 4.4.3

2015-04-17 Thread Wilder Rodrigues
Hi all,

There goes my +1 for the 4.4.3 RC

See details about the tests below and the full report at the end of the email.

Environment:

XenServer 6.2 running under a VMware zone (inside our BetaCloud - ACS 4.4.2)
Management Server running from MacBook Pro
MySql on MacBook Pro

Datacenter:

Zone type: Advanced
Storage type: NFS Local Storage
Virtual Networking: VirtualRouter
Isolation method: VLAN

Integration tests for VLAN physical network: 

Repository: https://github.com/wilderrodrigues/cloudstack_integration_tests

test_accounts.py
test_reset_vm_on_reboot.py
test_routers.py
test_service_offerings.py
test_vm_life_cycle.py
test_vpc_routers.py
test_vpc_vpn.py
test_privategw_acl.py 



Zone type: Advanced
Storage type: NFS Local Storage
Virtual Networking: OVS
Isolation method: GRE

Integration tests for GRE physical network:

Deploy a datacenter with Isolation method equals to OVS
Create Network Offering using OVS
Create Isolated Networking using OVS offering
Create VM using the OVS isolated networking
Acquire Pub IP / Load Balancing / Firewalling / SSH
Reboot VM
Destroy VM
Reboot router
Destroy router

I will also test some basic zone stuff now.

Cheers,
Wilder


Test Create Account and user for that account ... === TestName: 
test_01_create_account | Status : SUCCESS ===
ok
Test Sub domain allowed to launch VM  when a Domain level zone is created ... 
=== TestName: test_01_add_vm_to_subdomain | Status : SUCCESS ===
ok
Test delete domain without force option ... === TestName: test_DeleteDomain | 
Status : SUCCESS ===
ok
Test delete domain with force option ... === TestName: test_forceDeleteDomain | 
Status : SUCCESS ===
ok
Test update admin details ... === TestName: test_updateAdminDetails | Status : 
SUCCESS ===
ok
Test update domain admin details ... === TestName: 
test_updateDomainAdminDetails | Status : SUCCESS ===
ok
Test user update API ... === TestName: test_updateUserDetails | Status : 
SUCCESS ===
ok
Test login API with domain ... === TestName: test_LoginApiDomain | Status : 
SUCCESS ===
ok
Test if Login API does not return UUID's ... === TestName: 
test_LoginApiUuidResponse | Status : SUCCESS ===
ok

--
Ran 9 tests in 1363.478s

OK
/tmp//MarvinLogs/test_accounts_13YHHM/results.txt

Test reset virtual machine on reboot ... === TestName: 
test_01_reset_vm_on_reboot | Status : SUCCESS ===
ok

--
Ran 1 test in 267.857s

OK
/tmp//MarvinLogs/test_reset_vm_on_reboot_MF6YHR/results.txt

Test router internal advanced zone ... SKIP: Marvin configuration has no host 
credentials to check router services
Test restart network ... === TestName: test_03_restart_network_cleanup | Status 
: SUCCESS ===
ok
Test router basic setup ... === TestName: test_05_router_basic | Status : 
SUCCESS ===
ok
Test router advanced setup ... === TestName: test_06_router_advanced | Status : 
SUCCESS ===
ok
Test stop router ... === TestName: test_07_stop_router | Status : SUCCESS ===
ok
Test start router ... === TestName: test_08_start_router | Status : SUCCESS ===
ok
Test reboot router ... === TestName: test_09_reboot_router | Status : SUCCESS 
===
ok

--
Ran 7 tests in 538.464s

OK (SKIP=1)
/tmp//MarvinLogs/test_routers_VA76LJ/results.txt

Test to create service offering ... === TestName: 
test_01_create_service_offering | Status : SUCCESS ===
ok
Test to update existing service offering ... === TestName: 
test_02_edit_service_offering | Status : SUCCESS ===
ok
Test to delete service offering ... === TestName: 
test_03_delete_service_offering | Status : SUCCESS ===
ok

--
Ran 3 tests in 227.704s

OK
/tmp//MarvinLogs/test_service_offerings_X2HT71/results.txt

Test advanced zone virtual router ... === TestName: test_advZoneVirtualRouter | 
Status : SUCCESS ===
ok
Test Deploy Virtual Machine ... === TestName: test_deploy_vm | Status : SUCCESS 
===
ok
Test Multiple Deploy Virtual Machine ... === TestName: test_deploy_vm_multiple 
| Status : SUCCESS ===
ok
Test Stop Virtual Machine ... === TestName: test_01_stop_vm | Status : SUCCESS 
===
ok
Test Start Virtual Machine ... === TestName: test_02_start_vm | Status : 
SUCCESS ===
ok
Test Reboot Virtual Machine ... === TestName: test_03_reboot_vm | Status : 
SUCCESS ===
ok
Test destroy Virtual Machine ... === TestName: test_06_destroy_vm | Status : 
SUCCESS ===
ok
Test recover Virtual Machine ... === TestName: test_07_restore_vm | Status : 
SUCCESS ===
ok
Test migrate VM ... SKIP: At least two hosts should be present in the zone for 
migration
Test destroy(expunge) Virtual Machine ... === TestName: test_09_expunge_vm | 
Status : SUCCESS ===
ok

--

Re: Support for SecurityGroup in OpenVSwitch mode in Xenserver

2015-04-17 Thread Jayapal Reddy Uradi
 Hi Suresh,

Basically, SG rules need bridge mode, so KVM also expects bridge mode.

Thanks,
Jayapal
 
On 17-Apr-2015, at 2:33 AM, Suresh Ramamurthy 

 wrote:

> Hi Jayapal,
> 
> Thanks a lot for the response.
> 
> From what you explained, looks like SG for KVM also expects Bridge module.
> Correct me if I am wrong.
> 
> Thanks,
> Suresh
> 
> 
> On Wed, Apr 15, 2015 at 11:36 PM, Jayapal Reddy Uradi <
> jayapalreddy.ur...@citrix.com> wrote:
> 
>> Hi Suresh,
>> 
>> Yes, security groups expect network mode 'bridge' for XenServer.
>> This is because the security group rules (iptables/ebtables) on the host
>> filter on the bridge interfaces.
>> 
>> Please look at how we can achieve host-level isolation of VM traffic for
>> SG using Open vSwitch.
>> 
>> Thanks,
>> Jayapal
>> 
>> On 16-Apr-2015, at 10:14 AM, Suresh Ramamurthy <
>> suresh.ramamur...@nuagenetworks.net>
>> wrote:
>> 
>>> Hi Security Group Experts,
>>> 
>>> I am trying to play with SecurityGroup in XenServer setup.
>>> 
>>> When I looked at the latest 4.5 code I found that the code expects Bridge
>>> module to be present in Xenserver.
>>> 
>>> Is that true? Is Security Group supported using OpenVSwitch in Xenserver?
>>> 
>>> Thanks,
>>> Suresh
>> 
>> 



Build failed in Jenkins: simulator-singlerun #1124

2015-04-17 Thread jenkins
See 

--
[...truncated 10471 lines...]
[INFO] 
[INFO] Building Apache CloudStack Developer Mode 4.6.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] --- properties-maven-plugin:1.0-alpha-2:read-project-properties 
(default) @ cloud-developer ---
[WARNING] Ignoring missing properties file: 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.3:process (default) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (default) @ cloud-developer ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] >>> exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer >>>
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] <<< exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer <<<
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer ---
log4j:WARN No appenders could be found for logger 
(org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
> WARNING: Provided file does not exist: 

> Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `simulator`
> Running query: create database `simulator`
> Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.6.0-SNAPSHOT/cloud-developer-4.6.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 14.867s
[INFO] Finished at: Fri Apr 17 09:41:28 EDT 2015
[INFO] Final Memory: 45M/195M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson7648071852505673035.sh
+ jps -l
+ grep -q Launcher
+ rm -f xunit.xml
+ echo ''
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=5801
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ sleep 5
+ COUNTER=1
+ '[' 1 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=2
+ '[' 2 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=3
+ '[' 3 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=4
+ '[' 4 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=5
+ '[' 5 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=6
+ '[' 6 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=7
+ '[' 7 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=8
+ '[' 8 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=9
+ '[' 9 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=10
+ '[' 10 -lt 44 ']'
+ grep -q 

Re: [VOTE] Apache Cloudstack 4.4.3

2015-04-17 Thread Wilder Rodrigues
Hi all,

Test results for Basic Zone... all working fine.

Environment:

XenServer 6.2 running under a VMware zone (inside our BetaCloud - ACS 4.4.2)
Management Server running from MacBook Pro
MySql on MacBook Pro

Datacenter:

Zone type: Basic
Storage type: NFS Local Storage
Virtual Networking: VirtualRouter
Isolation method: VLAN

Integration tests for VLAN physical network: 

Repository: https://github.com/wilderrodrigues/cloudstack_integration_tests

test_accounts.py
test_reset_vm_on_reboot.py
test_service_offerings.py
test_vm_life_cycle.py

Cheers,
Wilder

Test Create Account and user for that account ... === TestName: 
test_01_create_account | Status : SUCCESS ===
ok
Test Sub domain allowed to launch VM  when a Domain level zone is created ... 
=== TestName: test_01_add_vm_to_subdomain | Status : SUCCESS ===
ok
Test update admin details ... === TestName: test_updateAdminDetails | Status : 
SUCCESS ===
ok
Test update domain admin details ... === TestName: 
test_updateDomainAdminDetails | Status : SUCCESS ===
ok
Test user update API ... === TestName: test_updateUserDetails | Status : 
SUCCESS ===
ok
Test login API with domain ... === TestName: test_LoginApiDomain | Status : 
SUCCESS ===
ok
Test if Login API does not return UUID's ... === TestName: 
test_LoginApiUuidResponse | Status : SUCCESS ===
ok

--
Ran 7 tests in 107.123s

OK
/tmp//MarvinLogs/test_accounts_QU2PD3/results.txt

Test reset virtual machine on reboot ... === TestName: 
test_01_reset_vm_on_reboot | Status : SUCCESS ===
ok

--
Ran 1 test in 70.787s

OK
/tmp//MarvinLogs/test_reset_vm_on_reboot_B05BTQ/results.txt

Test to create service offering ... === TestName: 
test_01_create_service_offering | Status : SUCCESS ===
ok
Test to update existing service offering ... === TestName: 
test_02_edit_service_offering | Status : SUCCESS ===
ok
Test to delete service offering ... === TestName: 
test_03_delete_service_offering | Status : SUCCESS ===
ok

--
Ran 3 tests in 40.748s

OK
/tmp//MarvinLogs/test_service_offerings_AVIHBP/results.txt

Test Deploy Virtual Machine ... === TestName: test_deploy_vm | Status : SUCCESS 
===
ok
Test Multiple Deploy Virtual Machine ... === TestName: test_deploy_vm_multiple 
| Status : SUCCESS ===
ok
Test Stop Virtual Machine ... === TestName: test_01_stop_vm | Status : SUCCESS 
===
ok
Test Start Virtual Machine ... === TestName: test_02_start_vm | Status : 
SUCCESS ===
ok
Test Reboot Virtual Machine ... === TestName: test_03_reboot_vm | Status : 
SUCCESS ===
ok
Test destroy Virtual Machine ... === TestName: test_06_destroy_vm | Status : 
SUCCESS ===
ok
Test recover Virtual Machine ... === TestName: test_07_restore_vm | Status : 
SUCCESS ===
ok
Test migrate VM ... SKIP: At least two hosts should be present in the zone for 
migration
Test destroy(expunge) Virtual Machine ... === TestName: test_09_expunge_vm | 
Status : SUCCESS ===
ok

--
Ran 9 tests in 388.114s

OK (SKIP=1)
/tmp//MarvinLogs/test_vm_life_cycle_C29HL1/results.txt 



On 17 Apr 2015, at 14:56, Wilder Rodrigues  
wrote:

> Hi all,
> 
> There goes my +1 for the 4.4.3 RC
> 
> See details about the tests below and the full report at the end of the email.
> 
> Environment:
> 
> XenServer 6.2 running under a VMware zone (inside our BetaCloud - ACS 4.4.2)
> Management Server running from MacBook Pro
> MySql on MacBook Pro
> 
> Datacenter:
> 
> Zone type: Advanced
> Storage type: NFS Local Storage
> Virtual Networking: VirtualRouter
> Isolation method: VLAN
> 
> Integration tests for VLAN physical network: 
> 
> Repository: https://github.com/wilderrodrigues/cloudstack_integration_tests
> 
> test_accounts.py
> test_reset_vm_on_reboot.py
> test_routers.py
> test_service_offerings.py
> test_vm_life_cycle.py
> test_vpc_routers.py
> test_vpc_vpn.py
> test_privategw_acl.py 
> 
> 
> 
> Zone type: Advanced
> Storage type: NFS Local Storage
> Virtual Networking: OVS
> Isolation method: GRE
> 
> Integration tests for GRE physical network:
> 
> Deploy a datacenter with Isolation method equals to OVS
> Create Network Offering using OVS
> Create Isolated Networking using OVS offering
> Create VM using the OVS isolated networking
> Acquire Pub IP / Load Balancing / Firewalling / SSH
> Reboot VM
> Destroy VM
> Reboot router
> Destroy router
> 
> I will also test some basic zone stuff now.
> 
> Cheers,
> Wilder
> 
> 
> Test Create Account and user for that account ... === TestName: 
> test_01_create_account | Status : SUCCESS ===
> ok
> Test Sub domain allowed to launch VM  when a Domain level zone is created ... 
> === TestName: test_01_add_vm_to_subdomain | Status : SUCCESS ===
> ok
> Test delete 

Re: [VOTE] Apache Cloudstack 4.4.3

2015-04-17 Thread Rohit Yadav
+1 (binding)

Repositories created from SHA e9441d47867104505ef260c1857549f93df96aba with 
additional patch to use eu.ceph.com to avoid build failures:

http://packages.shapeblue.com/cloudstack/testing/debian/4.4/
http://packages.shapeblue.com/cloudstack/testing/centos/4.4/

SystemVM template: http://packages.shapeblue.com/systemvmtemplate/4.4

Tested with KVM (Ubuntu 14.04 based) using the above repository (feel free to 
test using this);

- Basic Zone with SG, basic vm lifecycles
- Advance Zone, basic vm lifecycles

Regards.

> On 17-Apr-2015, at 2:56 pm, Wilder Rodrigues  
> wrote:
>
> Hi all,
>
> There goes my +1 for the 4.4.3 RC
>
> See details about the tests below and the full report at the end of the email.
>
> Environment:
>
> XenServer 6.2 running under a VMware zone (inside our BetaCloud - ACS 4.4.2)
> Management Server running from MacBook Pro
> MySql on MacBook Pro
>
> Datacenter:
>
> Zone type: Advanced
> Storage type: NFS Local Storage
> Virtual Networking: VirtualRouter
> Isolation method: VLAN
>
> Integration tests for VLAN physical network:
>
> Repository: https://github.com/wilderrodrigues/cloudstack_integration_tests
>
> test_accounts.py
> test_reset_vm_on_reboot.py
> test_routers.py
> test_service_offerings.py
> test_vm_life_cycle.py
> test_vpc_routers.py
> test_vpc_vpn.py
> test_privategw_acl.py
>
> 
>
> Zone type: Advanced
> Storage type: NFS Local Storage
> Virtual Networking: OVS
> Isolation method: GRE
>
> Integration tests for GRE physical network:
>
> Deploy a datacenter with Isolation method equals to OVS
> Create Network Offering using OVS
> Create Isolated Networking using OVS offering
> Create VM using the OVS isolated networking
> Acquire Pub IP / Load Balancing / Firewalling / SSH
> Reboot VM
> Destroy VM
> Reboot router
> Destroy router
>
> I will also test some basic zone stuff now.
>
> Cheers,
> Wilder
>
> 
> Test Create Account and user for that account ... === TestName: 
> test_01_create_account | Status : SUCCESS ===
> ok
> Test Sub domain allowed to launch VM  when a Domain level zone is created ... 
> === TestName: test_01_add_vm_to_subdomain | Status : SUCCESS ===
> ok
> Test delete domain without force option ... === TestName: test_DeleteDomain | 
> Status : SUCCESS ===
> ok
> Test delete domain with force option ... === TestName: test_forceDeleteDomain 
> | Status : SUCCESS ===
> ok
> Test update admin details ... === TestName: test_updateAdminDetails | Status 
> : SUCCESS ===
> ok
> Test update domain admin details ... === TestName: 
> test_updateDomainAdminDetails | Status : SUCCESS ===
> ok
> Test user update API ... === TestName: test_updateUserDetails | Status : 
> SUCCESS ===
> ok
> Test login API with domain ... === TestName: test_LoginApiDomain | Status : 
> SUCCESS ===
> ok
> Test if Login API does not return UUID's ... === TestName: 
> test_LoginApiUuidResponse | Status : SUCCESS ===
> ok
>
> --
> Ran 9 tests in 1363.478s
>
> OK
> /tmp//MarvinLogs/test_accounts_13YHHM/results.txt
>
> Test reset virtual machine on reboot ... === TestName: 
> test_01_reset_vm_on_reboot | Status : SUCCESS ===
> ok
>
> --
> Ran 1 test in 267.857s
>
> OK
> /tmp//MarvinLogs/test_reset_vm_on_reboot_MF6YHR/results.txt
>
> Test router internal advanced zone ... SKIP: Marvin configuration has no host 
> credentials to check router services
> Test restart network ... === TestName: test_03_restart_network_cleanup | 
> Status : SUCCESS ===
> ok
> Test router basic setup ... === TestName: test_05_router_basic | Status : 
> SUCCESS ===
> ok
> Test router advanced setup ... === TestName: test_06_router_advanced | Status 
> : SUCCESS ===
> ok
> Test stop router ... === TestName: test_07_stop_router | Status : SUCCESS ===
> ok
> Test start router ... === TestName: test_08_start_router | Status : SUCCESS 
> ===
> ok
> Test reboot router ... === TestName: test_09_reboot_router | Status : SUCCESS 
> ===
> ok
>
> --
> Ran 7 tests in 538.464s
>
> OK (SKIP=1)
> /tmp//MarvinLogs/test_routers_VA76LJ/results.txt
>
> Test to create service offering ... === TestName: 
> test_01_create_service_offering | Status : SUCCESS ===
> ok
> Test to update existing service offering ... === TestName: 
> test_02_edit_service_offering | Status : SUCCESS ===
> ok
> Test to delete service offering ... === TestName: 
> test_03_delete_service_offering | Status : SUCCESS ===
> ok
>
> --
> Ran 3 tests in 227.704s
>
> OK
> /tmp//MarvinLogs/test_service_offerings_X2HT71/results.txt
>
> Test advanced zone virtual router ... === TestName: test_advZoneVirtualRouter 
> | Status : SUCCESS ===
> ok
> Test Deploy Virtual Machine ... === TestNam

Build failed in Jenkins: simulator-singlerun #1125

2015-04-17 Thread jenkins
See 

--
[...truncated 10505 lines...]
> Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `simulator`
> Running query: create database `simulator`
> Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.6.0-SNAPSHOT/cloud-developer-4.6.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 27.637s
[INFO] Finished at: Fri Apr 17 10:09:57 EDT 2015
[INFO] Final Memory: 43M/171M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson7856287238172788361.sh
+ grep -q Launcher
+ jps -l
+ rm -f xunit.xml
+ echo ''
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=12690
+ '[' 0 -lt 44 ']'
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
+ '[' 1 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=2
+ '[' 2 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=3
+ '[' 3 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=4
+ '[' 4 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=5
+ '[' 5 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=6
+ '[' 6 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=7
+ '[' 7 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=8
+ '[' 8 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=9
+ '[' 9 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=10
+ '[' 10 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=11
+ '[' 11 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=12
+ '[' 12 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=13
+ '[' 13 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=14
+ '[' 14 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=15
+ '[' 15 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=16
+ '[' 16 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=17
+ '[' 17 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=18
+ '[' 18 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=19
+ '[' 19 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=20
+ '[' 20 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=21
+ '[' 21 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=22
+ '[' 22 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=23
+ '[' 23 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ break
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ echo Started OK pid 12690
Started OK pid 12690
+ sleep 20
+ export 
PYTHONPATH=
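
The truncated script above keeps polling jetty-output.out for the
'Management server node 127.0.0.1 is up' marker (up to 44 attempts, 5 seconds
apart) before starting the Marvin run. A rough Python sketch of that readiness
check, using only the file name, marker string and limits visible in the log
(everything else is assumed for illustration):

    # Sketch of the readiness check performed by the Jenkins shell loop:
    # poll jetty-output.out for the "server is up" marker, at most 44 times,
    # sleeping 5 seconds between attempts (values taken from the log above).
    import time

    MARKER = 'Management server node 127.0.0.1 is up'
    LOG_FILE = 'jetty-output.out'

    def wait_for_management_server(max_attempts=44, interval=5):
        for _ in range(max_attempts):
            try:
                with open(LOG_FILE) as f:
                    if MARKER in f.read():
                        return True
            except IOError:
                # The log file may not exist yet while jetty is starting.
                pass
            time.sleep(interval)
        return False

    if __name__ == '__main__':
        if not wait_for_management_server():
            raise SystemExit('Management server did not come up in time')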

Build failed in Jenkins: simulator-4.5-singlerun #208

2015-04-17 Thread jenkins
See 

Changes:

[Rohit Yadav] CLOUDSTACK-6543 Sort domain lists in UI

[Rohit Yadav] CLOUDSTACK-8134. Worker VMs don't have MS id set in vCenter 
annotation 'cloud.vm.workertag'.

[Rohit Yadav] CLOUDSTACK-8318. Storage vMotion support for VMFS.

[Rohit Yadav] CLOUDSTACK-8319. For both 'MigrateVolume' and 
'MigrateVMWithVolumes', ensure VM's configuration files are migrated along with 
VM's root volume.

[Rohit Yadav] CLOUDSTACK-8320. Upon a failed migration, a dummy volume is 
created which remains in 'Expunging' state.

[Rohit Yadav] CLOUDSTACK-8119. [VMware] Cannot attach more than 8 volumes to a 
VM.

[Rohit Yadav] CLOUDSTACK-8119. [VMware] Cannot attach more than 8 volumes to a 
VM.

[Rohit Yadav] CLOUDSTACK-8108. vCenter admin name is logged in clear text.

--
[...truncated 8847 lines...]
> Initializing database=cloud_usage with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `cloud_usage`
> Running query: create database `cloud_usage`
> Running query: GRANT ALL ON cloud_usage.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON cloud_usage.* to 'cloud'@`%` 
identified by 'cloud'
> Initializing database=cloudbridge with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `cloudbridge`
> Running query: create database `cloudbridge`
> Running query: GRANT ALL ON cloudbridge.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON cloudbridge.* to 'cloud'@`%` 
identified by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 

 to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.5.1-SNAPSHOT/cloud-developer-4.5.1-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 2:00.984s
[INFO] Finished at: Fri Apr 17 10:33:20 EDT 2015
[INFO] Final Memory: 42M/174M
[INFO] 
[WARNING] The requested profile "simulator" could not be activated because it 
does not exist.
[simulator-4.5-singlerun] $ mvn -P developer -pl developer -Ddeploydb-simulator
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache CloudStack Developer Mode 4.5.1-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INF

Build failed in Jenkins: simulator-4.5-singlerun #209

2015-04-17 Thread jenkins
See 

--
[...truncated 8857 lines...]
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 

 to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.5.1-SNAPSHOT/cloud-developer-4.5.1-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 1:51.449s
[INFO] Finished at: Fri Apr 17 10:55:30 EDT 2015
[INFO] Final Memory: 42M/205M
[INFO] 
[WARNING] The requested profile "simulator" could not be activated because it 
does not exist.
[simulator-4.5-singlerun] $ mvn -P developer -pl developer -Ddeploydb-simulator
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache CloudStack Developer Mode 4.5.1-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] --- properties-maven-plugin:1.0-alpha-2:read-project-properties 
(default) @ cloud-developer ---
[WARNING] Ignoring missing properties file: 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.3:process (default) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (default) @ cloud-developer ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] >>> exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer >>>
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-developer ---
[INFO] Starting audit...
Audit done.

[INFO] 
[INFO] <<< exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer <<<
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:java (create-schema-simulator) @ 
cloud-developer ---
log4j:WARN No appenders could be found for logger 
(org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
> WARNING: Provided file does not exist: 

> Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `simulator`
> Running query: create database `simulator`
> Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
> Processing SQL file at 

> Processing SQL file

[GitHub] cloudstack pull request: CLOUDSTACK-8390: Skipping VPC tests on Hy...

2015-04-17 Thread gauravaradhye
GitHub user gauravaradhye opened a pull request:

https://github.com/apache/cloudstack/pull/179

CLOUDSTACK-8390: Skipping VPC tests on Hyperv

VPC network is not supported on hyper-v hypervisor. Skip relevant test 
cases accordingly.
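
As an illustration only, a minimal sketch of how such a hypervisor-based
skip typically looks in a Marvin test class; the class name and the
unsupportedHypervisor attribute below are assumptions made for this example,
not taken from the actual patch:

    # Hypothetical sketch: skip VPC tests when running against Hyper-V.
    # Assumes the usual Marvin pattern of reading the hypervisor type from
    # the test client; names are illustrative, not from PR #179.
    from marvin.cloudstackTestCase import cloudstackTestCase

    class TestVPCOnHyperv(cloudstackTestCase):

        @classmethod
        def setUpClass(cls):
            testClient = super(TestVPCOnHyperv, cls).getClsTestClient()
            cls.hypervisor = testClient.getHypervisorInfo()
            # VPC networks are not supported on Hyper-V, so flag the class
            # before any test resources are created.
            cls.unsupportedHypervisor = cls.hypervisor.lower() == 'hyperv'

        def setUp(self):
            if self.unsupportedHypervisor:
                self.skipTest("VPC is not supported on %s" % self.hypervisor)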

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gauravaradhye/cloudstack 8390

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/179.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #179


commit 572aa4f0166b3dd37a1db9f6f465ce22441de891
Author: Gaurav Aradhye 
Date:   2015-04-16T14:59:09Z

CLOUDSTACK-8390: Skipping VPC tests on Hyperv




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: CLOUDSTACK-8390: Skipping VPC tests on Hy...

2015-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/179


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Daan Hoogland
On Fri, Apr 17, 2015 at 2:43 AM, Sebastien Goasguen  wrote:
>
>> On Apr 17, 2015, at 12:49 AM, Pierre-Luc Dion  wrote:
>>
>> Today during the CloudStack Days we did a round table about release
>> management targeting the next 4.6 releases.
>>
>>
>> Quick bullet point discussions:
>>
>> ideas to change release planning
>>
>>   - Plugin contribution is complicated because a new plugin often involves
>>   changes to the core:
>>  - e.g. a storage plugin involves changes to hypervisor code
>>   - There is an idea of moving to a 2-week release model, which could
>>   introduce issues with the database schema.
>>   - The database schema version should be different from the application
>>   version.
>>   - There is a will to enforce a git workflow in 4.6 and trigger a simulator
>>   job on each pull request.
>>   - Some people (I'm part of them) are concerned about our current way of
>>   supporting and backporting fixes to multiple releases (4.3.x, 4.4.x,
>>   4.5.x). But the current level of confidence in the latest release is low,
>>   so that needs to be improved.
>>
>>
>> So, the main message is that we'd like to improve release velocity and
>> release branch stability. So we would like to propose a few changes in the
>> way we would add code to the 4.6 branch, as follows:
>>
>> - All new contributions to 4.6 would be through a pull request or merge
>> request, which would trigger a simulator job; ideally the PR would only be
>> accepted and automatically merged if that job passes. At this time, I think
>> we pretty much have everything in place to do that. As a first step we would
>> use simulator+marvin jobs, then improve test coverage from there.
>
> +1
>
> We do need to realize what this means and be all fine with it.
>
> It means that if someone who is not RM directly commits to the release 
> branch, the commit will be reverted.
> And that from the beginning of the branching…
I agree, and we can even go as far as reverting fixes that are
cherry-picked in favour of merging them forward.

>
> IMHO, I think this would be a good step but I don’t think it goes far enough.
Agreed here as well, but let's take this step while discussing further
steps and not implement too much process.

>
> This still uses a paradigm where a release is made from a release branch that 
> was started from an unstable development branch.
> Hence you still need *extensive* QA.
The problem here is that there is no stable point to fork from at the
moment. We will get there and we shouldn't stop taking steps in that
direction.

>
> If we truly want to release faster, we need to release from the same QA’d 
> branch time after time….a release needs to be based on a previous release
>
> Basically, we need a rolling release cycle. That will have the added benefit 
> of not leaving releases behind and not having to focus on backporting.
>
>>
>> Please comments :-)
>



-- 
Daan


Build failed in Jenkins: simulator-singlerun #1126

2015-04-17 Thread jenkins
See 

Changes:

[Gaurav Aradhye] CLOUDSTACK-8390: Skipping VPC tests on Hyperv

--
[...truncated 10507 lines...]
> Initializing database=simulator with host=localhost port=3306 
username=cloud password=cloud
> Running query: drop database if exists `simulator`
> Running query: create database `simulator`
> Running query: GRANT ALL ON simulator.* to 'cloud'@`localhost` 
identified by 'cloud'
> Running query: GRANT ALL ON simulator.* to 'cloud'@`%` identified 
by 'cloud'
> Processing SQL file at 

> Processing SQL file at 

> Processing SQL file at 

> Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
cloud-developer ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
cloud-developer ---
[INFO] Installing 
 
to 
/var/lib/jenkins/.m2/repository/org/apache/cloudstack/cloud-developer/4.6.0-SNAPSHOT/cloud-developer-4.6.0-SNAPSHOT.pom
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 27.311s
[INFO] Finished at: Fri Apr 17 12:07:40 EDT 2015
[INFO] Final Memory: 45M/182M
[INFO] 
[simulator-singlerun] $ /bin/bash -x /tmp/hudson4697941323495075048.sh
+ jps -l
+ grep -q Launcher
+ rm -f xunit.xml
+ echo ''
+ rm -rf /tmp/MarvinLogs
+ echo Check for initialization of the management server
Check for initialization of the management server
+ COUNTER=0
+ SERVER_PID=891
+ mvn -P systemvm,simulator -pl :cloud-client-ui jetty:run
+ '[' 0 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=1
+ '[' 1 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=2
+ '[' 2 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=3
+ '[' 3 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=4
+ '[' 4 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=5
+ '[' 5 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=6
+ '[' 6 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=7
+ '[' 7 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=8
+ '[' 8 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=9
+ '[' 9 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=10
+ '[' 10 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=11
+ '[' 11 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=12
+ '[' 12 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=13
+ '[' 13 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=14
+ '[' 14 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=15
+ '[' 15 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=16
+ '[' 16 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=17
+ '[' 17 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=18
+ '[' 18 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=19
+ '[' 19 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=20
+ '[' 20 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=21
+ '[' 21 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=22
+ '[' 22 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ sleep 5
+ COUNTER=23
+ '[' 23 -lt 44 ']'
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ break
+ grep -q 'Management server node 127.0.0.1 is up' jetty-output.out
+ echo Started OK pid 891

Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Marcus
Well, would we just swap the last release branch with master? Master
is the dev branch, and the last release is really what we have as a
stable branch.

On Fri, Apr 17, 2015 at 8:44 AM, Daan Hoogland  wrote:
> On Fri, Apr 17, 2015 at 2:43 AM, Sebastien Goasguen  wrote:
>>
>>> On Apr 17, 2015, at 12:49 AM, Pierre-Luc Dion  wrote:
>>>
>>> Today during the CloudStackdays  we did a round table about Release
>>> management targeting the next 4.6 releases.
>>>
>>>
>>> Quick bullet point discussions:
>>>
>>> ideas to change release planning
>>>
>>>   - Plugin contribution is complicated because often  a new plugin involve
>>>   change on the core:
>>>  - ex: storage plugin involve changes on Hypervisor code
>>>   - There is an idea of going on a 2 weeks release model which could
>>>   introduce issue the database schema.
>>>   - Database schema version should be different then the application
>>>   version.
>>>   - There is a will to enforce git workflow in 4.6  and trigger simulator
>>>   job on  PullRequest.
>>>   - Some people (I'm part of them) are concerned on our current way of
>>>   supporting and back porting fixes to multiple release (4.3.x, 4.4.x,
>>>   4.5.x). But the current level of confidence against latest release is low,
>>>   so that need to be improved.
>>>
>>>
>>> So, the main messages is that w'd like to improve the release velocity, and
>>> release branch stability.  so we would like to propose few change in the
>>> way we would add code to the 4.6 branch as follow:
>>>
>>> - All new contribution to 4.6 would be thru Pull Request or merge request,
>>> which would trigger a simulator job, ideally only if that pass the PR would
>>> be accepted and automatically merged.  At this time, I think we pretty much
>>> have everything in place to do that. At a first step we would use
>>> simulator+marvin jobs then improve tests coverage from there.
>>
>> +1
>>
>> We do need to realize what this means and be all fine with it.
>>
>> It means that if someone who is not RM directly commits to the release 
>> branch, the commit will be reverted.
>> And that from the beginning of the branching…
> I agree and we can even go as far as reverting fixes that are
> cherry-picked in favour of merged forward.
>
>>
>> IMHO, I think this would be a good step but I don’t think it goes far enough.
> Agreed here as well but let's take the step while discussing further
> steps and not implement to much process as well
>
>>
>> This still uses a paradigm where a release is made from a release branch 
>> that was started from an unstable development branch.
>> Hence you still need *extensive* QA.
> The problem here is that there is no stable point to fork from at the
> moment. We will get there and we shouldn't stop taking steps in that
> direction.
>
>>
>> If we truly want to release faster, we need to release from the same QA’d 
>> branch time after time….a release needs to be based on a previous release
>>
>> Basically, we need a rolling release cycle. That will have the added benefit 
>> to not leave releases behind and have to focus on backporting.
>>
>>>
>>> Please comments :-)
>>
>
>
>
> --
> Daan


Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Daan Hoogland
We heavily invested in code now on master. Not looking forward to
backporting that.

mobile dev with bilingual spelling checker used (read at your own risk)
Op 17 apr. 2015 21:02 schreef "Marcus" :

> Well, would we just swap the last release branch with master? Master
> is the dev branch, and the last release is really what we have as a
> stable branch.
>
> On Fri, Apr 17, 2015 at 8:44 AM, Daan Hoogland 
> wrote:
> > On Fri, Apr 17, 2015 at 2:43 AM, Sebastien Goasguen 
> wrote:
> >>
> >>> On Apr 17, 2015, at 12:49 AM, Pierre-Luc Dion 
> wrote:
> >>>
> >>> Today during the CloudStackdays  we did a round table about Release
> >>> management targeting the next 4.6 releases.
> >>>
> >>>
> >>> Quick bullet point discussions:
> >>>
> >>> ideas to change release planning
> >>>
> >>>   - Plugin contribution is complicated because often  a new plugin
> involve
> >>>   change on the core:
> >>>  - ex: storage plugin involve changes on Hypervisor code
> >>>   - There is an idea of going on a 2 weeks release model which could
> >>>   introduce issue the database schema.
> >>>   - Database schema version should be different then the application
> >>>   version.
> >>>   - There is a will to enforce git workflow in 4.6  and trigger
> simulator
> >>>   job on  PullRequest.
> >>>   - Some people (I'm part of them) are concerned on our current way of
> >>>   supporting and back porting fixes to multiple release (4.3.x, 4.4.x,
> >>>   4.5.x). But the current level of confidence against latest release
> is low,
> >>>   so that need to be improved.
> >>>
> >>>
> >>> So, the main messages is that w'd like to improve the release
> velocity, and
> >>> release branch stability.  so we would like to propose few change in
> the
> >>> way we would add code to the 4.6 branch as follow:
> >>>
> >>> - All new contribution to 4.6 would be thru Pull Request or merge
> request,
> >>> which would trigger a simulator job, ideally only if that pass the PR
> would
> >>> be accepted and automatically merged.  At this time, I think we pretty
> much
> >>> have everything in place to do that. At a first step we would use
> >>> simulator+marvin jobs then improve tests coverage from there.
> >>
> >> +1
> >>
> >> We do need to realize what this means and be all fine with it.
> >>
> >> It means that if someone who is not RM directly commits to the release
> branch, the commit will be reverted.
> >> And that from the beginning of the branching…
> > I agree and we can even go as far as reverting fixes that are
> > cherry-picked in favour of merged forward.
> >
> >>
> >> IMHO, I think this would be a good step but I don’t think it goes far
> enough.
> > Agreed here as well but let's take the step while discussing further
> > steps and not implement to much process as well
> >
> >>
> >> This still uses a paradigm where a release is made from a release
> branch that was started from an unstable development branch.
> >> Hence you still need *extensive* QA.
> > The problem here is that there is no stable point to fork from at the
> > moment. We will get there and we shouldn't stop taking steps in that
> > direction.
> >
> >>
> >> If we truly want to release faster, we need to release from the same
> QA’d branch time after time….a release needs to be based on a previous
> release
> >>
> >> Basically, we need a rolling release cycle. That will have the added
> benefit to not leave releases behind and have to focus on backporting.
> >>
> >>>
> >>> Please comments :-)
> >>
> >
> >
> >
> > --
> > Daan
>


Re: [DISCUSS] 4.6 release management

2015-04-17 Thread Marcus
Have they diverged that much? Due to cherry-picking, I guess.
Otherwise you should be able to do it cleanly.

There's a good opportunity to do this next release. Instead of
creating a release branch, we freeze master and start creating dev
branches.

On Fri, Apr 17, 2015 at 10:46 PM, Daan Hoogland  wrote:
> We heavily invested in code now on master. Not looking forward to
> backporting that.
>
> mobile dev with bilingual spelling checker used (read at your own risk)
> Op 17 apr. 2015 21:02 schreef "Marcus" :
>
>> Well, would we just swap the last release branch with master? Master
>> is the dev branch, and the last release is really what we have as a
>> stable branch.
>>
>> On Fri, Apr 17, 2015 at 8:44 AM, Daan Hoogland 
>> wrote:
>> > On Fri, Apr 17, 2015 at 2:43 AM, Sebastien Goasguen 
>> wrote:
>> >>
>> >>> On Apr 17, 2015, at 12:49 AM, Pierre-Luc Dion 
>> wrote:
>> >>>
>> >>> Today during the CloudStackdays  we did a round table about Release
>> >>> management targeting the next 4.6 releases.
>> >>>
>> >>>
>> >>> Quick bullet point discussions:
>> >>>
>> >>> ideas to change release planning
>> >>>
>> >>>   - Plugin contribution is complicated because often  a new plugin
>> involve
>> >>>   change on the core:
>> >>>  - ex: storage plugin involve changes on Hypervisor code
>> >>>   - There is an idea of going on a 2 weeks release model which could
>> >>>   introduce issue the database schema.
>> >>>   - Database schema version should be different then the application
>> >>>   version.
>> >>>   - There is a will to enforce git workflow in 4.6  and trigger
>> simulator
>> >>>   job on  PullRequest.
>> >>>   - Some people (I'm part of them) are concerned on our current way of
>> >>>   supporting and back porting fixes to multiple release (4.3.x, 4.4.x,
>> >>>   4.5.x). But the current level of confidence against latest release
>> is low,
>> >>>   so that need to be improved.
>> >>>
>> >>>
>> >>> So, the main messages is that w'd like to improve the release
>> velocity, and
>> >>> release branch stability.  so we would like to propose few change in
>> the
>> >>> way we would add code to the 4.6 branch as follow:
>> >>>
>> >>> - All new contribution to 4.6 would be thru Pull Request or merge
>> request,
>> >>> which would trigger a simulator job, ideally only if that pass the PR
>> would
>> >>> be accepted and automatically merged.  At this time, I think we pretty
>> much
>> >>> have everything in place to do that. At a first step we would use
>> >>> simulator+marvin jobs then improve tests coverage from there.
>> >>
>> >> +1
>> >>
>> >> We do need to realize what this means and be all fine with it.
>> >>
>> >> It means that if someone who is not RM directly commits to the release
>> branch, the commit will be reverted.
>> >> And that from the beginning of the branching…
>> > I agree and we can even go as far as reverting fixes that are
>> > cherry-picked in favour of merged forward.
>> >
>> >>
>> >> IMHO, I think this would be a good step but I don’t think it goes far
>> enough.
>> > Agreed here as well but let's take the step while discussing further
>> > steps and not implement to much process as well
>> >
>> >>
>> >> This still uses a paradigm where a release is made from a release
>> branch that was started from an unstable development branch.
>> >> Hence you still need *extensive* QA.
>> > The problem here is that there is no stable point to fork from at the
>> > moment. We will get there and we shouldn't stop taking steps in that
>> > direction.
>> >
>> >>
>> >> If we truly want to release faster, we need to release from the same
>> QA’d branch time after time….a release needs to be based on a previous
>> release
>> >>
>> >> Basically, we need a rolling release cycle. That will have the added
>> benefit to not leave releases behind and have to focus on backporting.
>> >>
>> >>>
>> >>> Please comments :-)
>> >>
>> >
>> >
>> >
>> > --
>> > Daan
>>