JIRA 3061

2013-11-25 Thread Mandar Barve
Hi all,

Problem: the JIRA issue says the list hosts API response didn't return the
CPU used parameter value. This bug is reported against version 4.0.2.

I could not reproduce this problem with CS version 4.2.

I used the CloudMonkey CLI to fire API commands at the management server.
With a basic zone created (one pod, one cluster, and a couple of system VMs
connected to the management server), I sent the list hosts API command via
CloudMonkey and captured the JSON response in the log file. Both the JSON
response and the CLI output show "cpuused", and the value matches the CPU
used value in the host statistics reported by the portal.

CLI output:

> list hosts
count = 1
host:
id = df4fe805-a320-4417-b8be-22dd0b86561e
name = devcloud
capabilities = xen-3.0-x86_32p , hvm
clusterid = b3b80638-1fc5-4d13-aafc-28ff5155c681
clustername = test000
clustertype = CloudManaged
cpuallocated = 0%
cpunumber = 2
cpuspeed = 2486
*cpuused = 0.22%*
cpuwithoverprovisioning = 4972.0
created = 2013-10-07T18:57:58+0530
disconnected = 2013-10-15T11:24:19+0530
events = PingTimeout; AgentConnected; HostDown; ShutdownRequested;
AgentDisconnected; ManagementServerDown; Remove; Ping; StartAgentRebalance
hahost = False
hypervisor = XenServer
ipaddress = 192.168.56.10
islocalstorageactive = False
lastpinged = 1970-01-16T20:20:29+0530

 JSON response log:

2013-10-15 11:48:32,724 - requester.py:45 - [DEBUG]  START Request

2013-10-15 11:48:32,724 - requester.py:45 - [DEBUG] Requesting
command=listHosts, args={}
2013-10-15 11:48:32,725 - requester.py:45 - [DEBUG] Request sent:
http://localhost:8080/client/api?apiKey=c9uPXphFfiQS5589hVp245hWrqcg1yxcVNg9h1xJES34j8uAtvKj0EP6h8jlSC5_VlajL1a2TaXuYFGoON0DMg&command=listHosts&response=json&signature=hKQ5hI0XFpAzNPJYJ7ivR53%2FzJU%3D
2013-10-15 11:48:32,820 - requester.py:45 - [DEBUG] Response received: {
"listhostsresponse" : { "count":1 ,"host" : [
 
{"id":"df4fe805-a320-4417-b8be-22dd0b86561e","name":"devcloud","state":"Up","disconnected":"2013-10-15T11:24:19+0530","type":"Routing","ipaddress":"192.168.56.10","zoneid":"7b015b74-f00f-4216-b523-efc2e32c6bc5","zonename":"DevCloud0","podid":"c58e91d0-ad57-4d09-a485-f0decab857b4","podname":"test00","version":"4.2.0","hypervisor":"XenServer","cpunumber":2,"cpuspeed":2486,"cpuallocated":"0%",
*"cpuused":"0.22%"*,"cpuwithoverprovisioning":"4972.0","networkkbsread":57462,"networkkbswrite":38105,"memorytotal":251632,"memoryallocated":0,"memoryused":546428,"capabilities":"xen-3.0-x86_32p
,
hvm","lastpinged":"1970-01-16T20:20:29+0530","managementserverid":8796750265493,"clusterid":"b3b80638-1fc5-4d13-aafc-28ff5155c681","clustername":"test000","clustertype":"CloudManaged","islocalstorageactive":false,"created":"2013-10-07T18:57:58+0530","events":"PingTimeout;
AgentConnected; HostDown; ShutdownRequested; AgentDisconnected;
ManagementServerDown; Remove; Ping;
StartAgentRebalance","resourcestate":"Enabled","hahost":false} ] } }
2013-10-15 11:48:32,821 - requester.py:45 - [DEBUG]  END Request
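The signed request URL in the log above carries a `signature` parameter. As a rough, dependency-free sketch (parameter names and keys below are placeholders, not real credentials), the signature is derived by sorting the parameters, lowercasing the query string, computing an HMAC-SHA1 over it with the account's secret key, and Base64- and URL-encoding the digest:

```java
import java.net.URLEncoder;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ApiSigner {
    // Sketch of how the "signature" query parameter is derived: sort the
    // parameters (TreeMap iterates in key order), lowercase the query string,
    // HMAC-SHA1 it with the secret key, then Base64- and URL-encode the digest.
    static String sign(TreeMap<String, String> params, String secretKey) throws Exception {
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (query.length() > 0) {
                query.append('&');
            }
            query.append(e.getKey().toLowerCase()).append('=')
                 .append(URLEncoder.encode(e.getValue(), "UTF-8").toLowerCase());
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        byte[] digest = mac.doFinal(query.toString().getBytes("UTF-8"));
        return URLEncoder.encode(Base64.getEncoder().encodeToString(digest), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        TreeMap<String, String> params = new TreeMap<>();
        params.put("command", "listHosts");
        params.put("response", "json");
        params.put("apikey", "EXAMPLE-API-KEY");            // placeholder key
        System.out.println(sign(params, "EXAMPLE-SECRET")); // placeholder secret
    }
}
```

The same scheme is what CloudMonkey performs behind the scenes for every command it sends.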


Can this be closed? I have updated the JIRA with the same comment.

Thanks,
Mandar


Re: Review Request 15825: Added unit tests for Juniper Contrail Virtual Network (VN) and Virtual Machine (VM) Model classes.

2013-11-25 Thread Hugo Trippaers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15825/#review29367
---



plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/model/VirtualMachineModel.java


This should not be an e.printStackTrace(). Please use s_logger.error(String,
Throwable) so that the stack trace ends up in the log configured by the
administrator.
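For illustration, here is the pattern in a dependency-free sketch. CloudStack's s_logger is log4j; java.util.logging stands in below so the example runs without extra jars, and the message text is made up. The point is the same either way: pass the Throwable to the logger rather than calling e.printStackTrace().

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingExample {
    private static final Logger s_logger = Logger.getLogger(LoggingExample.class.getName());

    // Logging the Throwable alongside the message sends the full stack trace
    // to the administrator-configured log destination instead of raw stderr,
    // which e.printStackTrace() would bypass entirely.
    static void handle(Exception e) {
        s_logger.log(Level.SEVERE, "Unable to update the virtual machine model", e);
    }

    public static void main(String[] args) {
        handle(new RuntimeException("connection lost"));
    }
}
```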



plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualMachineModelTest.java


Are you sure you want to catch this exception? Why not let it go through
and get reported as a test exception?



plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualMachineModelTest.java


Should this exception be a test failure?



plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualNetworkModelTest.java


Test failure instead of catching it?



plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualNetworkModelTest.java


same here


- Hugo Trippaers
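The recurring review comment above can be shown in a minimal sketch. `buildModel()` is a hypothetical stand-in for the model-creation call the tests exercise; the contrast is between swallowing the checked exception and declaring `throws` so the runner reports it:

```java
public class ExceptionInTests {
    // Hypothetical stand-in for the model-creation call under test.
    static String buildModel() throws Exception {
        return "virtual-machine-model";
    }

    // Anti-pattern: catching inside the test. If buildModel() throws, the
    // test can still "pass", or fail later with a confusing NullPointerException.
    static String swallowed() {
        try {
            return buildModel();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    // Preferred: declare `throws` and let the exception propagate, so the
    // test runner (e.g. JUnit) reports it directly as a test error.
    static String propagated() throws Exception {
        return buildModel();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(propagated());  // prints virtual-machine-model
    }
}
```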


On Nov. 24, 2013, 6:59 p.m., Sachchidanand Vaidya wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15825/
> ---
> 
> (Updated Nov. 24, 2013, 6:59 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The code change adds UnitTest cases for Juniper Contrail Virtual Network(VN) 
> and Virtual Machine(VM) Model classes.
> Only VN & VM creation test cases added. 
> 
> 
> Diffs
> -
> 
>   
> plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ModelDatabase.java
>  7f66a3b 
>   
> plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/model/VirtualMachineModel.java
>  32d5d93 
>   
> plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualMachineModelTest.java
>  PRE-CREATION 
>   
> plugins/network-elements/juniper-contrail/test/org/apache/cloudstack/network/contrail/management/VirtualNetworkModelTest.java
>  0938541 
> 
> Diff: https://reviews.apache.org/r/15825/diff/
> 
> 
> Testing
> ---
> 
> Juniper contrail Unit Tests ran successfully as part of package build.
> 
> 
> Thanks,
> 
> Sachchidanand Vaidya
> 
>



JIRA 285

2013-11-25 Thread Mandar Barve
Hi all,
 I could not reproduce this issue with version 4.0.2. I tried creating an
hourly snapshot schedule with a keep value of 4. I could see 4 snapshots
retained. Then, as mentioned in the bug, I deleted the schedule and recreated
it with the same parameters except for changing the keep value to 3. After
this I could see only 3 snapshots retained.

Has this been resolved? Can it be closed? I have updated the JIRA with my
comments.

Thanks,
Mandar


Re: Review Request 15667: Fix for Coverity issues CID_1116744, CID_1116718 and CID_1116682, all related to resource leak.

2013-11-25 Thread Hugo Trippaers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15667/#review29368
---


Can you rebase this patch against latest master? A lot of whitespace and 
formatting issues have been addressed recently, causing this patch to fail.

- Hugo Trippaers


On Nov. 19, 2013, 10:02 a.m., Wilder Rodrigues wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15667/
> ---
> 
> (Updated Nov. 19, 2013, 10:02 a.m.)
> 
> 
> Review request for cloudstack and Hugo Trippaers.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fixing resource leak on Coverity issues CID_1116744, CID_1116718 and 
> CID_1116682.
> 
> The resource leak problems were mostly related to InputStreams that were 
> instantiated and not closed after use.
> 
> 
> Diffs
> -
> 
>   
> framework/ipc/src/org/apache/cloudstack/framework/serializer/OnwireClassRegistry.java
>  ac9c6bc 
>   server/src/com/cloud/api/doc/ApiXmlDocWriter.java b7d526d 
>   server/src/com/cloud/server/ConfigurationServerImpl.java 4020926 
> 
> Diff: https://reviews.apache.org/r/15667/diff/
> 
> 
> Testing
> ---
> 
> A full build was executed on top of the branch created for these changes. 
> After committing and patching, a brand-new branch was created from master 
> and patched with this patch. Everything worked fine.
> 
> No new feature was added.
> 
> 
> Thanks,
> 
> Wilder Rodrigues
> 
>
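The kind of fix these Coverity resource-leak CIDs usually call for is try-with-resources. A minimal sketch (the stream source here is a ByteArrayInputStream stand-in, not the actual code from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamClose {
    // Coverity's resource-leak checkers flag streams opened but not closed on
    // every path. try-with-resources (Java 7+) closes the stream even when an
    // exception is thrown, removing the leak without explicit finally blocks.
    static int firstByte(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            return in.read();  // in.close() runs automatically on exit
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstByte(new byte[]{42}));  // prints 42
    }
}
```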



Re: Review Request 15418: Fixes about: Code quality, checkstyle and cloudstack conventions

2013-11-25 Thread Hugo Trippaers


> On Nov. 18, 2013, 6:30 p.m., daan Hoogland wrote:
> > Ship It!

commit 876b7e492f15154591ba132ddbfe6a8e7a4c4c3f
Author: afornie 
Date:   Mon Nov 18 12:12:07 2013 +0100

Checkstyle adjustments in code and configuration


- Hugo


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15418/#review29058
---


On Nov. 18, 2013, 11:31 a.m., Antonio Fornie wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15418/
> ---
> 
> (Updated Nov. 18, 2013, 11:31 a.m.)
> 
> 
> Review request for cloudstack, daan Hoogland and Hugo Trippaers.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fixes about: Code quality, checkstyle and cloudstack conventions. Tabs 
> replaced by 4 spaces, proper instance variable names, removing trailing 
> spaces...
> 
> 
> Diffs
> -
> 
>   parents/checkstyle/src/main/resources/tooling/checkstyle.xml 83493d6 
>   plugins/network-elements/nicira-nvp/pom.xml 9341c93 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePortForwardingRulesOnLogicalRouterAnswer.java
>  94931a0 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePortForwardingRulesOnLogicalRouterCommand.java
>  16ef2c4 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePublicIpsOnLogicalRouterAnswer.java
>  09a3e7e 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePublicIpsOnLogicalRouterCommand.java
>  c08f540 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigureStaticNatRulesOnLogicalRouterAnswer.java
>  caab316 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigureStaticNatRulesOnLogicalRouterCommand.java
>  5f79ffc 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalRouterAnswer.java
>  72a275b 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalRouterCommand.java
>  1f3f24e 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchAnswer.java
>  753edec 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchCommand.java
>  b2a5aaf 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchPortAnswer.java
>  8fa7927 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchPortCommand.java
>  fe3f683 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalRouterAnswer.java
>  db07547 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalRouterCommand.java
>  96e2cb9 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchAnswer.java
>  e9cfbc4 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchCommand.java
>  25aa339 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchPortAnswer.java
>  f779677 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchPortCommand.java
>  e91a032 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/FindLogicalSwitchPortAnswer.java
>  edc0c5f 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/FindLogicalSwitchPortCommand.java
>  b737c50 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/UpdateLogicalSwitchPortAnswer.java
>  f4c4130 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/UpdateLogicalSwitchPortCommand.java
>  1b8b590 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/AddNiciraNvpDeviceCmd.java
>  937b665 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/DeleteNiciraNvpDeviceCmd.java
>  6eb6764 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/ListNiciraNvpDeviceNetworksCmd.java
>  53203a7 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/ListNiciraNvpDevicesCmd.java
>  3e02e19 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/response/NiciraNvpDeviceResponse.java
>  d6085e2 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpDeviceVO.java
>  3832123 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpNicMappingVO.java
>  d9dbb02 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpRouterMappingVO.java
>  1e2a831 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/dao/NiciraNvpDaoImpl.java
>  5e07246 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/dao/NiciraNvpNicMappingDao.java
>  f693dcb 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/dao/NiciraNvpNicMappingDaoImpl.j

Re: Review Request 15418: Fixes about: Code quality, checkstyle and cloudstack conventions

2013-11-25 Thread Hugo Trippaers


> On Nov. 18, 2013, 6:30 p.m., daan Hoogland wrote:
> > Ship It!
> 
> Hugo Trippaers wrote:
> commit 876b7e492f15154591ba132ddbfe6a8e7a4c4c3f
> Author: afornie 
> Date:   Mon Nov 18 12:12:07 2013 +0100
> 
> Checkstyle adjustments in code and configuration

Daan committed this.

Daan, please add the sign-off to the commit (git am -s) when you apply reviews 
from review board
Antonio, can you mark this review as submitted?


- Hugo


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15418/#review29058
---


On Nov. 18, 2013, 11:31 a.m., Antonio Fornie wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15418/
> ---
> 
> (Updated Nov. 18, 2013, 11:31 a.m.)
> 
> 
> Review request for cloudstack, daan Hoogland and Hugo Trippaers.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fixes about: Code quality, checkstyle and cloudstack conventions. Tabs 
> replaced by 4 spaces, proper instance variable names, removing trailing 
> spaces...
> 
> 
> Diffs
> -
> 
>   parents/checkstyle/src/main/resources/tooling/checkstyle.xml 83493d6 
>   plugins/network-elements/nicira-nvp/pom.xml 9341c93 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePortForwardingRulesOnLogicalRouterAnswer.java
>  94931a0 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePortForwardingRulesOnLogicalRouterCommand.java
>  16ef2c4 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePublicIpsOnLogicalRouterAnswer.java
>  09a3e7e 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigurePublicIpsOnLogicalRouterCommand.java
>  c08f540 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigureStaticNatRulesOnLogicalRouterAnswer.java
>  caab316 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/ConfigureStaticNatRulesOnLogicalRouterCommand.java
>  5f79ffc 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalRouterAnswer.java
>  72a275b 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalRouterCommand.java
>  1f3f24e 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchAnswer.java
>  753edec 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchCommand.java
>  b2a5aaf 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchPortAnswer.java
>  8fa7927 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/CreateLogicalSwitchPortCommand.java
>  fe3f683 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalRouterAnswer.java
>  db07547 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalRouterCommand.java
>  96e2cb9 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchAnswer.java
>  e9cfbc4 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchCommand.java
>  25aa339 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchPortAnswer.java
>  f779677 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/DeleteLogicalSwitchPortCommand.java
>  e91a032 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/FindLogicalSwitchPortAnswer.java
>  edc0c5f 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/FindLogicalSwitchPortCommand.java
>  b737c50 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/UpdateLogicalSwitchPortAnswer.java
>  f4c4130 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/agent/api/UpdateLogicalSwitchPortCommand.java
>  1b8b590 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/AddNiciraNvpDeviceCmd.java
>  937b665 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/DeleteNiciraNvpDeviceCmd.java
>  6eb6764 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/ListNiciraNvpDeviceNetworksCmd.java
>  53203a7 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/commands/ListNiciraNvpDevicesCmd.java
>  3e02e19 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/api/response/NiciraNvpDeviceResponse.java
>  d6085e2 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpDeviceVO.java
>  3832123 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpNicMappingVO.java
>  d9dbb02 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/NiciraNvpRouterMappingVO.java
>  1e2a831 
>   
> plugins/network-elements/nicira-nvp/src/com/cloud/network/dao/NiciraNvpDaoImp

Re: persistence layer

2013-11-25 Thread Sebastien Goasguen

On Nov 23, 2013, at 4:13 PM, Laszlo Hornyak  wrote:

> Wouldn't it be a lot of work to move to JOOQ? All queries will have to be
> rewritten.
> 
> 

A non-Java-developer question: will that help support different databases, 
like moving to MariaDB?

> 
> On Sat, Nov 23, 2013 at 11:32 AM, Darren Shepherd <
> darren.s.sheph...@gmail.com> wrote:
> 
>> Going to an ORM is not as simple as you would expect.  First, one can make
>> a strong argument that ORM is not the right solution, but that can be
>> ignored right now.
>> 
>> You have to look at the context of ACS and figure out what technology is
>> the most practical to adopt.  ACS does not have ORM today.  It has a custom
>> query api, object mapping, and change tracking for simple CRUD.   Honestly
>> these features are quite sufficient for ACS needs.  The problem, and why we
>> should change it, is that the current framework is custom, limited in
>> functionality, undocumented, and generally a barrier to people developing
>> on ACS.  So jOOQ is a somewhat similar approach but it is just far far
>> better, has a community of users that have developed over 3-4 years, is
>> well documented, and honestly just a very well thought out framework.
>> 
>> Darren
>> 
>>> On Nov 22, 2013, at 6:50 PM, Alex Ough  wrote:
>>> 
>>> All,
>>> 
>>> I'm very interested in converting the current DAO framework to an ORM. I
>>> didn't have any experience with java related ORMs, but I've done quite
>> lots
>>> of works with Django and LINQ. So can you add me if this project is
>> started?
>>> 
>>> Thanks
>>> Alex Ough
>>> 
>>> 
>>> On Fri, Nov 22, 2013 at 7:06 AM, Daan Hoogland >> wrote:
>>> 
 Had a quick look, it looks alright. One question/doubt: will we tie
 ourselves more to MySQL if we code SQL more directly instead of
 abstracting away from it, so we can leave the DB choice to the operator in
 the future?
 
 On Thu, Nov 21, 2013 at 7:03 AM, Darren Shepherd
  wrote:
> I've done a lot of analysis on the data access layer, but just haven't
 had time to put together a discuss/recommendation.  In the end I'd
>> propose
 we move to jOOQ.  It's an excellent framework that will be very natural
>> to
 the style of data access that CloudStack uses and we can slowly migrate
>> to
 it.  I've hacked up some code and proven that I can get the two
>> frameworks
 to seamlessly interoperate.  So you can select from a custom DAO and
>> commit
 with jOOQ or vice versa.  Additionally jOOQ will work with the existing
 pojos we have today.
> 
> Check out jOOQ and let me know what you think of it.  I know for most
 people the immediate thought would be to move to JPA, but the way we
 managed "session" is completely incompatible with JPA and will require
 constant merging.  Additionally mixing our custom DAO framework with a
>> JPA
 solution looks darn near impossible.
> 
> Darren
> 
>> On Nov 11, 2013, at 8:33 PM, Laszlo Hornyak >> 
 wrote:
>> 
>> Hi,
>> 
>> What are the general directions with the persistence system?
>> What I know about it is:
>> - It works with JPA (javax.persistence) annotations
>> - But rather than integrating a general JPA implementation such us
>> hibernate, eclipselink or OpenJPA it uses its own query generator and
 DAO
>> classes to generate SQL statements.
>> 
>> Questions:
>> - Are you planing to use JPA? What is the motivation behind the custom
 DAO
>> system?
>> - There are some capabilities in the DAO system that are not used.
 Should
>> these capabilities be maintained or is it ok to remove the support for
>> unused features in small steps?
>> 
>> --
>> 
>> EOF
 
 
>> 
> 
> 
> 
> -- 
> 
> EOF
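For readers unfamiliar with jOOQ, its style looks roughly like the following. This fragment is illustrative only: it needs org.jooq on the classpath and a live `connection`, and the table and column names are hypothetical, but it shows why the DSL is a natural fit for a codebase that already writes explicit queries rather than relying on an ORM session.

```java
// Illustrative, non-runnable fragment; requires org.jooq on the classpath.
import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

DSLContext ctx = DSL.using(connection, SQLDialect.MYSQL);
Result<Record> hosts = ctx.select()
                          .from(table("host"))
                          .where(field("cluster_id").eq(clusterId))
                          .fetch();
```

The query stays explicit SQL, just type-checked and composable, which is close in spirit to the custom DAO layer it would replace.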



Re: [DISCUSS] Reporting tool for feeding back zone, pod and cluster information

2013-11-25 Thread Sebastien Goasguen

On Nov 23, 2013, at 5:01 AM, Wido den Hollander  wrote:

> Hi,
> 
> I discussed this during CCCEU13 with David, Chip and Hugo, and I promised I'd 
> put it on the ML.
> 
> My idea is to come up with a reporting tool that users can run daily, which 
> feeds us back information about how they are using CloudStack:
> 
> * Hypervisors
> * Zone sizes
> * Cluster sizes
> * Primary Storage sizes and types
> * Same for Secondary Storage
> * Number of management servers
> * Version
> 
> This would of course be anonymized; we would send one file with JSON data 
> back to our servers, where we can process it to produce statistics.
> 
> The tool will obviously be open source and participating in this will be 
> opt-in only.
> 
> We currently don't know what's running out there, so that would be great to 
> know.
> 
> Some questions remain:
> * Who is going to maintain the data?
> * Who has access to the data?
> * How long do we keep it?
> * Do we do logging of IPs sending the data to us?
> 
> I certainly do not want to spy on our users, so that's why it's opt-in and 
> the tool should be part of the main repo, but I think that for us as a 
> project it's very useful to know what our users are doing with CloudStack.
> 
> Comments?
> 

+1

> Wido
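One possible shape for the opt-in JSON report described above — every field name and value here is illustrative, not a defined schema:

```json
{
  "version": "4.2.0",
  "management_servers": 2,
  "hypervisors": ["XenServer", "KVM"],
  "zones": [
    { "pods": 2, "clusters": 4, "hosts": 31 }
  ],
  "primary_storage": [ { "type": "NFS", "size_gb": 20480 } ],
  "secondary_storage": [ { "type": "NFS", "size_gb": 8192 } ]
}
```

Keeping the payload to aggregate counts and sizes like this, with no hostnames or addresses, would address the anonymization concern directly.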



Re: [PROPOSAL] User VM HA using native XS HA capabilities

2013-11-25 Thread Koushik Das
Thanks for the comments David. See inline.

-Koushik

On 22-Nov-2013, at 7:31 PM, David Nalley  wrote:

> Hi Koushik:
> 
> In general I like the idea. A couple of comments:
> 
> The upgrade section has a manual step for enabling HA manually per
> instance. Why a manual step? Why is CloudStack not checking the
> desired state (e.g. if HA is enabled in the instance service group)
> with the actual state (what is reflected on the hypervisor) and
> changing it when appropriate.
> 
> We are already going to need to reconcile the state (things like host
> the instance is running on will change for instance) with reality
> already - so it seems like making this an automatic step wouldn't be
> much extra effort and would scale far easier.

[Koushik] Are you suggesting that, as part of the upgrade process, all impacted 
VMs should be automatically updated? If so, yes, it can be done. For now I am 
keeping it manual; in the future the process can be automated.

> 
> Are there plans on deprecating the custom HA solution, or will it be
> supported forever? If the plan is to deprecate, lets go ahead and
> start planning that/announcing/etc and not let it fall into disrepair.

[Koushik] That's the plan going forward. For the next release both options will 
be there; maybe after that, the custom HA solution can be removed for XS 6.2 and 
above.

> 
> --David
> 
> On Fri, Nov 22, 2013 at 7:27 AM, Koushik Das  wrote:
>> Initial draft of the FS 
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/User+VM+HA+using+native+XS+HA+capabilities
>> 
>> -Koushik
>> 
>> On 21-Nov-2013, at 9:59 AM, Koushik Das  wrote:
>> 
>>> Cloudstack relies on custom HA logic for user VMs running on Xenserver. The 
>>> reason for doing it like this may be due to the fact that native HA 
>>> capabilities in XS were not mature enough in the early days. Also in 
>>> the custom HA logic, Cloudstack has to correctly determine the state of a 
>>> VM from the hypervisor before it can take any action. In case there are any 
>>> issues in determining the state, HA mechanism can get impacted. Since the 
>>> hypervisor best knows the state of the VM it is a better approach to rely 
>>> on native HA capabilities.
>>> 
>>> The idea is to rely on native HA capabilities for user VMs from XS 6.2 
>>> onwards. HA for system VMs would still be based on application logic. For 
>>> sake of backward compatibility the earlier option will be there as well and 
>>> there will be a choice to use any one option.
>>> 
>>> The additional requirement for this is to pre-configure native HA on a 
>>> Xenserver cluster before deploying any user VMs as documented here [1].
>>> 
>>> I have created a ticket in Jira [2]. I will post the FS for this shortly.
>>> 
>>> Thanks,
>>> Koushik
>>> 
>>> [1] 
>>> http://support.citrix.com/servlet/KbServlet/download/34969-102-704897/reference.pdf
>>>  (refer section 3.8)
>>> [2] https://issues.apache.org/jira/browse/CLOUDSTACK-5203
>>> 
>>> 
>> 



Re: [Doc] Validation Issue in Release Notes

2013-11-25 Thread Abhinandan Prateek
There are some issues with the 4.2/master docs. 4.2 is a priority.

Anyone who fixes the build gets a special mention in the release notes!
Now, can we have someone fix this?

-abhi

On 23/11/13 4:20 pm, "Radhika Puthiyetath"
 wrote:

>Hi,
>
>Sorry for cross-posting.
>
>While validating the Release Notes using publican, there is a validity
>error that I am not able to resolve.
>
>The command used is:
>
>
>Publican build -format=test -langs=en-us -config=publican.cfg.
>
>
>The error I am getting is the following:
>
>Release_Notes.xml:3509: validity error : Element listitem content does
>not follow the DTD, expecting
>(calloutlist | glosslist | bibliolist | itemizedlist | orderedlist |
>segmentedlist | simplelist | variablelist | caution | important | note |
>tip | warning | literallayout | programlisting | programlistingco |
>screen | screenco | screenshot | synopsis | cmdsynopsis | funcsynopsis |
>classsynopsis | fieldsynopsis | constructorsynopsis | destructorsynopsis |
>methodsynopsis | formalpara | para | simpara | address | blockquote |
>graphic | graphicco | mediaobject | mediaobjectco | informalequation |
>informalexample | informalfigure | informaltable | equation | example |
>figure | table | msgset | procedure | sidebar | qandaset | task | anchor |
>bridgehead | remark | highlights | abstract | authorblurb | epigraph |
>indexterm | beginpage)+, got (para programlisting CDATA)
>
>The issue is that the CDATA cannot be located in the file. If it is
>removed, we can successfully build the file. The issue persists on both
>Master and 4.2
>
>Thanks in advance
>
>-Radhika
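The DTD error above means a raw CDATA text node ended up directly inside a <listitem>, as a sibling of the para and programlisting, which the listitem content model forbids. The usual fix is to move the literal text inside the programlisting (or wrap it in a para). A hypothetical before/after sketch, with a made-up command for illustration:

```xml
<!-- Invalid: stray CDATA as a direct child of listitem -->
<listitem>
  <para>Restart the management server:</para>
  <programlisting>service cloudstack-management restart</programlisting>
  <![CDATA[stray text here triggers "got (para programlisting CDATA)"]]>
</listitem>

<!-- Valid: all text lives inside para or programlisting -->
<listitem>
  <para>Restart the management server:</para>
  <programlisting><![CDATA[service cloudstack-management restart]]></programlisting>
</listitem>
```

Searching Release_Notes.xml around line 3509 for a `]]>` that falls outside a programlisting should locate the offending node.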



Re: [Doc] Validation Issue in Release Notes

2013-11-25 Thread Sebastien Goasguen

On Nov 25, 2013, at 5:40 AM, Abhinandan Prateek  
wrote:

> There are some issues with the 4.2/master docs. 4.2 is a priority.
> 
> Anyone who fixes the build gets a special mention in the release notes !
> Now can we have someone fix this.
> 

I can't even locate: en-US/Revision_History.xml


> -abhi
> 
> On 23/11/13 4:20 pm, "Radhika Puthiyetath"
>  wrote:
> 
>> Hi,
>> 
>> Sorry for cross-posting.
>> 
>> While validating the Release Notes using publican, there is a validity
>> error that I am not able to resolve.
>> 
>> The command used is:
>> 
>> 
>> Publican build -format=test -langs=en-us -config=publican.cfg.
>> 
>> 
>> The error I am getting is the following:
>> 
>> Release_Notes.xml:3509: validity error : Element listitem content does
>> not follow the DTD, expecting
>> (calloutlist | glosslist | bibliolist | itemizedlist | orderedlist |
>> segmentedlist | simplelist | variablelist | caution | important | note |
>> tip | warning | literallayout | programlisting | programlistingco |
>> screen | screenco | screenshot | synopsis | cmdsynopsis | funcsynopsis |
>> classsynopsis | fieldsynopsis | constructorsynopsis | destructorsynopsis |
>> methodsynopsis | formalpara | para | simpara | address | blockquote |
>> graphic | graphicco | mediaobject | mediaobjectco | informalequation |
>> informalexample | informalfigure | informaltable | equation | example |
>> figure | table | msgset | procedure | sidebar | qandaset | task | anchor |
>> bridgehead | remark | highlights | abstract | authorblurb | epigraph |
>> indexterm | beginpage)+, got (para programlisting CDATA)
>> 
>> The issue is that the CDATA cannot be located in the file. If it is
>> removed, we can successfully build the file. The issue persists on both
>> Master and 4.2
>> 
>> Thanks in advance
>> 
>> -Radhika
> 



Re: Review Request 15351: Fixing bugs from Coverity related to Dereferenced Null after check and as return value.

2013-11-25 Thread Wilder Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15351/
---

(Updated Nov. 25, 2013, 11:02 a.m.)


Review request for cloudstack and Hugo Trippaers.


Changes
---

Rebasing the patch with latest updates from Alex.


Repository: cloudstack-git


Description
---

Fixing Coverity bugs with IDs: cv_1125361, cv_1125357, cv_1125356, cv_1125355, 
cv_1117769, cv_1125354, cv_1125353, cv_1125346, cv_1125352, cv_1125360 


Diffs (updated)
-

  agent/src/com/cloud/agent/AgentShell.java 936e3cd 
  
plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ServiceManagerImpl.java
 ca44757 
  
plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/model/VirtualMachineModel.java
 32d5d93 
  
plugins/user-authenticators/ldap/src/org/apache/cloudstack/api/command/LdapImportUsersCmd.java
 129392e 
  server/src/com/cloud/server/ConfigurationServerImpl.java cfc95ca 
  utils/src/com/cloud/utils/nio/Link.java 3b30053 

Diff: https://reviews.apache.org/r/15351/diff/


Testing
---

All tests passed during build.

[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 10:19.357s
[INFO] Finished at: Fri Nov 08 15:06:34 CET 2013
[INFO] Final Memory: 68M/163M


Thanks,

Wilder Rodrigues



RE: Unable add instance after update on CS4.2

2013-11-25 Thread Sanjay Tripathi
Good to hear that, Diego!

If you think you followed the steps correctly, you can file a bug for this 
issue here: https://issues.apache.org/jira/browse/CLOUDSTACK

--Sanjay

From: Diego Spinola Castro [mailto:spinolacas...@gmail.com]
Sent: Monday, November 25, 2013 5:28 PM
To: Sanjay Tripathi
Subject: Re: Unable add instance after update on CS4.2

Hi Sanjay.
I figured out what was wrong.

The total_capacity field of the op_host_capacity table wasn't updated when I 
changed the overprovisioning factor. I just updated the value and everything 
works fine.




2013/11/25 Sanjay Tripathi 
mailto:sanjay.tripa...@citrix.com>>
Hi Diego,

From the listClusters response, it looks like your cluster doesn't have enough 
CPU capacity to deploy a new VM.

Type = 1 is for CPU resource.
{
  "capacitytotal": 576000,
  "capacityused": 701250,
  "percentused": "121.74",
  "type": 1
}
--Sanjay
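The arithmetic behind that snippet, using the numbers from the response above (the 2.0 overprovisioning factor below is only an example value, not Diego's actual setting):

```java
public class CpuCapacity {
    // percentused as reported by listClusters: used / total, as a percentage.
    static double percentUsed(long usedMhz, long totalMhz) {
        return 100.0 * usedMhz / totalMhz;
    }

    // With CPU overprovisioning, the effective total is the raw capacity
    // times the factor; once op_host_capacity.total_capacity reflects it
    // (Diego's fix), the same usage drops back below 100%.
    static long effectiveTotal(long rawMhz, double overprovisioningFactor) {
        return (long) (rawMhz * overprovisioningFactor);
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%%%n", percentUsed(701250, 576000)); // prints 121.74%
        System.out.println(effectiveTotal(576000, 2.0));            // prints 1152000
    }
}
```

With the updated total of 1152000, 701250 used works out to about 60.9%, which is why VM deployment succeeds again after the fix.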

> -Original Message-
> From: Diego Spinola Castro 
> [mailto:spinolacas...@gmail.com]
> Sent: Monday, November 25, 2013 1:11 AM
> To: us...@cloudstack.apache.org
> Subject: Unable add instance after update on CS4.2
>
> Hi guys, i'm running into a issue on  a vmware 4.1 cluster after update CS to
> 4.2.
>
> When i try create a instance i get the following error:
>
> 2013-11-24 17:37:43,817 DEBUG [cloud.api.ApiServlet]
> (catalina-exec-11:null) ===START===  187.37.35.156 -- GET
>  command=deployVirtualMachine&zoneId=83a1d5a6-6534-4600-b8b4-
> c1bd240eb711&templateId=227&hypervisor=VMware&serviceOfferingId=32
> &networkIds=0749b01c-9dbe-4008-a388-
> c6cb82988852&response=json&sessionkey=%2BrE4mGxi%2Bnqu2r7FxFj8QE
> V9%2FFA%3D&_=1385321825263
> 2013-11-24 17:37:43,858 DEBUG [cloud.api.ApiDispatcher]
> (catalina-exec-11:null) InfrastructureEntity name
> is:com.cloud.offering.ServiceOffering
> 2013-11-24 17:37:43,858 DEBUG [cloud.api.ApiDispatcher]
> (catalina-exec-11:null) ControlledEntity name
> is:com.cloud.template.VirtualMachineTemplate
> 2013-11-24 17:37:43,863 DEBUG [cloud.api.ApiDispatcher]
> (catalina-exec-11:null) ControlledEntity name is:com.cloud.network.Network
> 2013-11-24 17:37:43,869 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,873 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,878 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to Ntwk[358|Guest|6] granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,881 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to
> Tmpl[227-OVA-227-2-150da313-1018-3320-becd-9dc003c96374 granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,884 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,913 DEBUG [cloud.user.AccountManagerImpl]
> (catalina-exec-11:null) Access to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] granted to
> Acct[8bcd7766-9361-40a9-b660-eb2b0694d7a4-diego] by
> DomainChecker_EnhancerByCloudStack_560d9237
> 2013-11-24 17:37:43,964 DEBUG [cloud.network.NetworkModelImpl]
> (catalina-exec-11:null) Service SecurityGroup is not supported in the network
> id=358
> 2013-11-24 17:37:44,144 DEBUG [cloud.vm.UserVmManagerImpl]
> (catalina-exec-11:null) Allocating in the DB for vm
> 2013-11-24 17:37:44,224 DEBUG [cloud.vm.VirtualMachineManagerImpl]
> (catalina-exec-11:null) Allocating entries for VM:
> VM[User|ab62df84-e398-4e02-8a55-9319ed694998]
> 2013-11-24 17:37:44,238 DEBUG [cloud.vm.VirtualMachineManagerImpl]
> (catalina-exec-11:null) Allocating nics for VM[User|ab62df84-e398-4e02-
> 8a55-9319ed694998]
> 2013-11-24 17:37:44,240 DEBUG [cloud.network.NetworkManagerImpl]
> (catalina-exec-11:null) Allocating nic for vm VM[User|ab62df84-e398-4e02-
> 8a55-9319ed694998] in network Ntwk[358|Guest|6] with requested profile
> NicProfile[0-0-null-null-null
> 2013-11-24 17:37:44,358 DEBUG [cloud.network.NetworkModelImpl]
> (catalina-exec-11:null) Service SecurityGroup is not supported in the network
> id=358
> 2013-11-24 17:37:44,361 DEBUG [cloud.vm.VirtualMachineManagerImpl]
> (catalina-exec-11:null) Allocating disks for VM[User|ab62df84-e398-4e02-
> 8a55-9319ed694998]
> 2013-11-24 17:37:44,389 DEBUG [cloud.vm.VirtualMachineManagerImpl]
> (catalina-exec-11:null) Allocation completed for VM:
> VM[Us

Re: persistence layer

2013-11-25 Thread Travis Graham
MariaDB is a drop-in replacement for MySQL, so it can be used with or without 
the jOOQ changes.

Travis

On Nov 25, 2013, at 5:20 AM, Sebastien Goasguen  wrote:

> 
> On Nov 23, 2013, at 4:13 PM, Laszlo Hornyak  wrote:
> 
>> Wouldn't it be a lot of work to move to JOOQ? All queries will have to be
>> rewritten.
>> 
>> 
> 
> A non-Java developer question: will that help support different databases, 
> like moving to MariaDB?
> 
>> 
>> On Sat, Nov 23, 2013 at 11:32 AM, Darren Shepherd <
>> darren.s.sheph...@gmail.com> wrote:
>> 
>>> Going to an ORM is not as simple as you would expect.  First, one can make
>>> a strong argument that ORM is not the right solution, but that can be
>>> ignored right now.
>>> 
>>> You have to look at the context of ACS and figure out what technology is
>>> the most practical to adopt.  ACS does not have ORM today.  It has a custom
>>> query api, object mapping, and change tracking for simple CRUD.   Honestly
>>> these features are quite sufficient for ACS needs.  The problem, and why we
>>> should change it, is that the current framework is custom, limited in
>>> functionality, undocumented, and generally a barrier to people developing
>>> on ACS.  So jOOQ is a somewhat similar approach but it is just far far
>>> better, has a community of users that have developed over 3-4 years, is
>>> well documented, and honestly just a very well thought out framework.
>>> 
>>> Darren
>>> 
 On Nov 22, 2013, at 6:50 PM, Alex Ough  wrote:
 
 All,
 
 I'm very interested in converting the current DAO framework to an ORM. I
 didn't have any experience with java related ORMs, but I've done quite
>>> lots
 of works with Django and LINQ. So can you add me if this project is
>>> started?
 
 Thanks
 Alex Ough
 
 
 On Fri, Nov 22, 2013 at 7:06 AM, Daan Hoogland >>> wrote:
 
> Had a quick look; it looks alright. One question/doubt: will we tie
> ourselves more to MySQL if we code SQL more directly instead of
> abstracting away from it, so that we can leave the DB choice to the operator
> in the future?
> 
> On Thu, Nov 21, 2013 at 7:03 AM, Darren Shepherd
>  wrote:
>> I've done a lot of analysis on the data access layer, but just haven't
> had time to put together a discuss/recommendation.  In the end I'd
>>> propose
> we move to jOOQ.  It's an excellent framework that will be very natural
>>> to
> the style of data access that CloudStack uses and we can slowly migrate
>>> to
> it.  I've hacked up some code and proven that I can get the two
>>> frameworks
> to seamlessly interoperate.  So you can select from a custom DAO and
>>> commit
> with jOOQ or vice versa.  Additionally jOOQ will work with the existing
> pojos we have today.
>> 
>> Check out jOOQ and let me know what you think of it.  I know for most
> people the immediate thought would be to move to JPA, but the way we
> managed "session" is completely incompatible with JPA and will require
> constant merging.  Additionally mixing our custom DAO framework with a
>>> JPA
> solution looks darn near impossible.
>> 
>> Darren
>> 
>>> On Nov 11, 2013, at 8:33 PM, Laszlo Hornyak >>> 
> wrote:
>>> 
>>> Hi,
>>> 
>>> What are the general directions with the persistence system?
>>> What I know about it is:
>>> - It works with JPA (javax.persistence) annotations
>>> - But rather than integrating a general JPA implementation such us
>>> hibernate, eclipselink or OpenJPA it uses its own query generator and
> DAO
>>> classes to generate SQL statements.
>>> 
>>> Questions:
>>> - Are you planning to use JPA? What is the motivation behind the custom
> DAO
>>> system?
>>> - There are some capabilities in the DAO system that are not used.
> Should
>>> these capabilities be maintained or is it ok to remove the support for
>>> unused features in small steps?
>>> 
>>> --
>>> 
>>> EOF
> 
> 
>>> 
>> 
>> 
>> 
>> -- 
>> 
>> EOF
> 



Review Request 15832: enable custom offering support for scalevm

2013-11-25 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15832/
---

Review request for cloudstack and Koushik Das.


Bugs: CLOUDSTACK-5161
https://issues.apache.org/jira/browse/CLOUDSTACK-5161


Repository: cloudstack-git


Description
---

enable scaling of a vm using custom offering
CLOUDSTACK-5161


Diffs
-

  api/src/org/apache/cloudstack/api/ApiConstants.java ea3137d 
  
api/src/org/apache/cloudstack/api/command/admin/systemvm/ScaleSystemVMCmd.java 
212f129 
  
api/src/org/apache/cloudstack/api/command/admin/systemvm/UpgradeSystemVMCmd.java
 738b15d 
  api/src/org/apache/cloudstack/api/command/user/vm/ScaleVMCmd.java 44f5575 
  api/src/org/apache/cloudstack/api/command/user/vm/UpgradeVMCmd.java 161131b 
  engine/api/src/com/cloud/vm/VirtualMachineManager.java 9d19cf5 
  engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java 189c2ba 
  engine/schema/src/com/cloud/service/dao/ServiceOfferingDaoImpl.java 917eaef 
  server/src/com/cloud/server/ManagementServerImpl.java 5023e11 
  server/src/com/cloud/vm/UserVmManager.java 485e633 
  server/src/com/cloud/vm/UserVmManagerImpl.java ca10b06 
  server/test/com/cloud/vm/UserVmManagerTest.java 0a3ed3c 

Diff: https://reviews.apache.org/r/15832/diff/


Testing
---

Tested on master.


Thanks,

bharat kumar



Re: Review Request 15455: Fixing all Coverity bugs on file Upgrade2214to30 related to resource leak. Now the statements closed and nullified before a new assignment.

2013-11-25 Thread Wilder Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15455/
---

(Updated Nov. 25, 2013, 2:26 p.m.)


Review request for cloudstack and Hugo Trippaers.


Changes
---

adding new patch based on Alex changes


Repository: cloudstack-git


Description
---

Coverity bug #1116754 Resource leak on an exceptional path.
The system resource will not be reclaimed and reused, reducing the future 
availability of the resource.
In 
com.?cloud.?upgrade.?dao.?Upgrade2214to30.?setupPhysicalNetworks(java.?sql.?Connection):
 Leak of a system resource on an exception path (probably error handling) 
(CWE-404)

Since the file contained the same implementation all over the place - in many 
different methods - I already updated everything.
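The pattern behind this class of fix can be illustrated without JDBC. A try-with-resources block closes its resource even when the body throws, which is exactly the exception-path leak Coverity flags; this is a generic sketch, not the actual Upgrade2214to30 code:

```java
public class CloseOnException {
    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Returns true when the resource was closed despite the exception,
    // demonstrating that the exception path no longer leaks.
    static boolean runFailingWork() {
        Resource tracked = new Resource();
        try (Resource r = tracked) {
            throw new IllegalStateException("simulated failure mid-statement");
        } catch (IllegalStateException expected) {
            // With a bare `new Resource()` and no finally/try-with-resources,
            // this is the path on which the resource would leak.
        }
        return tracked.closed;
    }

    public static void main(String[] args) {
        System.out.println(runFailingWork()); // true
    }
}
```

The same reasoning applies to each PreparedStatement/ResultSet pair in the upgrade methods: either close them in a finally block or scope them with try-with-resources.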


Diffs
-

  engine/schema/src/com/cloud/upgrade/dao/Upgrade2214to30.java 48b83b4 

Diff: https://reviews.apache.org/r/15455/diff/


Testing
---

All tests passed during build and I also applied the patch to a different 
branch, based on Master, and built the project: all passed.


File Attachments (updated)


new_diff
  
https://reviews.apache.org/media/uploaded/files/2013/11/25/aabab4c6-902d-4b37-a60a-7b52b2db13c2__fix_1116754.patch


Thanks,

Wilder Rodrigues



Re: Review Request 15455: Fixing all Coverity bugs on file Upgrade2214to30 related to resource leak. Now the statements closed and nullified before a new assignment.

2013-11-25 Thread Wilder Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15455/
---

(Updated Nov. 25, 2013, 2:32 p.m.)


Review request for cloudstack and Hugo Trippaers.


Changes
---

Not the right way to do that.


Repository: cloudstack-git


Description
---

Coverity bug #1116754 Resource leak on an exceptional path.
The system resource will not be reclaimed and reused, reducing the future 
availability of the resource.
In 
com.?cloud.?upgrade.?dao.?Upgrade2214to30.?setupPhysicalNetworks(java.?sql.?Connection):
 Leak of a system resource on an exception path (probably error handling) 
(CWE-404)

Since the file contained the same implementation all over the place - in many 
different methods - I already updated everything.


Diffs
-

  engine/schema/src/com/cloud/upgrade/dao/Upgrade2214to30.java 48b83b4 

Diff: https://reviews.apache.org/r/15455/diff/


Testing
---

All tests passed during build and I also applied the patch to a different 
branch, based on Master, and built the project: all passed.


Thanks,

Wilder Rodrigues



Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Ashutosh Kelkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/
---

Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.


Bugs: CLOUDSTACK-5257
https://issues.apache.org/jira/browse/CLOUDSTACK-5257


Repository: cloudstack-git


Description
---

The test case was failing due to an issue in the ACL rule. The ACL rule was created 
for the TCP protocol, while the connection to the outside world was checked using 
ping. In this case the ICMP protocol should be used in the ACL rule, as ping uses 
ICMP.
Also corrected the port numbers and cleaned up the code.


Diffs
-

  test/integration/component/test_vpc_vms_deployment.py baefa55 

Diff: https://reviews.apache.org/r/15833/diff/


Testing
---

Tested locally on XenServer advanced setup.

Log:
test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test deploy VMs in VPC networks ... skipped 'Skip'
test_02_deploy_vms_delete_network 
(test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test deploy VMs in VPC networks and delete one of the network ... skipped 'Skip'
test_03_deploy_vms_delete_add_network 
(test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test deploy VMs, delete one of the network and add another one ... skipped 
'Skip'
test_04_deploy_vms_delete_add_network_noLb 
(test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test deploy VMs, delete one network without LB and add another one ... skipped 
'Skip'
test_05_create_network_max_limit (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test create networks in VPC upto maximum limit for hypervisor ... skipped 'Skip'
test_06_delete_network_vm_running 
(test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test delete network having running instances in VPC ... skipped 'Skip'
test_07_delete_network_with_rules 
(test_vpc_vms_deployment_fixed.TestVMDeployVPC)
Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
'Skip'

--
Ran 7 tests in 5.907s

OK (skipped=7)


Thanks,

Ashutosh Kelkar



Review Request 15834: CLOUDSTACK-4737: Root volume metering

2013-11-25 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15834/
---

Review request for cloudstack and Kishan Kavala.


Bugs: CLOUDSTACK-4737
https://issues.apache.org/jira/browse/CLOUDSTACK-4737


Repository: cloudstack-git


Description
---

CLOUDSTACK-4737: Root volume metering


Diffs
-

  engine/schema/src/com/cloud/event/dao/UsageEventDetailsDaoImpl.java a4382c4 
  engine/schema/src/com/cloud/usage/UsageVMInstanceVO.java 2fe346e 
  setup/db/db/schema-421to430.sql 8be0fb1 
  usage/src/com/cloud/usage/UsageManagerImpl.java 1ee21c9 

Diff: https://reviews.apache.org/r/15834/diff/


Testing
---


Thanks,

Harikrishna Patnala



Re: Review Request 15647: Fixing coverity issues related to resource leak on FileInputStream being created anonymously.

2013-11-25 Thread Wilder Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15647/
---

(Updated Nov. 25, 2013, 3:15 p.m.)


Review request for cloudstack and Hugo Trippaers.


Changes
---

rebasing my changes on Alex Huang's changes (which removed trailing whitespace)


Repository: cloudstack-git


Description
---

Fixing coverity issues related to resource leak on FileInputStream being 
created anonymously.

This patch fixed the following Coverity issues:

cv_1116497
cv_1116681
cv_1116694
cv_1116567
cv_1116495
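The anonymous-stream leak named in the title, and its fix, can be sketched generically (this is an illustration of the pattern, not the patched CloudStack code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class LoadProps {
    // Leaky form Coverity flags: props.load(new FileInputStream(file)) — the
    // stream is anonymous, so nothing can close it if load() throws.
    // Fixed form: name the stream and let try-with-resources close it.
    static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".properties");
        Files.write(tmp, "host=localhost".getBytes());
        System.out.println(load(tmp).getProperty("host")); // localhost
        Files.delete(tmp);
    }
}
```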


Diffs (updated)
-

  awsapi/src/com/cloud/bridge/service/EC2RestServlet.java 5c56e9d 
  awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java 
deb886f 
  awsapi/src/com/cloud/bridge/service/core/ec2/EC2Engine.java 59abca0 
  framework/cluster/src/com/cloud/cluster/ClusterManagerImpl.java 3e7138f 
  services/console-proxy/server/src/com/cloud/consoleproxy/ConsoleProxy.java 
0d28e09 

Diff: https://reviews.apache.org/r/15647/diff/


Testing
---

A full build was executed on top of the branch created for these changes. 
After committing, a brand-new branch was created from master and 
patched with this patch. Everything worked fine.

No new feature was added.


Thanks,

Wilder Rodrigues



Re: Enabling AMQP/RabbitMQ Events on master

2013-11-25 Thread David Grizzanti
Murali,

Would you be able to comment on how to enable the event message bus 
notifications on master?

Thanks!

-- 
David Grizzanti
Software Engineer
Sungard Availability Services

e: david.grizza...@sungard.com
w: 215.446.1431
c: 570.575.0315

On November 21, 2013 at 12:35:31 PM, Alena Prokharchyk 
(alena.prokharc...@citrix.com) wrote:

Murali might help you with that as he developed the feature.  

-Alena.  

From: Min Chen <min.c...@citrix.com>  
Date: Thursday, November 21, 2013 9:30 AM  
To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>, Alena Prokharchyk <alena.prokharc...@citrix.com>  
Cc: Darren Shepherd <darren.sheph...@citrix.com>  
Subject: Re: Enabling AMQP/RabbitMQ Events on master  

CC Darren here, I am having the same question on current master.  

Thanks  
-min  

On 11/21/13 5:00 AM, "David Grizzanti" <david.grizza...@sungard.com> wrote:  

Alena,  

Do you or anyone else on the list have any updated information about  
enabling the events on master?  

Thanks!  

On Thursday, November 7, 2013, David Grizzanti wrote:  

Alena,  

I don't think these steps will work on master (I'm not installing CloudStack  
packages; I'm building from source). The componentContext XML file  
doesn't seem to exist anymore since some of the Spring refactoring was  
done.  

Thanks  


On Thu, Nov 7, 2013 at 12:42 PM, Alena Prokharchyk <  
alena.prokharc...@citrix.com> wrote:  

David,  

Here are the instructions that I've got from one of the CS QA  
engineers,  
hope it helps.  

FS -  
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Admi  
n_Guide/events.html#event-framework  



Test cases relating to this feature was covered as part of Regions  
Feature  
testing -  
https://cwiki.apache.org/confluence/download/attachments/30757955/Regions  
-Test-Execution-42.xlsx  





Steps to set up RabbitMQ Server:  



Have a RabbitMQ server set up.  

Enable rabbitmq_management plugin  

C:\Program Files\RabbitMQ  
Server\rabbitmq_server-3.0.3\sbin>rabbitmq-plugins enable  
rabbitmq_management  

Restart RabbitMQ service.  

In management server :  
Added the following in  
/usr/share/cloudstack-management/webapps/client/WEB-INF/classes/component  
Context.xml  

[The <bean> XML elements were stripped by the mailing-list archive.]

Restart management server.  
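The XML snippet for componentContext.xml was stripped by the list archive. From the CloudStack admin guide, the eventNotificationBus bean looks roughly like the following; the server address, credentials, and exchange name are illustrative and must match your RabbitMQ setup:

```xml
<bean id="eventNotificationBus"
      class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
    <property name="name" value="eventNotificationBus"/>
    <property name="server" value="127.0.0.1"/>
    <property name="port" value="5672"/>
    <property name="username" value="guest"/>
    <property name="password" value="guest"/>
    <property name="exchange" value="cloudstack-events"/>
</bean>
```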


-Alena.  

From: David Grizzanti <david.grizza...@sungard.com>  
Reply-To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>  
Date: Thursday, November 7, 2013 5:04 AM  
To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>  
Subject: Enabling AMQP/RabbitMQ Events on master  

Hi,  

I was looking for some help in enabling the AMQP/RabbitMQ events in  
CloudStack. I'm familiar with enabling these events in 4.2, however,  
I'm  
not all the familiar with Spring and given the new modularized changes  
I'm  
not really sure where the XML snippet belongs for the  
eventNotificationBus.  
Previously I had been placing this in applicationContext.  



--  
David Grizzanti  
Software Engineer  
Sungard Availability Services  

e: david.grizza...@sungard.com  
w: 215.446.1431  
c: 570.575.0315  




Re: Review Request 15508: Make sure that if the file does not exist an Exception is thrown and that once it exists it is also closed after the properties are loaded.

2013-11-25 Thread Wilder Rodrigues

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15508/
---

(Updated Nov. 25, 2013, 3:32 p.m.)


Review request for cloudstack and Hugo Trippaers.


Changes
---

Rebasing my patch in order to get the latest changes from Alex Huang


Repository: cloudstack-git


Description
---

Make sure that if the file does not exist an Exception is thrown and that once 
it exists it is also closed after the properties are loaded.

fix for Coverity bug cv_1125364 Resource leak
The system resource will not be reclaimed and reused, reducing the future 
availability of the resource.
In 
org.?apache.?cloudstack.?network.?contrail.?management.?ManagementNetworkGuru.?configure(java.?lang.?String,
 java.?util.?Map): Leak of a system resource (CWE-404)


Diffs (updated)
-

  
plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ManagementNetworkGuru.java
 e86e98a 

Diff: https://reviews.apache.org/r/15508/diff/


Testing
---

A test branch was created, the patch was applied against the branch and a full 
build was executed. Everything is working fine. The changed class is tested by 
MockLocalNfsSecondaryStorageResource.


Thanks,

Wilder Rodrigues



Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread John Burwell
Darren,

I originally presented my thoughts on this subject at CCC13 [1].  
Fundamentally, I see CloudStack as having two distinct tiers — orchestration 
management and automation control.  The orchestration tier coordinates the 
automation control layer to fulfill user goals (e.g. create a VM instance, 
alter a network route, snapshot a volume, etc) constrained by policies defined 
by the operator (e.g. multi-tenancy boundaries, ACLs, quotas, etc).  This layer 
must always be available to take new requests, and to report the best available 
infrastructure state information.  Since execution of work is guaranteed on 
completion of a request, this layer may pend work to be completed when the 
appropriate devices become available.

The automation control tier translates logical units of work to underlying 
infrastructure component APIs.  Upon completion of unit of work’s execution, 
the state of a device (e.g. hypervisor, storage device, network switch, router, 
etc) matches the state managed by the orchestration tier at the time unit of 
work was created.  In order to ensure that the state of the underlying devices 
remains consistent, these units of work must be executed serially.  Permitting 
concurrent changes to resources creates race conditions that lead to resource 
overcommitment and state divergence.   A symptom of this phenomenon is the 
myriad of scripts operators write to “synchronize” state between the CloudStack 
database and their hypervisors.  Another is the example provided below: the 
rapid create-destroy, which can (and often does) leave dangling resources due to 
race conditions between the two operations.  

In order to provide reliability, CloudStack vertically partitions the 
infrastructure into zones (independent power source/network uplink combination) 
sub-divided into pods (racks).  Regions are largely notional at this time and, as 
such, are not partitions.  Between the user’s zone selection 
and our allocators’ distribution of resources across pods, the system attempts 
to distribute resources as widely as possible across these partitions to provide 
resilience against a variety of infrastructure failures (e.g. power loss, network 
uplink disruption, switch failures, etc).  To maximize this resilience, 
the control plane (orchestration + automation tiers) must be able to operate on all 
available partitions.  For example, if we have two (2) zones (A & B) and twenty 
(20) pods per zone, we should be able to take and execute work in Zone A when 
one or more pods is lost, as well as, when taking and executing work in Zone B 
when Zone B has failed.

CloudStack is an eventually consistent system in that the state reflected in 
the orchestration tier will (optimistically) differ from the state of the 
underlying infrastructure (managed by the automation tier).  Furthermore, the 
system has a partitioning model to provide resilience in the face of a variety 
of logical and physical failures.  However, the automation control tier 
requires strictly consistent operations.  Based on these definitions, the 
system appears to violate the CAP theorem [2] (Brewer!).  The separation of the 
system into two distinct tiers isolates these characteristics, but the boundary 
between them must be carefully implemented to ensure that the consistency 
requirements of the automation tier are not leaked to the orchestration tier.

To properly implement this boundary, I think we should split the orchestration 
and automation control tiers into separate physical processes communicating via 
an RPC mechanism — allowing the automation control tier to completely 
encapsulate its work distribution model.  In my mind, the tricky wicket is 
providing serialization and partition tolerance in the automation control tier. 
 Realistically, there are two options — explicit and implicit locking models.  
Explicit locking models employ an external coordination mechanism to coordinate 
exclusive access to resources (e.g. RDBMS lock pattern, ZooKeeper, Hazelcast, 
etc).  The challenge with this model is ensuring the availability of the 
locking mechanism in the face of partition — forcing CloudStack operators to 
ensure that they have deployed the underlying mechanism in a partition tolerant 
manner (e.g. don’t locate all of the replicas in the same pod, deploy a cluster 
per zone, etc).  Additionally, the durability introduced by these mechanisms 
inhibits the self-healing due to lock staleness.

In contrast, an implicit lock model structures the runtime execution model to 
provide exclusive access to a resource and model the partitioning scheme.  One 
such model is to provide a single work queue (mailbox) and consuming process 
(actor) per resource.  The orchestration tier provides a description of the 
partition and resource definitions to the automation control tier.  The 
automation control tier creates a supervisor per partition which in turn manage 
process creation per resource.  Therefore, process creation 

NullPointerException when invalid zone is passed into UsageEventUtils (CLOUDSTACK-5220)

2013-11-25 Thread David Grizzanti
Hi All,

I submitted a bug for null pointer exceptions I'm seeing when certain types of 
Usage Events are generated (when you have the event notification bus enabled).  
Initially I had submitted two separate bugs for individual events causing null 
pointer exceptions (CLOUDSTACK-5023 and CLOUDSTACK-5062).

The issue here is that the publishUsageEvents function assumes that a valid 
zoneId is passed in for it to use when generating the event.  However, in many 
cases the caller is passing in "0", which causes a null pointer exception in 
publishUsageEvents.  I realized after looking a bit more that in some cases, 
the zone may not be relevant to the event being generated (i.e. adding VPN 
users to a project).  So, I proposed a fix in CLOUDSTACK-5220 which will catch 
cases where a 0 is passed for the zoneId and leave off the zone when the usage 
event is generated.
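A minimal sketch of the proposed guard follows. The method name and return value here are hypothetical; the real change would live in the publishUsageEvents path:

```java
public class ZoneGuard {
    // Hypothetical sketch of the CLOUDSTACK-5220 fix: a zoneId of 0 means
    // "no zone applies" (e.g. adding VPN users to a project), so skip the
    // zone lookup instead of dereferencing the null result it would produce.
    static String eventDescription(long zoneId) {
        if (zoneId == 0) {
            return "usage event (no zone)";
        }
        return "usage event in zone " + zoneId;
    }

    public static void main(String[] args) {
        System.out.println(eventDescription(0)); // usage event (no zone)
        System.out.println(eventDescription(7)); // usage event in zone 7
    }
}
```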

Does this sound like an acceptable solution to everyone?

Thanks
-- 
David Grizzanti
Software Engineer
Sungard Availability Services

e: david.grizza...@sungard.com
w: 215.446.1431
c: 570.575.0315

[Discuss] AutoScaling.next in CloudStack

2013-11-25 Thread tuna
Hi guys,

At CCCEU13 I talked about the AutoScale-without-NetScaler feature, working with 
XenServer & XCP. For anyone who doesn't know about this feature, take a look at my 
slides here: 
http://www.slideshare.net/tuna20073882/autoscale-without-netscalerccceu13.

Chiradeep and I had a short talk after the presentation about how to improve 
the AutoScale feature in future. We agreed that:

+ We need to remove the Load Balancing feature from AutoScaling. That’s very simple 
to do.
+ We need to use SNMP for monitoring, not only at the instance level but also at 
the application level.
+ We should also support the KVM hypervisor well.

So I’m opening this thread for all of you to discuss how we design this 
feature, such as:
+ the technical side: how to integrate SNMP effectively into CloudStack. Where do 
we put the SNMP monitoring components in the infrastructure? etc.
+ the user experience: how users configure the feature with SNMP monitoring. I 
imagine users could choose which of the following items they need AutoScale for: 
application, protocol (TCP, UDP), port, bandwidth, disk, CPU and memory, etc.
+ autoscale actions: not only deploying or destroying VMs; we may also need to 
dynamically increase or decrease memory/CPU, NIC bandwidth, disk, …

Personally, I think we should aim for a complete autoscaling feature.

Cheers,

—Tuna

Re: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread Darren Shepherd
You bring up some interesting points.  I really need to digest this
further.  From a high level I think I agree, but there are a lot of implied
details of what you've said.

Darren


On Mon, Nov 25, 2013 at 8:39 AM, John Burwell  wrote:

> Darren,
>
> I originally presented my thoughts on this subject at CCC13 [1].
>  Fundamentally, I see CloudStack as having two distinct tiers —
> orchestration management and automation control.  The orchestration tier
> coordinates the automation control layer to fulfill user goals (e.g. create
> a VM instance, alter a network route, snapshot a volume, etc) constrained
> by policies defined by the operator (e.g. multi-tenacy boundaries, ACLs,
> quotas, etc).  This layer must always be available to take new requests,
> and to report the best available infrastructure state information.  Since
> execution of work is guaranteed on completion of a request, this layer may
> pend work to be completed when the appropriate devices become available.
>
> The automation control tier translates logical units of work to underlying
> infrastructure component APIs.  Upon completion of unit of work’s
> execution, the state of a device (e.g. hypervisor, storage device, network
> switch, router, etc) matches the state managed by the orchestration tier at
> the time unit of work was created.  In order to ensure that the state of
> the underlying devices remains consistent, these units of work must be
> executed serially.  Permitting concurrent changes to resources creates race
> conditions that lead to resource overcommitment and state divergence.   A
> symptom of this phenomenon are the myriad of scripts operators write to
> “synchronize” state between the CloudStack database and their hypervisors.
>  Another is the example provided below is the rapid create-destroy which
> can (and often does) leave dangling resources due to race conditions
> between the two operations.
>
> In order to provide reliability, CloudStack vertically partitions the
> infrastructure into zones (independent power source/network uplink
> combination) sub-divided into pods (racks).  At this time, regions are
> largely notional, as such, as are not partitions at this time.  Between the
> user’s zone selection and our allocators distribution of resources across
> pods, the system attempts to distribute resources widely as possible across
> these partitions to provide resilience against a variety infrastructure
> failures (e.g. power loss, network uplink disruption, switch failures,
> etc).  In order maximize this resilience, the control plane (orchestration
> + automation tiers) must be to operate on all available partitions.  For
> example, if we have two (2) zones (A & B) and twenty (20) pods per zone, we
> should be able to take and execute work in Zone A when one or more pods is
> lost, as well as, when taking and executing work in Zone B when Zone B has
> failed.
>
> CloudStack is an eventually consistent system in that the state reflected
> in the orchestration tier will (optimistically) differ from the state of
> the underlying infrastructure (managed by the automation tier).
>  Furthermore, the system has a partitioning model to provide resilience in
> the face of a variety of logical and physical failures.  However, the
> automation control tier requires strictly consistent operations.  Based on
> these definitions, the system appears to violate the CAP theorem [2]
> (Brewer!).  The separation of the system into two distinct tiers isolates
> these characteristics, but the boundary between them must be carefully
> implemented to ensure that the consistency requirements of the automation
> tier are not leaked to the orchestration tier.
>
> To properly implement this boundary, I think we should split the
> orchestration and automation control tiers into separate physical processes
> communicating via an RPC mechanism — allowing the automation control tier
> to completely encapsulate its work distribution model.  In my mind, the
> tricky wicket is providing serialization and partition tolerance in the
> automation control tier.  Realistically, there are two options — explicit
> and implicit locking models.  Explicit locking models employ an external
> coordination mechanism to coordinate exclusive access to resources (e.g.
> RDBMS lock pattern, ZooKeeper, Hazelcast, etc).  The challenge with this
> model is ensuring the availability of the locking mechanism in the face of
> partition — forcing CloudStack operators to ensure that they have deployed
> the underlying mechanism in a partition tolerant manner (e.g. don’t locate
> all of the replicas in the same pod, deploy a cluster per zone, etc).
>  Additionally, the durability introduced by these mechanisms inhibits
> self-healing when locks go stale.
>
> In contrast, an implicit lock model structures the runtime execution model
> to provide exclusive access to a resource and model the partitioning
> scheme.  One such model is to provide a single work queue (mai
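The single-work-queue idea being introduced here can be caricatured in a few lines of Java. This is an illustrative sketch only (the class and method names are invented, not CloudStack APIs): each resource gets an on-demand, single-threaded executor acting as its "mailbox", so all units of work for that resource execute serially without any explicit lock.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the implicit-locking "one work queue per resource"
// model (hypothetical names, not CloudStack code). All units of work for a
// given resource are routed through one single-threaded executor, so they
// run serially and cannot race with each other.
class ResourceMailboxes {
    private final Map<String, ExecutorService> mailboxes = new ConcurrentHashMap<>();

    // computeIfAbsent creates the mailbox on demand atomically, so mailbox
    // creation itself needs no extra synchronization within one process.
    <T> Future<T> submit(String resourceId, Callable<T> unitOfWork) {
        return mailboxes
                .computeIfAbsent(resourceId, id -> Executors.newSingleThreadExecutor())
                .submit(unitOfWork);
    }

    void shutdown() {
        mailboxes.values().forEach(ExecutorService::shutdown);
    }
}
```

Within a single process this gives strict serialization per resource; the open question in the thread is how to extend the same guarantee across multiple active-active management servers.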

Re: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread Darren Shepherd
I will ask one basic question.  How do you foresee managing one mailbox per
resource?  If I have multiple servers running in an active-active mode, how
do you determine which server has the mailbox?  Do you create actors on
demand?  How do you synchronize that operation?

Darren


On Mon, Nov 25, 2013 at 10:16 AM, Darren Shepherd <
darren.s.sheph...@gmail.com> wrote:

> You bring up some interesting points.  I really need to digest this
> further.  From a high level I think I agree, but there are a lot of implied
> details of what you've said.
>
> Darren
>
>
> On Mon, Nov 25, 2013 at 8:39 AM, John Burwell  wrote:
>
>> Darren,
>>
>> I originally presented my thoughts on this subject at CCC13 [1].
>>  Fundamentally, I see CloudStack as having two distinct tiers —
>> orchestration management and automation control.  The orchestration tier
>> coordinates the automation control layer to fulfill user goals (e.g. create
>> a VM instance, alter a network route, snapshot a volume, etc) constrained
>> by policies defined by the operator (e.g. multi-tenancy boundaries, ACLs,
>> quotas, etc).  This layer must always be available to take new requests,
>> and to report the best available infrastructure state information.  Since
>> execution of work is guaranteed on completion of a request, this layer may
>> pend work to be completed when the appropriate devices become available.
>>
>> The automation control tier translates logical units of work to
>> underlying infrastructure component APIs.  Upon completion of a unit of
>> work’s execution, the state of a device (e.g. hypervisor, storage device,
>> network switch, router, etc) matches the state managed by the orchestration
>> tier at the time the unit of work was created.  In order to ensure that the
>> state of the underlying devices remains consistent, these units of work
>> must be executed serially.  Permitting concurrent changes to resources
>> creates race conditions that lead to resource overcommitment and state
>> divergence.  One symptom of this phenomenon is the myriad of scripts
>> operators write to “synchronize” state between the CloudStack database and
>> their hypervisors.  Another is the rapid create-destroy example provided
>> below, which can (and often does) leave dangling resources due to race
>> conditions between the two operations.
>>
>> In order to provide reliability, CloudStack vertically partitions the
>> infrastructure into zones (independent power source/network uplink
>> combination) sub-divided into pods (racks).  At this time, regions are
>> largely notional and, as such, are not partitions.  Between the user’s zone
>> selection and our allocators’ distribution of resources across pods, the
>> system attempts to distribute resources as widely as possible across these
>> partitions to provide resilience against a variety of infrastructure
>> failures (e.g. power loss, network uplink disruption, switch failures,
>> etc.).  In order to maximize this resilience, the control plane
>> (orchestration + automation tiers) must be able to operate on all
>> available partitions.  For example, if we have two (2) zones (A & B) and
>> twenty (20) pods per zone, we should be able to take and execute work in
>> Zone A when one or more pods is lost, as well as take and execute work in
>> Zone B when Zone A has failed.
>>
>> CloudStack is an eventually consistent system in that the state reflected
>> in the orchestration tier will (optimistically) differ from the state of
>> the underlying infrastructure (managed by the automation tier).
>>  Furthermore, the system has a partitioning model to provide resilience in
>> the face of a variety of logical and physical failures.  However, the
>> automation control tier requires strictly consistent operations.  Based on
>> these definitions, the system appears to violate the CAP theorem [2]
>> (Brewer!).  The separation of the system into two distinct tiers isolates
>> these characteristics, but the boundary between them must be carefully
>> implemented to ensure that the consistency requirements of the automation
>> tier are not leaked to the orchestration tier.
>>
>> To properly implement this boundary, I think we should split the
>> orchestration and automation control tiers into separate physical processes
>> communicating via an RPC mechanism — allowing the automation control tier
>> to completely encapsulate its work distribution model.  In my mind, the
>> tricky wicket is providing serialization and partition tolerance in the
>> automation control tier.  Realistically, there are two options — explicit
>> and implicit locking models.  Explicit locking models employ an external
>> coordination mechanism to coordinate exclusive access to resources (e.g.
>> RDBMS lock pattern, ZooKeeper, Hazelcast, etc).  The challenge with this
>> model is ensuring the availability of the locking mechanism in the face of
>> partition — forcing CloudStack operators to ensure that they have deployed
>> the underlying mechanism 

Need help in creating/posting Cloudstack plugin for Juniper's network devices

2013-11-25 Thread Pradeep HK
Hi,
my name is Pradeep HK and I am a Tech Lead in Juniper Networks, Bangalore.

I am leading the effort to develop a CloudStack plugin (i.e. a Network Guru) for
Juniper's networking devices.

I have developed a network plugin for orchestration of L2 services on Juniper's
networking devices.

I need some pointers on:
(1) How do we go about posting the plugin? Is it like any other source code?
What is the procedure?
(2) In a customer installation, if they want to try out the new plugin, what is
the procedure?

Appreciate your help on this.


-Pradeep

Re: Enabling AMQP/RabbitMQ Events on master

2013-11-25 Thread Darren Shepherd
Just create a file on the classpath
META-INF/cloudstack/core/spring-event-bus-context.xml with the below
contents (change server, port, username, etc)

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/aop
                           http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
        <property name="name" value="eventNotificationBus"/>
        <property name="server" value="127.0.0.1"/>
        <property name="port" value="5672"/>
        <property name="username" value="guest"/>
        <property name="password" value="guest"/>
        <property name="exchange" value="cloudstack-events"/>
    </bean>

</beans>
You can put that file at
/etc/cloudstack/management/META-INF/cloudstack/core/spring-event-bus-context.xml

Darren
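Once the bus is enabled, subscribers see events published to the exchange with a topic routing key encoding the event attributes. Assuming the five-segment format described in the events framework documentation (eventSource.eventCategory.eventType.resourceType.resourceUUID, where segments use '-' internally, e.g. VM-CREATE, since '.' is the AMQP topic separator), a minimal parser might look like the sketch below; verify the exact key format against the docs for your version.

```java
// Hedged sketch, not CloudStack code: split a CloudStack event routing key
// into its documented five segments. The assumed format is
// eventSource.eventCategory.eventType.resourceType.resourceUUID,
// with '-' inside segments (e.g. VM-CREATE) instead of '.'.
class EventRoutingKey {
    final String source, category, type, resourceType, resourceUuid;

    EventRoutingKey(String source, String category, String type,
                    String resourceType, String resourceUuid) {
        this.source = source;
        this.category = category;
        this.type = type;
        this.resourceType = resourceType;
        this.resourceUuid = resourceUuid;
    }

    // Split on '.' with a limit of 5 and reject keys that do not have all
    // five segments.
    static EventRoutingKey parse(String routingKey) {
        String[] parts = routingKey.split("\\.", 5);
        if (parts.length != 5) {
            throw new IllegalArgumentException("unexpected routing key: " + routingKey);
        }
        return new EventRoutingKey(parts[0], parts[1], parts[2], parts[3], parts[4]);
    }
}
```

A subscriber can bind a queue with a wildcard pattern (e.g. `*.ActionEvent.VM-CREATE.*.*`) and use a parser like this to route the payload.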


On Mon, Nov 25, 2013 at 8:24 AM, David Grizzanti <
david.grizza...@sungard.com> wrote:

> Murali,
>
> Would you be able to comment on how to enable the event message bus
> notifications on master?
>
> Thanks!
>
> --
> David Grizzanti
> Software Engineer
> Sungard Availability Services
>
> e: david.grizza...@sungard.com
> w: 215.446.1431
> c: 570.575.0315
>
> On November 21, 2013 at 12:35:31 PM, Alena Prokharchyk (
> alena.prokharc...@citrix.com) wrote:
>
> Murali might help you with that as he developed the feature.
>
> -Alena.
>
> From: Min Chen <min.c...@citrix.com>
> Date: Thursday, November 21, 2013 9:30 AM
> To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>, Alena
> Prokharchyk <alena.prokharc...@citrix.com>
> Cc: Darren Shepherd <darren.sheph...@citrix.com>
> Subject: Re: Enabling AMQP/RabbitMQ Events on master
>
> CC Darren here, I am having the same question on current master.
>
> Thanks
> -min
>
> On 11/21/13 5:00 AM, David Grizzanti wrote:
>
> Alena,
>
> Do you or anyone else on the list have any updated information about
> enabling the events on master?
>
> Thanks!
>
> On Thursday, November 7, 2013, David Grizzanti wrote:
>
> Alena,
>
> I don't think these steps will work on master (not installing packages
> of
> cloudstack), I'm building from source. The componentContext XML file
> doesn't seem to exist anymore since some of the Spring refactoring was
> done.
>
> Thanks
>
>
> On Thu, Nov 7, 2013 at 12:42 PM, Alena Prokharchyk <
> alena.prokharc...@citrix.com> wrote:
>
> David,
>
> Here are the instructions that I've got from one of the CS QA
> engineers,
> hope it helps.
>
> FS -
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Admi
> n_Guide/events.html#event-framework
>
>
>
> Test cases relating to this feature was covered as part of Regions
> Feature
> testing -
> https://cwiki.apache.org/confluence/download/attachments/30757955/Regions
> -Test-Execution-42.xlsx
>
>
>
>
>
> Steps to set up RabbitMQ Server:
>
>
>
> Have a RabbitMQ server set up.
>
> Enable rabbitmq_management plugin
>
> C:\Program Files\RabbitMQ
> Server\rabbitmq_server-3.0.3\sbin>rabbitmq-plugins enable
> rabbitmq_management
>
> Restart RabbitMQ service.
>
> In management server :
> Added the following in
> /usr/share/cloudstack-management/webapps/client/WEB-INF/classes/componentContext.xml
>
>
> <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
>     <property name="name" value="eventNotificationBus"/>
>     <property name="server" value="127.0.0.1"/>
>     <property name="port" value="5672"/>
>     <property name="username" value="guest"/>
>     <property name="password" value="guest"/>
>     <property name="exchange" value="cloudstack-events"/>
> </bean>
>
> Restart management server.
>
>
> -Alena.
>
> From: David Grizzanti <david.grizza...@sungard.com>
> Reply-To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> Date: Thursday, November 7, 2013 5:04 AM
> To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> Subject: Enabling AMQP/RabbitMQ Events on master
>
> Hi,
>
> I was looking for some help in enabling the AMQP/RabbitMQ events in
> CloudStack. I'm familiar with enabling these events in 4.2, however,
> I'm
> not all the familiar with Spring and given the new modularized changes
> I'm
> not really sure where the XML snippet belongs for the
> eventNotificationBus.
> Previously I had been placing this in applicationContext.
>
>
>
> --
> David Grizzanti
> Software Engineer
> Sungard Availability Services
>
> e: david.grizza...@sungard.com
> w: 215.446.1431
> c: 570.575.0315
>
>
>


RE: Need help in creating/posting Cloudstack plugin for Juniper's network devices

2013-11-25 Thread Rayees Namathponnan
Hi Pradeep,

This may help you

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Plug-ins%2C+Modules%2C+and+Extensions
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Extensions 


Regards,
Rayees 


-Original Message-
From: Pradeep HK [mailto:pradeep...@yahoo.com] 
Sent: Monday, November 25, 2013 3:27 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Need help in creating/posting Cloudstack plugin for Juniper's network 
devices

Hi,
my name is Pradeep HK and I am a Tech Lead in Juniper Networks, Bangalore.

I am leading the effort to develop a CloudStack plugin (i.e. a Network Guru) for
Juniper's networking devices.

I have developed a network plugin for orchestration of L2 services on Juniper's
networking devices.

I need some pointers on:
(1) How do we go about posting the plugin? Is it like any other source code?
What is the procedure?
(2) In a customer installation, if they want to try out the new plugin, what is
the procedure?

Appreciate your help on this.


-Pradeep


what's the procedure for committing to 4.3?

2013-11-25 Thread Darren Shepherd
What's the procedure for making changes to 4.3?  Obviously commit to master
and then cherry-pick but at this point is there any other control around
it?  If I need to commit something in 4.3 do I just do it myself?

Darren


Re: Unable to add NetScaler in 4.3 branch builds - Automation blocker

2013-11-25 Thread Darren Shepherd
It appears that I didn't commit any of the noredist spring configuration
for network-elements.  I will get that added in a bit once I just do some
quick validation.

Darren


On Wed, Nov 20, 2013 at 7:41 PM, Rayees Namathponnan <
rayees.namathpon...@citrix.com> wrote:

> Created below defect, EIP / ELB automation blocked due to this
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-5224
>
>
> Regards,
> Rayees
>
> -Original Message-
> From: Rayees Namathponnan [mailto:rayees.namathpon...@citrix.com]
> Sent: Wednesday, November 20, 2013 2:29 PM
> To: dev@cloudstack.apache.org
> Cc: Darren Shepherd  (
> darren.s.sheph...@gmail.com)
> Subject: Unable to add NetScaler in 4.3 branch builds
>
> Hi,
>
> I created a noredist (non-OSS) build from the 4.3 branch, and created an
> advanced zone in KVM.
>
> I am trying to add NetScaler as a service provider, but it failed with the
> below error:
>
>
> 2013-11-20 00:13:57,532 DEBUG [c.c.a.ApiServlet]
> (catalina-exec-24:ctx-2de1fb83) ===START===  10.223.240.194 -- GET
>  physicalnetworkid=05ef4a90-f9ba-449f-b1b6
>
> -a437e6c4d4dd&apiKey=a8WrP3KUsp4G9e4xsseUEgqRJF0hoZ8uZwtIL5tM7fnSNgZ-uez5ht7x0GvH8fnVzI59gjnq93VRZzazazy8dQ&name=Netscaler&command=addNetworkServiceProvider&s
> ignature=Kz%2FM3E60UlpWJg0VbjEs%2FdHpIUE%3D&response=json
> 2013-11-20 00:13:57,553 INFO  [c.c.a.ApiServer]
> (catalina-exec-24:ctx-2de1fb83 ctx-0bbc33a1 ctx-e0da0b07) Unable to find
> the Network Element implementing the Service Provider 'Netscaler'
> 2013-11-20 00:13:57,554 DEBUG [c.c.a.ApiServlet]
> (catalina-exec-24:ctx-2de1fb83 ctx-0bbc33a1 ctx-e0da0b07) ===END===
>  10.223.240.194 -- GET
>  
> physicalnetworkid=05ef4a90-f9ba-449f-b1b6-a437e6c4d4dd&apiKey=a8WrP3KUsp4G9e4xsseUEgqRJF0hoZ8uZwtIL5tM7fnSNgZ-uez5ht7x0GvH8fnVzI59gjnq93VRZzazazy8dQ&name=Netscaler&command=addNetworkServiceProvider&signature=Kz%2FM3E60UlpWJg0VbjEs%2FdHpIUE%3D&response=json
>
>
> Regards,
> Rayees
>


Re: what's the procedure for committing to 4.3?

2013-11-25 Thread Mike Tutkowski
Hey Darren,

For 4.2 we just committed to master and then cherry picked it to 4.2
ourselves. We tended to require a JIRA ticket to be associated with the
commit(s).

As the release went on, the Release Manager took over cherry picking and
would typically do so on our behalf, if we requested such an action.

Talk to you later


On Mon, Nov 25, 2013 at 11:26 AM, Darren Shepherd <
darren.s.sheph...@gmail.com> wrote:

> What's the procedure for making changes to 4.3?  Obviously commit to master
> and then cherry-pick but at this point is there any other control around
> it?  If I need to commit something in 4.3 do I just do it myself?
>
> Darren
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: what's the procedure for committing to 4.3?

2013-11-25 Thread Mike Tutkowski
As far as I know, we are NOT yet at the point where the Release Manager
does the cherry picking for us (upon our request).

Thanks


On Mon, Nov 25, 2013 at 11:47 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hey Darren,
>
> For 4.2 we just committed to master and then cherry picked it to 4.2
> ourselves. We tended to require a JIRA ticket to be associated with the
> commit(s).
>
> As the release went on, the Release Manager took over cherry picking and
> would typically do so on our behalves, if we requested such an action.
>
> Talk to you later
>
>
> On Mon, Nov 25, 2013 at 11:26 AM, Darren Shepherd <
> darren.s.sheph...@gmail.com> wrote:
>
>> What's the procedure for making changes to 4.3?  Obviously commit to
>> master
>> and then cherry-pick but at this point is there any other control around
>> it?  If I need to commit something in 4.3 do I just do it myself?
>>
>> Darren
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the 
> cloud
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Review Request 15840: CLOUDSTACK-5206: Ability to control the external id of first class objects

2013-11-25 Thread Nitin Mehta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15840/
---

Review request for cloudstack.


Bugs: CLOUDSTACK-5206
https://issues.apache.org/jira/browse/CLOUDSTACK-5206


Repository: cloudstack-git


Description
---

CLOUDSTACK-5206: Ability to control the external id of first
 class objects. Putting in the generic methods and trying it
 for objects like vm, volume. This is the first cut


Diffs
-

  api/src/com/cloud/storage/VolumeApiService.java 47afa10 
  api/src/com/cloud/vm/UserVmService.java 444c47a 
  api/src/org/apache/cloudstack/api/ApiConstants.java 6f919c1 
  api/src/org/apache/cloudstack/api/BaseAsyncCreateCustomIdCmd.java 
PRE-CREATION 
  api/src/org/apache/cloudstack/api/BaseAsyncCustomIdCmd.java PRE-CREATION 
  api/src/org/apache/cloudstack/api/BaseCustomIdCmd.java PRE-CREATION 
  api/src/org/apache/cloudstack/api/command/user/vm/DeployVMCmd.java 7180f4e 
  api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java fbb785f 
  api/src/org/apache/cloudstack/api/command/user/volume/CreateVolumeCmd.java 
eb4ac88 
  api/src/org/apache/cloudstack/api/command/user/volume/UpdateVolumeCmd.java 
f12cef8 
  engine/schema/src/com/cloud/vm/dao/UserVmDao.java 606d424 
  engine/schema/src/com/cloud/vm/dao/UserVmDaoImpl.java 43bdef1 
  
server/resources/META-INF/cloudstack/core/spring-server-core-managers-context.xml
 2a080f9 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
4bbc872 
  server/src/com/cloud/storage/VolumeApiServiceImpl.java c693527 
  server/src/com/cloud/uuididentity/UUIDManager.java PRE-CREATION 
  server/src/com/cloud/uuididentity/UUIDManagerImpl.java PRE-CREATION 
  server/src/com/cloud/vm/UserVmManager.java b7b4bd5 
  server/src/com/cloud/vm/UserVmManagerImpl.java 00d8063 

Diff: https://reviews.apache.org/r/15840/diff/


Testing
---

Tested locally.


Thanks,

Nitin Mehta



RE: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread Edison Su
Won't the architecture used by Mesos/Omega solve the resource 
management/locking issue?
http://mesos.apache.org/documentation/latest/mesos-architecture/
http://eurosys2013.tudos.org/wp-content/uploads/2013/paper/Schwarzkopf.pdf
Basically, one server holds all the resource information in memory
(cpu/memory/disk/ip address, etc.) about the whole data center; all the
hypervisor hosts and any other resource entities connect to this server
to report/update their own resources. As there is only one master server, the
CAP theorem does not come into play.
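As an illustration of the single-master idea (toy code, not Mesos or CloudStack), keeping all capacity state in one process's memory lets a plain synchronized method serialize allocations and rule out overcommitment, at the cost of the master being a single point of failure:

```java
import java.util.HashMap;
import java.util.Map;

// Toy single-master in-memory resource ledger (illustrative names only).
// One process owns all state and every method is synchronized, so
// allocations are serialized and cannot overcommit a host.
class ResourceLedger {
    private final Map<String, Integer> freeCpus = new HashMap<>();

    // Hosts report (or refresh) their free capacity to the master.
    synchronized void report(String hostId, int cpus) {
        freeCpus.put(hostId, cpus);
    }

    // Atomic check-and-deduct; returns false rather than overcommitting.
    synchronized boolean allocate(String hostId, int cpus) {
        int free = freeCpus.getOrDefault(hostId, 0);
        if (free < cpus) {
            return false;
        }
        freeCpus.put(hostId, free - cpus);
        return true;
    }
}
```

The design trade-off John raises below is exactly about what happens when this one master must itself be made available across zone or pod failures.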


> -Original Message-
> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> Sent: Monday, November 25, 2013 9:17 AM
> To: John Burwell
> Cc: dev@cloudstack.apache.org
> Subject: Re: Resource Management/Locking [was: Re: What would be your
> ideal solution?]
> 
> You bring up some interesting points.  I really need to digest this further.
> From a high level I think I agree, but there are a lot of implied details of 
> what
> you've said.
> 
> Darren
> 
> 
> On Mon, Nov 25, 2013 at 8:39 AM, John Burwell 
> wrote:
> 
> > Darren,
> >
> > I originally presented my thoughts on this subject at CCC13 [1].
> >  Fundamentally, I see CloudStack as having two distinct tiers -
> > orchestration management and automation control.  The orchestration
> > tier coordinates the automation control layer to fulfill user goals
> > (e.g. create a VM instance, alter a network route, snapshot a volume,
> > etc) constrained by policies defined by the operator (e.g.
> > multi-tenacy boundaries, ACLs, quotas, etc).  This layer must always
> > be available to take new requests, and to report the best available
> > infrastructure state information.  Since execution of work is
> > guaranteed on completion of a request, this layer may pend work to be
> completed when the appropriate devices become available.
> >
> > The automation control tier translates logical units of work to
> > underlying infrastructure component APIs.  Upon completion of unit of
> > work's execution, the state of a device (e.g. hypervisor, storage
> > device, network switch, router, etc) matches the state managed by the
> > orchestration tier at the time unit of work was created.  In order to
> > ensure that the state of the underlying devices remains consistent,
> > these units of work must be executed serially.  Permitting concurrent
> changes to resources creates race
> > conditions that lead to resource overcommitment and state divergence.   A
> > symptom of this phenomenon are the myriad of scripts operators write
> > to "synchronize" state between the CloudStack database and their
> hypervisors.
> >  Another is the example provided below is the rapid create-destroy
> > which can (and often does) leave dangling resources due to race
> > conditions between the two operations.
> >
> > In order to provide reliability, CloudStack vertically partitions the
> > infrastructure into zones (independent power source/network uplink
> > combination) sub-divided into pods (racks).  At this time, regions are
> > largely notional, as such, as are not partitions at this time.
> > Between the user's zone selection and our allocators distribution of
> > resources across pods, the system attempts to distribute resources
> > widely as possible across these partitions to provide resilience
> > against a variety infrastructure failures (e.g. power loss, network
> > uplink disruption, switch failures, etc).  In order maximize this
> > resilience, the control plane (orchestration
> > + automation tiers) must be to operate on all available partitions.
> > + For
> > example, if we have two (2) zones (A & B) and twenty (20) pods per
> > zone, we should be able to take and execute work in Zone A when one or
> > more pods is lost, as well as, when taking and executing work in Zone
> > B when Zone B has failed.
> >
> > CloudStack is an eventually consistent system in that the state
> > reflected in the orchestration tier will (optimistically) differ from
> > the state of the underlying infrastructure (managed by the automation
> tier).
> >  Furthermore, the system has a partitioning model to provide
> > resilience in the face of a variety of logical and physical failures.
> > However, the automation control tier requires strictly consistent
> > operations.  Based on these definitions, the system appears to violate
> > the CAP theorem [2] (Brewer!).  The separation of the system into two
> > distinct tiers isolates these characteristics, but the boundary
> > between them must be carefully implemented to ensure that the
> > consistency requirements of the automation tier are not leaked to the
> orchestration tier.
> >
> > To properly implement this boundary, I think we should split the
> > orchestration and automation control tiers into separate physical
> > processes communicating via an RPC mechanism - allowing the
> automation
> > control tier to completely encapsulate its work distribution model.
> > In my mind, the tricky wicket is pro

Re: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread John Burwell
Darren,

In a peer-to-peer model such as I describe, active-active is and is not a 
concept.  The supervision tree is responsible for identifying failure, and 
initiating process re-allocation for failed resources.  For example, if a pod’s 
management process crashed, it would also crash all of the processes managing 
the hosts in that pod.  The zone would then attempt to restart the pod’s 
management process (either local to the zone supervisor or on a remote instance 
which could be configurable) until it was able to start a “ready” process for
the child resource.

This model requires a “special” root supervisor that is controlled by the 
orchestration tier which can identify when a zone supervisor becomes 
unavailable, and attempts to restart it.  The ownership of this “special” 
supervisor will require a consensus mechanism amongst the orchestration tier 
processes to elect an owner of the process and determine when a new owner needs 
to be elected (e.g. a Raft implementation such as barge [1]).  Given the 
orchestration tier is designed as an AP system, an orchestration tier process 
should be able to be an owner (i.e. the operator is not required to identify a 
“master” node).  There are likely other potential topologies (e.g. a root 
supervisor per zone rather than one for all zones), but in all cases ownership 
election would be the same.  Most importantly, there are no data durability 
requirements in this claim model.  When an orchestration process becomes unable 
to continue owning a root supervisor, the other orchestration processes 
recognize the missing owner and initiate the ownership claim process for the 
partition.

In all failure scenarios, the supervision tree must be rebuilt from the point 
of failure downward using the process allocation process I previously 
described.  For an initial implementation, I would recommend taking simply 
throwing any parts of the supervision tree that are already running in the 
event of a widespread failure (e.g. a zone with many pods).  Dependent on the 
recovery time and SLAs, a future optimization may be to re-attach “orphaned” 
branches of the previous tree to the tree being built as part of the recovery 
process (e.g. loss a zone supervisor due to a switch failure).  Additionally, 
the system would also need a mechanism to hand-off ownership of the root 
supervisor for planned outages (hardware upgrades/decommissioning, maintenance 
windows, etc).

Again, caveated with a a few hand waves, the idea is to build up a peer-to-peer 
management model that provides strict serialization guarantees.  Fundamentally, 
it utilizes a tree of processes to provide exclusive access, distribute work, 
and ensure availability requirements when partitions occur.  Details would need 
to be worked out for the best application to CloudStack (e.g. root node 
ownership and orchestration tier gossip), but we would be implementing 
well-trod distributed systems concepts in the context of cloud orchestration 
(sounds like a fun thing to do …).

Thanks,
-John

[1]: https://github.com/mgodave/barge

P.S. I see the libraries/frameworks referenced as the building blocks to a 
solution, but none of them (in whole or combination) solves the problem 
completely.
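The restart behaviour described above (treat a crash as a signal to restart the child until it reports "ready", within some budget) can be sketched in a few lines; the names are illustrative, not CloudStack code:

```java
import java.util.function.Supplier;

// Minimal supervisor sketch (illustrative only): keep restarting a child
// until it reports "ready", up to a restart budget, treating a thrown
// exception as a child crash -- roughly the pod/zone process restart
// behaviour described above.
class Supervisor {
    // Returns the number of restarts it took for the child to become ready.
    static int superviseUntilReady(Supplier<Boolean> child, int maxRestarts) {
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                if (child.get()) {
                    return attempt; // child reached "ready"
                }
            } catch (RuntimeException crash) {
                // a crash simply triggers another restart attempt
            }
        }
        throw new IllegalStateException("child never became ready");
    }
}
```

A real supervision tree would apply this recursively (zone supervisor restarting pod supervisors, pod supervisors restarting host processes), with backoff and the ownership-election layer deciding who runs the root.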

On Nov 25, 2013, at 12:29 PM, Darren Shepherd  
wrote:

> I will ask one basic question.  How do you forsee managing one mailbox per 
> resource.  If I have multiple servers running in an active-active mode, how 
> do you determine which server has the mailbox?  Do you create actors on 
> demand?  How do you synchronize that operation?
> 
> Darren
> 
> 
> On Mon, Nov 25, 2013 at 10:16 AM, Darren Shepherd 
>  wrote:
> You bring up some interesting points.  I really need to digest this further.  
> From a high level I think I agree, but there are a lot of implied details of 
> what you've said.
> 
> Darren
> 
> 
> On Mon, Nov 25, 2013 at 8:39 AM, John Burwell  wrote:
> Darren,
> 
> I originally presented my thoughts on this subject at CCC13 [1].  
> Fundamentally, I see CloudStack as having two distinct tiers — orchestration 
> management and automation control.  The orchestration tier coordinates the 
> automation control layer to fulfill user goals (e.g. create a VM instance, 
> alter a network route, snapshot a volume, etc) constrained by policies 
> defined by the operator (e.g. multi-tenacy boundaries, ACLs, quotas, etc).  
> This layer must always be available to take new requests, and to report the 
> best available infrastructure state information.  Since execution of work is 
> guaranteed on completion of a request, this layer may pend work to be 
> completed when the appropriate devices become available.
> 
> The automation control tier translates logical units of work to underlying 
> infrastructure component APIs.  Upon completion of unit of work’s execution, 
> the state of a device (e.g. hypervisor, storage device, network switch, 
> router, etc) matches the state managed by the orchestration tier at the time 
> un

RE: Unable to add NetScaler in 4.3 branch builds - Automation blocker

2013-11-25 Thread Rayees Namathponnan
Thanks Darren,

I can see your check-in in 4.3 and master branch; I will test this.

Regards,
Rayees

From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
Sent: Monday, November 25, 2013 10:32 AM
To: Rayees Namathponnan
Cc: dev@cloudstack.apache.org
Subject: Re: Unable to add NetScaler in 4.3 branch builds - Automation blocker

It appears that I didn't commit any of the noredist spring configuration for 
network-elements.  I will get that added in a bit once I just do some quick 
validation.

Darren

On Wed, Nov 20, 2013 at 7:41 PM, Rayees Namathponnan 
<rayees.namathpon...@citrix.com> wrote:
Created below defect, EIP / ELB automation blocked due to this

https://issues.apache.org/jira/browse/CLOUDSTACK-5224


Regards,
Rayees

-Original Message-
From: Rayees Namathponnan 
[mailto:rayees.namathpon...@citrix.com]
Sent: Wednesday, November 20, 2013 2:29 PM
To: dev@cloudstack.apache.org
Cc: Darren Shepherd <darren.s.sheph...@gmail.com>
Subject: Unable to add NetScaler in 4.3 branch builds

Hi,

I created a noredist (non-OSS) build from the 4.3 branch, and created an
advanced zone in KVM.

I am trying to add NetScaler as a service provider, but it failed with the
below error:


2013-11-20 00:13:57,532 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-24:ctx-2de1fb83) ===START===  10.223.240.194 -- GET  
physicalnetworkid=05ef4a90-f9ba-449f-b1b6
-a437e6c4d4dd&apiKey=a8WrP3KUsp4G9e4xsseUEgqRJF0hoZ8uZwtIL5tM7fnSNgZ-uez5ht7x0GvH8fnVzI59gjnq93VRZzazazy8dQ&name=Netscaler&command=addNetworkServiceProvider&s
ignature=Kz%2FM3E60UlpWJg0VbjEs%2FdHpIUE%3D&response=json
2013-11-20 00:13:57,553 INFO  [c.c.a.ApiServer] (catalina-exec-24:ctx-2de1fb83 
ctx-0bbc33a1 ctx-e0da0b07) Unable to find the Network Element implementing the 
Service Provider 'Netscaler'
2013-11-20 00:13:57,554 DEBUG [c.c.a.ApiServlet] (catalina-exec-24:ctx-2de1fb83 
ctx-0bbc33a1 ctx-e0da0b07) ===END===  10.223.240.194 -- GET  
physicalnetworkid=05ef4a90-f9ba-449f-b1b6-a437e6c4d4dd&apiKey=a8WrP3KUsp4G9e4xsseUEgqRJF0hoZ8uZwtIL5tM7fnSNgZ-uez5ht7x0GvH8fnVzI59gjnq93VRZzazazy8dQ&name=Netscaler&command=addNetworkServiceProvider&signature=Kz%2FM3E60UlpWJg0VbjEs%2FdHpIUE%3D&response=json


Regards,
Rayees



Re: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread John Burwell
Edison,

The CAP theorem applies to all distributed systems.  One “master” controlling a 
bunch of hypervisors being directed by an orchestration engine + ZooKeeper is a 
distributed system.  In this case, a consistent system.  In my very brief 
reading of it, CloudStack would need multiple Mesos masters to provide 
availability in event of zone or pod failures.  It would run into the same 
issue explicit locking issues I previously described — ensuring the underlying 
Zookeeper infrastructure can maintain quorum in the face of a zone and/or pod 
failures.  While it is possible to achieve, it would greatly increase the 
complexity of CloudStack deployments.

Thanks,
-John 

On Nov 25, 2013, at 2:05 PM, Edison Su  wrote:

> Won't the architecture used by Mesos/Omega solve the resource 
> management/locking issue:
> http://mesos.apache.org/documentation/latest/mesos-architecture/
> http://eurosys2013.tudos.org/wp-content/uploads/2013/paper/Schwarzkopf.pdf
> Basically, one server holds all the resource information in memory 
> (cpu/memory/disk/ip address etc) about the whole data center; all the 
> hypervisor hosts or any other resource entities connect to this server 
> to report/update their own resources. As there is only one master server, the 
> CAP theorem does not apply.
> 
> 
>> -Original Message-
>> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
>> Sent: Monday, November 25, 2013 9:17 AM
>> To: John Burwell
>> Cc: dev@cloudstack.apache.org
>> Subject: Re: Resource Management/Locking [was: Re: What would be your
>> ideal solution?]
>> 
>> You bring up some interesting points.  I really need to digest this further.
>> From a high level I think I agree, but there are a lot of implied details
>> of what you've said.
>> 
>> Darren
>> 
>> 
>> On Mon, Nov 25, 2013 at 8:39 AM, John Burwell 
>> wrote:
>> 
>>> Darren,
>>> 
>>> I originally presented my thoughts on this subject at CCC13 [1].
>>> Fundamentally, I see CloudStack as having two distinct tiers -
>>> orchestration management and automation control.  The orchestration
>>> tier coordinates the automation control layer to fulfill user goals
>>> (e.g. create a VM instance, alter a network route, snapshot a volume,
>>> etc) constrained by policies defined by the operator (e.g.
>>> multi-tenancy boundaries, ACLs, quotas, etc).  This layer must always
>>> be available to take new requests, and to report the best available
>>> infrastructure state information.  Since execution of work is
>>> guaranteed on completion of a request, this layer may pend work to be
>>> completed when the appropriate devices become available.
>>> 
>>> The automation control tier translates logical units of work to
>>> underlying infrastructure component APIs.  Upon completion of unit of
>>> work's execution, the state of a device (e.g. hypervisor, storage
>>> device, network switch, router, etc) matches the state managed by the
>>> orchestration tier at the time unit of work was created.  In order to
>>> ensure that the state of the underlying devices remains consistent,
>>> these units of work must be executed serially.  Permitting concurrent
>>> changes to resources creates race
>>> conditions that lead to resource overcommitment and state divergence.  A
>>> symptom of this phenomenon is the myriad of scripts operators write
>>> to "synchronize" state between the CloudStack database and their
>>> hypervisors.
>>> Another is the rapid create-destroy example provided below,
>>> which can (and often does) leave dangling resources due to race
>>> conditions between the two operations.
>>> 
>>> In order to provide reliability, CloudStack vertically partitions the
>>> infrastructure into zones (independent power source/network uplink
>>> combination) sub-divided into pods (racks).  At this time, regions are
>>> largely notional and, as such, are not partitions.
>>> Between the user's zone selection and our allocators' distribution of
>>> resources across pods, the system attempts to distribute resources as
>>> widely as possible across these partitions to provide resilience
>>> against a variety of infrastructure failures (e.g. power loss, network
>>> uplink disruption, switch failures, etc).  In order to maximize this
>>> resilience, the control plane (orchestration + automation tiers) must
>>> be able to operate on all available partitions.  For example, if we
>>> have two (2) zones (A & B) and twenty (20) pods per zone, we should be
>>> able to take and execute work in Zone A when one or more of its pods
>>> is lost, as well as continue taking and executing work in Zone A when
>>> Zone B has failed.
>>> 
>>> CloudStack is an eventually consistent system in that the state
>>> reflected in the orchestration tier will (optimistically) differ from
>>> the state of the underlying infrastructure (managed by the automation
>>> tier).
>>> Furthermore, the system has a partitioning model to provide
>>> resilience in the face of a variety of logical and physical failures.
>>> H
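John's requirement that units of work against a single device execute serially can be illustrated with a per-resource lock. The sketch below uses `flock` and made-up resource/file names purely for illustration; it is not CloudStack's actual mechanism, which lives in the Java automation tier.

```shell
# Serialize units of work per resource with an exclusive per-resource
# lock file (sketch; resource ids and paths are hypothetical).
lockdir=$(mktemp -d)
out=$(mktemp)

run_unit() {
  local resource=$1; shift
  # All units of work for the same resource queue on one lock,
  # so concurrent changes to that resource cannot interleave.
  ( flock -x 9; "$@" ) 9>"$lockdir/$resource.lock"
}

# Three concurrent units of work against the same host: each
# begin/end pair must stay adjacent in the output.
for i in 1 2 3; do
  run_unit hostA sh -c "echo begin-$i >> '$out'; sleep 0.1; echo end-$i >> '$out'" &
done
wait
cat "$out"
```

Without the lock, the begin/end markers of different units would interleave, which is exactly the race-condition overcommitment symptom described above.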

RE: what's the procedure for committing to 4.3?

2013-11-25 Thread Animesh Chaturvedi
Darren, 4.3 is open for bug fixes for everyone. No new features are allowed,
though.

We have a code freeze on 12/06, which means that we will only allow blocker and
critical fixes after that.

Once we cut our first RC at the beginning of January, I will create a staging
branch, 4.3-forward, and cherry-pick important fixes into the 4.3 branch.

Animesh
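The commit-to-master-then-cherry-pick workflow described above looks roughly like this (the repository, branch point, and ticket number are illustrative placeholders):

```shell
# Demonstrate the master-first workflow in a throwaway repository
# (commit messages and the CLOUDSTACK-NNNN ticket id are hypothetical).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.org"
git config user.name "Dev"

echo base > file.txt
git add file.txt
git commit -qm "base"
git branch 4.3                 # release branch forks here

echo fix >> file.txt           # the bug fix lands on master first
git commit -qam "CLOUDSTACK-NNNN: fix the bug"
fix_sha=$(git rev-parse HEAD)

git checkout -q 4.3            # then it is cherry-picked to the release branch
git cherry-pick -x "$fix_sha"  # -x records the original commit id
git log -1 --format=%B
```

Using `-x` keeps a "(cherry picked from commit ...)" trailer in the release-branch commit, which makes it easy to audit which master fixes made it into 4.3.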

-Original Message-
From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com] 
Sent: Monday, November 25, 2013 10:27 AM
To: dev@cloudstack.apache.org
Subject: what's the procedure for committing to 4.3?

What's the procedure for making changes to 4.3?  Obviously commit to master and 
then cherry-pick but at this point is there any other control around it?  If I 
need to commit something in 4.3 do I just do it myself?

Darren


Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Will Stevens
In trying to troubleshoot I think I have found another issue.  I went
looking for a 'more official' source for the system templates.  I found
this:
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0

Which gives a system vm template of (for xen server):
http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2

Unfortunately, those system vm templates do not come up (check the attached
files for images).  Basically, the file system comes up as Read Only...

I will go back to the buildacloud system templates to see if I can get them
working...

Would love to have someone confirm where we should be getting the System VM
Templates from for 4.2+.

Still trying to get System VM Templates to work on 4.3.  If anyone has this
working, please post how you get them working and where you got them from.

Thanks,

Will


On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens  wrote:

> I will try this as a temporary solution.  Thank you...
>
> Will
>
>
> On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy wrote:
>
>>
>> Don’t understand problem well enough for clean fix, but I updated
>> 'template_version' from 3.0 to 4.3 of the VR in the domain_router table
>> that resolved the issue for me.
>>
>> On 22/11/13 3:50 PM, "Will Stevens"  wrote:
>>
>> >Has anyone been able to resolve this issue?  This is holding up my
>> ability
>> >to launch VMs and test the fixes to my plugin.  I need to resolve this
>> >issue to move forward...
>> >
>> >@Syed, are you still stuck on this as well?
>> >
>> >Cheers,
>> >
>> >Will
>> >
>> >
>> >On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed  wrote:
>> >
>> >> OK here is how far I got debugging this. I think I am missing a small
>> >> thing. I hope you guys can help.
>> >>
>> >> So my VM template has the correct version.
>> >>
>> >> root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
>> >> f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
>> >>
>> >> Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
>> >>
>> >>
>> >> But in the database I see the following ( table domain_router )
>> >>
>> >> *** 4. row ***
>> >>  id: 11
>> >>  element_id: 4
>> >>  public_mac_address: 06:48:a8:00:00:68
>> >>   public_ip_address: 172.30.91.102
>> >>  public_netmask: 255.255.255.0
>> >>   guest_netmask: NULL
>> >>guest_ip_address: NULL
>> >> is_redundant_router: 0
>> >>priority: 0
>> >>  is_priority_bumpup: 0
>> >> redundant_state: UNKNOWN
>> >>stop_pending: 0
>> >>role: VIRTUAL_ROUTER
>> >>template_version:*Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST
>> 2012*
>> >> scripts_version: 725d5e5901a62c68aed0dd3463023518
>> >>  vpc_id: NULL
>> >> 4 rows in set (0.00 sec)
>> >>
>> >>
>> >>
>> >> I guess this is populated from the VM that gets created. On the xen the
>> >>vm
>> >> is r-11. I see the following version on that VM
>> >>
>> >> root@r-11-VM:~# cat /etc/cloudstack-release
>> >> Cloudstack Release 3.0 Mon Feb  6 15:10:04 PST 2012
>> >>
>> >>
>> >> This means that Xen is not picking up the template present in the
>> >> secondary storage. Does Xen cache the vhd files locally to avoid coming
>> >>to
>> >> the secondary storage? If so, how can I disable that?
>> >>
>> >> Also, I was looking at UpgradeRouterTemplateCmd API which basically
>> goes
>> >> through all the VRs and reboots them. It expects that when the reboot
>> is
>> >> completed, the router should have picked up the 4.2.0 version of the
>> >> template ( see line 4072 in VirtualNetworkApplianceManagerImpl.java ) I
>> >> try to do the reboot manually but the template remains the same. Do you
>> >> guys have any more suggestions?
>> >>
>> >> Thanks,
>> >> -Syed
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Wed 20 Nov 2013 12:55:04 PM EST, Wei ZHOU wrote:
>> >>
>> >>>
>> >>> FYI.
>> >>>
>> >>> I upgraded from 2.2.14 to 4.2.1. The CPVM, SSVM and VRs are working
>> >>>after
>> >>> running *cloudstack-sysvmadm to recreate.*
>> >>>
>> >>>
>> >>> 2013/11/20 Syed Ahmed 
>> >>>
>> >>>
>>  +1 Same error. The secondary storage VM and the Console proxy VM seem
>> to
>>  be coming up alright. I see this error only when starting the virtual
>>  router which is preventing me from creating any instances.
>> 
>> 
>>  On Wed 20 Nov 2013 11:14:47 AM EST, Will Stevens wrote:
>> 
>> 
>> > I am having the same problem. I got the latest system VMs from:
>> >
>> http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/
>> > lastSuccessfulBuild/artifact/tools/appliance/dist/
>> >
>> > Are these the wrong System VM Templates? If so, where should I get
>> >the
>> > System VM Templates to make this work again?
>> >
>> > Thanks,
>> >
>> > Will
>> >
>> >
>> > On Thu, Nov 7, 2013 at 7:42 PM, Alena Prokharchyk <
>> 

Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Travis Graham
Use the links from the Install Guide instead.

http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template

Travis

On Nov 25, 2013, at 3:01 PM, Will Stevens  wrote:

> In trying to troubleshoot I think I have found another issue.  I went looking 
> for a 'more official' source for the system templates.  I found this: 
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> 
> Which gives a system vm template of (for xen server): 
> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
> 
> Unfortunately, those system vm templates do not come up (check the attached 
> files for images).  Basically, the file system comes up as Read Only...
> 
> I will go back to the buildacloud system templates to see if I can get them 
> working...  
> 
> Would love to have someone confirm where we should be getting the System VM 
> Templates from for 4.2+.
> 
> Still trying to get System VM Templates to work on 4.3.  If anyone has this 
> working, please post how you get them working and where you got them from.
> 
> Thanks,
> 
> Will
> 
> 
> On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens  wrote:
> I will try this as a temporary solution.  Thank you...
> 
> Will
> 
> 
> On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy  wrote:
> 
> Don’t understand problem well enough for clean fix, but I updated
> 'template_version' from 3.0 to 4.3 of the VR in the domain_router table
> that resolved the issue for me.
> 
> On 22/11/13 3:50 PM, "Will Stevens"  wrote:
> 
> >Has anyone been able to resolve this issue?  This is holding up my ability
> >to launch VMs and test the fixes to my plugin.  I need to resolve this
> >issue to move forward...
> >
> >@Syed, are you still stuck on this as well?
> >
> >Cheers,
> >
> >Will
> >
> >
> >On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed  wrote:
> >
> >> OK here is how far I got debugging this. I think I am missing a small
> >> thing. I hope you guys can help.
> >>
> >> So my VM template has the correct version.
> >>
> >> root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
> >> f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
> >>
> >> Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
> >>
> >>
> >> But in the database I see the following ( table domain_router )
> >>
> >> *** 4. row ***
> >>  id: 11
> >>  element_id: 4
> >>  public_mac_address: 06:48:a8:00:00:68
> >>   public_ip_address: 172.30.91.102
> >>  public_netmask: 255.255.255.0
> >>   guest_netmask: NULL
> >>guest_ip_address: NULL
> >> is_redundant_router: 0
> >>priority: 0
> >>  is_priority_bumpup: 0
> >> redundant_state: UNKNOWN
> >>stop_pending: 0
> >>role: VIRTUAL_ROUTER
> >>template_version:*Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST 2012*
> >> scripts_version: 725d5e5901a62c68aed0dd3463023518
> >>  vpc_id: NULL
> >> 4 rows in set (0.00 sec)
> >>
> >>
> >>
> >> I guess this is populated from the VM that gets created. On the xen the
> >>vm
> >> is r-11. I see the following version on that VM
> >>
> >> root@r-11-VM:~# cat /etc/cloudstack-release
> >> Cloudstack Release 3.0 Mon Feb  6 15:10:04 PST 2012
> >>
> >>
> >> This means that Xen is not picking up the template present in the
> >> secondary storage. Does Xen cache the vhd files locally to avoid coming
> >>to
> >> the secondary storage? If so, how can I disable that?
> >>
> >> Also, I was looking at UpgradeRouterTemplateCmd API which basically goes
> >> through all the VRs and reboots them. It expects that when the reboot is
> >> completed, the router should have picked up the 4.2.0 version of the
> >> template ( see line 4072 in VirtualNetworkApplianceManagerImpl.java ) I
> >> try to do the reboot manually but the template remains the same. Do you
> >> guys have any more suggestions?
> >>
> >> Thanks,
> >> -Syed
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Wed 20 Nov 2013 12:55:04 PM EST, Wei ZHOU wrote:
> >>
> >>>
> >>> FYI.
> >>>
> >>> I upgraded from 2.2.14 to 4.2.1. The CPVM, SSVM and VRs are working
> >>>after
> >>> running *cloudstack-sysvmadm to recreate.*
> >>>
> >>>
> >>> 2013/11/20 Syed Ahmed 
> >>>
> >>>
>  +1 Same error. The secondary storage VM and the Console proxy VM seem
> to
>  be coming up alright. I see this error only when starting the virtual
>  router which is preventing me from creating any instances.
> 
> 
>  On Wed 20 Nov 2013 11:14:47 AM EST, Will Stevens wrote:
> 
> 
> > I am having the same problem. I got the latest system VMs from:
> > http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/
> > lastSuccessfulBuild/artifact/tools/appliance/dist/
> >
> > Are these the wrong System VM Templates? If so, where should I get

Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Hugo Trippaers
The images should be downloadable from jenkins.buildacloud.org

The 4.2 images are here: 
http://jenkins.buildacloud.org/view/4.2/job/build-systemvm-4.2/

There was something wrong with this build, but I’m working on it. Expect the
images soon.

Cheers,

Hugo

On 25 nov. 2013, at 21:14, Travis Graham  wrote:

> Use the links from the Install Guide instead.
> 
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template
> 
> Travis
> 
> On Nov 25, 2013, at 3:01 PM, Will Stevens  wrote:
> 
>> In trying to troubleshoot I think I have found another issue.  I went 
>> looking for a 'more official' source for the system templates.  I found 
>> this: 
>> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
>> 
>> Which gives a system vm template of (for xen server): 
>> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
>> 
>> Unfortunately, those system vm templates do not come up (check the attached 
>> files for images).  Basically, the file system comes up as Read Only...
>> 
>> I will go back to the buildacloud system templates to see if I can get them 
>> working...  
>> 
>> Would love to have someone confirm where we should be getting the System VM 
>> Templates from for 4.2+.
>> 
>> Still trying to get System VM Templates to work on 4.3.  If anyone has this 
>> working, please post how you get them working and where you got them from.
>> 
>> Thanks,
>> 
>> Will
>> 
>> 
>> On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens  wrote:
>> I will try this as a temporary solution.  Thank you...
>> 
>> Will
>> 
>> 
>> On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy  
>> wrote:
>> 
>> Don’t understand problem well enough for clean fix, but I updated
>> 'template_version' from 3.0 to 4.3 of the VR in the domain_router table
>> that resolved the issue for me.
>> 
>> On 22/11/13 3:50 PM, "Will Stevens"  wrote:
>> 
>>> Has anyone been able to resolve this issue?  This is holding up my ability
>>> to launch VMs and test the fixes to my plugin.  I need to resolve this
>>> issue to move forward...
>>> 
>>> @Syed, are you still stuck on this as well?
>>> 
>>> Cheers,
>>> 
>>> Will
>>> 
>>> 
>>> On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed  wrote:
>>> 
 OK here is how far I got debugging this. I think I am missing a small
 thing. I hope you guys can help.
 
 So my VM template has the correct version.
 
 root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
 f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
 
 Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
 
 
 But in the database I see the following ( table domain_router )
 
 *** 4. row ***
 id: 11
 element_id: 4
 public_mac_address: 06:48:a8:00:00:68
  public_ip_address: 172.30.91.102
 public_netmask: 255.255.255.0
  guest_netmask: NULL
   guest_ip_address: NULL
 is_redundant_router: 0
   priority: 0
 is_priority_bumpup: 0
redundant_state: UNKNOWN
   stop_pending: 0
   role: VIRTUAL_ROUTER
   template_version:*Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST 2012*
scripts_version: 725d5e5901a62c68aed0dd3463023518
 vpc_id: NULL
 4 rows in set (0.00 sec)
 
 
 
 I guess this is populated from the VM that gets created. On the xen the
 vm
 is r-11. I see the following version on that VM
 
 root@r-11-VM:~# cat /etc/cloudstack-release
 Cloudstack Release 3.0 Mon Feb  6 15:10:04 PST 2012
 
 
 This means that Xen is not picking up the template present in the
 secondary storage. Does Xen cache the vhd files locally to avoid coming
 to
 the secondary storage? If so, how can I disable that?
 
 Also, I was looking at UpgradeRouterTemplateCmd API which basically goes
 through all the VRs and reboots them. It expects that when the reboot is
 completed, the router should have picked up the 4.2.0 version of the
 template ( see line 4072 in VirtualNetworkApplianceManagerImpl.java ) I
 try to do the reboot manually but the template remains the same. Do you
 guys have any more suggestions?
 
 Thanks,
 -Syed
 
 
 
 
 
 
 On Wed 20 Nov 2013 12:55:04 PM EST, Wei ZHOU wrote:
 
> 
> FYI.
> 
> I upgraded from 2.2.14 to 4.2.1. The CPVM, SSVM and VRs are working
> after
> running *cloudstack-sysvmadm to recreate.*
> 
> 
> 2013/11/20 Syed Ahmed 
> 
> 
>> +1 Same error. The secondary storage VM and the Console proxy VM seem
>> to
>> be coming up alright. I see this error only when starting the virtual
>> router which is preventing me from crea

Progress on GlusterFS support from the CCCEU Hackathon

2013-11-25 Thread David Nalley
Hi folks:

Just bringing some things from the hackathon back to the mailing list.

One of the things worked on there was GlusterFS support. Wido and
Niels began work on this, and you can see the blog post[1] from Niels,
which might be helpful to others as well for things like Sheepdog.

There's also now a project on gluster's forge [2] where the code for
this work in progress lives for the moment. Please don't hesitate to
get involved and help if you are interested.

[1] http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html
[2] https://forge.gluster.org/cloudstack-gluster#more

Thanks

--David


[PROPOSAL] Alert publishing via API

2013-11-25 Thread Alena Prokharchyk
Third-party systems integrating with CloudStack should be able to publish 
custom alerts to CS. The existing alerts might not be enough for a particular 
application's purposes, and adding new ones while utilizing the existing CS 
alert notification system can be quite useful.

Currently there is no way to publish an alert through the web API; it can be 
done only by direct calls to the AlertManager. So the proposal is to add a new 
API (with ROOT admin permissions) allowing alerts to be published to the CS 
system.

A Jira ticket has been created [1]. I'll send an update with the FS link if 
nobody objects to the proposal.

Thank you,
Alena.

[1] https://issues.apache.org/jira/browse/CLOUDSTACK-5261
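Such an API would presumably be invoked through the normal signed web API. The sketch below applies CloudStack's standard request-signing scheme (lowercase the query string, sort the parameters, HMAC-SHA1 with the secret key, Base64-encode); the `publishAlert` command name, its parameters, and both keys are hypothetical placeholders for whatever the FS ends up specifying.

```shell
# Sign a hypothetical publishAlert request the way CloudStack signs any
# API request (command name, parameters, and keys are made up here).
api_key="plgWJfZK4gyS3mOexampleapikey"
secret_key="VDaACYb0LV9examplesecretkey"

params="command=publishAlert&alerttype=100&description=custom+alert&apiKey=$api_key"

# canonical form: lowercase the query string, then sort the k=v pairs
canon=$(printf '%s' "$params" | tr '[:upper:]' '[:lower:]' \
        | tr '&' '\n' | sort | paste -sd '&' -)

# HMAC-SHA1 over the canonical string, Base64-encoded
signature=$(printf '%s' "$canon" \
            | openssl dgst -sha1 -hmac "$secret_key" -binary \
            | openssl base64)

echo "signature=$signature"
# The final GET would append the URL-encoded signature:
#   http://mgmt-server:8080/client/api?$params&signature=<urlencoded>
```

This is the same scheme visible in the signed `command=addNetworkServiceProvider` request logged earlier in this digest; only the command and parameters would differ.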


Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Will Stevens
I just used:
http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-11-24-master-xen.vhd.bz2

I ran into the same issue with this template as all the others I have been
trying.  Basically, the vm comes up with a read only file system for some
reason.  If I run 'fsck' on the vm from XenCenter when the system vms come
up, and then reboot, I can get the vm to come up correctly.  Each time I
launch a new vm which requires a virtual router, I have to 'fsck' the VR in
XenCenter as soon as it tries to launch.

This is working for me to launch an instance (but it is quite annoying).
 Other than the 'fsck' issue, these templates seem to be working.

ws


On Mon, Nov 25, 2013 at 3:29 PM, Hugo Trippaers  wrote:

> The images should be downloadable from jenkins.buildacloud.org
>
> The 4.2 images are here:
> http://jenkins.buildacloud.org/view/4.2/job/build-systemvm-4.2/
>
> There was something wrong with this build, but I’m working on it. Expect
> the images soon.
>
> Cheers,
>
> Hugo
>
> On 25 nov. 2013, at 21:14, Travis Graham  wrote:
>
> > Use the links from the Install Guide instead.
> >
> >
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template
> >
> > Travis
> >
> > On Nov 25, 2013, at 3:01 PM, Will Stevens  wrote:
> >
> >> In trying to troubleshoot I think I have found another issue.  I went
> looking for a 'more official' source for the system templates.  I found
> this:
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> >>
> >> Which gives a system vm template of (for xen server):
> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
> >>
> >> Unfortunately, those system vm templates do not come up (check the
> attached files for images).  Basically, the file system comes up as Read
> Only...
> >>
> >> I will go back to the buildacloud system templates to see if I can get
> them working...
> >>
> >> Would love to have someone confirm where we should be getting the
> System VM Templates from for 4.2+.
> >>
> >> Still trying to get System VM Templates to work on 4.3.  If anyone has
> this working, please post how you get them working and where you got them
> from.
> >>
> >> Thanks,
> >>
> >> Will
> >>
> >>
> >> On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens 
> wrote:
> >> I will try this as a temporary solution.  Thank you...
> >>
> >> Will
> >>
> >>
> >> On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy 
> wrote:
> >>
> >> Don’t understand problem well enough for clean fix, but I updated
> >> 'template_version' from 3.0 to 4.3 of the VR in the domain_router table
> >> that resolved the issue for me.
> >>
> >> On 22/11/13 3:50 PM, "Will Stevens"  wrote:
> >>
> >>> Has anyone been able to resolve this issue?  This is holding up my
> ability
> >>> to launch VMs and test the fixes to my plugin.  I need to resolve this
> >>> issue to move forward...
> >>>
> >>> @Syed, are you still stuck on this as well?
> >>>
> >>> Cheers,
> >>>
> >>> Will
> >>>
> >>>
> >>> On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed 
> wrote:
> >>>
>  OK here is how far I got debugging this. I think I am missing a small
>  thing. I hope you guys can help.
> 
>  So my VM template has the correct version.
> 
>  root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
>  f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
> 
>  Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
> 
> 
>  But in the database I see the following ( table domain_router )
> 
>  *** 4. row ***
>  id: 11
>  element_id: 4
>  public_mac_address: 06:48:a8:00:00:68
>   public_ip_address: 172.30.91.102
>  public_netmask: 255.255.255.0
>   guest_netmask: NULL
>    guest_ip_address: NULL
>  is_redundant_router: 0
>    priority: 0
>  is_priority_bumpup: 0
> redundant_state: UNKNOWN
>    stop_pending: 0
>    role: VIRTUAL_ROUTER
>    template_version:*Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST
> 2012*
> scripts_version: 725d5e5901a62c68aed0dd3463023518
>  vpc_id: NULL
>  4 rows in set (0.00 sec)
> 
> 
> 
>  I guess this is populated from the VM that gets created. On the xen
> the
>  vm
>  is r-11. I see the following version on that VM
> 
>  root@r-11-VM:~# cat /etc/cloudstack-release
>  Cloudstack Release 3.0 Mon Feb  6 15:10:04 PST 2012
> 
> 
>  This means that Xen is not picking up the template present in the
>  secondary storage. Does Xen cache the vhd files locally to avoid
> coming
>  to
>  the secondary storage? If so, how can I disable that?
> 
> 

Re: Resource Management/Locking [was: Re: What would be your ideal solution?]

2013-11-25 Thread Darren Shepherd
Okay, I'll have to stew over this for a bit.  My one general comment is
that it seems complicated.  Such a system seems like it would take a good
amount of effort to construct properly and as such it's a risky endeavour.

Darren


On Mon, Nov 25, 2013 at 12:10 PM, John Burwell  wrote:

> Darren,
>
> In a peer-to-peer model such as I describe, active-active is and is not a
> concept.  The supervision tree is responsible for identifying failure, and
> initiating process re-allocation for failed resources.  For example, if a
> pod’s management process crashed, it would also crash all of the processes
> managing the hosts in that pod.  The zone would then attempt to restart the
> pod’s management process (either local to the zone supervisor or on a
> remote instance which could be configurable) until it was able to start
> a “ready” process for the child resource.
>
> This model requires a “special” root supervisor that is controlled by the
> orchestration tier which can identify when a zone supervisor becomes
> unavailable, and attempts to restart it.  The ownership of this “special”
> supervisor will require a consensus mechanism amongst the orchestration
> tier processes to elect an owner of the process and determine when a new
> owner needs to be elected (e.g. a Raft implementation such as barge [1]).
>  Given the orchestration tier is designed as an AP system, an orchestration
> tier process should be able to be an owner (i.e. the operator is not
> required to identify a “master” node).  There are likely other potential
> topologies (e.g. a root supervisor per zone rather than one for all zones),
> but in all cases ownership election would be the same.  Most importantly,
> there are no data durability requirements in this claim model.  When an
> orchestration process becomes unable to continue owning a root supervisor,
> the other orchestration processes recognize the missing owner and initiate
> the ownership claim process for the partition.
>
> In all failure scenarios, the supervision tree must be rebuilt from the
> point of failure downward using the process allocation process I previously
> described.  For an initial implementation, I would recommend simply throwing
> away any parts of the supervision tree that are already running in the
> event of a widespread failure (e.g. a zone with many pods).  Dependent on
> the recovery time and SLAs, a future optimization may be to re-attach
> “orphaned” branches of the previous tree to the tree being built as part of
> the recovery process (e.g. loss of a zone supervisor due to a switch failure).
>  Additionally, the system would also need a mechanism to hand-off ownership
> of the root supervisor for planned outages (hardware
> upgrades/decommissioning, maintenance windows, etc).
>
> Again, caveated with a a few hand waves, the idea is to build up a
> peer-to-peer management model that provides strict serialization
> guarantees.  Fundamentally, it utilizes a tree of processes to provide
> exclusive access, distribute work, and ensure availability requirements
> when partitions occur.  Details would need to be worked out for the best
> application to CloudStack (e.g root node ownership and orchestration tier
> gossip), but we would be implementing well-trod distributed systems
> concepts in the context of cloud orchestration (sounds like a fun thing to do
> …).
>
> Thanks,
> -John
>
> [1]: https://github.com/mgodave/barge
>
> P.S. I see the libraries/frameworks referenced as the building blocks to a
> solution, but none of them (in whole or combination) solves the problem
> completely.
>
> On Nov 25, 2013, at 12:29 PM, Darren Shepherd 
> wrote:
>
> I will ask one basic question.  How do you forsee managing one mailbox per
> resource.  If I have multiple servers running in an active-active mode, how
> do you determine which server has the mailbox?  Do you create actors on
> demand?  How do you synchronize that operation?
>
> Darren
>
>
> On Mon, Nov 25, 2013 at 10:16 AM, Darren Shepherd <
> darren.s.sheph...@gmail.com> wrote:
>
>> You bring up some interesting points.  I really need to digest this
>> further.  From a high level I think I agree, but there are a lot of implied
>> details of what you've said.
>>
>> Darren
>>
>>
>> On Mon, Nov 25, 2013 at 8:39 AM, John Burwell  wrote:
>>
>>> Darren,
>>>
>>> I originally presented my thoughts on this subject at CCC13 [1].
>>>  Fundamentally, I see CloudStack as having two distinct tiers —
>>> orchestration management and automation control.  The orchestration tier
>>> coordinates the automation control layer to fulfill user goals (e.g. create
>>> a VM instance, alter a network route, snapshot a volume, etc) constrained
>>> by policies defined by the operator (e.g. multi-tenancy boundaries, ACLs,
>>> quotas, etc).  This layer must always be available to take new requests,
>>> and to report the best available infrastructure state information.  Since
>>> execution of work is guaranteed on completion of a request, this layer

Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Hugo Trippaers
Will,

What is the error reported by the OS before the fsck is needed? I ran into a 
similar issue a while back and it was caused by a wrong date/time setting on 
the hypervisor. If the systemvm spins up with a time that is earlier than when 
the image was created the system will force an fsck because the last mounted 
time is in the future.  To fix this, you need to set the time on the hypervisor
correctly.
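This failure mode can be sanity-checked with a few lines of Python; the build date below is only a placeholder for the systemvm template's creation time (compare your hypervisor clock against the real image date):

```python
from datetime import datetime, timezone

# Placeholder for the systemvm template's build date (an assumption; use the
# real image creation time from your template).
template_build = datetime(2013, 11, 24, tzinfo=timezone.utc)
hypervisor_now = datetime.now(timezone.utc)

# If the hypervisor clock is behind the image's creation time, the guest's
# "last mounted" timestamp looks like it is in the future and fsck is forced.
if hypervisor_now < template_build:
    print("clock is behind the template build date: expect a forced fsck")
else:
    print("clock looks sane relative to the template build date")
```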


Cheers,

Hugo

BTW, the 4.2 images are ready in the Jenkins job.

On 25 nov. 2013, at 22:02, Will Stevens  wrote:

> I just used:
> http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-11-24-master-xen.vhd.bz2
> 
> I ran into the same issue with this template as all the others I have been
> trying.  Basically, the vm comes up with a read only file system for some
> reason.  If I run 'fsck' on the vm from XenCenter when the system vms come
> up, and then reboot, I can get the vm to come up correctly.  Each time I
> launch a new vm which requires a virtual router, I have to 'fsck' the VR in
> XenCenter as soon as it tries to launch.
> 
> This is working for me to launch an instance (but it is quite annoying).
> Other than the 'fsck' issue, these templates seem to be working.
> 
> ws
> 
> 
> On Mon, Nov 25, 2013 at 3:29 PM, Hugo Trippaers  wrote:
> 
>> The images should be downloadable from jenkins.buildacloud.org
>> 
>> The 4.2 images are here:
>> http://jenkins.buildacloud.org/view/4.2/job/build-systemvm-4.2/
>> 
>> There was something wrong with this build, but I'm working on it. Expect
>> the images soon.
>> 
>> Cheers,
>> 
>> Hugo
>> 
>> On 25 nov. 2013, at 21:14, Travis Graham  wrote:
>> 
>>> Use the links from the Install Guide instead.
>>> 
>>> 
>> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template
>>> 
>>> Travis
>>> 
>>> On Nov 25, 2013, at 3:01 PM, Will Stevens  wrote:
>>> 
 In trying to troubleshoot I think I have found another issue.  I went
>> looking for a 'more official' source for the system templates.  I found
>> this:
>> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
 
 Which gives a system vm template of (for xen server):
>> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
 
 Unfortunately, those system vm templates do not come up (check the
>> attached files for images).  Basically, the file system comes up as Read
>> Only...
 
 I will go back to the buildacloud system templates to see if I can get
>> them working...
 
 Would love to have someone confirm where we should be getting the
>> System VM Templates from for 4.2+.
 
 Still trying to get System VM Templates to work on 4.3.  If anyone has
>> this working, please post how you get them working and where you got them
>> from.
 
 Thanks,
 
 Will
 
 
 On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens 
>> wrote:
 I will try this as a temporary solution.  Thank you...
 
 Will
 
 
 On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy 
>> wrote:
 
 Don't understand the problem well enough for a clean fix, but I updated
 'template_version' from 3.0 to 4.3 for the VR in the domain_router table,
 and that resolved the issue for me.
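For illustration only, here is the shape of that workaround as a Python/sqlite3 sketch; the table and column names come from this thread, but the real fix runs against the management server's MySQL `cloud` database (stop the router first, and take a backup):

```python
import sqlite3

# In-memory stand-in for the cloud database's domain_router table,
# reduced to the two columns that matter here.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE domain_router (id INTEGER PRIMARY KEY, template_version TEXT)")
db.execute("INSERT INTO domain_router VALUES (?, ?)",
           (11, "Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST 2012"))

# The workaround: bump the stale template_version recorded for the router.
db.execute("UPDATE domain_router SET template_version = ? WHERE id = ?",
           ("Cloudstack Release 4.3", 11))
row = db.execute(
    "SELECT template_version FROM domain_router WHERE id = 11").fetchone()
print(row[0])  # -> Cloudstack Release 4.3
```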
 
 On 22/11/13 3:50 PM, "Will Stevens"  wrote:
 
> Has anyone been able to resolve this issue?  This is holding up my
>> ability
> to launch VMs and test the fixes to my plugin.  I need to resolve this
> issue to move forward...
> 
> @Syed, are you still stuck on this as well?
> 
> Cheers,
> 
> Will
> 
> 
> On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed 
>> wrote:
> 
>> OK here is how far I got debugging this. I think I am missing a small
>> thing. I hope you guys can help.
>> 
>> So my VM template has the correct version.
>> 
>> root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
>> f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
>> 
>> Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
>> 
>> 
>> But in the database I see the following ( table domain_router )
>> 
>> *** 4. row ***
>>id: 11
>>element_id: 4
>> public_mac_address: 06:48:a8:00:00:68
>> public_ip_address: 172.30.91.102
>>public_netmask: 255.255.255.0
>> guest_netmask: NULL
>>  guest_ip_address: NULL
>> is_redundant_router: 0
>>  priority: 0
>> is_priority_bumpup: 0
>>   redundant_state: UNKNOWN
>>  stop_pending: 0
>>  role: VIRTUAL_ROUTER
>>  template_version:*Cloudstack Release 3.0 Mon Feb 6 15:10:04 PST
>> 2012*
>>   scripts_version: 725d5e590

Re: Router requires upgrade. Unable to send command to router Error

2013-11-25 Thread Will Stevens
Thanks Hugo, that was the cause of my 'fsck' issue.  NTP was broken on my
dev XenServer due to a misconfigured resolv.conf.

I can confirm that the recent System VM Templates are working.

Thank you...


On Mon, Nov 25, 2013 at 5:15 PM, Hugo Trippaers  wrote:

> Will,
>
> What is the error reported by the OS before the fsck is needed? I ran into
> a similar issue a while back and it was caused by a wrong date/time setting
> on the hypervisor. If the systemvm spins up with a time that is earlier
> than when the image was created the system will force an fsck because the
> last mounted time is in the future.  To fix this, you need to set the time
> on the hypervisor correctly.
>
>
> Cheers,
>
> Hugo
>
> BTW, the 4.2 images are ready in the Jenkins job.
>
> On 25 nov. 2013, at 22:02, Will Stevens  wrote:
>
> > I just used:
> >
> http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-11-24-master-xen.vhd.bz2
> >
> > I ran into the same issue with this template as all the others I have
> been
> > trying.  Basically, the vm comes up with a read only file system for some
> > reason.  If I run 'fsck' on the vm from XenCenter when the system vms
> come
> > up, and then reboot, I can get the vm to come up correctly.  Each time I
> > launch a new vm which requires a virtual router, I have to 'fsck' the VR
> in
> > XenCenter as soon as it tries to launch.
> >
> > This is working for me to launch an instance (but it is quite annoying).
> > Other than the 'fsck' issue, these templates seem to be working.
> >
> > ws
> >
> >
> > On Mon, Nov 25, 2013 at 3:29 PM, Hugo Trippaers 
> wrote:
> >
> >> The images should be downloadable from jenkins.buildacloud.org
> >>
> >> The 4.2 images are here:
> >> http://jenkins.buildacloud.org/view/4.2/job/build-systemvm-4.2/
> >>
> >> There was something wrong with this build, but I'm working on it. Expect
> >> the images soon.
> >>
> >> Cheers,
> >>
> >> Hugo
> >>
> >> On 25 nov. 2013, at 21:14, Travis Graham  wrote:
> >>
> >>> Use the links from the Install Guide instead.
> >>>
> >>>
> >>
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template
> >>>
> >>> Travis
> >>>
> >>> On Nov 25, 2013, at 3:01 PM, Will Stevens 
> wrote:
> >>>
>  In trying to troubleshoot I think I have found another issue.  I went
> >> looking for a 'more official' source for the system templates.  I found
> >> this:
> >>
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-3.0.x-to-4.0
> 
>  Which gives a system vm template of (for xen server):
> >>
> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2
> 
>  Unfortunately, those system vm templates do not come up (check the
> >> attached files for images).  Basically, the file system comes up as Read
> >> Only...
> 
>  I will go back to the buildacloud system templates to see if I can get
> >> them working...
> 
>  Would love to have someone confirm where we should be getting the
> >> System VM Templates from for 4.2+.
> 
>  Still trying to get System VM Templates to work on 4.3.  If anyone has
> >> this working, please post how you get them working and where you got
> them
> >> from.
> 
>  Thanks,
> 
>  Will
> 
> 
>  On Fri, Nov 22, 2013 at 7:43 AM, Will Stevens 
> >> wrote:
>  I will try this as a temporary solution.  Thank you...
> 
>  Will
> 
> 
>  On Fri, Nov 22, 2013 at 6:57 AM, Murali Reddy <
> murali.re...@citrix.com>
> >> wrote:
> 
>  Don't understand problem well enough for clean fix, but I updated
>  'template_version' from 3.0 to 4.3 of the VR in the domain_router
> table
>  that resolved the issue for me.
> 
>  On 22/11/13 3:50 PM, "Will Stevens"  wrote:
> 
> > Has anyone been able to resolve this issue?  This is holding up my
> >> ability
> > to launch VMs and test the fixes to my plugin.  I need to resolve
> this
> > issue to move forward...
> >
> > @Syed, are you still stuck on this as well?
> >
> > Cheers,
> >
> > Will
> >
> >
> > On Wed, Nov 20, 2013 at 5:18 PM, Syed Ahmed 
> >> wrote:
> >
> >> OK here is how far I got debugging this. I think I am missing a
> small
> >> thing. I hope you guys can help.
> >>
> >> So my VM template has the correct version.
> >>
> >> root@eng-ns-dev-cs1: /export/secondary/template/tmpl/1/1 # strings
> >> f3fc75d9-0240-4c71-a3bf-fb65652e4763.vhd  | grep Cloudstack
> >>
> >> Cloudstack Release*  4.2.0*Tue Nov 19 23:22:37 UTC 2013
> >>
> >>
> >> But in the database I see the following ( table domain_router )
> >>
> >> *** 4. row ***
> >>id: 1

RE: persistence layer

2013-11-25 Thread Alex Huang
Has anyone actually tried dropping in a different JDBC driver to see if CS can 
use another DB?  I don't think the current CS DB layer prevents anyone from 
doing that.
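For context, the management server reads its database settings from db.properties; a hypothetical sketch of what a driver swap would touch is below. The key names follow the MySQL defaults shipped with CloudStack, and whether the DAO layer's generated SQL is valid for another dialect is exactly the open question:

```properties
# db.properties (illustrative excerpt; adjust to your deployment)
db.cloud.username=cloud
db.cloud.password=cloud
db.cloud.host=localhost
db.cloud.port=3306
db.cloud.name=cloud
# Swapping in a different JDBC driver/URL here is the easy part; the SQL the
# DAO layer emits must still be valid for the target database.
```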

This is different from MariaDB which, as others have pointed out, is a drop-in 
replacement for MySQL.  I'm talking about stuff like Derby or SQL Server or 
Oracle or DB2.

--Alex

> -Original Message-
> From: Sebastien Goasguen [mailto:run...@gmail.com]
> Sent: Monday, November 25, 2013 2:21 AM
> To: dev@cloudstack.apache.org
> Subject: Re: persistence layer
> 
> 
> On Nov 23, 2013, at 4:13 PM, Laszlo Hornyak 
> wrote:
> 
> > Wouldn't it be a lot of work to move to JOOQ? All queries will have to
> > be rewritten.
> >
> >
> 
> A non-Java developer question: will that help support different
> databases, like moving to MariaDB?
> 
> >
> > On Sat, Nov 23, 2013 at 11:32 AM, Darren Shepherd <
> > darren.s.sheph...@gmail.com> wrote:
> >
> >> Going to an ORM is not as simple as you would expect.  First, one can
> >> make a strong argument that ORM is not the right solution, but that
> >> can be ignored right now.
> >>
> >> You have to look at the context of ACS and figure out what technology
> >> is the most practical to adopt.  ACS does not have ORM today.  It has a
> custom
> >> query api, object mapping, and change tracking for simple CRUD.
> Honestly
> >> these features are quite sufficient for ACS needs.  The problem, and
> >> why we should change it, is that the current framework is custom,
> >> limited in functionality, undocumented, and generally a barrier to
> >> people developing on ACS.  So jOOQ is a somewhat similar approach but
> >> it is just far far better, has a community of users that have
> >> developed over 3-4 years, is well documented, and honestly just a very
> well thought out framework.
> >>
> >> Darren
> >>
> >>> On Nov 22, 2013, at 6:50 PM, Alex Ough 
> wrote:
> >>>
> >>> All,
> >>>
> >>> I'm very interested in converting the current DAO framework to an
> >>> ORM. I didn't have any experience with java related ORMs, but I've
> >>> done quite
> >> lots
> >>> of works with Django and LINQ. So can you add me if this project is
> >> started?
> >>>
> >>> Thanks
> >>> Alex Ough
> >>>
> >>>
> >>> On Fri, Nov 22, 2013 at 7:06 AM, Daan Hoogland
> >>>  >>> wrote:
> >>>
>  Had a quick look, it looks alright. One question/doubt: will we tie
>  ourselves more to MySQL if we code SQL more directly instead of
>  abstracting away from it, so we can leave the DB choice to the
>  operator in the future?
> 
>  On Thu, Nov 21, 2013 at 7:03 AM, Darren Shepherd
>   wrote:
> > I've done a lot of analysis on the data access layer, but just
> > haven't
>  had time to put together a discuss/recommendation.  In the end I'd
> >> propose
>  we move to jOOQ.  It's an excellent framework that will be very
>  natural
> >> to
>  the style of data access that CloudStack uses and we can slowly
>  migrate
> >> to
>  it.  I've hacked up some code and proven that I can get the two
> >> frameworks
>  to seamlessly interoperate.  So you can select from a custom DAO
>  and
> >> commit
>  with jOOQ or vice versa.  Additionally jOOQ will work with the
>  existing pojos we have today.
> >
> > Check out jOOQ and let me know what you think of it.  I know for
> > most
>  people the immediate thought would be to move to JPA, but the way
>  we managed "session" is completely incompatible with JPA and will
>  require constant merging.  Additionally mixing our custom DAO
>  framework with a
> >> JPA
>  solution looks darn near impossible.
> >
> > Darren
> >
> >> On Nov 11, 2013, at 8:33 PM, Laszlo Hornyak
> >>  >>>
>  wrote:
> >>
> >> Hi,
> >>
> >> What are the general directions with the persistence system?
> >> What I know about it is:
> >> - It works with JPA (javax.persistence) annotations
> >> - But rather than integrating a general JPA implementation such
> >> us hibernate, eclipselink or OpenJPA it uses its own query
> >> generator and
>  DAO
> >> classes to generate SQL statements.
> >>
> >> Questions:
> >> - Are you planing to use JPA? What is the motivation behind the
> >> custom
>  DAO
> >> system?
> >> - There are some capabilities in the DAO system that are not used.
>  Should
> >> these capabilities be maintained or is it ok to remove the
> >> support for unused features in small steps?
> >>
> >> --
> >>
> >> EOF
> 
> 
> >>
> >
> >
> >
> > --
> >
> > EOF



Re: [Doc] Validation Issue in Release Notes

2013-11-25 Thread kel...@backbonetechnology.com
Hi, we had a similar issue at the conf with a nested list not contained in a 
listitem. It's worth a shot.

I am not yet near reliable internet to look myself.

-Kelcey

Sent from my HTC

- Reply message -
From: "Sebastien Goasguen" 
To: 
Subject: [Doc] Validation Issue in Release Notes
Date: Mon, Nov 25, 2013 2:58 AM

On Nov 25, 2013, at 5:40 AM, Abhinandan Prateek  
wrote:

> There are some issues with the 4.2/master docs. 4.2 is a priority.
> 
> Anyone who fixes the build gets a special mention in the release notes !
> Now can we have someone fix this.
> 

I can't even locate: en-US/Revision_History.xml


> -abhi
> 
> On 23/11/13 4:20 pm, "Radhika Puthiyetath"
>  wrote:
> 
>> Hi,
>> 
>> Sorry for cross-posting.
>> 
>> While validating the Release Notes by using publican,  there is a
>> validity issue which I am not able to resolve.
>> 
>> The command used is:
>> 
>> 
>> publican build --formats=test --langs=en-US --config=publican.cfg
>> 
>> 
>> The error I am getting is the following:
>> 
>> Release_Notes.xml:3509: validity error : Element listitem content does
>> not follow the DTD, expecting
>> (calloutlist | glosslist | bibliolist | itemizedlist | orderedlist |
>> segmentedlist | simplelist | v
>> ariablelist | caution | important | note | tip | warning | literallayout
>> | programlisting | programl
>> istingco | screen | screenco | screenshot | synopsis | cmdsynopsis |
>> funcsynopsis | classsynopsis |
>> fieldsynopsis | constructorsynopsis | destructorsynopsis | methodsynopsis
>> | formalpara | para | simp
>> ara | address | blockquote | graphic | graphicco | mediaobject |
>> mediaobjectco | informalequation |
>> informalexample | informalfigure | informaltable | equation | example |
>> figure | table | msgset | pr
>> ocedure | sidebar | qandaset | task | anchor | bridgehead | remark |
>> highlights | abstract | authorb
>> lurb | epigraph | indexterm | beginpage)+, got (para programlisting CDATA)
>> 
>> The issue is that the CDATA cannot be located in the file. If it is
>> removed, we can successfully build the file. The issue persists on both
>> Master and 4.2
>> 
>> Thanks in advance
>> 
>> -Radhika
>
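Since the offending CDATA is hard to spot by eye, here is a small Python sketch that reports the line number of every CDATA section in a DocBook file (the inline sample string is a stand-in for Release_Notes.xml):

```python
import re

# Stand-in for the contents of Release_Notes.xml; in practice read the file:
#   xml = open("en-US/Release_Notes.xml", encoding="utf-8").read()
xml = ("<listitem><para>text</para>\n"
       "<programlisting><![CDATA[some code]]></programlisting></listitem>\n")

matches = []
for m in re.finditer(r"<!\[CDATA\[", xml):
    line_no = xml.count("\n", 0, m.start()) + 1
    matches.append(line_no)
    print("CDATA section starts on line", line_no)
```

Running this against the real file should point straight at the listitem the DTD validator is complaining about.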

Re: [Discuss] AutoScaling.next in CloudStack

2013-11-25 Thread Chiradeep Vittal
Hi Tuna,

I boldly diagrammed out what we talked about here:

https://cwiki.apache.org/confluence/x/M6YTAg

The idea is to keep the monitoring part separate from the autoscale
decision. 
So, the monitoring can be SNMP/RRD/whatever.

Scale-up using reconfiguration then becomes a mere matter of modifying the
autoscale service.
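That split can be sketched in a few lines of Python; the thresholds, window size, and simulated samples are all illustrative, and in practice the monitor side would be fed by SNMP/RRD:

```python
from collections import deque

class CpuMonitor:
    """Collects samples; knows nothing about scaling decisions."""
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)

    def record(self, cpu_pct):
        self.samples.append(cpu_pct)

    def average(self):
        return sum(self.samples) / len(self.samples)

def autoscale_action(avg_cpu, scale_up_at=80.0, scale_down_at=20.0):
    """Pure policy: maps an observed average to a scaling action."""
    if avg_cpu > scale_up_at:
        return "scale-up"
    if avg_cpu < scale_down_at:
        return "scale-down"
    return "steady"

monitor = CpuMonitor()
for sample in (85, 90, 95):  # simulated readings from the monitoring layer
    monitor.record(sample)
action = autoscale_action(monitor.average())
print(action)  # -> scale-up
```

Keeping the policy a pure function is what makes it cheap to swap polling for an event-driven feed later: the appliance or trap handler just calls `autoscale_action` with whatever it observed.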


On 11/25/13 8:57 AM, "tuna"  wrote:

>Hi guys,
>
>At CCCEU13 I talked about the AutoScale without NetScaler feature working
>with XenServer & XCP. For anyone who doesn't know about this feature, take a
>look into my slide here:
>http://www.slideshare.net/tuna20073882/autoscale-without-netscalerccceu13.
>
>Chiradeep and I had a short talk after the presentation about how to
>improve the AutoScale feature in future. We agreed that:
>
>+ Need to remove Load Balancing feature from AutoScaling. That's very
>simple to do.
>+ Need to use SNMP for monitoring not only instance-level but also
>application-level.
>+ Also, supporting well KVM hypervisor
>
>So, I blow up this thread for all of you guys to discuss the way we
>design that feature, such as:
>+ technical side, how to integrate SNMP effectively into CloudStack.
>Where do we put SNMP monitor components into infrastructure? etc
>+ user experience, how users configure that feature with SNMP monitoring.
>I imagine that users can figure out which of the following items they need
>AutoScale for: application, protocol (tcp, udp), port, bandwidth, disk,
>cpu and memory, etc
>+ How about autoscale actions: not just deploying or destroying VMs; we may
>need to dynamically increase/decrease memory/cpu, nic bandwidth, disk, ...
>
>Personally, I think we should aim for a complete autoscaling feature.
>
>Cheers,
>
>--Tuna



Re: [Discuss] AutoScaling.next in CloudStack

2013-11-25 Thread Nguyen Anh Tu
2013/11/26 Chiradeep Vittal 

> Hi Tuna,
>
> I boldly diagrammed out what we talked about here:
>
> https://cwiki.apache.org/confluence/x/M6YTAg
>
> The idea is to keep the monitoring part separate from the autoscale
> decision.
> So, the monitoring can be SNMP/RRD/whatever.
>
> Scale-up using reconfiguration then becomes a mere matter of modifying the
> autoscale service.
>

Great,

And I found the feature below, which might be helpful. I will look at that code:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/FS+for+Integrating+CS+alerts+via+SNMP+to+external+management+system


-- 

N.g.U.y.e.N.A.n.H.t.U


Re: Need help in creating/posting Cloudstack plugin for Juniper's network devices

2013-11-25 Thread Pradeep HK
Thanks Rayees

-Pradeep





On Monday, November 25, 2013 11:15 PM, Pradeep HK  wrote:
 
Hi,
my name is Pradeep HK and I am a Tech Lead in Juniper Networks, Bangalore.

I am leading the effort to develop Cloudstack Plugin(ie Network Guru) for 
Juniper's networking devices.

I have developed Network Plugin for orchestration of L2 services on Juniper's 
Networking devices.

 I need some pointers on:
(1) How do we go about posting the plugin? Is it like any other source code? 
What is the procedure?
(2) In a customer installation, if they want to try out the new plugin, what is 
the procedure?

Appreciate your help on this.


-Pradeep

RE: [Doc] Validation Issue in Release Notes

2013-11-25 Thread Radhika Puthiyetath
That issue has been resolved. This is about a CDATA section, which is still 
hidden from my eyes.

-Original Message-
From: kel...@backbonetechnology.com [mailto:kel...@backbonetechnology.com] 
Sent: Tuesday, November 26, 2013 7:37 AM
To: dev@cloudstack.apache.org
Subject: Re: [Doc] Validation Issue in Release Notes

Hi, we had a similar issue at the conf with a nested list not contained in a 
listitem. It's worth a shot.

I am not yet near reliable internet to look myself.

-Kelcey

Sent from my HTC

- Reply message -
From: "Sebastien Goasguen" 
To: 
Subject: [Doc] Validation Issue in Release Notes
Date: Mon, Nov 25, 2013 2:58 AM

On Nov 25, 2013, at 5:40 AM, Abhinandan Prateek  
wrote:

> There are some issues with the 4.2/master docs. 4.2 is a priority.
> 
> Anyone who fixes the build gets a special mention in the release notes !
> Now can we have someone fix this.
> 

I can't even locate: en-US/Revision_History.xml


> -abhi
> 
> On 23/11/13 4:20 pm, "Radhika Puthiyetath"
>  wrote:
> 
>> Hi,
>> 
>> Sorry for cross-posting.
>> 
>> While validating the Release Notes by using publican,  there is a 
>> validity issue which I am not able to resolve.
>> 
>> The command used is:
>> 
>> 
>> publican build --formats=test --langs=en-US --config=publican.cfg
>> 
>> 
>> The error I am getting is the following:
>> 
>> Release_Notes.xml:3509: validity error : Element listitem content 
>> does not follow the DTD, expecting (calloutlist | glosslist | 
>> bibliolist | itemizedlist | orderedlist | segmentedlist | simplelist 
>> | v ariablelist | caution | important | note | tip | warning | 
>> literallayout
>> | programlisting | programl
>> istingco | screen | screenco | screenshot | synopsis | cmdsynopsis | 
>> funcsynopsis | classsynopsis | fieldsynopsis | constructorsynopsis | 
>> destructorsynopsis | methodsynopsis
>> | formalpara | para | simp
>> ara | address | blockquote | graphic | graphicco | mediaobject | 
>> mediaobjectco | informalequation | informalexample | informalfigure | 
>> informaltable | equation | example | figure | table | msgset | pr 
>> ocedure | sidebar | qandaset | task | anchor | bridgehead | remark | 
>> highlights | abstract | authorb lurb | epigraph | indexterm | 
>> beginpage)+, got (para programlisting CDATA)
>> 
>> The issue is that the CDATA cannot be located in the file. If it is 
>> removed, we can successfully build the file. The issue persists on 
>> both Master and 4.2
>> 
>> Thanks in advance
>> 
>> -Radhika
>


RE: [Discuss] AutoScaling.next in CloudStack

2013-11-25 Thread Vijay Venkatachalam
Hi Chiradeep,
The monitoring service seems to be collecting statistics using polling 
and 
triggers actions during  threshold breach. This seems to be very 
tasking.
Can it be designed to listen for events on threshold breach as well? 
For ex. a configuration "response timeout > 30 ms" on a VIP can be 
sent to LB appliance, the LB appliance can intimate the Monitoring 
service 
when the threshold breach has happened. Basically offloading the 
responsibility.
Thanks,
Vijay V.


> -Original Message-
> From: Chiradeep Vittal [mailto:chiradeep.vit...@citrix.com]
> Sent: Tuesday, November 26, 2013 8:07 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [Discuss] AutoScaling.next in CloudStack
> 
> Hi Tuna,
> 
> I boldly diagrammed out what we talked about here:
> 
> https://cwiki.apache.org/confluence/x/M6YTAg
> 
> The idea is to keep the monitoring part separate from the autoscale decision.
> So, the monitoring can be SNMP/RRD/whatever.
> 
> Scale-up using reconfiguration then becomes a mere matter of modifying the
> autoscale service.
> 
> 
> On 11/25/13 8:57 AM, "tuna"  wrote:
> 
> >Hi guys,
> >
> >At CCCEU13 I talked about the AutoScale without NetScaler feature
> >working with XenServer & XCP. For anyone who doesn't know about this feature,
> >take a look into my slide here:
> >http://www.slideshare.net/tuna20073882/autoscale-without-
> netscalerccceu13.
> >
> >Chiradeep and I had a short talk after the presentation about how to
> >improve the AutoScale feature in future. We agreed that:
> >
> >+ Need to remove Load Balancing feature from AutoScaling. That's very
> >simple to do.
> >+ Need to use SNMP for monitoring not only instance-level but also
> >application-level.
> >+ Also, supporting well KVM hypervisor
> >
> >So, I blow up this thread for all of you guys to discuss the way we
> >design that feature, such as:
> >+ technical side, how to integrate SNMP effectively into CloudStack.
> >Where do we put SNMP monitor components into infrastructure? etc
> >+ user experience, how users configure that feature with SNMP monitoring.
> >I imagine that users can figure out which of the following items they
> >need AutoScale for: application, protocol (tcp, udp), port, bandwidth,
> >disk, cpu and memory, etc
> >+ How about autoscale actions: not just deploying or destroying VMs; we
> >may need to dynamically increase/decrease memory/cpu, nic bandwidth,
> >disk, ...
> >
> >Personally, I think we should aim for a complete autoscaling feature.
> >
> >Cheers,
> >
> >--Tuna



RE: [Doc] Validation Issue in Release Notes

2013-11-25 Thread Radhika Puthiyetath
Hi,

I managed to use the --novalid option to create the file. Hopefully it will 
also work for generating the online version.

-Original Message-
From: Radhika Puthiyetath [mailto:radhika.puthiyet...@citrix.com] 
Sent: Tuesday, November 26, 2013 10:00 AM
To: dev@cloudstack.apache.org
Subject: RE: [Doc] Validation Issue in Release Notes

That issue has been resolved. This is about a CDATA section, which is still 
hidden from my eyes.

-Original Message-
From: kel...@backbonetechnology.com [mailto:kel...@backbonetechnology.com]
Sent: Tuesday, November 26, 2013 7:37 AM
To: dev@cloudstack.apache.org
Subject: Re: [Doc] Validation Issue in Release Notes

Hi, we had a similar issue at the conf with a nested list not contained in a 
listitem. It's worth a shot.

I am not yet near reliable internet to look myself.

-Kelcey

Sent from my HTC

- Reply message -
From: "Sebastien Goasguen" 
To: 
Subject: [Doc] Validation Issue in Release Notes
Date: Mon, Nov 25, 2013 2:58 AM

On Nov 25, 2013, at 5:40 AM, Abhinandan Prateek  
wrote:

> There are some issues with the 4.2/master docs. 4.2 is a priority.
> 
> Anyone who fixes the build gets a special mention in the release notes !
> Now can we have someone fix this.
> 

I can't even locate: en-US/Revision_History.xml


> -abhi
> 
> On 23/11/13 4:20 pm, "Radhika Puthiyetath"
>  wrote:
> 
>> Hi,
>> 
>> Sorry for cross-posting.
>> 
>> While validating the Release Notes by using publican,  there is a 
>> validity issue which I am not able to resolve.
>> 
>> The command used is:
>> 
>> 
>> publican build --formats=test --langs=en-US --config=publican.cfg
>> 
>> 
>> The error I am getting is the following:
>> 
>> Release_Notes.xml:3509: validity error : Element listitem content 
>> does not follow the DTD, expecting (calloutlist | glosslist | 
>> bibliolist | itemizedlist | orderedlist | segmentedlist | simplelist
>> | v ariablelist | caution | important | note | tip | warning |
>> literallayout
>> | programlisting | programl
>> istingco | screen | screenco | screenshot | synopsis | cmdsynopsis | 
>> funcsynopsis | classsynopsis | fieldsynopsis | constructorsynopsis | 
>> destructorsynopsis | methodsynopsis
>> | formalpara | para | simp
>> ara | address | blockquote | graphic | graphicco | mediaobject | 
>> mediaobjectco | informalequation | informalexample | informalfigure | 
>> informaltable | equation | example | figure | table | msgset | pr 
>> ocedure | sidebar | qandaset | task | anchor | bridgehead | remark | 
>> highlights | abstract | authorb lurb | epigraph | indexterm | 
>> beginpage)+, got (para programlisting CDATA)
>> 
>> The issue is that the CDATA cannot be located in the file. If it is 
>> removed, we can successfully build the file. The issue persists on 
>> both Master and 4.2
>> 
>> Thanks in advance
>> 
>> -Radhika
>


Re: Review Request 15834: CLOUDSTACK-4737: Root volume metering

2013-11-25 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15834/
---

(Updated Nov. 26, 2013, 5:39 a.m.)


Review request for cloudstack and Koushik Das.


Bugs: CLOUDSTACK-4737
https://issues.apache.org/jira/browse/CLOUDSTACK-4737


Repository: cloudstack-git


Description
---

CLOUDSTACK-4737: Root volume metering


Diffs
-

  engine/schema/src/com/cloud/event/dao/UsageEventDetailsDaoImpl.java a4382c4 
  engine/schema/src/com/cloud/usage/UsageVMInstanceVO.java 2fe346e 
  setup/db/db/schema-421to430.sql 8be0fb1 
  usage/src/com/cloud/usage/UsageManagerImpl.java 1ee21c9 

Diff: https://reviews.apache.org/r/15834/diff/


Testing
---


Thanks,

Harikrishna Patnala



Re: Review Request 15505: Usage details are not getting populated when using dynamic offerings.

2013-11-25 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15505/
---

(Updated Nov. 26, 2013, 5:57 a.m.)


Review request for cloudstack, Kishan Kavala and Koushik Das.


Bugs: CLOUDSTACK-5162
https://issues.apache.org/jira/browse/CLOUDSTACK-5162


Repository: cloudstack-git


Description
---

Usage details are not getting populated when using dynamic offerings.
CLOUDSTACK-5162


Diffs
-

  engine/schema/src/com/cloud/service/ServiceOfferingVO.java 1e89add 
  server/src/com/cloud/vm/UserVmManagerImpl.java ca10b06 

Diff: https://reviews.apache.org/r/15505/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: Review Request 15834: CLOUDSTACK-4737: handling usage events for dynamic compute offering

2013-11-25 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15834/
---

(Updated Nov. 26, 2013, 5:57 a.m.)


Review request for cloudstack and Koushik Das.


Summary (updated)
-

CLOUDSTACK-4737: handling usage events for dynamic compute offering 


Bugs: CLOUDSTACK-4737
https://issues.apache.org/jira/browse/CLOUDSTACK-4737


Repository: cloudstack-git


Description (updated)
---

CLOUDSTACK-4737: handling usage events for dynamic compute offering 


Diffs
-

  engine/schema/src/com/cloud/event/dao/UsageEventDetailsDaoImpl.java a4382c4 
  engine/schema/src/com/cloud/usage/UsageVMInstanceVO.java 2fe346e 
  setup/db/db/schema-421to430.sql 8be0fb1 
  usage/src/com/cloud/usage/UsageManagerImpl.java 1ee21c9 

Diff: https://reviews.apache.org/r/15834/diff/


Testing
---


Thanks,

Harikrishna Patnala



Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/#review29420
---


Commit d6298302a1872eea1be52ccf5922174e469ed807 in branch refs/heads/master 
from Ashutosh K
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d629830 ]

CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

Signed-off-by: Girish Shilamkar 


- ASF Subversion and Git Services


On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/
> ---
> 
> (Updated Nov. 25, 2013, 2:37 p.m.)
> 
> 
> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
> 
> 
> Bugs: CLOUDSTACK-5257
> https://issues.apache.org/jira/browse/CLOUDSTACK-5257
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test case was failing due to issue in ACL rule. The ACL rule was created 
> for TCP protocol and the connection to outside world was checked using Ping 
> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
> ICMP.
> Also corrected the port numbers and cleaned up code.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_vpc_vms_deployment.py baefa55 
> 
> Diff: https://reviews.apache.org/r/15833/diff/
> 
> 
> Testing
> ---
> 
> Tested locally on XenServer advances setup.
> 
> Log:
> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks ... skipped 'Skip'
> test_02_deploy_vms_delete_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks and delete one of the network ... skipped 
> 'Skip'
> test_03_deploy_vms_delete_add_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one of the network and add another one ... skipped 
> 'Skip'
> test_04_deploy_vms_delete_add_network_noLb 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one network without LB and add another one ... 
> skipped 'Skip'
> test_05_create_network_max_limit 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
> 'Skip'
> test_06_delete_network_vm_running 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network having running instances in VPC ... skipped 'Skip'
> test_07_delete_network_with_rules 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
> 'Skip'
> 
> --
> Ran 7 tests in 5.907s
> 
> OK (skipped=7)
> 
> 
> Thanks,
> 
> Ashutosh Kelkar
> 
>



Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/#review29421
---


Commit 5b37d38ed0b156268132ade1856032102723c36e in branch refs/heads/4.3 from 
Ashutosh K
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5b37d38 ]

CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

Signed-off-by: Girish Shilamkar 

Conflicts:
test/integration/component/test_vpc_vms_deployment.py


- ASF Subversion and Git Services


On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/
> ---
> 
> (Updated Nov. 25, 2013, 2:37 p.m.)
> 
> 
> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
> 
> 
> Bugs: CLOUDSTACK-5257
> https://issues.apache.org/jira/browse/CLOUDSTACK-5257
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test case was failing due to issue in ACL rule. The ACL rule was created 
> for TCP protocol and the connection to outside world was checked using Ping 
> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
> ICMP.
> Also corrected the port numbers and cleaned up code.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_vpc_vms_deployment.py baefa55 
> 
> Diff: https://reviews.apache.org/r/15833/diff/
> 
> 
> Testing
> ---
> 
> Tested locally on XenServer advanced setup.
> 
> Log:
> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks ... skipped 'Skip'
> test_02_deploy_vms_delete_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks and delete one of the network ... skipped 
> 'Skip'
> test_03_deploy_vms_delete_add_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one of the network and add another one ... skipped 
> 'Skip'
> test_04_deploy_vms_delete_add_network_noLb 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one network without LB and add another one ... 
> skipped 'Skip'
> test_05_create_network_max_limit 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
> 'Skip'
> test_06_delete_network_vm_running 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network having running instances in VPC ... skipped 'Skip'
> test_07_delete_network_with_rules 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
> 'Skip'
> 
> --
> Ran 7 tests in 5.907s
> 
> OK (skipped=7)
> 
> 
> Thanks,
> 
> Ashutosh Kelkar
> 
>



Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Girish Shilamkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/#review29422
---

Ship it!


Committed to 4.3 and master.

- Girish Shilamkar


On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/
> ---
> 
> (Updated Nov. 25, 2013, 2:37 p.m.)
> 
> 
> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
> 
> 
> Bugs: CLOUDSTACK-5257
> https://issues.apache.org/jira/browse/CLOUDSTACK-5257
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test case was failing due to issue in ACL rule. The ACL rule was created 
> for TCP protocol and the connection to outside world was checked using Ping 
> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
> ICMP.
> Also corrected the port numbers and cleaned up code.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_vpc_vms_deployment.py baefa55 
> 
> Diff: https://reviews.apache.org/r/15833/diff/
> 
> 
> Testing
> ---
> 
> Tested locally on XenServer advanced setup.
> 
> Log:
> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks ... skipped 'Skip'
> test_02_deploy_vms_delete_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks and delete one of the network ... skipped 
> 'Skip'
> test_03_deploy_vms_delete_add_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one of the network and add another one ... skipped 
> 'Skip'
> test_04_deploy_vms_delete_add_network_noLb 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one network without LB and add another one ... 
> skipped 'Skip'
> test_05_create_network_max_limit 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
> 'Skip'
> test_06_delete_network_vm_running 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network having running instances in VPC ... skipped 'Skip'
> test_07_delete_network_with_rules 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
> 'Skip'
> 
> --
> Ran 7 tests in 5.907s
> 
> OK (skipped=7)
> 
> 
> Thanks,
> 
> Ashutosh Kelkar
> 
>



[Responsiveness report] users 2013w46

2013-11-25 Thread Daan Hoogland
http://markmail.org/message/gha6ezktv7mpfgsh Create primary storage
entry via API by Lisa B.
http://markmail.org/message/l47ztbzzohqxs36h Multiple simultaneous
tasks in vCenter by Sean Hamilton
http://markmail.org/message/65nouxpzfbhxtjrb Major stability problems
lately by Timothy Ehlers
http://markmail.org/message/eii36hbqgw3t7ifd upgrade 4.1.1 to 4.2 and
new system template by Jaro 2079
http://markmail.org/message/jijoffbuovlmm6rf Unable to start VM by m2m isb
http://markmail.org/message/wvikkr7k7rd7vcvk Fail to communicate with
user_vm through web UI by Du Jun

for an explanation of this report see
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Responsiveness+report


[Responsiveness report] dev 2013w46

2013-11-25 Thread Daan Hoogland
http://markmail.org/message/ro7ahmfcj4pqppqo Snapshot file extension
on nfs by Gaurav Aradhye
http://markmail.org/message/zbcewsuysyvwj4ob [DISCUSS]
(CLOUDSTACK-1889) by Saurav Lahiri
http://markmail.org/message/rgv6vmvyjvwx67g7 help need error launching
jetty by Juan Barrio
http://markmail.org/message/x4g4y426r4ru5c2o CloudStack 4.2 web
interface is not responding during handling big logs by Denis Finko
http://markmail.org/message/os35pnlccdujasdf [DOCS] 4.2.1 Release
Notes rework by Travis Graham

for an explanation of this report see
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Responsiveness+report


RE: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Santhosh Edukulla
Does this apply to 4.2 and, if possible, prior versions as well?

Santhosh

From: Girish Shilamkar [nore...@reviews.apache.org] on behalf of Girish 
Shilamkar [gir...@clogeny.com]
Sent: Tuesday, November 26, 2013 1:41 AM
To: Girish Shilamkar; Srikanteswararao Talluri
Cc: Ashutosh Kelkar; cloudstack
Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
related to Egress traffic

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/#review29422
---

Ship it!


Committed to 4.3 and master.

- Girish Shilamkar


On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/
> ---
>
> (Updated Nov. 25, 2013, 2:37 p.m.)
>
>
> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
>
>
> Bugs: CLOUDSTACK-5257
> https://issues.apache.org/jira/browse/CLOUDSTACK-5257
>
>
> Repository: cloudstack-git
>
>
> Description
> ---
>
> The test case was failing due to issue in ACL rule. The ACL rule was created 
> for TCP protocol and the connection to outside world was checked using Ping 
> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
> ICMP.
> Also corrected the port numbers and cleaned up code.
>
>
> Diffs
> -
>
>   test/integration/component/test_vpc_vms_deployment.py baefa55
>
> Diff: https://reviews.apache.org/r/15833/diff/
>
>
> Testing
> ---
>
> Tested locally on XenServer advanced setup.
>
> Log:
> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks ... skipped 'Skip'
> test_02_deploy_vms_delete_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks and delete one of the network ... skipped 
> 'Skip'
> test_03_deploy_vms_delete_add_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one of the network and add another one ... skipped 
> 'Skip'
> test_04_deploy_vms_delete_add_network_noLb 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one network without LB and add another one ... 
> skipped 'Skip'
> test_05_create_network_max_limit 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
> 'Skip'
> test_06_delete_network_vm_running 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network having running instances in VPC ... skipped 'Skip'
> test_07_delete_network_with_rules 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
> 'Skip'
>
> --
> Ran 7 tests in 5.907s
>
> OK (skipped=7)
>
>
> Thanks,
>
> Ashutosh Kelkar
>
>



Re: edit access to cwiki

2013-11-25 Thread Daan Hoogland
On Mon, Nov 18, 2013 at 7:36 AM, Shweta Agarwal
 wrote:
> 'shweta.agar...@citrix.com


added,

sorry it took so long


Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Ashutosh Kelkar
Yes, this is a generic fix and would apply to all branches.
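To make the fix concrete: the test paired a TCP egress ACL rule with a ping-based connectivity check, but ping is ICMP echo traffic, so a TCP rule can never match it. A minimal sketch of that mismatch (hypothetical helper names, not the actual Marvin test code):

```python
# Hypothetical sketch of the CLOUDSTACK-5257 fix: map each connectivity
# check to the IP protocol an ACL rule must allow. A ping check needs an
# ICMP rule; a TCP-only rule never matches ICMP echo traffic.
CHECK_PROTOCOLS = {
    "ping": "icmp",  # ICMP echo request/reply
    "ssh": "tcp",    # TCP, port 22
    "http": "tcp",   # TCP, port 80
}

def acl_rule_covers(check, rule_protocol):
    """True when an ACL rule with the given protocol permits the check's traffic."""
    return rule_protocol.lower() in (CHECK_PROTOCOLS[check], "all")

# The failing test paired a ping check with a TCP rule:
assert not acl_rule_covers("ping", "TCP")
# The fix creates the egress rule with ICMP instead:
assert acl_rule_covers("ping", "ICMP")
```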




On Tue, Nov 26, 2013 at 12:19 PM, Santhosh Edukulla <
santhosh.eduku...@citrix.com> wrote:

> Does this apply to 4.2 and, if possible, prior versions as well?
>
> Santhosh
> 
> From: Girish Shilamkar [nore...@reviews.apache.org] on behalf of Girish
> Shilamkar [gir...@clogeny.com]
> Sent: Tuesday, November 26, 2013 1:41 AM
> To: Girish Shilamkar; Srikanteswararao Talluri
> Cc: Ashutosh Kelkar; cloudstack
> Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL
> issue related to Egress traffic
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/#review29422
> ---
>
> Ship it!
>
>
> Committed to 4.3 and master.
>
> - Girish Shilamkar
>
>
> On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
> >
> > ---
> > This is an automatically generated e-mail. To reply, visit:
> > https://reviews.apache.org/r/15833/
> > ---
> >
> > (Updated Nov. 25, 2013, 2:37 p.m.)
> >
> >
> > Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao
> Talluri.
> >
> >
> > Bugs: CLOUDSTACK-5257
> > https://issues.apache.org/jira/browse/CLOUDSTACK-5257
> >
> >
> > Repository: cloudstack-git
> >
> >
> > Description
> > ---
> >
> > The test case was failing due to issue in ACL rule. The ACL rule was
> created for TCP protocol and the connection to outside world was checked
> using Ping protocol. In this case ICMP protocol should be used in ACL rule
> as Ping uses ICMP.
> > Also corrected the port numbers and cleaned up code.
> >
> >
> > Diffs
> > -
> >
> >   test/integration/component/test_vpc_vms_deployment.py baefa55
> >
> > Diff: https://reviews.apache.org/r/15833/diff/
> >
> >
> > Testing
> > ---
> >
> > Tested locally on XenServer advanced setup.
> >
> > Log:
> > test_01_deploy_vms_in_network
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test deploy VMs in VPC networks ... skipped 'Skip'
> > test_02_deploy_vms_delete_network
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test deploy VMs in VPC networks and delete one of the network ...
> skipped 'Skip'
> > test_03_deploy_vms_delete_add_network
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test deploy VMs, delete one of the network and add another one ...
> skipped 'Skip'
> > test_04_deploy_vms_delete_add_network_noLb
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test deploy VMs, delete one network without LB and add another one ...
> skipped 'Skip'
> > test_05_create_network_max_limit
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test create networks in VPC upto maximum limit for hypervisor ...
> skipped 'Skip'
> > test_06_delete_network_vm_running
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test delete network having running instances in VPC ... skipped 'Skip'
> > test_07_delete_network_with_rules
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> > Test delete network that has PF/staticNat/LB rules/Network Acl ...
> skipped 'Skip'
> >
> > --
> > Ran 7 tests in 5.907s
> >
> > OK (skipped=7)
> >
> >
> > Thanks,
> >
> > Ashutosh Kelkar
> >
> >
>
>


Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Girish Shilamkar
Santhosh,

I am not sure if we will be running tests against 4.2.1, so I did not merge it 
to the 4.2 branch.

Regards,
Girish

On 26-Nov-2013, at 12:19 PM, Santhosh Edukulla  
wrote:

> Does this apply to 4.2 and, if possible, prior versions as well?
> 
> Santhosh
> 
> From: Girish Shilamkar [nore...@reviews.apache.org] on behalf of Girish 
> Shilamkar [gir...@clogeny.com]
> Sent: Tuesday, November 26, 2013 1:41 AM
> To: Girish Shilamkar; Srikanteswararao Talluri
> Cc: Ashutosh Kelkar; cloudstack
> Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
> related to Egress traffic
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/#review29422
> ---
> 
> Ship it!
> 
> 
> Committed to 4.3 and master.
> 
> - Girish Shilamkar
> 
> 
> On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
>> 
>> ---
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/15833/
>> ---
>> 
>> (Updated Nov. 25, 2013, 2:37 p.m.)
>> 
>> 
>> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
>> 
>> 
>> Bugs: CLOUDSTACK-5257
>>https://issues.apache.org/jira/browse/CLOUDSTACK-5257
>> 
>> 
>> Repository: cloudstack-git
>> 
>> 
>> Description
>> ---
>> 
>> The test case was failing due to issue in ACL rule. The ACL rule was created 
>> for TCP protocol and the connection to outside world was checked using Ping 
>> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
>> ICMP.
>> Also corrected the port numbers and cleaned up code.
>> 
>> 
>> Diffs
>> -
>> 
>>  test/integration/component/test_vpc_vms_deployment.py baefa55
>> 
>> Diff: https://reviews.apache.org/r/15833/diff/
>> 
>> 
>> Testing
>> ---
>> 
>> Tested locally on XenServer advanced setup.
>> 
>> Log:
>> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs in VPC networks ... skipped 'Skip'
>> test_02_deploy_vms_delete_network 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs in VPC networks and delete one of the network ... skipped 
>> 'Skip'
>> test_03_deploy_vms_delete_add_network 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs, delete one of the network and add another one ... skipped 
>> 'Skip'
>> test_04_deploy_vms_delete_add_network_noLb 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs, delete one network without LB and add another one ... 
>> skipped 'Skip'
>> test_05_create_network_max_limit 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
>> 'Skip'
>> test_06_delete_network_vm_running 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test delete network having running instances in VPC ... skipped 'Skip'
>> test_07_delete_network_with_rules 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
>> 'Skip'
>> 
>> --
>> Ran 7 tests in 5.907s
>> 
>> OK (skipped=7)
>> 
>> 
>> Thanks,
>> 
>> Ashutosh Kelkar
>> 
>> 
> 



RE: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Santhosh Edukulla
But it's better to merge it, so that it is in place when tests are run against 
that branch in the future as well.

Santhosh

From: Girish Shilamkar [gir...@clogeny.com]
Sent: Tuesday, November 26, 2013 2:01 AM
To: Santhosh Edukulla
Cc: dev@cloudstack.apache.org; Srikanteswararao Talluri; Ashutosh Kelkar
Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
related to Egress traffic

Santhosh,

I am not sure if we will be running tests against 4.2.1, so I did not merge it 
to the 4.2 branch.

Regards,
Girish

On 26-Nov-2013, at 12:19 PM, Santhosh Edukulla  
wrote:

> Does this apply to 4.2 and, if possible, prior versions as well?
>
> Santhosh
> 
> From: Girish Shilamkar [nore...@reviews.apache.org] on behalf of Girish 
> Shilamkar [gir...@clogeny.com]
> Sent: Tuesday, November 26, 2013 1:41 AM
> To: Girish Shilamkar; Srikanteswararao Talluri
> Cc: Ashutosh Kelkar; cloudstack
> Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
> related to Egress traffic
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/#review29422
> ---
>
> Ship it!
>
>
> Committed to 4.3 and master.
>
> - Girish Shilamkar
>
>
> On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
>>
>> ---
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/15833/
>> ---
>>
>> (Updated Nov. 25, 2013, 2:37 p.m.)
>>
>>
>> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
>>
>>
>> Bugs: CLOUDSTACK-5257
>>https://issues.apache.org/jira/browse/CLOUDSTACK-5257
>>
>>
>> Repository: cloudstack-git
>>
>>
>> Description
>> ---
>>
>> The test case was failing due to issue in ACL rule. The ACL rule was created 
>> for TCP protocol and the connection to outside world was checked using Ping 
>> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
>> ICMP.
>> Also corrected the port numbers and cleaned up code.
>>
>>
>> Diffs
>> -
>>
>>  test/integration/component/test_vpc_vms_deployment.py baefa55
>>
>> Diff: https://reviews.apache.org/r/15833/diff/
>>
>>
>> Testing
>> ---
>>
>> Tested locally on XenServer advanced setup.
>>
>> Log:
>> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs in VPC networks ... skipped 'Skip'
>> test_02_deploy_vms_delete_network 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs in VPC networks and delete one of the network ... skipped 
>> 'Skip'
>> test_03_deploy_vms_delete_add_network 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs, delete one of the network and add another one ... skipped 
>> 'Skip'
>> test_04_deploy_vms_delete_add_network_noLb 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test deploy VMs, delete one network without LB and add another one ... 
>> skipped 'Skip'
>> test_05_create_network_max_limit 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
>> 'Skip'
>> test_06_delete_network_vm_running 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test delete network having running instances in VPC ... skipped 'Skip'
>> test_07_delete_network_with_rules 
>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
>> 'Skip'
>>
>> --
>> Ran 7 tests in 5.907s
>>
>> OK (skipped=7)
>>
>>
>> Thanks,
>>
>> Ashutosh Kelkar
>>
>>
>



Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread Girish Shilamkar
Ok, I will merge it to 4.2 as well.

Regards,
Girish

On 26-Nov-2013, at 12:33 PM, Santhosh Edukulla  
wrote:

> But it's better to merge it, so that it is in place when tests are run against 
> that branch in the future as well.
> 
> Santhosh
> 
> From: Girish Shilamkar [gir...@clogeny.com]
> Sent: Tuesday, November 26, 2013 2:01 AM
> To: Santhosh Edukulla
> Cc: dev@cloudstack.apache.org; Srikanteswararao Talluri; Ashutosh Kelkar
> Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
> related to Egress traffic
> 
> Santhosh,
> 
> I am not sure if we will be running tests against 4.2.1, so I did not merge it 
> to the 4.2 branch.
> 
> Regards,
> Girish
> 
> On 26-Nov-2013, at 12:19 PM, Santhosh Edukulla  
> wrote:
> 
>> Does this apply to 4.2 and, if possible, prior versions as well?
>> 
>> Santhosh
>> 
>> From: Girish Shilamkar [nore...@reviews.apache.org] on behalf of Girish 
>> Shilamkar [gir...@clogeny.com]
>> Sent: Tuesday, November 26, 2013 1:41 AM
>> To: Girish Shilamkar; Srikanteswararao Talluri
>> Cc: Ashutosh Kelkar; cloudstack
>> Subject: Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue 
>> related to Egress traffic
>> 
>> ---
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/15833/#review29422
>> ---
>> 
>> Ship it!
>> 
>> 
>> Committed to 4.3 and master.
>> 
>> - Girish Shilamkar
>> 
>> 
>> On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
>>> 
>>> ---
>>> This is an automatically generated e-mail. To reply, visit:
>>> https://reviews.apache.org/r/15833/
>>> ---
>>> 
>>> (Updated Nov. 25, 2013, 2:37 p.m.)
>>> 
>>> 
>>> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao 
>>> Talluri.
>>> 
>>> 
>>> Bugs: CLOUDSTACK-5257
>>>   https://issues.apache.org/jira/browse/CLOUDSTACK-5257
>>> 
>>> 
>>> Repository: cloudstack-git
>>> 
>>> 
>>> Description
>>> ---
>>> 
>>> The test case was failing due to issue in ACL rule. The ACL rule was 
>>> created for TCP protocol and the connection to outside world was checked 
>>> using Ping protocol. In this case ICMP protocol should be used in ACL rule 
>>> as Ping uses ICMP.
>>> Also corrected the port numbers and cleaned up code.
>>> 
>>> 
>>> Diffs
>>> -
>>> 
>>> test/integration/component/test_vpc_vms_deployment.py baefa55
>>> 
>>> Diff: https://reviews.apache.org/r/15833/diff/
>>> 
>>> 
>>> Testing
>>> ---
>>> 
>>> Tested locally on XenServer advanced setup.
>>> 
>>> Log:
>>> test_01_deploy_vms_in_network 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test deploy VMs in VPC networks ... skipped 'Skip'
>>> test_02_deploy_vms_delete_network 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test deploy VMs in VPC networks and delete one of the network ... skipped 
>>> 'Skip'
>>> test_03_deploy_vms_delete_add_network 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test deploy VMs, delete one of the network and add another one ... skipped 
>>> 'Skip'
>>> test_04_deploy_vms_delete_add_network_noLb 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test deploy VMs, delete one network without LB and add another one ... 
>>> skipped 'Skip'
>>> test_05_create_network_max_limit 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
>>> 'Skip'
>>> test_06_delete_network_vm_running 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test delete network having running instances in VPC ... skipped 'Skip'
>>> test_07_delete_network_with_rules 
>>> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
>>> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
>>> 'Skip'
>>> 
>>> --
>>> Ran 7 tests in 5.907s
>>> 
>>> OK (skipped=7)
>>> 
>>> 
>>> Thanks,
>>> 
>>> Ashutosh Kelkar
>>> 
>>> 
>> 
> 



Re: Review Request 15508: Make sure that if the file does not exist an Exception is thrown and that once it exists it is also closed after the properties are loaded.

2013-11-25 Thread Hugo Trippaers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15508/#review29423
---

Ship it!


commit 7a6751aa770eaf8065864f497bff401012f553ae
Author: wilderrodrigues 
Date:   Thu Nov 14 08:37:02 2013 +0100

Make sure that if the file does not exist an Exception is thrown and that 
once it exists it is also closed after the properties are loaded.

Signed-off-by: Hugo Trippaers 


- Hugo Trippaers
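The committed fix guarantees two things: a missing properties file raises an exception instead of failing silently, and the stream is closed once the properties are loaded. The actual change is in Java (ManagementNetworkGuru.configure); the same pattern, rendered as a rough Python sketch:

```python
import os

def load_properties(path):
    """Load simple key=value properties, raising when the file is missing
    and always closing the handle once the properties are loaded (the
    pattern of the review 15508 fix, translated from Java to Python)."""
    if not os.path.isfile(path):
        raise FileNotFoundError("properties file not found: %s" % path)
    props = {}
    with open(path) as fh:  # closed even if parsing raises
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props
```

In the Java original the same effect comes from checking the file up front and closing the `FileInputStream` in a finally block (or try-with-resources), which is what clears the Coverity resource-leak finding.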


On Nov. 25, 2013, 3:32 p.m., Wilder Rodrigues wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15508/
> ---
> 
> (Updated Nov. 25, 2013, 3:32 p.m.)
> 
> 
> Review request for cloudstack and Hugo Trippaers.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Make sure that if the file does not exist an Exception is thrown and that 
> once it exists it is also closed after the properties are loaded.
> 
> fix for Coverity bug cv_1125364 Resource leak
> The system resource will not be reclaimed and reused, reducing the future 
> availability of the resource.
> In 
> org.?apache.?cloudstack.?network.?contrail.?management.?ManagementNetworkGuru.?configure(java.?lang.?String,
>  java.?util.?Map): Leak of a system resource (CWE-404)
> 
> 
> Diffs
> -
> 
>   
> plugins/network-elements/juniper-contrail/src/org/apache/cloudstack/network/contrail/management/ManagementNetworkGuru.java
>  e86e98a 
> 
> Diff: https://reviews.apache.org/r/15508/diff/
> 
> 
> Testing
> ---
> 
> A test branch was created, the patch was applied against the branch, and a 
> full build was executed. Everything is working fine. The changed class is 
> tested by MockLocalNfsSecondaryStorageResource.
> 
> 
> Thanks,
> 
> Wilder Rodrigues
> 
>



Re: Review Request 15647: Fixing coverity issues related to resource leak on FileInputStream being created anonymously.

2013-11-25 Thread Hugo Trippaers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15647/#review29424
---



awsapi/src/com/cloud/bridge/service/EC2RestServlet.java


you don't seem to be using ec2PropFile for anything here?


- Hugo Trippaers
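The leak pattern this review targets is a stream created anonymously (in Java, something like `props.load(new FileInputStream(file))`), leaving no reference to close. A rough Python rendering of the before/after shape of such a fix:

```python
def read_config_leaky(path):
    # Anti-pattern Coverity flags: the file object is created anonymously,
    # so nothing closes it deterministically.
    return open(path).read()

def read_config_safe(path):
    # Fix: name the handle and close it via a context manager.
    with open(path) as fh:
        return fh.read()
```

In Java the equivalent fix assigns the stream to a local variable and closes it in a finally block or try-with-resources; Hugo's comment above notes that the named variable (`ec2PropFile`) must then actually be used, not left dangling.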


On Nov. 25, 2013, 3:15 p.m., Wilder Rodrigues wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15647/
> ---
> 
> (Updated Nov. 25, 2013, 3:15 p.m.)
> 
> 
> Review request for cloudstack and Hugo Trippaers.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fixing coverity issues related to resource leak on FileInputStream being 
> created anonymously.
> 
> This patch fixed the following Coverity issues:
> 
> cv_1116497
> cv_1116681
> cv_1116694
> cv_1116567
> cv_1116495
> 
> 
> Diffs
> -
> 
>   awsapi/src/com/cloud/bridge/service/EC2RestServlet.java 5c56e9d 
>   awsapi/src/com/cloud/bridge/service/controller/s3/ServiceProvider.java 
> deb886f 
>   awsapi/src/com/cloud/bridge/service/core/ec2/EC2Engine.java 59abca0 
>   framework/cluster/src/com/cloud/cluster/ClusterManagerImpl.java 3e7138f 
>   services/console-proxy/server/src/com/cloud/consoleproxy/ConsoleProxy.java 
> 0d28e09 
> 
> Diff: https://reviews.apache.org/r/15647/diff/
> 
> 
> Testing
> ---
> 
> A full build was executed on top of the branch created for these changes. 
> After the commit was applied, a brand new branch was created from master and 
> patched with this patch. Everything worked fine.
> 
> No new feature was added.
> 
> 
> Thanks,
> 
> Wilder Rodrigues
> 
>



Re: Review Request 15833: CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

2013-11-25 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15833/#review29425
---


Commit b2dc2db269c405b6c65b8d00b74e4c84c7abeff2 in branch refs/heads/4.2 from 
Ashutosh K
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=b2dc2db ]

CLOUDSTACK-5257: Fixed Network ACL issue related to Egress traffic

Signed-off-by: Girish Shilamkar 

Conflicts:
test/integration/component/test_vpc_vms_deployment.py

Conflicts:
test/integration/component/test_vpc_vms_deployment.py


- ASF Subversion and Git Services


On Nov. 25, 2013, 2:37 p.m., Ashutosh Kelkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15833/
> ---
> 
> (Updated Nov. 25, 2013, 2:37 p.m.)
> 
> 
> Review request for cloudstack, Girish Shilamkar and SrikanteswaraRao Talluri.
> 
> 
> Bugs: CLOUDSTACK-5257
> https://issues.apache.org/jira/browse/CLOUDSTACK-5257
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test case was failing due to issue in ACL rule. The ACL rule was created 
> for TCP protocol and the connection to outside world was checked using Ping 
> protocol. In this case ICMP protocol should be used in ACL rule as Ping uses 
> ICMP.
> Also corrected the port numbers and cleaned up code.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_vpc_vms_deployment.py baefa55 
> 
> Diff: https://reviews.apache.org/r/15833/diff/
> 
> 
> Testing
> ---
> 
> Tested locally on XenServer advanced setup.
> 
> Log:
> test_01_deploy_vms_in_network (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks ... skipped 'Skip'
> test_02_deploy_vms_delete_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs in VPC networks and delete one of the network ... skipped 
> 'Skip'
> test_03_deploy_vms_delete_add_network 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one of the network and add another one ... skipped 
> 'Skip'
> test_04_deploy_vms_delete_add_network_noLb 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test deploy VMs, delete one network without LB and add another one ... 
> skipped 'Skip'
> test_05_create_network_max_limit 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test create networks in VPC upto maximum limit for hypervisor ... skipped 
> 'Skip'
> test_06_delete_network_vm_running 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network having running instances in VPC ... skipped 'Skip'
> test_07_delete_network_with_rules 
> (test_vpc_vms_deployment_fixed.TestVMDeployVPC)
> Test delete network that has PF/staticNat/LB rules/Network Acl ... skipped 
> 'Skip'
> 
> --
> Ran 7 tests in 5.907s
> 
> OK (skipped=7)
> 
> 
> Thanks,
> 
> Ashutosh Kelkar
> 
>