Re: Review Request: Add docbook of GSOC native SDN controller proposal

2013-06-07 Thread Sebastien Goasguen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11678/#review21564
---

Ship it!


Applied to master with 1bdb6266c6b263684db229accd4f0a4a330f203a.
You can mark the review as submitted.
Thanks for the patch!

- Sebastien Goasguen


On June 6, 2013, 4:49 p.m., tuna wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11678/
> ---
> 
> (Updated June 6, 2013, 4:49 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This is the docbook for my GSOC project: "Add Xen/XCP support for native GRE 
> SDN controller"
> 
> 
> Diffs
> -
> 
>   docs/en-US/gsoc-tuna.xml 68032a8 
> 
> Diff: https://reviews.apache.org/r/11678/diff/
> 
> 
> Testing
> ---
> 
> The added xml file was built with publican successfully.
> 
> 
> Thanks,
> 
> tuna
> 
>



[ACS4.1.0]

2013-06-07 Thread Paul Angus
Guys,

The installation guide for 4.1.0 says that the convenience RPMs are located in:

baseurl=http://cloudstack.apt-get.eu/rhel/4.0/

they're actually in

http://cloudstack.apt-get.eu/rhel/4.1/

Is the documentation wrong, or have they been uploaded to the wrong place?

Regards,

Paul Angus
Senior Consultant / Cloud Architect

S: +44 20 3603 0540 | M: +447711418784
paul.an...@shapeblue.com | 
www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS

ShapeBlue are proud to be sponsoring CloudStack Collaboration Conference NA

Apache CloudStack Bootcamp training courses
20/21 May, London
22/23 June, Santa Clara, CA

This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is operated under 
license from Shape Blue Ltd. ShapeBlue is a registered trademark.


Re: [ACS4.1.0]

2013-06-07 Thread Sebastien Goasguen
Can you open a bug for it? (You could even submit a patch for it :) )


On Jun 7, 2013, at 4:06 AM, Paul Angus  wrote:

> Guys,
>  
> The installation guide for 4.1.0 says that the convenience RPMs are located 
> in:
>  
> baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
>  
> they’re actually in
>  
> http://cloudstack.apt-get.eu/rhel/4.1/
>  
> is the documentation wrong or have they been uploaded to the wrong place?
>  
> Regards,
>  
> Paul Angus
> Senior Consultant / Cloud Architect
> 
>  



Review Request: Add GSoC proposal to docs

2013-06-07 Thread Dharmesh Kakadia

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11707/
---

Review request for cloudstack and Sebastien Goasguen.


Description
---

Added mesos integration proposal to docs


Diffs
-

  docs/en-US/CloudStack_GSoC_Guide.xml b7ba61f 
  docs/en-US/gsoc-dharmesh.xml PRE-CREATION 
  docs/en-US/images/mesos-integration-arch.jpg PRE-CREATION 

Diff: https://reviews.apache.org/r/11707/diff/


Testing
---

publican is able to build docs successfully


Thanks,

Dharmesh Kakadia



Re: Review Request: Add GSoC proposal to docs

2013-06-07 Thread Sebastien Goasguen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11707/#review21565
---

Ship it!


Committed to master with e17a0c23b8d4d7cbbcbf4a95cf545b0d8ae59aaa.
Please mark the review as submitted.

- Sebastien Goasguen


On June 7, 2013, 9:18 a.m., Dharmesh Kakadia wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11707/
> ---
> 
> (Updated June 7, 2013, 9:18 a.m.)
> 
> 
> Review request for cloudstack and Sebastien Goasguen.
> 
> 
> Description
> ---
> 
> Added mesos integration proposal to docs
> 
> 
> Diffs
> -
> 
>   docs/en-US/CloudStack_GSoC_Guide.xml b7ba61f 
>   docs/en-US/gsoc-dharmesh.xml PRE-CREATION 
>   docs/en-US/images/mesos-integration-arch.jpg PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11707/diff/
> 
> 
> Testing
> ---
> 
> publican is able to build docs successfully
> 
> 
> Thanks,
> 
> Dharmesh Kakadia
> 
>



Review Request: CLOUDSTACK-2167: The Vlan ranges displayed are not in ascending order.

2013-06-07 Thread Saksham Srivastava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11708/
---

Review request for cloudstack and Devdeep Singh.


Description
---

When multiple VLAN ranges are added to a physical network, the 
listPhysicalNetworks API displays them in the order they were added. 
Displaying them in ascending order instead would make the output easier 
for the end user to read.


This addresses bug CLOUDSTACK-2167.


Diffs
-

  server/src/com/cloud/api/ApiResponseHelper.java bcc1605 

Diff: https://reviews.apache.org/r/11708/diff/


Testing
---

The response of the list API is now enhanced:

count: 1
id: 49e5cdfc-2c14-415a-9dd3-38ac2fdeef54
name: Physical Network 1
broadcastdomainrange: ZONE
zoneid: 0bd17058-2931-479b-98b5-29c8c91c24d3
state: Enabled
vlan: 480-504;910-914;916-918;920-923;925-934;936-940
isolationmethods: VLAN


Build passes successfully.
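The patch itself isn't included in the digest, but the described fix can be sketched. The helper below is hypothetical (not the actual ApiResponseHelper code) and assumes the ranges are kept as a semicolon-separated string:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: sort semicolon-separated VLAN ranges such as
// "910-914;480-504" by their starting VLAN id before building the response.
public class VlanRangeSorter {
    public static String sortRanges(String ranges) {
        List<String> parts = new ArrayList<>();
        for (String r : ranges.split(";")) {
            if (!r.isEmpty()) {
                parts.add(r);
            }
        }
        // Compare numerically on the VLAN id before the '-'.
        parts.sort(Comparator.comparingInt((String r) -> Integer.parseInt(r.split("-")[0])));
        return String.join(";", parts);
    }

    public static void main(String[] args) {
        System.out.println(sortRanges("910-914;480-504;925-934;916-918"));
        // prints 480-504;910-914;916-918;925-934
    }
}
```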


Thanks,

Saksham Srivastava



Re: Review Request: CLOUDSTACK-869-nTier-Apps-2.0_Support-NetScalar-as-external-LB-provider

2013-06-07 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10858/#review21567
---


Commit bcc5baa1636037baa8c5ffbd3bbf70df8af4d024 in branch refs/heads/master 
from Pranav Saxena
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bcc5baa ]

CLOUDSTACK-869:Netscaler support as an external LB provider:front end


- ASF Subversion and Git Services


On May 8, 2013, 1:39 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10858/
> ---
> 
> (Updated May 8, 2013, 1:39 p.m.)
> 
> 
> Review request for cloudstack, Kishan Kavala, Murali Reddy, Alena 
> Prokharchyk, Vijay Venkatachalam, and Ram Ganesh.
> 
> 
> Description
> ---
> 
> This feature will introduce Netscaler as external LB provider in VPC.
> As of now, only one tier is supported for external LB.
> A new VPC offering will be created "Default VPC Offering with NS" with all 
> the services provided by VPCVR and LB service with NetScaler.
> Existing NetscalerElement is used and implements VpcProvider.
> In VpcManager, Netscaler is added as one of the supported providers.
> Netscaler will be dedicated to the vpc.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/network/vpc/VpcOffering.java 3961d0a 
>   
> plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
>  7bd9c2e 
>   server/pom.xml 808dd3e 
>   server/src/com/cloud/network/NetworkServiceImpl.java 5e8be92 
>   server/src/com/cloud/network/guru/ExternalGuestNetworkGuru.java b1606db 
>   server/src/com/cloud/network/vpc/VpcManagerImpl.java a7f06e9 
>   server/test/com/cloud/vpc/VpcTest.java PRE-CREATION 
>   
> server/test/org/apache/cloudstack/networkoffering/CreateNetworkOfferingTest.java
>  cbb6c00 
> 
> Diff: https://reviews.apache.org/r/10858/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing:
> ==
> 1. Creation of Vpc with the default offering with NS is created successfully. 
> ( Enable Netscaler provider in network service providers)
> 2. Deletion of Vpc with the default offering with NS is deleted successfully.
> 3. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> dedicated mode is created successfully.
> 4. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> shared mode should throw exception.
> 5. Creation of tier (webtier) with the created Vpcnetscaler offering is 
> created successfully.
> 6. Verified Only one tier with netscaler as LB provider can be created. 
> 7. Verified deploying Instance in the tier is successful.
> 8. Verified a new nic got created with gateway ip from the tier cidr.
> 9. Verified deployed instance should get the ip from the specified tier cidr 
> range.
> 10. Acquire public ip in the vpc.
> 11. Verified creation of an LB rule selects only a free dedicated Netscaler 
> device; the necessary configuration and the LB rule are created on the NS.
> 12. Deletion of LB rule is successful.
> 13. Modification of LB rule is successful
> 14. Creation of LB Health Check of TCP type is successful.
> 15. Deletion of LB Health Check of TCP type is successful.
> 16. Creation of LB Health Check of HTTP type is successful.
> 17. Deletion of LB Health Check of HTTP type is successful.
> 18. IpAssoc command is executed successfully on Netscaler.
> 19. Deletion of a tier deletes the tier and clears its config on the Netscaler.
> 20. Deletion of a tier marks the Netscaler as free again.
> 
> 
> Unit Test:
> ===
> Created VpcManager tests and added a few tests to CreateNetworkOfferingTest
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: CLOUDSTACK-2286: Volume created from snapshot state is in allocated state instead of Ready state which is letting Primary storage not to increment the resources

2013-06-07 Thread Devdeep Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11298/#review21568
---



engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java


Why not get an updated volumeObject before calling processEvent (from the 
volFactory like you are doing in the callback)? That way the change is not 
required in the callback.


- Devdeep Singh


On May 21, 2013, 8:13 a.m., Sanjay Tripathi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11298/
> ---
> 
> (Updated May 21, 2013, 8:13 a.m.)
> 
> 
> Review request for cloudstack, Devdeep Singh and Nitin Mehta.
> 
> 
> Description
> ---
> 
> CLOUDSTACK-2286: Volume created from snapshot state is in allocated state 
> instead of Ready state which is letting Primary storage not to increment the 
> resources
> 
> 
> This addresses bug CLOUDSTACK-2286.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
>  7fdf6bb 
> 
> Diff: https://reviews.apache.org/r/11298/diff/
> 
> 
> Testing
> ---
> 
> Verified on my local CloudStack setup.
> 
> 
> Thanks,
> 
> Sanjay Tripathi
> 
>



Re: Review Request: CLOUDSTACK-1647: IP Reservation should not happen if the guest-vm cidr and network cidr is not same but their start ip and end ip are same.

2013-06-07 Thread Sateesh Chodapuneedi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10005/#review21569
---

Ship it!


Ship It!

- Sateesh Chodapuneedi
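The check being approved here compares the actual address ranges covered by two CIDRs. A standalone sketch of the idea for IPv4 (hypothetical class, not the actual NetUtils code):

```java
// Hypothetical sketch of an "is same IP range" check like the one the patch
// adds: two CIDRs denote the same range when their first and last addresses
// match, even if they are written differently (e.g. 10.0.144.0/20 and
// 10.0.151.0/20 both cover 10.0.144.0 - 10.0.159.255).
public class CidrRange {
    static long ip(String dotted) {
        String[] o = dotted.split("\\.");
        return (Long.parseLong(o[0]) << 24) | (Long.parseLong(o[1]) << 16)
             | (Long.parseLong(o[2]) << 8) | Long.parseLong(o[3]);
    }

    static long[] range(String cidr) {
        String[] p = cidr.split("/");
        int bits = Integer.parseInt(p[1]);
        long mask = bits == 0 ? 0 : (0xFFFFFFFFL << (32 - bits)) & 0xFFFFFFFFL;
        long start = ip(p[0]) & mask;               // network address
        return new long[] { start, start | (~mask & 0xFFFFFFFFL) }; // broadcast
    }

    public static boolean isSameIpRange(String cidrA, String cidrB) {
        long[] a = range(cidrA), b = range(cidrB);
        return a[0] == b[0] && a[1] == b[1];
    }
}
```

With the values from the testing notes quoted below, isSameIpRange("10.0.144.0/20", "10.0.151.0/20") is true, which is exactly the case where the reservation must be refused.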


On June 5, 2013, 11:39 a.m., Saksham Srivastava wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10005/
> ---
> 
> (Updated June 5, 2013, 11:39 a.m.)
> 
> 
> Review request for cloudstack, Murali Reddy and Sateesh Chodapuneedi.
> 
> 
> Description
> ---
> 
> In cases where the start IP and end IP of the guest VM CIDR and the network 
> CIDR are the same, even when the CIDRs appear to be different, the 
> reservation procedure should not go through, and the user should get a 
> message saying so.
> Added an extra check for this with a proper alert message.
> 
> 
> This addresses bug CLOUDSTACK-1647.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/network/NetworkServiceImpl.java 2bf9f40 
>   utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 
>   utils/test/com/cloud/utils/net/NetUtilsTest.java 16d3402 
> 
> Diff: https://reviews.apache.org/r/10005/diff/
> 
> 
> Testing
> ---
> 
> CIDR : 10.0.144.0/20, Network CIDR : null, guestVmCidr : 10.0.151.0/20 => 
> Reservation is not applied.
> CIDR : 10.0.144.0/21, Network CIDR : 10.0.144.0/20, guestVmCidr : 
> 10.0.151.0/20 => Existing Reservation is not affected.
> Added UnitTest testIsSameIpRange()
> 
> 
> Thanks,
> 
> Saksham Srivastava
> 
>



Re: Review Request: CLOUDSTACK-2167: The Vlan ranges displayed are not in ascending order.

2013-06-07 Thread Devdeep Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11708/#review21570
---



server/src/com/cloud/api/ApiResponseHelper.java


How about using StringUtils.join? It would remove the need to append the 
strings together and then trim off the trailing separator.
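The suggestion, sketched below with the JDK's String.join (Commons Lang's StringUtils.join(ranges, ";") behaves the same for this case); the variable names are illustrative:

```java
import java.util.Arrays;
import java.util.List;

public class JoinSketch {
    public static void main(String[] args) {
        List<String> ranges = Arrays.asList("480-504", "910-914", "916-918");

        // The pattern the review comments on: append each range plus a
        // separator, then trim the trailing ';' afterwards.
        StringBuilder sb = new StringBuilder();
        for (String r : ranges) {
            sb.append(r).append(';');
        }
        String manual = sb.substring(0, sb.length() - 1);

        // join() produces the same string with no trailing-separator cleanup.
        String joined = String.join(";", ranges);

        System.out.println(manual.equals(joined)); // prints true
    }
}
```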


- Devdeep Singh


On June 7, 2013, 10:36 a.m., Saksham Srivastava wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11708/
> ---
> 
> (Updated June 7, 2013, 10:36 a.m.)
> 
> 
> Review request for cloudstack and Devdeep Singh.
> 
> 
> Description
> ---
> 
> When multiple VLAN ranges are added to a physical network, the 
> listPhysicalNetworks API displays them in the order they were added. 
> Displaying them in ascending order instead would make the output easier 
> for the end user to read.
> 
> 
> This addresses bug CLOUDSTACK-2167.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/api/ApiResponseHelper.java bcc1605 
> 
> Diff: https://reviews.apache.org/r/11708/diff/
> 
> 
> Testing
> ---
> 
> The response of the list API is now enhanced:
> 
> count: 1
> id: 49e5cdfc-2c14-415a-9dd3-38ac2fdeef54
> name: Physical Network 1
> broadcastdomainrange: ZONE
> zoneid: 0bd17058-2931-479b-98b5-29c8c91c24d3
> state: Enabled
> vlan: 480-504;910-914;916-918;920-923;925-934;936-940
> isolationmethods: VLAN
> 
> 
> Build passes successfully.
> 
> 
> Thanks,
> 
> Saksham Srivastava
> 
>



Re: Review Request: CLOUDSTACK-869-nTier-Apps-2.0_Support-NetScalar-as-external-LB-provider

2013-06-07 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10858/#review21571
---


Commit 5233e3216b11e69c7c7e051f0a6c3d0c6bf98803 in branch refs/heads/master 
from Pranav Saxena
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5233e32 ]

CLOUDSTACK-869:Netscaler support as an external LB provider


- ASF Subversion and Git Services


On May 8, 2013, 1:39 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10858/
> ---
> 
> (Updated May 8, 2013, 1:39 p.m.)
> 
> 
> Review request for cloudstack, Kishan Kavala, Murali Reddy, Alena 
> Prokharchyk, Vijay Venkatachalam, and Ram Ganesh.
> 
> 
> Description
> ---
> 
> This feature will introduce Netscaler as external LB provider in VPC.
> As of now, only one tier is supported for external LB.
> A new VPC offering will be created "Default VPC Offering with NS" with all 
> the services provided by VPCVR and LB service with NetScaler.
> Existing NetscalerElement is used and implements VpcProvider.
> In VpcManager, Netscaler is added as one of the supported providers.
> Netscaler will be dedicated to the vpc.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/network/vpc/VpcOffering.java 3961d0a 
>   
> plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
>  7bd9c2e 
>   server/pom.xml 808dd3e 
>   server/src/com/cloud/network/NetworkServiceImpl.java 5e8be92 
>   server/src/com/cloud/network/guru/ExternalGuestNetworkGuru.java b1606db 
>   server/src/com/cloud/network/vpc/VpcManagerImpl.java a7f06e9 
>   server/test/com/cloud/vpc/VpcTest.java PRE-CREATION 
>   
> server/test/org/apache/cloudstack/networkoffering/CreateNetworkOfferingTest.java
>  cbb6c00 
> 
> Diff: https://reviews.apache.org/r/10858/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing:
> ==
> 1. Creation of Vpc with the default offering with NS is created successfully. 
> ( Enable Netscaler provider in network service providers)
> 2. Deletion of Vpc with the default offering with NS is deleted successfully.
> 3. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> dedicated mode is created successfully.
> 4. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> shared mode should throw exception.
> 5. Creation of tier (webtier) with the created Vpcnetscaler offering is 
> created successfully.
> 6. Verified Only one tier with netscaler as LB provider can be created. 
> 7. Verified deploying Instance in the tier is successful.
> 8. Verified a new nic got created with gateway ip from the tier cidr.
> 9. Verified deployed instance should get the ip from the specified tier cidr 
> range.
> 10. Acquire public ip in the vpc.
> 11. Verified creation of an LB rule selects only a free dedicated Netscaler 
> device; the necessary configuration and the LB rule are created on the NS.
> 12. Deletion of LB rule is successful.
> 13. Modification of LB rule is successful
> 14. Creation of LB Health Check of TCP type is successful.
> 15. Deletion of LB Health Check of TCP type is successful.
> 16. Creation of LB Health Check of HTTP type is successful.
> 17. Deletion of LB Health Check of HTTP type is successful.
> 18. IpAssoc command is executed successfully on Netscaler.
> 19. Deletion of a tier deletes the tier and clears its config on the Netscaler.
> 20. Deletion of a tier marks the Netscaler as free again.
> 
> 
> Unit Test:
> ===
> Created VpcManager tests and added a few tests to CreateNetworkOfferingTest
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: CLOUDSTACK-1647: IP Reservation should not happen if the guest-vm cidr and network cidr is not same but their start ip and end ip are same.

2013-06-07 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10005/#review21572
---


Commit 5dc7387d3b6b1abd841abc92e3c76a3894213d82 in branch refs/heads/master 
from Saksham Srivastava
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=5dc7387 ]

CLOUDSTACK-1647: IP Reservation should not happen if the guest-vm cidr and 
network cidr is not same but their start ip and end ip are same.

Signed-off-by: Sateesh Chodapuneedi 


- ASF Subversion and Git Services


On June 5, 2013, 11:39 a.m., Saksham Srivastava wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10005/
> ---
> 
> (Updated June 5, 2013, 11:39 a.m.)
> 
> 
> Review request for cloudstack, Murali Reddy and Sateesh Chodapuneedi.
> 
> 
> Description
> ---
> 
> In cases where the start IP and end IP of the guest VM CIDR and the network 
> CIDR are the same, even when the CIDRs appear to be different, the 
> reservation procedure should not go through, and the user should get a 
> message saying so.
> Added an extra check for this with a proper alert message.
> 
> 
> This addresses bug CLOUDSTACK-1647.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/network/NetworkServiceImpl.java 2bf9f40 
>   utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 
>   utils/test/com/cloud/utils/net/NetUtilsTest.java 16d3402 
> 
> Diff: https://reviews.apache.org/r/10005/diff/
> 
> 
> Testing
> ---
> 
> CIDR : 10.0.144.0/20, Network CIDR : null, guestVmCidr : 10.0.151.0/20 => 
> Reservation is not applied.
> CIDR : 10.0.144.0/21, Network CIDR : 10.0.144.0/20, guestVmCidr : 
> 10.0.151.0/20 => Existing Reservation is not affected.
> Added UnitTest testIsSameIpRange()
> 
> 
> Thanks,
> 
> Saksham Srivastava
> 
>



Re: Review Request: CLOUDSTACK-2286: Volume created from snapshot state is in allocated state instead of Ready state which is letting Primary storage not to increment the resources

2013-06-07 Thread Sanjay Tripathi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11298/
---

(Updated June 7, 2013, 11:41 a.m.)


Review request for cloudstack, Devdeep Singh and Nitin Mehta.


Changes
---

Updated patch after incorporating the review comments.


Description
---

CLOUDSTACK-2286: Volume created from snapshot state is in allocated state 
instead of Ready state which is letting Primary storage not to increment the 
resources


This addresses bug CLOUDSTACK-2286.


Diffs (updated)
-

  
engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
 7fdf6bb 

Diff: https://reviews.apache.org/r/11298/diff/


Testing
---

Verified on my local CloudStack setup.


Thanks,

Sanjay Tripathi



Re: Review Request: CLOUDSTACK-2286: Volume created from snapshot state is in allocated state instead of Ready state which is letting Primary storage not to increment the resources

2013-06-07 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11298/#review21574
---


Commit da5c4619c394eaedec55c277bc5e71de379d6600 in branch refs/heads/master 
from Sanjay Tripathi
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=da5c461 ]

CLOUDSTACK-2286: Volume created from snapshot state is in allocated state 
instead of Ready state which is letting Primary storage not to increment the 
resources.


- ASF Subversion and Git Services


On June 7, 2013, 11:41 a.m., Sanjay Tripathi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11298/
> ---
> 
> (Updated June 7, 2013, 11:41 a.m.)
> 
> 
> Review request for cloudstack, Devdeep Singh and Nitin Mehta.
> 
> 
> Description
> ---
> 
> CLOUDSTACK-2286: Volume created from snapshot state is in allocated state 
> instead of Ready state which is letting Primary storage not to increment the 
> resources
> 
> 
> This addresses bug CLOUDSTACK-2286.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
>  7fdf6bb 
> 
> Diff: https://reviews.apache.org/r/11298/diff/
> 
> 
> Testing
> ---
> 
> Verified on my local CloudStack setup.
> 
> 
> Thanks,
> 
> Sanjay Tripathi
> 
>



Re: Review Request: CLOUDSTACK-2286: Volume created from snapshot state is in allocated state instead of Ready state which is letting Primary storage not to increment the resources

2013-06-07 Thread Devdeep Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11298/#review21575
---

Ship it!


Ship It!

- Devdeep Singh


On June 7, 2013, 11:41 a.m., Sanjay Tripathi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11298/
> ---
> 
> (Updated June 7, 2013, 11:41 a.m.)
> 
> 
> Review request for cloudstack, Devdeep Singh and Nitin Mehta.
> 
> 
> Description
> ---
> 
> CLOUDSTACK-2286: Volume created from snapshot state is in allocated state 
> instead of Ready state which is letting Primary storage not to increment the 
> resources
> 
> 
> This addresses bug CLOUDSTACK-2286.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
>  7fdf6bb 
> 
> Diff: https://reviews.apache.org/r/11298/diff/
> 
> 
> Testing
> ---
> 
> Verified on my local CloudStack setup.
> 
> 
> Thanks,
> 
> Sanjay Tripathi
> 
>



Re: KVM development, libvirt

2013-06-07 Thread Prasanna Santhanam
On Thu, Jun 06, 2013 at 10:48:14PM -0600, Marcus Sorensen wrote:
> Ok. Do we need to call a vote or something to change our rules to
> solidify that we should require at least two votes from each supported
> platform, whether they be automated tests or contributor tests?
> 

I'd encourage that. That will need a change to our release
testing/voting steps, which currently work from the source release only.

I'd personally prefer a Jenkins-automated package test.

-- 
Prasanna.,





Re: [jira] [Updated] (CLOUDSTACK-2893) The Agent attempts to re-create a already existing Libvirt Storage pool when creating a volume

2013-06-07 Thread Marcus Sorensen
I did see this once; normally the agent keeps a map of which pools are
installed. I believe it was triggered by putting the host into maintenance
and then reconnecting without stopping the agent. Restarting the agent
fixed the problem without further intervention; the agent discovered the
existing pools as expected.

Otherwise, the volume commands can and should try to recreate the pool if
it is not really there.
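As a rough illustration of the behavior Marcus describes (hypothetical code, far simpler than the real LibvirtStorageAdaptor): the agent trusts a cached map of installed pools, so the cache must be invalidated on events like maintenance/reconnect, and a cache miss should fall through to (re)creating the pool:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the agent caches known pools (uuid -> mount path).
// A volume command consults the cache and falls back to (re)creating the
// pool; a stale cache carried across a reconnect is where the reported
// "already mounted" divergence can appear.
public class PoolCache {
    private final Map<String, String> pools = new HashMap<>();

    public String getOrCreate(String uuid, String path) {
        String cached = pools.get(uuid);
        if (cached != null) {
            return cached; // agent believes the pool is already installed
        }
        // Not cached: (re)create. In the real agent this is where libvirt
        // would be asked to define/start the pool.
        pools.put(uuid, path);
        return path;
    }

    public void invalidate(String uuid) {
        // Called on events such as maintenance/reconnect so the state is
        // rebuilt instead of trusting a stale entry.
        pools.remove(uuid);
    }
}
```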
On Jun 7, 2013 6:43 AM, "Wido den Hollander (JIRA)"  wrote:

>
>  [
> https://issues.apache.org/jira/browse/CLOUDSTACK-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]
>
> Wido den Hollander updated CLOUDSTACK-2893:
> ---
>
> Assignee: Wido den Hollander
>
> > The Agent attempts to re-create a already existing Libvirt Storage pool
> when creating a volume
> >
> --
> >
> > Key: CLOUDSTACK-2893
> > URL:
> https://issues.apache.org/jira/browse/CLOUDSTACK-2893
> > Project: CloudStack
> >  Issue Type: Bug
> >  Security Level: Public(Anyone can view this level - this is the
> default.)
> >  Components: KVM
> >Affects Versions: 4.1.0
> > Environment: - Ubuntu 12.04.2
> > - Libvirt 1.0.2 (Cloud Archive PPA from Canonical)
> > - CloudStack 4.1
> >Reporter: Wido den Hollander
> >Assignee: Wido den Hollander
> > Fix For: 4.1.1
> >
> >
> > When trying to deploy a new Instance I saw the following Exception in my
> logs:
> > 2013-06-07 08:19:18,832 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-7:null) Processing command:
> com.cloud.agent.api.storage.CreateCommand
> > 2013-06-07 08:19:18,840 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-7:null) Failed to create volume:
> com.cloud.utils.exception.CloudRuntimeException:
> org.libvirt.LibvirtException: Requested operation is not valid: Target
> '/mnt/52801816-fe44-3a2b-a147-bb768eeea295' is already mounted
> > 2013-06-07 08:19:18,841 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-7:null) Seq 12-477959384:  { Ans: , MgmtId:
> 207376724852, via: 12, Ver: v1, Flags: 110,
> [{"storage.CreateAnswer":{"requestTemplateReload":false,"result":false,"details":"Exception:
> com.cloud.utils.exception.CloudRuntimeException\nMessage:
> org.libvirt.LibvirtException: Requested operation is not valid: Target
> '/mnt/52801816-fe44-3a2b-a147-bb768eeea295' is already mounted\nStack:
> com.cloud.utils.exception.CloudRuntimeException:
> org.libvirt.LibvirtException: Requested operation is not valid: Target
> '/mnt/52801816-fe44-3a2b-a147-bb768eeea295' is already mounted\n\tat
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool(LibvirtStorageAdaptor.java:427)\n\t
> > at
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.getStoragePool(KVMStoragePoolManager.java:71)\n\t
> > at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:1271)\n\t
> > at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1072)\n\tat
> com.cloud.agent.Agent.processRequest(Agent.java:525)\n\t
> > at
> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)\n\tat
> com.cloud.utils.nio.Task.run(Task.java:83)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
> > The Agent seems to try and create the storage pool again and Libvirt
> will try to mount it, but it is already mounted.
> > The Storage pool is actually already running on that Hypervisor, but
> there seems to be a miscommunication between libvirt and the Agent.
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA
> administrators
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>


Orphaned libvirt storage pools

2013-06-07 Thread Wido den Hollander

Hi,

So, I just created CLOUDSTACK-2893, but Wei Zhou mentioned that there 
are some related issues:

* CLOUDSTACK-2729
* CLOUDSTACK-2780

I restarted my Agent and the issue described in 2893 went away, but I'm 
wondering how that happened.


Anyway, after digging further I found that I have some "orphaned" storage 
pools; by that I mean they are mounted and in use, but neither defined 
nor active in libvirt:


root@n02:~# lsof |grep "\.iso"|awk '{print $9}'|cut -d '/' -f 3|sort -n|uniq
eb3cd8fd-a462-35b9-882a-f4b9f2f4a84c
f84e51ab-d203-3114-b581-247b81b7d2c1
fd968b03-bd11-3179-a2b3-73def7c66c68
7ceb73e5-5ab1-3862-ad6e-52cb986aff0d
7dc0149e-0281-3353-91eb-4589ef2b1ec1
8e005344-6a65-3802-ab36-31befc95abf3
88ddd8f5-e6c7-3f3d-bef2-eea8f33aa593
765e63d7-e9f9-3203-bf4f-e55f83fe9177
1287a27d-0383-3f5a-84aa-61211621d451
98622150-41b2-3ba3-9c9c-09e3b6a2da03
root@n02:~#

Looking at libvirt:
root@n02:~# virsh pool-list
Name State  Autostart
-
52801816-fe44-3a2b-a147-bb768eeea295 active no
7ceb73e5-5ab1-3862-ad6e-52cb986aff0d active no
88ddd8f5-e6c7-3f3d-bef2-eea8f33aa593 active no
a83d1100-4ffa-432a-8467-4dc266c4b0c8 active no
fd968b03-bd11-3179-a2b3-73def7c66c68 active no

root@n02:~#

What happens here is that the mountpoints are in use (ISO attached to 
Instance) but there is no storage pool in libvirt.
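The mismatch can be spotted mechanically by diffing the two listings above; a minimal sketch (hypothetical class, fed the UUID sets scraped from lsof and virsh pool-list):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: report pools that are mounted and in use on the
// hypervisor but unknown to libvirt ("orphaned").
public class OrphanedPools {
    public static Set<String> findOrphans(Set<String> mounted, Set<String> inLibvirt) {
        Set<String> orphans = new LinkedHashSet<>(mounted);
        orphans.removeAll(inLibvirt); // keep only UUIDs libvirt doesn't know
        return orphans;
    }
}
```

Fed the listings above, this would flag, for example, eb3cd8fd-a462-35b9-882a-f4b9f2f4a84c: present in the lsof output but absent from virsh pool-list.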


This means that when you try to deploy a second VM with the same ISO, 
libvirt will error out: the Agent tries to create and start a 
new storage pool, which fails because the mountpoint is already in use.


The remedy would be to take the hypervisor into maintenance, reboot it 
completely, and migrate Instances back to it.


In libvirt there is no way to start an NFS storage pool without libvirt 
mounting it.


Any suggestions on how we can work around this code wise?

For my issue I'm writing a patch which adds some more debug lines to 
show what the Agent is doing, but it's kind of weird that we got into 
this "disconnected" state.


Wido


Re: Orphaned libvirt storage pools

2013-06-07 Thread Marcus Sorensen
Does this only happen with ISOs?
On Jun 7, 2013 8:15 AM, "Wido den Hollander"  wrote:

> Hi,
>
> So, I just created CLOUDSTACK-2893, but Wei Zhou mentioned that there are
> some related issues:
> * CLOUDSTACK-2729
> * CLOUDSTACK-2780
>
> I restarted my Agent and the issue described in 2893 went away, but I'm
> wondering how that happened.
>
> Anyway, after digging further I found that I have some "orphaned" storage
> pools; by that I mean they are mounted and in use, but neither defined nor
> active in libvirt:
>
> root@n02:~# lsof |grep "\.iso"|awk '{print $9}'|cut -d '/' -f 3|sort
> -n|uniq
> eb3cd8fd-a462-35b9-882a-f4b9f2f4a84c
> f84e51ab-d203-3114-b581-247b81b7d2c1
> fd968b03-bd11-3179-a2b3-73def7c66c68
> 7ceb73e5-5ab1-3862-ad6e-52cb986aff0d
> 7dc0149e-0281-3353-91eb-4589ef2b1ec1
> 8e005344-6a65-3802-ab36-31befc95abf3
> 88ddd8f5-e6c7-3f3d-bef2-eea8f33aa593
> 765e63d7-e9f9-3203-bf4f-e55f83fe9177
> 1287a27d-0383-3f5a-84aa-61211621d451
> 98622150-41b2-3ba3-9c9c-09e3b6a2da03
> root@n02:~#
>
> Looking at libvirt:
> root@n02:~# virsh pool-list
> Name State  Autostart
> --**---
> 52801816-fe44-3a2b-a147-**bb768eeea295 active no
> 7ceb73e5-5ab1-3862-ad6e-**52cb986aff0d active no
> 88ddd8f5-e6c7-3f3d-bef2-**eea8f33aa593 active no
> a83d1100-4ffa-432a-8467-**4dc266c4b0c8 active no
> fd968b03-bd11-3179-a2b3-**73def7c66c68 active no
>
> root@n02:~#
>
> What happens here is that the mountpoints are in use (ISO attached to
> Instance) but there is no storage pool in libvirt.
>
> This means that when you try to deploy a second VM with the same ISO
> libvirt will error out since the Agent will try to create and start a new
> storage pool which will fail since the mountpoint is already in use.
>
> The remedy would be to take the hypervisor into maintainence, reboot int
> completely and migrate Instances to it again.
>
> In libvirt there is no way to start a NFS storage pool without libvirt
> mounting it.
>
> Any suggestions on how we can work around this code wise?
>
> For my issue I'm writing a patch which adds some more debug lines to show
> what the Agent is doing, but it's kind of weird that we got into this
> "disconnected" state.
>
> Wido
>


Re: Orphaned libvirt storage pools

2013-06-07 Thread Marcus Sorensen
I had seen something similar related to the KVM HA monitor (it would
re-mount the pools outside of libvirt after they were removed), but
anything using getStoragePoolByURI to register a pool shouldn't be
added to the KVMHA monitor anymore. That HA monitor script is the only
way I know of that cloudstack mounts NFS outside of libvirt, so it
seems that the issue is in removing the mountpoint while it is in use.
 Libvirt will remove it from the definition, even if it can't be
unmounted, so perhaps there's an issue in verifying that the
mountpoint isn't in use before trying to delete the storage pool.

I am assuming when you say 'in use' that it means that the ISO is
connected to a VM. However, this could happen for any number of
reasons... say an admin is looking in the directory right when
cloudstack wants to delete the storage pool from libvirt.
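Marcus's point about verifying the mountpoint before deleting the pool could be sketched like this. This is illustrative only — `MountCheck`/`isMounted` are not CloudStack classes; a real agent would read `/proc/mounts` (and probably also consult fuser/lsof) rather than take the file content as a string, but the parsing is the part worth showing:

```java
import java.util.Arrays;

// Hypothetical helper: before asking libvirt to delete a storage pool,
// check whether the pool's mountpoint still appears in /proc/mounts.
public class MountCheck {

    // mountsContent is the text of /proc/mounts; target is the pool's
    // mountpoint. Each line is "device mountpoint fstype options ...",
    // so the second whitespace-separated field is the mountpoint.
    static boolean isMounted(String mountsContent, String target) {
        return Arrays.stream(mountsContent.split("\n"))
                .map(line -> line.trim().split("\\s+"))
                .anyMatch(f -> f.length >= 2 && f[1].equals(target));
    }

    public static void main(String[] args) {
        String mounts =
                "nfsserver:/export/sec /mnt/7ceb73e5-5ab1-3862-ad6e-52cb986aff0d nfs rw 0 0\n" +
                "/dev/sda1 / ext4 rw 0 0\n";
        System.out.println(isMounted(mounts, "/mnt/7ceb73e5-5ab1-3862-ad6e-52cb986aff0d")); // true
        System.out.println(isMounted(mounts, "/mnt/gone")); // false
    }
}
```

If the mountpoint is still present (i.e. still in use by a VM's attached ISO), the agent could skip the pool deletion instead of leaving libvirt and the filesystem out of sync.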

On Fri, Jun 7, 2013 at 8:30 AM, Marcus Sorensen  wrote:
> Does this only happen with isos?
>
> On Jun 7, 2013 8:15 AM, "Wido den Hollander"  wrote:
>> [...]


Re: Orphaned libvirt storage pools

2013-06-07 Thread Wido den Hollander

Hi,

On 06/07/2013 04:30 PM, Marcus Sorensen wrote:

Does this only happen with isos?


Yes, it does.

My work-around for now was to locate all the Instances which had these 
ISOs attached and detach them from all (~100 instances...)


Then I manually unmounted all the mountpoints under /mnt so that they 
can be re-used again.


This cluster was upgraded to 4.1 from 4.0 with libvirt 1.0.2 (coming 
from 0.9.8).


Somehow libvirt forgot about these storage pools.

Wido


On Jun 7, 2013 8:15 AM, "Wido den Hollander"  wrote:


[...]





Re: Orphaned libvirt storage pools

2013-06-07 Thread Marcus Sorensen
There is already quite a bit of logging around this stuff, for example:

s_logger.error("deleteStoragePool removed pool from libvirt, but libvirt had trouble"
        + "unmounting the pool. Trying umount location " + targetPath
        + "again in a few seconds");

And if it gets an error from libvirt during create stating that the
mountpoint is in use, agent attempts to unmount before remounting. Of
course this would fail if it is in use.

// if error is that pool is mounted, try to handle it
if (e.toString().contains("already mounted")) {
    s_logger.error("Attempting to unmount old mount libvirt is unaware of at " + targetPath);
    String result = Script.runSimpleBashScript("umount " + targetPath);
    if (result == null) {
        s_logger.error("Succeeded in unmounting " + targetPath);
        try {
            sp = conn.storagePoolCreateXML(spd.toString(), 0);
            s_logger.error("Succeeded in redefining storage");
            return sp;
        } catch (LibvirtException l) {
            s_logger.error("Target was already mounted, unmounted it but failed to redefine storage:" + l);
        }
    } else {
        s_logger.error("Failed in unmounting and redefining storage");
    }
}


Do you think it was related to the upgrade process itself (e.g. maybe
the storage pools didn't carry across the libvirt upgrade)? Can you
reproduce it outside of the upgrade?

On Fri, Jun 7, 2013 at 8:43 AM, Wido den Hollander  wrote:
> [...]


Re: Object based Secondary storage.

2013-06-07 Thread John Burwell
Thomas,

The AWS API explicitly states the ETag is not guaranteed to be an integrity 
hash [1].  According to RFC 2616 [2], clients should not infer any meaning to 
the content of an ETag.  Essentially, it is an opaque version identifier which 
should only be compared for equality to another ETag value to detect a resource 
change.  As such, I agree with your assessment that s3cmd is making an invalid 
assumption regarding the value of the ETag.
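John's "opaque version identifier" point can be shown in a couple of lines. This is a sketch, not code from any real SDK — the class and method names are made up — but it is what RFC-compliant client-side ETag handling looks like: store the ETag verbatim, compare for equality only, never parse it:

```java
// Sketch of RFC 2616-compliant ETag handling: the ETag is an opaque
// token, so equality is the only meaningful operation on it.
public class ETagCheck {

    // Returns true if the resource changed since we cached it.
    static boolean resourceChanged(String cachedETag, String currentETag) {
        // No assumptions about format, length, or character set.
        return !cachedETag.equals(currentETag);
    }

    public static void main(String[] args) {
        // An AWS-style multipart ETag and a Riak CS-style ETag are both
        // valid under this model.
        System.out.println(resourceChanged("70e1860be687d43c039873adef4280f2-3",
                                           "70e1860be687d43c039873adef4280f2-3")); // false
        System.out.println(resourceChanged("WxEUkiQzTWm_2C8A92fLQg==",
                                           "70e1860be687d43c039873adef4280f2-3")); // true
    }
}
```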

Min, could you please send the stack trace you are receiving from TransferManager?  
Also, could you send a reference to the code in the Git repo?  With that 
information, we can start to run down the source of the problem.

Thanks,
-John

[1]: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
[2]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

On Jun 7, 2013, at 1:08 AM, Thomas O'Dowd  wrote:

> Min,
> 
> This looks like an s3cmd problem. I just downloaded the latest s3cmd to
> check the source code.
> 
> In S3/FileLists.py:
> 
>    compare_md5 = 'md5' in cfg.sync_checks
>    # Multipart-uploaded files don't have a valid md5 sum - it ends with "...-nn"
>    if compare_md5:
>        if (src_remote == True and src_list[file]['md5'].find("-") >= 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
> 
> Basically, s3cmd is trying to verify that the checksum of the data that
> it downloads is the same as the etag unless the etag ends with "-YYY".
> This is an AWS convention (as I mentioned in an earlier mail) so it
> works but it seems that RiakCS has a different ETAG format which doesn't
> match -YYY so s3cmd assumes the other type of ETAG which is the same as
> the MD5 checksum. For RiakCS however, this is not the case. This is why
> you get the checksum error.
> 
> Chances are that Riak is doing the right thing here and the data file
> will be the same as what you uploaded. You could change the s3cmd code
> to be more lenient for Riak. The Basho guys might either like to change
> their format or talk to the different tool vendors about changing the
> tools to work with Riak. For Cloudian, we choose to try to keep it
> similar to AWS so we could avoid stuff like this.
> 
> Tom.
> 
> On Fri, 2013-06-07 at 04:02 +, Min Chen wrote:
>> John,
>>  We are not able to successfully download a file that was uploaded to Riak CS 
>> with TransferManager using s3cmd. Same error as we encountered using the Amazon 
>> S3 Java client, due to the incompatible ETag format (the "-" and "_" difference).
>> 
>> Thanks
>> -min
>> 
>> 
>> 
>> On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:
>> 
>>> Edison,
>>> 
>>> Riak CS and S3 seed their hashes differently -- causing the form to appear 
>>> slightly different.  In particular, Riak CS uses URI-safe base64 encoding 
>>> which explains why the ETag values contain "-"s instead of "_"s.  From a 
>>> client perspective, the ETags are treated as opaque strings that are passed 
>>> through to the server for processing and compared strictly for equality.  
>>> Therefore, the form of the hash will not cause the client to choke, and the 
>>> Riak CS behavior you are seeing is S3 API compatible (see 
>>> http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for more 
>>> details).  
>>> 
>>> Were you able to successfully download the file from Riak CS using s3cmd?
>>> 
>>> Thanks,
>>> -John
>>> 
>>> 
>>> On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
>>> 
The ETags created by RIAK CS and Amazon S3 seem a little bit 
different in the case of multipart upload.
 
 Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
 Test environment:
 S3cmd: version: version 1.5.0-alpha1
 Riak cs:
 Name: riak
 Arch: x86_64
 Version : 1.3.1
 Release : 1.el6
 Size: 40 M
 Repo: installed
 From repo   : basho-products
 
 The command I used to put:
 s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
 
 The etag created for the file, when using Riak CS is 
 WxEUkiQzTWm_2C8A92fLQg==
 
 EBUG: Sending request method_string='POST', 
 uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
  headers={'content-length': '309', 'Authorization': 'AWS 
 OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
 Jun 2013 22:54:28 +'}, body=(309 bytes)
 DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
 'server': 'Riak CS'}, 'reason': 'OK', 'data': '>>> encoding="UTF-8"? xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
 
 While the etag created by Amazon S3 is: 
 "70e1860be687d43c039873adef4280f2-3"
 
 DEBUG: Sending request method_string=

migrateVirtualMachine

2013-06-07 Thread La Motta, David
Anybody know a bit more in depth what that API call really does?  From the API 
docs it "Attempts Migration of a VM to a different host or Root volume of the 
vm to a different storage pool".

Does this mean if the VM is on XenServer and I want to move it to vSphere, it 
does VHD to OVA (VMDK?) conversion under the covers?

Just trying to understand this a bit better.

Thanks.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com





Re: migrateVirtualMachine

2013-06-07 Thread kel...@backbonetechnology.com
Migration is only achievable in a cluster. It is hypervisor dependent. 
Migration is used only for CAP management and HA. In order to cross the 
hypervisor barrier you need to automate the 'create template > export template 
> convert template > import template > provision' process.

Sent from my HTC

- Reply message -
From: "La Motta, David" 
To: "" 
Subject: migrateVirtualMachine
Date: Fri, Jun 7, 2013 8:20 AM

Anybody know a bit more in depth what that API call really does?  From the API 
docs it "Attempts Migration of a VM to a different host or Root volume of the 
vm to a different storage pool".

Does this mean if the VM is on XenServer and I want to move it to vShpere, it 
does VHD to OVA (VMDK?) conversion under the covers?

Just trying to understand this a bit better.

Thanks.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com

Re: migrateVirtualMachine

2013-06-07 Thread La Motta, David
Got it.  Cool, thanks.


David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com



On Jun 7, 2013, at 11:28 AM, kel...@backbonetechnology.com wrote:

> [...]



Re: KVM development, libvirt

2013-06-07 Thread John Burwell
Prasanna,

What if we made passing the Jenkins tests a pre-requisite to open voting?  In 
such a scenario, the test report from the Jenkins build would be attached to 
the voting email.

Thanks,
-John

On Jun 7, 2013, at 9:09 AM, Prasanna Santhanam  wrote:

> On Thu, Jun 06, 2013 at 10:48:14PM -0600, Marcus Sorensen wrote:
>> Ok. Do we need to call a vote or something to change our rules to
>> solidify that we should require at least two votes from each supported
>> platform, whether they be automated tests or contributor tests?
>> 
> 
> I'd encourage that. That'll need a change to our release
> testing/voting steps which works from the source release only.
> 
> I'd personally prefer a jenkins automated package test. 
> 
> -- 
> Prasanna.,
> 
> 
> Powered by BigRock.com
> 



Re: quick systemvm question

2013-06-07 Thread Marcus Sorensen
Thanks, I'm looking at it from a different perspective, not a CS
upgrade, but say we have to roll a new systemvm template for an
existing CS version. Say we rolled 4.2, with a new template, and then
two months later we realize that the template is missing dnsmasq or
something, and we have to have everyone install a new template. Do we
actually have to overwrite the existing template in-place on secondary
storage, then on each primary storage while the system vms are down?
Or can we register a new template, and have the new template installed
on primary storage as system vms are rebooted?

 I saw that the upgrade scripts had that 'select max' statement, but
that just fetches the id for installing the template to secondary
storage. When I deploy a router, how does cloudstack select the
template for that?

On Fri, Jun 7, 2013 at 12:54 AM, Wei ZHOU  wrote:
> Marcus,
>
> (1) cloud-install-sys-tmplt updates the template with max(id):
>
> select max(id) from cloud.vm_template where type = "SYSTEM" and
> hypervisor_type = "KVM" and removed is null
>
> (2) upgrade process update the template with specified name. in
> Upgrade410to420.java
> pstmt = conn.prepareStatement("select id from `cloud`.`vm_template` where
> name like 'systemvm-xenserver-4.2' and removed is null order by id desc
> limit 1");
>
> We are discussing in another thread "git commit: updated refs/heads/master
> to 9fe7846". Please join us.
>
> -Wei
>
>
> 2013/6/7 Marcus Sorensen 
>
>> How does cloudstack know which template is the latest system vm? Does
>> it match on name or something?  From what I have gathered in the
>> upgrade docs, you simply register a new template, like any other, and
>> run a convenience script that restarts your system vms. But I don't
>> gather from this how cloudstack knows it's a system template (and
>> further THE system template).
>>


Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Marcus Sorensen
I'm not sure if this fits in the discussion, I was asking Wei how
cloudstack chooses the system vm template during normal operation. I
get how the upgrades work, but I don't get how cloudstack chooses the
system template to use when actually deploying:

I'm looking at it from a different perspective, not a CS
upgrade, but say we have to roll a new systemvm template for an
existing CS version. Say we rolled 4.2, with a new template, and then
two months later we realize that the template is missing dnsmasq or
something, and we have to have everyone install a new template. Do we
actually have to overwrite the existing template in-place on secondary
storage, then on each primary storage while the system vms are down?
Or can we register a new template, and have the new template installed
on primary storage as system vms are rebooted?

 I saw that the upgrade scripts had that 'select max' statement, but
that just fetches the id for installing the template to secondary
storage. When I deploy a router, how does cloudstack select the
template for that?

On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU  wrote:
> In my point of view, we ask users to register the new template in the upgrade
> instructions in the release notes. If they do not register it, it is their
> fault. If they do but the upgrade fails, it is our fault.
>
> I admit that it is a good approach to change each upgrade process and
> remove old templates when we use a new template. It is not a lot of work.
> -Wei
>
> 2013/6/6, Kishan Kavala :
>> In the mentioned example, when new template for 4.3 is introduced, we should
>> remove template upgrade code in Upgrade41to42. This will make upgrade
>> succeed even when systemvm-kvm-4.2 is not in database.
>> On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3 will succeed
>> even though the required systemvm-kvm-4.3 is not in database.
>>
>> So, every time a new system vm template is added, template upgrade from
>> previous version should be removed.
>>
>> 
>> From: Wei ZHOU [ustcweiz...@gmail.com]
>> Sent: Wednesday, June 05, 2013 3:56 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: git commit: updated refs/heads/master to 9fe7846
>>
>> Kishan,
>>
>> I know.
>>
>> If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
>> systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
>> systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
>> The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in the
>> Upgrade41to42.
>>
>> -Wei
>>
>>
>> 2013/6/5 Kishan Kavala 
>>
>>> Wei,
>>>  If we use other templates, system Vms may not work. Only 4.2 templates
>>> should be used when upgrading to 4.2.
>>>
>>> > -Original Message-
>>> > From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
>>> > Sent: Wednesday, 5 June 2013 3:26 PM
>>> > To: dev@cloudstack.apache.org
>>> > Subject: Re: git commit: updated refs/heads/master to 9fe7846
>>> >
>>> > Kishan,
>>> >
>>> > What do you think about change some codes to "name like 'systemvm-
>>> > xenserver-%' " ?
>>> > If we use other templates, the upgrade maybe fail.
>>> >
>>> > -Wei
>>> >
>>> >
>>> > 2013/6/5 
>>> >
>>> > > Updated Branches:
>>> > >   refs/heads/master 91b15711b -> 9fe7846d7
>>> > >
>>> > >
>>> > > CLOUDSTACK-2728: 41-42 DB upgrade: add step to upgrade system
>>> > > templates
>>> > >
>>> > >
>>> > > Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
>>> > > Commit:
>>> > > http://git-wip-us.apache.org/repos/asf/cloudstack/commit/9fe7846d
>>> > > Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/9fe7846d
>>> > > Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/9fe7846d
>>> > >
>>> > > Branch: refs/heads/master
>>> > > Commit: 9fe7846d72e401720e1dcbce52d021e2646429f1
>>> > > Parents: 91b1571
>>> > > Author: Harikrishna Patnala 
>>> > > Authored: Mon Jun 3 12:33:58 2013 +0530
>>> > > Committer: Kishan Kavala 
>>> > > Committed: Wed Jun 5 15:14:04 2013 +0530
>>> > >
>>> > > --
>>> > >  .../src/com/cloud/upgrade/dao/Upgrade410to420.java |  209
>>> > >   -
>>> > >  1 files changed, 204 insertions(+), 5 deletions(-)
>>> > > --
>>> > >
>>> > >
>>> > >
>>> > > http://git-wip-us.apache.org/repos/asf/cloudstack/blob/9fe7846d/engine
>>> > > /schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>>> > > --
>>> > > diff --git
>>> > > a/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>>> > > b/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>>> > > index 1584973..955ea56 100644
>>> > > --- a/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>>> > > b/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>>> > > @@ -112,16 +112,215 @@ public class Upgrade410to420 implements
>>> > DbUpgrade {
>>> > >  }
>>> > >
>>> > >  private

Re: Object based Secondary storage.

2013-06-07 Thread Min Chen
Hi John,
Although the AWS API states that the ETag is not guaranteed to be an integrity hash, the 
SDK's internal code assumes a special ETag format for objects uploaded through 
multipart upload, e.g. via TransferManager. This is reflected in AmazonS3Client's API 
implementation of "getObject" below:

/* (non-Javadoc)
 * @see com.amazonaws.services.s3.AmazonS3#getObject(com.amazonaws.services.s3.model.GetObjectRequest, java.io.File)
 */
public ObjectMetadata getObject(GetObjectRequest getObjectRequest, File destinationFile)
        throws AmazonClientException, AmazonServiceException {
    assertParameterNotNull(destinationFile,
            "The destination file parameter must be specified when downloading an object directly to a file");

    S3Object s3Object = getObject(getObjectRequest);

    // getObject can return null if constraints were specified but not met
    if (s3Object == null) return null;

    ServiceUtils.downloadObjectToFile(s3Object, destinationFile, (getObjectRequest.getRange() == null));

    return s3Object.getObjectMetadata();
}

And in ServiceUtils.downloadObjectToFile, it determines whether an ETag was
generated by a multipart upload through the following routine:


/**
 * Returns true if the specified ETag was from a multipart upload.
 *
 * @param eTag
 *            The ETag to test.
 *
 * @return True if the specified ETag was from a multipart upload, otherwise
 *         false if it belongs to an object that was uploaded in a single
 *         part.
 */
public static boolean isMultipartUploadETag(String eTag) {
    return eTag.contains("-");
}

As you can see, it assumes that a multipart-upload ETag will contain "-", not 
underscore "_".  For RIAK CS, the ETag generated for my S3 object uploaded 
through TransferManager does not follow this convention, so that check 
fails, and the download then fails the integrity check since that ETag is not an actual MD5 sum, 
specifically in the following code snippet from 
ServiceUtils.downloadObjectToFile:


try {
    // Multipart Uploads don't have an MD5 calculated on the service side
    if (ServiceUtils.isMultipartUploadETag(s3Object.getObjectMetadata().getETag()) == false) {
        clientSideHash = Md5Utils.computeMD5Hash(new FileInputStream(destinationFile));
        serverSideHash = BinaryUtils.fromHex(s3Object.getObjectMetadata().getETag());
    }
} catch (Exception e) {
    log.warn("Unable to calculate MD5 hash to validate download: " + e.getMessage(), e);
}

if (performIntegrityCheck && clientSideHash != null && serverSideHash != null
        && !Arrays.equals(clientSideHash, serverSideHash)) {
    throw new AmazonClientException("Unable to verify integrity of data download.  " +
            "Client calculated content hash didn't match hash calculated by Amazon S3.  " +
            "The data stored in '" + destinationFile.getAbsolutePath() + "' may be corrupt.");
}

If you want to check how we upload the file to RIAK CS using multi-part upload, 
you can check the code at Git repo: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=core/src/com/cloud/storage/template/S3TemplateDownloader.java;h=ca0df5d515e900c5313ccb14e962aa72c0785b84;hb=refs/heads/object_store.
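The mismatch is easy to reproduce in isolation. The sketch below replays the `contains("-")` heuristic quoted above against the two ETags seen earlier in this thread (the class name is mine, but the method body mirrors the SDK snippet): the AWS multipart ETag is detected, the Riak CS one is not, so the SDK treats the Riak ETag as an MD5 and the integrity check fails:

```java
// Replays the SDK's multipart-ETag detection on the two ETags from this
// thread. The Riak CS ETag uses "_" (URL-safe base64), never "-", so the
// heuristic misclassifies it as a single-part MD5 ETag.
public class ETagHeuristic {

    // Same body as the SDK's ServiceUtils.isMultipartUploadETag
    static boolean isMultipartUploadETag(String eTag) {
        return eTag.contains("-");
    }

    public static void main(String[] args) {
        // Amazon S3 multipart ETag from Edison's test
        System.out.println(isMultipartUploadETag("70e1860be687d43c039873adef4280f2-3")); // true
        // Riak CS multipart ETag from Edison's test
        System.out.println(isMultipartUploadETag("WxEUkiQzTWm_2C8A92fLQg==")); // false
    }
}
```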

Thanks
-min


On 6/7/13 7:53 AM, "John Burwell"  wrote:

> [...]

Re: StoragePoolForMigrationResponse and StoragePoolResponse

2013-06-07 Thread Min Chen
Maybe "attribute" is not accurate in this sense, but it is just some
metadata related to a storage pool, just like the tags, statistics (for
example, AccountResponse), or display-to-user flags (for example, displayVm in
UserVMResponse) we have created for other CloudStack entities. We don't
need to create a different Response class just to include or exclude this
information; that over-complicates things.
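The null-means-omitted pattern Min describes can be sketched in a few lines. This is illustrative only — `PoolResponse` and its fields are not CloudStack's actual classes, and the hand-rolled `toJson` just mimics the default behavior of a Gson-style serializer, which skips null fields:

```java
// Minimal sketch: one response class with an optional field; when the
// field is left null, the serialized JSON simply omits it, so callers
// that don't care about migration suitability never see the key.
public class PoolResponse {
    String name;
    Boolean suitableForMigration; // null => not serialized

    String toJson() {
        StringBuilder sb = new StringBuilder("{\"name\":\"" + name + "\"");
        if (suitableForMigration != null) {
            sb.append(",\"suitableformigration\":").append(suitableForMigration);
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        PoolResponse plain = new PoolResponse();
        plain.name = "pool1";
        System.out.println(plain.toJson()); // {"name":"pool1"}

        PoolResponse forMigration = new PoolResponse();
        forMigration.name = "pool1";
        forMigration.suitableForMigration = true;
        System.out.println(forMigration.toJson()); // {"name":"pool1","suitableformigration":true}
    }
}
```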

Thanks
-min

On 6/6/13 11:33 PM, "Devdeep Singh"  wrote:

>suitableformigration isn't an attribute of the storage pool. It just
>tells whether a particular pool is suitable for migrating a particular
>volume. For example, if volume A has to be migrated to another pool, then
>the pools available are listed and if the tags on the pool and volume do
>not match then it is flagged as unsuitable. For another volume it may be
>flagged suitable. So it really isn't an attribute of a storage pool and I
>believe it doesn't belong in the StoragePoolResponse object.
>
>Regards,
>Devdeep
>
>> -Original Message-
>> From: Min Chen [mailto:min.c...@citrix.com]
>> Sent: Friday, June 07, 2013 2:20 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
>> 
>> I agree with Prasanna on this. We don't need to introduce several
>>Storage
>> pool related responses just for some specific apis. In some way,
>> suitableFormigration is some kind of attribute that can be set on a
>>storage
>> pool or not. If you don't want to show it to listStoragePool call, you
>>can set that
>> as null so that json serialization will ignore it.
>> 
>> Just my two cents.
>> -min
>> 
>> On 6/6/13 5:07 AM, "Devdeep Singh"  wrote:
>> 
>> >Hi,
>> >
>> >StoragePoolResponse should really only be used for listing storage
>>pools.
>> >Putting a suitableformigration flag etc. makes it weird for other apis.
>> >If tomorrow the response object is updated to include more statistics
>> >for admin user to make a better decision, then such information gets
>> >pushed in there which makes it unnatural for apis that just need the
>> >list of storage pools. I am planning to update
>> >StoragePoolForMigrationResponse to include the StoragePoolResponse
>> >object and any other flag; suitableformigration in this case. I'll
>>file a bug for
>> the same.
>> >
>> >Regards,
>> >Devdeep
>> >
>> >> -Original Message-
>> >> From: Prasanna Santhanam [mailto:t...@apache.org]
>> >> Sent: Tuesday, June 04, 2013 2:28 PM
>> >> To: dev@cloudstack.apache.org
>> >> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
>> >>
>> >> On Fri, May 31, 2013 at 06:28:39PM +0530, Prasanna Santhanam wrote:
>> >> > On Fri, May 31, 2013 at 12:24:20PM +, Pranav Saxena wrote:
>> >> > > Hey Prasanna ,
>> >> > >
>> >> > > I see that the response  object name is
>> >> > > findstoragepoolsformigrationresponse , which is correct as shown
>> >> > > below .  Are you referring to this API or something else  ?
>> >> > >
>> >> > > http://MSIP:8096/client/api?command=findStoragePoolsForMigration
>> >> > >
>> >> > > <findstoragepoolsformigrationresponse cloud-stack-version="4.2.0-SNAPSHOT">
>> >> > >
>> >> > > </findstoragepoolsformigrationresponse>
>> >> > >
>> >> >
>> >> > No that's what is shown to the user. I meant the class within
>> >> > org.apache.cloudstack.api.response
>> >> >
>> >> Fixed with 0401774a09483354f5b8532a30943351755da93f
>> >>
>> >> --
>> >> Prasanna.,
>> >>
>> >> 
>> >> Powered by BigRock.com
>> >
>



RE: Object based Secondary storage.

2013-06-07 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Friday, June 07, 2013 7:54 AM
> To: dev@cloudstack.apache.org
> Cc: Kelly McLaughlin
> Subject: Re: Object based Secondary storage.
> 
> Thomas,
> 
> The AWS API explicitly states the ETag is not guaranteed to be an integrity
> hash [1].  According to RFC 2616 [2], clients should not infer any meaning to
> the content of an ETag.  Essentially, it is an opaque version identifier which
> should only be compared for equality to another ETag value to detect a
> resource change.  As such, I agree with your assessment that s3cmd is
> making an invalid assumption regarding the value of the ETag.


Not only s3cmd, but the Amazon S3 Java SDK also makes the "invalid" assumption.
What's your opinion on how to solve the SDK incompatibility issue? 

> 
> Min, could you please send the stack trace you are receiving from
> TransferManager?  Also, could you send a reference to the code in the Git repo?
> With that information, we can start to run down the source of the problem.
> 
> Thanks,
> -John
> 
> [1]: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
> [2]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
> 
> On Jun 7, 2013, at 1:08 AM, Thomas O'Dowd 
> wrote:
> 
> > Min,
> >
> > This looks like an s3cmd problem. I just downloaded the latest s3cmd
> > to check the source code.
> >
> > In S3/FileLists.py:
> >
> >compare_md5 = 'md5' in cfg.sync_checks
> ># Multipart-uploaded files don't have a valid md5 sum - it ends
> > with "...-nn"
> >if compare_md5:
> >if (src_remote == True and src_list[file]['md5'].find("-") >= 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
> >
> > Basically, s3cmd is trying to verify that the checksum of the data
> > that it downloads is the same as the etag unless the etag ends with "-YYY".
> > This is an AWS convention (as I mentioned in an earlier mail) so it
> > works but it seems that RiakCS has a different ETAG format which
> > doesn't match -YYY so s3cmd assumes the other type of ETAG which is
> > the same as the MD5 checksum. For RiakCS however, this is not the
> > case. This is why you get the checksum error.
> >
> > Chances are that Riak is doing the right thing here and the data file
> > will be the same as what you uploaded. You could change the s3cmd code
> > to be more lenient for Riak. The Basho guys might either like to
> > change their format or talk to the different tool vendors about
> > changing the tools to work with Riak. For Cloudian, we choose to try
> > to keep it similar to AWS so we could avoid stuff like this.
> >
> > Tom.
> >
> > On Fri, 2013-06-07 at 04:02 +, Min Chen wrote:
> >> John,
> >>  We are not able to successfully download file that was uploaded to Riak
> CS with TransferManager using S3cmd. Same error as we encountered using
> amazon s3 java client due to the incompatible ETAG format ( - and _
> difference).
> >>
> >> Thanks
> >> -min
> >>
> >>
> >>
> >> On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:
> >>
> >>> Edison,
> >>>
> >>> Riak CS and S3 seed their hashes differently -- causing the form to
> appear slightly different.  In particular, Riak CS uses URI-safe base64 
> encoding
> which explains why the ETag values contain "-"s instead of "_"s.  From a 
> client
> perspective, the ETags are treated as opaque strings that are passed through
> to the server for processing and compared strictly for equality.  Therefore,
> the form of the hash will not cause the client to choke, and the Riak CS
> behavior you are seeing is S3 API compatible (see
> http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for
> more details).
> >>>
> >>> Were you able to successfully download the file from Riak CS using
> s3cmd?
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>>
> >>> On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
> >>>
>  The Etag created by both RIAK CS and Amazon S3 seems a little bit
> different, in case of multi part upload.
> 
>  Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
>  Test environment:
>  S3cmd: version: version 1.5.0-alpha1 Riak cs:
>  Name: riak
>  Arch: x86_64
>  Version : 1.3.1
>  Release : 1.el6
>  Size: 40 M
>  Repo: installed
>  From repo   : basho-products
> 
>  The command I used to put:
>  s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v
>  -d
> 
>  The etag created for the file, when using Riak CS is
>  WxEUkiQzTWm_2C8A92fLQg==
> 
>  DEBUG: Sending request method_string='POST',
>  uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-
> 1/test?upl
>  oadId=kfDkh7Q_QCWN7r0ZTqNq4Q==', headers={'content-length':
> '309',
>  'Authorization': 'AWS
>  OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-
> date':
>  'Thu, 06 Jun 2013 22:54:28 +'}, body=(309 bytes)
>  DEBUG: Response:
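
The s3cmd heuristic quoted earlier in this thread can be distilled into a standalone sketch (helper names are hypothetical, mirroring the FileLists.py logic). Note that Riak CS's base64-style ETag contains no "-", so s3cmd does not recognise it as a multipart ETag and runs the doomed MD5 comparison anyway:

```python
def is_multipart_etag(etag):
    # AWS multipart ETags end with "-<part count>", e.g.
    # "d41d8cd98f00b204e9800998ecf8427e-12"; single-part ETags are bare MD5 hex.
    return etag.find("-") >= 0

def will_compare_md5(remote_etag):
    # Mirrors the s3cmd logic: skip the MD5 check only when the ETag is
    # recognisably a multipart one.
    return not is_multipart_etag(remote_etag)

print(will_compare_md5("d41d8cd98f00b204e9800998ecf8427e-12"))  # False - skipped
print(will_compare_md5("d41d8cd98f00b204e9800998ecf8427e"))     # True  - compared
print(will_compare_md5("WxEUkiQzTWm_2C8A92fLQg=="))             # True  - mismatch error
```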

Re: Object based Secondary storage.

2013-06-07 Thread John Burwell
Edison,

It appears that the S3 clients have a quirk in their behavior for multi-part 
uploads.  I have created a defect for Riak CS 
(https://github.com/basho/riak_cs/issues/585).  Once a patch has been 
merged into master, I will provide instructions for building from source (it is 
very easy), and we can move forward.  Until the patch is available, I recommend 
configuring TransferManager with a high multi-part upload threshold (4.5 GB 
should do the trick) and using files smaller than that threshold until the 
Riak CS patch becomes available.

Thanks for running down this issue.  As I said, it is unexpected behavior, but 
in discussing it, it seems like the quickest remedy is to have Riak CS emulate 
the quirk.
-John
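
The interim workaround amounts to raising the client's multipart threshold so that affected files go up as a single PUT, whose ETag is a plain MD5. A sketch of the decision (generic, not any real SDK's API; the 4.5 GB figure is the one suggested above):

```python
MB = 1024 * 1024
GB = 1024 * MB

def upload_strategy(file_size, multipart_threshold=int(4.5 * GB)):
    # Below the threshold: one PUT, ETag == MD5 of the body, so existing
    # client checksum logic keeps working. At or above it: multipart,
    # which produces the incompatible ETag until the Riak CS patch lands.
    return "single-put" if file_size < multipart_threshold else "multipart"

print(upload_strategy(500 * MB))  # single-put
print(upload_strategy(5 * GB))    # multipart
```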

On Jun 7, 2013, at 1:23 PM, Edison Su  wrote:

> 
> 
>> -Original Message-
>> From: John Burwell [mailto:jburw...@basho.com]
>> Sent: Friday, June 07, 2013 7:54 AM
>> To: dev@cloudstack.apache.org
>> Cc: Kelly McLaughlin
>> Subject: Re: Object based Secondary storage.
>> 
>> Thomas,
>> 
>> The AWS API explicitly states the ETag is not guaranteed to be an integrity
>> hash [1].  According to RFC 2616 [2], clients should not infer any meaning to
>> the content of an ETag.  Essentially, it is an opaque version identifier 
>> which
>> should only be compared for equality to another ETag value to detect a
>> resource change.  As such, I agree with your assessment that s3cmd is
>> making an invalid assumption regarding the value of the ETag.
> 
> 
> Not only s3cmd, but Amazon S3 java SDK also makes the "invalid" assumption.
> What's your opinion to solve the SDK incompatibility issue? 
> 
>> 
>> Min, could you please send the stack trace you are receiving from
>> TransferManager?  Also, could you send a reference to the code in the Git repo?
>> With that information, we can start to run down the source of the problem.
>> 
>> Thanks,
>> -John
>> 
>> [1]: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
>> [2]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
>> 
>> On Jun 7, 2013, at 1:08 AM, Thomas O'Dowd 
>> wrote:
>> 
>>> Min,
>>> 
>>> This looks like an s3cmd problem. I just downloaded the latest s3cmd
>>> to check the source code.
>>> 
>>> In S3/FileLists.py:
>>> 
>>>   compare_md5 = 'md5' in cfg.sync_checks
>>>   # Multipart-uploaded files don't have a valid md5 sum - it ends
>>> with "...-nn"
>>>   if compare_md5:
>>>   if (src_remote == True and src_list[file]['md5'].find("-") >= 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
>>> 
>>> Basically, s3cmd is trying to verify that the checksum of the data
>>> that it downloads is the same as the etag unless the etag ends with "-YYY".
>>> This is an AWS convention (as I mentioned in an earlier mail) so it
>>> works but it seems that RiakCS has a different ETAG format which
>>> doesn't match -YYY so s3cmd assumes the other type of ETAG which is
>>> the same as the MD5 checksum. For RiakCS however, this is not the
>>> case. This is why you get the checksum error.
>>> 
>>> Chances are that Riak is doing the right thing here and the data file
>>> will be the same as what you uploaded. You could change the s3cmd code
>>> to be more lenient for Riak. The Basho guys might either like to
>>> change their format or talk to the different tool vendors about
>>> changing the tools to work with Riak. For Cloudian, we choose to try
>>> to keep it similar to AWS so we could avoid stuff like this.
>>> 
>>> Tom.
>>> 
>>> On Fri, 2013-06-07 at 04:02 +, Min Chen wrote:
 John,
 We are not able to successfully download file that was uploaded to Riak
>> CS with TransferManager using S3cmd. Same error as we encountered using
>> amazon s3 java client due to the incompatible ETAG format ( - and _
>> difference).
 
 Thanks
 -min
 
 
 
 On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:
 
> Edison,
> 
> Riak CS and S3 seed their hashes differently -- causing the form to
>> appear slightly different.  In particular, Riak CS uses URI-safe base64 
>> encoding
>> which explains why the ETag values contain "-"s instead of "_"s.  From a 
>> client
>> perspective, the ETags are treated as opaque strings that are passed through
>> to the server for processing and compared strictly for equality.  Therefore,
>> the form of the hash will not cause the client to choke, and the Riak CS
>> behavior you are seeing is S3 API compatible (see
>> http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for
>> more details).
> 
> Were you able to successfully download the file from Riak CS using
>> s3cmd?
> 
> Thanks,
> -John
> 
> 
> On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
> 
>> The Etag created by both RIAK CS and Amazon S3 seems a little bit
>> different, in case of multi part upload.
>> 
>> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
>> Test environment:
>> S3cm

Review Request: Fix for test case failure test_network.py:test_delete_account - CLOUDSTACK-2898

2013-06-07 Thread Rayees Namathponnan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11713/
---

Review request for cloudstack, Prasanna Santhanam and Girish Shilamkar.


Description
---

https://issues.apache.org/jira/browse/CLOUDSTACK-2898

In this test case we need to catch "cloudstackAPIException" before catching 
more generic exceptions.
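
In outline, the fix keeps the specific exception handler ahead of the generic one (the class and function names here are stand-ins, not the actual Marvin test code):

```python
class CloudstackAPIException(Exception):
    """Stand-in for the CloudStack client's API exception."""

def run_step(step):
    try:
        step()
        return "ok"
    except CloudstackAPIException:
        # The specific handler must come before the generic one, or Python
        # would never reach it and the failure would be misclassified.
        return "api-error"
    except Exception:
        return "generic-error"

def deleting_busy_account():
    raise CloudstackAPIException("account has running VMs")

print(run_step(deleting_busy_account))  # api-error
```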


Diffs
-


Diff: https://reviews.apache.org/r/11713/diff/


Testing
---

Tested 


Thanks,

Rayees Namathponnan



Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Chiradeep Vittal
 _configServer.getConfigValue(Config.RouterTemplate***.key(),   
VMTemplateVO template =
_templateDao.findRoutingTemplate(hType, templateName);



On 6/7/13 8:55 AM, "Marcus Sorensen"  wrote:

>I'm not sure if this fits in the discussion, I was asking Wei how
>cloudstack chooses the system vm template during normal operation. I
>get how the upgrades work, but I don't get how cloudstack chooses the
>system template to use when actually deploying:
>
>I'm looking at it from a different perspective, not a CS
>upgrade, but say we have to roll a new systemvm template for an
>existing CS version. Say we rolled 4.2, with a new template, and then
>two months later we realize that the template is missing dnsmasq or
>something, and we have to have everyone install a new template. Do we
>actually have to overwrite the existing template in-place on secondary
>storage, then on each primary storage while the system vms are down?
>Or can we register a new template, and the new template gets installed
>on primary storage as system vms are rebooted.
>
> I saw that the upgrade scripts had that 'select max' statement, but
>that just fetches the id for installing the template to secondary
>storage. When I deploy a router, how does cloudstack select the
>template for that?
>
>On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU  wrote:
>> In my point of view, we ask users to register the new template in the
>> upgrade instructions in the release notes. If they do not register, it is
>> their fault. If they do but the upgrade fails, it is our fault.
>>
>> I admit that it is a good way to change each upgrade process and to
>> remove old templates when we use a new template. It is not much work.
>>
>> -Wei
>>
>> 2013/6/6, Kishan Kavala :
>>> In the mentioned example, when new template for 4.3 is introduced, we
>>>should
>>> remove template upgrade code in Upgrade41to42. This will make upgrade
>>> succeed even when systemvm-kvm-4.2 is not in database.
>>> On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3 will
>>>succeed
>>> even though the required systemvm-kvm-4.3 is not in database.
>>>
>>> So, every time a new system vm template is added, template upgrade from
>>> previous version should be removed.
>>>
>>> 
>>> From: Wei ZHOU [ustcweiz...@gmail.com]
>>> Sent: Wednesday, June 05, 2013 3:56 PM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: git commit: updated refs/heads/master to 9fe7846
>>>
>>> Kishan,
>>>
>>> I know.
>>>
>>> If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
>>> systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
>>> systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
>>> The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in
>>>the
>>> Upgrade41to42.
>>>
>>> -Wei
>>>
>>>
>>> 2013/6/5 Kishan Kavala 
>>>
 Wei,
  If we use other templates, system Vms may not work. Only 4.2
templates
 should be used when upgrading to 4.2.

 > -Original Message-
 > From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
 > Sent: Wednesday, 5 June 2013 3:26 PM
 > To: dev@cloudstack.apache.org
 > Subject: Re: git commit: updated refs/heads/master to 9fe7846
 >
 > Kishan,
 >
 > What do you think about change some codes to "name like 'systemvm-
 > xenserver-%' " ?
 > If we use other templates, the upgrade maybe fail.
 >
 > -Wei
 >
 >
 > 2013/6/5 
 >
 > > Updated Branches:
 > >   refs/heads/master 91b15711b -> 9fe7846d7
 > >
 > >
 > > CLOUDSTACK-2728: 41-42 DB upgrade: add step to upgrade system
 > > templates
 > >
 > >
 > > Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
 > > Commit:
 > > http://git-wip-us.apache.org/repos/asf/cloudstack/commit/9fe7846d
 > > Tree: 
http://git-wip-us.apache.org/repos/asf/cloudstack/tree/9fe7846d
 > > Diff: 
http://git-wip-us.apache.org/repos/asf/cloudstack/diff/9fe7846d
 > >
 > > Branch: refs/heads/master
 > > Commit: 9fe7846d72e401720e1dcbce52d021e2646429f1
 > > Parents: 91b1571
 > > Author: Harikrishna Patnala 
> > Authored: Mon Jun 3 12:33:58 2013 +0530
> > Committer: Kishan Kavala 
> > Committed: Wed Jun 5 15:14:04 2013 +0530
 > >
 > > 
--
 > >  .../src/com/cloud/upgrade/dao/Upgrade410to420.java |  209
 > >   -
> >  1 files changed, 204 insertions(+), 5 deletions(-)
 > > 
--
 > >
 > >
 > >
 > > 
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/9fe7846d/engine
 > > /schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
 > > 
--
 > > diff --git
 > > a/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
 > > b/eng

Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Marcus Sorensen
Ok. That gives me a place to start digging to figure out how to do
this. I'll update the thread when I find out, just for future
reference.

On Fri, Jun 7, 2013 at 12:06 PM, Chiradeep Vittal
 wrote:
>  _configServer.getConfigValue(Config.RouterTemplate***.key(),
> VMTemplateVO template =
> _templateDao.findRoutingTemplate(hType, templateName);
>
>
>
> On 6/7/13 8:55 AM, "Marcus Sorensen"  wrote:
>
>>I'm not sure if this fits in the discussion, I was asking Wei how
>>cloudstack chooses the system vm template during normal operation. I
>>get how the upgrades work, but I don't get how cloudstack chooses the
>>system template to use when actually deploying:
>>
>>I'm looking at it from a different perspective, not a CS
>>upgrade, but say we have to roll a new systemvm template for an
>>existing CS version. Say we rolled 4.2, with a new template, and then
>>two months later we realize that the template is missing dnsmasq or
>>something, and we have to have everyone install a new template. Do we
>>actually have to overwrite the existing template in-place on secondary
>>storage, then on each primary storage while the system vms are down?
>>Or can we register a new template, and the new template gets installed
>>on primary storage as system vms are rebooted.
>>
>> I saw that the upgrade scripts had that 'select max' statement, but
>>that just fetches the id for installing the template to secondary
>>storage. When I deploy a router, how does cloudstack select the
>>template for that?
>>
>>On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU  wrote:
>>> In my point view, we ask users register new template in the upgrade
>>> instruction in release notes. If they do not register, it is their
>>> fault. If they do but upgrade fails, it is our fault.
>>>
>>> I admit that it is a good way to change each upgrade process and
>>> remove old templates when we use new template. It is not large work.
>>>
>>> -Wei
>>>
>>> 2013/6/6, Kishan Kavala :
 In the mentioned example, when new template for 4.3 is introduced, we
should
 remove template upgrade code in Upgrade41to42. This will make upgrade
 succeed even when systemvm-kvm-4.2 is not in database.
 On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3 will
succeed
 even though the required systemvm-kvm-4.3 is not in database.

 So, every time a new system vm template is added, template upgrade from
 previous version should be removed.

 
 From: Wei ZHOU [ustcweiz...@gmail.com]
 Sent: Wednesday, June 05, 2013 3:56 PM
 To: dev@cloudstack.apache.org
 Subject: Re: git commit: updated refs/heads/master to 9fe7846

 Kishan,

 I know.

 If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
 systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
 systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
 The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in
the
 Upgrade41to42.

 -Wei


 2013/6/5 Kishan Kavala 

> Wei,
>  If we use other templates, system Vms may not work. Only 4.2
>templates
> should be used when upgrading to 4.2.
>
> > -Original Message-
> > From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
> > Sent: Wednesday, 5 June 2013 3:26 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: git commit: updated refs/heads/master to 9fe7846
> >
> > Kishan,
> >
> > What do you think about change some codes to "name like 'systemvm-
> > xenserver-%' " ?
> > If we use other templates, the upgrade maybe fail.
> >
> > -Wei
> >
> >
> > 2013/6/5 
> >
> > > Updated Branches:
> > >   refs/heads/master 91b15711b -> 9fe7846d7
> > >
> > >
> > > CLOUDSTACK-2728: 41-42 DB upgrade: add step to upgrade system
> > > templates
> > >
> > >
> > > Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
> > > Commit:
> > > http://git-wip-us.apache.org/repos/asf/cloudstack/commit/9fe7846d
> > > Tree:
>http://git-wip-us.apache.org/repos/asf/cloudstack/tree/9fe7846d
> > > Diff:
>http://git-wip-us.apache.org/repos/asf/cloudstack/diff/9fe7846d
> > >
> > > Branch: refs/heads/master
> > > Commit: 9fe7846d72e401720e1dcbce52d021e2646429f1
> > > Parents: 91b1571
> > > Author: Harikrishna Patnala 
> > > Authored: Mon Jun 3 12:33:58 2013 +0530
> > > Committer: Kishan Kavala 
> > > Committed: Wed Jun 5 15:14:04 2013 +0530
> > >
> > >
>--
> > >  .../src/com/cloud/upgrade/dao/Upgrade410to420.java |  209
> > >   -
> > >  1 files changed, 204 insertions(+), 5 deletions(-)
> > >
>--
> > >
> > >
> > >
> > >
>http://gi

Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Marcus Sorensen
Ok, here's what I've figured out so far (for master branch, 4.1 seems
different):

Specifically for routers, given the hypervisor KVM, it searches the
table vm_template for type='SYSTEM' and hypervisor_type='KVM' and
chooses the last registered template.

There is a 'router.template.kvm' config option, not in the
configuration table by default. If that exists, it is supposed to take
the value of that config option and search the 'name' field of the
vm_template table as well(add to the filter).

There is another config parameter called router.template.id, which
seems to do nothing in master code. There are references to things
that set it in the configuration table, but that's about it. This
seems bad because it's exposed to the user but doesn't do anything.

So, back to the original question of how to update a system VM
template. It looks like you'd register a new template, manually set
its type to 'SYSTEM' in the database (or how else can you set it?),
and that's it. If you want a particular one out of several SYSTEM
templates, rather than the latest, you'd set the router.template.kvm
parameter to the name text matching the particular template.

Now on to 4.1

On Fri, Jun 7, 2013 at 12:42 PM, Marcus Sorensen  wrote:
> Ok. That gives me a place to start digging to figure out how to do
> this. I'll update the thread when I find out, just for future
> reference.
>
> On Fri, Jun 7, 2013 at 12:06 PM, Chiradeep Vittal
>  wrote:
>>  _configServer.getConfigValue(Config.RouterTemplate***.key(),
>> VMTemplateVO template =
>> _templateDao.findRoutingTemplate(hType, templateName);
>>
>>
>>
>> On 6/7/13 8:55 AM, "Marcus Sorensen"  wrote:
>>
>>>I'm not sure if this fits in the discussion, I was asking Wei how
>>>cloudstack chooses the system vm template during normal operation. I
>>>get how the upgrades work, but I don't get how cloudstack chooses the
>>>system template to use when actually deploying:
>>>
>>>I'm looking at it from a different perspective, not a CS
>>>upgrade, but say we have to roll a new systemvm template for an
>>>existing CS version. Say we rolled 4.2, with a new template, and then
>>>two months later we realize that the template is missing dnsmasq or
>>>something, and we have to have everyone install a new template. Do we
>>>actually have to overwrite the existing template in-place on secondary
>>>storage, then on each primary storage while the system vms are down?
>>>Or can we register a new template, and the new template gets installed
>>>on primary storage as system vms are rebooted.
>>>
>>> I saw that the upgrade scripts had that 'select max' statement, but
>>>that just fetches the id for installing the template to secondary
>>>storage. When I deploy a router, how does cloudstack select the
>>>template for that?
>>>
>>>On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU  wrote:
 In my point view, we ask users register new template in the upgrade
 instruction in release notes. If they do not register, it is their
 fault. If they do but upgrade fails, it is our fault.

 I admit that it is a good way to change each upgrade process and
 remove old templates when we use new template. It is not large work.

 -Wei

 2013/6/6, Kishan Kavala :
> In the mentioned example, when new template for 4.3 is introduced, we
>should
> remove template upgrade code in Upgrade41to42. This will make upgrade
> succeed even when systemvm-kvm-4.2 is not in database.
> On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3 will
>succeed
> even though the required systemvm-kvm-4.3 is not in database.
>
> So, every time a new system vm template is added, template upgrade from
> previous version should be removed.
>
> 
> From: Wei ZHOU [ustcweiz...@gmail.com]
> Sent: Wednesday, June 05, 2013 3:56 PM
> To: dev@cloudstack.apache.org
> Subject: Re: git commit: updated refs/heads/master to 9fe7846
>
> Kishan,
>
> I know.
>
> If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
> systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
> systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
> The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in
>the
> Upgrade41to42.
>
> -Wei
>
>
> 2013/6/5 Kishan Kavala 
>
>> Wei,
>>  If we use other templates, system Vms may not work. Only 4.2
>>templates
>> should be used when upgrading to 4.2.
>>
>> > -Original Message-
>> > From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
>> > Sent: Wednesday, 5 June 2013 3:26 PM
>> > To: dev@cloudstack.apache.org
>> > Subject: Re: git commit: updated refs/heads/master to 9fe7846
>> >
>> > Kishan,
>> >
>> > What do you think about change some codes to "name like 'systemvm-
>> > xenserver-%' " ?
>> > If we use other templates,

Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Marcus Sorensen
4.1 is the same, but it does away with the router.template.kvm config
parameter. It literally just matches the last row in the vm_template
table that is type 'SYSTEM' and your hypervisor type.  It also has a
router.template.id setting that seems to do nothing.

I'm assuming the other system vms work the same way...

On Fri, Jun 7, 2013 at 1:36 PM, Marcus Sorensen  wrote:
> Ok, here's what I've figured out so far (for master branch, 4.1 seems
> different):
>
> Specifically for routers, given the hypervisor KVM, it searches the
> table vm_template for type='SYSTEM' and hypervisor_type='KVM' and
> chooses the last registered template.
>
> There is a 'router.template.kvm' config option, not in the
> configuration table by default. If that exists, it is supposed to take
> the value of that config option and search the 'name' field of the
> vm_template table as well(add to the filter).
>
> There is another config parameter called router.template.id, which
> seems to do nothing in master code. There are references to things
> that set it in the configuration table, but that's about it. This
> seems bad because it's exposed to the user but doesn't do anything.
>
> So, back to the original question of how to update a system VM
> template. It looks like you'd register a new template, manually set
> its type to 'SYSTEM' in the database (or how else can you set it?),
> and that's it. If you want a particular one out of several SYSTEM
> templates, rather than the latest, you'd set the router.template.kvm
> parameter to the name text matching the particular template.
>
> Now on to 4.1
>
> On Fri, Jun 7, 2013 at 12:42 PM, Marcus Sorensen  wrote:
>> Ok. That gives me a place to start digging to figure out how to do
>> this. I'll update the thread when I find out, just for future
>> reference.
>>
>> On Fri, Jun 7, 2013 at 12:06 PM, Chiradeep Vittal
>>  wrote:
>>>  _configServer.getConfigValue(Config.RouterTemplate***.key(),
>>> VMTemplateVO template =
>>> _templateDao.findRoutingTemplate(hType, templateName);
>>>
>>>
>>>
>>> On 6/7/13 8:55 AM, "Marcus Sorensen"  wrote:
>>>
I'm not sure if this fits in the discussion, I was asking Wei how
cloudstack chooses the system vm template during normal operation. I
get how the upgrades work, but I don't get how cloudstack chooses the
system template to use when actually deploying:

I'm looking at it from a different perspective, not a CS
upgrade, but say we have to roll a new systemvm template for an
existing CS version. Say we rolled 4.2, with a new template, and then
two months later we realize that the template is missing dnsmasq or
something, and we have to have everyone install a new template. Do we
actually have to overwrite the existing template in-place on secondary
storage, then on each primary storage while the system vms are down?
Or can we register a new template, and the new template gets installed
on primary storage as system vms are rebooted.

 I saw that the upgrade scripts had that 'select max' statement, but
that just fetches the id for installing the template to secondary
storage. When I deploy a router, how does cloudstack select the
template for that?

On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU  wrote:
> In my point view, we ask users register new template in the upgrade
> instruction in release notes. If they do not register, it is their
> fault. If they do but upgrade fails, it is our fault.
>
> I admit that it is a good way to change each upgrade process and
> remove old templates when we use new template. It is not large work.
>
> -Wei
>
> 2013/6/6, Kishan Kavala :
>> In the mentioned example, when new template for 4.3 is introduced, we
>>should
>> remove template upgrade code in Upgrade41to42. This will make upgrade
>> succeed even when systemvm-kvm-4.2 is not in database.
>> On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3 will
>>succeed
>> even though the required systemvm-kvm-4.3 is not in database.
>>
>> So, every time a new system vm template is added, template upgrade from
>> previous version should be removed.
>>
>> 
>> From: Wei ZHOU [ustcweiz...@gmail.com]
>> Sent: Wednesday, June 05, 2013 3:56 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: git commit: updated refs/heads/master to 9fe7846
>>
>> Kishan,
>>
>> I know.
>>
>> If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
>> systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
>> systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
>> The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in
>>the
>> Upgrade41to42.
>>
>> -Wei
>>
>>
>> 2013/6/5 Kishan Kavala 
>>
>>> Wei,
>>>  If we use other templates, system Vms may not wo

Re: quick systemvm question

2013-06-07 Thread Wei ZHOU
Marcus,
Please have a look at findSystemVMTemplate in VmTemplateDaoImpl.java
-Wei

2013/6/7, Marcus Sorensen :
> Thanks, I'm looking at it from a different perspective, not a CS
> upgrade, but say we have to roll a new systemvm template for an
> existing CS version. Say we rolled 4.2, with a new template, and then
> two months later we realize that the template is missing dnsmasq or
> something, and we have to have everyone install a new template. Do we
> actually have to overwrite the existing template in-place on secondary
> storage, then on each primary storage while the system vms are down?
> Or can we register a new template, and the new template gets installed
> on primary storage as system vms are rebooted.
>
>  I saw that the upgrade scripts had that 'select max' statement, but
> that just fetches the id for installing the template to secondary
> storage. When I deploy a router, how does cloudstack select the
> template for that?
>
> On Fri, Jun 7, 2013 at 12:54 AM, Wei ZHOU  wrote:
>> Marcus,
>>
>> (1) cloud-install-sys-tmplt updates the template with max(id):
>>
>> select max(id) from cloud.vm_template where type = \"SYSTEM\" and
>> hypervisor_type = \"KVM\" and removed is null"`
>>
>> (2) upgrade process update the template with specified name. in
>> Upgrade410to420.java
>> pstmt = conn.prepareStatement("select id from `cloud`.`vm_template` where
>> name like 'systemvm-xenserver-4.2' and removed is null order by id desc
>> limit 1");
>>
>> We are discussing in another thread "git commit: updated
>> refs/heads/master
>> to 9fe7846". Please join us.
>>
>> -Wei
>>
>>
>> 2013/6/7 Marcus Sorensen 
>>
>>> How does cloudstack know which template is the latest system vm? Does
>>> it match on name or something?  From what I have gathered in the
>>> upgrade docs, you simply register a new template, like any other, and
>>> run a convenience script that restarts your system vms. But I don't
>>> gather from this how cloudstack knows it's a system template (and
>>> further THE system template).
>>>
>


Contributing as a non-committer

2013-06-07 Thread Paul Angus
Guys,

I'm just trying to get up to speed with how I can contribute more (starting 
with a minor doc fix) but the link  
http://cloudstack.apache.org/develop/non-contributors.html is broken.

Can it be fixed please? (Does it count as a bug that I need to report?) Is there 
an alternate link for the time being?


Regards

Paul Angus
Senior Consultant / Cloud Architect


S: +44 20 3603 0540 | M: +447711418784
paul.an...@shapeblue.com | 
www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS

ShapeBlue are proud to be sponsoring CloudStack Collaboration  Conference NA

Apache CloudStack Bootcamp training courses
19/20 June, London
22/23 June, Santa Clara CA
10/11 July, Bangalore, India
21/22 August, London

This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is operated under 
license from Shape Blue Ltd. ShapeBlue is a registered trademark.


Re: Contributing as a non-committer

2013-06-07 Thread Kelcey Jamison Damage
As a non-committer, follow these steps:

http://cloudstack.apache.org/develop/non-committer.html

hope that helps.

- Original Message -
From: "Paul Angus" 
To: dev@cloudstack.apache.org
Cc: "Sebastien Goasguen" 
Sent: Friday, June 7, 2013 1:55:17 PM
Subject: Contributing as a non-committer 

Guys,

I'm just trying to get up to speed with how I can contribute more (starting 
with a minor doc fix) but the link  
http://cloudstack.apache.org/develop/non-contributors.html is broken.

Can it be fixed pls (does it count as a bug that I need to report). Is there an 
alternate link for the time being?


Regards

Paul Angus
Senior Consultant / Cloud Architect



Re: [DISCUSS] NFS cache storage issue on object_store

2013-06-07 Thread John Burwell
Edison,

Please see my comments in-line below.

Thanks,
-John

On Jun 6, 2013, at 6:43 PM, Edison Su  wrote:

> 
> 
>> -Original Message-
>> From: John Burwell [mailto:jburw...@basho.com]
>> Sent: Thursday, June 06, 2013 7:47 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
>> 
>> Edison,
>> 
>> Please see my comments in-line below.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 5, 2013, at 6:55 PM, Edison Su  wrote:
>> 
>>> 
>>> 
 -Original Message-
 From: John Burwell [mailto:jburw...@basho.com]
 Sent: Wednesday, June 05, 2013 1:04 PM
 To: dev@cloudstack.apache.org
 Subject: Re: [DISCUSS] NFS cache storage issue on object_store
 
 Edison,
 
 You have provided some great information below which helps greatly to
 understand the role of the "NFS cache" mechanism.  To summarize, this
 mechanism is only currently required for Xen snapshot operations
 driven by Xen's coalescing operations.  Is my understanding correct?
 Just out of
>>> 
>>> I think Ceph may still need "NFS cache", for example, during delta snapshot
>> backup:
>>> http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
>>> You need to create a delta snapshot into a file, then upload the file into 
>>> S3
>>> 
>>> For KVM, if the snapshot is taken on qcow2, then need to copy the
>> snapshot into a file system, then backup it to S3.
>>> 
>>> Another usage case for "NFS cache " is to cache template stored on S3, if
>> there is no zone-wide primary storage. We need to download template from
>> S3 into every primary storage, if there is no cache, each download will take 
>> a
>> while: comparing download template directly from S3(if the S3 is region wide)
>> with download from a zone wide "cache" storage, I would say, the download
>> from zone wide cache storage should be faster than from region wide S3. If
>> there is no zone wide primary storage, then we will download the template
>> from S3 several times, which is quite time consuming.
>>> 
>>> 
>>> There may have other places to use "NFS cache", but the point is as
>>> long as mgt server can be decoupled from this "cache" storage, then we
>> can decide when/how to use cache storage based on different kind of
>> hypervisor/storage combinations in the future.
>> 
>> I think we would do well to re-orient the way we think about roles and
>> requirements.  Ceph doesn't need a file system to perform a delta snapshot
>> operation.  Xen, KVM, and/or VMWare need access to a file system to
> 
> For the Ceph delta snapshot case, it is Ceph that requires a file system to 
> perform the delta snapshot (http://ceph.com/docs/next/man/8/rbd/):
> 
> export-diff [image-name] [dest-path] [--from-snap snapname]
> Exports an incremental diff for an image to dest path (use - for stdout). If 
> an initial snapshot is specified, only changes since that snapshot are 
> included; otherwise, any regions of the image that contain data are included. 
> The end snapshot is specified using the standard --snap option or @snap syntax 
> (see below). The image diff format includes metadata about image size 
> changes, and the start and end snapshots. It efficiently represents discarded 
> or 'zero' regions of the image.
> 
> The dest-path is either a file or stdout. If using stdout, a lot of memory 
> may be needed; if using the hypervisor's local file system, the local file 
> system may not have enough space to store the delta diff.

I apologize for failing to read more closely -- I mistakenly assumed you were 
referring to hypervisor snapshots.  To my mind, if a local file system is 
needed by a storage driver to perform an operation, then it should be 
encapsulated within the driver's scope.  The storage layer should provide a 
suitable interface for the driver to acquire/release a reservation to the 
staging/temporary area if it needs it.

For Ceph specifically, stdout can be pushed through a BufferedOutputStream and 
written straight to the object store -- skipping the file system.  With this 
approach, we should be able to keep the memory required a fixed size and "pump" 
it out to the object store.  Ideally, we would define the interfaces to provide 
InputStreams and OutputStreams -- creating the potential for the copy operation 
to be implemented in the orchestration code.
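The fixed-memory "pump" described above can be sketched as follows. This is an illustration only: the ByteArrayInputStream and ByteArrayOutputStream stand in for the `rbd export-diff` stdout and the object-store upload stream, which are assumptions, not code from this thread.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamPump {

    // Copy a stream in fixed-size chunks so memory stays bounded,
    // regardless of how large the snapshot diff is.
    static long pump(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for `rbd export-diff ... -` stdout:
        InputStream diff = new ByteArrayInputStream(new byte[26]);
        // Stand-in for an object-store upload stream:
        ByteArrayOutputStream objectStore = new ByteArrayOutputStream();
        System.out.println("copied " + pump(diff, objectStore) + " bytes");
    }
}
```

With an interface shaped like this, the copy loop itself could live in the orchestration code while each driver only supplies the streams.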

> 
>> perform these operations.  The hypervisor plugin should request a
>> reservation of x size as a file handle from the Storage subsystem.  The Ceph
>> driver implements this request by using a staging area + transfer operation.
>> This approach encapsulates the operation/rules around the staging area from
>> clients, protects against concurrent requests flooding a resource, and allows
>> hypervisor-specific behavior/rules to encapsulated in the appropriate plugin.
>> 
>>> 
 curiosity, is their a Xen expert on the list who can provide a
 high-level description of the coalescing operation -- in particular,
 the way it inter

Quick DB Question

2013-06-07 Thread Mike Tutkowski
Hi,

I'd like to place an IQN in the "iscsi_name" field available in the
cloud.volumes table after I create an appropriate iSCSI target on a SAN.

For some reason, we don't seem to be using this column in the VolumeVO
class, so I went ahead and added access to it.

I've successfully added columns to tables before in CloudStack and created
read/write access to them, but I am - for some reason - having trouble with
this "iscsi_name" column.

In the VolumeVO class, I've added the following (below). Can anyone see any
flaws in what I've done? I could have made the member variable private (I
just copied, pasted, and modified an existing field), but that shouldn't
matter for this purpose. I don't usually use the "_" in a method name, but
it just looked better to me in this case.

Thanks!

@Column(name = "iscsi_name")

String iScsiName;


@Override

public String get_iScsiName() {

return this.iScsiName;

}



public void set_iScsiName(String iScsiName) {

this.iScsiName = iScsiName;

}



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: Quick DB Question

2013-06-07 Thread Vijayendra Bhamidipati
Hi Mike,

You're probably calling those setter methods in the constructor and I don't see 
any problem having an '_' in the function name. What is the problem you're 
seeing?

Also I don't see this iscsi_name in VolumeVO.java on master - I'm guessing 
you're working off a private branch.

I'm yet to go through your earlier mails on the alias - so sorry for the 
following question if it has already been discussed - why do you want to put an 
iscsi iqn here in volumeVO? Isn't it better to put in a class of its own that 
derives VolumeVO?

Regards,
Vijay

-Original Message-
From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com] 
Sent: Friday, June 07, 2013 2:52 PM
To: dev@cloudstack.apache.org
Subject: Quick DB Question

Hi,

I'd like to place an IQN in the "iscsi_name" field available in the 
cloud.volumes table after I create an appropriate iSCSI target on a SAN.

For some reason, we don't seem to be using this column in the VolumeVO class, 
so I went ahead and added access to it.

I've successfully added columns to tables before in CloudStack and created 
read/write access to them, but I am - for some reason - having trouble with 
this "iscsi_name" column.

In the VolumeVO class, I've added the following (below). Can anyone see any 
flaws in what I've done? I could have made the member variable private (I just 
copied, pasted, and modified an existing field), but that shouldn't matter for 
this purpose. I don't usually use the "_" in a method name, but it just looked 
better to me in this case.

Thanks!

@Column(name = "iscsi_name")

String iScsiName;


@Override

public String get_iScsiName() {

return this.iScsiName;

}



public void set_iScsiName(String iScsiName) {

this.iScsiName = iScsiName;

}



--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*(tm)*


Re: Quick DB Question

2013-06-07 Thread Mike Tutkowski
Hi,

Yeah, I should have been more explicit in what problem I was seeing. :)

What I'm seeing is that I set the field with a string (IQN) that is not
null, but it doesn't make it to the DB.

I have other, similar "sets" and all is well with them (their data make it
to the DB just fine).

Perhaps it's getting overwritten later in the processing with a null. I'll
have to look into it more.

I'm actually on master, as well. I do see an "iscsi_name" field,
varchar(255), in the cloud.volumes table. I haven't updated in a week or so,
but I doubt the column's been removed. Strange.

I'm developing a storage plug-in which creates an iSCSI volume on a SAN and
I just wanted to use (what I thought was) an existing column to store the
IQN. It looked like the "iscsi_name" column would be a good place to store
this info.

Thanks!


On Fri, Jun 7, 2013 at 4:05 PM, Vijayendra Bhamidipati <
vijayendra.bhamidip...@citrix.com> wrote:

> Hi Mike,
>
> You're probably calling those setter methods in the constructor and I
> don't see any problem having an '_' in the function name. What is the
> problem you're seeing?
>
> Also I don't see this iscsi_name in VolumeVO.java on master - I'm guessing
> you're working off a private branch.
>
> I'm yet to go through your earlier mails on the alias - so sorry for the
> following question if it has already been discussed - why do you want to
> put an iscsi iqn here in volumeVO? Isn't it better to put in a class of its
> own that derives VolumeVO?
>
> Regards,
> Vijay
>
> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Friday, June 07, 2013 2:52 PM
> To: dev@cloudstack.apache.org
> Subject: Quick DB Question
>
> Hi,
>
> I'd like to place an IQN in the "iscsi_name" field available in the
> cloud.volumes table after I create an appropriate iSCSI target on a SAN.
>
> For some reason, we don't seem to be using this column in the VolumeVO
> class, so I went ahead and added access to it.
>
> I've successfully added columns to tables before in CloudStack and created
> read/write access to them, but I am - for some reason - having trouble with
> this "iscsi_name" column.
>
> In the VolumeVO class, I've added the following (below). Can anyone see
> any flaws in what I've done? I could have made the member variable private
> (I just copied, pasted, and modified an existing field), but that shouldn't
> matter for this purpose. I don't usually use the "_" in a method name, but
> it just looked better to me in this case.
>
> Thanks!
>
> @Column(name = "iscsi_name")
>
> String iScsiName;
>
>
> @Override
>
> public String get_iScsiName() {
>
> return this.iScsiName;
>
> }
>
>
>
> public void set_iScsiName(String iScsiName) {
>
> this.iScsiName = iScsiName;
>
> }
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: Quick DB Question

2013-06-07 Thread Alex Huang
Mike,

The Data Access Layer page [1] has the answer to this.   Specifically, this 
particular part of the example:

// getters and setters must follow the Java
// convention of putting get/set in front of
// the field name.
public String getText() {
return text;
}

get_iscsiName is not a Java coding convention.  We should not use it.  The 
coding convention is "get" followed by the variable name with the first 
character capitalized.

--Alex
[1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Data+Access+Layer

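A minimal sketch of the accessor shape the convention above calls for, using the field from this thread. The JPA annotation is shown as a comment so the sketch compiles standalone, and the class name is hypothetical (this is not the actual VolumeVO); the exact reflection rules of CloudStack's DAO layer are outside this sketch.

```java
public class VolumeExample {

    // @Column(name = "iscsi_name")   // JPA annotation, as in VolumeVO
    private String iScsiName;

    // JavaBean convention: "get"/"set" plus the field name with its
    // first letter capitalized, and no underscore, so the DAO layer
    // can discover and map the field.
    public String getIScsiName() {
        return iScsiName;
    }

    public void setIScsiName(String iScsiName) {
        this.iScsiName = iScsiName;
    }

    public static void main(String[] args) {
        VolumeExample v = new VolumeExample();
        v.setIScsiName("iqn.2010-01.com.example:vol-1");
        System.out.println(v.getIScsiName());
    }
}
```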

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Friday, June 7, 2013 3:12 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Quick DB Question
> 
> Hi,
> 
> Yeah, I should have been more explicit in what problem I was seeing. :)
> 
> What I'm seeing is that I set the field with a string (IQN) that is not null, 
> but it
> doesn't make it to the DB.
> 
> I have other, similar "sets" and all is well with them (their data make it to 
> the
> DB just fine).
> 
> Perhaps it's getting overwritten later in the processing with a null. I'll 
> have to
> look into it more.
> 
> I'm actually on master, as well. I do see an "iscsi_name" field.
> varchar(255) in the cloud.volumes table. I haven't updated in a week or so,
> but I doubt the column's been removed. Strange.
> 
> I'm developing a storage plug-in which creates an iSCSI volume on a SAN and
> I just wanted to use (what I thought was) an existing column to store the IQN.
> It looked like the "iscsi_name" column would be a good place to store this
> info.
> 
> Thanks!
> 
> 
> On Fri, Jun 7, 2013 at 4:05 PM, Vijayendra Bhamidipati <
> vijayendra.bhamidip...@citrix.com> wrote:
> 
> > Hi Mike,
> >
> > You're probably calling those setter methods in the constructor and I
> > don't see any problem having an '_' in the function name. What is the
> > problem you're seeing?
> >
> > Also I don't see this iscsi_name in VolumeVO.java on master - I'm
> > guessing you're working off a private branch.
> >
> > I'm yet to go through your earlier mails on the alias - so sorry for
> > the following question if it has already been discussed - why do you
> > want to put an iscsi iqn here in volumeVO? Isn't it better to put in a
> > class of its own that derives VolumeVO?
> >
> > Regards,
> > Vijay
> >
> > -Original Message-
> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > Sent: Friday, June 07, 2013 2:52 PM
> > To: dev@cloudstack.apache.org
> > Subject: Quick DB Question
> >
> > Hi,
> >
> > I'd like to place an IQN in the "iscsi_name" field available in the
> > cloud.volumes table after I create an appropriate iSCSI target on a SAN.
> >
> > For some reason, we don't seem to be using this column in the VolumeVO
> > class, so I went ahead and added access to it.
> >
> > I've successfully added columns to tables before in CloudStack and
> > created read/write access to them, but I am - for some reason - having
> > trouble with this "iscsi_name" column.
> >
> > In the VolumeVO class, I've added the following (below). Can anyone
> > see any flaws in what I've done? I could have made the member variable
> > private (I just copied, pasted, and modified an existing field), but
> > that shouldn't matter for this purpose. I don't usually use the "_" in
> > a method name, but it just looked better to me in this case.
> >
> > Thanks!
> >
> > @Column(name = "iscsi_name")
> >
> > String iScsiName;
> >
> >
> > @Override
> >
> > public String get_iScsiName() {
> >
> > return this.iScsiName;
> >
> > }
> >
> >
> >
> > public void set_iScsiName(String iScsiName) {
> >
> > this.iScsiName = iScsiName;
> >
> > }
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *(tm)*
> >
> 
> 
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*


RE: Quick DB Question

2013-06-07 Thread Vijayendra Bhamidipati
Thanks for the info Alex! Btw there are two VolumeVO.java files in the src and 
Edison will be removing one of them when he merges the object store branch into 
master.. you'll need to use engine/schema/src/com/cloud/storage/VolumeVO.java.

Regards,
Vijay

-Original Message-
From: Alex Huang [mailto:alex.hu...@citrix.com] 
Sent: Friday, June 07, 2013 3:19 PM
To: dev@cloudstack.apache.org
Subject: RE: Quick DB Question

Mike,

The Data Access Layer page[1] have the answer to this.   Specifically this 
particular part in the example.

// getters and setters must follow the Java
// convention of putting get/set in front of
// the field name.
public String getText() {
return text;
}

get_iscsiName is not a Java coding convention.  We should not use it.  The 
coding convention is "get" followed by the variable name with the first 
character capitalized.

--Alex
[1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Data+Access+Layer


> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Friday, June 7, 2013 3:12 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Quick DB Question
> 
> Hi,
> 
> Yeah, I should have been more explicit in what problem I was seeing. 
> :)
> 
> What I'm seeing is that I set the field with a string (IQN) that is 
> not null, but it doesn't make it to the DB.
> 
> I have other, similar "sets" and all is well with them (their data 
> make it to the DB just fine).
> 
> Perhaps it's getting overwritten later in the processing with a null. 
> I'll have to look into it more.
> 
> I'm actually on master, as well. I do see an "iscsi_name" field.
> varchar(255) in the cloud.volumes table. I haven't updated in a week 
> or so, but I doubt the column's been removed. Strange.
> 
> I'm developing a storage plug-in which creates an iSCSI volume on a 
> SAN and I just wanted to use (what I thought was) an existing column to store 
> the IQN.
> It looked like the "iscsi_name" column would be a good place to store 
> this info.
> 
> Thanks!
> 
> 
> On Fri, Jun 7, 2013 at 4:05 PM, Vijayendra Bhamidipati < 
> vijayendra.bhamidip...@citrix.com> wrote:
> 
> > Hi Mike,
> >
> > You're probably calling those setter methods in the constructor and 
> > I don't see any problem having an '_' in the function name. What is 
> > the problem you're seeing?
> >
> > Also I don't see this iscsi_name in VolumeVO.java on master - I'm 
> > guessing you're working off a private branch.
> >
> > I'm yet to go through your earlier mails on the alias - so sorry for 
> > the following question if it has already been discussed - why do you 
> > want to put an iscsi iqn here in volumeVO? Isn't it better to put in 
> > a class of its own that derives VolumeVO?
> >
> > Regards,
> > Vijay
> >
> > -Original Message-
> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > Sent: Friday, June 07, 2013 2:52 PM
> > To: dev@cloudstack.apache.org
> > Subject: Quick DB Question
> >
> > Hi,
> >
> > I'd like to place an IQN in the "iscsi_name" field available in the 
> > cloud.volumes table after I create an appropriate iSCSI target on a SAN.
> >
> > For some reason, we don't seem to be using this column in the 
> > VolumeVO class, so I went ahead and added access to it.
> >
> > I've successfully added columns to tables before in CloudStack and 
> > created read/write access to them, but I am - for some reason - 
> > having trouble with this "iscsi_name" column.
> >
> > In the VolumeVO class, I've added the following (below). Can anyone 
> > see any flaws in what I've done? I could have made the member 
> > variable private (I just copied, pasted, and modified an existing 
> > field), but that shouldn't matter for this purpose. I don't usually 
> > use the "_" in a method name, but it just looked better to me in this case.
> >
> > Thanks!
> >
> > @Column(name = "iscsi_name")
> >
> > String iScsiName;
> >
> >
> > @Override
> >
> > public String get_iScsiName() {
> >
> > return this.iScsiName;
> >
> > }
> >
> >
> >
> > public void set_iScsiName(String iScsiName) {
> >
> > this.iScsiName = iScsiName;
> >
> > }
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the 
> > cloud
> > *(tm)*
> >
> 
> 
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*


Re: Quick DB Question

2013-06-07 Thread Mike Tutkowski
Thanks, guys!


On Fri, Jun 7, 2013 at 4:29 PM, Vijayendra Bhamidipati <
vijayendra.bhamidip...@citrix.com> wrote:

> Thanks for the info Alex! Btw there are two VolumeVO.java files in the src
> and Edison will be removing one of them when he merges the object store
> branch into master.. you'll need to use
> engine/schema/src/com/cloud/storage/VolumeVO.java.
>
> Regards,
> Vijay
>
> -Original Message-
> From: Alex Huang [mailto:alex.hu...@citrix.com]
> Sent: Friday, June 07, 2013 3:19 PM
> To: dev@cloudstack.apache.org
> Subject: RE: Quick DB Question
>
> Mike,
>
> The Data Access Layer page[1] have the answer to this.   Specifically this
> particular part in the example.
>
> // getters and setters must follow the Java
> // convention of putting get/set in front of
> // the field name.
> public String getText() {
> return text;
> }
>
> get_iscsiName is not a Java coding convention.  We should not use it.  The
> coding convention is "get" followed by the variable name with the first
> character capitalized.
>
> --Alex
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Data+Access+Layer
>
>
> > -Original Message-
> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > Sent: Friday, June 7, 2013 3:12 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Quick DB Question
> >
> > Hi,
> >
> > Yeah, I should have been more explicit in what problem I was seeing.
> > :)
> >
> > What I'm seeing is that I set the field with a string (IQN) that is
> > not null, but it doesn't make it to the DB.
> >
> > I have other, similar "sets" and all is well with them (their data
> > make it to the DB just fine).
> >
> > Perhaps it's getting overwritten later in the processing with a null.
> > I'll have to look into it more.
> >
> > I'm actually on master, as well. I do see an "iscsi_name" field.
> > varchar(255) in the cloud.volumes table. I haven't updated in a week
> > or so, but I doubt the column's been removed. Strange.
> >
> > I'm developing a storage plug-in which creates an iSCSI volume on a
> > SAN and I just wanted to use (what I thought was) an existing column to
> store the IQN.
> > It looked like the "iscsi_name" column would be a good place to store
> > this info.
> >
> > Thanks!
> >
> >
> > On Fri, Jun 7, 2013 at 4:05 PM, Vijayendra Bhamidipati <
> > vijayendra.bhamidip...@citrix.com> wrote:
> >
> > > Hi Mike,
> > >
> > > You're probably calling those setter methods in the constructor and
> > > I don't see any problem having an '_' in the function name. What is
> > > the problem you're seeing?
> > >
> > > Also I don't see this iscsi_name in VolumeVO.java on master - I'm
> > > guessing you're working off a private branch.
> > >
> > > I'm yet to go through your earlier mails on the alias - so sorry for
> > > the following question if it has already been discussed - why do you
> > > want to put an iscsi iqn here in volumeVO? Isn't it better to put in
> > > a class of its own that derives VolumeVO?
> > >
> > > Regards,
> > > Vijay
> > >
> > > -Original Message-
> > > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > > Sent: Friday, June 07, 2013 2:52 PM
> > > To: dev@cloudstack.apache.org
> > > Subject: Quick DB Question
> > >
> > > Hi,
> > >
> > > I'd like to place an IQN in the "iscsi_name" field available in the
> > > cloud.volumes table after I create an appropriate iSCSI target on a
> SAN.
> > >
> > > For some reason, we don't seem to be using this column in the
> > > VolumeVO class, so I went ahead and added access to it.
> > >
> > > I've successfully added columns to tables before in CloudStack and
> > > created read/write access to them, but I am - for some reason -
> > > having trouble with this "iscsi_name" column.
> > >
> > > In the VolumeVO class, I've added the following (below). Can anyone
> > > see any flaws in what I've done? I could have made the member
> > > variable private (I just copied, pasted, and modified an existing
> > > field), but that shouldn't matter for this purpose. I don't usually
> > > use the "_" in a method name, but it just looked better to me in this
> case.
> > >
> > > Thanks!
> > >
> > > @Column(name = "iscsi_name")
> > >
> > > String iScsiName;
> > >
> > >
> > > @Override
> > >
> > > public String get_iScsiName() {
> > >
> > > return this.iScsiName;
> > >
> > > }
> > >
> > >
> > >
> > > public void set_iScsiName(String iScsiName) {
> > >
> > > this.iScsiName = iScsiName;
> > >
> > > }
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkow...@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud
> > > *(tm)*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> >

Re: git commit: updated refs/heads/master to 9fe7846

2013-06-07 Thread Chiradeep Vittal
Seems to be a 4.2 feature
commit 10b6c1c6c8f8c2ec49145a901fb083e7f362f3a1
Author: Harikrishna Patnala 
Date:   Tue Apr 30 16:41:25 2013 +0530

CLOUDSTACK-741: Granular Global Parameters Added parameters to cluster
level cluster.storage.allocated.capacity.notificationthreshold
cluster.storage.capacity.notificationthreshold

CLOUDSTACK-2036
global parameter for Router Template ID functionality added
We use 5 parameters to set the router template name for each hypervisor



On 6/7/13 12:44 PM, "Marcus Sorensen"  wrote:

>4.1 is the same, but it does away with the router.template.kvm config
>parameter. It literally just matches the last row in the vm_template
>field that is type 'SYSTEM' and your hypervisor type.  It also has a
>router.template.id field that seems to do nothing.
>
>I'm assuming the other system vms work the same way...
>
>On Fri, Jun 7, 2013 at 1:36 PM, Marcus Sorensen 
>wrote:
>> Ok, here's what I've figured out so far (for master branch, 4.1 seems
>> different):
>>
>> Specifically for routers, given the hypervisor KVM, it searches the
>> table vm_template for type='SYSTEM' and hypervisor_type='KVM' and
>> chooses the last registered template.
>>
>> There is a 'router.template.kvm' config option, not in the
>> configuration table by default. If that exists, it is supposed to take
>> the value of that config option and search the 'name' field of the
>> vm_template table as well(add to the filter).
>>
>> There is another config parameter called router.template.id, which
>> seems to do nothing in master code. There are references to things
>> that set it in the configuration table, but that's about it. This
>> seems bad because it's exposed to the user but doesn't do anything.
>>
>> So, back to the original question of how to update a system VM
>> template. It looks like you'd register a new template, manually set
>> its type to 'SYSTEM' in the database (or how else can you set it?),
>> and that's it. If you want a particular one out of several SYSTEM
>> templates, rather than the latest, you'd set the router.template.kvm
>> parameter to the name text matching the particular template.
>>
>> Now on to 4.1
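
The lookup Marcus traces above can be reproduced as a toy sqlite sketch. The table and column names below are assumed from this description, not taken from the actual CloudStack schema, and pick_router_template is a hypothetical helper, not a real CloudStack method:

```python
import sqlite3

# Toy reproduction of the router-template lookup described above.
# Schema and names are assumptions based on the discussion only.
def pick_router_template(conn, hypervisor, name=None):
    query = ("SELECT id, name FROM vm_template "
             "WHERE type = 'SYSTEM' AND hypervisor_type = ?")
    args = [hypervisor]
    if name is not None:  # analogous to the router.template.kvm override
        query += " AND name = ?"
        args.append(name)
    query += " ORDER BY id DESC LIMIT 1"  # "last registered" template wins
    return conn.execute(query, args).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_template ("
             "id INTEGER PRIMARY KEY, name TEXT, type TEXT, hypervisor_type TEXT)")
conn.executemany(
    "INSERT INTO vm_template (name, type, hypervisor_type) VALUES (?, 'SYSTEM', 'KVM')",
    [("systemvm-kvm-4.1",), ("systemvm-kvm-4.2",)])

print(pick_router_template(conn, "KVM"))                      # latest SYSTEM template
print(pick_router_template(conn, "KVM", "systemvm-kvm-4.1"))  # pinned by name
```

With the override unset, the most recently registered SYSTEM template for the hypervisor is chosen; setting a name filter pins an older one, matching the behavior described for router.template.kvm.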
>>
>> On Fri, Jun 7, 2013 at 12:42 PM, Marcus Sorensen 
>>wrote:
>>> Ok. That gives me a place to start digging to figure out how to do
>>> this. I'll update the thread when I find out, just for future
>>> reference.
>>>
>>> On Fri, Jun 7, 2013 at 12:06 PM, Chiradeep Vittal
>>>  wrote:
  _configServer.getConfigValue(Config.RouterTemplate***.key(),
 VMTemplateVO template =
 _templateDao.findRoutingTemplate(hType, templateName);



 On 6/7/13 8:55 AM, "Marcus Sorensen"  wrote:

>I'm not sure if this fits in the discussion, I was asking Wei how
>cloudstack chooses the system vm template during normal operation. I
>get how the upgrades work, but I don't get how cloudstack chooses the
>system template to use when actually deploying:
>
>I'm looking at it from a different perspective, not a CS
>upgrade, but say we have to roll a new systemvm template for an
>existing CS version. Say we rolled 4.2, with a new template, and then
>two months later we realize that the template is missing dnsmasq or
>something, and we have to have everyone install a new template. Do we
>actually have to overwrite the existing template in-place on secondary
>storage, then on each primary storage while the system vms are down?
>Or can we register a new template, and the new template gets installed
>on primary storage as system vms are rebooted.
>
> I saw that the upgrade scripts had that 'select max' statement, but
>that just fetches the id for installing the template to secondary
>storage. When I deploy a router, how does cloudstack select the
>template for that?
>
>On Fri, Jun 7, 2013 at 12:15 AM, Wei ZHOU 
>wrote:
>> In my point view, we ask users register new template in the upgrade
>> instruction in release notes. If they do not register, it is their
>> fault. If they do but upgrade fails, it is our fault.
>>
>> I admit that it is a good way to change each upgrade process and
>> remove old templates when we use new template. It is not large work.
>>
>> -Wei
>>
>> 2013/6/6, Kishan Kavala :
>>> In the mentioned example, when new template for 4.3 is introduced,
>>>we
>>>should
>>> remove template upgrade code in Upgrade41to42. This will make
>>>upgrade
>>> succeed even when systemvm-kvm-4.2 is not in database.
>>> On the other hand, if we allow 'systemvm-kvm-%', upgrade to 4.3
>>>will
>>>succeed
>>> even though the required systemvm-kvm-4.3 is not in database.
>>>
>>> So, every time a new system vm template is added, template upgrade
>>>from
>>> previous version should be removed.
>>>
>>> 
>>> From: Wei ZHOU [ustcweiz...@

Re: haproxy on VMWare systemVM template

2013-06-07 Thread Chiradeep Vittal
This is now done.
http://s.apache.org/wy



On 5/28/13 3:35 PM, "Chiradeep Vittal"  wrote:

>Thanks. I'll wait for the i386 bits to land as well.
>
>On 5/28/13 3:07 PM, "Milamber"  wrote:
>
>>Hello Chiradeep,
>>
>>Please note, haproxy has been backported in Debian Wheezy (7.0):
>>http://lists.debian.org/debian-backports-changes/2013/05/msg00050.html
>>http://packages.debian.org/wheezy-backports/haproxy
>>
>>Milamber
>>
>>On 11/05/2013 01:14, Chiradeep Vittal wrote:
>>> Fixed by fetching haproxy 1.4.8-1 from squeeze-backports
>>>
>>> On 5/9/13 4:16 PM, "Sheng Yang"  wrote:
>>>
 Don't know. We can use the Ubuntu package for now if it's possible.

 or just use sid packages if possible?

 dnsmasq version is 0.62, which is good enough for ipv6.

 --Sheng


 On Thu, May 9, 2013 at 4:04 PM, Chiradeep Vittal <
 chiradeep.vit...@citrix.com> wrote:

> How old? When did it disappear?
>
> I propose using the Ubuntu  package.
> In tools/appliance/definitions/systemvmtemplate/postinstall.sh
>
> wget
>
> 
>http://security.ubuntu.com/ubuntu/pool/main/h/haproxy/haproxy_1.4.18-0ubuntu2.1_i386.deb
>
> dpkg -i haproxy_1.4.18-0ubuntu2.1_i386.deb
>
> Also do we know if the system vm template contains the version of
> dnsmasq
> that is known to work for ipv6 support?
>
> --
> Chiradeep
>
> On 5/9/13 3:48 PM, "Sheng Yang"  wrote:
>
>> No idea. Probably we should just grab some old generated systemvm
>>for
> now.
>> --Sheng
>>
>>
>> On Thu, May 9, 2013 at 3:37 PM, Chiradeep Vittal <
>> chiradeep.vit...@citrix.com> wrote:
>>
>>> Should we use the Ubuntu package for now?
>>>
>>> On 5/9/13 2:03 PM, "Sheng Yang"  wrote:
>>>
 HAproxy is missing in Debian 7.0's repo, due to old maintainer is
>>> missing.
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=674447
 http://lists.debian.org/debian-qa/2013/04/msg00039.html

 The new maintainer took over it at Apr 20th, but there is no
> schedule
>>> of
 recovering yet.

 That's why depends on everyday generated systemvm template is
>>> dangerous.
 --Sheng


 On Tue, Apr 23, 2013 at 12:09 AM, Rohit Yadav

>>> wrote:
> On Mon, Apr 22, 2013 at 11:45 AM, Abhinandan Prateek <
> agneya2...@hotmail.com
>> wrote:
>> The haproxy and port map services are not installed on VMWare
>>> system
> VM
>> template. Is the path used to create the templates different for
> different
>> Hypervisor templates ? I was under the assumption that the
> services
>> installed on all the system VM templates meant for different
> hypervisors
>> should be same ?
>>
> No? Pl. see tools/appliance/systemvmtemplate/postinstall.sh, if
> it's
> there
> those pkgs will be installed.
> For the template I created, I had built it with veewee on my
> system
>>> and
> then imported it in vmware fusion to install the vmware-tools.
>
> Cheers.
>
>
>> -abhi
>>
>>>
>
>>>
>>
>



Re: haproxy on VMWare systemVM template

2013-06-07 Thread Marcus Sorensen
Installed the oneiric Ubuntu haproxy for ours and it seems to work.
On May 28, 2013 4:07 PM, "Milamber"  wrote:

> Hello Chiradeep,
>
> Please note, haproxy has been backported in Debian Wheezy (7.0):
> http://lists.debian.org/debian-backports-changes/2013/05/msg00050.html
> http://packages.debian.org/wheezy-backports/haproxy
>
> Milamber
>
> On 11/05/2013 01:14, Chiradeep Vittal wrote:
>
>> Fixed by fetching haproxy 1.4.8-1 from squeeze-backports
>>
>> On 5/9/13 4:16 PM, "Sheng Yang"  wrote:
>>
>>  Don't know. We can use the Ubuntu package for now if it's possible.
>>>
>>> or just use sid packages if possible?
>>>
>>> dnsmasq version is 0.62, which is good enough for ipv6.
>>>
>>> --Sheng
>>>
>>>
>>> On Thu, May 9, 2013 at 4:04 PM, Chiradeep Vittal <
>>> chiradeep.vit...@citrix.com> wrote:
>>>
>>>  How old? When did it disappear?

 I propose using the Ubuntu  package.
 In tools/appliance/definitions/systemvmtemplate/postinstall.sh

 wget

 http://security.ubuntu.com/ubuntu/pool/main/h/haproxy/haproxy_1.4.18-0ubuntu2.1_i386.deb

 dpkg -i haproxy_1.4.18-0ubuntu2.1_i386.deb

 Also do we know if the system vm template contains the version of
 dnsmasq
 that is known to work for ipv6 support?

 --
 Chiradeep

 On 5/9/13 3:48 PM, "Sheng Yang"  wrote:

  No idea. Probably we should just grab some old generated systemvm for
>
 now.

> --Sheng
>
>
> On Thu, May 9, 2013 at 3:37 PM, Chiradeep Vittal <
> chiradeep.vit...@citrix.com> wrote:
>
>  Should we use the Ubuntu package for now?
>>
>> On 5/9/13 2:03 PM, "Sheng Yang"  wrote:
>>
>>  HAproxy is missing in Debian 7.0's repo, due to old maintainer is
>>>
>> missing.
>>
>>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=674447
>>> http://lists.debian.org/debian-qa/2013/04/msg00039.html
>>>
>>> The new maintainer took over it at Apr 20th, but there is no
>>>
>> schedule

> of
>>
>>> recovering yet.
>>>
>>> That's why depends on everyday generated systemvm template is
>>>
>> dangerous.
>>
>>> --Sheng
>>>
>>>
>>> On Tue, Apr 23, 2013 at 12:09 AM, Rohit Yadav 
>>>
>> wrote:
>>
>>> On Mon, Apr 22, 2013 at 11:45 AM, Abhinandan Prateek <
 agneya2...@hotmail.com

> wrote:
> The haproxy and port map services are not installed on VMWare
>
 system
>>
>>> VM

> template. Is the path used to create the templates different for
>
 different

> Hypervisor templates ? I was under the assumption that the
>
 services

> installed on all the system VM templates meant for different
>
 hypervisors

> should be same ?
>
 No? Pl. see tools/appliance/systemvmtemplate/postinstall.sh,
 if

>>> it's

> there
 those pkgs will be installed.
 For the template I created, I had built it with veewee on my

>>> system

> and
>>
>>> then imported it in vmware fusion to install the vmware-tools.

 Cheers.


  -abhi
>
>
>>

>>
>


Hello (Upgrade to 4.1)

2013-06-07 Thread Maurice Lawler
I just wanted to follow up; it seems as though the communication has
stopped.


I am presently utilizing CS 4.0.2 | KVM | CentOS 6.3

I would like to go ahead and upgrade to CS 4.1 / CentOS 6.4; however,
prior to doing so, it was suggested to pause all containers
(instances), as it seems the upgrade will also be upgrading qemu.


Just wanted to make sure.

- Maurice


Re: Review Request: Fix for test case failure test_network.py:test_delete_account - CLOUDSTACK-2898

2013-06-07 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11713/#review21612
---


no diff here. did you forget to attach?

- Prasanna Santhanam


On June 7, 2013, 6:08 p.m., Rayees Namathponnan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11713/
> ---
> 
> (Updated June 7, 2013, 6:08 p.m.)
> 
> 
> Review request for cloudstack, Prasanna Santhanam and Girish Shilamkar.
> 
> 
> Description
> ---
> 
> https://issues.apache.org/jira/browse/CLOUDSTACK-2898
> 
> In this test case we need to capture "cloudstackAPIException" before 
> capturing more generic exceptions
> 
> 
> Diffs
> -
> 
> 
> Diff: https://reviews.apache.org/r/11713/diff/
> 
> 
> Testing
> ---
> 
> Tested 
> 
> 
> Thanks,
> 
> Rayees Namathponnan
> 
>
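
The ordering point in the description above — catch CloudstackAPIException before more generic exceptions — follows the usual Python rule that except clauses are tried top to bottom. A minimal sketch, using a stand-in class rather than the real marvin import:

```python
class CloudstackAPIException(Exception):
    """Stand-in for the marvin exception class named in the review."""

def handle(exc=None):
    # The specific API exception must be caught before the generic
    # Exception clause; otherwise the generic handler would swallow it.
    try:
        if exc is not None:
            raise exc
    except CloudstackAPIException:
        return "api-error"
    except Exception:
        return "generic-error"
    return "ok"

print(handle(CloudstackAPIException("account busy")))  # api-error
print(handle(ValueError("unrelated")))                 # generic-error
print(handle())                                        # ok
```

Reversing the two except clauses would make every failure report "generic-error", which is the symptom the test-case fix addresses.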



Re: KVM development, libvirt

2013-06-07 Thread Prasanna Santhanam
On Fri, Jun 07, 2013 at 11:26:22AM -0400, John Burwell wrote:
> Prasanna,
> 
> What if we made passing the Jenkins tests a pre-requisite to open
> voting?  In such a scenario, the test report from the Jenkins build
> would be attached to the voting email.
> 

Absolutely,

We already do check that all jobs on Jenkins are "blue" before cutting
the RC. So adding those additional package tests into jenkins for
supported platforms will add to the release manager's checklist.

-- 
Prasanna.,
 
> On Jun 7, 2013, at 9:09 AM, Prasanna Santhanam  wrote:
> 
> > On Thu, Jun 06, 2013 at 10:48:14PM -0600, Marcus Sorensen wrote:
> >> Ok. Do we need to call a vote or something to change our rules to
> >> solidify that we should require at least two votes from each supported
> >> platform, whether they be automated tests or contributor tests?
> >> 
> > 
> > I'd encourage that. That'll need a change to our release
> > testing/voting steps which works from the source release only.
> > 
> > I'd personally prefer a jenkins automated package test. 
> > 
> > -- 
> > Prasanna.,
> > 
> > 
> > Powered by BigRock.com
> > 



Powered by BigRock.com



reminder: @author tags in codebase

2013-06-07 Thread Prasanna Santhanam
The latest list is on the bug report:
https://issues.apache.org/jira/browse/CLOUDSTACK-1253

Please consider removing them (and fixing your IDE) if you own the
code.

Thanks,

-- 
Prasanna.,


Powered by BigRock.com