[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-521




[GitHub] cloudstack issue #1897: CLOUDSTACK-9733: Concurrent volume snapshots of a VM...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@ramkatru Checked and addressed.




[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@blueorangutan test




[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests




[GitHub] cloudstack issue #1897: CLOUDSTACK-9733: Concurrent volume snapshots of a VM...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@koushik-das @kishankavala  Please review the changes.




[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread nvazquez
Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner thanks for reviewing! I extracted the code into new methods 
and also added unit tests for them.




[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@jburwell Allowing deletion of a template even when active VMs exist has been the 
default behavior for a few years. Deleting the template on secondary storage 
doesn't remove the template copy on primary storage, so all existing VM functions 
keep working just fine. 
From my perspective, if we allow forced deletion from the UI, I am fine with 
switching the default to forced=no and documenting it in the Release Notes. 




[GitHub] cloudstack issue #1879: CLOUDSTACK-9719: [VMware] VR loses DHCP settings and...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@rhtyd Thanks for running these tests. The failures/errors are not related 
to this PR's changes.




[GitHub] cloudstack issue #1879: CLOUDSTACK-9719: [VMware] VR loses DHCP settings and...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1879
  
@sateesh-chodapuneedi @rhtyd  Please review the code changes.




[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@nvazquez great work.
However, there is a catch that I think you might have overlooked, and it is 
caused by the method extraction I suggested.

If you look at the code before the extraction, every time an exception was 
thrown the code set the variable `rollBackState = true`. This happens at lines 
287, 305, and 313. Now that the code has been extracted, setting that variable 
to `true` no longer works, because the context in which it is declared has 
changed.

In my opinion, this code was already a bit odd. It throws an exception that is 
caught right away and sets a control variable that is acted on in the `finally` 
block. The only reason I can see for this is that, if exceptions other than the 
ones generated at lines 292, 310, and 325 occur, we do not want to execute the 
rollback for them. However, this seems error prone and can lead to database 
inconsistencies.

I would move the "rollback" code (lines 342-345) into the catch block.

I do not know if I have been clear; we can discuss this further. I may have 
overlooked some bits of it as well (it is quite a complicated piece of code).
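
To illustrate the shape I mean (a sketch only; the class and method names here 
are placeholders, not the actual `DomainManagerImpl` code):

```java
// Illustrative sketch only (hypothetical names): rollback is performed in the
// catch block instead of via a rollBackState flag inspected in finally.
public class DomainDeletionSketch {

    public boolean deleteDomain(long domainId) {
        try {
            cleanupAccounts(domainId);     // may throw on failure
            cleanupNetworks(domainId);     // may throw on failure
            removeDomainRecord(domainId);  // may throw on failure
            return true;
        } catch (RuntimeException e) {
            // Undo partial work right where the failure is caught,
            // instead of signalling the finally block through a shared flag.
            rollbackDomainState(domainId);
            throw e;
        } finally {
            releaseDomainLock(domainId);   // always runs, success or failure
        }
    }

    private void cleanupAccounts(long id) { /* stub */ }
    private void cleanupNetworks(long id) { /* stub */ }
    private void removeDomainRecord(long id) { /* stub */ }
    private void rollbackDomainState(long id) { /* stub */ }
    private void releaseDomainLock(long id) { /* stub */ }
}
```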





[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut Can you do a force push to kick off Jenkins again? I'm guessing 
Jenkins just had an issue, not the PR.




[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@remibergsma The same MAC for RVRs has been re-introduced as part of 
[CLOUDSTACK-985](https://issues.apache.org/jira/browse/CLOUDSTACK-985), which 
confirms that peer NICs of RVRs should have the same MAC addresses. Previously, 
only the default public NIC was configured with the same MAC; on VMware there 
are additional public NICs that were not configured with matching MAC addresses.




[GitHub] cloudstack issue #1954: CLOUDSTACK-9795: moved logrotate from cron.daily to ...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1954
  
tag:mergeready





[GitHub] cloudstack issue #1875: CLOUDSTACK-8608: [VMware] System VMs failed to start...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@rhtyd Thanks for running the tests. The test failures/errors above are failing 
in other PRs as well and are not related to the changes in this PR.




[GitHub] cloudstack issue #1875: CLOUDSTACK-8608: [VMware] System VMs failed to start...

2017-02-21 Thread sureshanaparti
Github user sureshanaparti commented on the issue:

https://github.com/apache/cloudstack/pull/1875
  
@sateesh-chodapuneedi @rhtyd  Please review the changes.




[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102336557
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) {
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

@sureshanaparti If the condition really should be `i < maxDataVolumesSupported` 
(which would make the maximum device id returned by the method be 
`maxDataVolumesSupported - 1`), then the check + error message above
```
if (deviceId.longValue() <= 0 || deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
```
need to be changed so as not to include the value of 
`maxDataVolumesSupported` itself.
Otherwise, when `maxDataVolumesSupported` has value `6` (for example), the 
method would not ever return `6` when parameter `deviceId` is specified as 
`null` but would return `6` when parameter `deviceId` is specified as `6` 
(assuming device id `6` is not already in use).  Also, the error message would 
state "deviceId should be 1,2,4-6" whenever parameter `deviceId` would be 
specified as an invalid value, which would not be correct (as `5` should be the 
highest valid device id).
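
To make it concrete, the explicit-deviceId check would need to look something 
like the following (a sketch only, reusing the variable names from the diff; 
the class and method here are illustrative, not the PR's code):

```java
// Sketch only: validation consistent with an allocation loop that runs while
// (i < maxDataVolumesSupported), so the highest id either path can return is
// maxDataVolumesSupported - 1.
public class DeviceIdCheckSketch {

    static void validateDeviceId(long deviceId, int maxDataVolumesSupported) {
        if (deviceId <= 0
                || deviceId >= maxDataVolumesSupported   // >= instead of >, so the cap itself is rejected
                || deviceId == 3) {
            throw new RuntimeException("deviceId should be 1,2,4-" + (maxDataVolumesSupported - 1));
        }
    }

    public static void main(String[] args) {
        validateDeviceId(5, 6);   // ok: 5 is the highest id the allocation loop can hand out
        validateDeviceId(6, 6);   // throws: 6 can never be allocated when the loop stops at i < 6
    }
}
```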




[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102342529
  
--- Diff: plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java ---
@@ -584,18 +584,36 @@ public void defFileBasedDisk(String filePath, String diskLabel, DiskBus bus, Dis
 
 /* skip iso label */
 private String getDevLabel(int devId, DiskBus bus) {
--- End diff --

It would be great to have unit tests for either `getDevLabel(int devId, 
DiskBus bus)` or `getDevLabelSuffix(int deviceIndex)`, especially to test the 
expected results when `devId` (or `deviceIndex`) is high enough to produce a 
double-letter device label suffix.
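
For reference, the suffix arithmetic such a test would pin down looks roughly 
like this (a self-contained, illustrative sketch of the usual a..z, aa.. 
disk-naming convention, not the code in this PR):

```java
// Illustrative reference implementation of device-label suffix logic
// (not the PR's code): index 0 -> "a", 25 -> "z", 26 -> "aa", 27 -> "ab", ...
public class DevLabelSuffixSketch {

    static String devLabelSuffix(int deviceIndex) {
        StringBuilder suffix = new StringBuilder();
        int i = deviceIndex;
        do {
            suffix.insert(0, (char) ('a' + (i % 26)));
            i = i / 26 - 1;            // bijective base-26: "z" is followed by "aa"
        } while (i >= 0);
        return suffix.toString();
    }

    public static void main(String[] args) {
        // the double-letter boundary is exactly what a unit test should cover
        System.out.println(devLabelSuffix(25));  // z
        System.out.println(devLabelSuffix(26));  // aa
        System.out.println(devLabelSuffix(51));  // az
        System.out.println(devLabelSuffix(52));  // ba
    }
}
```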




[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcorded wait timeou...

2017-02-21 Thread sateesh-chodapuneedi
Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
ping @karuturi @koushik-das 




Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Wido den Hollander  wrote:

>
>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>
>>
>> Maybe this is a good opportunity to discuss modernizing the OS  
>> selections so that drivers (and other features) could be selectable per  
>> OS.
>
> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3  
> then for example it will give you a VirtIO SCSI disk on KVM, anything  
> previous to that will get VirtIO-blk.

So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan
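
Roughly what I have in mind on the KVM side (purely a sketch; the detail key 
and the bus mapping here are my assumptions, not existing agent code):

```java
// Illustrative sketch only: map an optional rootDiskController detail sent by
// the management server to a disk bus, falling back to virtio-blk.
public class DiskBusSelectionSketch {

    enum DiskBus { VIRTIO, SCSI, IDE }

    static DiskBus selectDiskBus(java.util.Map<String, String> vmDetails) {
        String controller = vmDetails.get("rootDiskController");  // hypothetical detail key
        if (controller == null) {
            return DiskBus.VIRTIO;                 // keep today's behaviour when nothing is set
        }
        switch (controller.toLowerCase()) {
            case "scsi":
                return DiskBus.SCSI;               // virtio-scsi controller, sdX in the guest
            case "ide":
                return DiskBus.IDE;
            default:
                return DiskBus.VIRTIO;             // virtio-blk, vdX in the guest
        }
    }

    public static void main(String[] args) {
        System.out.println(selectDiskBus(java.util.Map.of("rootDiskController", "scsi"))); // SCSI
        System.out.println(selectDiskBus(java.util.Map.of())); // VIRTIO
    }
}
```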



>
> Wido
>
>> Thoughts?
>>
>>
>> 
>> From: Syed Ahmed 
>> Sent: Tuesday, January 24, 2017 10:46 AM
>> To: dev@cloudstack.apache.org
>> Cc: Simon Weller
>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>
>> To maintain backward compatibility we would have to add a config option  
>> here unfortunately. I do like the idea however. We can make the default  
>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing  
>> installations.
>>
>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander  
>> mailto:w...@widodh.nl>> wrote:
>>
>>> Op 21 januari 2017 om 23:50 schreef Wido den Hollander  
>>> mailto:w...@widodh.nl>>:
>>>
>>>
>>>
>>>
 Op 21 jan. 2017 om 22:59 heeft Syed Ahmed  
 mailto:sah...@cloudops.com>> het volgende  
 geschreven:

 Exposing this via an API would be tricky but it can definitely be  
 added as
 a cluster-wide or a global setting in my opinion. By enabling that,  
 all the
 instances would be using VirtIO SCSI. Is there a reason you'd want some
 instances to use VirtIIO and others to use VirtIO SCSI?
>>>
>>> Even a global setting would be a bit of work and hacky as well.
>>>
>>> I do not see any reason to keep VirtIO, it os just that devices will be  
>>> named sdX instead of vdX in the guest.
>>
>> To add, the Qemu wiki [0] says:
>>
>> "A virtio storage interface for efficient I/O that overcomes virtio-blk  
>> limitations and supports advanced SCSI hardware."
>>
>> At OpenStack [1] they also say:
>>
>> "It has been designed to replace virtio-blk, increase it's performance  
>> and improve scalability."
>>
>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO  
>> SCSI at version 5.X? :)
>>
>> Wido
>>
>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>
>>> That might break existing Instances when not using labels or UUIDs in  
>>> the Instance when mounting.
>>>
>>> Wido
>>>
> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller  
> mailto:swel...@ena.com>> wrote:
>
> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.
>
>
>
> Simon Weller/615-312-6068 <(615)%20312-6068>
>
>
> -Original Message-
> *From:* Wido den Hollander [w...@widodh.nl]
> *Received:* Saturday, 21 Jan 2017, 2:56PM
> *To:* Syed Ahmed [sah...@cloudops.com];  
> dev@cloudstack.apache.org [
> dev@cloudstack.apache.org]
> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>
>
>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed  
>> mailto:sah...@cloudops.com>>:
>>
>>
>> Wido,
>>
>> Were you thinking of adding this as a global setting? I can see why it
> will
>> be useful. I'm happy to review any ideas you might have around this.
>
> Well, not really. We don't have any structure for this in place right  
> now
> to define what type of driver/disk we present to a guest.
>
> See my answer below.
>
>> Thanks,
>> -Syed
>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak  
>> mailto:laszlo.horn...@gmail.com>>
>> wrote:
>>
>>> Hi Wido,
>>>
>>> If I understand correctly from the documentation and your examples,
> virtio
>>> provides virtio interface to the guest while virtio-scsi provides  
>>> scsi
>>> interface, therefore an IaaS service should not replace it without  
>>> user
>>> request / approval. It would be probably better to let the user set
> what
>>> kind of IO interface the VM needs.
>
> You'd say, but we already do those. Some Operating Systems get a IDE  
> disk,
> others a SCSI disk and when L

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
On VMware, rootdiskcontroller is passed over to the hypervisor in the VM start 
command. I know for a fact that the following rootdiskcontroller options, 
specified in template/VM details, work fine:
ide
scsi
lsilogic
lsilogic1068

In general, any scsi controller option that vmware recognizes should work.

Thanks,
Sergey


On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:

Wido den Hollander  wrote:

>
>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>
>>
>> Maybe this is a good opportunity to discuss modernizing the OS  
>> selections so that drivers (and other features) could be selectable per  
>> OS.
>
> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3  
> then for example it will give you a VirtIO SCSI disk on KVM, anything  
> previous to that will get VirtIO-blk.

So one thing I noticed, there is a possibility of a rootDiskController  
parameter passed to the Start Command.  So this means that the Management  
server could control whether to use scsi or virtio, assuming I’m reading  
this correctly, and we shouldn’t necessarily have to rely on the os type  
name inside the agent code.  From a quick glance at the vmware code, it  
looks like maybe they already use this parameter?  It would be great if  
someone familiar with the vmware code could chime in here.

Thanks,

Nathan



>
> Wido
>
>> Thoughts?
>>
>>
>> 
>> From: Syed Ahmed 
>> Sent: Tuesday, January 24, 2017 10:46 AM
>> To: dev@cloudstack.apache.org
>> Cc: Simon Weller
>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>
>> To maintain backward compatibility we would have to add a config option  
>> here unfortunately. I do like the idea however. We can make the default  
>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing  
>> installations.
>>
>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander  
>> mailto:w...@widodh.nl>> wrote:
>>
>>> Op 21 januari 2017 om 23:50 schreef Wido den Hollander  
>>> mailto:w...@widodh.nl>>:
>>>
>>>
>>>
>>>
 Op 21 jan. 2017 om 22:59 heeft Syed Ahmed  
 mailto:sah...@cloudops.com>> het volgende  
 geschreven:

 Exposing this via an API would be tricky but it can definitely be  
 added as
 a cluster-wide or a global setting in my opinion. By enabling that,  
 all the
 instances would be using VirtIO SCSI. Is there a reason you'd want some
 instances to use VirtIIO and others to use VirtIO SCSI?
>>>
>>> Even a global setting would be a bit of work and hacky as well.
>>>
>>> I do not see any reason to keep VirtIO, it os just that devices will be 
 
>>> named sdX instead of vdX in the guest.
>>
>> To add, the Qemu wiki [0] says:
>>
>> "A virtio storage interface for efficient I/O that overcomes virtio-blk  
>> limitations and supports advanced SCSI hardware."
>>
>> At OpenStack [1] they also say:
>>
>> "It has been designed to replace virtio-blk, increase it's performance  
>> and improve scalability."
>>
>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO 
 
>> SCSI at version 5.X? :)
>>
>> Wido
>>
>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>
>>> That might break existing Instances when not using labels or UUIDs in  
>>> the Instance when mounting.
>>>
>>> Wido
>>>
> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller  
> mailto:swel...@ena.com>> wrote:
>
> For the record, we've been looking into this as well.
> Has anyone tried it with Windows VMs before? The standard virtio 
driver
> doesn't support spanned disks and that's something we'd really like to
> enable for our customers.
>
>
>
> Simon Weller/615-312-6068 <(615)%20312-6068>
>
>
> -Original Message-
> *From:* Wido den Hollander [w...@widodh.nl]
> *Received:* Saturday, 21 Jan 2017, 2:56PM
> *To:* Syed Ahmed [sah...@cloudops.com];  
> dev@cloudstack.apache.org [
> dev@cloudstack.apache.org]
> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>
>
>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed  
>> mailto:sah...@cloudops.com>>:
>>
>>
>> Wido,
>>
>> Were you thinking of adding this as a global setting? I can see why 
it
> will
>> be useful. I'm happy to review any ideas you might have around this.
>
> Well

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM start  
> command. I know for the fact that the following rootdiskcontroller option  
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a rootDiskController
> parameter passed to the Start Command.  So this means that the Management
> server could control whether to use scsi or virtio, assuming I’m reading
> this correctly, and we shouldn’t necessarily have to rely on the os type
> name inside the agent code.  From a quick glance at the vmware code, it
> looks like maybe they already use this parameter?  It would be great if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config option
>>> here unfortunately. I do like the idea however. We can make the default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> mailto:w...@widodh.nl>> wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 mailto:w...@widodh.nl>>:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> mailto:sah...@cloudops.com>> het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd want some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> At OpenStack [1] they also say:
>>>
>>> "It has been designed to replace virtio-blk, increase it's performance
>>> and improve scalability."
>>>
>>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
>>> SCSI at version 5.X? :)
>>>
>>> Wido
>>>
>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>>
 That might break existing Instances when not using labels or UUIDs in
 the Instance when mounting.

 Wido

>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>> mailto:swel...@ena.com>> wrote:
>>
>> For the record, we've been looking into this as well.
>> Has anyone tried it with Windows VMs before? The standard virtio  
>> driver
>> doesn't support spanned disks and that's something we'd really like to
>> enable for our customers.
>>
>>
>>
>> Simon Weller/615-312-6068 <(615)%20312-6068>
>>
>>
>> -Original Message-
>> *From:* Wido den Hollander [w...@widodh.nl]
>> *Received:* Saturday, 21 Jan 2017, 2:56PM
>> *To:* Syed Ahmed [sah...@cloudops.com];
>> dev@cloudstack.apache.org [
>> dev@cloudstack.apache.org]
>> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
>>
>>
>>> Op 21 januari 2017 om 16:15 schreef Syed Ahmed
>>> mailto:sah...@cloudops.com>>:
>>>
>>>
>>> Wido,
>>>
>>> Were you thinking of adding this as a global setting? I can see why  
>>> it
>> will
>>> be useful. I'm happy to review any ideas you might have around this.
>>
>> Well, not really. We don't have any structure for this in place right
>> now
>> to define what type of driver/disk we present to a guest.
>>
>> See

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Here is the logic:
1. The default value is taken from the global configuration 
vmware.root.disk.controller.
2. To override it, add the same setting to the template or VM (starting from 4.10 
the UI allows adding advanced settings to templates and/or VMs). If added to a 
template, all VMs deployed from it will inherit this value. If added to a VM from 
which a template is later created, the template will also inherit all advanced 
settings.




On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:

Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM start 
 
> command. I know for the fact that the following rootdiskcontroller option 
 
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server  
side the determination is made as to which rootDiskController parameter to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a rootDiskController
> parameter passed to the Start Command.  So this means that the 
Management
> server could control whether to use scsi or virtio, assuming I’m 
reading
> this correctly, and we shouldn’t necessarily have to rely on the os 
type
> name inside the agent code.  From a quick glance at the vmware code, 
it
> looks like maybe they already use this parameter?  It would be great 
if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config option
>>> here unfortunately. I do like the idea however. We can make the default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> mailto:w...@widodh.nl>> wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 mailto:w...@widodh.nl>>:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> mailto:sah...@cloudops.com>> het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd want 
some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> At OpenStack [1] they also say:
>>>
>>> "It has been designed to replace virtio-blk, increase it's performance
>>> and improve scalability."
>>>
>>> So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
>>> SCSI at version 5.X? :)
>>>
>>> Wido
>>>
>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
>>> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>>>
 That might break existing Instances when not using labels or UUIDs in
 the Instance when mounting.

 Wido

>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>> mailto:swel...@ena.com>> wrote:
>>
>> For the record, we've been looking into this as well.
>> Has anyone tried it with Windows VMs before? The standard virtio  
>> driver
>> doesn't support spanned disks and that's something we'd really like 
to
>> enable for our customers.
>>
>>
>>

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> Here it is the logic.
> 1. Default value is taken from a global configuration  
> vmware.root.disk.controller   
> 2. To override add the same config to template or VM (starting from 4.10  
> UI allows adding advanced settings to templates and/or VMs). If added to  
> a template all VMs deployed from it will inherit this value. If added to  
> VM and then template is created it will also inherits all advanced  
> settings.
>

Excellent, thanks.  Do you happen to know where this is stored in the  
database?

Thanks again!

>
>
>
> On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:
>
> Sergey Levitskiy  wrote:
>
>> On vmware rootdiskcontroller is passed over to the hypervisor in VM start
>> command. I know for the fact that the following rootdiskcontroller option
>> specified in template/vm details work fine:
>> ide
>> scsi
>> lsilogic
>> lsilogic1068
>>
>> In general, any scsi controller option that vmware recognizes should work.
>>
>> Thanks,
>> Sergey
>
> Thanks Sergey!  So do you happen to know where on the management server
> side the determination is made as to which rootDiskController parameter to
> pass?
>
>
>
>
>> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>>
>> Wido den Hollander  wrote:
>>
 Op 25 januari 2017 om 4:44 schreef Simon Weller :


 Maybe this is a good opportunity to discuss modernizing the OS
 selections so that drivers (and other features) could be selectable per
 OS.
>>>
>>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>>> previous to that will get VirtIO-blk.
>>
>> So one thing I noticed, there is a possibility of a rootDiskController
>> parameter passed to the Start Command.  So this means that the Management
>> server could control whether to use scsi or virtio, assuming I’m reading
>> this correctly, and we shouldn’t necessarily have to rely on the os type
>> name inside the agent code.  From a quick glance at the vmware code, it
>> looks like maybe they already use this parameter?  It would be great if
>> someone familiar with the vmware code could chime in here.
>>
>> Thanks,
>>
>> Nathan
>>
>>
>>
>>> Wido
>>>
 Thoughts?


 
 From: Syed Ahmed 
 Sent: Tuesday, January 24, 2017 10:46 AM
 To: dev@cloudstack.apache.org
 Cc: Simon Weller
 Subject: Re: Adding VirtIO SCSI to KVM hypervisors

 To maintain backward compatibility we would have to add a config option
 here unfortunately. I do like the idea however. We can make the default
 VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
 installations.

 On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
 mailto:w...@widodh.nl>> wrote:

> Op 21 januari 2017 om 23:50 schreef Wido den Hollander
> mailto:w...@widodh.nl>>:
>
>
>
>
>> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
>> mailto:sah...@cloudops.com>> het volgende
>> geschreven:
>>
>> Exposing this via an API would be tricky but it can definitely be
>> added as
>> a cluster-wide or a global setting in my opinion. By enabling that,
>> all the
>> instances would be using VirtIO SCSI. Is there a reason you'd want  
>> some
>> instances to use VirtIIO and others to use VirtIO SCSI?
>
> Even a global setting would be a bit of work and hacky as well.
>
> I do not see any reason to keep VirtIO, it os just that devices will be
> named sdX instead of vdX in the guest.

 To add, the Qemu wiki [0] says:

 "A virtio storage interface for efficient I/O that overcomes virtio-blk
 limitations and supports advanced SCSI hardware."

 At OpenStack [1] they also say:

 "It has been designed to replace virtio-blk, increase it's performance
 and improve scalability."

 So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
 SCSI at version 5.X? :)

 Wido

 [0]: http://wiki.qemu.org/Features/VirtioSCSI
 [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

> That might break existing Instances when not using labels or UUIDs in
> the Instance when mounting.
>
> Wido
>
>>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>>> mailto:swel...@ena.com>> wrote:
>>>
>>> For the record, we've been looking into this as well.
>>> Has anyone tried it with Windows VMs before? The standard virtio
>>> driver
>>> doesn't support spanned disks and that's something we'd really like  
>>> to
>>> enable for our customers.
>>>
>>>
>>>
>>> Simon Weller/615-312-6068 <(615)%20312-6068>
>>>
>>>
>>> -Original Message-
>>> *From:* Wido den Hollander [w...@widodh.nl]
>>> *Received:* Saturday, 21 Jan 2017, 2:56PM
>>

Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Sergey Levitskiy
Actually, a minor correction: when adding the detail to VMs/templates, its name is 
rootDiskController for the root disk controller and dataDiskController for 
additional disks.
Also, if you want to make changes at scale, the changes need to go into the 
vm_template_details and user_vm_details tables respectively.
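
Expressed as code, the lookup order is roughly the following (a sketch only; 
the helper names are made up, only the vmware.root.disk.controller setting and 
the rootDiskController detail key come from this thread):

```java
// Illustrative sketch of the precedence described above:
// VM detail overrides template detail, which overrides the global setting.
public class RootDiskControllerLookupSketch {

    static String resolveRootDiskController(java.util.Map<String, String> vmDetails,
                                            java.util.Map<String, String> templateDetails,
                                            String globalSetting /* vmware.root.disk.controller */) {
        if (vmDetails.containsKey("rootDiskController")) {
            return vmDetails.get("rootDiskController");          // per-VM override
        }
        if (templateDetails.containsKey("rootDiskController")) {
            return templateDetails.get("rootDiskController");    // inherited from the template
        }
        return globalSetting;                                    // e.g. "scsi" or "lsilogic"
    }

    public static void main(String[] args) {
        String controller = resolveRootDiskController(
                java.util.Map.of(),                                   // no VM-level detail
                java.util.Map.of("rootDiskController", "lsilogic"),   // set on the template
                "scsi");                                              // global default
        System.out.println(controller);  // lsilogic
    }
}
```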

On 2/21/17, 8:03 PM, "Sergey Levitskiy"  wrote:

Here it is the logic.
1. Default value is taken from a global configuration 
vmware.root.disk.controller   
2. To override add the same config to template or VM (starting from 4.10 UI 
allows adding advanced settings to templates and/or VMs). If added to a 
template all VMs deployed from it will inherit this value. If added to VM and 
then template is created it will also inherits all advanced settings.




On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:

Sergey Levitskiy  wrote:

> On vmware rootdiskcontroller is passed over to the hypervisor in VM 
start  
> command. I know for the fact that the following rootdiskcontroller 
option  
> specified in template/vm details work fine:
> ide
> scsi
> lsilogic
> lsilogic1068
>
> In general, any scsi controller option that vmware recognizes should 
work.
>
> Thanks,
> Sergey

Thanks Sergey!  So do you happen to know where on the management server 
 
side the determination is made as to which rootDiskController parameter 
to  
pass?




>
>
> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>
> Wido den Hollander  wrote:
>
>>> Op 25 januari 2017 om 4:44 schreef Simon Weller :
>>>
>>>
>>> Maybe this is a good opportunity to discuss modernizing the OS
>>> selections so that drivers (and other features) could be selectable 
per
>>> OS.
>>
>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>> previous to that will get VirtIO-blk.
>
> So one thing I noticed, there is a possibility of a 
rootDiskController
> parameter passed to the Start Command.  So this means that the 
Management
> server could control whether to use scsi or virtio, assuming I’m 
reading
> this correctly, and we shouldn’t necessarily have to rely on the 
os type
> name inside the agent code.  From a quick glance at the vmware 
code, it
> looks like maybe they already use this parameter?  It would be 
great if
> someone familiar with the vmware code could chime in here.
>
> Thanks,
>
> Nathan
>
>
>
>> Wido
>>
>>> Thoughts?
>>>
>>>
>>> 
>>> From: Syed Ahmed 
>>> Sent: Tuesday, January 24, 2017 10:46 AM
>>> To: dev@cloudstack.apache.org
>>> Cc: Simon Weller
>>> Subject: Re: Adding VirtIO SCSI to KVM hypervisors
>>>
>>> To maintain backward compatibility we would have to add a config 
option
>>> here unfortunately. I do like the idea however. We can make the 
default
>>> VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
>>> installations.
>>>
>>> On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
>>> mailto:w...@widodh.nl>> wrote:
>>>
 Op 21 januari 2017 om 23:50 schreef Wido den Hollander
 mailto:w...@widodh.nl>>:




> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
> mailto:sah...@cloudops.com>> het volgende
> geschreven:
>
> Exposing this via an API would be tricky but it can definitely be
> added as
> a cluster-wide or a global setting in my opinion. By enabling 
that,
> all the
> instances would be using VirtIO SCSI. Is there a reason you'd 
want some
> instances to use VirtIIO and others to use VirtIO SCSI?

 Even a global setting would be a bit of work and hacky as well.

 I do not see any reason to keep VirtIO, it os just that devices 
will be
 named sdX instead of vdX in the guest.
>>>
>>> To add, the Qemu wiki [0] says:
>>>
>>> "A virtio storage interface for efficient I/O that overcomes 
virtio-blk
>>> limitations and supports advanced SCSI hardware."
>>>
>>> At OpenStack [1] they also say:
>>>
>>> "It has been designed to replace virtio-blk, increase it's 
performance
>>> and improve scalability."
>>>
>>> So it seems tha

[GitHub] cloudstack issue #1873: CLOUDSTACK-9709: Updated the vm ip fetch task to use...

2017-02-21 Thread jayapalu
Github user jayapalu commented on the issue:

https://github.com/apache/cloudstack/pull/1873
  
There are no failed test cases in the CI run.




Re: Adding VirtIO SCSI to KVM hypervisors

2017-02-21 Thread Nathan Johnson
Sergey Levitskiy  wrote:

> Actually, minor correction. When adding to VM/templates the name of the  
> detail is rootDiskController for Root controller and dataDiskController  
> for additional disks.
> Also, if you want to make changes on a global scale the changes need to  
> go to vm_template_details and user_vm_details tables respectively.

Thanks!  Very helpful

>
> On 2/21/17, 8:03 PM, "Sergey Levitskiy"   
> wrote:
>
> Here it is the logic.
> 1. Default value is taken from a global configuration 
> vmware.root.disk.controller 
> 2. To override add the same config to template or VM (starting from 4.10 
> UI allows adding advanced settings to templates and/or VMs). If added to a 
> template all VMs deployed from it will inherit this value. If added to VM and 
> then template is created it will also inherits all advanced settings.
>
>
>
>
> On 2/21/17, 7:06 PM, "Nathan Johnson"  wrote:
>
> Sergey Levitskiy  wrote:
>
>> On vmware rootdiskcontroller is passed over to the hypervisor in VM start
>> command. I know for the fact that the following rootdiskcontroller option
>> specified in template/vm details work fine:
>> ide
>> scsi
>> lsilogic
>> lsilogic1068
>>
>> In general, any scsi controller option that vmware recognizes should work.
>>
>> Thanks,
>> Sergey
>
> Thanks Sergey!  So do you happen to know where on the management 
> server
> side the determination is made as to which rootDiskController 
> parameter to
> pass?
>
>
>
>
>> On 2/21/17, 6:13 PM, "Nathan Johnson"  wrote:
>>
>> Wido den Hollander  wrote:
>>
 Op 25 januari 2017 om 4:44 schreef Simon Weller :


 Maybe this is a good opportunity to discuss modernizing the OS
 selections so that drivers (and other features) could be selectable per
 OS.
>>>
>>> That seems like a good idea. If you select Ubuntu 16.04 or CentOS 7.3
>>> then for example it will give you a VirtIO SCSI disk on KVM, anything
>>> previous to that will get VirtIO-blk.
>>
>> So one thing I noticed, there is a possibility of a rootDiskController
>> parameter passed to the Start Command.  So this means that the Management
>> server could control whether to use scsi or virtio, assuming I’m reading
>> this correctly, and we shouldn’t necessarily have to rely on the os type
>> name inside the agent code.  From a quick glance at the vmware code, it
>> looks like maybe they already use this parameter?  It would be great if
>> someone familiar with the vmware code could chime in here.
>>
>> Thanks,
>>
>> Nathan
>>
>>
>>
>>> Wido
>>>
 Thoughts?


 
 From: Syed Ahmed 
 Sent: Tuesday, January 24, 2017 10:46 AM
 To: dev@cloudstack.apache.org
 Cc: Simon Weller
 Subject: Re: Adding VirtIO SCSI to KVM hypervisors

 To maintain backward compatibility we would have to add a config option
 here unfortunately. I do like the idea however. We can make the default
 VirtIO ISCSI and keep the VirtIO-blk as an alternative for existing
 installations.

 On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander
 mailto:w...@widodh.nl>> wrote:

> Op 21 januari 2017 om 23:50 schreef Wido den Hollander
> mailto:w...@widodh.nl>>:
>
>
>
>
>> Op 21 jan. 2017 om 22:59 heeft Syed Ahmed
>> mailto:sah...@cloudops.com>> het volgende
>> geschreven:
>>
>> Exposing this via an API would be tricky but it can definitely be
>> added as
>> a cluster-wide or a global setting in my opinion. By enabling that,
>> all the
>> instances would be using VirtIO SCSI. Is there a reason you'd want  
>> some
>> instances to use VirtIIO and others to use VirtIO SCSI?
>
> Even a global setting would be a bit of work and hacky as well.
>
> I do not see any reason to keep VirtIO, it os just that devices will be
> named sdX instead of vdX in the guest.

 To add, the Qemu wiki [0] says:

 "A virtio storage interface for efficient I/O that overcomes virtio-blk
 limitations and supports advanced SCSI hardware."

 At OpenStack [1] they also say:

 "It has been designed to replace virtio-blk, increase it's performance
 and improve scalability."

 So it seems that VirtIO is there to be removed. I'd say switch to VirtIO
 SCSI at version 5.X? :)

 Wido

 [0]: http://wiki.qemu.org/Features/VirtioSCSI
 [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi

> That might break existing Instances when not using labels or UUIDs in
> the Instance when mounting.
>
> Wido
>
>>> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller
>>> mailto:swel...@ena.com>> wrote:
>>>
>>> For the record, we've been looking into this as well.
>>> Has anyone tried it with Windows VMs before? The standard virtio
>>> driver
>>> doesn't support spanned disks and that

[GitHub] cloudstack issue #1953: CLOUDSTACK-9794: Unable to attach more than 14 devic...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1953
  
Trillian test result (tid-877)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 26001 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1953-t877-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 303.85 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 134.10 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 55.78 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 210.11 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 242.58 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 466.50 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 479.09 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1366.03 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 521.09 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 728.22 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1246.26 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.10 | test_volumes.py
test_08_resize_volume | Success | 156.06 | test_volumes.py
test_07_resize_fail | Success | 156.10 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.56 | test_volumes.py
test_04_delete_attached_volume | Success | 145.93 | test_volumes.py
test_03_download_attached_volume | Success | 155.98 | test_volumes.py
test_02_attach_volume | Success | 83.88 | test_volumes.py
test_01_create_volume | Success | 620.53 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.19 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.62 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 128.64 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 252.01 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.37 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.17 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 35.63 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.08 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 130.68 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.64 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.10 | test_vm_life_cycle.py
test_01_stop_vm | Success | 35.23 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.36 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.10 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.07 | test_templates.py
test_01_create_template | Success | 25.25 | test_templates.py
test_10_destroy_cpvm | Success | 161.28 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.21 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.19 | test_ssvm.py
test_07_reboot_ssvm | Success | 102.98 | test_ssvm.py
test_06_stop_cpvm | Success | 131.35 | test_ssvm.py
test_05_stop_ssvm | Success | 133.05 | test_ssvm.py
test_04_cpvm_internals | Success | 0.98 | test_ssvm.py
test_03_ssvm_internals | Success | 2.85 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.10 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.83 | test_snapshots.py
test_04_change_offering_small | Success | 204.33 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.08 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.09 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.13 | test_secondary_storage.py
test_09_reboot_router | Success | 35.23 | test_routers.py
test_08_start_router | Success | 25.19 | test_routers.py

[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread nvazquez
Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
@rafaelweingartner I think I got your point. I tried to keep the code as 
similar as it was before by declaring `rollBackState` as a static class variable 
on line 114. This way the inner `finally` block works the same as before when 
one of the new methods sets `rollBackState = true`. In the outer `finally` block, 
`rollBackState` is set back to false (line 345), so each time `deleteDomain` 
is invoked it starts out as false (maybe it would be easier to move that to the 
beginning of `deleteDomain`). Do you agree with this approach?
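
Roughly the shape I ended up with (a sketch with placeholder names; one thing 
we may want to keep in mind is that a static field is shared across concurrent 
`deleteDomain` calls):

```java
// Illustrative sketch of the approach described above (hypothetical names).
// Note: a static field is shared by all threads, so concurrent deleteDomain
// calls would see each other's flag; an instance or local flag would avoid that.
public class DeleteDomainFlagSketch {

    private static boolean rollBackState = false;   // line-114-style class variable

    public boolean deleteDomain(long domainId) {
        try {
            try {
                cleanupDomain(domainId);             // on failure sets rollBackState = true and throws
            } finally {
                if (rollBackState) {
                    rollbackDomainState(domainId);   // inner finally, as before the extraction
                }
            }
            return true;
        } finally {
            rollBackState = false;                   // outer finally resets it for the next call
        }
    }

    private void cleanupDomain(long id) { /* stub: would set rollBackState = true and throw on failure */ }
    private void rollbackDomainState(long id) { /* stub */ }
}
```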




[GitHub] cloudstack pull request #1910: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa closed the pull request at:

https://github.com/apache/cloudstack/pull/1910




[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@rafaelweingartner: This is due to a change in my remote URLs. Closing 
the old PR https://github.com/apache/cloudstack/issues/1910.




[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
@serg38 Not sure I understand what you mean by:
> If I get it right with your proposed changes, upgrade scripts become 
obsolete since all the changes can be done in upgrade scripts.

Did you mean: *If I get it right with your proposed changes, upgrade 
**cleanup** scripts become obsolete since all the changes can be done in 
upgrade scripts*?




[GitHub] cloudstack issue #1935: CLOUDSTACK-9764: Delete domain failure due to Accoun...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1935
  
Trillian test result (tid-876)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 35807 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1935-t876-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_network.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 864.13 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 320.45 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.52 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 61.11 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.72 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 287.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 545.04 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 512.25 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1414.74 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 548.99 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1297.58 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.41 | test_volumes.py
test_08_resize_volume | Success | 156.44 | test_volumes.py
test_07_resize_fail | Success | 156.52 | test_volumes.py
test_06_download_detached_volume | Success | 156.34 | test_volumes.py
test_05_detach_volume | Success | 155.91 | test_volumes.py
test_04_delete_attached_volume | Success | 151.44 | test_volumes.py
test_03_download_attached_volume | Success | 151.32 | test_volumes.py
test_02_attach_volume | Success | 95.17 | test_volumes.py
test_01_create_volume | Success | 711.28 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.17 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.78 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.76 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 247.75 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.04 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.25 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.94 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.84 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.87 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.46 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.16 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 40.43 | test_templates.py
test_10_destroy_cpvm | Success | 166.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.56 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.57 | test_ssvm.py
test_07_reboot_ssvm | Success | 163.59 | test_ssvm.py
test_06_stop_cpvm | Success | 132.19 | test_ssvm.py
test_05_stop_ssvm | Success | 164.02 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 3.42 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.11 | test_snapshots.py
test_04_change_offering_small | Success | 210.27 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.05 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 |

[GitHub] cloudstack pull request #1958: CLOUDSTACK-5806: add presetup to storage type...

2017-02-21 Thread abhinandanprateek
GitHub user abhinandanprateek opened a pull request:

https://github.com/apache/cloudstack/pull/1958

CLOUDSTACK-5806: add presetup to storage types that support over prov…

…isioning

Ideally this should be configurable via global settings

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack CLOUDSTACK-5806

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1958


commit 6ad3429085abf2943ff3183288b7f2e7e0165963
Author: Abhinandan Prateek 
Date:   2017-02-22T06:48:35Z

CLOUDSTACK-5806: add presetup to storage types that support over 
provisioning
Ideally this should be configurable via global settings




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1959: CLOUDSTACK-9786:API reference guide entry for...

2017-02-21 Thread Ashadeepa
GitHub user Ashadeepa opened a pull request:

https://github.com/apache/cloudstack/pull/1959

CLOUDSTACK-9786:API reference guide entry for associateIpAddress needs 
additional information

Going through the code and implementation, it seems that none of these 
parameters is individually required when calling the API: associateIpAddress.
There are 3 parameters this API can work with: 1) networkId 2) vpcId 3) 
zoneId. Any one of these can be provided to achieve the same functionality. If 
none of them is provided, an error text is shown.
E.g.
[root@CCP ~]# curl -s 
'http://10.66.43.37:8096/client/api?command=associateIpAddress&listall=true' | 
xmllint --format - -o


431
4350
Unable to figure out zone to assign ip to. Please specify either 
zoneId, or networkId, or vpcId in the call

Modify the API reference guide entry to include this detail in the “description”.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9786

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1959.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1959


commit 030d34dca89621965afa2043a78a165a21adc26e
Author: Ashadeepa Debnath 
Date:   2017-02-21T11:29:02Z

CLOUDSTACK-9786:API reference guide entry for associateIpAddress needs a fix




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
I'll try to make my point clearer with a better use case. Let's say you were 
running ACS version 4.4.2 and wish to upgrade to 4.7.1. After installing 4.7.1, 
when ACS starts for the first time it will execute the SQL scripts in this 
order (case A):
```
schema-442to450.sql       .---> schema-442to450-cleanup.sql
   |                      |        |
   v                      |        v
schema-450to451.sql       |     schema-450to451-cleanup.sql
   |                      |        |
   v                      |        v
schema-451to452.sql       |     schema-451to452-cleanup.sql
   |                      |        |
   v                      |        v
schema-452to460.sql       |     schema-452to460-cleanup.sql
   |                      |        |
   v                      |        v
schema-460to461.sql       |     schema-460to461-cleanup.sql
   |                      |        |
   v                      |        v
schema-461to470.sql       |     schema-461to470-cleanup.sql
   |                      |        |
   v                      |        v
schema-470to471.sql ------'     schema-470to471-cleanup.sql
```

But if you had upgraded to each version, one after the other, you would have 
run those scripts in this order (case B):
```
schema-442to450.sql -> schema-442to450-cleanup.sql
   |
   v
schema-450to451.sql -> schema-450to451-cleanup.sql
   |
   v
schema-451to452.sql -> schema-451to452-cleanup.sql
   |
   v
schema-452to460.sql -> schema-452to460-cleanup.sql
   |
   v
schema-460to461.sql -> schema-460to461-cleanup.sql
   |
   v
schema-461to470.sql -> schema-461to470-cleanup.sql
   |
   v
schema-470to471.sql -> schema-470to471-cleanup.sql
```

Since **case B** is what most developers expect when fixing bugs and making 
changes, but **case A** is the most common production upgrade path, I wanted 
to correct the algorithm so that everyone follows the same route (case B).
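
To make the difference concrete, here is a rough sketch (illustrative only, not 
the actual CloudStack upgrade code; hop names are shortened) of the two 
execution orders:
```java
import java.util.Arrays;
import java.util.List;

public class UpgradeOrderSketch {

    // Hypothetical upgrade hops; each hop has a schema script and a cleanup script.
    private static final List<String> HOPS =
            Arrays.asList("442to450", "450to451", "451to452");

    // Case A (multi-hop upgrade today): all schema scripts first, then all cleanups.
    static void caseA() {
        for (String hop : HOPS) {
            run("schema-" + hop + ".sql");
        }
        for (String hop : HOPS) {
            run("schema-" + hop + "-cleanup.sql");
        }
    }

    // Case B (step-by-step upgrades, and the ordering proposed here):
    // each hop's cleanup runs right after its schema script.
    static void caseB() {
        for (String hop : HOPS) {
            run("schema-" + hop + ".sql");
            run("schema-" + hop + "-cleanup.sql");
        }
    }

    private static void run(String script) {
        System.out.println("executing " + script);
    }

    public static void main(String[] args) {
        System.out.println("-- case A --");
        caseA();
        System.out.println("-- case B --");
        caseB();
    }
}
```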

Most `-cleanup.sql` scripts are either empty or only update the 
`configuration` table, so they are safe. There is only one potentially 
problematic script today: 
https://github.com/apache/cloudstack/blob/master/setup/db/db/schema-481to490-cleanup.sql
This one does change views, which IMO was a mistake to put in the cleanup 
script file; it should have gone into `schema-481to490.sql` (@rhtyd ?). 
Leaving the mechanism as it is today would leave people exposed to a possible 
bug when upgrading from any version prior to 4.9.0, *if* any future SQL script 
were to change the views modified inside `schema-481to490-cleanup.sql`, because 
of scenario case A. Did I lose anyone there?

Any comment @remibergsma @DaanHoogland @syed @nvazquez ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: apidocs build failure

2017-02-21 Thread Rohit Yadav
Will, any specific use case? For a live, running CloudStack, an apidocs plugin 
could also be written that displays apidocs for a role/user-account, consuming 
from listApis.


Regards.


From: williamstev...@gmail.com  on behalf of Will 
Stevens 
Sent: 21 February 2017 20:31:14
To: dev@cloudstack.apache.org
Subject: Re: apidocs build failure

Is there any chance we can fix the 'roles' issue with the API doc so we can
get the docs split into the 'Admin', 'Domain Admin' and 'User' again?  The
introduction of the dynamic roles broke the generation of the API docs with
the different roles and I was not able to figure out how to fix it.  Any
ideas for how to fix that?

*Will STEVENS*
Lead Developer



On Tue, Feb 21, 2017 at 3:01 AM, Daan Hoogland 
wrote:

> @Rajani @Rohit I missed this mail and fixed the apidoc on build.a.o
> yesterday. I can disable it or throw it away might we so wish
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
> Sent: 17 February 2017 10:27
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> Thanks Rajani, I've no objections.
>
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 17 February 2017 14:07:34
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> since travis is already verifying this, I asked infra to disable this job.
>
> Infra ticket https://issues.apache.org/jira/browse/INFRA-13527
>
> Please comment on the ticket if you think otherwise.
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On February 13, 2017 at 12:29 PM, Rohit Yadav
> (rohit.ya...@shapeblue.com) wrote:
>
> Jenkins need to have jdk8 available, someone need to setup jenv on it as
> well.
>
> (The first job in Travis does apidocs/marvin/rat related checks to
> validate changes and apidocs build).
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 09 February 2017 12:21:40
> To: dev@cloudstack.apache.org
> Subject: apidocs build failure
>
> Hi all,
>
> All the apidocs builds[1] are failing after the recent java 8 change. Can
> anyone having access fix it? Or should we talk to INFRA about it?
>
> Error message:
>
> [INFO]
> -
> [ERROR] COMPILATION ERROR : [INFO]
> -
> [ERROR] javac: invalid target release: 1.8 Usage: javac use -help for a
> list of possible options
>
> [1] https://builds.apache.org/job/cloudstack-apidocs-master/
>
> Thanks
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com ( http://www.shapeblue.com )
> 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue
>
>
>
>
>

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 



[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread ustcweizhou
Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
tested. LGTM

btw, you can use "git push --force" to overwrite the code


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


RE: apidocs build failure

2017-02-21 Thread Daan Hoogland
@Rajani @Rohit I missed this mail and fixed the apidoc on build.a.o yesterday. 
I can disable it or throw it away, should we so wish.

daan.hoogl...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
@shapeblue
  
 


-Original Message-
From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com] 
Sent: 17 February 2017 10:27
To: dev@cloudstack.apache.org
Subject: Re: apidocs build failure

Thanks Rajani, I've no objections.


Regards.


From: Rajani Karuturi 
Sent: 17 February 2017 14:07:34
To: dev@cloudstack.apache.org
Subject: Re: apidocs build failure

since travis is already verifying this, I asked infra to disable this job.

Infra ticket https://issues.apache.org/jira/browse/INFRA-13527

Please comment on the ticket if you think otherwise.

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On February 13, 2017 at 12:29 PM, Rohit Yadav
(rohit.ya...@shapeblue.com) wrote:

Jenkins need to have jdk8 available, someone need to setup jenv on it as well.

(The first job in Travis does apidocs/marvin/rat related checks to validate 
changes and apidocs build).

Regards.


From: Rajani Karuturi 
Sent: 09 February 2017 12:21:40
To: dev@cloudstack.apache.org
Subject: apidocs build failure

Hi all,

All the apidocs builds[1] are failing after the recent java 8 change. Can 
anyone having access fix it? Or should we talk to INFRA about it?

Error message:

[INFO]
-
[ERROR] COMPILATION ERROR : [INFO]
-
[ERROR] javac: invalid target release: 1.8 Usage: javac use -help for a list of 
possible options

[1] https://builds.apache.org/job/cloudstack-apidocs-master/

Thanks

~ Rajani

http://cloudplatform.accelerite.com/

rohit.ya...@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue

  
 



Re: [QUESTION] Upgrade path to JDK8

2017-02-21 Thread Marc-Aurèle Brothier
No, there isn't any issue apart from having the bugs & fixes of the JDK you're
using. You can compile it with JDK 1.8 as long as you don't change the
target bytecode version from 1.7.
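
To illustrate (a minimal sketch, not CloudStack code): with `-source 1.7 -target 1.7`
javac rejects the lambda below, and if the class is compiled with the default
Java 8 target, the resulting class file (major version 52) fails to load on a
Java 7 JVM with an UnsupportedClassVersionError.
```java
import java.util.Arrays;
import java.util.List;

public class TargetSketch {
    public static void main(String[] args) {
        List<String> vms = Arrays.asList("ssvm", "cpvm", "router");
        // Java 8 syntax and API (Iterable.forEach): fine on the management
        // server, but not on a systemvm that still runs a Java 7 JRE.
        vms.forEach(name -> System.out.println("starting " + name));
    }
}
```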

On Tue, Feb 21, 2017 at 8:15 AM, Wei ZHOU  wrote:

> Marco,
>
> Good point. Is there any issue if we compile code with jdk8 but run it on
> jdk7 (systemvm) ?
>
> -Wei
>
> 2017-02-21 7:43 GMT+01:00 Marc-Aurèle Brothier :
>
> > There's a list of compatibility issues between Java 7 & Java 8 here
> > http://www.oracle.com/technetwork/java/javase/8-
> > compatibility-guide-2156366.
> > html
> >
> > The main problem I would see in two system communicating while running
> > different Java version is the way they handle serialization and
> > de-serialization of objects which had been a problem in the past between
> > some Java versions. AFAIK we're using JSON for that now, so if the code
> > already compiles with Java8, it should not be a problem.
> >
> > On Mon, Feb 20, 2017 at 10:36 PM, Wei ZHOU 
> wrote:
> >
> > > We tested 4.7.1+systemd patches as well, it also works fine.
> > >
> > > -Wei
> > >
> > > 2017-02-20 22:34 GMT+01:00 Wei ZHOU :
> > >
> > > > @Will and @Syed, I build the packages of 4.9.2+systemd patches on
> > ubuntu
> > > > 16.04 (openjdk 8).
> > > > Then install the packages to management server and kvm hosts (all are
> > > > ubuntu 16.04 with openjdk8).
> > > > The systemvm template is 4.6 with openjdk7.
> > > >
> > > > cpvm and ssvm work fine.
> > > >
> > > > As there is no java process in VR, so I did not check, VR should not
> be
> > > > impacted.
> > > >
> > > > -Wei
> > > >
> > > > 2017-02-20 22:16 GMT+01:00 Pierre-Luc Dion :
> > > >
> > > >> That's quite interesting Chiradeep!
> > > >>
> > > >> so I could do something like this I guest:
> > > >>
> > > >> mvn clean install
> > > >>
> > > >> and then this one to build the systemvm.iso:
> > > >> mvn -Psystemvm -source 1.7 -target 1.7 install
> > > >>
> > > >>
> > > >> I'll give it a try! but for now, I'm worried about existing VR, they
> > > must
> > > >> continue to work while running on jdk7.  newer VPC would be ok to
> run
> > > with
> > > >> jdk8.  but we need to make sure while upgrading the
> management-server
> > we
> > > >> are not in the obligation to upgrade VR's.
> > > >>
> > > >> For sure it is required for strongswan + JDK8 to ugprade the VR,
> but a
> > > >>  existing VR should remain usable for port forwarding, vm creation
> and
> > > >> such...
> > > >>
> > > >> I'll post my finding...
> > > >>
> > > >> Thanks !
> > > >>
> > > >>
> > > >>
> > > >> On Mon, Feb 20, 2017 at 3:59 PM, Chiradeep Vittal <
> > chirade...@gmail.com
> > > >
> > > >> wrote:
> > > >>
> > > >> > You can build the system vm with  -source 1.7 -target 1.7
> > > >> > Also unless you are using Java8 features (lambda) the classfiles
> > > >> produced
> > > >> > by javac 8 should work in a 1.7 JVM
> > > >> >
> > > >> > Sent from my iPhone
> > > >> >
> > > >> > > On Feb 20, 2017, at 11:51 AM, Will Stevens <
> wstev...@cloudops.com
> > >
> > > >> > wrote:
> > > >> > >
> > > >> > > yes, that is what I was expecting.  which is why I was asking
> > about
> > > >> Wei's
> > > >> > > setup because he seems to have worked around that problem.  Or
> he
> > > has
> > > >> a
> > > >> > > custom SystemVM template running with both JDK7 and JDK8.
> > > >> > >
> > > >> > > *Will STEVENS*
> > > >> > > Lead Developer
> > > >> > >
> > > >> > > 
> > > >> > >
> > > >> > >> On Mon, Feb 20, 2017 at 2:20 PM, Syed Ahmed <
> sah...@cloudops.com
> > >
> > > >> > wrote:
> > > >> > >>
> > > >> > >> The problem is that systemvm.iso is built with java 8 whereas
> > java
> > > on
> > > >> > the
> > > >> > >> VR is java 7
> > > >> > >>> On Mon, Feb 20, 2017 at 13:20 Will Stevens <
> > wstev...@cloudops.com
> > > >
> > > >> > wrote:
> > > >> > >>>
> > > >> > >>> Did it work after resetting a VPC or when blowing away the
> SSVM
> > or
> > > >> > >> CPVM?  I
> > > >> > >>> would not expect the SSVM or the CPVM to come up if the
> > management
> > > >> > server
> > > >> > >>> was built with JDK8 and the system vm template is only using
> > JDK7.
> > > >> Can
> > > >> > >> you
> > > >> > >>> confirm?​
> > > >> > >>>
> > > >> > >>> *Will STEVENS*
> > > >> > >>> Lead Developer
> > > >> > >>>
> > > >> > >>> 
> > > >> > >>>
> > > >> >  On Mon, Feb 20, 2017 at 1:15 PM, Wei ZHOU <
> > ustcweiz...@gmail.com
> > > >
> > > >> > wrote:
> > > >> > 
> > > >> >  We've tested management server 4.7.1 with ubuntu
> 16.04/openjdk8
> > > and
> > > >> >  systemvm 4.6 with debian7/openjdk7.
> > > >> >  The systemvms (ssvm, cpvm) work fine.
> > > >> > 
> > > >> >  I agree we need consider the openjdk upgrade in systemvm
> > > template.
> > > >> > 
> > > >> >  -Wei
> > > >> > 
> > > >> >  2017-02-20 18:15 GMT+01:00 Will Stevens <
> wstev...@cloudops.com
> > >:
> > > >> > 
> > > >> > > Regarding my question. Is it because of the version of Java
> > 

[GitHub] cloudstack issue #1813: CLOUDSTACK-9604: Root disk resize support for VMware...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1813
  
Trillian test result (tid-864)
Environment: xenserver-65sp1 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43562 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1813-t864-xenserver-65sp1.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_templates.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_05_rvpc_multi_tiers | `Failure` | 550.32 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | `Failure` | 1345.46 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 558.28 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 698.24 | test_privategw_acl.py
test_04_extract_template | `Error` | 5.10 | test_templates.py
test_03_delete_template | `Error` | 5.09 | test_templates.py
test_01_create_template | `Error` | 40.46 | test_templates.py
test_01_vpc_site2site_vpn | Success | 331.32 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 166.78 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 620.10 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 325.26 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 710.06 | test_vpc_router_nics.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 899.40 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 1066.74 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 15.89 | test_volumes.py
test_08_resize_volume | Success | 105.96 | test_volumes.py
test_07_resize_fail | Success | 116.03 | test_volumes.py
test_06_download_detached_volume | Success | 20.46 | test_volumes.py
test_05_detach_volume | Success | 105.31 | test_volumes.py
test_04_delete_attached_volume | Success | 10.19 | test_volumes.py
test_03_download_attached_volume | Success | 15.27 | test_volumes.py
test_02_attach_volume | Success | 10.69 | test_volumes.py
test_01_create_volume | Success | 392.29 | test_volumes.py
test_change_service_offering_for_vm_with_snapshots | Success | 460.08 | test_vm_snapshots.py
test_03_delete_vm_snapshots | Success | 280.26 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 176.27 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 130.70 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 267.77 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 31.86 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 185.37 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 61.15 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.12 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.15 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 15.21 | test_vm_life_cycle.py
test_02_start_vm | Success | 25.27 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.29 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 85.72 | test_templates.py
test_08_list_system_templates | Success | 0.04 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.05 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_10_destroy_cpvm | Success | 196.68 | test_ssvm.py
test_09_destroy_ssvm | Success | 229.06 | test_ssvm.py
test_08_reboot_cpvm | Success | 121.67 | test_ssvm.py
test_07_reboot_ssvm | Success | 143.78 | test_ssvm.py
test_06_stop_cpvm | Success | 166.74 | test_ssvm.py
test_05_stop_ssvm | Success | 168.90 | test_ssvm.py
test_04_cpvm_internals | Success | 1.16 | test_ssvm.py
test_03_ssvm_internals | Success | 3.37 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.12 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.13 | test_ssvm.py
test_01_snapshot_root_disk | Success | 16.31 | test_snapshots.py
test_04_change_offering_small | Success | 119.06 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.09 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_s

[GitHub] cloudstack issue #1955: CLOUDSTACK-8239 Add VirtIO SCSI support for KVM host...

2017-02-21 Thread wido
Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/1955
  
Very nice indeed! I will take a look asap.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread priyankparihar
Github user priyankparihar commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
Hi @koushik-das  @rajesh-battala ,

>The default value of forced is false, might cause issue on backwards 
compatibility.

Should I make changes according to @serg38's and @ustcweizhou's suggestions? 
(Or does the current code look good to you? Please let me know.)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1872: CLOUDSTACK-3223 Exception observed while creating CP...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1872
  
Trillian test result (tid-860)
Environment: vmware-60u2 (x2), Advanced Networking with Mgmt server 7
Total time taken: 45192 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1872-t860-vmware-60u2.zip
Intermittent failure detected: /marvin/tests/smoke/test_internal_lb.py
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_volumes.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 45 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 767.70 | test_privategw_acl.py
test_01_vpc_privategw_acl | `Failure` | 101.04 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 70.69 | test_snapshots.py
test_02_list_snapshots_with_removed_data_store | `Error` | 75.76 | test_snapshots.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | `Error` | 951.13 | test_internal_lb.py
test_01_vpc_site2site_vpn | Success | 336.27 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 136.58 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 572.24 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 345.92 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 675.13 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 692.79 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1479.98 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 647.73 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 617.01 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1313.12 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 25.67 | test_volumes.py
test_06_download_detached_volume | Success | 40.42 | test_volumes.py
test_05_detach_volume | Success | 100.22 | test_volumes.py
test_04_delete_attached_volume | Success | 10.14 | test_volumes.py
test_03_download_attached_volume | Success | 15.20 | test_volumes.py
test_02_attach_volume | Success | 48.63 | test_volumes.py
test_01_create_volume | Success | 502.81 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.23 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 222.05 | test_vm_snapshots.py
test_01_test_vm_volume_snapshot | Success | 135.95 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 161.59 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 246.97 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 488.51 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.20 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 55.72 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 10.11 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 5.10 | test_vm_life_cycle.py
test_02_start_vm | Success | 15.14 | test_vm_life_cycle.py
test_01_stop_vm | Success | 5.08 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 206.14 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.03 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 10.16 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.12 | test_templates.py
test_01_create_template | Success | 95.58 | test_templates.py
test_10_destroy_cpvm | Success | 201.58 | test_ssvm.py
test_09_destroy_ssvm | Success | 268.35 | test_ssvm.py
test_08_reboot_cpvm | Success | 156.26 | test_ssvm.py
test_07_reboot_ssvm | Success | 158.18 | test_ssvm.py
test_06_stop_cpvm | Success | 171.47 | test_ssvm.py
test_05_stop_ssvm | Success | 178.49 | test_ssvm.py
test_04_cpvm_internals | Success | 0.96 | test_ssvm.py
test_03_ssvm_internals | Success | 3.19 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.09 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 61.15 | test_snapshot

[GitHub] cloudstack issue #1779: CLOUDSTACK-9610: Disabled Host Keeps Being up status...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1779
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-517


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #669: Made the adding new keyboard language support easier

2017-02-21 Thread anshul1886
Github user anshul1886 commented on the issue:

https://github.com/apache/cloudstack/pull/669
  
Added the missing license on one js file.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1949: Automated Cloudstack bugs 9277 9276 9275 9274 9273 9...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1949
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-518


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-519


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1949: Automated Cloudstack bugs 9277 9276 9275 9274 9273 9...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1949
  
@blueorangutan test


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1949: Automated Cloudstack bugs 9277 9276 9275 9274 9273 9...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1949
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has 
been kicked to run smoke tests


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
@blueorangutan test


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
Good strategy, and I would make that not a warning but a fatal error, as the
resulting ACS version will probably not work.

On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU  wrote:
> Then you have to create your own branch forked from 4.10.0
>
> In our branch, I moved some table changes (eg ALTER TABLE, CREATE TABLE)
> from schema-.sql
> to engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> If SQLException is throwed, then show a warning message instead upgrade
> interruption..
> By this way, the database will not be broken in the upgrade or fresh
> installation.
>
> -Wei
>
>
> 2017-02-14 11:52 GMT+01:00 Jeff Hair :
>
>> Hi all,
>>
>> Many people in the CS community maintain forks of CloudStack, and might
>> have implemented features or bug fixes long before they get into mainline.
>> I'm curious as to how people handle database migrations with their forks.
>> To make a DB migration, the CS version must be updated. If a developer adds
>> a migration to their fork on say, version 4.8.5. Later, they decide to
>> upgrade to 4.10.0 which has their migration in the schema upgrade to
>> 4.10.0.
>>
>> How do people handle this? As far as I know, CS will crash on the DB
>> upgrade due to SQL errors. Do people just sanitize migrations when they
>> pull from downstream or somehting?
>>
>> Jeff
>>



-- 
Daan


Re: [DISCUSS][FS] Host HA for CloudStack

2017-02-21 Thread Koushik Das
See inline.

Thanks,
Koushik

On 21/02/17, 11:47 AM, "Rohit Yadav"  wrote:

Hi Koushik,


Thanks for sharing your comments and questions.


1. Yes, the FS is divided into two parts - a general HA framework which makes 
no assumption about the type of resource and HA provider implementation that 
works on a type of resource/hypervisor/storage etc.

[Koushik] Hmm, the heading is misleading then. I would like to see the details 
of the generic HA framework that you are proposing for any resource type. Which 
resource types can/need to be HA'ed? Also, I would like to see a clear 
definition of “storage HA”, “network HA”, “any resource HA” etc. before going 
ahead with this generic framework. If this new framework ends up only doing 
Host/VM HA then there is no point in doing all this.

Specifically, with this feature we want to solve the problem of HA-ing a host 
reliably and use out-of-band management subsystem (i.e. ipmi based 
status/reboot/power-off to investigate/recover/fence the host) in the HA 
provider implementation. Yes, a host HA should trigger VM HA, i.e. for the host 
being fenced move HA VMs to other hosts. This also reliably solves the issue of 
disk corruption when same HA VMs get started on multiple hosts.

[Koushik] If host HA implies doing HA on all VMs running on a host, I am not 
clear why host HA is needed separately when VM HA is already 
available.

2. The old VM HA implementation makes a lot of assumptions about the type 
of resource (i.e. VM) it is HA-ing, it is tied to VM HA which is why HA for 
host could not be added in a straight forward way without regressions we could 
not test. With this new HA framework, it does not make any assumption around 
type of the resource and separates policy from mechanism, we also want to add 
deterministic tests (using marvin tests and a simulator based ha provider 
implementation) to demonstrate the generic HA functionality. In future with 
this framework, HA for various resources such as VM, storage, network can be 
added. As a first step we want to get the framework in, and support for Host as 
a resource type. We also want to reduce assumptions, or dependency as both VM 
HA and Host HA are related (sequence etc). The HAProvider interface would be 
something every hypervisor can implement.

[Koushik] Again, please justify why host HA is needed when VM HA is already 
there. If the question is about ease of writing automated tests, I have already 
written simulator-based tests for the existing VM HA. Please refer to 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Writing+tests+leveraging+the+simulator+enhancements
 for the test details.

3. While an existing (VM) HA framework exists, it was safer to write new 
code and demonstrate it works for any general HA resource than refactor and 
implement this in the old framework which could introduce serious regressions 
leading to production issues. For the most part, we've avoided to alter 
anything in the old HA framework while making sure that old (VM) HA works well 
with the new HA framework. The JIRA issue for the feature is in the FS.

[Koushik] As mentioned in a previous comment, please define which resources 
need to be HA'd and why. For example, there is RVR, which provides HA for the 
network services provided by the VR. Also, other network plugins may have 
native ways of achieving HA and may not need anything from a CS perspective. I 
wanted to make sure that all these points are accounted for before we proceed 
with a generic framework.


4. Any HA operation can be blocking in nature, one of the things included 
is a background polling manager that polls for changes, and a task/activity 
executor as out-of-band operations can take time. Therefore, all the 
health/activity/fencing/recovery operations have some timeout, limits and 
specific queues. The existing framework does not provide any abstraction to 
queue, restrict operation timeout, and tie them against a FSM. The existing 
framework also is hard to test, specifically to validate using integration 
test. We also wanted to avoid adding any regressions to existing/old VM HA. 
Lastly, the primary use of IPMI/out-of-band management in performing host-ha is 
not for investigation but for recovery (try a reboot), and fencing (power off).

[Koushik] A lot of the points you have raised here are not correct. There is 
already polling of all the hosts to find out VM state changes, and there are 
queues and time-outs in place for sending commands to hypervisors, etc. Have 
you evaluated the option of using IPMI in the existing KVM HA plugins?

 

Hope this answers your questions, please feel free add more comments and 
questions. Thanks.


Regards.



From: Koushik Das 
Sent: 20 February 2017 11:45
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS][FS] Host HA for CloudStack

Rohit,

Thank

Re: Handling of DB migrations on forks

2017-02-21 Thread Marc-Aurèle Brothier
IMO the database changes should be as idempotent as possible, using
"CREATE OR REPLACE VIEW ..." and "DROP ... IF EXISTS". For other things, like
altering a table, it's more complicated to achieve that in pure SQL.
A good call would be to integrate http://www.liquibase.org/ to manage the
schema and changes in a more descriptive way that allows branches/merges.
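
For illustration only (this is not CloudStack code, and the table/view names 
are made up), an upgrade step built out of such idempotent statements can 
safely be run twice, for example once on a fork and again when the change 
lands upstream:
```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class IdempotentUpgradeStep {

    // Illustrative statements only; a real script would carry the project's DDL.
    private static final String[] STATEMENTS = {
        "DROP VIEW IF EXISTS example_view",
        "CREATE OR REPLACE VIEW example_view AS SELECT id, name FROM example_table",
        "INSERT IGNORE INTO configuration (name, value) VALUES ('example.setting', 'true')"
    };

    // Harmless to re-run against an already-upgraded database.
    public void apply(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            for (String sql : STATEMENTS) {
                stmt.execute(sql);
            }
        }
    }
}
```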

On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
wrote:

> Good strategy and I would make that not a warning but a fatal, as the
> resulting ACS version will probably not work.
>
> On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU  wrote:
> > Then you have to create your own branch forked from 4.10.0
> >
> > In our branch, I moved some table changes (eg ALTER TABLE, CREATE TABLE)
> > from schema-.sql
> > to engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > If SQLException is throwed, then show a warning message instead upgrade
> > interruption..
> > By this way, the database will not be broken in the upgrade or fresh
> > installation.
> >
> > -Wei
> >
> >
> > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> >
> >> Hi all,
> >>
> >> Many people in the CS community maintain forks of CloudStack, and might
> >> have implemented features or bug fixes long before they get into
> mainline.
> >> I'm curious as to how people handle database migrations with their
> forks.
> >> To make a DB migration, the CS version must be updated. If a developer
> adds
> >> a migration to their fork on say, version 4.8.5. Later, they decide to
> >> upgrade to 4.10.0 which has their migration in the schema upgrade to
> >> 4.10.0.
> >>
> >> How do people handle this? As far as I know, CS will crash on the DB
> >> upgrade due to SQL errors. Do people just sanitize migrations when they
> >> pull from downstream or somehting?
> >>
> >> Jeff
> >>
>
>
>
> --
> Daan
>


RE: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
Marc-Aurele, you are totally right and people agree with you but no one seems 
to give this priority

daan.hoogl...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
@shapeblue
  
 


-Original Message-
From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch] 
Sent: 21 February 2017 10:04
To: dev@cloudstack.apache.org
Subject: Re: Handling of DB migrations on forks

IMO the database changes should be idempotent as much as possible with "CREATE 
OR REPLACE VIEW..." "DROP IF EXISTS". For other things like altering a 
table, it's more complicated to achieve that in pure SQL.
A good call would be to integrate http://www.liquibase.org/ to manage the 
schema and changes in a more descriptive way which allows branches/merges.

On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
wrote:

> Good strategy and I would make that not a warning but a fatal, as the 
> resulting ACS version will probably not work.
>
> On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU  wrote:
> > Then you have to create your own branch forked from 4.10.0
> >
> > In our branch, I moved some table changes (eg ALTER TABLE, CREATE 
> > TABLE) from schema-.sql to 
> > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > If SQLException is throwed, then show a warning message instead 
> > upgrade interruption..
> > By this way, the database will not be broken in the upgrade or fresh 
> > installation.
> >
> > -Wei
> >
> >
> > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> >
> >> Hi all,
> >>
> >> Many people in the CS community maintain forks of CloudStack, and 
> >> might have implemented features or bug fixes long before they get 
> >> into
> mainline.
> >> I'm curious as to how people handle database migrations with their
> forks.
> >> To make a DB migration, the CS version must be updated. If a 
> >> developer
> adds
> >> a migration to their fork on say, version 4.8.5. Later, they decide 
> >> to upgrade to 4.10.0 which has their migration in the schema 
> >> upgrade to 4.10.0.
> >>
> >> How do people handle this? As far as I know, CS will crash on the 
> >> DB upgrade due to SQL errors. Do people just sanitize migrations 
> >> when they pull from downstream or somehting?
> >>
> >> Jeff
> >>
>
>
>
> --
> Daan
>


Re: Handling of DB migrations on forks

2017-02-21 Thread Jeff Hair
Something like Liquibase would be nice. Hibernate might be even better.
Even idempotent migrations would be a step in the right direction. Perhaps
reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS, etc?

I'm not sure why the DAO was originally hand-rolled. Perhaps the original
developers didn't think any ORM on the market met their needs. I would love
to leave DB migrations almost entirely behind. I believe Hibernate is smart
enough to construct dynamic migrations when fields are added and removed
from tables, right?
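
To make the "reject non-idempotent statements" idea concrete, a purely 
hypothetical sketch (nothing like this exists in CloudStack today, and a real 
check would need an actual SQL parser rather than naive prefix matching):
```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class MigrationLintSketch {

    private static final List<String> IDEMPOTENT_PREFIXES = Arrays.asList(
            "insert ignore", "drop table if exists", "drop view if exists",
            "create or replace view", "create table if not exists");

    static boolean looksIdempotent(String statement) {
        String s = statement.trim().toLowerCase(Locale.ROOT);
        for (String prefix : IDEMPOTENT_PREFIXES) {
            if (s.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksIdempotent("DROP VIEW IF EXISTS user_view"));        // true
        System.out.println(looksIdempotent("ALTER TABLE vm_instance ADD foo int"));  // false
    }
}
```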

*Jeff Hair*
Technical Lead and Software Developer

Tel: (+354) 415 0200
j...@greenqloud.com
www.greenqloud.com

On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland 
wrote:

> Marc-Aurele, you are totally right and people agree with you but no one
> seems to give this priority
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Sent: 21 February 2017 10:04
> To: dev@cloudstack.apache.org
> Subject: Re: Handling of DB migrations on forks
>
> IMO the database changes should be idempotent as much as possible with
> "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> altering a table, it's more complicated to achieve that in pure SQL.
> A good call would be to integrate http://www.liquibase.org/ to manage the
> schema and changes in a more descriptive way which allows branches/merges.
>
> On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
> wrote:
>
> > Good strategy and I would make that not a warning but a fatal, as the
> > resulting ACS version will probably not work.
> >
> > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU 
> wrote:
> > > Then you have to create your own branch forked from 4.10.0
> > >
> > > In our branch, I moved some table changes (eg ALTER TABLE, CREATE
> > > TABLE) from schema-.sql to
> > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > If SQLException is throwed, then show a warning message instead
> > > upgrade interruption..
> > > By this way, the database will not be broken in the upgrade or fresh
> > > installation.
> > >
> > > -Wei
> > >
> > >
> > > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> > >
> > >> Hi all,
> > >>
> > >> Many people in the CS community maintain forks of CloudStack, and
> > >> might have implemented features or bug fixes long before they get
> > >> into
> > mainline.
> > >> I'm curious as to how people handle database migrations with their
> > forks.
> > >> To make a DB migration, the CS version must be updated. If a
> > >> developer
> > adds
> > >> a migration to their fork on say, version 4.8.5. Later, they decide
> > >> to upgrade to 4.10.0 which has their migration in the schema
> > >> upgrade to 4.10.0.
> > >>
> > >> How do people handle this? As far as I know, CS will crash on the
> > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > >> when they pull from downstream or somehting?
> > >>
> > >> Jeff
> > >>
> >
> >
> >
> > --
> > Daan
> >
>


RE: [PROPOSAL] add native vm-cluster orchestration service (was: [PROPOSAL] add native container orchestration service)

2017-02-21 Thread Kishan Kavala
Sure Daan. I'll publish the design on cwiki and share the link.

-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@shapeblue.com] 
Sent: Monday, February 20, 2017 7:27 PM
To: dev@cloudstack.apache.org
Subject: [PROPOSAL] add native vm-cluster orchestration service (was: 
[PROPOSAL] add native container orchestration service)

So, being very late in the discussion but having read the whole thread before 
editing the title of this thread:

Can we agree that we want a generic vm-cluster service and leave the container 
bits to containers? Kishan, can you share your design? ShapeBlue wants to rebase 
their k8s service on top of this, and I would like yours and Murali's work not 
to conflict.

daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands @shapeblue
  
 


-Original Message-
From: Paul Angus [mailto:paul.an...@shapeblue.com]
Sent: dinsdag 7 februari 2017 08:14
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] add native container orchestration service

Will is 100% correct. As I mentioned, the title is misleading. However, Murali 
did clarify in his explanation; this is really about VM cluster orchestration.




From: Will Stevens 
Sent: 6 Feb 2017 22:54
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] add native container orchestration service

​My understanding is that what Paul is talking about is what is already built 
and IS what the thread is talking about.​

*Will STEVENS*
Lead Developer



On Mon, Feb 6, 2017 at 2:29 PM, Rajesh Ramchandani < 
rajesh.ramchand...@accelerite.com> wrote:

> Hi Paul - I think this is different from what the thread was about. 
> The conversation was specifically about adding support for container 
> orchestrators. You are talking about provisioning a group of VMs.
> Although, this is something I think several Cloudstack users have 
> requested before and we should propose a solution to this.
>
> Raj
>
> 
> From: Paul Angus 
> Sent: Monday, February 6, 2017 11:16:41 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [PROPOSAL] add native container orchestration service
>
> #WhatHeSaid
>
> The title is misleading.  The proposal is purely to add the notion of 
> Clusters of VMs to CloudStack.  These may be for databases, containers 
> or anything else that needs 'clusters' of compute. Self-healing and 
> autoscaling are logical next steps to be added.
>
> Those guys at ShapeBlue have open-sourced their whole k8s container 
> service piece.  If/when the 'cluster' part of that work is added into 
> CloudStack, the k8s specific pieces can be used by anyone who wants 
> to, alternatively they could be used for reference in order to create 
> another types of cluster.  (or ignored completely).
>
>
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message-
> From: Will Stevens [mailto:williamstev...@gmail.com]
> Sent: 31 January 2017 13:26
> To: dev@cloudstack.apache.org
> Subject: Re: [PROPOSAL] add native container orchestration service
>
> s/cloud-init/cloud-config/
>
> On Jan 31, 2017 7:24 AM, "Will Stevens"  wrote:
>
> > I think that is covered in this proposal. There is nothing k8s 
> > specific in this integration (from what I understand), all the k8s 
> > details are passed in via the cloud-init configuration after the 
> > cluster
> has been provisioned.
> >
> > On Jan 31, 2017 3:06 AM, "Lianghwa Jou" 
> > 
> > wrote:
> >
> >
> > There are many container orchestrators. Those container 
> > orchestrators are happy to run on any VMs or bare metal machines. 
> > K8s is just one of them and there will be more in the future. It may 
> > not be a good idea to make CloudStack to be k8s aware. IMO, the 
> > relationship between k8s and cloudstack should be similar to 
> > application and os. It probably not a good idea to make your OS to 
> > be aware of any specific applications so IMHO I don’t think k8s should be 
> > native to CloudStack.
> > It makes more sense to provide some generic services that many 
> > applications can take advantages of. Some generic resource grouping 
> > service makes sense so a group of VMs, baremetal machines or network 
> > can be treated as a single entity. Some life cycle management will 
> > be necessary for these entities too. We can deploy k8s, swarm, dcos 
> > or even non-container specific services on top of CloudStack. The 
> > k8s is changing fast. One single tenant installation may need more 
> > than one VM group and may actually requires more (k8s federation). 
> > It will be a struggle to be in sync if we try to bring k8s specific 
> > knowledge into
> cloudstack.
> >
> > Regards,
> >
> >
> > --
> > Lianghwa Jou
> > VP Engineering, Accelerite
> > email: lianghwa@accelerite.com
> >
> >
> >
> >
> >
> > On 1/29/17, 11:54 PM, "Murali Re

[GitHub] cloudstack pull request #1771: CLOUDSTACK-9611: Dedicating a Guest VLAN rang...

2017-02-21 Thread nitin-maharana
Github user nitin-maharana commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1771#discussion_r102173991
  
--- Diff: server/src/com/cloud/network/NetworkServiceImpl.java ---
@@ -3085,9 +3085,10 @@ public GuestVlan dedicateGuestVlanRange(DedicateGuestVlanRangeCmd cmd) {
         // Verify account is valid
         Account vlanOwner = null;
         if (projectId != null) {
-            if (accountName != null) {
-                throw new InvalidParameterValueException("accountName and projectId are mutually exclusive");
-            }
+            //accountName and projectId are mutually exclusive
--- End diff --

@koushik-das @rajesh-battala : Do you agree with @ustcweizhou suggestion?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #859: Bug-ID: CLOUDSTACK-8882: calculate network offering u...

2017-02-21 Thread cloudmonger
Github user cloudmonger commented on the issue:

https://github.com/apache/cloudstack/pull/859
  
### ACS CI BVT Run
 **Summary:**
 Build Number 369
 Hypervisor xenserver
 NetworkType Advanced
 Passed=104
 Failed=1
 Skipped=7

_Link to logs Folder (search by build_no):_ 
https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0


**Failed tests:**
* test_routers_network_ops.py

 * test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failed


**Skipped tests:**
test_01_test_vm_volume_snapshot
test_vm_nic_adapter_vmxnet3
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm

**Passed test suits:**
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_deploy_vm_iso.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_volumes.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_vm_life_cycle.py
test_disk_offerings.py


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102187534
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
--- End diff --

How about adding unit tests for the method `getDeviceId(UserVmVO vm, Long 
deviceId)`?
Things I can currently think of to test:
- `RuntimeException` if param `deviceId` is specified as a negative value
- `RuntimeException` if param `deviceId` is specified as `0L`
- `RuntimeException` if param `deviceId` is specified as a value greater 
than the "max-device-id"
- `RuntimeException` if param `deviceId` is specified as reserved id `3L`
- `RuntimeException` if param `deviceId` is specified as an id that is 
already in use
- `RuntimeException` if param `deviceId` is specified as `null` and all 
device ids are in use
- returns id specified in param `deviceId` when not `null` and the id is 
not in use
- returns lowest available id when param `deviceId` is specified as `null`

(all of the above are from my understanding of how the method should behave)
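
A minimal JUnit/Mockito sketch of two of the cases above, assuming `getDeviceId` and 
`getMaxDataVolumesSupported` are relaxed to package-private (or invoked via reflection) 
and that the volume DAO can be injected; the wiring and names here are assumptions, not 
the actual test harness:
```
import static org.junit.Assert.assertEquals;

import java.util.Collections;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.Spy;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class VolumeApiServiceImplDeviceIdTest {

    @Spy
    @InjectMocks
    private VolumeApiServiceImpl volumeApiService;

    @Mock
    private VolumeDao volsDao;

    @Mock
    private UserVmVO vm;

    private void setUpVmWithNoVolumes() {
        Mockito.when(vm.getId()).thenReturn(1L);
        // Pretend the hypervisor/offering allows 6 data volumes and none are attached yet.
        Mockito.doReturn(6).when(volumeApiService).getMaxDataVolumesSupported(vm);
        Mockito.when(volsDao.findByInstance(1L)).thenReturn(Collections.<VolumeVO>emptyList());
    }

    @Test(expected = RuntimeException.class)
    public void rejectsReservedDeviceId3() {
        setUpVmWithNoVolumes();
        volumeApiService.getDeviceId(vm, 3L);
    }

    @Test
    public void returnsLowestFreeIdWhenNoneRequested() {
        setUpVmWithNoVolumes();
        assertEquals(Long.valueOf(1L), volumeApiService.getDeviceId(vm, null));
    }
}
```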


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102177792
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
--- End diff --

Is it enough to use max-data-volumes-supported as the max device id, seeing 
as device id 3 is not used? (or is reserved)

E.g. let's assume that `getMaxDataVolumesSupported(vm)` returns `6`.  What 
device ids should be ok in that case?  Would they be `1`, `2`, `4`, `5`, and 
`6` (since `maxDataVolumesSupported` is `6`) which is a total of `5` data 
volumes, or should `maxDataVolumesSupported` be renamed as `maxDeviceId` and be 
assigned a value of `getMaxDataVolumesSupported(vm) + 1` to account for the 
unused/reserved id `3`?
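
A rough sketch of the rename suggested above; variable names are illustrative, it assumes 
id 3 stays reserved, and it is not the actual patch:
```
// maxDataVolumesSupported counts data volumes; since id 3 is skipped,
// the highest usable device id is one larger than that count.
int maxDeviceId = getMaxDataVolumesSupported(vm) + 1;
if (deviceId != null && (deviceId.longValue() > maxDeviceId || deviceId.longValue() == 3)) {
    throw new RuntimeException("deviceId should be 1,2,4-" + maxDeviceId);
}
```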


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102181285
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
 devIds.add(String.valueOf(i));
 }
 devIds.remove("3");
--- End diff --

Not part of your changes but... there's a possible `NoSuchElementException` 
below, for line:
```
deviceId = Long.parseLong(devIds.iterator().next());
```
A check should be added for whether `devIds` is empty and, if so, throw a 
`RuntimeException` with an error message which specifies that all possible 
device ids are already in use by the vm (if only to be consistent with the rest 
of the method).
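
A minimal sketch of that guard (the error wording is illustrative):
```
if (devIds.isEmpty()) {
    // Fail with a clear message instead of a NoSuchElementException from the iterator.
    throw new RuntimeException("all available device ids are already in use by vm " + vm.getId());
}
deviceId = Long.parseLong(devIds.iterator().next());
```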


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102179249
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
--- End diff --

There's no check for whether `deviceId` is greater than `0`.  Should check 
for that here, as well.  Otherwise, the method will accept `0` and negative 
values as valid device ids.
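
A minimal sketch of the extended check (illustrative only):
```
// Reject 0 and negative values in addition to the upper bound and the reserved id 3.
if (deviceId.longValue() < 1 || deviceId.longValue() > maxDataVolumesSupported || deviceId.longValue() == 3) {
    throw new RuntimeException("deviceId should be 1,2,4-" + maxDataVolumesSupported);
}
```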


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102178630
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
-for (int i = 1; i < 15; i++) {
+for (int i = 1; i < maxDataVolumesSupported; i++) {
--- End diff --

Since `maxDataVolumesSupported` is basically being used to indicate the 
"max-device-id", the conditional should be `i <= maxDataVolumesSupported`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1953: CLOUDSTACK-9794: Unable to attach more than 1...

2017-02-21 Thread HrWiggles
Github user HrWiggles commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1953#discussion_r102184780
  
--- Diff: server/src/com/cloud/storage/VolumeApiServiceImpl.java ---
@@ -2639,22 +2639,23 @@ private int getMaxDataVolumesSupported(UserVmVO vm) 
{
 return maxDataVolumesSupported.intValue();
 }
 
-private Long getDeviceId(long vmId, Long deviceId) {
+private Long getDeviceId(UserVmVO vm, Long deviceId) {
 // allocate deviceId
-List vols = _volsDao.findByInstance(vmId);
+int maxDataVolumesSupported = getMaxDataVolumesSupported(vm);
+List vols = _volsDao.findByInstance(vm.getId());
 if (deviceId != null) {
-if (deviceId.longValue() > 15 || deviceId.longValue() == 3) {
-throw new RuntimeException("deviceId should be 1,2,4-15");
+if (deviceId.longValue() > maxDataVolumesSupported || 
deviceId.longValue() == 3) {
+throw new RuntimeException("deviceId should be 1,2,4-" + 
maxDataVolumesSupported);
 }
 for (VolumeVO vol : vols) {
 if (vol.getDeviceId().equals(deviceId)) {
-throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vmId);
+throw new RuntimeException("deviceId " + deviceId + " 
is used by vm" + vm.getId());
 }
 }
 } else {
 // allocate deviceId here
 List devIds = new ArrayList();
--- End diff --

Not part of your changes but... variable `devIds` should have type 
`List<Long>` instead of `List<String>`.
All that conversion from `int` to `String` and then converting from 
`String` to `long` seems unnecessary.
Should simply be able to do:
```
List<Long> devIds = new ArrayList<>();
for (long i = 1; i <= maxDataVolumesSupported; i++) {
    devIds.add(i);
}
devIds.remove(3L);
for (VolumeVO vol : vols) {
    devIds.remove(vol.getDeviceId());
}
if (devIds.isEmpty()) {
    throw new RuntimeException("every available deviceId already in use by vm " + vm.getId());
}
deviceId = devIds.iterator().next();
```
Note: my code above includes fixes to two other comments I made further 
down in the code.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1897: CLOUDSTACK-9733: Concurrent volume snapshots of a VM...

2017-02-21 Thread ramkatru
Github user ramkatru commented on the issue:

https://github.com/apache/cloudstack/pull/1897
  
@sureshanaparti, please look into these failures.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1882: CLOUDSTACK-8737: Removed the missed out-of-ba...

2017-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1882


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1881: CLOUDSTACK-9721: Remove deprecated/unused glo...

2017-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1881


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1824: CLOUDSTACK-9657: Fixed security group ipset i...

2017-02-21 Thread jayapalu
Github user jayapalu commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1824#discussion_r102192868
  
--- Diff: scripts/vm/hypervisor/xenserver/vmops ---
@@ -232,28 +233,50 @@ def deleteFile(session, args):
 
 return txt
 
+#using all the iptables chain names length to 24 because cleanup_rules 
groups the vm chain excluding -def,-eg
+#to avoid multiple iptables chains for single vm there using length 24
 def chain_name(vm_name):
 if vm_name.startswith('i-') or vm_name.startswith('r-'):
 if vm_name.endswith('untagged'):
 return '-'.join(vm_name.split('-')[:-1])
 if len(vm_name) > 28:
--- End diff --

Updated it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #859: Bug-ID: CLOUDSTACK-8882: calculate network offering u...

2017-02-21 Thread DaanHoogland
Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/859
  
@kishankavala i forgot about this one, guess we won't make 4.7 ;)

can you appease cloudmonger?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcorded wait timeou...

2017-02-21 Thread sateesh-chodapuneedi
Github user sateesh-chodapuneedi commented on the issue:

https://github.com/apache/cloudstack/pull/1861
  
@borisstoyanov, thanks for running the tests.

I see 1 test error in the above results; this test has been failing in many 
other PRs as well and doesn't seem related to the changes here.

`2017-02-21 00:56:54,625 - CRITICAL - FAILED: 
test_04_rvpc_privategw_static_routes: ['Traceback (most recent call last):\n', 
'  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run\n
testMethod()\n', '  File "/marvin/tests/smoke/test_privategw_acl.py", line 295, 
in test_04_rvpc_privategw_static_routes\nself.performVPCTests(vpc_off)\n', 
'  File "/marvin/tests/smoke/test_privategw_acl.py", line 362, in 
performVPCTests\nself.check_pvt_gw_connectivity(vm1, public_ip_1, 
[vm2.nic[0].ipaddress, vm1.nic[0].ipaddress])\n', '  File 
"/marvin/tests/smoke/test_privategw_acl.py", line 724, in 
check_pvt_gw_connectivity\n"Ping to VM on Network Tier N from VM in Network 
Tier A should be successful at least for 2 out of 3 VMs"\n', '  File 
"/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue\nraise 
self.failureException(msg)\n', 'AssertionError: Ping to VM on Network Tier N 
from VM in Network Tier A should be successful at least for 2 out of 3 VMs\n']
`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1957: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread Ashadeepa
GitHub user Ashadeepa opened a pull request:

https://github.com/apache/cloudstack/pull/1957

CLOUDSTACK-9748:VPN Users search functionality broken

VPN Users search functionality broken
If you try to search VPN users by their user name, the search does not 
work.

Fixed the same.

Parent PR : https://github.com/apache/cloudstack/pull/1910

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1957.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1957


commit 588ececd045c9175b33647375fd702e3e37f2126
Author: root 
Date:   2017-01-17T18:09:17Z

CLOUDSTACK-9748:VPN Users search functionality broken




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1910: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1910
  
@ustcweizhou : Thanks. I have made the changes.

New PR : https://github.com/apache/cloudstack/pull/1957. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Attaching more than 14 data volumes to an instance

2017-02-21 Thread Friðvin Logi Oddbjörnsson
On 18 February 2017 at 20:51:42, Suresh Anaparti (
suresh.anapa...@accelerite.com) wrote:

I checked the limits set for the VMware hypervisor and observed some
discrepancies. These can be updated either from the
updateHypervisorCapabilities API (max_data_volumes_limit,
max_hosts_per_cluster after improvements) or via a schema update during
upgrade. Which one would be better? For a schema update, I have to raise
a PR.


If these are hard limits for the hypervisors, then I’m more inclined that
they be immutable (i.e. to not allow changing them through the API) and,
therefore, only updated through a schema update.  However, if these are not
hard limits for the hypervisors (or if there are some valid reasons for
allowing these limits to be easily updated), then having them updatable
through the API would make sense.


Friðvin Logi Oddbjörnsson

Senior Developer

Tel: (+354) 415 0200 | frid...@greenqloud.com

Mobile: (+354) 696 6528 | PGP Key: 57CA1B00


Twitter: @greenqloud  | @qstackcloud


www.greenqloud.com | www.qstack.com



[GitHub] cloudstack pull request #1957: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread ustcweizhou
Github user ustcweizhou commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1957#discussion_r102209429
  
--- Diff: server/src/com/cloud/network/vpn/RemoteAccessVpnManagerImpl.java 
---
@@ -621,6 +627,10 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 sc.setParameters("username", username);
 }
 
+if (keyword!= null) {
--- End diff --

it seems line 630 to 633 are not needed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1773: CLOUDSTACK-9607: Preventing template deletion when t...

2017-02-21 Thread jburwell
Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1773
  
@priyankparihar I agree with @ustcweizhou regarding the default value of 
`forced` in terms of backwards compatibility.

Also, why do we permit deletion of a template when it is associated with one 
or more active volumes?  It seems like we are giving the user the means to 
corrupt their system.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [QUESTION] Upgrade path to JDK8

2017-02-21 Thread Ron Wheeler


http://stackoverflow.com/questions/10895969/can-newer-jre-versions-run-java-programs-compiled-with-older-jdk-versions
You can run code compiled by the java 1.7 or 1.6 or earlier SDKs on a 
Java 8 JVM.


This gets you the improved speed of the Java 8 JVM even if you do not 
rebuild the code.


If this was not true, life would be chaos when you upgraded your Java on 
a production server.

All of the code that ran a few minutes ago would fail.

Think about how much java is running on a typical data centre. You would 
have heard the howls of pain if all that code suddenly stopped running.


It should be easy to test the existing jars compiled with an earlier 
version of Java on a machine running the Java 8 JVM.

Just replace the Java and restart the server.
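
If in doubt about what a given jar was built for, the class-file version is easy to 
inspect. A small self-contained sketch (the argument is whatever .class file you extract 
from the jar; 51 means Java 7, 52 means Java 8):
```
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            int magic = in.readInt();            // 0xCAFEBABE for a valid class file
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();  // 51 = Java 7, 52 = Java 8
            System.out.printf("magic=%08x major=%d minor=%d%n", magic, major, minor);
        }
    }
}
```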

A reasonable migration path is to replace the JVM and continue to run 
existing code.

Upgrade the code at your leisure.

An application can be constructed from Jars from different SDKs.
I had no trouble with the dozens of Apache and third party libraries 
that make up my application when I changed my compiler to Java 8.
One minute I was compiling and testing with Java 7 and the next minute I 
was compiling with Java 8 and the code still worked with all of the same 
third party jars.


No source code changes were required in any code to upgrade.
Since then, I have incorporated  Java 8 features into most of our code 
but that is not really part of this discussion.


I hope that this helps.

Ron

On 21/02/2017 3:03 AM, Marc-Aurèle Brothier wrote:

No, there isn't any issue except having the bugs & fixes of the JDK you're
using. You can compile it with a JDK 1.8 as long as you don't change the
target bytecode version from 1.7.

On Tue, Feb 21, 2017 at 8:15 AM, Wei ZHOU  wrote:


Marco,

Good point. Is there any issue if we compile code with jdk8 but run it on
jdk7 (systemvm) ?

-Wei

2017-02-21 7:43 GMT+01:00 Marc-Aurèle Brothier :


There's a list of compatibility issues between Java 7 & Java 8 here
http://www.oracle.com/technetwork/java/javase/8-
compatibility-guide-2156366.
html

The main problem I would see with two systems communicating while running
different Java versions is the way they handle serialization and
de-serialization of objects, which had been a problem in the past between
some Java versions. AFAIK we're using JSON for that now, so if the code
already compiles with Java 8, it should not be a problem.

On Mon, Feb 20, 2017 at 10:36 PM, Wei ZHOU 

wrote:

We tested 4.7.1+systemd patches as well, it also works fine.

-Wei

2017-02-20 22:34 GMT+01:00 Wei ZHOU :


@Will and @Syed, I built the packages of 4.9.2+systemd patches on
ubuntu 16.04 (openjdk 8).
Then installed the packages on the management server and kvm hosts (all are
ubuntu 16.04 with openjdk8).
The systemvm template is 4.6 with openjdk7.

cpvm and ssvm work fine.

As there is no java process in the VR, I did not check it; the VR should not
be impacted.

-Wei

2017-02-20 22:16 GMT+01:00 Pierre-Luc Dion :


That's quite interesting Chiradeep!

so I could do something like this I guess:

mvn clean install

and then this one to build the systemvm.iso:
mvn -Psystemvm -source 1.7 -target 1.7 install


I'll give it a try! but for now, I'm worried about existing VRs: they must
continue to work while running on jdk7.  newer VPCs would be ok to run with
jdk8.  but we need to make sure that while upgrading the management-server
we are not obliged to upgrade the VRs.

For sure it is required to upgrade the VR for strongswan + JDK8, but an
existing VR should remain usable for port forwarding, vm creation and
such...

I'll post my findings...

Thanks !



On Mon, Feb 20, 2017 at 3:59 PM, Chiradeep Vittal <chirade...@gmail.com> wrote:

You can build the system vm with  -source 1.7 -target 1.7
Also unless you are using Java8 features (lambda) the classfiles produced
by javac 8 should work in a 1.7 JVM

Sent from my iPhone


On Feb 20, 2017, at 11:51 AM, Will Stevens <wstev...@cloudops.com> wrote:

yes, that is what I was expecting.  which is why I was asking about Wei's
setup because he seems to have worked around that problem.  Or he has a
custom SystemVM template running with both JDK7 and JDK8.

*Will STEVENS*
Lead Developer




On Mon, Feb 20, 2017 at 2:20 PM, Syed Ahmed <sah...@cloudops.com> wrote:

The problem is that systemvm.iso is built with java 8 whereas java on the
VR is java 7

On Mon, Feb 20, 2017 at 13:20 Will Stevens <wstev...@cloudops.com> wrote:

Did it work after resetting a VPC or when blowing away the SSVM or CPVM?
I would not expect the SSVM or the CPVM to come up if the management
server was built with JDK8 and the system vm template is only using JDK7.
Can you confirm?

*Will STEVENS*
Lead Developer




On Mon, Feb 20, 2017 at 1:15 PM, Wei ZHOU <ustcweiz...@gmail.com> wrote:

We've tested management server 4.7.1 with ubuntu 16.04/openjdk8 and
systemvm 4.6 with debian7/openjdk7.
The systemvms

[GitHub] cloudstack issue #1768: CLOUDSTACK 9601: Upgrade: change logic for update pa...

2017-02-21 Thread marcaurele
Github user marcaurele commented on the issue:

https://github.com/apache/cloudstack/pull/1768
  
@rhtyd Changing the sequence is only idempotent if you have been upgrading 
to each new version step by step. If you have skipped versions, then the path 
is different for each original version. This fix aims to avoid such a 
difference.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Handling of DB migrations on forks

2017-02-21 Thread Marc-Aurèle Brothier
Jeff, I do wonder the same thing about the ORM... I have hit the ORM limitation
in many places now, not being able to do joins on the same table,
specifically for the capacity check, and you can see many hand-made
SQL queries in that part of the code. I think the views came into the picture
for the same reason.

Daan, the project maintainers should enforce that. I also posted another
finding that the upgrade paths are not identical due to the order in which
upgrade files are executed, see
(https://github.com/apache/cloudstack/pull/1768)

On Tue, Feb 21, 2017 at 10:31 AM, Jeff Hair  wrote:

> Something like Liquibase would be nice. Hibernate might be even better.
> Even idempotent migrations would be a step in the right direction. Perhaps
> reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS,
> etc?
>
> I'm not sure why the DAO was originally hand-rolled. Perhaps the original
> developers didn't think any ORM on the market met their needs. I would love
> to leave DB migrations almost entirely behind. I believe Hibernate is smart
> enough to construct dynamic migrations when fields are added and removed
> from tables, right?
>
> *Jeff Hair*
> Technical Lead and Software Developer
>
> Tel: (+354) 415 0200
> j...@greenqloud.com
> www.greenqloud.com
>
> On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland <
> daan.hoogl...@shapeblue.com>
> wrote:
>
> > Marc-Aurele, you are totally right and people agree with you but no one
> > seems to give this priority
> >
> > daan.hoogl...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> > Sent: 21 February 2017 10:04
> > To: dev@cloudstack.apache.org
> > Subject: Re: Handling of DB migrations on forks
> >
> > IMO the database changes should be idempotent as much as possible with
> > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> > altering a table, it's more complicated to achieve that in pure SQL.
> > A good call would be to integrate http://www.liquibase.org/ to manage
> the
> > schema and changes in a more descriptive way which allows
> branches/merges.
> >
> > On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
> > wrote:
> >
> > > Good strategy and I would make that not a warning but a fatal, as the
> > > resulting ACS version will probably not work.
> > >
> > > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU 
> > wrote:
> > > > Then you have to create your own branch forked from 4.10.0
> > > >
> > > > In our branch, I moved some table changes (eg ALTER TABLE, CREATE
> > > > TABLE) from schema-.sql to
> > > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > > If an SQLException is thrown, then show a warning message instead of
> > > > interrupting the upgrade.
> > > > This way, the database will not be broken in the upgrade or fresh
> > > > installation.
> > > >
> > > > -Wei
> > > >
> > > >
> > > > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> > > >
> > > >> Hi all,
> > > >>
> > > >> Many people in the CS community maintain forks of CloudStack, and
> > > >> might have implemented features or bug fixes long before they get
> > > >> into
> > > mainline.
> > > >> I'm curious as to how people handle database migrations with their
> > > forks.
> > > >> To make a DB migration, the CS version must be updated. If a
> > > >> developer
> > > adds
> > > >> a migration to their fork on say, version 4.8.5. Later, they decide
> > > >> to upgrade to 4.10.0 which has their migration in the schema
> > > >> upgrade to 4.10.0.
> > > >>
> > > >> How do people handle this? As far as I know, CS will crash on the
> > > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > > >> when they pull from downstream or something?
> > > >>
> > > >> Jeff
> > > >>
> > >
> > >
> > >
> > > --
> > > Daan
> > >
> >
>


[GitHub] cloudstack issue #1957: CLOUDSTACK-9748:VPN Users search functionality broke...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on the issue:

https://github.com/apache/cloudstack/pull/1957
  
@ustcweizhou : My bad. Amended the changes. Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1957: CLOUDSTACK-9748:VPN Users search functionalit...

2017-02-21 Thread Ashadeepa
Github user Ashadeepa commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1957#discussion_r10411
  
--- Diff: server/src/com/cloud/network/vpn/RemoteAccessVpnManagerImpl.java 
---
@@ -621,6 +627,10 @@ public void 
doInTransactionWithoutResult(TransactionStatus status) {
 sc.setParameters("username", username);
 }
 
+if (keyword!= null) {
--- End diff --

@ustcweizhou : My bad. Amended the changes. Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
@serbaut I agree with @ustcweizhou.  Please remove delaycompress and up to 
10.  I'd like to get this PR in as it is the second part of the problem 
resolution for my issue.  After that LGTM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1955: CLOUDSTACK-8239 Add VirtIO SCSI support for KVM host...

2017-02-21 Thread dmabry
Github user dmabry commented on the issue:

https://github.com/apache/cloudstack/pull/1955
  
We are deploying this to our QA environment right now and hope to have it 
tested in a few days.  Great work @kiwiflyer and @nathanejohnson.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: apidocs build failure

2017-02-21 Thread Will Stevens
Is there any chance we can fix the 'roles' issue with the API doc so we can
get the docs split into the 'Admin', 'Domain Admin' and 'User' again?  The
introduction of the dynamic roles broke the generation of the API docs with
the different roles and I was not able to figure out how to fix it.  Any
ideas for how to fix that?

*Will STEVENS*
Lead Developer



On Tue, Feb 21, 2017 at 3:01 AM, Daan Hoogland 
wrote:

> @Rajani @Rohit I missed this mail and fixed the apidoc on build.a.o
> yesterday. I can disable it or throw it away might we so wish
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
> Sent: 17 February 2017 10:27
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> Thanks Rajani, I've no objections.
>
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 17 February 2017 14:07:34
> To: dev@cloudstack.apache.org
> Subject: Re: apidocs build failure
>
> since travis is already verifying this, I asked infra to disable this job.
>
> Infra ticket https://issues.apache.org/jira/browse/INFRA-13527
>
> Please comment on the ticket if you think otherwise.
>
> Thanks,
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On February 13, 2017 at 12:29 PM, Rohit Yadav
> (rohit.ya...@shapeblue.com) wrote:
>
> Jenkins needs to have jdk8 available; someone needs to set up jenv on it as
> well.
>
> (The first job in Travis does apidocs/marvin/rat related checks to
> validate changes and apidocs build).
>
> Regards.
>
> 
> From: Rajani Karuturi 
> Sent: 09 February 2017 12:21:40
> To: dev@cloudstack.apache.org
> Subject: apidocs build failure
>
> Hi all,
>
> All the apidocs builds[1] are failing after the recent java 8 change. Can
> anyone having access fix it? Or should we talk to INFRA about it?
>
> Error message:
>
> [INFO]
> -
> [ERROR] COMPILATION ERROR : [INFO]
> -
> [ERROR] javac: invalid target release: 1.8 Usage: javac use -help for a
> list of possible options
>
> [1] https://builds.apache.org/job/cloudstack-apidocs-master/
>
> Thanks
>
> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com ( http://www.shapeblue.com )
> 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>


Re: Handling of DB migrations on forks

2017-02-21 Thread Frank Maximus
I'm also in favor of Liquibase.
Hibernate might be smart enough to add new columns,
but to, for example, change the type of a column,
more complex data migration might be necessary.
Liquibase preconditions, combined with good identifiers,
ensure that upgrades will be more granular.


*Frank Maximus *
Senior Software Development Engineer
*nuage*networks.net 
p: (+32) 3 240 73 81


On Tue, Feb 21, 2017 at 10:31 AM Jeff Hair  wrote:

Something like Liquibase would be nice. Hibernate might be even better.
Even idempotent migrations would be a step in the right direction. Perhaps
reject any migration changes that aren't INSERT IGNORE, DROP IF EXISTS, etc?

I'm not sure why the DAO was originally hand-rolled. Perhaps the original
developers didn't think any ORM on the market met their needs. I would love
to leave DB migrations almost entirely behind. I believe Hibernate is smart
enough to construct dynamic migrations when fields are added and removed
from tables, right?

*Jeff Hair*
Technical Lead and Software Developer

Tel: (+354) 415 0200
j...@greenqloud.com
www.greenqloud.com

On Tue, Feb 21, 2017 at 9:27 AM, Daan Hoogland 
wrote:

> Marc-Aurele, you are totally right and people agree with you but no one
> seems to give this priority
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, Utrecht Utrecht 3531 VENetherlands
> @shapeblue
>
>
>
>
> -Original Message-
> From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
> Sent: 21 February 2017 10:04
> To: dev@cloudstack.apache.org
> Subject: Re: Handling of DB migrations on forks
>
> IMO the database changes should be idempotent as much as possible with
> "CREATE OR REPLACE VIEW..." "DROP IF EXISTS". For other things like
> altering a table, it's more complicated to achieve that in pure SQL.
> A good call would be to integrate http://www.liquibase.org/ to manage the
> schema and changes in a more descriptive way which allows branches/merges.
>
> On Tue, Feb 21, 2017 at 9:46 AM, Daan Hoogland 
> wrote:
>
> > Good strategy and I would make that not a warning but a fatal, as the
> > resulting ACS version will probably not work.
> >
> > On Tue, Feb 14, 2017 at 12:16 PM, Wei ZHOU 
> wrote:
> > > Then you have to create your own branch forked from 4.10.0
> > >
> > > In our branch, I moved some table changes (eg ALTER TABLE, CREATE
> > > TABLE) from schema-.sql to
> > > engine/schema/src/com/cloud/upgrade/dao/UpgradeXXXtoYYY.java.
> > > If an SQLException is thrown, then show a warning message instead of
> > > interrupting the upgrade.
> > > This way, the database will not be broken in the upgrade or fresh
> > > installation.
> > >
> > > -Wei
> > >
> > >
> > > 2017-02-14 11:52 GMT+01:00 Jeff Hair :
> > >
> > >> Hi all,
> > >>
> > >> Many people in the CS community maintain forks of CloudStack, and
> > >> might have implemented features or bug fixes long before they get
> > >> into
> > mainline.
> > >> I'm curious as to how people handle database migrations with their
> > forks.
> > >> To make a DB migration, the CS version must be updated. If a
> > >> developer
> > adds
> > >> a migration to their fork on say, version 4.8.5. Later, they decide
> > >> to upgrade to 4.10.0 which has their migration in the schema
> > >> upgrade to 4.10.0.
> > >>
> > >> How do people handle this? As far as I know, CS will crash on the
> > >> DB upgrade due to SQL errors. Do people just sanitize migrations
> > >> when they pull from downstream or something?
> > >>
> > >> Jeff
> > >>
> >
> >
> >
> > --
> > Daan
> >
>


[GitHub] cloudstack pull request #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatchi...

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1878#discussion_r102226833
  
--- Diff: 
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
 ---
@@ -2071,6 +2120,14 @@ protected StartAnswer execute(StartCommand cmd) {
 }
 }
 
+private void replaceNicsMacSequenceInBootArgs(String oldMacSequence, 
String newMacSequence, VirtualMachineTO vmSpec) {
+String bootArgs = vmSpec.getBootArgs();
+if (!StringUtils.isEmpty(bootArgs) && 
!StringUtils.isEmpty(oldMacSequence) && !StringUtils.isEmpty(newMacSequence)) {
+//Update boot args with the new nic mac addresses
--- End diff --

What about moving this comment to the method documentation?
Also, how do you feel about test cases? The method is pretty simple and it 
will not be hard to write some unit tests for it.
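
For example, a unit test could stub the boot args and verify that they are rewritten with 
the new MAC sequence. A rough sketch, assuming the method's visibility is relaxed for 
testing, that `VirtualMachineTO` exposes `getBootArgs`/`setBootArgs`, and that the 
`vmwareResource` field and the boot-arg format shown are purely illustrative:
```
@Test
public void replaceNicsMacSequenceInBootArgsRewritesOldMacs() {
    VirtualMachineTO vmSpec = Mockito.mock(VirtualMachineTO.class);
    Mockito.when(vmSpec.getBootArgs()).thenReturn("nic_macs=02:00:00:00:00:01|02:00:00:00:00:02");

    vmwareResource.replaceNicsMacSequenceInBootArgs(
            "02:00:00:00:00:01|02:00:00:00:00:02",   // old MAC sequence
            "02:00:00:00:00:0a|02:00:00:00:00:0b",   // new MAC sequence
            vmSpec);

    // Expect the boot args to now carry the new MAC sequence.
    Mockito.verify(vmSpec).setBootArgs("nic_macs=02:00:00:00:00:0a|02:00:00:00:00:0b");
}
```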



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread serbaut
Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Is it safe to remove delaycompress across the board, I assume it is there 
for a reason?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread leprechau
Github user leprechau commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
We always want `compress` ... but the only time you need or want 
`delaycompress` is if you can't be sure that the program writing to the log can 
be successfully told to stop appending to that log.  In the case where you are 
certain that the writing program is going to do the right thing there is no 
need to add `delaycompress` as it just takes up extra space in already rotated 
logs until the next iteration.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1915: CLOUDSTACK-9746 system-vm: logrotate config causes c...

2017-02-21 Thread serbaut
Github user serbaut commented on the issue:

https://github.com/apache/cloudstack/pull/1915
  
Ok. I removed it from rsyslog since it should be safe there.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier  wrote:
>
> Daan, the project maintainers should enforce that. I also posted another
> finding that the upgrade path are not identical due to the order in which
> upgrade files are executed, see (https://github.com/apache/
> cloudstack/pull/1768)

If you mean refuse PRs containing non-idempotent sql code, yes, but as
for real work it is all on a voluntary basis, that is, someone must
find it worth the time to encode it. I completely agree with a policy to
refuse PRs containing creates and drops other than as in

> > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".

So please feel free to speak up if you catch somebody trying to sneak
in code like that. They have my -1

-- 
Daan


Re: Handling of DB migrations on forks

2017-02-21 Thread Rafael Weingärtner
I think this might be others' doubt as well. Sorry if it seems a naïve/silly
question.

By idempotent, I understand something on which you can perform as many operations
as you like and it does not change (broadly speaking). For example, (in
theory), when you do an HTTP GET request, the response would always be the
same, and the state of the resource should not change.

Now, regarding SQLs; this SQL for instance “CREATE TABLE XXX…..” is
idempotent for you (as far as I understood reading this thread), right?

And something like “CREATE or REPLACE TABLE XXX…..” would be
non-idempotent. Did I understand it right?

On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland 
wrote:

> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier 
> wrote:
> >
> > Daan, the project maintainers should enforce that. I also posted another
> > finding that the upgrade path are not identical due to the order in which
> > upgrade files are executed, see (https://github.com/apache/
> > cloudstack/pull/1768)
>
> If you mean refuse PRs containing non-idem-potent sql code yes, but as
> for real work it is all on a voluntary basis, that is someone must
> find it worth the time to encode it. I complete agree with a policy to
> refuse comntaining other creates and drop then as in
>
> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
>
> So please feel free to speak up if you catch somebody trying to sneak
> in code like that. They have my -1
>
> --
> Daan
>



-- 
Rafael Weingärtner


Re: Handling of DB migrations on forks

2017-02-21 Thread Daan Hoogland
No Rafael, the other way around.

The first time you call it, it might change things, but no matter how
often you call it, it will have changed in exactly the same way.
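
For instance, an upgrade step written with idempotent DDL can be re-run safely. A 
hypothetical sketch in the spirit of the UpgradeXXXtoYYY.java approach mentioned earlier 
in this thread; the class, table and view names are made up for illustration:
```
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class UpgradeExampleStep {
    public void performDataMigration(Connection conn) {
        try (Statement stmt = conn.createStatement()) {
            // Re-running these statements leaves the schema in exactly the same state.
            stmt.execute("CREATE TABLE IF NOT EXISTS example_feature (id BIGINT PRIMARY KEY, name VARCHAR(64))");
            stmt.execute("DROP VIEW IF EXISTS example_feature_view");
            stmt.execute("CREATE VIEW example_feature_view AS SELECT id, name FROM example_feature");
        } catch (SQLException e) {
            // Warn instead of aborting the whole upgrade, as suggested earlier in the thread.
            System.err.println("Skipping schema change: " + e.getMessage());
        }
    }
}
```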

On Tue, Feb 21, 2017 at 5:41 PM, Rafael Weingärtner
 wrote:
> I think this might be others doubt as well. Sorry if it seems a naïve/silly
> question.
>
> By idempotent, I understand something that you can make as many operations
> as possible and it does not change (broadly speaking). For example, (in
> theory), when you do an HTTP Get requests, the response would be always the
> same, and the state of the resource should not change.
>
> Now, regarding SQLs; this SQL for instance “CREATE TABLE XXX…..” is
> idempotent for you (as far as I understood reading this thread), right?
>
> And something like “CREATE or REPLACE TABLE XXX…..” would be
> non-idempotent. Did I understand it right?
>
> On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland 
> wrote:
>
>> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier 
>> wrote:
>> >
>> > Daan, the project maintainers should enforce that. I also posted another
>> > finding that the upgrade path are not identical due to the order in which
>> > upgrade files are executed, see (https://github.com/apache/
>> > cloudstack/pull/1768)
>>
>> If you mean refuse PRs containing non-idem-potent sql code yes, but as
>> for real work it is all on a voluntary basis, that is someone must
>> find it worth the time to encode it. I complete agree with a policy to
>> refuse comntaining other creates and drop then as in
>>
>> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
>>
>> So please feel free to speak up if you catch somebody trying to sneak
>> in code like that. They have my -1
>>
>> --
>> Daan
>>
>
>
>
> --
> Rafael Weingärtner



-- 
Daan


Re: Handling of DB migrations on forks

2017-02-21 Thread Rafael Weingärtner
Ah, great.

Thanks for clarifying that for me.

On Tue, Feb 21, 2017 at 11:44 AM, Daan Hoogland 
wrote:

> No Rafael the other way around,
>
> The first time you call it it might change things, but no matter how
> often you call it it will have changed in exactly the same way.
>
> On Tue, Feb 21, 2017 at 5:41 PM, Rafael Weingärtner
>  wrote:
> > I think this might be others doubt as well. Sorry if it seems a
> naïve/silly
> > question.
> >
> > By idempotent, I understand something that you can make as many
> operations
> > as possible and it does not change (broadly speaking). For example, (in
> > theory), when you do an HTTP Get requests, the response would be always
> the
> > same, and the state of the resource should not change.
> >
> > Now, regarding SQLs; this SQL for instance “CREATE TABLE XXX…..” is
> > idempotent for you (as far as I understood reading this thread), right?
> >
> > And something like “CREATE or REPLACE TABLE XXX…..” would be
> > non-idempotent. Did I understand it right?
> >
> > On Tue, Feb 21, 2017 at 11:28 AM, Daan Hoogland  >
> > wrote:
> >
> >> On Tue, Feb 21, 2017 at 3:19 PM, Marc-Aurèle Brothier <
> ma...@exoscale.ch>
> >> wrote:
> >> >
> >> > Daan, the project maintainers should enforce that. I also posted
> another
> >> > finding that the upgrade path are not identical due to the order in
> which
> >> > upgrade files are executed, see (https://github.com/apache/
> >> > cloudstack/pull/1768)
> >>
> >> If you mean refuse PRs containing non-idem-potent sql code yes, but as
> >> for real work it is all on a voluntary basis, that is someone must
> >> find it worth the time to encode it. I complete agree with a policy to
> >> refuse comntaining other creates and drop then as in
> >>
> >> > > "CREATE OR REPLACE VIEW..." "DROP IF EXISTS".
> >>
> >> So please feel free to speak up if you catch somebody trying to sneak
> >> in code like that. They have my -1
> >>
> >> --
> >> Daan
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
>
>
>
> --
> Daan
>



-- 
Rafael Weingärtner


[GitHub] cloudstack issue #1948: [CLOUDSTACK-9793] Faster IP in subnet check

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
Trillian test result (tid-870)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33543 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1948-t870-kvm-centos7.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_redundant_VPC_default_routes | `Failure` | 874.34 | 
test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 379.02 
| test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 330.77 | 
test_privategw_acl.py
test_01_vpc_site2site_vpn | Success | 160.11 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 66.25 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 250.97 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 298.25 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 526.31 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 506.37 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1402.83 | 
test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | 
Success | 566.05 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.61 | test_volumes.py
test_08_resize_volume | Success | 156.45 | test_volumes.py
test_07_resize_fail | Success | 161.63 | test_volumes.py
test_06_download_detached_volume | Success | 156.41 | test_volumes.py
test_05_detach_volume | Success | 156.70 | test_volumes.py
test_04_delete_attached_volume | Success | 151.31 | test_volumes.py
test_03_download_attached_volume | Success | 151.39 | test_volumes.py
test_02_attach_volume | Success | 96.19 | test_volumes.py
test_01_create_volume | Success | 717.60 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.60 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.63 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 163.89 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 248.34 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.72 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.27 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 30.86 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.98 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.17 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.40 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.55 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.08 | test_templates.py
test_04_extract_template | Success | 5.17 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.18 | test_templates.py
test_01_create_template | Success | 45.46 | test_templates.py
test_10_destroy_cpvm | Success | 191.69 | test_ssvm.py
test_09_destroy_ssvm | Success | 133.65 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.59 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.57 | test_ssvm.py
test_06_stop_cpvm | Success | 131.89 | test_ssvm.py
test_05_stop_ssvm | Success | 163.73 | test_ssvm.py
test_04_cpvm_internals | Success | 1.22 | test_ssvm.py
test_03_ssvm_internals | Success | 4.20 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.34 | test_snapshots.py
test_04_change_offering_small | Success | 210.37 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.06 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.07 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.13 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.20 | test_secondary_storage.py
test_09_reboot_router | Success | 35.30 | test_routers.py
test_08_start_router | Success | 30.29 | test_routers.py
test_07_stop_router | Success | 10.17 | test_routers.py
test_06_router_a

[GitHub] cloudstack issue #1949: Automated Cloudstack bugs 9277 9276 9275 9274 9273 9...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1949
  
Trillian test result (tid-872)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 29754 seconds
Marvin logs: 
https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1949-t872-kvm-centos7.zip
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Intermitten failure detected: /marvin/tests/smoke/test_vpc_vpn.py
Test completed. 47 look ok, 2 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_04_rvpc_privategw_static_routes | `Failure` | 310.79 | 
test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.04 | 
test_snapshots.py
test_01_vpc_site2site_vpn | Success | 160.32 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 56.14 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 255.75 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 287.17 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 515.32 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 503.85 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1426.01 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 532.65 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 729.61 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1264.82 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.54 | test_volumes.py
test_08_resize_volume | Success | 151.48 | test_volumes.py
test_07_resize_fail | Success | 156.49 | test_volumes.py
test_06_download_detached_volume | Success | 151.31 | test_volumes.py
test_05_detach_volume | Success | 150.79 | test_volumes.py
test_04_delete_attached_volume | Success | 151.51 | test_volumes.py
test_03_download_attached_volume | Success | 156.34 | test_volumes.py
test_02_attach_volume | Success | 89.22 | test_volumes.py
test_01_create_volume | Success | 621.29 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.22 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 100.77 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 159.75 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 263.39 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.94 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.21 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 36.01 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.15 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.86 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.88 | test_vm_life_cycle.py
test_02_start_vm | Success | 5.14 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.33 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 40.45 | test_templates.py
test_08_list_system_templates | Success | 0.03 | test_templates.py
test_07_list_public_templates | Success | 0.06 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.15 | test_templates.py
test_03_delete_template | Success | 5.11 | test_templates.py
test_02_edit_template | Success | 90.15 | test_templates.py
test_01_create_template | Success | 50.48 | test_templates.py
test_10_destroy_cpvm | Success | 161.71 | test_ssvm.py
test_09_destroy_ssvm | Success | 163.42 | test_ssvm.py
test_08_reboot_cpvm | Success | 101.34 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.60 | test_ssvm.py
test_06_stop_cpvm | Success | 136.55 | test_ssvm.py
test_05_stop_ssvm | Success | 133.78 | test_ssvm.py
test_04_cpvm_internals | Success | 0.99 | test_ssvm.py
test_03_ssvm_internals | Success | 3.46 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.15 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.12 | test_snapshots.py
test_04_change_offering_small | Success | 239.66 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.15 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
test_09_reboot_router | Success | 35.33 | test_routers.py

[GitHub] cloudstack issue #1948: [CLOUDSTACK-9793] Faster IP in subnet check

2017-02-21 Thread rafaelweingartner
Github user rafaelweingartner commented on the issue:

https://github.com/apache/cloudstack/pull/1948
  
@ProjectMoon may I ask you a question?
The "net" object is already an array/map, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
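
For context on the kind of check CLOUDSTACK-9793 targets, below is a minimal sketch of an IPv4 in-subnet test done with integer masking rather than per-address lookups. It is illustrative only and assumes nothing about the PR's actual "net" object; the class and method names (SubnetCheck, ipInSubnet, toInt) are hypothetical.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class SubnetCheck {

        // Convert a dotted-quad IPv4 address to its 32-bit integer value.
        static int toInt(String ip) throws UnknownHostException {
            byte[] b = InetAddress.getByName(ip).getAddress();
            return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
                 | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
        }

        // True if ip falls inside cidr, e.g. ipInSubnet("10.1.1.20", "10.1.1.0/24").
        static boolean ipInSubnet(String ip, String cidr) throws UnknownHostException {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int mask = (prefix == 0) ? 0 : (-1 << (32 - prefix));
            return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
        }
    }

The masking comparison is constant time per address, which is the usual motivation for replacing list or map scans in a hot path.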


[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265150
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkIds.size() + " non-removed networks";
-} else if (hasDedicatedResources) {
-msg = "dedicated resources.";
+//don't delete the domain if there are accounts set 
for cleanup, or non-removed networks 

[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102265306
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
--- End diff --

Done, thanks @rafaelweingartner


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
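
The diff quoted above (and its twin in the adjacent comments) wraps deleteDomain in CloudStack's GlobalLock, but the tail of the quoted diff is cut off, so the release path is not shown. Below is a minimal sketch of the acquire/release pattern, assuming com.cloud.utils.db.GlobalLock with a finally-based release; the helper name and the releaseRef() calls are assumptions for illustration, not the PR's literal code.

    import com.cloud.utils.db.GlobalLock;

    // Hypothetical helper illustrating the pattern; not the actual DomainManagerImpl code.
    public class DomainDeleteLockSketch {

        public boolean runUnderAccountCleanupLock(Runnable deleteWork) {
            GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
            if (lock == null) {
                return false;                 // could not obtain a lock handle
            }
            if (!lock.lock(30)) {             // wait up to 30 seconds, as in the diff
                lock.releaseRef();            // assumed cleanup when the lock is not acquired
                return false;
            }
            try {
                deleteWork.run();             // mark the domain inactive and delete it
                return true;
            } finally {
                lock.unlock();                // assumed release once the critical section ends
                lock.releaseRef();
            }
        }
    }

Using a named intern lock serializes domain deletion with the account cleanup job, which appears to be the intent behind choosing the "AccountCleanup" key.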


[GitHub] cloudstack pull request #1935: CLOUDSTACK-9764: Delete domain failure due to...

2017-02-21 Thread nvazquez
Github user nvazquez commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1935#discussion_r102264841
  
--- Diff: server/src/com/cloud/user/DomainManagerImpl.java ---
@@ -273,79 +274,97 @@ public boolean deleteDomain(long domainId, Boolean 
cleanup) {
 
 @Override
 public boolean deleteDomain(DomainVO domain, Boolean cleanup) {
-// mark domain as inactive
-s_logger.debug("Marking domain id=" + domain.getId() + " as " + 
Domain.State.Inactive + " before actually deleting it");
-domain.setState(Domain.State.Inactive);
-_domainDao.update(domain.getId(), domain);
-boolean rollBackState = false;
-boolean hasDedicatedResources = false;
+GlobalLock lock = GlobalLock.getInternLock("AccountCleanup");
+if (lock == null) {
+s_logger.debug("Couldn't get the global lock");
+return false;
+}
+
+if (!lock.lock(30)) {
+s_logger.debug("Couldn't lock the db");
+return false;
+}
 
 try {
-long ownerId = domain.getAccountId();
-if ((cleanup != null) && cleanup.booleanValue()) {
-if (!cleanupDomain(domain.getId(), ownerId)) {
-rollBackState = true;
-CloudRuntimeException e =
-new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
-domain.getId() + ").");
-e.addProxyObject(domain.getUuid(), "domainId");
-throw e;
-}
-} else {
-//don't delete the domain if there are accounts set for 
cleanup, or non-removed networks exist, or domain has dedicated resources
-List networkIds = 
_networkDomainDao.listNetworkIdsByDomain(domain.getId());
-List accountsForCleanup = 
_accountDao.findCleanupsForRemovedAccounts(domain.getId());
-List dedicatedResources = 
_dedicatedDao.listByDomainId(domain.getId());
-if (dedicatedResources != null && 
!dedicatedResources.isEmpty()) {
-s_logger.error("There are dedicated resources for the 
domain " + domain.getId());
-hasDedicatedResources = true;
-}
-if (accountsForCleanup.isEmpty() && networkIds.isEmpty() 
&& !hasDedicatedResources) {
-_messageBus.publish(_name, 
MESSAGE_PRE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
-if (!_domainDao.remove(domain.getId())) {
+// mark domain as inactive
+s_logger.debug("Marking domain id=" + domain.getId() + " as " 
+ Domain.State.Inactive + " before actually deleting it");
+domain.setState(Domain.State.Inactive);
+_domainDao.update(domain.getId(), domain);
+boolean rollBackState = false;
+boolean hasDedicatedResources = false;
+
+try {
+long ownerId = domain.getAccountId();
+if ((cleanup != null) && cleanup.booleanValue()) {
+if (!cleanupDomain(domain.getId(), ownerId)) {
 rollBackState = true;
 CloudRuntimeException e =
-new CloudRuntimeException("Delete failed on 
domain " + domain.getName() + " (id: " + domain.getId() +
-"); Please make sure all users and sub 
domains have been removed from the domain before deleting");
+new CloudRuntimeException("Failed to clean up 
domain resources and sub domains, delete failed on domain " + domain.getName() 
+ " (id: " +
+domain.getId() + ").");
 e.addProxyObject(domain.getUuid(), "domainId");
 throw e;
 }
-_messageBus.publish(_name, 
MESSAGE_REMOVE_DOMAIN_EVENT, PublishScope.LOCAL, domain);
 } else {
-rollBackState = true;
-String msg = null;
-if (!accountsForCleanup.isEmpty()) {
-msg = accountsForCleanup.size() + " accounts to 
cleanup";
-} else if (!networkIds.isEmpty()) {
-msg = networkIds.size() + " non-removed networks";
-} else if (hasDedicatedResources) {
-msg = "dedicated resources.";
+//don't delete the domain if there are accounts set 
for cleanup, or non-removed networks 

[GitHub] cloudstack issue #1878: CLOUDSTACK-9717: [VMware] RVRs have mismatching MAC ...

2017-02-21 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/1878
  
Trillian test result (tid-873)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 33766 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1878-t873-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
Intermittent failure detected: /marvin/tests/smoke/test_vm_life_cycle.py
Intermittent failure detected: /marvin/tests/smoke/test_vpc_redundant.py
Test completed. 46 look ok, 3 have error(s)


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | `Failure` | 362.57 | test_vpc_redundant.py
test_04_rvpc_privategw_static_routes | `Failure` | 334.37 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | `Error` | 0.03 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 159.36 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 65.83 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 239.78 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 274.04 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 531.49 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 509.60 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1410.18 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 536.46 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 747.15 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 151.49 | test_volumes.py
test_08_resize_volume | Success | 156.71 | test_volumes.py
test_07_resize_fail | Success | 156.11 | test_volumes.py
test_06_download_detached_volume | Success | 155.99 | test_volumes.py
test_05_detach_volume | Success | 150.63 | test_volumes.py
test_04_delete_attached_volume | Success | 150.93 | test_volumes.py
test_03_download_attached_volume | Success | 156.02 | test_volumes.py
test_02_attach_volume | Success | 95.59 | test_volumes.py
test_01_create_volume | Success | 711.06 | test_volumes.py
test_03_delete_vm_snapshots | Success | 275.18 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Success | 95.70 | test_vm_snapshots.py
test_01_create_vm_snapshots | Success | 158.66 | test_vm_snapshots.py
test_deploy_vm_multiple | Success | 267.19 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.02 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 26.64 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.11 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 40.67 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.06 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.92 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 126.20 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.13 | test_vm_life_cycle.py
test_01_stop_vm | Success | 40.26 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 60.49 | test_templates.py
test_08_list_system_templates | Success | 0.02 | test_templates.py
test_07_list_public_templates | Success | 0.02 | test_templates.py
test_05_template_permissions | Success | 0.04 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.08 | test_templates.py
test_02_edit_template | Success | 90.16 | test_templates.py
test_01_create_template | Success | 30.29 | test_templates.py
test_10_destroy_cpvm | Success | 131.42 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.49 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.51 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.41 | test_ssvm.py
test_06_stop_cpvm | Success | 131.56 | test_ssvm.py
test_05_stop_ssvm | Success | 133.54 | test_ssvm.py
test_04_cpvm_internals | Success | 1.14 | test_ssvm.py
test_03_ssvm_internals | Success | 3.29 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.08 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.09 | test_ssvm.py
test_01_snapshot_root_disk | Success | 10.94 | test_snapshots.py
test_04_change_offering_small | Success | 239.55 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.03 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.04 | test_service_offerings.py
test_01_create_service_offering | Success | 0.07 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.12 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0
