Re: Review Request: Cloudstack-2621 [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C

2013-06-11 Thread Koushik Das

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11435/#review21700
---



server/src/com/cloud/configuration/ConfigurationManagerImpl.java


why not pass the 'caller' parameter to handleIpAliasDelete?



server/src/com/cloud/configuration/ConfigurationManagerImpl.java


why is both rollback and commit done?


- Koushik Das


On June 11, 2013, 6:38 a.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11435/
> ---
> 
> (Updated June 11, 2013, 6:38 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C
> https://issues.apache.org/jira/browse/CLOUDSTACK-2621
> 
> 
> This addresses bug Cloudstack-2621.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11435/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



RE: Build failed in Jenkins: cloudstack-rat-master #1468

2013-06-11 Thread Hugo Trippaers
This is local to jenkins.

It uses tag to keep track of the changes between the different runs of the 
build. It is not pushing these tags anywhere, they stay in the git environment 
in the workspace.

This looks like a new slave was not yet configured with the global identity in 
the Jenkins configuration.

Cheers,

Hugo

> -Original Message-
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> Sent: dinsdag 11 juni 2013 7:49
> To: dev@cloudstack.apache.org
> Subject: RE: Build failed in Jenkins: cloudstack-rat-master #1468
> 
> Seeing this again, shouldn't jenkins just pull and compile?
> > -Original Message-
> > From: David Nalley [mailto:da...@gnsa.us]
> > Sent: Thursday, June 06, 2013 6:06 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Build failed in Jenkins: cloudstack-rat-master #1468
> >
> > Why is jenkins trying to create a tag in our repo?
> >
> > --David
> >
> > On Thu, Jun 6, 2013 at 9:00 AM, Apache Jenkins Server
> >  wrote:
> > > See 
> > >
> > > --
> > > Started by an SCM change
> > > Building remotely on ubuntu2 in workspace
> > > 
> > > Checkout:cloudstack-rat-master /
> > >  -
> > > hudson.remoting.Channel@9907404:ubuntu2
> > > Using strategy: Default
> > > Last Built Revision: Revision
> > > d98289baca7fbc8a793adadfa386e6ab234952f7
> > > (origin/master) Fetching changes from 1 remote Git repository
> > > Fetching upstream changes from
> > > https://git-wip-us.apache.org/repos/asf/cloudstack.git
> > > Commencing build of Revision
> > > c0d894346a57e61626f332a9ef25efa9b5e77646
> > > (origin/master) Checking out Revision
> > > c0d894346a57e61626f332a9ef25efa9b5e77646 (origin/master)
> > > FATAL: Could not apply tag jenkins-cloudstack-rat-master-1468
> > > hudson.plugins.git.GitException: Could not apply tag jenkins-
> > cloudstack-rat-master-1468
> > > at hudson.plugins.git.GitAPI.tag(GitAPI.java:829)
> > > at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1270)
> > > at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1231)
> > > at
> > hudson.FilePath$FileCallableWrapper.call(FilePath.java:2348)
> > > at hudson.remoting.UserRequest.perform(UserRequest.java:118)
> > > at hudson.remoting.UserRequest.perform(UserRequest.java:48)
> > > at hudson.remoting.Request$2.run(Request.java:326)
> > > at
> > hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecuto
> > rS
> > ervice.java:72)
> > > at
> > java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> > > at
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.j
> > av
> > a:1146)
> > > at
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
> > ja
> > va:615)
> > > at java.lang.Thread.run(Thread.java:679)
> > > Caused by: hudson.plugins.git.GitException: Command "git tag -a -f
> > > -m
> > Jenkins Build #1468 jenkins-cloudstack-rat-master-1468" returned
> > status code 128:
> > > stdout:
> > > stderr:
> > > *** Please tell me who you are.
> > >
> > > Run
> > >
> > >   git config --global user.email "y...@example.com"
> > >   git config --global user.name "Your Name"
> > >
> > > to set your account's default identity.
> > > Omit --global to set the identity only in this repository.
> > >
> > > fatal: empty ident   not allowed
> > >
> > > at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:897)
> > > at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:858)
> > > at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:868)
> > > at hudson.plugins.git.GitAPI.tag(GitAPI.java:827)
> > > ... 12 more


Re: Review Request: Cloudstack-2621 [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C

2013-06-11 Thread bharat kumar


> On June 11, 2013, 8:42 a.m., Koushik Das wrote:
> > server/src/com/cloud/configuration/ConfigurationManagerImpl.java, line 2967
> > 
> >
> > why not pass the 'caller' parameter to handleIpAliasDelete?

I am not using the caller in the handleIpAliasDeletion


- bharat


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11435/#review21700
---


On June 11, 2013, 6:38 a.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11435/
> ---
> 
> (Updated June 11, 2013, 6:38 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C
> https://issues.apache.org/jira/browse/CLOUDSTACK-2621
> 
> 
> This addresses bug Cloudstack-2621.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11435/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



git pull fails since 10 june

2013-06-11 Thread Daan Hoogland
LS,

Both in eclipse and on the command line I get the following error:

$ git pull
error: The requested URL returned error: 502 while accessing 
https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
fatal: HTTP request failed

Any clues anyone?

Regards
Daan Hoogland


PCI-Passthrough with CloudStack

2013-06-11 Thread Pawit Pornkitprasan
Hi,

I am implementing PCI-Passthrough to use with CloudStack for use with
high-performance networking (10 Gigabit Ethernet/Infiniband).

The current design is to attach a PCI ID (from lspci) to a compute
offering. (Not a network offering since from CloudStack’s point of view,
the pass through device has nothing to do with network and may as well be
used for other things.) A host tag can be used to limit deployment to
machines with the required PCI device.

Then, when starting the virtual machine, the PCI ID is passed into
VirtualMachineTO to the agent (currently using KVM) and the agent creates a
corresponding  (
http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-PCI_Pass.html)
tag and then libvirt will handle the rest.

For allocation, the current idea is to use CloudStack’s capacity system (at
the same place where allocation of CPU and RAM is determined) to limit 1
PCI-Passthrough VM per physical host.

The current design has many limitations such as:

   - One physical host can only have 1 VM with PCI-Passthrough, even if
   many PCI-cards with equivalent functions are available
   - The PCI ID is fixed inside the compute offering, so all machines have
   to be homogeneous and have the same PCI ID for the device.

The initial implementation is working. Any suggestions and comments are
welcomed.

Thank you,
Pawit


RE: git pull fails since 10 june

2013-06-11 Thread Pranav Saxena
May be try changing your git config with the following url -

https://git-wip-us.apache.org/repos/asf/cloudstack.git

2) Have you tried cloning again?

3) Also run:   git remote show origin  
   and check What's the fetch URL? You can verify if you are trying to pull 
from a correct URL . 

-Original Message-
From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com] 
Sent: Tuesday, June 11, 2013 3:07 PM
To: dev@cloudstack.apache.org
Subject: git pull fails since 10 june

LS,

Both in eclipse and on the command line I get the following error:

$ git pull
error: The requested URL returned error: 502 while accessing 
https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
fatal: HTTP request failed

Any clues anyone?

Regards
Daan Hoogland


Re: Review Request: Cloudstack-2621 [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C

2013-06-11 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11435/
---

(Updated June 11, 2013, 11:16 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Description
---

[Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C
https://issues.apache.org/jira/browse/CLOUDSTACK-2621


This addresses bug Cloudstack-2621.


Diffs (updated)
-

  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11435/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: git pull fails since 10 june

2013-06-11 Thread Wei ZHOU
Maybe apache server is under attack again.


2013/6/11 Pranav Saxena 

> May be try changing your git config with the following url -
>
> https://git-wip-us.apache.org/repos/asf/cloudstack.git
>
> 2) Have you tried cloning again?
>
> 3) Also run:   git remote show origin
>and check What's the fetch URL? You can verify if you are trying to
> pull from a correct URL .
>
> -Original Message-
> From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com]
> Sent: Tuesday, June 11, 2013 3:07 PM
> To: dev@cloudstack.apache.org
> Subject: git pull fails since 10 june
>
> LS,
>
> Both in eclipse and on the command line I get the following error:
>
> $ git pull
> error: The requested URL returned error: 502 while accessing
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
> fatal: HTTP request failed
>
> Any clues anyone?
>
> Regards
> Daan Hoogland
>


Re: Review Request: Updated account and domain id for nic secondary ips for shared networks

2013-06-11 Thread Abhinandan Prateek

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11458/#review21702
---

Ship it!


Ship It!

- Abhinandan Prateek


On June 11, 2013, 6:24 a.m., Jayapal Reddy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11458/
> ---
> 
> (Updated June 11, 2013, 6:24 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Murali Reddy.
> 
> 
> Description
> ---
> 
> Updated account and domain id for nic secondary ips for shared networks.
> Fixed by getting accoundId and domainId from the caller instead of network
> 
> 
> This addresses bug CLOUDSTACK-2609.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/network/NetworkServiceImpl.java d5a59d6 
> 
> Diff: https://reviews.apache.org/r/11458/diff/
> 
> 
> Testing
> ---
> 
> Tested on isolated and shared networks.
> 
> 
> Thanks,
> 
> Jayapal Reddy
> 
>



RE: git pull fails since 10 june

2013-06-11 Thread Daan Hoogland
I tried to re-clone with the url you send, Pranav.

$ git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
Cloning into 'cloudstack'...
error: The requested URL returned error: 502 while accessing 
https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
fatal: HTTP request failed

The url is not changed since yesterday with respect to my older clones. I am 
going with Wei's explanation, for now.

I tried a fetch as well. I gave a different message, same result.

$ git fetch
error: Unknown SSL protocol error in connection to git-wip-us.apache.org:443  
while accessing 
https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
fatal: HTTP request failed


regards,
Daan Hoogland

-Original Message-
From: Wei ZHOU [mailto:ustcweiz...@gmail.com] 
Sent: dinsdag 11 juni 2013 13:29
To: dev@cloudstack.apache.org
Subject: Re: git pull fails since 10 june

Maybe apache server is under attack again.


2013/6/11 Pranav Saxena 

> May be try changing your git config with the following url -
>
> https://git-wip-us.apache.org/repos/asf/cloudstack.git
>
> 2) Have you tried cloning again?
>
> 3) Also run:   git remote show origin
>and check What's the fetch URL? You can verify if you are trying to 
> pull from a correct URL .
>
> -Original Message-
> From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com]
> Sent: Tuesday, June 11, 2013 3:07 PM
> To: dev@cloudstack.apache.org
> Subject: git pull fails since 10 june
>
> LS,
>
> Both in eclipse and on the command line I get the following error:
>
> $ git pull
> error: The requested URL returned error: 502 while accessing 
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?servi
> ce=git-upload-pack
> fatal: HTTP request failed
>
> Any clues anyone?
>
> Regards
> Daan Hoogland
>


Re: git pull fails since 10 june

2013-06-11 Thread Ryan Lei
I had that same error right now and yesterday, but it was fine "most of the
time." I guess the Apache repo has been quite unstable lately.


On Tue, Jun 11, 2013 at 7:39 PM, Daan Hoogland  wrote:

> I tried to re-clone with the url you send, Pranav.
>
> $ git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
> Cloning into 'cloudstack'...
> error: The requested URL returned error: 502 while accessing
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
> fatal: HTTP request failed
>
> The url is not changed since yesterday with respect to my older clones. I
> am going with Wei's explanation, for now.
>
> I tried a fetch as well. I gave a different message, same result.
>
> $ git fetch
> error: Unknown SSL protocol error in connection to
> git-wip-us.apache.org:443  while accessing
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
> fatal: HTTP request failed
>
>
> regards,
> Daan Hoogland
>
> -Original Message-
> From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
> Sent: dinsdag 11 juni 2013 13:29
> To: dev@cloudstack.apache.org
> Subject: Re: git pull fails since 10 june
>
> Maybe apache server is under attack again.
>
>
> 2013/6/11 Pranav Saxena 
>
> > May be try changing your git config with the following url -
> >
> > https://git-wip-us.apache.org/repos/asf/cloudstack.git
> >
> > 2) Have you tried cloning again?
> >
> > 3) Also run:   git remote show origin
> >and check What's the fetch URL? You can verify if you are trying to
> > pull from a correct URL .
> >
> > -Original Message-
> > From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com]
> > Sent: Tuesday, June 11, 2013 3:07 PM
> > To: dev@cloudstack.apache.org
> > Subject: git pull fails since 10 june
> >
> > LS,
> >
> > Both in eclipse and on the command line I get the following error:
> >
> > $ git pull
> > error: The requested URL returned error: 502 while accessing
> > https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?servi
> > ce=git-upload-pack
> > fatal: HTTP request failed
> >
> > Any clues anyone?
> >
> > Regards
> > Daan Hoogland
> >
>


Re: PCI-Passthrough with CloudStack

2013-06-11 Thread David Nalley
On Tue, Jun 11, 2013 at 3:52 AM, Pawit Pornkitprasan  wrote:
> Hi,
>
> I am implementing PCI-Passthrough to use with CloudStack for use with
> high-performance networking (10 Gigabit Ethernet/Infiniband).
>
> The current design is to attach a PCI ID (from lspci) to a compute
> offering. (Not a network offering since from CloudStack’s point of view,
> the pass through device has nothing to do with network and may as well be
> used for other things.) A host tag can be used to limit deployment to
> machines with the required PCI device.
>
> Then, when starting the virtual machine, the PCI ID is passed into
> VirtualMachineTO to the agent (currently using KVM) and the agent creates a
> corresponding  (
> http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-PCI_Pass.html)
> tag and then libvirt will handle the rest.
>
> For allocation, the current idea is to use CloudStack’s capacity system (at
> the same place where allocation of CPU and RAM is determined) to limit 1
> PCI-Passthrough VM per physical host.
>
> The current design has many limitations such as:
>
>- One physical host can only have 1 VM with PCI-Passthrough, even if
>many PCI-cards with equivalent functions are available
>- The PCI ID is fixed inside the compute offering, so all machines have
>to be homogeneous and have the same PCI ID for the device.
>
> The initial implementation is working. Any suggestions and comments are
> welcomed.
>
> Thank you,
> Pawit

This looks like a compelling idea, though I am sure not limited to
just networking (think GPU passthrough).
How are things like live migration affected? Are you making planner
changes to deal with the limiting factor of a single PCI-passthrough
VM being available per host?
What's the level of effort to extend this to work with VMware
DirectPath I/O and PCI passthrough on XenServer?

--David


Re: git pull fails since 10 june

2013-06-11 Thread David Nalley
Yes- ull (the server that provides SSL offload for git-wip-us and a
number of other services) is experiencing issues.

--David

On Tue, Jun 11, 2013 at 7:47 AM, Ryan Lei  wrote:
> I had that same error right now and yesterday, but it was fine "most of the
> time." I guess the Apache repo has been quite unstable lately.
>
>
> On Tue, Jun 11, 2013 at 7:39 PM, Daan Hoogland > wrote:
>
>> I tried to re-clone with the url you send, Pranav.
>>
>> $ git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
>> Cloning into 'cloudstack'...
>> error: The requested URL returned error: 502 while accessing
>> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
>> fatal: HTTP request failed
>>
>> The url is not changed since yesterday with respect to my older clones. I
>> am going with Wei's explanation, for now.
>>
>> I tried a fetch as well. I gave a different message, same result.
>>
>> $ git fetch
>> error: Unknown SSL protocol error in connection to
>> git-wip-us.apache.org:443  while accessing
>> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
>> fatal: HTTP request failed
>>
>>
>> regards,
>> Daan Hoogland
>>
>> -Original Message-
>> From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
>> Sent: dinsdag 11 juni 2013 13:29
>> To: dev@cloudstack.apache.org
>> Subject: Re: git pull fails since 10 june
>>
>> Maybe apache server is under attack again.
>>
>>
>> 2013/6/11 Pranav Saxena 
>>
>> > May be try changing your git config with the following url -
>> >
>> > https://git-wip-us.apache.org/repos/asf/cloudstack.git
>> >
>> > 2) Have you tried cloning again?
>> >
>> > 3) Also run:   git remote show origin
>> >and check What's the fetch URL? You can verify if you are trying to
>> > pull from a correct URL .
>> >
>> > -Original Message-
>> > From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com]
>> > Sent: Tuesday, June 11, 2013 3:07 PM
>> > To: dev@cloudstack.apache.org
>> > Subject: git pull fails since 10 june
>> >
>> > LS,
>> >
>> > Both in eclipse and on the command line I get the following error:
>> >
>> > $ git pull
>> > error: The requested URL returned error: 502 while accessing
>> > https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?servi
>> > ce=git-upload-pack
>> > fatal: HTTP request failed
>> >
>> > Any clues anyone?
>> >
>> > Regards
>> > Daan Hoogland
>> >
>>


How to propose dev ?

2013-06-11 Thread nfoata.ext
Hi Cloudstack community,

I would like to propose two developments queries if possible (see below).
However, it seems if I want to submit them, I have to send a git diff (I can do 
it if need be).
Is it the good way to do it or I have to follow a specific process ?

In my mind, I thought that first we have to propose (to be sure that nobody 
done it or the community is ok),
and then if everything goes fine, to do and send it.

Thanks in advance,

Have a good day,

Best regards,

Nicolas Foata


 1) VM instantiation : network information for VM

Goal: Sending network information to a new VM instance

Abstract/suggestion:
While a VM is instantiated, CloudStack could also send the following 
information if need be :
- the instance name  (CS uuid)
- the display name
- VM tags
- network information (IPv4, IPv6, netmask, routing, gateway, mac address, etc.)
just if we activate some global settings such as :
- vm.instance.boot.network.required (true/false)
- vm.instance.boot.vmname (true/false)
- vm.instance.boot.uuid (true/false)
- vm.instance.boot.tags (true/false)

Applications:
- A VM could discover its network and dialog with physical and virtual 
machines, etc.
- A VM do not need a virtual router
- According of this type of information (tags, names, ...) , management servers 
could be able to configure and deploy correctly VMs.



2)  VM instantiation : specific information for hypervisors

Goal  : Sending specific information for the hypervisor to well instantiate a VM

Abstract/suggestion:
While a VM is instantiated, Cloudstack could send and add furthermore data for 
the hypervisor
coming from a new field such as 'Other options/Other configuration' for example 
in the 'Compute offering' screen.
Thus, each hypervisor could decide whether it want to process the data and how 
or to do not take it into account.
With a such input, Cloudstack will be able to use the specificity and the full 
power of each hypervisor.

Applications:
1) On XCP, it will be possible to branch some pci straightforwardly (via pci 
passthrough)
2) To use the more efficiently the min, max memories (static and/or dynamic)

Please feel free to modify the text if you to find better and sexy application 
examples
with this two kinds of features and of course to correct mistakes.






_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
France Telecom - Orange decline toute responsabilite si ce message a ete 
altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, France Telecom - Orange is not liable for messages 
that have been modified, changed or falsified.
Thank you.



Review Request: Fix systemVM template job

2013-06-11 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11802/
---

Review request for cloudstack, Chiradeep Vittal and Rohit Yadav.


Description
---

Putting this here since git-asf is down at the moment:

When both systemvmtemplate64 and systemvmtemplate are present the pattern match 
can fail and return the hdd path of the 64-bit template. Do a perfect match by 
including the path separator (/) in the grep expression


Diffs
-

  tools/appliance/build.sh 0216c06 

Diff: https://reviews.apache.org/r/11802/diff/


Testing
---

System VM job is able to run manually


Thanks,

Prasanna Santhanam



Re: haproxy on VMWare systemVM template

2013-06-11 Thread Prasanna Santhanam
On Mon, Jun 10, 2013 at 09:50:09PM +, Chiradeep Vittal wrote:
> The jenkins build of the systemvm has been failing for a couple of days.
> Can someone clean it up?

Looks like the 64-bit job was enabled and left a hdd hanging. There
was a mistake in the grep pattern that returned the hung disk to the
32-bit job. I fixed that but it's hung on a git fetch now.

So I've put a review here : https://reviews.apache.org/r/11802/

When git comes back up, please apply it. Going home now.

-- 
Prasanna.,


Powered by BigRock.com



Re: git pull fails since 10 june

2013-06-11 Thread Daan Hoogland
is anyone working on this? do they need help?

daan


On Tue, Jun 11, 2013 at 2:11 PM, David Nalley  wrote:

> Yes- ull (the server that provides SSL offload for git-wip-us and a
> number of other services) is experiencing issues.
>
> --David
>
> On Tue, Jun 11, 2013 at 7:47 AM, Ryan Lei  wrote:
> > I had that same error right now and yesterday, but it was fine "most of
> the
> > time." I guess the Apache repo has been quite unstable lately.
> >
> >
> > On Tue, Jun 11, 2013 at 7:39 PM, Daan Hoogland <
> dhoogl...@schubergphilis.com
> >> wrote:
> >
> >> I tried to re-clone with the url you send, Pranav.
> >>
> >> $ git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
> >> Cloning into 'cloudstack'...
> >> error: The requested URL returned error: 502 while accessing
> >>
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
> >> fatal: HTTP request failed
> >>
> >> The url is not changed since yesterday with respect to my older clones.
> I
> >> am going with Wei's explanation, for now.
> >>
> >> I tried a fetch as well. I gave a different message, same result.
> >>
> >> $ git fetch
> >> error: Unknown SSL protocol error in connection to
> >> git-wip-us.apache.org:443  while accessing
> >>
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?service=git-upload-pack
> >> fatal: HTTP request failed
> >>
> >>
> >> regards,
> >> Daan Hoogland
> >>
> >> -Original Message-
> >> From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
> >> Sent: dinsdag 11 juni 2013 13:29
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: git pull fails since 10 june
> >>
> >> Maybe apache server is under attack again.
> >>
> >>
> >> 2013/6/11 Pranav Saxena 
> >>
> >> > May be try changing your git config with the following url -
> >> >
> >> > https://git-wip-us.apache.org/repos/asf/cloudstack.git
> >> >
> >> > 2) Have you tried cloning again?
> >> >
> >> > 3) Also run:   git remote show origin
> >> >and check What's the fetch URL? You can verify if you are trying to
> >> > pull from a correct URL .
> >> >
> >> > -Original Message-
> >> > From: Daan Hoogland [mailto:dhoogl...@schubergphilis.com]
> >> > Sent: Tuesday, June 11, 2013 3:07 PM
> >> > To: dev@cloudstack.apache.org
> >> > Subject: git pull fails since 10 june
> >> >
> >> > LS,
> >> >
> >> > Both in eclipse and on the command line I get the following error:
> >> >
> >> > $ git pull
> >> > error: The requested URL returned error: 502 while accessing
> >> >
> https://git-wip-us.apache.org/repos/asf/cloudstack.git/info/refs?servi
> >> > ce=git-upload-pack
> >> > fatal: HTTP request failed
> >> >
> >> > Any clues anyone?
> >> >
> >> > Regards
> >> > Daan Hoogland
> >> >
> >>
>


Re: Board report for June board meeting...

2013-06-11 Thread Chip Childers
On Mon, Jun 03, 2013 at 12:35:43PM -0400, David Nalley wrote:
> On Mon, Jun 3, 2013 at 12:22 PM, Chip Childers
>  wrote:
> > Hi all,
> >
> > Since I'm going to be on vacation until next Monday (starting Tuesday
> > evening), I'd like to ask for help in creating the board report for
> > this month.
> >
> > I've created the template here:
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/2013-06+Board+Report+for+Apache+CloudStack
> >
> > I'll have a bit of time to help finalize it, but would really love if
> > another(or more) community member would take the lead in authoring the
> > report this month.  It's due by Wed, June 12...  so ideally it would
> > be drafted by Friday, and a note sent to this list for comments /
> > updates.
> >
> > -chip
> 
> I'll be happy to make sure this happens.
> 
> --David
>

Thanks again for the help with this David.  I've edited it a bit to add
more about current activities.

Can everyone please take a look and provide comments if you have any.

https://cwiki.apache.org/confluence/display/CLOUDSTACK/2013-06+Board+Report+for+Apache+CloudStack

I'll be submitting to the board tomorrow.

-chip


ISO creation not reflecting in secondary storage count

2013-06-11 Thread Gaurav Aradhye
Hi all,

I am creating an ISO ( Iso.create() ) by giving an external URL and then
checking the count for the secondary storage.
The ISO file at the specified URL is quite small (about 352 KB).

Even after the Iso is created, secondary storage count is showing as 0.
The question is " *Is it necessary to perform download operation on the Iso
created so as to get the correct secondary storage count?*".

Also, the download operation is failing by giving error:

AssertionError: Exception while downloading ISO
3a8483cb-bbd0-47e2-b0cf-957e12a5a945: Error In Downloading ISO: ISO Status
- Storage agent or storage VM disconnected


Has anyone encountered the same problem before?

Regards,
Gaurav | +919028496765


Re: git commit: updated refs/heads/master to a59067e

2013-06-11 Thread Wei ZHOU
Hi Jessica,

I was wondering why shared network can not be added here?

-Wei


2013/6/10 

> Updated Branches:
>   refs/heads/master 40982ccef -> a59067e94
>
>
> CLOUDSTACK UI - network menu - create guest network dialog - change label.
>
>
> Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
> Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/a59067e9
> Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/a59067e9
> Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/a59067e9
>
> Branch: refs/heads/master
> Commit: a59067e94f7095a2448d342d5eed0ffee5f066c0
> Parents: 40982cc
> Author: Jessica Wang 
> Authored: Mon Jun 10 13:43:07 2013 -0700
> Committer: Jessica Wang 
> Committed: Mon Jun 10 13:43:07 2013 -0700
>
> --
>  ui/scripts/network.js | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/cloudstack/blob/a59067e9/ui/scripts/network.js
> --
> diff --git a/ui/scripts/network.js b/ui/scripts/network.js
> index 9e60cbc..61468fc 100755
> --- a/ui/scripts/network.js
> +++ b/ui/scripts/network.js
> @@ -320,8 +320,8 @@
>  title: 'label.guest.networks',
>  listView: {
>actions: {
> -add: { //add Isolated guest network (can't add Shared guest
> network here)
> -  label: 'Add Isolated Guest Network',
> +add: {
> +  label: 'Add Isolated Guest Network with SourceNat',
>
>preFilter: function(args) { //Isolated networks is only
> supported in Advanced (SG-disabled) zone
>  if(args.context.zoneType != 'Basic')
> @@ -331,8 +331,7 @@
>},
>
>createForm: {
> -title: 'Add Isolated Guest Network',
> -desc: 'Add Isolated Guest Network with SourceNat',
> +title: 'Add Isolated Guest Network with SourceNat',
>  fields: {
>name: { label: 'label.name', validation: { required:
> true }, docID: 'helpGuestNetworkName' },
>displayText: { label: 'label.display.text', validation:
> { required: true }, docID: 'helpGuestNetworkDisplayText'},
>
>


Review Request: fixed not showing uuid of ip address id and network in list firewall and list egress firewall rules response

2013-06-11 Thread Jayapal Reddy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11803/
---

Review request for cloudstack.


Description
---

showing uuid of ip address id in list firewall rules response
showing uuid of network id of in list egress firewall rules response


This addresses bug cloudstack-2934.


Diffs
-

  api/src/org/apache/cloudstack/api/response/FirewallResponse.java 26d2433 
  server/src/com/cloud/api/ApiResponseHelper.java 0c98abc 

Diff: https://reviews.apache.org/r/11803/diff/


Testing
---

Tested listFirewallRules and listEgressFirewallRules API responses


Thanks,

Jayapal Reddy



Re: Contributing as a non-committer

2013-06-11 Thread Joe Brockmeier
On Mon, Jun 10, 2013, at 10:03 PM, Alex Huang wrote:
> > Forget about eclipse for now :) just use vi :)
> 
> Why don't we just go back to ed?  

+1 

Alex - do you want to start the vote? ;-)

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: git pull fails since 10 june

2013-06-11 Thread Joe Brockmeier
On Tue, Jun 11, 2013, at 08:34 AM, Daan Hoogland wrote:
> is anyone working on this? do they need help?

Apache Infra folks are on it. Note that you can see Apache infra status
here:

http://monitoring.apache.org/status/

If it's red, Infra should have been notified of the issue. 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: Board report for June board meeting...

2013-06-11 Thread Joe Brockmeier
On Tue, Jun 11, 2013, at 08:43 AM, Chip Childers wrote:
> Can everyone please take a look and provide comments if you have any.

Minor formatting changes, and corrected the release date for 4.1.0 (was
listed as June 7, actually was June 4). Otherwise it looks good to me.

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


RE: Build failed in Jenkins: cloudstack-rat-master #1468

2013-06-11 Thread Alex Huang
I don't know much about Jenkins.  Can this be fix?  It's not good to keep 
getting these build errors.  It hides actual errors.

Anyone know how to fix it?

--Alex

> -Original Message-
> From: Hugo Trippaers [mailto:htrippa...@schubergphilis.com]
> Sent: Tuesday, June 11, 2013 1:43 AM
> To: dev@cloudstack.apache.org
> Subject: RE: Build failed in Jenkins: cloudstack-rat-master #1468
> 
> This is local to jenkins.
> 
> It uses tag to keep track of the changes between the different runs of the
> build. It is not pushing these tags anywhere, they stay in the git environment
> in the workspace.
> 
> This looks like a new slave was not yet configured with the global identity in
> the Jenkins configuration.
> 
> Cheers,
> 
> Hugo
> 
> > -Original Message-
> > From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> > Sent: dinsdag 11 juni 2013 7:49
> > To: dev@cloudstack.apache.org
> > Subject: RE: Build failed in Jenkins: cloudstack-rat-master #1468
> >
> > Seeing this again, shouldn't jenkins just pull and compile?
> > > -Original Message-
> > > From: David Nalley [mailto:da...@gnsa.us]
> > > Sent: Thursday, June 06, 2013 6:06 AM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: Build failed in Jenkins: cloudstack-rat-master #1468
> > >
> > > Why is jenkins trying to create a tag in our repo?
> > >
> > > --David
> > >
> > > On Thu, Jun 6, 2013 at 9:00 AM, Apache Jenkins Server
> > >  wrote:
> > > > See 
> > > >
> > > > --
> > > > Started by an SCM change
> > > > Building remotely on ubuntu2 in workspace
> > > > 
> > > > Checkout:cloudstack-rat-master /
> > > >  -
> > > > hudson.remoting.Channel@9907404:ubuntu2
> > > > Using strategy: Default
> > > > Last Built Revision: Revision
> > > > d98289baca7fbc8a793adadfa386e6ab234952f7
> > > > (origin/master) Fetching changes from 1 remote Git repository
> > > > Fetching upstream changes from
> > > > https://git-wip-us.apache.org/repos/asf/cloudstack.git
> > > > Commencing build of Revision
> > > > c0d894346a57e61626f332a9ef25efa9b5e77646
> > > > (origin/master) Checking out Revision
> > > > c0d894346a57e61626f332a9ef25efa9b5e77646 (origin/master)
> > > > FATAL: Could not apply tag jenkins-cloudstack-rat-master-1468
> > > > hudson.plugins.git.GitException: Could not apply tag jenkins-
> > > cloudstack-rat-master-1468
> > > > at hudson.plugins.git.GitAPI.tag(GitAPI.java:829)
> > > > at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1270)
> > > > at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1231)
> > > > at
> > > hudson.FilePath$FileCallableWrapper.call(FilePath.java:2348)
> > > > at hudson.remoting.UserRequest.perform(UserRequest.java:118)
> > > > at hudson.remoting.UserRequest.perform(UserRequest.java:48)
> > > > at hudson.remoting.Request$2.run(Request.java:326)
> > > > at
> > > hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecu
> > > to
> > > rS
> > > ervice.java:72)
> > > > at
> > > java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> > > > at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> > > > at
> > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor
> > > .j
> > > av
> > > a:1146)
> > > > at
> > >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
> > > ja
> > > va:615)
> > > > at java.lang.Thread.run(Thread.java:679)
> > > > Caused by: hudson.plugins.git.GitException: Command "git tag -a -f
> > > > -m
> > > Jenkins Build #1468 jenkins-cloudstack-rat-master-1468" returned
> > > status code 128:
> > > > stdout:
> > > > stderr:
> > > > *** Please tell me who you are.
> > > >
> > > > Run
> > > >
> > > >   git config --global user.email "y...@example.com"
> > > >   git config --global user.name "Your Name"
> > > >
> > > > to set your account's default identity.
> > > > Omit --global to set the identity only in this repository.
> > > >
> > > > fatal: empty ident   not allowed
> > > >
> > > > at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:897)
> > > > at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:858)
> > > > at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:868)
> > > > at hudson.plugins.git.GitAPI.tag(GitAPI.java:827)
> > > > ... 12 more


Re: Handling Self Signed Certs

2013-06-11 Thread Mike Tutkowski
I would be quite interesting in seeing where we go with this. Are we
talking about doing this in 4.2?

I have a customer playing around with the storage plug-in I've been
developing and we are having a little trouble in their environment with
certificates. If we had just one way of handling them, it would be great
(could just hand over the documentation for how this kind of thing works in
general in CloudStack).


On Mon, Jun 10, 2013 at 11:07 AM, Kelven Yang wrote:

> Will,
>
> Thanks for the effort in getting a common wrapper into utils package.
>
> As for the policy decision(whether or not to make a global flag or a
> per-device option), both have pros and cons, we can wait and see the
> feedbacks from others in the community.
>
> Considering the legacy installations and the fact that we allow
> self-signed certificates by default in existing releases, I personally
> think that having a global flag is a much economic way to get this feature
> in without too much disruptions. Of course, to have fine-control of it, we
> can always allow per-device overridden policy as well.
>
> Kelven
>
>
> On 6/10/13 9:04 AM, "Will Stevens"  wrote:
>
> >When I went looking in CS for the HTTP clients that were already
> >available,
> >I found the one that Soheil is using as well as the new apache one.  I am
> >using the new apache one because I was assuming it was going to be the
> >preferred one going forward.
> >
> >I will clean up my wrapper and make it available in the cloud-utils
> >package.
> >
> >The only question now is if the 'allow unverified certs' should be a
> >global
> >setting or a per device setting.  I tend to think that it should be per
> >device because that isolates the functionality a little better.  However,
> >by creating a global setting it makes the concept more accessible to other
> >developers and centralizes the setting for the user so they only have to
> >specify the setting in one place and all devices which have been written
> >to
> >conform to that setting will allow unverified certs.
> >
> >I think there are pros and cons to both approaches.  I am fine to
> >implement
> >my code either way, so more feedback on this choice would be appreciated.
> >
> >ws
> >
> >
> >On Thu, Jun 6, 2013 at 6:40 PM, Kelven Yang 
> >wrote:
> >
> >> Will,
> >>
> >> We don't have a common HTTPS client yet, as far as I know, different
> >> module developers probably are using slight different way to deal with
> >> self-signed certificate, it is a good time to consolidate it now if it
> >>is
> >> not too late. You may make the facility available in cloud-utils package
> >> and encourage adoption from these modules.
> >>
> >> Some modules, i.e., download manager, API module to hypervisor hosts
> >>have
> >> the similar situation.
> >>
> >>
> >> Kelven
> >>
> >> On 6/6/13 2:33 PM, "Soheil Eizadi"  wrote:
> >>
> >> >What is missing is a facility to import a certificate into the store.
> >>If
> >> >it was available you could use it for self signed CERTS. Ideally it
> >> >should be part of GUI to add devices.
> >> >
> >> >I am implementing a similar HTTP Client. You are using
> >>DefaultHttpClient
> >> >so it is based on the newer Apache libraries. The ones I found in
> >> >CloudStack were older Commons HttpClient which was EOL.
> >> >
> >> >In my case I planned to wrap the Client as you have for development and
> >> >for production have an API to import a certificate for SSL into the
> >> >Certificate Store.
> >> >
> >> >I would call to AuthScope(host, 443) to limit access to only the
> >>specific
> >> >host and port.
> >> >
> >> >-Soheil
> >> >
> >> >From: williamstev...@gmail.com [williamstev...@gmail.com] on behalf of
> >> >Will Stevens [wstev...@cloudops.com]
> >> >Sent: Thursday, June 06, 2013 1:08 PM
> >> >To: dev@cloudstack.apache.org
> >> >Subject: Re: Handling Self Signed Certs
> >> >
> >> >Hey Kelven,
> >> >I am using the same https client libraries as elsewhere in Cloudstack
> >> >(well
> >> >one of them because there is more than one version of http client libs
> >> >currently available in CS).
> >> >
> >> >I am using this client:
> >> >import org.apache.http.impl.client.DefaultHttpClient;
> >> >
> >> >I initialize it like this:
> >> >_httpclient = new DefaultHttpClient();
> >> >
> >> >Then if self signed certs are allowed, I currently have a utility
> >>library
> >> >in my plugin which allows me to do this:
> >> >// Allows you to connect via SSL using unverified certs
> >> >_httpclient = HttpClientWrapper.wrapClient(_httpclient);
> >> >
> >> >Is there a class that already exists in CloudStack which I can use to
> >>wrap
> >> >my client to enable unverified certs, or will I need to add one?
> >>Should I
> >> >create a global setting such as 'Allow unverified SSL certs' which
> >>would
> >> >be
> >> >checked by the code to determine if the http client should be wrapped?
> >> >
> >> >Thx, Will
> >> >
> >> >
> >> >On Thu, Jun 6, 2013 at 2:43 PM, Kelven Yang 
> >> >wrote:

Re: Handling Self Signed Certs

2013-06-11 Thread Will Stevens
Kelven, I like the idea of having a global setting that can be overridden
by the developers at the device level if they want to offer finer control.
 I think this gives us the best of both worlds.

Mike, I am not sure I will be able to get it into 4.2 unless I release it
as its own patch prior to my code for the Palo Alto integration getting in.
 If we iron out how we expect the functionality to behave, I can push to
get it in earlier than the rest of my code.

Will


On Tue, Jun 11, 2013 at 12:22 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> I would be quite interesting in seeing where we go with this. Are we
> talking about doing this in 4.2?
>
> I have a customer playing around with the storage plug-in I've been
> developing and we are having a little trouble in their environment with
> certificates. If we had just one way of handling them, it would be great
> (could just hand over the documentation for how this kind of thing works in
> general in CloudStack).
>
>
> On Mon, Jun 10, 2013 at 11:07 AM, Kelven Yang  >wrote:
>
> > Will,
> >
> > Thanks for the effort in getting a common wrapper into utils package.
> >
> > As for the policy decision(whether or not to make a global flag or a
> > per-device option), both have pros and cons, we can wait and see the
> > feedbacks from others in the community.
> >
> > Considering the legacy installations and the fact that we allow
> > self-signed certificates by default in existing releases, I personally
> > think that having a global flag is a much economic way to get this
> feature
> > in without too much disruptions. Of course, to have fine-control of it,
> we
> > can always allow per-device overridden policy as well.
> >
> > Kelven
> >
> >
> > On 6/10/13 9:04 AM, "Will Stevens"  wrote:
> >
> > >When I went looking in CS for the HTTP clients that were already
> > >available,
> > >I found the one that Soheil is using as well as the new apache one.  I
> am
> > >using the new apache one because I was assuming it was going to be the
> > >preferred one going forward.
> > >
> > >I will clean up my wrapper and make it available in the cloud-utils
> > >package.
> > >
> > >The only question now is if the 'allow unverified certs' should be a
> > >global
> > >setting or a per device setting.  I tend to think that it should be per
> > >device because that isolates the functionality a little better.
>  However,
> > >by creating a global setting it makes the concept more accessible to
> other
> > >developers and centralizes the setting for the user so they only have to
> > >specify the setting in one place and all devices which have been written
> > >to
> > >conform to that setting will allow unverified certs.
> > >
> > >I think there are pros and cons to both approaches.  I am fine to
> > >implement
> > >my code either way, so more feedback on this choice would be
> appreciated.
> > >
> > >ws
> > >
> > >
> > >On Thu, Jun 6, 2013 at 6:40 PM, Kelven Yang 
> > >wrote:
> > >
> > >> Will,
> > >>
> > >> We don't have a common HTTPS client yet, as far as I know, different
> > >> module developers probably are using slight different way to deal with
> > >> self-signed certificate, it is a good time to consolidate it now if it
> > >>is
> > >> not too late. You may make the facility available in cloud-utils
> package
> > >> and encourage adoption from these modules.
> > >>
> > >> Some modules, i.e., download manager, API module to hypervisor hosts
> > >>have
> > >> the similar situation.
> > >>
> > >>
> > >> Kelven
> > >>
> > >> On 6/6/13 2:33 PM, "Soheil Eizadi"  wrote:
> > >>
> > >> >What is missing is a facility to import a certificate into the store.
> > >>If
> > >> >it was available you could use it for self signed CERTS. Ideally it
> > >> >should be part of GUI to add devices.
> > >> >
> > >> >I am implementing a similar HTTP Client. You are using
> > >>DefaultHttpClient
> > >> >so it is based on the newer Apache libraries. The ones I found in
> > >> >CloudStack were older Commons HttpClient which was EOL.
> > >> >
> > >> >In my case I planned to wrap the Client as you have for development
> and
> > >> >for production have an API to import a certificate for SSL into the
> > >> >Certificate Store.
> > >> >
> > >> >I would call to AuthScope(host, 443) to limit access to only the
> > >>specific
> > >> >host and port.
> > >> >
> > >> >-Soheil
> > >> >
> > >> >From: williamstev...@gmail.com [williamstev...@gmail.com] on behalf
> of
> > >> >Will Stevens [wstev...@cloudops.com]
> > >> >Sent: Thursday, June 06, 2013 1:08 PM
> > >> >To: dev@cloudstack.apache.org
> > >> >Subject: Re: Handling Self Signed Certs
> > >> >
> > >> >Hey Kelven,
> > >> >I am using the same https client libraries as elsewhere in Cloudstack
> > >> >(well
> > >> >one of them because there is more than one version of http client
> libs
> > >> >currently available in CS).
> > >> >
> > >> >I am using this client:
> > >> >import org.apache.http.impl.client.DefaultH

Re: Handling Self Signed Certs

2013-06-11 Thread Kelven Yang
Will, 

Security is one of the important aspects in any system(if not of topmost),
it is a good move to consolidate it. Thanks for your effort!

Kelven

On 6/11/13 9:27 AM, "Will Stevens"  wrote:

>Kelven, I like the idea of having a global setting that can be overridden
>by the developers at the device level if they want to offer finer control.
> I think this gives us the best of both worlds.
>
>Mike, I am not sure I will be able to get it into 4.2 unless I release it
>as its own patch prior to my code for the Palo Alto integration getting
>in.
> If we iron out how we expect the functionality to behave, I can push to
>get it in earlier than the rest of my code.
>
>Will
>
>
>On Tue, Jun 11, 2013 at 12:22 PM, Mike Tutkowski <
>mike.tutkow...@solidfire.com> wrote:
>
>> I would be quite interesting in seeing where we go with this. Are we
>> talking about doing this in 4.2?
>>
>> I have a customer playing around with the storage plug-in I've been
>> developing and we are having a little trouble in their environment with
>> certificates. If we had just one way of handling them, it would be great
>> (could just hand over the documentation for how this kind of thing
>>works in
>> general in CloudStack).
>>
>>
>> On Mon, Jun 10, 2013 at 11:07 AM, Kelven Yang > >wrote:
>>
>> > Will,
>> >
>> > Thanks for the effort in getting a common wrapper into utils package.
>> >
>> > As for the policy decision(whether or not to make a global flag or a
>> > per-device option), both have pros and cons, we can wait and see the
>> > feedbacks from others in the community.
>> >
>> > Considering the legacy installations and the fact that we allow
>> > self-signed certificates by default in existing releases, I personally
>> > think that having a global flag is a much economic way to get this
>> feature
>> > in without too much disruptions. Of course, to have fine-control of
>>it,
>> we
>> > can always allow per-device overridden policy as well.
>> >
>> > Kelven
>> >
>> >
>> > On 6/10/13 9:04 AM, "Will Stevens"  wrote:
>> >
>> > >When I went looking in CS for the HTTP clients that were already
>> > >available,
>> > >I found the one that Soheil is using as well as the new apache one.
>>I
>> am
>> > >using the new apache one because I was assuming it was going to be
>>the
>> > >preferred one going forward.
>> > >
>> > >I will clean up my wrapper and make it available in the cloud-utils
>> > >package.
>> > >
>> > >The only question now is if the 'allow unverified certs' should be a
>> > >global
>> > >setting or a per device setting.  I tend to think that it should be
>>per
>> > >device because that isolates the functionality a little better.
>>  However,
>> > >by creating a global setting it makes the concept more accessible to
>> other
>> > >developers and centralizes the setting for the user so they only
>>have to
>> > >specify the setting in one place and all devices which have been
>>written
>> > >to
>> > >conform to that setting will allow unverified certs.
>> > >
>> > >I think there are pros and cons to both approaches.  I am fine to
>> > >implement
>> > >my code either way, so more feedback on this choice would be
>> appreciated.
>> > >
>> > >ws
>> > >
>> > >
>> > >On Thu, Jun 6, 2013 at 6:40 PM, Kelven Yang 
>> > >wrote:
>> > >
>> > >> Will,
>> > >>
>> > >> We don't have a common HTTPS client yet, as far as I know,
>>different
>> > >> module developers probably are using slight different way to deal
>>with
>> > >> self-signed certificate, it is a good time to consolidate it now
>>if it
>> > >>is
>> > >> not too late. You may make the facility available in cloud-utils
>> package
>> > >> and encourage adoption from these modules.
>> > >>
>> > >> Some modules, i.e., download manager, API module to hypervisor
>>hosts
>> > >>have
>> > >> the similar situation.
>> > >>
>> > >>
>> > >> Kelven
>> > >>
>> > >> On 6/6/13 2:33 PM, "Soheil Eizadi"  wrote:
>> > >>
>> > >> >What is missing is a facility to import a certificate into the
>>store.
>> > >>If
>> > >> >it was available you could use it for self signed CERTS. Ideally
>>it
>> > >> >should be part of GUI to add devices.
>> > >> >
>> > >> >I am implementing a similar HTTP Client. You are using
>> > >>DefaultHttpClient
>> > >> >so it is based on the newer Apache libraries. The ones I found in
>> > >> >CloudStack were older Commons HttpClient which was EOL.
>> > >> >
>> > >> >In my case I planned to wrap the Client as you have for
>>development
>> and
>> > >> >for production have an API to import a certificate for SSL into
>>the
>> > >> >Certificate Store.
>> > >> >
>> > >> >I would call to AuthScope(host, 443) to limit access to only the
>> > >>specific
>> > >> >host and port.
>> > >> >
>> > >> >-Soheil
>> > >> >
>> > >> >From: williamstev...@gmail.com [williamstev...@gmail.com] on
>>behalf
>> of
>> > >> >Will Stevens [wstev...@cloudops.com]
>> > >> >Sent: Thursday, June 06, 2013 1:08 PM
>> > >> >To: dev@cloudstack.apache.org
>> > >> >Subject: Re: 

Re: Handling Self Signed Certs

2013-06-11 Thread Mike Tutkowski
Oh, OK, Will, no problem. I wasn't sure if it would make it into 4.2, so I
figured I'd check in with you on that.

Thanks!


On Tue, Jun 11, 2013 at 10:40 AM, Kelven Yang wrote:

> Will,
>
> Security is one of the important aspects in any system(if not of topmost),
> it is a good move to consolidate it. Thanks for your effort!
>
> Kelven
>
> On 6/11/13 9:27 AM, "Will Stevens"  wrote:
>
> >Kelven, I like the idea of having a global setting that can be overridden
> >by the developers at the device level if they want to offer finer control.
> > I think this gives us the best of both worlds.
> >
> >Mike, I am not sure I will be able to get it into 4.2 unless I release it
> >as its own patch prior to my code for the Palo Alto integration getting
> >in.
> > If we iron out how we expect the functionality to behave, I can push to
> >get it in earlier than the rest of my code.
> >
> >Will
> >
> >
> >On Tue, Jun 11, 2013 at 12:22 PM, Mike Tutkowski <
> >mike.tutkow...@solidfire.com> wrote:
> >
> >> I would be quite interesting in seeing where we go with this. Are we
> >> talking about doing this in 4.2?
> >>
> >> I have a customer playing around with the storage plug-in I've been
> >> developing and we are having a little trouble in their environment with
> >> certificates. If we had just one way of handling them, it would be great
> >> (could just hand over the documentation for how this kind of thing
> >>works in
> >> general in CloudStack).
> >>
> >>
> >> On Mon, Jun 10, 2013 at 11:07 AM, Kelven Yang  >> >wrote:
> >>
> >> > Will,
> >> >
> >> > Thanks for the effort in getting a common wrapper into utils package.
> >> >
> >> > As for the policy decision(whether or not to make a global flag or a
> >> > per-device option), both have pros and cons, we can wait and see the
> >> > feedbacks from others in the community.
> >> >
> >> > Considering the legacy installations and the fact that we allow
> >> > self-signed certificates by default in existing releases, I personally
> >> > think that having a global flag is a much economic way to get this
> >> feature
> >> > in without too much disruptions. Of course, to have fine-control of
> >>it,
> >> we
> >> > can always allow per-device overridden policy as well.
> >> >
> >> > Kelven
> >> >
> >> >
> >> > On 6/10/13 9:04 AM, "Will Stevens"  wrote:
> >> >
> >> > >When I went looking in CS for the HTTP clients that were already
> >> > >available,
> >> > >I found the one that Soheil is using as well as the new apache one.
> >>I
> >> am
> >> > >using the new apache one because I was assuming it was going to be
> >>the
> >> > >preferred one going forward.
> >> > >
> >> > >I will clean up my wrapper and make it available in the cloud-utils
> >> > >package.
> >> > >
> >> > >The only question now is if the 'allow unverified certs' should be a
> >> > >global
> >> > >setting or a per device setting.  I tend to think that it should be
> >>per
> >> > >device because that isolates the functionality a little better.
> >>  However,
> >> > >by creating a global setting it makes the concept more accessible to
> >> other
> >> > >developers and centralizes the setting for the user so they only
> >>have to
> >> > >specify the setting in one place and all devices which have been
> >>written
> >> > >to
> >> > >conform to that setting will allow unverified certs.
> >> > >
> >> > >I think there are pros and cons to both approaches.  I am fine to
> >> > >implement
> >> > >my code either way, so more feedback on this choice would be
> >> appreciated.
> >> > >
> >> > >ws
> >> > >
> >> > >
> >> > >On Thu, Jun 6, 2013 at 6:40 PM, Kelven Yang 
> >> > >wrote:
> >> > >
> >> > >> Will,
> >> > >>
> >> > >> We don't have a common HTTPS client yet, as far as I know,
> >>different
> >> > >> module developers probably are using slight different way to deal
> >>with
> >> > >> self-signed certificate, it is a good time to consolidate it now
> >>if it
> >> > >>is
> >> > >> not too late. You may make the facility available in cloud-utils
> >> package
> >> > >> and encourage adoption from these modules.
> >> > >>
> >> > >> Some modules, i.e., download manager, API module to hypervisor
> >>hosts
> >> > >>have
> >> > >> the similar situation.
> >> > >>
> >> > >>
> >> > >> Kelven
> >> > >>
> >> > >> On 6/6/13 2:33 PM, "Soheil Eizadi"  wrote:
> >> > >>
> >> > >> >What is missing is a facility to import a certificate into the
> >>store.
> >> > >>If
> >> > >> >it was available you could use it for self signed CERTS. Ideally
> >>it
> >> > >> >should be part of GUI to add devices.
> >> > >> >
> >> > >> >I am implementing a similar HTTP Client. You are using
> >> > >>DefaultHttpClient
> >> > >> >so it is based on the newer Apache libraries. The ones I found in
> >> > >> >CloudStack were older Commons HttpClient which was EOL.
> >> > >> >
> >> > >> >In my case I planned to wrap the Client as you have for
> >>development
> >> and
> >> > >> >for production have an API to import a certificate for SSL into
> >>the
> >> > >> >Certi

[NOTICE] CloudStack 4.1.1 release

2013-06-11 Thread Musayev, Ilya
Just FYI, I'm going to be unavailable from Friday 14 - 21st of June and then 
attending CS conference from June 23-25. Prior to 14th  of June, I have lots of 
deliverables at $dayjob and while I would like to work on ACS side, its 
physically not possible for the next 10 days or so.

I've asked Chip to help me with release of ACS 4.1.1 as I will be unavailable 
for extended period of time and we wanted to release 4.1.1 sooner.

I anticipate my load to get lighter post all the travel  and can focus on ACS 
RM work then.

I'll be trolling through JIRA/GIT today to see what can be merged into 4.1. If 
you know of an issue that has been resolved and is applicable to 4.1, please 
lets us know and if possible, commit.

Thank you Chip for helping,

Regards,
ilya


Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread John Burwell
Mike,

Please see my responses in-line below.

Thanks,
-John

On Jun 10, 2013, at 11:08 PM, Mike Tutkowski  
wrote:

> Let me make sure I follow where we're going here:
> 
> 1) There should be NO references to hypervisor code in the storage plug-ins
> code (this includes the default storage plug-in, which currently sends
> several commands to the hypervisor in use (although it does not know which
> hypervisor (XenServer, ESX(i), etc.) is actually in use))

The Storage->Hypervisor dependencies have been in CloudStack for some time.  My 
goal is eventually eliminate these, and as part of that evolution, I don't want 
to see any more such dependencies added.  Additionally, as we invert the 
dependency in new code, it will lay the foundation for remove the existing 
Storage->Hypervisor dependencies.

> 
> 2) managed=true or managed=false can be placed in the url field (if not
> present, we default to false). This info is stored in the
> storage_pool_details table.

As I understand the data model, storage_pool_details implementation specific 
properties.  I see the managed flag as a common value for all storage pools 
calculated as follows:

- If the associated driver does not support device management, the 
value is always set to false.
- If the associated driver supports device management, the value 
defaults to true, but can be overridden when the device definition is created

As such, it seems to be that it should be a new column on the storage_pool 
table.

> 
> 3) When the "attach" command is sent to the hypervisor in question, we pass
> the managed property along (this takes the place of the
> StoragePoolType.Dynamic check).
> 
> 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
> property. If true for an attach, the necessary hypervisor data structure is
> created and the rest of the attach command executes to attach the volume.

> 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
> same check is made. If managed, the hypervisor data structure is removed.

Sounds reasonable to me.  Will there is be a new method added to the Hypervisor 
plugin to create this data structure (e.g. createVMStorage(VolumeTO))?

> 
> 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
> stored in the volumes and disk_offerings table. If we have some idea,
> that'd be cool.

Shucks.  It sounds like the StoragePool#details won't sufficient.  I think we 
need to address extended data in a number of places (e.g. hypervisor, storage, 
and network drivers, compute and disk offerings, etc).  I propose that we 
address it broadly in 4.3 in a manner that provides a mechanism to store, 
describe, validate, and render them.

> 
> Thanks!
> 
> 
> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
> 
>> "+1 -- Burst IOPS can be implemented while avoiding implementation
>> attributes.  I always wondered about the details field.  I think we should
>> beef up the description in the documentation regarding the expected format
>> of the field.  In 4.1, I noticed that the details are not returned on the
>> createStoratePool updateStoragePool, or listStoragePool response.  Why
>> don't we return it?  It seems like it would be useful for clients to be
>> able to inspect the contents of the details field."
>> 
>> Not sure how this would work storing Burst IOPS here.
>> 
>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering basis.
>> For each Disk Offering created, you have to be able to associate unique
>> Burst IOPS. There is a disk_offering_details table. Maybe it could go there?
>> 
>> I'm also not sure how you would accept the Burst IOPS in the GUI if it's
>> not stored like the Min and Max fields are in the DB.
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*



Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread John Burwell
Mike,

We have a delicate merge dance to perform.  The disk_io_throttling, solidfire, 
and object_store appear to have a number of overlapping elements.  I understand 
the dependencies between the patches to be as follows:

object_store <- solidfire -> disk_io_throttling

Am I correct that the device management aspects of SolidFire are additive to 
the object_store branch or there are circular dependency between the branches?  
Once we understand the dependency graph, we can determine the best approach to 
land the changes in master.

Thanks,
-John


On Jun 10, 2013, at 11:10 PM, Mike Tutkowski  
wrote:

> Also, if we are good with Edison merging my code into his branch before
> going into master, I am good with that.
> 
> We can remove the StoragePoolType.Dynamic code after his merge and we can
> deal with Burst IOPS then, as well.
> 
> 
> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
> 
>> Let me make sure I follow where we're going here:
>> 
>> 1) There should be NO references to hypervisor code in the storage
>> plug-ins code (this includes the default storage plug-in, which currently
>> sends several commands to the hypervisor in use (although it does not know
>> which hypervisor (XenServer, ESX(i), etc.) is actually in use))
>> 
>> 2) managed=true or managed=false can be placed in the url field (if not
>> present, we default to false). This info is stored in the
>> storage_pool_details table.
>> 
>> 3) When the "attach" command is sent to the hypervisor in question, we
>> pass the managed property along (this takes the place of the
>> StoragePoolType.Dynamic check).
>> 
>> 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
>> property. If true for an attach, the necessary hypervisor data structure is
>> created and the rest of the attach command executes to attach the volume.
>> 
>> 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
>> same check is made. If managed, the hypervisor data structure is removed.
>> 
>> 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
>> stored in the volumes and disk_offerings table. If we have some idea,
>> that'd be cool.
>> 
>> Thanks!
>> 
>> 
>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>> 
>>> "+1 -- Burst IOPS can be implemented while avoiding implementation
>>> attributes.  I always wondered about the details field.  I think we should
>>> beef up the description in the documentation regarding the expected format
>>> of the field.  In 4.1, I noticed that the details are not returned on the
>>> createStoratePool updateStoragePool, or listStoragePool response.  Why
>>> don't we return it?  It seems like it would be useful for clients to be
>>> able to inspect the contents of the details field."
>>> 
>>> Not sure how this would work storing Burst IOPS here.
>>> 
>>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
>>> basis. For each Disk Offering created, you have to be able to associate
>>> unique Burst IOPS. There is a disk_offering_details table. Maybe it could
>>> go there?
>>> 
>>> I'm also not sure how you would accept the Burst IOPS in the GUI if it's
>>> not stored like the Min and Max fields are in the DB.
>>> 
>> 
>> 
>> 
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the 
>> cloud
>> *™*
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*



Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Mike Tutkowski
Hey John,

The SolidFire patch does not depend on the object_store branch, but - as
Edison mentioned - it might be easier if we merge the SolidFire branch into
the object_store branch before object_store goes into master.

I'm not sure how the disk_io_throttling fits into this merge strategy.
Perhaps Wei can chime in on that.


On Tue, Jun 11, 2013 at 11:07 AM, John Burwell  wrote:

> Mike,
>
> We have a delicate merge dance to perform.  The disk_io_throttling,
> solidfire, and object_store appear to have a number of overlapping
> elements.  I understand the dependencies between the patches to be as
> follows:
>
> object_store <- solidfire -> disk_io_throttling
>
> Am I correct that the device management aspects of SolidFire are additive
> to the object_store branch or there are circular dependency between the
> branches?  Once we understand the dependency graph, we can determine the
> best approach to land the changes in master.
>
> Thanks,
> -John
>
>
> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski 
> wrote:
>
> > Also, if we are good with Edison merging my code into his branch before
> > going into master, I am good with that.
> >
> > We can remove the StoragePoolType.Dynamic code after his merge and we can
> > deal with Burst IOPS then, as well.
> >
> >
> > On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> Let me make sure I follow where we're going here:
> >>
> >> 1) There should be NO references to hypervisor code in the storage
> >> plug-ins code (this includes the default storage plug-in, which
> currently
> >> sends several commands to the hypervisor in use (although it does not
> know
> >> which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> >>
> >> 2) managed=true or managed=false can be placed in the url field (if not
> >> present, we default to false). This info is stored in the
> >> storage_pool_details table.
> >>
> >> 3) When the "attach" command is sent to the hypervisor in question, we
> >> pass the managed property along (this takes the place of the
> >> StoragePoolType.Dynamic check).
> >>
> >> 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
> >> property. If true for an attach, the necessary hypervisor data
> structure is
> >> created and the rest of the attach command executes to attach the
> volume.
> >>
> >> 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
> >> same check is made. If managed, the hypervisor data structure is
> removed.
> >>
> >> 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
> >> stored in the volumes and disk_offerings table. If we have some idea,
> >> that'd be cool.
> >>
> >> Thanks!
> >>
> >>
> >> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com> wrote:
> >>
> >>> "+1 -- Burst IOPS can be implemented while avoiding implementation
> >>> attributes.  I always wondered about the details field.  I think we
> should
> >>> beef up the description in the documentation regarding the expected
> format
> >>> of the field.  In 4.1, I noticed that the details are not returned on
> the
> >>> createStoratePool updateStoragePool, or listStoragePool response.  Why
> >>> don't we return it?  It seems like it would be useful for clients to be
> >>> able to inspect the contents of the details field."
> >>>
> >>> Not sure how this would work storing Burst IOPS here.
> >>>
> >>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> >>> basis. For each Disk Offering created, you have to be able to associate
> >>> unique Burst IOPS. There is a disk_offering_details table. Maybe it
> could
> >>> go there?
> >>>
> >>> I'm also not sure how you would accept the Burst IOPS in the GUI if
> it's
> >>> not stored like the Min and Max fields are in the DB.
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkow...@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Kelven Yang


On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:

>Hi,
>
>I am implementing PCI-Passthrough to use with CloudStack for use with
>high-performance networking (10 Gigabit Ethernet/Infiniband).
>
>The current design is to attach a PCI ID (from lspci) to a compute
>offering. (Not a network offering since from CloudStack¹s point of view,
>the pass through device has nothing to do with network and may as well be
>used for other things.) A host tag can be used to limit deployment to
>machines with the required PCI device.


>
>Then, when starting the virtual machine, the PCI ID is passed into
>VirtualMachineTO to the agent (currently using KVM) and the agent creates
>a
>corresponding  (
>http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-
>PCI_Pass.html)
>tag and then libvirt will handle the rest.


VirtualMachineTO.params is designed to carry generic VM specific
configurations, these configuration parameters can either be statically
linked with the VM or dynamically populated based on other factors like
this one. Are you passing PCI ID using VirtualMachineTO.params?

>
>For allocation, the current idea is to use CloudStack¹s capacity system
>(at
>the same place where allocation of CPU and RAM is determined) to limit 1
>PCI-Passthrough VM per physical host.
>
>The current design has many limitations such as:
>
>   - One physical host can only have 1 VM with PCI-Passthrough, even if
>   many PCI-cards with equivalent functions are available
>   - The PCI ID is fixed inside the compute offering, so all machines have
>   to be homogeneous and have the same PCI ID for the device.

Anything that affects VM placement could have impact to HA/migration, we
probably need some graceful error-handling in these code paths, hopefully
these have been taken care of.

>
>The initial implementation is working. Any suggestions and comments are
>welcomed.
>
>Thank you,
>Pawit



Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread John Burwell
Mike,

So my dependency graph below is incorrect.  If there is no dependency between 
object_store and solidfire, why wouldn't merge them separately?  I ask because 
the object_store patch is already very large.  As a reviewer try to comprehend 
the changes, a series of smaller of patches is easier to digest .

Thanks,
-John

On Jun 11, 2013, at 1:10 PM, Mike Tutkowski  
wrote:

> Hey John,
> 
> The SolidFire patch does not depend on the object_store branch, but - as
> Edison mentioned - it might be easier if we merge the SolidFire branch into
> the object_store branch before object_store goes into master.
> 
> I'm not sure how the disk_io_throttling fits into this merge strategy.
> Perhaps Wei can chime in on that.
> 
> 
> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell  wrote:
> 
>> Mike,
>> 
>> We have a delicate merge dance to perform.  The disk_io_throttling,
>> solidfire, and object_store appear to have a number of overlapping
>> elements.  I understand the dependencies between the patches to be as
>> follows:
>> 
>>object_store <- solidfire -> disk_io_throttling
>> 
>> Am I correct that the device management aspects of SolidFire are additive
>> to the object_store branch or there are circular dependency between the
>> branches?  Once we understand the dependency graph, we can determine the
>> best approach to land the changes in master.
>> 
>> Thanks,
>> -John
>> 
>> 
>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski 
>> wrote:
>> 
>>> Also, if we are good with Edison merging my code into his branch before
>>> going into master, I am good with that.
>>> 
>>> We can remove the StoragePoolType.Dynamic code after his merge and we can
>>> deal with Burst IOPS then, as well.
>>> 
>>> 
>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>> 
 Let me make sure I follow where we're going here:
 
 1) There should be NO references to hypervisor code in the storage
 plug-ins code (this includes the default storage plug-in, which
>> currently
 sends several commands to the hypervisor in use (although it does not
>> know
 which hypervisor (XenServer, ESX(i), etc.) is actually in use))
 
 2) managed=true or managed=false can be placed in the url field (if not
 present, we default to false). This info is stored in the
 storage_pool_details table.
 
 3) When the "attach" command is sent to the hypervisor in question, we
 pass the managed property along (this takes the place of the
 StoragePoolType.Dynamic check).
 
 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
 property. If true for an attach, the necessary hypervisor data
>> structure is
 created and the rest of the attach command executes to attach the
>> volume.
 
 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
 same check is made. If managed, the hypervisor data structure is
>> removed.
 
 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
 stored in the volumes and disk_offerings table. If we have some idea,
 that'd be cool.
 
 Thanks!
 
 
 On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com> wrote:
 
> "+1 -- Burst IOPS can be implemented while avoiding implementation
> attributes.  I always wondered about the details field.  I think we
>> should
> beef up the description in the documentation regarding the expected
>> format
> of the field.  In 4.1, I noticed that the details are not returned on
>> the
> createStoratePool updateStoragePool, or listStoragePool response.  Why
> don't we return it?  It seems like it would be useful for clients to be
> able to inspect the contents of the details field."
> 
> Not sure how this would work storing Burst IOPS here.
> 
> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> basis. For each Disk Offering created, you have to be able to associate
> unique Burst IOPS. There is a disk_offering_details table. Maybe it
>> could
> go there?
> 
> I'm also not sure how you would accept the Burst IOPS in the GUI if
>> it's
> not stored like the Min and Max fields are in the DB.
> 
 
 
 
 --
 *Mike Tutkowski*
 *Senior CloudStack Developer, SolidFire Inc.*
 e: mike.tutkow...@solidfire.com
 o: 303.746.7302
 Advancing the way the world uses the cloud<
>> http://solidfire.com/solution/overview/?video=play>
 *™*
 
>>> 
>>> 
>>> 
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkow...@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the
>>> cloud
>>> *™*
>> 
>> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the worl

Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Mike Tutkowski
My comments are below in *red*.

Thanks!


On Tue, Jun 11, 2013 at 11:01 AM, John Burwell  wrote:

> Mike,
>
> Please see my responses in-line below.
>
> Thanks,
> -John
>
> On Jun 10, 2013, at 11:08 PM, Mike Tutkowski 
> wrote:
>
> > Let me make sure I follow where we're going here:
> >
> > 1) There should be NO references to hypervisor code in the storage
> plug-ins
> > code (this includes the default storage plug-in, which currently sends
> > several commands to the hypervisor in use (although it does not know
> which
> > hypervisor (XenServer, ESX(i), etc.) is actually in use))
>

*Agreed. The default storage plug-in (which has references to hypervisor
code) will be left as is for 4.2. Perhaps this can be addressed in 4.3.*

>
> The Storage->Hypervisor dependencies have been in CloudStack for some
> time.  My goal is eventually eliminate these, and as part of that
> evolution, I don't want to see any more such dependencies added.
>  Additionally, as we invert the dependency in new code, it will lay the
> foundation for remove the existing Storage->Hypervisor dependencies.
>
> >
> > 2) managed=true or managed=false can be placed in the url field (if not
> > present, we default to false). This info is stored in the
> > storage_pool_details table.
>
> As I understand the data model, storage_pool_details implementation
> specific properties.  I see the managed flag as a common value for all
> storage pools calculated as follows:
>
> - If the associated driver does not support device management, the
> value is always set to false.
> - If the associated driver supports device management, the value
> defaults to true, but can be overridden when the device definition is
> created
>
> As such, it seems to be that it should be a new column on the storage_pool
> table.
>

*Let's see here...when you add a new Primary Storage into CloudStack, one
of the new parameters is called "Provider". If you do not set this
parameter, it defaults to Edison's default storage plug-in. If you specify,
say, "SolidFire", then it will be associated with my plug-in. Either way, a
new row is added to the storage_pool table. This table could have a new
column, called "managed", that contains the data we later send to the
hypervisor to let it know if it needs to create, say on Xen, an SR.*
*
*
*Is this what you're thinking, John?*

>
> >
> > 3) When the "attach" command is sent to the hypervisor in question, we
> pass
> > the managed property along (this takes the place of the
> > StoragePoolType.Dynamic check).
> >
> > 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
> > property. If true for an attach, the necessary hypervisor data structure
> is
> > created and the rest of the attach command executes to attach the volume.
>
> > 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
> > same check is made. If managed, the hypervisor data structure is removed.
>
> Sounds reasonable to me.  Will there is be a new method added to the
> Hypervisor plugin to create this data structure (e.g.
> createVMStorage(VolumeTO))?


*Yeah, I already have the code, I just need to factor it into a new method
that can be called from the "attach/detach" method.*

>
> >
> > 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
> > stored in the volumes and disk_offerings table. If we have some idea,
> > that'd be cool.
>
> Shucks.  It sounds like the StoragePool#details won't sufficient.  I think
> we need to address extended data in a number of places (e.g. hypervisor,
> storage, and network drivers, compute and disk offerings, etc).  I propose
> that we address it broadly in 4.3 in a manner that provides a mechanism to
> store, describe, validate, and render them.
>

*Waiting until 4.3 is fine. I can set the Burst above the Max by a certain
percentage automatically.*

>
> >
> > Thanks!
> >
> >
> > On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> "+1 -- Burst IOPS can be implemented while avoiding implementation
> >> attributes.  I always wondered about the details field.  I think we
> should
> >> beef up the description in the documentation regarding the expected
> format
> >> of the field.  In 4.1, I noticed that the details are not returned on
> the
> >> createStoratePool updateStoragePool, or listStoragePool response.  Why
> >> don't we return it?  It seems like it would be useful for clients to be
> >> able to inspect the contents of the details field."
> >>
> >> Not sure how this would work storing Burst IOPS here.
> >>
> >> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> basis.
> >> For each Disk Offering created, you have to be able to associate unique
> >> Burst IOPS. There is a disk_offering_details table. Maybe it could go
> there?
> >>
> >> I'm also not sure how you would accept the Burst IOPS in the GUI if it's
> >> not stored like the Min and Max fields are in the DB.
> >>
> >
> >
> >
> > --
> > *

Re: [NOTICE] CloudStack 4.1.1 release

2013-06-11 Thread Kelven Yang
I just fixed a critical bug that could cause XenServer host to be out of
service. I would like the fix to be merged into 4.1.1 release
https://issues.apache.org/jira/browse/CLOUDSTACK-2925


Kelven


On 6/11/13 10:01 AM, "Musayev, Ilya"  wrote:

>Just FYI, I'm going to be unavailable from Friday 14 - 21st of June and
>then attending CS conference from June 23-25. Prior to 14th  of June, I
>have lots of deliverables at $dayjob and while I would like to work on
>ACS side, its physically not possible for the next 10 days or so.
>
>I've asked Chip to help me with release of ACS 4.1.1 as I will be
>unavailable for extended period of time and we wanted to release 4.1.1
>sooner.
>
>I anticipate my load to get lighter post all the travel  and can focus on
>ACS RM work then.
>
>I'll be trolling through JIRA/GIT today to see what can be merged into
>4.1. If you know of an issue that has been resolved and is applicable to
>4.1, please lets us know and if possible, commit.
>
>Thank you Chip for helping,
>
>Regards,
>ilya



Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Mike Tutkowski
I am OK with it either way.

Edison, do you still have a preference?

Thanks!


On Tue, Jun 11, 2013 at 11:14 AM, John Burwell  wrote:

> Mike,
>
> So my dependency graph below is incorrect.  If there is no dependency
> between object_store and solidfire, why wouldn't merge them separately?  I
> ask because the object_store patch is already very large.  As a reviewer
> try to comprehend the changes, a series of smaller of patches is easier to
> digest .
>
> Thanks,
> -John
>
> On Jun 11, 2013, at 1:10 PM, Mike Tutkowski 
> wrote:
>
> > Hey John,
> >
> > The SolidFire patch does not depend on the object_store branch, but - as
> > Edison mentioned - it might be easier if we merge the SolidFire branch
> into
> > the object_store branch before object_store goes into master.
> >
> > I'm not sure how the disk_io_throttling fits into this merge strategy.
> > Perhaps Wei can chime in on that.
> >
> >
> > On Tue, Jun 11, 2013 at 11:07 AM, John Burwell 
> wrote:
> >
> >> Mike,
> >>
> >> We have a delicate merge dance to perform.  The disk_io_throttling,
> >> solidfire, and object_store appear to have a number of overlapping
> >> elements.  I understand the dependencies between the patches to be as
> >> follows:
> >>
> >>object_store <- solidfire -> disk_io_throttling
> >>
> >> Am I correct that the device management aspects of SolidFire are
> additive
> >> to the object_store branch or there are circular dependency between the
> >> branches?  Once we understand the dependency graph, we can determine the
> >> best approach to land the changes in master.
> >>
> >> Thanks,
> >> -John
> >>
> >>
> >> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> >> wrote:
> >>
> >>> Also, if we are good with Edison merging my code into his branch before
> >>> going into master, I am good with that.
> >>>
> >>> We can remove the StoragePoolType.Dynamic code after his merge and we
> can
> >>> deal with Burst IOPS then, as well.
> >>>
> >>>
> >>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> >>> mike.tutkow...@solidfire.com> wrote:
> >>>
>  Let me make sure I follow where we're going here:
> 
>  1) There should be NO references to hypervisor code in the storage
>  plug-ins code (this includes the default storage plug-in, which
> >> currently
>  sends several commands to the hypervisor in use (although it does not
> >> know
>  which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> 
>  2) managed=true or managed=false can be placed in the url field (if
> not
>  present, we default to false). This info is stored in the
>  storage_pool_details table.
> 
>  3) When the "attach" command is sent to the hypervisor in question, we
>  pass the managed property along (this takes the place of the
>  StoragePoolType.Dynamic check).
> 
>  4) execute(AttachVolumeCommand) in the hypervisor checks for the
> managed
>  property. If true for an attach, the necessary hypervisor data
> >> structure is
>  created and the rest of the attach command executes to attach the
> >> volume.
> 
>  5) When execute(AttachVolumeCommand) is invoked to detach a volume,
> the
>  same check is made. If managed, the hypervisor data structure is
> >> removed.
> 
>  6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
>  stored in the volumes and disk_offerings table. If we have some idea,
>  that'd be cool.
> 
>  Thanks!
> 
> 
>  On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
>  mike.tutkow...@solidfire.com> wrote:
> 
> > "+1 -- Burst IOPS can be implemented while avoiding implementation
> > attributes.  I always wondered about the details field.  I think we
> >> should
> > beef up the description in the documentation regarding the expected
> >> format
> > of the field.  In 4.1, I noticed that the details are not returned on
> >> the
> > createStoratePool updateStoragePool, or listStoragePool response.
>  Why
> > don't we return it?  It seems like it would be useful for clients to
> be
> > able to inspect the contents of the details field."
> >
> > Not sure how this would work storing Burst IOPS here.
> >
> > Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> > basis. For each Disk Offering created, you have to be able to
> associate
> > unique Burst IOPS. There is a disk_offering_details table. Maybe it
> >> could
> > go there?
> >
> > I'm also not sure how you would accept the Burst IOPS in the GUI if
> >> it's
> > not stored like the Min and Max fields are in the DB.
> >
> 
> 
> 
>  --
>  *Mike Tutkowski*
>  *Senior CloudStack Developer, SolidFire Inc.*
>  e: mike.tutkow...@solidfire.com
>  o: 303.746.7302
>  Advancing the way the world uses the cloud<
> >> http://solidfire.com/solution/overview/?video=play>
>  *™*
> 
> >>>
> >>>
> >>>
> >>

RE: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Edison Su
Will you be on today's Go to meeting? We can talk about your stuff.

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Tuesday, June 11, 2013 10:20 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE] disk_io_throttling to MASTER
> 
> I am OK with it either way.
> 
> Edison, do you still have a preference?
> 
> Thanks!
> 
> 
> On Tue, Jun 11, 2013 at 11:14 AM, John Burwell 
> wrote:
> 
> > Mike,
> >
> > So my dependency graph below is incorrect.  If there is no dependency
> > between object_store and solidfire, why wouldn't merge them
> > separately?  I ask because the object_store patch is already very
> > large.  As a reviewer try to comprehend the changes, a series of
> > smaller of patches is easier to digest .
> >
> > Thanks,
> > -John
> >
> > On Jun 11, 2013, at 1:10 PM, Mike Tutkowski
> > 
> > wrote:
> >
> > > Hey John,
> > >
> > > The SolidFire patch does not depend on the object_store branch, but
> > > - as Edison mentioned - it might be easier if we merge the SolidFire
> > > branch
> > into
> > > the object_store branch before object_store goes into master.
> > >
> > > I'm not sure how the disk_io_throttling fits into this merge strategy.
> > > Perhaps Wei can chime in on that.
> > >
> > >
> > > On Tue, Jun 11, 2013 at 11:07 AM, John Burwell 
> > wrote:
> > >
> > >> Mike,
> > >>
> > >> We have a delicate merge dance to perform.  The disk_io_throttling,
> > >> solidfire, and object_store appear to have a number of overlapping
> > >> elements.  I understand the dependencies between the patches to be
> > >> as
> > >> follows:
> > >>
> > >>object_store <- solidfire -> disk_io_throttling
> > >>
> > >> Am I correct that the device management aspects of SolidFire are
> > additive
> > >> to the object_store branch or there are circular dependency between
> > >> the branches?  Once we understand the dependency graph, we can
> > >> determine the best approach to land the changes in master.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >>
> > >> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > >> wrote:
> > >>
> > >>> Also, if we are good with Edison merging my code into his branch
> > >>> before going into master, I am good with that.
> > >>>
> > >>> We can remove the StoragePoolType.Dynamic code after his merge
> and
> > >>> we
> > can
> > >>> deal with Burst IOPS then, as well.
> > >>>
> > >>>
> > >>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> > >>> mike.tutkow...@solidfire.com> wrote:
> > >>>
> >  Let me make sure I follow where we're going here:
> > 
> >  1) There should be NO references to hypervisor code in the
> >  storage plug-ins code (this includes the default storage plug-in,
> >  which
> > >> currently
> >  sends several commands to the hypervisor in use (although it does
> >  not
> > >> know
> >  which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> > 
> >  2) managed=true or managed=false can be placed in the url field
> >  (if
> > not
> >  present, we default to false). This info is stored in the
> >  storage_pool_details table.
> > 
> >  3) When the "attach" command is sent to the hypervisor in
> >  question, we pass the managed property along (this takes the
> >  place of the StoragePoolType.Dynamic check).
> > 
> >  4) execute(AttachVolumeCommand) in the hypervisor checks for the
> > managed
> >  property. If true for an attach, the necessary hypervisor data
> > >> structure is
> >  created and the rest of the attach command executes to attach the
> > >> volume.
> > 
> >  5) When execute(AttachVolumeCommand) is invoked to detach a
> >  volume,
> > the
> >  same check is made. If managed, the hypervisor data structure is
> > >> removed.
> > 
> >  6) I do not see an clear way to support Burst IOPS in 4.2 unless
> >  it is stored in the volumes and disk_offerings table. If we have
> >  some idea, that'd be cool.
> > 
> >  Thanks!
> > 
> > 
> >  On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> >  mike.tutkow...@solidfire.com> wrote:
> > 
> > > "+1 -- Burst IOPS can be implemented while avoiding
> > > implementation attributes.  I always wondered about the details
> > > field.  I think we
> > >> should
> > > beef up the description in the documentation regarding the
> > > expected
> > >> format
> > > of the field.  In 4.1, I noticed that the details are not
> > > returned on
> > >> the
> > > createStoratePool updateStoragePool, or listStoragePool response.
> >  Why
> > > don't we return it?  It seems like it would be useful for
> > > clients to
> > be
> > > able to inspect the contents of the details field."
> > >
> > > Not sure how this would work storing Burst IOPS here.
> > >
> > > Burst IOPS need to be variable on a Disk Offering-by-Disk
> > > Offering basis. For eac

Create a VDI in an SR as large as possible

2013-06-11 Thread Mike Tutkowski
Hi,

I want to create an SR that has a single VDI that takes up all of the
available space of the SR.

The SR has some metadata on it, so I can't just set the size of the VDI
equal to the size of the SR.

Right now, I take the size of the SR and trim off some hard-coded number.

Anyone know how I can come up with this number dynamically?

Thanks!

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: git commit: updated refs/heads/master to a59067e

2013-06-11 Thread Jessica Wang
Wei,

Because Network tab is for normal user and normal user is not allowed to add 
shared network.

Only root-admin is allowed to add shared network and root-admin should do it in 
Infrastructure tab. 
(Infrastructure tab is only available to root-admin)

Jessica

-Original Message-
From: Wei ZHOU [mailto:ustcweiz...@gmail.com] 
Sent: Tuesday, June 11, 2013 6:59 AM
To: dev@cloudstack.apache.org
Subject: Re: git commit: updated refs/heads/master to a59067e

Hi Jessica,

I was wondering why shared network can not be added here?

-Wei


2013/6/10 

> Updated Branches:
>   refs/heads/master 40982ccef -> a59067e94
>
>
> CLOUDSTACK UI - network menu - create guest network dialog - change label.
>
>
> Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
> Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/a59067e9
> Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/a59067e9
> Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/a59067e9
>
> Branch: refs/heads/master
> Commit: a59067e94f7095a2448d342d5eed0ffee5f066c0
> Parents: 40982cc
> Author: Jessica Wang 
> Authored: Mon Jun 10 13:43:07 2013 -0700
> Committer: Jessica Wang 
> Committed: Mon Jun 10 13:43:07 2013 -0700
>
> --
>  ui/scripts/network.js | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/cloudstack/blob/a59067e9/ui/scripts/network.js
> --
> diff --git a/ui/scripts/network.js b/ui/scripts/network.js
> index 9e60cbc..61468fc 100755
> --- a/ui/scripts/network.js
> +++ b/ui/scripts/network.js
> @@ -320,8 +320,8 @@
>  title: 'label.guest.networks',
>  listView: {
>actions: {
> -add: { //add Isolated guest network (can't add Shared guest
> network here)
> -  label: 'Add Isolated Guest Network',
> +add: {
> +  label: 'Add Isolated Guest Network with SourceNat',
>
>preFilter: function(args) { //Isolated networks is only
> supported in Advanced (SG-disabled) zone
>  if(args.context.zoneType != 'Basic')
> @@ -331,8 +331,7 @@
>},
>
>createForm: {
> -title: 'Add Isolated Guest Network',
> -desc: 'Add Isolated Guest Network with SourceNat',
> +title: 'Add Isolated Guest Network with SourceNat',
>  fields: {
>name: { label: 'label.name', validation: { required:
> true }, docID: 'helpGuestNetworkName' },
>displayText: { label: 'label.display.text', validation:
> { required: true }, docID: 'helpGuestNetworkDisplayText'},
>
>


Re: Create a VDI in an SR as large as possible

2013-06-11 Thread Mike Tutkowski
At one point I tried the following, but it didn't work (the VDI's size was
set too high):

vdir.virtualSize = sr.getPhysicalSize(conn) -
sr.getPhysicalUtilisation(conn);


On Tue, Jun 11, 2013 at 11:31 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi,
>
> I want to create an SR that has a single VDI that takes up all of the
> available space of the SR.
>
> The SR has some metadata on it, so I can't just set the size of the VDI
> equal to the size of the SR.
>
> Right now, I take the size of the SR and trim off some hard-coded number.
>
> Anyone know how I can come up with this number dynamically?
>
> Thanks!
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the 
> cloud
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: Review Request: Fix systemVM template job

2013-06-11 Thread Chiradeep Vittal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11802/#review21716
---

Ship it!


Ship It!

- Chiradeep Vittal


On June 11, 2013, 12:39 p.m., Prasanna Santhanam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11802/
> ---
> 
> (Updated June 11, 2013, 12:39 p.m.)
> 
> 
> Review request for cloudstack, Chiradeep Vittal and Rohit Yadav.
> 
> 
> Description
> ---
> 
> Putting this here since git-asf is down at the moment:
> 
> When both systemvmtemplate64 and systemvmtemplate are present the pattern 
> match can fail and return the hdd path of the 64-bit template. Do a perfect 
> match by including the path separator (/) in the grep expression
> 
> 
> Diffs
> -
> 
>   tools/appliance/build.sh 0216c06 
> 
> Diff: https://reviews.apache.org/r/11802/diff/
> 
> 
> Testing
> ---
> 
> System VM job is able to run manually
> 
> 
> Thanks,
> 
> Prasanna Santhanam
> 
>



Re: Create a VDI in an SR as large as possible

2013-06-11 Thread Mike Tutkowski
Also, I've tried leaving the "virtualsize" property un-set, but it defaults
to a fairly small size.


On Tue, Jun 11, 2013 at 11:37 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> At one point I tried the following, but it didn't work (the VDI's size was
> set too high):
>
> vdir.virtualSize = sr.getPhysicalSize(conn) -
> sr.getPhysicalUtilisation(conn);
>
>
> On Tue, Jun 11, 2013 at 11:31 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Hi,
>>
>> I want to create an SR that has a single VDI that takes up all of the
>> available space of the SR.
>>
>> The SR has some metadata on it, so I can't just set the size of the VDI
>> equal to the size of the SR.
>>
>> Right now, I take the size of the SR and trim off some hard-coded number.
>>
>> Anyone know how I can come up with this number dynamically?
>>
>> Thanks!
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the 
>> cloud
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the 
> cloud
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: Create a VDI in an SR as large as possible

2013-06-11 Thread Mike Tutkowski
Thanks for that info, Anthony.


On Tue, Jun 11, 2013 at 11:43 AM, Anthony Xu  wrote:

> Please notice the maximum size of VDI is 2T,
>
> Anthony
>
> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Tuesday, June 11, 2013 10:37 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Create a VDI in an SR as large as possible
>
> At one point I tried the following, but it didn't work (the VDI's size was
> set too high):
>
> vdir.virtualSize = sr.getPhysicalSize(conn) -
> sr.getPhysicalUtilisation(conn);
>
>
> On Tue, Jun 11, 2013 at 11:31 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > Hi,
> >
> > I want to create an SR that has a single VDI that takes up all of the
> > available space of the SR.
> >
> > The SR has some metadata on it, so I can't just set the size of the
> > VDI equal to the size of the SR.
> >
> > Right now, I take the size of the SR and trim off some hard-coded number.
> >
> > Anyone know how I can come up with this number dynamically?
> >
> > Thanks!
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *(tm)*
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: Create a VDI in an SR as large as possible

2013-06-11 Thread Anthony Xu
Please notice the maximum size of VDI is 2T,

Anthony

-Original Message-
From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com] 
Sent: Tuesday, June 11, 2013 10:37 AM
To: dev@cloudstack.apache.org
Subject: Re: Create a VDI in an SR as large as possible

At one point I tried the following, but it didn't work (the VDI's size was set 
too high):

vdir.virtualSize = sr.getPhysicalSize(conn) - sr.getPhysicalUtilisation(conn);


On Tue, Jun 11, 2013 at 11:31 AM, Mike Tutkowski < 
mike.tutkow...@solidfire.com> wrote:

> Hi,
>
> I want to create an SR that has a single VDI that takes up all of the 
> available space of the SR.
>
> The SR has some metadata on it, so I can't just set the size of the 
> VDI equal to the size of the SR.
>
> Right now, I take the size of the SR and trim off some hard-coded number.
>
> Anyone know how I can come up with this number dynamically?
>
> Thanks!
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the 
> cloud
> *(tm)*
>



--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*(tm)*


Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Wei ZHOU
Hi Mike,

It looks the two feature do not have many conflicts in Java code, except
the cloudstack UI.
If you do not mind, I will merge disk_io_throttling branch into master this
week, so that you can develop based on it.

-Wei


2013/6/11 Mike Tutkowski 

> Hey John,
>
> The SolidFire patch does not depend on the object_store branch, but - as
> Edison mentioned - it might be easier if we merge the SolidFire branch into
> the object_store branch before object_store goes into master.
>
> I'm not sure how the disk_io_throttling fits into this merge strategy.
> Perhaps Wei can chime in on that.
>
>
> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell  wrote:
>
> > Mike,
> >
> > We have a delicate merge dance to perform.  The disk_io_throttling,
> > solidfire, and object_store appear to have a number of overlapping
> > elements.  I understand the dependencies between the patches to be as
> > follows:
> >
> > object_store <- solidfire -> disk_io_throttling
> >
> > Am I correct that the device management aspects of SolidFire are additive
> > to the object_store branch or there are circular dependency between the
> > branches?  Once we understand the dependency graph, we can determine the
> > best approach to land the changes in master.
> >
> > Thanks,
> > -John
> >
> >
> > On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> > wrote:
> >
> > > Also, if we are good with Edison merging my code into his branch before
> > > going into master, I am good with that.
> > >
> > > We can remove the StoragePoolType.Dynamic code after his merge and we
> can
> > > deal with Burst IOPS then, as well.
> > >
> > >
> > > On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Let me make sure I follow where we're going here:
> > >>
> > >> 1) There should be NO references to hypervisor code in the storage
> > >> plug-ins code (this includes the default storage plug-in, which
> > currently
> > >> sends several commands to the hypervisor in use (although it does not
> > know
> > >> which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> > >>
> > >> 2) managed=true or managed=false can be placed in the url field (if
> not
> > >> present, we default to false). This info is stored in the
> > >> storage_pool_details table.
> > >>
> > >> 3) When the "attach" command is sent to the hypervisor in question, we
> > >> pass the managed property along (this takes the place of the
> > >> StoragePoolType.Dynamic check).
> > >>
> > >> 4) execute(AttachVolumeCommand) in the hypervisor checks for the
> managed
> > >> property. If true for an attach, the necessary hypervisor data
> > structure is
> > >> created and the rest of the attach command executes to attach the
> > volume.
> > >>
> > >> 5) When execute(AttachVolumeCommand) is invoked to detach a volume,
> the
> > >> same check is made. If managed, the hypervisor data structure is
> > removed.
> > >>
> > >> 6) I do not see an clear way to support Burst IOPS in 4.2 unless it is
> > >> stored in the volumes and disk_offerings table. If we have some idea,
> > >> that'd be cool.
> > >>
> > >> Thanks!
> > >>
> > >>
> > >> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> > >> mike.tutkow...@solidfire.com> wrote:
> > >>
> > >>> "+1 -- Burst IOPS can be implemented while avoiding implementation
> > >>> attributes.  I always wondered about the details field.  I think we
> > should
> > >>> beef up the description in the documentation regarding the expected
> > format
> > >>> of the field.  In 4.1, I noticed that the details are not returned on
> > the
> > >>> createStoratePool updateStoragePool, or listStoragePool response.
>  Why
> > >>> don't we return it?  It seems like it would be useful for clients to
> be
> > >>> able to inspect the contents of the details field."
> > >>>
> > >>> Not sure how this would work storing Burst IOPS here.
> > >>>
> > >>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> > >>> basis. For each Disk Offering created, you have to be able to
> associate
> > >>> unique Burst IOPS. There is a disk_offering_details table. Maybe it
> > could
> > >>> go there?
> > >>>
> > >>> I'm also not sure how you would accept the Burst IOPS in the GUI if
> > it's
> > >>> not stored like the Min and Max fields are in the DB.
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkow...@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *™*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkow...@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud
> > > *™*
> >
> >
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> 

cloudstack UI: add shared network

2013-06-11 Thread Jessica Wang
Wei,

> What do you think about show/hide the button according to account/user on 
> Network tab?
Network tab is for normal user.
We put everything that only root-admin is allowed to access in Infrastructure 
tab. This is by design. We probably won't change it.

> I can not find a page to add shared networks in Infracstruture tab.
Please go to Infrastructure tab > Zones > Physical Network tab > Guest > 
Network tab => "Add guest network" button

Jessica


From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
Sent: Tuesday, June 11, 2013 10:57 AM
To: Jessica Wang
Subject: Re: git commit: updated refs/heads/master to a59067e

Jessica,

What do you think about show/hide the button according to account/user on 
Network tab?
I am now testing advanced zone with security groups. I can not find a page to 
add shared networks in Infracstruture tab.

-Wei

2013/6/11 Jessica Wang mailto:jessica.w...@citrix.com>>
Wei,

Because Network tab is for normal user and normal user is not allowed to add 
shared network.

Only root-admin is allowed to add shared network and root-admin should do it in 
Infrastructure tab.
(Infrastructure tab is only available to root-admin)

Jessica

-Original Message-
From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
Sent: Tuesday, June 11, 2013 6:59 AM
To: dev@cloudstack.apache.org
Subject: Re: git commit: updated refs/heads/master to a59067e

Hi Jessica,

I was wondering why shared network can not be added here?

-Wei


2013/6/10 mailto:jessicaw...@apache.org>>

> Updated Branches:
>   refs/heads/master 40982ccef -> a59067e94
>
>
> CLOUDSTACK UI - network menu - create guest network dialog - change label.
>
>
> Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
> Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/a59067e9
> Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/a59067e9
> Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/a59067e9
>
> Branch: refs/heads/master
> Commit: a59067e94f7095a2448d342d5eed0ffee5f066c0
> Parents: 40982cc
> Author: Jessica Wang mailto:jessicaw...@apache.org>>
> Authored: Mon Jun 10 13:43:07 2013 -0700
> Committer: Jessica Wang 
> mailto:jessicaw...@apache.org>>
> Committed: Mon Jun 10 13:43:07 2013 -0700
>
> --
>  ui/scripts/network.js | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/cloudstack/blob/a59067e9/ui/scripts/network.js
> --
> diff --git a/ui/scripts/network.js b/ui/scripts/network.js
> index 9e60cbc..61468fc 100755
> --- a/ui/scripts/network.js
> +++ b/ui/scripts/network.js
> @@ -320,8 +320,8 @@
>  title: 'label.guest.networks',
>  listView: {
>actions: {
> -add: { //add Isolated guest network (can't add Shared guest
> network here)
> -  label: 'Add Isolated Guest Network',
> +add: {
> +  label: 'Add Isolated Guest Network with SourceNat',
>
>preFilter: function(args) { //Isolated networks is only
> supported in Advanced (SG-disabled) zone
>  if(args.context.zoneType != 'Basic')
> @@ -331,8 +331,7 @@
>},
>
>createForm: {
> -title: 'Add Isolated Guest Network',
> -desc: 'Add Isolated Guest Network with SourceNat',
> +title: 'Add Isolated Guest Network with SourceNat',
>  fields: {
>name: { label: 'label.name', 
> validation: { required:
> true }, docID: 'helpGuestNetworkName' },
>displayText: { label: 'label.display.text', validation:
> { required: true }, docID: 'helpGuestNetworkDisplayText'},
>
>



Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Mike Tutkowski
Sure, that sounds good.


On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU  wrote:

> Hi Mike,
>
> It looks the two feature do not have many conflicts in Java code, except
> the cloudstack UI.
> If you do not mind, I will merge disk_io_throttling branch into master this
> week, so that you can develop based on it.
>
> -Wei
>
>
> 2013/6/11 Mike Tutkowski 
>
> > Hey John,
> >
> > The SolidFire patch does not depend on the object_store branch, but - as
> > Edison mentioned - it might be easier if we merge the SolidFire branch
> into
> > the object_store branch before object_store goes into master.
> >
> > I'm not sure how the disk_io_throttling fits into this merge strategy.
> > Perhaps Wei can chime in on that.
> >
> >
> > On Tue, Jun 11, 2013 at 11:07 AM, John Burwell 
> wrote:
> >
> > > Mike,
> > >
> > > We have a delicate merge dance to perform.  The disk_io_throttling,
> > > solidfire, and object_store appear to have a number of overlapping
> > > elements.  I understand the dependencies between the patches to be as
> > > follows:
> > >
> > > object_store <- solidfire -> disk_io_throttling
> > >
> > > Am I correct that the device management aspects of SolidFire are
> additive
> > > to the object_store branch or there are circular dependency between the
> > > branches?  Once we understand the dependency graph, we can determine
> the
> > > best approach to land the changes in master.
> > >
> > > Thanks,
> > > -John
> > >
> > >
> > > On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > > wrote:
> > >
> > > > Also, if we are good with Edison merging my code into his branch
> before
> > > > going into master, I am good with that.
> > > >
> > > > We can remove the StoragePoolType.Dynamic code after his merge and we
> > can
> > > > deal with Burst IOPS then, as well.
> > > >
> > > >
> > > > On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> > > > mike.tutkow...@solidfire.com> wrote:
> > > >
> > > >> Let me make sure I follow where we're going here:
> > > >>
> > > >> 1) There should be NO references to hypervisor code in the storage
> > > >> plug-ins code (this includes the default storage plug-in, which
> > > currently
> > > >> sends several commands to the hypervisor in use (although it does
> not
> > > know
> > > >> which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> > > >>
> > > >> 2) managed=true or managed=false can be placed in the url field (if
> > not
> > > >> present, we default to false). This info is stored in the
> > > >> storage_pool_details table.
> > > >>
> > > >> 3) When the "attach" command is sent to the hypervisor in question,
> we
> > > >> pass the managed property along (this takes the place of the
> > > >> StoragePoolType.Dynamic check).
> > > >>
> > > >> 4) execute(AttachVolumeCommand) in the hypervisor checks for the
> > managed
> > > >> property. If true for an attach, the necessary hypervisor data
> > > structure is
> > > >> created and the rest of the attach command executes to attach the
> > > volume.
> > > >>
> > > >> 5) When execute(AttachVolumeCommand) is invoked to detach a volume,
> > the
> > > >> same check is made. If managed, the hypervisor data structure is
> > > removed.
> > > >>
> > > >> 6) I do not see an clear way to support Burst IOPS in 4.2 unless it
> is
> > > >> stored in the volumes and disk_offerings table. If we have some
> idea,
> > > >> that'd be cool.
> > > >>
> > > >> Thanks!
> > > >>
> > > >>
> > > >> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> > > >> mike.tutkow...@solidfire.com> wrote:
> > > >>
> > > >>> "+1 -- Burst IOPS can be implemented while avoiding implementation
> > > >>> attributes.  I always wondered about the details field.  I think we
> > > should
> > > >>> beef up the description in the documentation regarding the expected
> > > format
> > > >>> of the field.  In 4.1, I noticed that the details are not returned
> on
> > > the
> > > >>> createStoratePool updateStoragePool, or listStoragePool response.
> >  Why
> > > >>> don't we return it?  It seems like it would be useful for clients
> to
> > be
> > > >>> able to inspect the contents of the details field."
> > > >>>
> > > >>> Not sure how this would work storing Burst IOPS here.
> > > >>>
> > > >>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
> > > >>> basis. For each Disk Offering created, you have to be able to
> > associate
> > > >>> unique Burst IOPS. There is a disk_offering_details table. Maybe it
> > > could
> > > >>> go there?
> > > >>>
> > > >>> I'm also not sure how you would accept the Burst IOPS in the GUI if
> > > it's
> > > >>> not stored like the Min and Max fields are in the DB.
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Mike Tutkowski*
> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >> e: mike.tutkow...@solidfire.com
> > > >> o: 303.746.7302
> > > >> Advancing the way the world uses the cloud<
> > > http://solidfire.com/solution/overview/?video=play>
> > > >> *™*
> > > >>

UI Development

2013-06-11 Thread Soheil Eizadi
I did not find any UI development resources on the Wiki. I need to update the 
UI to support a new Device. I wanted to know if there is there any recommended 
tooling for the CloudStack UI. I was planning to use Eclipse JavaScript 
Development Tools (JSDT), but wanted to see if there is any recommended setup 
also what the recommended debugging environment would look like?
Thanks,
-Soheil


RE: UI Development

2013-06-11 Thread Brian Federle
Hi Soheil,

For the most part pretty much any IDE/text editor will work for UI development, 
as the code base is pure JS+HTML+CSS. I usually use Firebug for debugging 
purposes.

For IDEs, I would recommend IntelliJ IDEA, which has the best support for JS, 
including an integrated JS debugger. Not too familiar with Eclipse for web dev, 
though.

Unfortunately UI documentation is a bit sparse right now; I'm in the process of 
adding more documentation when I can find time. For now, I would recommend 
going through the UI plugin development tutorial and see if it will address 
your needs for the feature: 
https://cwiki.apache.org/CLOUDSTACK/ui-plugin-tutorial.html. It goes through 
setting up a new list view with a set of actions.

-Brian

-Original Message-
From: Soheil Eizadi [mailto:seiz...@infoblox.com] 
Sent: Tuesday, June 11, 2013 11:35 AM
To: dev@cloudstack.apache.org
Subject: UI Development

I did not find any UI development resources on the Wiki. I need to update the 
UI to support a new Device. I wanted to know if there is there any recommended 
tooling for the CloudStack UI. I was planning to use Eclipse JavaScript 
Development Tools (JSDT), but wanted to see if there is any recommended setup 
also what the recommended debugging environment would look like?
Thanks,
-Soheil


Re: Review Request: use commons-lang StringUtils

2013-06-11 Thread Laszlo Hornyak


> On June 10, 2013, 11:56 p.m., Alex Huang wrote:
> > The patch did not apply cleanly.  Please resubmit.  Since you have to 
> > resubmit anyways, I think why not just change all of the code to use the 
> > commons.lang version.  I see no point in keeping the method in StringUtils 
> > if it's replaceable by one in a standard library.
> >

Hi Alex,

I think it makes sense to keep the StringUtils.join(String, Object...) method 
because it allows to use varargs, unlike the one in commons-lang, and the code 
somewhat builds on it.


- Laszlo


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11767/#review21678
---


On June 9, 2013, 7:47 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11767/
> ---
> 
> (Updated June 9, 2013, 7:47 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> commons-lang is already a transitive dependency of the utils project, which 
> allows removing some duplicated functionality.
> This patch replaces StringUtils.join(String, Object...) with it's 
> commons-lang counterpart.
> It also replaces calls to String join(Iterable, String) in 
> cases where an array is already exist and it is only wrapped into a List.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/storage/s3/S3ManagerImpl.java 61e5573 
>   
> services/secondary-storage/src/org/apache/cloudstack/storage/resource/NfsSecondaryStorageResource.java
>  e7fa5b2 
>   utils/src/com/cloud/utils/S3Utils.java b7273a1 
>   utils/src/com/cloud/utils/StringUtils.java 14ff4b1 
>   utils/test/com/cloud/utils/StringUtilsTest.java 3c162c7 
> 
> Diff: https://reviews.apache.org/r/11767/diff/
> 
> 
> Testing
> ---
> 
> - Unit test added
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>



RE: UI Development

2013-06-11 Thread Soheil Eizadi
Hi Brian,
Thanks for the detail.

I have looked at the UI-Plugin page, is there a matching server side plugin 
component to go along with the UI-Plugin to extend the CloudStack functionality?

-Soheil


From: Brian Federle [brian.fede...@citrix.com]
Sent: Tuesday, June 11, 2013 11:44 AM
To: 'dev@cloudstack.apache.org'
Subject: RE: UI Development

Hi Soheil,

For the most part pretty much any IDE/text editor will work for UI development, 
as the code base is pure JS+HTML+CSS. I usually use Firebug for debugging 
purposes.

For IDEs, I would recommend IntelliJ IDEA, which has the best support for JS, 
including an integrated JS debugger. Not too familiar with Eclipse for web dev, 
though.

Unfortunately UI documentation is a bit sparse right now; I'm in the process of 
adding more documentation when I can find time. For now, I would recommend 
going through the UI plugin development tutorial and see if it will address 
your needs for the feature: 
https://cwiki.apache.org/CLOUDSTACK/ui-plugin-tutorial.html. It goes through 
setting up a new list view with a set of actions.

-Brian

-Original Message-
From: Soheil Eizadi [mailto:seiz...@infoblox.com]
Sent: Tuesday, June 11, 2013 11:35 AM
To: dev@cloudstack.apache.org
Subject: UI Development

I did not find any UI development resources on the Wiki. I need to update the 
UI to support a new Device. I wanted to know if there is there any recommended 
tooling for the CloudStack UI. I was planning to use Eclipse JavaScript 
Development Tools (JSDT), but wanted to see if there is any recommended setup 
also what the recommended debugging environment would look like?
Thanks,
-Soheil


Re: cloudstack UI: add shared network

2013-06-11 Thread Wei ZHOU
Jessica,

Thank you so much.

-Wei


2013/6/11 Jessica Wang 

>  Wei,
>
> ** **
>
> > What do you think about show/hide the button according to account/user
> on Network tab? 
>
> Network tab is for normal user.
>
> We put everything that only root-admin is allowed to access in
> Infrastructure tab. This is by design. We probably won’t change it.
>
> ** **
>
> > I can not find a page to add shared networks in Infracstruture tab.
>
> Please go to Infrastructure tab > Zones > Physical Network tab > Guest >
> Network tab => “Add guest network” button
>
> ** **
>
> Jessica
>
> ** **
>
> ** **
>
> *From:* Wei ZHOU [mailto:ustcweiz...@gmail.com]
> *Sent:* Tuesday, June 11, 2013 10:57 AM
> *To:* Jessica Wang
> *Subject:* Re: git commit: updated refs/heads/master to a59067e
>
> ** **
>
> Jessica,
>
>  
>
> What do you think about show/hide the button according to account/user on
> Network tab?
>
> I am now testing advanced zone with security groups. I can not find a page
> to add shared networks in Infracstruture tab.
>
>  
>
> -Wei
>
> ** **
>
> 2013/6/11 Jessica Wang 
>
> Wei,
>
> Because Network tab is for normal user and normal user is not allowed to
> add shared network.
>
> Only root-admin is allowed to add shared network and root-admin should do
> it in Infrastructure tab.
> (Infrastructure tab is only available to root-admin)
>
> Jessica
>
>
> -Original Message-
> From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
> Sent: Tuesday, June 11, 2013 6:59 AM
> To: dev@cloudstack.apache.org
> Subject: Re: git commit: updated refs/heads/master to a59067e
>
> Hi Jessica,
>
> I was wondering why shared network can not be added here?
>
> -Wei
>
>
> 2013/6/10 
>
> > Updated Branches:
> >   refs/heads/master 40982ccef -> a59067e94
> >
> >
> > CLOUDSTACK UI - network menu - create guest network dialog - change
> label.
> >
> >
> > Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
> > Commit:
> http://git-wip-us.apache.org/repos/asf/cloudstack/commit/a59067e9
> > Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/a59067e9
> > Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/a59067e9
> >
> > Branch: refs/heads/master
> > Commit: a59067e94f7095a2448d342d5eed0ffee5f066c0
> > Parents: 40982cc
> > Author: Jessica Wang 
> > Authored: Mon Jun 10 13:43:07 2013 -0700
> > Committer: Jessica Wang 
> > Committed: Mon Jun 10 13:43:07 2013 -0700
> >
> > --
> >  ui/scripts/network.js | 7 +++
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> > --
> >
> >
> >
> >
> http://git-wip-us.apache.org/repos/asf/cloudstack/blob/a59067e9/ui/scripts/network.js
> > --
> > diff --git a/ui/scripts/network.js b/ui/scripts/network.js
> > index 9e60cbc..61468fc 100755
> > --- a/ui/scripts/network.js
> > +++ b/ui/scripts/network.js
> > @@ -320,8 +320,8 @@
> >  title: 'label.guest.networks',
> >  listView: {
> >actions: {
> > -add: { //add Isolated guest network (can't add Shared guest
> > network here)
> > -  label: 'Add Isolated Guest Network',
> > +add: {
> > +  label: 'Add Isolated Guest Network with SourceNat',
> >
> >preFilter: function(args) { //Isolated networks is only
> > supported in Advanced (SG-disabled) zone
> >  if(args.context.zoneType != 'Basic')
> > @@ -331,8 +331,7 @@
> >},
> >
> >createForm: {
> > -title: 'Add Isolated Guest Network',
> > -desc: 'Add Isolated Guest Network with SourceNat',
> > +title: 'Add Isolated Guest Network with SourceNat',
> >  fields: {
> >name: { label: 'label.name', validation: { required:
> > true }, docID: 'helpGuestNetworkName' },
> >displayText: { label: 'label.display.text',
> validation:
> > { required: true }, docID: 'helpGuestNetworkDisplayText'},
> >
> >
>
> ** **
>


RE: PCI-Passthrough with CloudStack

2013-06-11 Thread Paul Angus
We're working with 'a very large broadcasting company' how are using cavium 
cards for ssl offload in all of their hosts

We need to add:







Into the xml definition of the guest VMs

I'm very interested in working you guys to make this an integrated part of 
CloudStack

Interestingly cavium card drivers can present a number of virtual interfaces 
specifically designed to be passed through to guest vms, but these must be 
addressed separately so a single 'stock' xml definition wouldn't be flexible 
enough to fully utilise the card.


Regards,

Paul Angus
S: +44 20 3603 0540 | M: +447711418784
paul.an...@shapeblue.com

-Original Message-
From: Kelven Yang [mailto:kelven.y...@citrix.com]
Sent: 11 June 2013 18:10
To: dev@cloudstack.apache.org
Cc: Ryousei Takano
Subject: Re: PCI-Passthrough with CloudStack



On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:

>Hi,
>
>I am implementing PCI-Passthrough to use with CloudStack for use with
>high-performance networking (10 Gigabit Ethernet/Infiniband).
>
>The current design is to attach a PCI ID (from lspci) to a compute
>offering. (Not a network offering since from CloudStack¹s point of
>view, the pass through device has nothing to do with network and may as
>well be used for other things.) A host tag can be used to limit
>deployment to machines with the required PCI device.


>
>Then, when starting the virtual machine, the PCI ID is passed into
>VirtualMachineTO to the agent (currently using KVM) and the agent
>creates a corresponding  (
>http://libvirt.org/guide/html/Application_Development_Guide-Device_Conf
>ig-
>PCI_Pass.html)
>tag and then libvirt will handle the rest.


VirtualMachineTO.params is designed to carry generic VM specific 
configurations, these configuration parameters can either be statically linked 
with the VM or dynamically populated based on other factors like this one. Are 
you passing PCI ID using VirtualMachineTO.params?

>
>For allocation, the current idea is to use CloudStack¹s capacity system
>(at the same place where allocation of CPU and RAM is determined) to
>limit 1 PCI-Passthrough VM per physical host.
>
>The current design has many limitations such as:
>
>   - One physical host can only have 1 VM with PCI-Passthrough, even if
>   many PCI-cards with equivalent functions are available
>   - The PCI ID is fixed inside the compute offering, so all machines have
>   to be homogeneous and have the same PCI ID for the device.

Anything that affects VM placement could have impact to HA/migration, we 
probably need some graceful error-handling in these code paths, hopefully these 
have been taken care of.

>
>The initial implementation is working. Any suggestions and comments are
>welcomed.
>
>Thank you,
>Pawit


This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is operated under 
license from Shape Blue Ltd. ShapeBlue is a registered trademark.



RE: Contributing as a non-committer

2013-06-11 Thread Paul Angus
I've got an etch-a-sketch, will that work?


Regards,

Paul Angus
S: +44 20 3603 0540 | M: +447711418784
paul.an...@shapeblue.com

-Original Message-
From: Joe Brockmeier [mailto:j...@zonker.net]
Sent: 11 June 2013 16:35
To: dev@cloudstack.apache.org
Subject: Re: Contributing as a non-committer

On Mon, Jun 10, 2013, at 10:03 PM, Alex Huang wrote:
> > Forget about eclipse for now :) just use vi :)
>
> Why don't we just go back to ed?

+1

Alex - do you want to start the vote? ;-)

Best,

jzb
--
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/

This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is operated under 
license from Shape Blue Ltd. ShapeBlue is a registered trademark.



Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Marcus Sorensen
What we need is some sort of plugin system for the libvirt guest
agent, where people can inject their own additions to the xml. So we
pass the VM parameters (including name, os, nics, volumes etc) to your
plugin, and it returns either nothing, or some xml. Or perhaps an
object that defines additional xml for various resources.

Or maybe we just pass the final cloudstack-generated XML to your
plugin, the external plugin processes it and returns it, complete with
whatever modifications it wants before cloudstack starts the VM. That
would actually be very simple to put in. Via the KVM host's
agent.properties file we could point to an external script. That
script could be in whatever language, as long as it's executable. It
filters the XML and returns new XML which is used to start the VM.

On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus  wrote:
> We're working with 'a very large broadcasting company' how are using cavium 
> cards for ssl offload in all of their hosts
>
> We need to add:
>
> 
> 
>  function='0x1'/>
> 
> 
>
> Into the xml definition of the guest VMs
>
> I'm very interested in working you guys to make this an integrated part of 
> CloudStack
>
> Interestingly cavium card drivers can present a number of virtual interfaces 
> specifically designed to be passed through to guest vms, but these must be 
> addressed separately so a single 'stock' xml definition wouldn't be flexible 
> enough to fully utilise the card.
>
>
> Regards,
>
> Paul Angus
> S: +44 20 3603 0540 | M: +447711418784
> paul.an...@shapeblue.com
>
> -Original Message-
> From: Kelven Yang [mailto:kelven.y...@citrix.com]
> Sent: 11 June 2013 18:10
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano
> Subject: Re: PCI-Passthrough with CloudStack
>
>
>
> On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:
>
>>Hi,
>>
>>I am implementing PCI-Passthrough to use with CloudStack for use with
>>high-performance networking (10 Gigabit Ethernet/Infiniband).
>>
>>The current design is to attach a PCI ID (from lspci) to a compute
>>offering. (Not a network offering since from CloudStack¹s point of
>>view, the pass through device has nothing to do with network and may as
>>well be used for other things.) A host tag can be used to limit
>>deployment to machines with the required PCI device.
>
>
>>
>>Then, when starting the virtual machine, the PCI ID is passed into
>>VirtualMachineTO to the agent (currently using KVM) and the agent
>>creates a corresponding  (
>>http://libvirt.org/guide/html/Application_Development_Guide-Device_Conf
>>ig-
>>PCI_Pass.html)
>>tag and then libvirt will handle the rest.
>
>
> VirtualMachineTO.params is designed to carry generic VM specific 
> configurations, these configuration parameters can either be statically 
> linked with the VM or dynamically populated based on other factors like this 
> one. Are you passing PCI ID using VirtualMachineTO.params?
>
>>
>>For allocation, the current idea is to use CloudStack¹s capacity system
>>(at the same place where allocation of CPU and RAM is determined) to
>>limit 1 PCI-Passthrough VM per physical host.
>>
>>The current design has many limitations such as:
>>
>>   - One physical host can only have 1 VM with PCI-Passthrough, even if
>>   many PCI-cards with equivalent functions are available
>>   - The PCI ID is fixed inside the compute offering, so all machines have
>>   to be homogeneous and have the same PCI ID for the device.
>
> Anything that affects VM placement could have impact to HA/migration, we 
> probably need some graceful error-handling in these code paths, hopefully 
> these have been taken care of.
>
>>
>>The initial implementation is working. Any suggestions and comments are
>>welcomed.
>>
>>Thank you,
>>Pawit
>
>
> This email and any attachments to it may be confidential and are intended 
> solely for the use of the individual to whom it is addressed. Any views or 
> opinions expressed are solely those of the author and do not necessarily 
> represent those of Shape Blue Ltd or related companies. If you are not the 
> intended recipient of this email, you must neither take any action based upon 
> its contents, nor copy or show it to anyone. Please contact the sender if you 
> believe you have received this email in error. Shape Blue Ltd is a company 
> incorporated in England & Wales. ShapeBlue Services India LLP is operated 
> under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
>


RE: [NOTICE] CloudStack 4.1.1 release

2013-06-11 Thread Musayev, Ilya
Kelven,

Seems like apache.org is having issues, 

Will try later today to pull it in.

Thanks
ilya

> -Original Message-
> From: Kelven Yang [mailto:kelven.y...@citrix.com]
> Sent: Tuesday, June 11, 2013 1:19 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [NOTICE] CloudStack 4.1.1 release
> 
> I just fixed a critical bug that could cause XenServer host to be out of 
> service.
> I would like the fix to be merged into 4.1.1 release
> https://issues.apache.org/jira/browse/CLOUDSTACK-2925
> 
> 
> Kelven
> 
> 
> On 6/11/13 10:01 AM, "Musayev, Ilya"  wrote:
> 
> >Just FYI, I'm going to be unavailable from Friday 14 - 21st of June and
> >then attending CS conference from June 23-25. Prior to 14th  of June, I
> >have lots of deliverables at $dayjob and while I would like to work on
> >ACS side, its physically not possible for the next 10 days or so.
> >
> >I've asked Chip to help me with release of ACS 4.1.1 as I will be
> >unavailable for extended period of time and we wanted to release 4.1.1
> >sooner.
> >
> >I anticipate my load to get lighter post all the travel  and can focus
> >on ACS RM work then.
> >
> >I'll be trolling through JIRA/GIT today to see what can be merged into
> >4.1. If you know of an issue that has been resolved and is applicable
> >to 4.1, please lets us know and if possible, commit.
> >
> >Thank you Chip for helping,
> >
> >Regards,
> >ilya
> 




RE: UI Development

2013-06-11 Thread Brian Federle
I'm pretty sure there is a modular system in place for the backend, though I 
only do front-end development so I'm not familiar with it. Maybe a server-side 
dev can answer that?

-Brian

-Original Message-
From: Soheil Eizadi [mailto:seiz...@infoblox.com] 
Sent: Tuesday, June 11, 2013 11:52 AM
To: dev@cloudstack.apache.org
Subject: RE: UI Development

Hi Brian,
Thanks for the detail.

I have looked at the UI-Plugin page, is there a matching server side plugin 
component to go along with the UI-Plugin to extend the CloudStack functionality?

-Soheil


From: Brian Federle [brian.fede...@citrix.com]
Sent: Tuesday, June 11, 2013 11:44 AM
To: 'dev@cloudstack.apache.org'
Subject: RE: UI Development

Hi Soheil,

For the most part pretty much any IDE/text editor will work for UI development, 
as the code base is pure JS+HTML+CSS. I usually use Firebug for debugging 
purposes.

For IDEs, I would recommend IntelliJ IDEA, which has the best support for JS, 
including an integrated JS debugger. Not too familiar with Eclipse for web dev, 
though.

Unfortunately UI documentation is a bit sparse right now; I'm in the process of 
adding more documentation when I can find time. For now, I would recommend 
going through the UI plugin development tutorial and see if it will address 
your needs for the feature: 
https://cwiki.apache.org/CLOUDSTACK/ui-plugin-tutorial.html. It goes through 
setting up a new list view with a set of actions.

-Brian

-Original Message-
From: Soheil Eizadi [mailto:seiz...@infoblox.com]
Sent: Tuesday, June 11, 2013 11:35 AM
To: dev@cloudstack.apache.org
Subject: UI Development

I did not find any UI development resources on the Wiki. I need to update the 
UI to support a new Device. I wanted to know if there is there any recommended 
tooling for the CloudStack UI. I was planning to use Eclipse JavaScript 
Development Tools (JSDT), but wanted to see if there is any recommended setup 
also what the recommended debugging environment would look like?
Thanks,
-Soheil


RE: Review Request: use commons-lang StringUtils

2013-06-11 Thread Alex Huang
Ok...please resubmit.

--Alex

> -Original Message-
> From: Laszlo Hornyak [mailto:nore...@reviews.apache.org] On Behalf Of
> Laszlo Hornyak
> Sent: Tuesday, June 11, 2013 11:48 AM
> To: cloudstack; Laszlo Hornyak; Alex Huang
> Subject: Re: Review Request: use commons-lang StringUtils
> 
> 
> 
> > On June 10, 2013, 11:56 p.m., Alex Huang wrote:
> > > The patch did not apply cleanly.  Please resubmit.  Since you have to
> resubmit anyways, I think why not just change all of the code to use the
> commons.lang version.  I see no point in keeping the method in StringUtils if
> it's replaceable by one in a standard library.
> > >
> 
> Hi Alex,
> 
> I think it makes sense to keep the StringUtils.join(String, Object...) method
> because it allows to use varargs, unlike the one in commons-lang, and the
> code somewhat builds on it.
> 
> 
> - Laszlo
> 
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11767/#review21678
> ---
> 
> 
> On June 9, 2013, 7:47 p.m., Laszlo Hornyak wrote:
> >
> > ---
> > This is an automatically generated e-mail. To reply, visit:
> > https://reviews.apache.org/r/11767/
> > ---
> >
> > (Updated June 9, 2013, 7:47 p.m.)
> >
> >
> > Review request for cloudstack.
> >
> >
> > Description
> > ---
> >
> > commons-lang is already a transitive dependency of the utils project, which
> allows removing some duplicated functionality.
> > This patch replaces StringUtils.join(String, Object...) with it's 
> > commons-lang
> counterpart.
> > It also replaces calls to String join(Iterable, String) in
> cases where an array is already exist and it is only wrapped into a List.
> >
> >
> > Diffs
> > -
> >
> >   server/src/com/cloud/storage/s3/S3ManagerImpl.java 61e5573
> >   services/secondary-
> storage/src/org/apache/cloudstack/storage/resource/NfsSecondaryStorage
> Resource.java e7fa5b2
> >   utils/src/com/cloud/utils/S3Utils.java b7273a1
> >   utils/src/com/cloud/utils/StringUtils.java 14ff4b1
> >   utils/test/com/cloud/utils/StringUtilsTest.java 3c162c7
> >
> > Diff: https://reviews.apache.org/r/11767/diff/
> >
> >
> > Testing
> > ---
> >
> > - Unit test added
> >
> >
> > Thanks,
> >
> > Laszlo Hornyak
> >
> >



Re: Hadoop cluster running in cloudstack

2013-06-11 Thread Chiradeep Vittal
Taking it to dev@ to see if there is any interest.


It is a good and interesting requirement. I can see hacking 'pre-setup'
storage with tags to achieve this, but it is going to be a fragile hack.
I believe GCE also has the concept of some instance types having dedicated
spindles.


On 6/6/13 11:14 AM, "David Ortiz"  wrote:

>Chiradeep,
> Currently I am working with KVM hypervisor nodes.  The use case of
>having 4 spindles and assigning one to each node is exactly what I would
>like to do.  For the moment I have all four spindles configured in a RAID
>with the cloudstack local storage pointed at it.
>Shanker,
>  I had not seen that slideshow yet, so thank you for pointing me to
>it.  As of now, the hadoop resources I am using are statically allocated
>between 4 hosts.  As it stands now, I am constrained to those resources
>without the ability to add any additional storage cluster (or additional
>storage to my current shared storage appliance), or additional nodes.
>Fortunately, my use cases don't require any kind of reallocation of the
>hadoop nodes.  It's more clients for the cluster as well as web service
>nodes that run clients that are being dynamically spun up and down.  I
>have found that I can get through my jobs alright, they just take a lot
>of extra time to run since I have the storage acting as a bottleneck
>right now.
>Thanks, David Ortiz
>
>> From: run...@gmail.com
>> Subject: Re: Hadoop cluster running in cloudstack
>> Date: Thu, 6 Jun 2013 10:23:50 -0400
>> To: us...@cloudstack.apache.org
>> 
>> 
>> On Jun 6, 2013, at 4:05 AM, Shanker Balan 
>>wrote:
>> 
>> > On 05-Jun-2013, at 12:13 AM, David Ortiz  wrote:
>> > 
>> >> Hello,
>> >>Has anyone tried running a hadoop cluster in a cloudstack
>>environment?  I have set one up, but I am finding that I am having some
>>IO contention between slave nodes on each host since they all share one
>>local storage pool.  As I understand it, there is not currently a method
>>for using multiple local storage pools with VMs through cloudstack.  Has
>>anyone found a workaround for this by any chance?
>> > 
>> > 
>> > Hi David,
>> > 
>> > Have you seen Seb's
>>http://www.slideshare.net/sebastiengoasguen/cloudstack-and-bigdata
>>slides yet?
>> 
>> As a quick disclaimer, the various configurations I highlight in this
>>deck are a bit hand wavy and I did not test them. I just made a guess
>>about how one might want to use the baremetal functionality in
>>cloudstack. The main distinction being between using a "big data" store
>>as storage backends of cloudstack and using cloudstack to provision a
>>bigdata store on-demand.
>> 
>> -sebastien
>> 
>> > 
>> > In my experience running Hadoop (100+ nodes) on traditional servers,
>>its going to be really hard to scale up Hadoop workloads using local
>>storage and HDFS on a cloud.
>> > 
>> > I ran out of IOPS very quickly. There was enough CPU headroom but
>>could not add more slots as disk became the bottleneck. Every time there
>>was a node/disk failure, rebalancing was a nightmare with a 3x HDFS
>>replication factor.
>> > 
>> > If I were to run Hadoop on an IaaS cloud, I would do it very similar
>>to Amazon AWS EMR - instances backed by a "Storage As A Service" layer
>>(S3) for big data instead of HDFS.
>> > 
>> > The system would work as below:
>> > 
>> > - Create a dedicated big data storage tier using a distributed
>>filesystem like Gluster/Ceph/Isilon. Most of the vendors now provide S3
>>compat connectors for Hadoop.
>> > 
>> > http://ceph.com/docs/master/cephfs/hadoop/
>> > http://gluster.org/community/documentation/index.php/Hadoop
>> > http://www.emc.com/big-data/scale-out-storage-hadoop.htm
>> > 
>> > - Hadoop instances are spun up on bare metal or on hypervisors. The
>>service offerings for "big data" instances could will run on dedicated
>>hypervisors (via tags) with high bandwidth network connectivity to the
>>storage service.
>> > 
>> > - Hadoop instances use Local storage for run time data.
>> > 
>> > - Hadoop VMs connect to the storage tier via connectors for permanent
>>storage
>> > 
>> > Benefits:
>> > 
>> > - Spinning up/down VMs don't cause HDFS rebalancing as there is no
>>HDFS anywhere.
>> > 
>> > - Scale out VMs independently of storage. Add more spindles / nodes
>>to the storage cluster to scale out IOPS and capacity
>> > 
>> > - Easy upgrade of Hadoop releases without risk to data
>> > 
>> > Regards.
>> > @shankerbalan
>> > 
>> > -- 
>> > Shanker Balan
>> > Managing Consultant
>> > 
>> > 
>> > 
>> > M: +91 98860 60539
>> > shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
>> > ShapeBlue India, 22nd floor, Unit 2201A, World Trade Centre,
>>Bangalore - 560 055
>> > 
>> > This email and any attachments to it may be confidential and are
>>intended solely for the use of the individual to whom it is addressed.
>>Any views or opinions expressed are solely those of the author and do
>>not necessarily represent those of Shape Blue Ltd or related companies.
>>If you are n

Re: Contributing as a non-committer

2013-06-11 Thread Ahmad Emneina
Is there a carrier pigeon plugin for git yet?

Ahmad

On Jun 11, 2013, at 12:12 PM, Paul Angus  wrote:

> I've got an etch-a-sketch, will that work?
> 
> 
> Regards,
> 
> Paul Angus
> S: +44 20 3603 0540 | M: +447711418784
> paul.an...@shapeblue.com
> 
> -Original Message-
> From: Joe Brockmeier [mailto:j...@zonker.net]
> Sent: 11 June 2013 16:35
> To: dev@cloudstack.apache.org
> Subject: Re: Contributing as a non-committer
> 
> On Mon, Jun 10, 2013, at 10:03 PM, Alex Huang wrote:
>>> Forget about eclipse for now :) just use vi :)
>> 
>> Why don't we just go back to ed?
> 
> +1
> 
> Alex - do you want to start the vote? ;-)
> 
> Best,
> 
> jzb
> --
> Joe Brockmeier
> j...@zonker.net
> Twitter: @jzb
> http://www.dissociatedpress.net/
> 
> This email and any attachments to it may be confidential and are intended 
> solely for the use of the individual to whom it is addressed. Any views or 
> opinions expressed are solely those of the author and do not necessarily 
> represent those of Shape Blue Ltd or related companies. If you are not the 
> intended recipient of this email, you must neither take any action based upon 
> its contents, nor copy or show it to anyone. Please contact the sender if you 
> believe you have received this email in error. Shape Blue Ltd is a company 
> incorporated in England & Wales. ShapeBlue Services India LLP is operated 
> under license from Shape Blue Ltd. ShapeBlue is a registered trademark.
> 


RE: PCI-Passthrough with CloudStack

2013-06-11 Thread Vijayendra Bhamidipati


-Original Message-
From: David Nalley [mailto:da...@gnsa.us] 
Sent: Tuesday, June 11, 2013 5:08 AM
To: dev@cloudstack.apache.org
Cc: Ryousei Takano
Subject: Re: PCI-Passthrough with CloudStack

On Tue, Jun 11, 2013 at 3:52 AM, Pawit Pornkitprasan  wrote:
> Hi,
>
> I am implementing PCI-Passthrough to use with CloudStack for use with 
> high-performance networking (10 Gigabit Ethernet/Infiniband).
>
> The current design is to attach a PCI ID (from lspci) to a compute 
> offering. (Not a network offering since from CloudStack's point of 
> view, the pass through device has nothing to do with network and may 
> as well be used for other things.) 

[Vijay>] Any specific reasons for not tracking the type of device? Different 
hypervisors may implement passthrough differently. KVM may use the PCI ID but 
afaik vmware does not and so we probably will need to know the type of device 
in order to map it as a passthrough device.

> A host tag can be used to limit 
> deployment to machines with the required PCI device.
>
> Then, when starting the virtual machine, the PCI ID is passed into 
> VirtualMachineTO to the agent (currently using KVM) and the agent 
> creates a corresponding  (
> http://libvirt.org/guide/html/Application_Development_Guide-Device_Con
> fig-PCI_Pass.html) tag and then libvirt will handle the rest.
>
> For allocation, the current idea is to use CloudStack's capacity 
> system (at the same place where allocation of CPU and RAM is 
> determined) to limit 1 PCI-Passthrough VM per physical host.
>
> The current design has many limitations such as:
>
>- One physical host can only have 1 VM with PCI-Passthrough, even if
>many PCI-cards with equivalent functions are available

[Vijay>] What is the reason for this limitation? Is it that PCI IDs can change 
among PCI devices on a host across reboots? In general, what is the effect of a 
host reboot on PCI IDs? Could the PCI ID of the physical device change? Is 
there a way to configure passthrough devices without using the PCI ID of the 
device?

>- The PCI ID is fixed inside the compute offering, so all machines have
>to be homogeneous and have the same PCI ID for the device.


>
> The initial implementation is working. Any suggestions and comments 
> are welcomed.
>
> Thank you,
> Pawit

This looks like a compelling idea, though I am sure not limited to just 
networking (think GPU passthrough).
How are things like live migration affected? Are you making planner changes to 
deal with the limiting factor of a single PCI-passthrough VM being available 
per host?
What's the level of effort to extend this to work with VMware DirectPath I/O 
and PCI passthrough on XenServer?

[Vijay>] It's probably a good idea to limit the passthrough to networking to 
begin with and implement other types of devices (HBA/CD-ROMs etc) 
incrementally. Live migration will definitely be affected. In vmware, live 
migration is disabled for a VM once the VM is configured with a passthrough 
device. The implementation should handle this. A host of other features also 
get disabled when passthrough is configured, and if cloudstack is using any of 
those, we should handle those paths as well.


Regards,
Vijay

--David


RE: PCI-Passthrough with CloudStack

2013-06-11 Thread Edison Su


> -Original Message-
> From: Marcus Sorensen [mailto:shadow...@gmail.com]
> Sent: Tuesday, June 11, 2013 12:10 PM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano; Kelven Yang
> Subject: Re: PCI-Passthrough with CloudStack
> 
> What we need is some sort of plugin system for the libvirt guest agent,
> where people can inject their own additions to the xml. So we pass the VM
> parameters (including name, os, nics, volumes etc) to your plugin, and it
> returns either nothing, or some xml. Or perhaps an object that defines
> additional xml for various resources.
> 
> Or maybe we just pass the final cloudstack-generated XML to your plugin,
> the external plugin processes it and returns it, complete with whatever
> modifications it wants before cloudstack starts the VM. That would actually
> be very simple to put in. Via the KVM host's agent.properties file we could
> point to an external script. That script could be in whatever language, as 
> long
> as it's executable. It filters the XML and returns new XML which is used to
> start the VM.

If change vm's xml is enough, then how about use libvirt's hook system:
http://www.libvirt.org/hooks.html

I think, the issue is that, how to let cloudstack only create one VM per KVM 
host, or few VMs per host(based on the available PCI devices on the host).
If we think PCI devices are the resource CloudStack should to take care of 
during the resource allocation, then we need a framework:
1. During host discovering, host can report whatever resources it can detect to 
mgt server. RAM/CPU freq/local storage are the resources, that currently 
supported by kvm agent. Here we may need to add PCI devices as another 
resource.  Such as, KVM agent host returns a StartupAuxiliaryDevicesReportCmd 
along as with other startupRouteringcmd/StartStorage*cmd etc, during the 
startup.
2. There will be a listener on the mgt server, which can listen on 
StartupAuxiliaryDevicesReportCmd, then records available PCI devices into DB,  
such as host_pci_device_ref table.
3. Need to extend FirstFitAllocator, take PCI devices as another resource 
during the allocation. And also need to find a place to mark the PCI device as 
used in host_pci_device_ref table, so the pci device won't be allocated to more 
than one VM. 
4. Have api to create a customized computing offering, the offering can contain 
info about PCI device, such as how many PCI devices plugged into a VM.
5. If user chooses above customized computing offering during the VM 
deployment, then the allocator in step 3 will be triggered, which will choose a 
KVM host which has enough PCI devices to fulfill the computing offering.
6. In the startupcommand, the mgt server send to kvm host, it should contain 
the PCI devices allocated to this VM.
7. At the KVM agent code, change VM's xml file properly based on the 
startupcommand.

How do you think?

> 
> On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus 
> wrote:
> > We're working with 'a very large broadcasting company' how are using
> > cavium cards for ssl offload in all of their hosts
> >
> > We need to add:
> >
> > 
> > 
> >  > function='0x1'/>
> > 
> > 
> >
> > Into the xml definition of the guest VMs
> >
> > I'm very interested in working you guys to make this an integrated
> > part of CloudStack
> >
> > Interestingly cavium card drivers can present a number of virtual interfaces
> specifically designed to be passed through to guest vms, but these must be
> addressed separately so a single 'stock' xml definition wouldn't be flexible
> enough to fully utilise the card.
> >
> >
> > Regards,
> >
> > Paul Angus
> > S: +44 20 3603 0540 | M: +447711418784 paul.an...@shapeblue.com
> >
> > -Original Message-
> > From: Kelven Yang [mailto:kelven.y...@citrix.com]
> > Sent: 11 June 2013 18:10
> > To: dev@cloudstack.apache.org
> > Cc: Ryousei Takano
> > Subject: Re: PCI-Passthrough with CloudStack
> >
> >
> >
> > On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:
> >
> >>Hi,
> >>
> >>I am implementing PCI-Passthrough to use with CloudStack for use with
> >>high-performance networking (10 Gigabit Ethernet/Infiniband).
> >>
> >>The current design is to attach a PCI ID (from lspci) to a compute
> >>offering. (Not a network offering since from CloudStack¹s point of
> >>view, the pass through device has nothing to do with network and may
> >>as well be used for other things.) A host tag can be used to limit
> >>deployment to machines with the required PCI device.
> >
> >
> >>
> >>Then, when starting the virtual machine, the PCI ID is passed into
> >>VirtualMachineTO to the agent (currently using KVM) and the agent
> >>creates a corresponding  (
> >>http://libvirt.org/guide/html/Application_Development_Guide-
> Device_Con
> >>f
> >>ig-
> >>PCI_Pass.html)
> >>tag and then libvirt will handle the rest.
> >
> >
> > VirtualMachineTO.params is designed to carry generic VM specific
> configurations, these configuration parameters can either be statically link

Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Marcus Sorensen
So we wouldn't bother ourselves with whether it's a network resource,
gpu resource, or whatever else? That seems more feasible than trying
to teach or create individual objects to use PCI passthroughs,
although we'd miss out on some of the specifics, like configuring the
device. Perhaps that doesn't matter. My solution was with the mindset
of supporting any custom thing, I can see people saying 'please
include support for x,y,z', when they can add it via the xml. Hooks is
a good way to do that, you're right.

On Tue, Jun 11, 2013 at 3:35 PM, Edison Su  wrote:
>
>
>> -Original Message-
>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>> Sent: Tuesday, June 11, 2013 12:10 PM
>> To: dev@cloudstack.apache.org
>> Cc: Ryousei Takano; Kelven Yang
>> Subject: Re: PCI-Passthrough with CloudStack
>>
>> What we need is some sort of plugin system for the libvirt guest agent,
>> where people can inject their own additions to the xml. So we pass the VM
>> parameters (including name, os, nics, volumes etc) to your plugin, and it
>> returns either nothing, or some xml. Or perhaps an object that defines
>> additional xml for various resources.
>>
>> Or maybe we just pass the final cloudstack-generated XML to your plugin,
>> the external plugin processes it and returns it, complete with whatever
>> modifications it wants before cloudstack starts the VM. That would actually
>> be very simple to put in. Via the KVM host's agent.properties file we could
>> point to an external script. That script could be in whatever language, as 
>> long
>> as it's executable. It filters the XML and returns new XML which is used to
>> start the VM.
>
> If change vm's xml is enough, then how about use libvirt's hook system:
> http://www.libvirt.org/hooks.html
>
> I think, the issue is that, how to let cloudstack only create one VM per KVM 
> host, or few VMs per host(based on the available PCI devices on the host).
> If we think PCI devices are the resource CloudStack should to take care of 
> during the resource allocation, then we need a framework:
> 1. During host discovering, host can report whatever resources it can detect 
> to mgt server. RAM/CPU freq/local storage are the resources, that currently 
> supported by kvm agent. Here we may need to add PCI devices as another 
> resource.  Such as, KVM agent host returns a StartupAuxiliaryDevicesReportCmd 
> along as with other startupRouteringcmd/StartStorage*cmd etc, during the 
> startup.
> 2. There will be a listener on the mgt server, which can listen on 
> StartupAuxiliaryDevicesReportCmd, then records available PCI devices into DB, 
>  such as host_pci_device_ref table.
> 3. Need to extend FirstFitAllocator, take PCI devices as another resource 
> during the allocation. And also need to find a place to mark the PCI device 
> as used in host_pci_device_ref table, so the pci device won't be allocated to 
> more than one VM.
> 4. Have api to create a customized computing offering, the offering can 
> contain info about PCI device, such as how many PCI devices plugged into a VM.
> 5. If user chooses above customized computing offering during the VM 
> deployment, then the allocator in step 3 will be triggered, which will choose 
> a KVM host which has enough PCI devices to fulfill the computing offering.
> 6. In the startupcommand, the mgt server send to kvm host, it should contain 
> the PCI devices allocated to this VM.
> 7. At the KVM agent code, change VM's xml file properly based on the 
> startupcommand.
>
> How do you think?
>
>>
>> On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus 
>> wrote:
>> > We're working with 'a very large broadcasting company' how are using
>> > cavium cards for ssl offload in all of their hosts
>> >
>> > We need to add:
>> >
>> > 
>> > 
>> > > > function='0x1'/>
>> > 
>> > 
>> >
>> > Into the xml definition of the guest VMs
>> >
>> > I'm very interested in working you guys to make this an integrated
>> > part of CloudStack
>> >
>> > Interestingly cavium card drivers can present a number of virtual 
>> > interfaces
>> specifically designed to be passed through to guest vms, but these must be
>> addressed separately so a single 'stock' xml definition wouldn't be flexible
>> enough to fully utilise the card.
>> >
>> >
>> > Regards,
>> >
>> > Paul Angus
>> > S: +44 20 3603 0540 | M: +447711418784 paul.an...@shapeblue.com
>> >
>> > -Original Message-
>> > From: Kelven Yang [mailto:kelven.y...@citrix.com]
>> > Sent: 11 June 2013 18:10
>> > To: dev@cloudstack.apache.org
>> > Cc: Ryousei Takano
>> > Subject: Re: PCI-Passthrough with CloudStack
>> >
>> >
>> >
>> > On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:
>> >
>> >>Hi,
>> >>
>> >>I am implementing PCI-Passthrough to use with CloudStack for use with
>> >>high-performance networking (10 Gigabit Ethernet/Infiniband).
>> >>
>> >>The current design is to attach a PCI ID (from lspci) to a compute
>> >>offering. (Not a network offering since from CloudStack¹s point 

Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Kelven Yang
As of supporting arbitrary custom parameter, if the custom thing involves
with arbitration process, it makes sense to get CloudStack core to be
aware of (i.e, HA/Migration flow, Allocators). Only custom parameters that
do not need arbitration from CloudStack can be passed through CloudStack
in a general approach(transparent to CloudStack).

Kelven

On 6/11/13 2:43 PM, "Marcus Sorensen"  wrote:

>So we wouldn't bother ourselves with whether it's a network resource,
>gpu resource, or whatever else? That seems more feasible than trying
>to teach or create individual objects to use PCI passthroughs,
>although we'd miss out on some of the specifics, like configuring the
>device. Perhaps that doesn't matter. My solution was with the mindset
>of supporting any custom thing, I can see people saying 'please
>include support for x,y,z', when they can add it via the xml. Hooks is
>a good way to do that, you're right.
>
>On Tue, Jun 11, 2013 at 3:35 PM, Edison Su  wrote:
>>
>>
>>> -Original Message-
>>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>> Sent: Tuesday, June 11, 2013 12:10 PM
>>> To: dev@cloudstack.apache.org
>>> Cc: Ryousei Takano; Kelven Yang
>>> Subject: Re: PCI-Passthrough with CloudStack
>>>
>>> What we need is some sort of plugin system for the libvirt guest agent,
>>> where people can inject their own additions to the xml. So we pass the
>>>VM
>>> parameters (including name, os, nics, volumes etc) to your plugin, and
>>>it
>>> returns either nothing, or some xml. Or perhaps an object that defines
>>> additional xml for various resources.
>>>
>>> Or maybe we just pass the final cloudstack-generated XML to your
>>>plugin,
>>> the external plugin processes it and returns it, complete with whatever
>>> modifications it wants before cloudstack starts the VM. That would
>>>actually
>>> be very simple to put in. Via the KVM host's agent.properties file we
>>>could
>>> point to an external script. That script could be in whatever
>>>language, as long
>>> as it's executable. It filters the XML and returns new XML which is
>>>used to
>>> start the VM.
>>
>> If change vm's xml is enough, then how about use libvirt's hook system:
>> http://www.libvirt.org/hooks.html
>>
>> I think, the issue is that, how to let cloudstack only create one VM
>>per KVM host, or few VMs per host(based on the available PCI devices on
>>the host).
>> If we think PCI devices are the resource CloudStack should to take care
>>of during the resource allocation, then we need a framework:
>> 1. During host discovering, host can report whatever resources it can
>>detect to mgt server. RAM/CPU freq/local storage are the resources, that
>>currently supported by kvm agent. Here we may need to add PCI devices as
>>another resource.  Such as, KVM agent host returns a
>>StartupAuxiliaryDevicesReportCmd along as with other
>>startupRouteringcmd/StartStorage*cmd etc, during the startup.
>> 2. There will be a listener on the mgt server, which can listen on
>>StartupAuxiliaryDevicesReportCmd, then records available PCI devices
>>into DB,  such as host_pci_device_ref table.
>> 3. Need to extend FirstFitAllocator, take PCI devices as another
>>resource during the allocation. And also need to find a place to mark
>>the PCI device as used in host_pci_device_ref table, so the pci device
>>won't be allocated to more than one VM.
>> 4. Have api to create a customized computing offering, the offering can
>>contain info about PCI device, such as how many PCI devices plugged into
>>a VM.
>> 5. If user chooses above customized computing offering during the VM
>>deployment, then the allocator in step 3 will be triggered, which will
>>choose a KVM host which has enough PCI devices to fulfill the computing
>>offering.
>> 6. In the startupcommand, the mgt server send to kvm host, it should
>>contain the PCI devices allocated to this VM.
>> 7. At the KVM agent code, change VM's xml file properly based on the
>>startupcommand.
>>
>> How do you think?
>>
>>>
>>> On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus 
>>> wrote:
>>> > We're working with 'a very large broadcasting company' how are using
>>> > cavium cards for ssl offload in all of their hosts
>>> >
>>> > We need to add:
>>> >
>>> > 
>>> > 
>>> > >>function='0x1'/>
>>> > 
>>> > 
>>> >
>>> > Into the xml definition of the guest VMs
>>> >
>>> > I'm very interested in working you guys to make this an integrated
>>> > part of CloudStack
>>> >
>>> > Interestingly cavium card drivers can present a number of virtual
>>>interfaces
>>> specifically designed to be passed through to guest vms, but these
>>>must be
>>> addressed separately so a single 'stock' xml definition wouldn't be
>>>flexible
>>> enough to fully utilise the card.
>>> >
>>> >
>>> > Regards,
>>> >
>>> > Paul Angus
>>> > S: +44 20 3603 0540 | M: +447711418784 paul.an...@shapeblue.com
>>> >
>>> > -Original Message-
>>> > From: Kelven Yang [mailto:kelven.y...@citrix.com]
>>> > Sent: 11 June 2013 18:10
>>>

if anybody know how to restrict access to a specific zone

2013-06-11 Thread William Jiang
Hi,

if anybody know how to restrict access to a specific zone? We have multiple 
zone in our cloudstack 3.0 and I want to give access to a user to see only a 
specific zone.

Thanks,
William
This e-mail may be privileged and/or confidential, and the sender does not 
waive any related rights and obligations. Any distribution, use or copying of 
this e-mail or the information it contains by other than an intended recipient 
is unauthorized. If you received this e-mail in error, please advise me (by 
return e-mail or otherwise) immediately. Ce courrier ?lectronique est 
confidentiel et prot?g?. L'exp?diteur ne renonce pas aux droits et obligations 
qui s'y rapportent. Toute diffusion, utilisation ou copie de ce message ou des 
renseignements qu'il contient par une personne autre que le (les) 
destinataire(s) d?sign?(s) est interdite. Si vous recevez ce courrier 
?lectronique par erreur, veuillez m'en aviser imm?diatement, par retour de 
courrier ?lectronique ou par un autre moyen.


CloudStack Git Repo Down

2013-06-11 Thread Soheil Eizadi
Is there a scheduled outage for the CloudStack Git Repo?

"Service Temporarily Unavailable

The server is temporarily unable to service your request due to maintenance 
downtime or capacity problems. Please try again later."


RE: if anybody know how to restrict access to a specific zone

2013-06-11 Thread Jessica Wang
> I want to give access to a user to see only a specific zone.

What is the role of the user that you want to give access to see only a 
specific zone?
A normal-user, a domain-admin, or a root-admin?


-Original Message-
From: William Jiang [mailto:william.ji...@manwin.com] 
Sent: Tuesday, June 11, 2013 3:17 PM
To: dev@cloudstack.apache.org; users...@cloudstack.apache.org; 
us...@cloudstack.apache.org
Subject: if anybody know how to restrict access to a specific zone

Hi,

if anybody know how to restrict access to a specific zone? We have multiple 
zone in our cloudstack 3.0 and I want to give access to a user to see only a 
specific zone.

Thanks,
William
This e-mail may be privileged and/or confidential, and the sender does not 
waive any related rights and obligations. Any distribution, use or copying of 
this e-mail or the information it contains by other than an intended recipient 
is unauthorized. If you received this e-mail in error, please advise me (by 
return e-mail or otherwise) immediately. Ce courrier ?lectronique est 
confidentiel et prot?g?. L'exp?diteur ne renonce pas aux droits et obligations 
qui s'y rapportent. Toute diffusion, utilisation ou copie de ce message ou des 
renseignements qu'il contient par une personne autre que le (les) 
destinataire(s) d?sign?(s) est interdite. Si vous recevez ce courrier 
?lectronique par erreur, veuillez m'en aviser imm?diatement, par retour de 
courrier ?lectronique ou par un autre moyen.


Re: Review Request: (CLOUDSTACK-1301) VM Disk I/O Throttling

2013-06-11 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11782/#review21754
---



api/src/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java


Any reason this is commented? Or just a small mistake?



plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java


Isn't there a way to get this information via the libvirt-java API? If not, 
what are you missing?

I'm not a big fan of these constructions.

http://libvirt.org/sources/java/javadoc/org/libvirt/Connect.html

Wouldn't these methods help you?
* getCapabilities() 
* getHypervisorVersion(java.lang.String type)
* getLibVirVersion()
* getVersion()



server/src/com/cloud/configuration/ConfigurationManagerImpl.java


Why is this warn and nog debug? Also, couldn't you extend these lines a bit 
more? Since a non-developer would never get what you mean with this.


- Wido den Hollander


On June 10, 2013, 5:51 p.m., Wei Zhou wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11782/
> ---
> 
> (Updated June 10, 2013, 5:51 p.m.)
> 
> 
> Review request for cloudstack, Wido den Hollander and John Burwell.
> 
> 
> Description
> ---
> 
> The patch for VM Disk I/O throttling based on commit 
> 3f3c6aa35f64c4129c203d54840524e6aa2c4621
> 
> 
> This addresses bug CLOUDSTACK-1301.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/VolumeTO.java 4cbe82b 
>   api/src/com/cloud/offering/DiskOffering.java dd77c70 
>   api/src/com/cloud/vm/DiskProfile.java e3a3386 
>   api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
>   
> api/src/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
>  aa11599 
>   
> api/src/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
>  4c54a4e 
>   api/src/org/apache/cloudstack/api/response/DiskOfferingResponse.java 
> 377e66e 
>   api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java 
> 31533f8 
>   api/src/org/apache/cloudstack/api/response/VolumeResponse.java 21d7d1a 
>   client/WEB-INF/classes/resources/messages.properties 2b17359 
>   core/src/com/cloud/agent/api/AttachVolumeCommand.java 302b8f8 
>   engine/schema/src/com/cloud/storage/DiskOfferingVO.java 909d7fe 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  bab53bc 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
>  b8645e1 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
>  9cddb2e 
>   server/src/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java 283181f 
>   server/src/com/cloud/api/query/dao/ServiceOfferingJoinDaoImpl.java 56e4d0a 
>   server/src/com/cloud/api/query/dao/VolumeJoinDaoImpl.java e27e2d9 
>   server/src/com/cloud/api/query/vo/DiskOfferingJoinVO.java 6d3cdcb 
>   server/src/com/cloud/api/query/vo/ServiceOfferingJoinVO.java e87a101 
>   server/src/com/cloud/api/query/vo/VolumeJoinVO.java 6ef8c91 
>   server/src/com/cloud/configuration/Config.java 5ee0fad 
>   server/src/com/cloud/configuration/ConfigurationManager.java 8db037b 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/src/com/cloud/storage/StorageManager.java d49a7f8 
>   server/src/com/cloud/storage/StorageManagerImpl.java d38b35e 
>   server/src/com/cloud/storage/VolumeManagerImpl.java 43f3681 
>   server/src/com/cloud/test/DatabaseConfig.java 70c8178 
>   server/test/com/cloud/vpc/MockConfigurationManagerImpl.java 21b3590 
>   setup/db/db/schema-410to420.sql bcfbcc9 
>   ui/dictionary.jsp a5f0662 
>   ui/scripts/configuration.js cb15598 
>   ui/scripts/instances.js 7149815 
> 
> Diff: https://reviews.apache.org/r/11782/diff/
> 
> 
> Testing
> ---
> 
> testing ok.
> 
> 
> Thanks,
> 
> Wei Zhou
> 
>



Re: CloudStack Git Repo Down

2013-06-11 Thread David Nalley
There are some problems currently around some of the ASF services,
including git.

The issues are being worked.

Feel free to look at http://twitter.com/infrabot and http://status.apache.org

--David



On Tue, Jun 11, 2013 at 6:22 PM, Soheil Eizadi  wrote:
> Is there a scheduled outage for the CloudStack Git Repo?
>
> "Service Temporarily Unavailable
>
> The server is temporarily unable to service your request due to maintenance 
> downtime or capacity problems. Please try again later."


Re: if anybody know how to restrict access to a specific zone

2013-06-11 Thread Kelcey Jamison Damage
in cloudstack the domain or subdomain would require 1 or more 'private zones' 
to restrict user access

If you make a zone under root(called a public zone) then everyone can access 
and is restricted by quotas.

hope that helps.

- Original Message -
From: "Jessica Wang" 
To: us...@cloudstack.apache.org, dev@cloudstack.apache.org, 
users...@cloudstack.apache.org
Sent: Tuesday, June 11, 2013 3:23:08 PM
Subject: RE: if anybody know how to restrict access to a specific zone

> I want to give access to a user to see only a specific zone.

What is the role of the user that you want to give access to see only a 
specific zone?
A normal-user, a domain-admin, or a root-admin?


-Original Message-
From: William Jiang [mailto:william.ji...@manwin.com] 
Sent: Tuesday, June 11, 2013 3:17 PM
To: dev@cloudstack.apache.org; users...@cloudstack.apache.org; 
us...@cloudstack.apache.org
Subject: if anybody know how to restrict access to a specific zone

Hi,

if anybody know how to restrict access to a specific zone? We have multiple 
zone in our cloudstack 3.0 and I want to give access to a user to see only a 
specific zone.

Thanks,
William
This e-mail may be privileged and/or confidential, and the sender does not 
waive any related rights and obligations. Any distribution, use or copying of 
this e-mail or the information it contains by other than an intended recipient 
is unauthorized. If you received this e-mail in error, please advise me (by 
return e-mail or otherwise) immediately. Ce courrier ?lectronique est 
confidentiel et prot?g?. L'exp?diteur ne renonce pas aux droits et obligations 
qui s'y rapportent. Toute diffusion, utilisation ou copie de ce message ou des 
renseignements qu'il contient par une personne autre que le (les) 
destinataire(s) d?sign?(s) est interdite. Si vous recevez ce courrier 
?lectronique par erreur, veuillez m'en aviser imm?diatement, par retour de 
courrier ?lectronique ou par un autre moyen.


Re: if anybody know how to restrict access to a specific zone

2013-06-11 Thread Kelcey Jamison Damage
Also the our user structure is per-domain:

[]root| {Quota}}{Quota}   < --- public 
zone
  |
  {Access}[]sub-domain| {Restricted}  < --- private 
zone
  |
  {Access}  {Restricted}  []t2-sub-domain|< --- private 
zone



- Original Message -
From: "Kelcey Jamison Damage" 
To: dev@cloudstack.apache.org
Sent: Tuesday, June 11, 2013 3:35:30 PM
Subject: Re: if anybody know how to restrict access to a specific zone

in cloudstack the domain or subdomain would require 1 or more 'private zones' 
to restrict user access

If you make a zone under root(called a public zone) then everyone can access 
and is restricted by quotas.

hope that helps.

- Original Message -
From: "Jessica Wang" 
To: us...@cloudstack.apache.org, dev@cloudstack.apache.org, 
users...@cloudstack.apache.org
Sent: Tuesday, June 11, 2013 3:23:08 PM
Subject: RE: if anybody know how to restrict access to a specific zone

> I want to give access to a user to see only a specific zone.

What is the role of the user that you want to give access to see only a 
specific zone?
A normal-user, a domain-admin, or a root-admin?


-Original Message-
From: William Jiang [mailto:william.ji...@manwin.com] 
Sent: Tuesday, June 11, 2013 3:17 PM
To: dev@cloudstack.apache.org; users...@cloudstack.apache.org; 
us...@cloudstack.apache.org
Subject: if anybody know how to restrict access to a specific zone

Hi,

if anybody know how to restrict access to a specific zone? We have multiple 
zone in our cloudstack 3.0 and I want to give access to a user to see only a 
specific zone.

Thanks,
William
This e-mail may be privileged and/or confidential, and the sender does not 
waive any related rights and obligations. Any distribution, use or copying of 
this e-mail or the information it contains by other than an intended recipient 
is unauthorized. If you received this e-mail in error, please advise me (by 
return e-mail or otherwise) immediately. Ce courrier ?lectronique est 
confidentiel et prot?g?. L'exp?diteur ne renonce pas aux droits et obligations 
qui s'y rapportent. Toute diffusion, utilisation ou copie de ce message ou des 
renseignements qu'il contient par une personne autre que le (les) 
destinataire(s) d?sign?(s) est interdite. Si vous recevez ce courrier 
?lectronique par erreur, veuillez m'en aviser imm?diatement, par retour de 
courrier ?lectronique ou par un autre moyen.


Re: Review Request: Fix for test case failure test_network.py:test_delete_account - CLOUDSTACK-2898

2013-06-11 Thread Rayees Namathponnan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11713/
---

(Updated June 11, 2013, 11:17 p.m.)


Review request for cloudstack, Prasanna Santhanam and Girish Shilamkar.


Changes
---

Attached file 


Description
---

https://issues.apache.org/jira/browse/CLOUDSTACK-2898

In this test case we need to capture "cloudstackAPIException" before capturing 
more generic exceptions


Diffs
-


Diff: https://reviews.apache.org/r/11713/diff/


Testing
---

Tested 


Thanks,

Rayees Namathponnan



Re: Test halting build every now and then

2013-06-11 Thread Mike Tutkowski
Took five tries to get the build over this hump a moment ago.

Any thoughts on what's going on there?

Thanks!


On Tue, Jun 4, 2013 at 12:22 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi,
>
> Does anyone know if there is a way to stop the build from (very often)
> halting here indefinitely?
>
> 2013-06-04 12:19:47,836 INFO  [utils.net.NetUtilsTest] (main:) IP is
> 1234:5678::dd3b:e82c:ce6b:fe5c
> 2013-06-04 12:19:47,839 INFO  [utils.net.NetUtilsTest] (main:) IP is
> 1234:5678::814a:9955:e8e2:84f
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec
> Running com.cloud.utils.StringUtilsTest
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec
> Running com.cloud.utils.testcase.NioTest
> 2013-06-04 12:19:47,860 INFO  [utils.testcase.NioTest] (main:) Test
> 2013-06-04 12:19:47,913 INFO  [utils.nio.NioServer]
> (NioTestServer-Selector:) NioConnection started and listening on
> 0.0.0.0/0.0.0.0:
>
> I usually just wait for this point in the build and if it halts, then I
> CTRL-C and try again.
>
> Thanks!
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the 
> cloud
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Expanding Volumes

2013-06-11 Thread Maurice Lawler

Hello,

I see one of the features is to expand and/or shrink drives. However, I 
just encountered I can only do that for secondary drives on a particular 
instance. Is this feature not readily available to be done via the 
primary virtual drive of said instance?



- Maurice


Re: [GSOC] Community Bonding Period

2013-06-11 Thread Jessica Tomechak
This strikes me as a nice opportunity to make an enjoyable video! Since we
have our videographer Gregg available for the summer, let's explore the
possibility of making a video about the GSoC participants' experience.
Seems like that could be a fun video to make, plus very motivating for
other new CloudStack contributors to view!

I wonder if Google would have any restrictions about using the name GSoC in
a video, etc., but we could check with them.

Jessica T.


On Wed, May 29, 2013 at 1:38 AM, Sebastien Goasguen wrote:

> Hi Dharmesh, Meng, Ian, Nguyen and Shiva,
>
> Congratulations again on being selected for the 2013 Google Summer of Code.
>
> The program has started and we are now in "community bonding period". On
> June 17th you will officially start to code.
>
> I will mentor Dharmesh and Meng
> Abhi will mentor Ian
> Hugo will mentor Nguyen
> Kelcey will mentor Shiva
> I will act as overall coordinator.
>
> While these are your official mentors from a Google perspective, the
> entire CloudStack community will help you.
>
> There are a few things to keep in mind:
> --
> -The timeline: Check [0]. Note that there are evaluations throughout the
> program and that if progress is not satisfactory you can be dropped from
> the program. Hopefully with terrific mentoring from us all at CloudStack
> this will not happen and you will finish the program with flying colors.
>
> -Email: At the Apache Software Foundation, official communication happen
> via email, so make sure you are registered to the
> dev@cloudstack.apache.org (you are). This is a high traffic list, so
> remember to setup mail filters and be sure to keep the GSOC emails where
> you can read them, without filters you will be overwhelmed and we don't
> want that to happen. When you email the list for a GSOC specific question,
> just put [GSOC] at the start of the subject line.  I CC you in this email
> but will not do it afterwards and just email dev@cloudstack.apache.org
>
> -IRC: For daily conversation and help, we use IRC. Install an IRC client
> and join the #cloudstack and #cloudstack-dev on irc.freenode.net [1].
> Make yourself none and learn a few IRC tricks.
>
> -JIRA: Our ticketing system is JIRA [2], create an account and browse
> JIRA, you should already know where your project is described (which ticket
> number ?). As you start working you will create tickets and subtasks that
> will allow us to track progress. Students having to work on Mesos, Whirr
> and Provisionr will be able to use the same account.
>
> -Review Board [3]: This is the web interface to submit patches when you
> are not an official Apache committer. Create an account on review board.
>
> -Git: To manage the CloudStack source code we use git [4]. You will need
> to become familiar with git. I strongly recommend that you create a
> personal github [5] account. If you are not already familiar with git,
> check my screencast [6].
>
> -Wiki: All our developer content is on our wiki [7]. Browse it, get an
> account and create a page about your project in the Student Project page
> [8].
>
> -Website: I hope you already know our website :) [9]
>
> -CloudStack University: To get your started and get a tour of CloudStack,
> you can watch CloudStack University [10]
>
> Expectations for bonding period:
> 
> *To get you on-board I would like to ask each of you to send an email
> introducing yourself in couple sentences, describe your project (couple
> sentences plus link to the JIRA entry and the wiki page you created),
> confirm that you joined IRC and if you registered a nick tell us what it is
> and finally confirm that you created an account on review board and JIRA.
>
> *By the end of the period, I would like to see your first patch submitted.
> It will be your GSOC proposal in docbook format contributed to a GSOC guide
> I will create. There is no code writing involved, this will just serve as a
> way to make sure you understand the process of submitting a patch and will
> be the start of a great documentation of our GSOC efforts. More on that
> later
>
> On behalf of everyone at CloudStack and especially your mentors (Abhi,
> Kelcey, Hugo and myself) , welcome and let's have fun coding.
>
> -Sebastien
>
>
> [0] - http://www.google-melange.com/gsoc/events/google/gsoc2013
> [1] - http://www.freenode.net
> [2] - https://issues.apache.org/jira/browse/CLOUDSTACK
> [3] - https://reviews.apache.org/dashboard/
> [4] - http://git-scm.com
> [5] - https://github.com
> [6] -
> http://www.youtube.com/watch?v=3c5JIW4onGk&list=PLb899uhkHRoZCRE00h_9CRgUSiHEgFDbC&index=5
> [7] - https://cwiki.apache.org/CLOUDSTACK/
> [8] - https://cwiki.apache.org/CLOUDSTACK/student-projects.html
> [9] - http://cloudstack.apache.org
> [10] -
> http://www.youtube.com/playlist?list=PLb899uhkHRoZCRE00h_9CRgUSiHEgFDbC
>
>
>
>


Please run with assert on when you're developing...

2013-06-11 Thread Alex Huang
Hi All,

CloudStack code have many asserts to guarantee code is written correctly for 
the developers.  I recently realized that since we've converted to maven, we no 
longer run with assert on as developers.  It is very important that we do 
because it will find problems for you during load time and run time.

To run with assert on, you can add "-ea" to MAVEN_DEBUG_OPTS.  

I tried this recently and there are many places that are asserting.  Please do 
a run and fix what you can.

Thanks.

--Alex


Call for participants: "Top 10 Coolest Clouds" video. Show off your cloud

2013-06-11 Thread Jessica Tomechak
Is your cloud one of the "Top 10 Coolest CloudStack Deployments"?
Videographer Gregg Witkin and writer Jessica Tomechak are working together
this summer on a video that aims to show some of the most interesting
real-world applications of CloudStack. They welcome your participation on
this video, and suggestions for other videos you'd like to see.

If you would like your cloud to be considered for inclusion in this video,
please contact Jessica and Gregg ASAP.

Check out this video Gregg did with CloudStack last year, just after we
entered the Apache incubator:
Introduction to the Apache CloudStack
Project


Jessica T.


Re: Please run with assert on when you're developing...

2013-06-11 Thread John Burwell
+100
On Jun 11, 2013, at 8:17 PM, Alex Huang  wrote:

> Hi All,
> 
> CloudStack code have many asserts to guarantee code is written correctly for 
> the developers.  I recently realized that since we've converted to maven, we 
> no longer run with assert on as developers.  It is very important that we do 
> because it will find problems for you during load time and run time.
> 
> To run with assert on, you can add "-ea" to MAVEN_DEBUG_OPTS.  
> 
> I tried this recently and there are many places that are asserting.  Please 
> do a run and fix what you can.
> 
> Thanks.
> 
> --Alex



[ACS42] Ceph Storage Integration with Cloudstack -

2013-06-11 Thread Sudha Ponnaganti
Hi Wido,

This is regarding Ceph integration validation related to the following tickets

https://issues.apache.org/jira/browse/CLOUDSTACK-574
https://issues.apache.org/jira/browse/CLOUDSTACK-1191

If someone want to validate Ceph integration with Cloudstack, what are the 
requirements and also is code ready for someone to validate??

Thanks
/sudha



Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Pawit Pornkitprasan
On Tue, Jun 11, 2013 at 8:26 PM, Vijayendra Bhamidipati
 wrote:

> -Original Message-
> From: David Nalley [mailto:da...@gnsa.us]
> Sent: Tuesday, June 11, 2013 5:08 AM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano
> Subject: Re: PCI-Passthrough with CloudStack
>
> [Vijay>] Any specific reasons for not tracking the type of device? Different 
> hypervisors may implement passthrough differently. KVM may use the PCI ID but 
> afaik vmware does not and so we probably will need to know the type of device 
> in order to map it as a passthrough device.

I don't think that there is any use to tracking the type of device,
and a PCI device can be any kind of device.

I don't have any direct experience with VMWare, but VMWare
documentation 
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010789)
does show that it is recorded using the PCI ID.

> [Vijay>] What is the reason for this limitation? Is it that PCI IDs can 
> change among PCI devices on a host across reboots? In general, what is the 
> effect of a host reboot on PCI IDs? Could the PCI ID of the physical device 
> change? Is there a way to configure passthrough devices without using the PCI 
> ID of the device?

The limitation is to simplify the initial implementation of
allocation. I believe that a PCI ID is constant (unless of course, the
PCI card is physically moved inside the server). A PCI ID will always
have to be specified somewhere, whether it is in the management or
agent.

> This looks like a compelling idea, though I am sure not limited to just 
> networking (think GPU passthrough).
> How are things like live migration affected? Are you making planner changes 
> to deal with the limiting factor of a single PCI-passthrough VM being 
> available per host?

So far, I've made the change to FirstFitAllocator so that it only
assigns one VM with PCI Passthrough to one host. I'm looking to making
it smarter though. (Like what Edison Su suggested)

> What's the level of effort to extend this to work with VMware DirectPath I/O 
> and PCI passthrough on XenServer?

I don't have much experience with VMware or XenServer, so I am not
sure. I am actually doing this as an internship project, so my scope
is likely limited to KVM.

> [Vijay>] It's probably a good idea to limit the passthrough to networking to 
> begin with and implement other types of devices (HBA/CD-ROMs etc) 
> incrementally. Live migration will definitely be affected. In vmware, live 
> migration is disabled for a VM once the VM is configured with a passthrough 
> device. The implementation should handle this. A host of other features also 
> get disabled when passthrough is configured, and if cloudstack is using any 
> of those, we should handle those paths as well.

With KVM, libvirt prevents live migration for machines with PCI
Passthrough enabled. The errors goes back up the stack and the UI
"correctly" displays the error message "Failed to migrate vm".

>
> Regards,
> Vijay
> --David

Best Regards,
Pawit


Re: PCI-Passthrough with CloudStack

2013-06-11 Thread Pawit Pornkitprasan
On 6/11/13 09:35 PM, "Edison Su"  wrote:

> If change vm's xml is enough, then how about use libvirt's hook system:
> http://www.libvirt.org/hooks.html
> I think, the issue is that, how to let cloudstack only create one VM per KVM 
> host, or few
> VMs per host(based on the available PCI devices on the host).
> If we think PCI devices are the resource CloudStack should to take care of 
> during the resource
> allocation, then we need a framework:
> 1. During host discovering, host can report whatever resources it can detect 
> to mgt server.
> RAM/CPU freq/local storage are the resources, that currently supported by kvm 
> agent. Here
> we may need to add PCI devices as another resource.  Such as, KVM agent host 
> returns a StartupAuxiliaryDevicesReportCmd
> along as with other startupRouteringcmd/StartStorage*cmd etc, during the 
> startup.
> 2. There will be a listener on the mgt server, which can listen on 
> StartupAuxiliaryDevicesReportCmd,
> then records available PCI devices into DB,  such as host_pci_device_ref 
> table.
> 3. Need to extend FirstFitAllocator, take PCI devices as another resource 
> during the allocation.
> And also need to find a place to mark the PCI device as used in 
> host_pci_device_ref table,
> so the pci device won't be allocated to more than one VM.
> 4. Have api to create a customized computing offering, the offering can 
> contain info about
> PCI device, such as how many PCI devices plugged into a VM.
> 5. If user chooses above customized computing offering during the VM 
> deployment, then the
> allocator in step 3 will be triggered, which will choose a KVM host which has 
> enough PCI devices
> to fulfill the computing offering.
> 6. In the startupcommand, the mgt server send to kvm host, it should contain 
> the PCI devices
> allocated to this VM.
> 7. At the KVM agent code, change VM's xml file properly based on the 
> startupcommand.
> How do you think?

I think this is a very good idea. Maybe we can do further
generalization (to support Paul's case) by tagging each PCI device
with a name, and we can store the name inside the compute offering
instead of the ID. Then management will look up inside
host_pci_device_ref and find the ID of a suitable PCI device. This
would allow multiple VMs to be allocated to one host of the host has
multiple PCI cards providing the same function.

> > >
> > > -Original Message-
> > > From: Kelven Yang [mailto:kelven.y...@citrix.com]
> > > Sent: 11 June 2013 18:10
> > > To: dev@cloudstack.apache.org
> > > Cc: Ryousei Takano
> > > Subject: Re: PCI-Passthrough with CloudStack
> > >
> > > VirtualMachineTO.params is designed to carry generic VM specific
> > configurations, these configuration parameters can either be statically 
> > linked
> > with the VM or dynamically populated based on other factors like this one.
> > Are you passing PCI ID using VirtualMachineTO.params?

I've created PciTO and pass an array similar to VolumeTO and NicTO.
This there anything wrong with this approach?

> > >
> > > Anything that affects VM placement could have impact to HA/migration,
> > we probably need some graceful error-handling in these code paths,
> > hopefully these have been taken care of.
> > >

Migration is prevented by libvirt and cloudstack displays "Failed to
migrate vm" if the user attempts to migrate a VM. I have not
investigated HA yet.


Re: Contributing as a non-committer

2013-06-11 Thread Mathias Mullins
Ok, Alex and I'Ll start e-mailing you through Elm too! :-)

Matt 



On 6/10/13 11:03 PM, "Alex Huang"  wrote:

>> Forget about eclipse for now :) just use vi :)
>
>Why don't we just go back to ed?
>
>--Alex



Re: Expanding Volumes

2013-06-11 Thread Marcus Sorensen
It would be trivial to do for root volumes, given what's in place. It
was originally done for data disks only because data disks are tied to
disk offerings, whereas root volumes are not. Root disks would have to
be treated the same as a custom disk offering, which some people may
not like, for example if they bill based on disk offerings. Simply
enabling root resize would only be maybe two or three lines of code,
but we would probably want to make a global parameter for people to be
able to turn it on or off as well.

On Tue, Jun 11, 2013 at 5:59 PM, Maurice Lawler  wrote:
> Hello,
>
> I see one of the features is to expand and/or shrink drives. However, I just
> encountered I can only do that for secondary drives on a particular
> instance. Is this feature not readily available to be done via the primary
> virtual drive of said instance?
>
>
> - Maurice


Gui question about a checkbox

2013-06-11 Thread Mike Tutkowski
Hi,

I've noticed a subtle problem with certain checkboxes that have fields tied
to them.

For example, on the New Disk Offering dialog, there is a checkbox called
Custom Disk Size. It starts out un-checked and a Disk Size text field is
visible below it. When you check the checkbox, the text field goes away. If
you un-check it, the field comes back.

This works fine for this field, but the same setup does not work for the
Public checkbox right below it in this same dialog. This checkbox starts
out un-checked as well and is supposed to have a combobox tied to it.
However, you have to check the checkbox, then un-check it for the combobox
to appear.

I have noticed this same problem with code that I have added that has two
text fields tied to the checked or un-checked state of a checkbox: The
checkbox is un-checked by default, but you don't see the text fields until
you check and then un-check the checkbox.

Has anyone else observed this?

Thanks!

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: Gui question about a checkbox

2013-06-11 Thread Pranav Saxena
I reckon , you are using Chrome . You might want to check Firefox once and see 
that the behavior is different :)

PS: Chrome seems to have some issues with the isReverse attribute defined in 
the widget. 

Thanks !

-Original Message-
From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com] 
Sent: Wednesday, June 12, 2013 10:21 AM
To: dev@cloudstack.apache.org
Subject: Gui question about a checkbox

Hi,

I've noticed a subtle problem with certain checkboxes that have fields tied to 
them.

For example, on the New Disk Offering dialog, there is a checkbox called Custom 
Disk Size. It starts out un-checked and a Disk Size text field is visible below 
it. When you check the checkbox, the text field goes away. If you un-check it, 
the field comes back.

This works fine for this field, but the same setup does not work for the Public 
checkbox right below it in this same dialog. This checkbox starts out 
un-checked as well and is supposed to have a combobox tied to it.
However, you have to check the checkbox, then un-check it for the combobox to 
appear.

I have noticed this same problem with code that I have added that has two text 
fields tied to the checked or un-checked state of a checkbox: The checkbox is 
un-checked by default, but you don't see the text fields until you check and 
then un-check the checkbox.

Has anyone else observed this?

Thanks!

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*(tm)*


Re: [ACS42] Ceph Storage Integration with Cloudstack -

2013-06-11 Thread Wido den Hollander

Hi,

I recently wrote a short blog post about this: 
http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/


The RBD integration in CloudStack 4.2 (master branch) is ready for 
testing, not everything has been verified by me, but it should work.


Things that I'm not yet sure about:
* Expunging of RBD volumes
* Expunging of unused templates from Primary Storage

It has all been integrated into the build process in master, so it 
should be just a matter of setting up a cluster from the master branch, 
add Ceph primary storage and see if it works.


A second pair of eyes on the code is always welcome, since I probably 
forgot something.


Wido

On 06/12/2013 03:36 AM, Sudha Ponnaganti wrote:

Hi Wido,

This is regarding Ceph integration validation related to the following tickets

https://issues.apache.org/jira/browse/CLOUDSTACK-574
https://issues.apache.org/jira/browse/CLOUDSTACK-1191

If someone want to validate Ceph integration with Cloudstack, what are the 
requirements and also is code ready for someone to validate??

Thanks
/sudha




RE: [ACS42] Ceph Storage Integration with Cloudstack -

2013-06-11 Thread Sudha Ponnaganti
Thanks Wido for the response. I have another question.  Is there any impact 
because of storage refactoring work that Edison and min have been doing??

-Original Message-
From: Wido den Hollander [mailto:w...@widodh.nl] 
Sent: Tuesday, June 11, 2013 10:23 PM
To: dev@cloudstack.apache.org
Subject: Re: [ACS42] Ceph Storage Integration with Cloudstack -

Hi,

I recently wrote a short blog post about this: 
http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/

The RBD integration in CloudStack 4.2 (master branch) is ready for testing, not 
everything has been verified by me, but it should work.

Things that I'm not yet sure about:
* Expunging of RBD volumes
* Expunging of unused templates from Primary Storage

It has all been integrated into the build process in master, so it should be 
just a matter of setting up a cluster from the master branch, add Ceph primary 
storage and see if it works.

A second pair of eyes on the code is always welcome, since I probably forgot 
something.

Wido

On 06/12/2013 03:36 AM, Sudha Ponnaganti wrote:
> Hi Wido,
>
> This is regarding Ceph integration validation related to the following 
> tickets
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-574
> https://issues.apache.org/jira/browse/CLOUDSTACK-1191
>
> If someone want to validate Ceph integration with Cloudstack, what are the 
> requirements and also is code ready for someone to validate??
>
> Thanks
> /sudha
>
>


Re: Review Request: Cloudstack-2854 [Multiple_IP_Ranges] Failed to create ip alias on VR while deploying guest vm with ip address from new CIDR

2013-06-11 Thread Abhinandan Prateek

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11656/#review21778
---

Ship it!


Ship It!

- Abhinandan Prateek


On June 5, 2013, 4:28 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11656/
> ---
> 
> (Updated June 5, 2013, 4:28 p.m.)
> 
> 
> Review request for cloudstack and Abhinandan Prateek.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Failed to create ip alias on VR while deploying guest vm 
> with ip address from new CIDR
> https://issues.apache.org/jira/browse/CLOUDSTACK-2854
> 
> 
> This addresses bug Cloudstack-2854.
> 
> 
> Diffs
> -
> 
>   
> core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
> 9e40eef 
>   scripts/network/domr/call_dnsmasq.sh PRE-CREATION 
>   scripts/network/domr/createipAlias.sh PRE-CREATION 
>   scripts/network/domr/deleteipAlias.sh PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11656/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: Review Request: Cloudstack-2854 [Multiple_IP_Ranges] Failed to create ip alias on VR while deploying guest vm with ip address from new CIDR

2013-06-11 Thread Abhinandan Prateek

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11656/#review21779
---

Ship it!


It would be nice to consolidate the related commands into same script.

- Abhinandan Prateek


On June 5, 2013, 4:28 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11656/
> ---
> 
> (Updated June 5, 2013, 4:28 p.m.)
> 
> 
> Review request for cloudstack and Abhinandan Prateek.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Failed to create ip alias on VR while deploying guest vm 
> with ip address from new CIDR
> https://issues.apache.org/jira/browse/CLOUDSTACK-2854
> 
> 
> This addresses bug Cloudstack-2854.
> 
> 
> Diffs
> -
> 
>   
> core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
> 9e40eef 
>   scripts/network/domr/call_dnsmasq.sh PRE-CREATION 
>   scripts/network/domr/createipAlias.sh PRE-CREATION 
>   scripts/network/domr/deleteipAlias.sh PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11656/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: Review Request: Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset to existing CIDR is allowed https://issues.apache.org/jira/browse/CLOUDSTACK-2511, Cloudstack-2651 [Sha

2013-06-11 Thread Abhinandan Prateek

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11600/#review21780
---

Ship it!


Ship It!

- Abhinandan Prateek


On June 4, 2013, 2:51 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11600/
> ---
> 
> (Updated June 4, 2013, 2:51 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset 
> to existing CIDR is allowed
> https://issues.apache.org/jira/browse/CLOUDSTACK-2511
> 
> Cloudstack-2651 [Shared n/w]Add IP range should not ask for gateway and 
> netmask while adding the ip range to the existing subnet.
> https://issues.apache.org/jira/browse/CLOUDSTACK-2651
> 
> 
> This addresses bugs Cloudstack-2511 and Cloudstack-2651.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/test/com/cloud/configuration/ValidateIpRangeTest.java 7681667 
>   utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 
> 
> Diff: https://reviews.apache.org/r/11600/diff/
> 
> 
> Testing
> ---
> 
> Tested with master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: Review Request: Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset to existing CIDR is allowed https://issues.apache.org/jira/browse/CLOUDSTACK-2511, Cloudstack-2651 [Sha

2013-06-11 Thread Abhinandan Prateek


> On June 12, 2013, 5:53 a.m., Abhinandan Prateek wrote:
> > Ship It!

create enums for return values to signify the cidd overlap.


- Abhinandan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11600/#review21780
---


On June 4, 2013, 2:51 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11600/
> ---
> 
> (Updated June 4, 2013, 2:51 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset 
> to existing CIDR is allowed
> https://issues.apache.org/jira/browse/CLOUDSTACK-2511
> 
> Cloudstack-2651 [Shared n/w]Add IP range should not ask for gateway and 
> netmask while adding the ip range to the existing subnet.
> https://issues.apache.org/jira/browse/CLOUDSTACK-2651
> 
> 
> This addresses bugs Cloudstack-2511 and Cloudstack-2651.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/test/com/cloud/configuration/ValidateIpRangeTest.java 7681667 
>   utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 
> 
> Diff: https://reviews.apache.org/r/11600/diff/
> 
> 
> Testing
> ---
> 
> Tested with master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: if anybody know how to restrict access to a specific zone

2013-06-11 Thread Nitin Mehta
William - good use case, CS has the ability to dedicate resources to an
account domain, but not restrict them. You can read about that here -
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dedicated+Resources+
-+Private+zone%2C+pod%2C+cluster%2C+host+Functional+Spec

Do you want to raise an enhancement for restricting a domain/account to a
zone ?

Thanks,
-Nitin

On 12/06/13 3:47 AM, "William Jiang"  wrote:

>Hi,
>
>if anybody know how to restrict access to a specific zone? We have
>multiple zone in our cloudstack 3.0 and I want to give access to a user
>to see only a specific zone.
>
>Thanks,
>William
>This e-mail may be privileged and/or confidential, and the sender does
>not waive any related rights and obligations. Any distribution, use or
>copying of this e-mail or the information it contains by other than an
>intended recipient is unauthorized. If you received this e-mail in error,
>please advise me (by return e-mail or otherwise) immediately. Ce courrier
>?lectronique est confidentiel et prot?g?. L'exp?diteur ne renonce pas aux
>droits et obligations qui s'y rapportent. Toute diffusion, utilisation ou
>copie de ce message ou des renseignements qu'il contient par une personne
>autre que le (les) destinataire(s) d?sign?(s) est interdite. Si vous
>recevez ce courrier ?lectronique par erreur, veuillez m'en aviser
>imm?diatement, par retour de courrier ?lectronique ou par un autre moyen.



Re: [ACS42] Ceph Storage Integration with Cloudstack -

2013-06-11 Thread Wido den Hollander



On 06/12/2013 07:38 AM, Sudha Ponnaganti wrote:

Thanks Wido for the response. I have another question.  Is there any impact 
because of storage refactoring work that Edison and min have been doing??



I don't think so. We could implement some new strategies for RBD with 
the new framework, but I haven't done that.


The Object Store is something different, since that enables you to use 
the Amazon S3 compatible RADOS Gateway from Ceph as Secondary Storage 
(Backup and Template).


Wido


-Original Message-
From: Wido den Hollander [mailto:w...@widodh.nl]
Sent: Tuesday, June 11, 2013 10:23 PM
To: dev@cloudstack.apache.org
Subject: Re: [ACS42] Ceph Storage Integration with Cloudstack -

Hi,

I recently wrote a short blog post about this:
http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/

The RBD integration in CloudStack 4.2 (master branch) is ready for testing, not 
everything has been verified by me, but it should work.

Things that I'm not yet sure about:
* Expunging of RBD volumes
* Expunging of unused templates from Primary Storage

It has all been integrated into the build process in master, so it should be 
just a matter of setting up a cluster from the master branch, add Ceph primary 
storage and see if it works.

A second pair of eyes on the code is always welcome, since I probably forgot 
something.

Wido

On 06/12/2013 03:36 AM, Sudha Ponnaganti wrote:

Hi Wido,

This is regarding Ceph integration validation related to the following
tickets

https://issues.apache.org/jira/browse/CLOUDSTACK-574
https://issues.apache.org/jira/browse/CLOUDSTACK-1191

If someone want to validate Ceph integration with Cloudstack, what are the 
requirements and also is code ready for someone to validate??

Thanks
/sudha




Re: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Mike Tutkowski
Hi Edison, John, and Wei (and whoever else is reading this :) ),

Just an FYI that I believe I have implemented all the areas we wanted
addressed.

I plan to review the code again tomorrow morning or afternoon, then send in
another patch.

Thanks for all the work on this everyone!


On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Sure, that sounds good.
>
>
> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU  wrote:
>
>> Hi Mike,
>>
>> It looks the two feature do not have many conflicts in Java code, except
>> the cloudstack UI.
>> If you do not mind, I will merge disk_io_throttling branch into master
>> this
>> week, so that you can develop based on it.
>>
>> -Wei
>>
>>
>> 2013/6/11 Mike Tutkowski 
>>
>> > Hey John,
>> >
>> > The SolidFire patch does not depend on the object_store branch, but - as
>> > Edison mentioned - it might be easier if we merge the SolidFire branch
>> into
>> > the object_store branch before object_store goes into master.
>> >
>> > I'm not sure how the disk_io_throttling fits into this merge strategy.
>> > Perhaps Wei can chime in on that.
>> >
>> >
>> > On Tue, Jun 11, 2013 at 11:07 AM, John Burwell 
>> wrote:
>> >
>> > > Mike,
>> > >
>> > > We have a delicate merge dance to perform.  The disk_io_throttling,
>> > > solidfire, and object_store appear to have a number of overlapping
>> > > elements.  I understand the dependencies between the patches to be as
>> > > follows:
>> > >
>> > > object_store <- solidfire -> disk_io_throttling
>> > >
>> > > Am I correct that the device management aspects of SolidFire are
>> additive
>> > > to the object_store branch or there are circular dependency between
>> the
>> > > branches?  Once we understand the dependency graph, we can determine
>> the
>> > > best approach to land the changes in master.
>> > >
>> > > Thanks,
>> > > -John
>> > >
>> > >
>> > > On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
>> > mike.tutkow...@solidfire.com>
>> > > wrote:
>> > >
>> > > > Also, if we are good with Edison merging my code into his branch
>> before
>> > > > going into master, I am good with that.
>> > > >
>> > > > We can remove the StoragePoolType.Dynamic code after his merge and
>> we
>> > can
>> > > > deal with Burst IOPS then, as well.
>> > > >
>> > > >
>> > > > On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
>> > > > mike.tutkow...@solidfire.com> wrote:
>> > > >
>> > > >> Let me make sure I follow where we're going here:
>> > > >>
>> > > >> 1) There should be NO references to hypervisor code in the storage
>> > > >> plug-ins code (this includes the default storage plug-in, which
>> > > currently
>> > > >> sends several commands to the hypervisor in use (although it does
>> not
>> > > know
>> > > >> which hypervisor (XenServer, ESX(i), etc.) is actually in use))
>> > > >>
>> > > >> 2) managed=true or managed=false can be placed in the url field (if
>> > not
>> > > >> present, we default to false). This info is stored in the
>> > > >> storage_pool_details table.
>> > > >>
>> > > >> 3) When the "attach" command is sent to the hypervisor in
>> question, we
>> > > >> pass the managed property along (this takes the place of the
>> > > >> StoragePoolType.Dynamic check).
>> > > >>
>> > > >> 4) execute(AttachVolumeCommand) in the hypervisor checks for the
>> > managed
>> > > >> property. If true for an attach, the necessary hypervisor data
>> > > structure is
>> > > >> created and the rest of the attach command executes to attach the
>> > > volume.
>> > > >>
>> > > >> 5) When execute(AttachVolumeCommand) is invoked to detach a volume,
>> > the
>> > > >> same check is made. If managed, the hypervisor data structure is
>> > > removed.
>> > > >>
>> > > >> 6) I do not see an clear way to support Burst IOPS in 4.2 unless
>> it is
>> > > >> stored in the volumes and disk_offerings table. If we have some
>> idea,
>> > > >> that'd be cool.
>> > > >>
>> > > >> Thanks!
>> > > >>
>> > > >>
>> > > >> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
>> > > >> mike.tutkow...@solidfire.com> wrote:
>> > > >>
>> > > >>> "+1 -- Burst IOPS can be implemented while avoiding implementation
>> > > >>> attributes.  I always wondered about the details field.  I think
>> we
>> > > should
>> > > >>> beef up the description in the documentation regarding the
>> expected
>> > > format
>> > > >>> of the field.  In 4.1, I noticed that the details are not
>> returned on
>> > > the
>> > > >>> createStoratePool updateStoragePool, or listStoragePool response.
>> >  Why
>> > > >>> don't we return it?  It seems like it would be useful for clients
>> to
>> > be
>> > > >>> able to inspect the contents of the details field."
>> > > >>>
>> > > >>> Not sure how this would work storing Burst IOPS here.
>> > > >>>
>> > > >>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering
>> > > >>> basis. For each Disk Offering created, you have to be able to
>> > associate
>> > > >>> unique Burst IOPS. There is a disk_offering_details table. Maybe
>> it

Re: Review Request: (CLOUDSTACK-1301) VM Disk I/O Throttling

2013-06-11 Thread Wei Zhou


> On June 11, 2013, 10:30 p.m., Wido den Hollander wrote:
> >

Wido,

The first and third are debugging messages. For the first one, I will remove 
the line before this. For the third one, I will remove this line.

For the second one, I will use the methods for coding. Thank you very much for 
your review.

-Wei


- Wei


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11782/#review21754
---


On June 10, 2013, 5:51 p.m., Wei Zhou wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11782/
> ---
> 
> (Updated June 10, 2013, 5:51 p.m.)
> 
> 
> Review request for cloudstack, Wido den Hollander and John Burwell.
> 
> 
> Description
> ---
> 
> The patch for VM Disk I/O throttling based on commit 
> 3f3c6aa35f64c4129c203d54840524e6aa2c4621
> 
> 
> This addresses bug CLOUDSTACK-1301.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/VolumeTO.java 4cbe82b 
>   api/src/com/cloud/offering/DiskOffering.java dd77c70 
>   api/src/com/cloud/vm/DiskProfile.java e3a3386 
>   api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
>   
> api/src/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
>  aa11599 
>   
> api/src/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
>  4c54a4e 
>   api/src/org/apache/cloudstack/api/response/DiskOfferingResponse.java 
> 377e66e 
>   api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java 
> 31533f8 
>   api/src/org/apache/cloudstack/api/response/VolumeResponse.java 21d7d1a 
>   client/WEB-INF/classes/resources/messages.properties 2b17359 
>   core/src/com/cloud/agent/api/AttachVolumeCommand.java 302b8f8 
>   engine/schema/src/com/cloud/storage/DiskOfferingVO.java 909d7fe 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  bab53bc 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
>  b8645e1 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
>  9cddb2e 
>   server/src/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java 283181f 
>   server/src/com/cloud/api/query/dao/ServiceOfferingJoinDaoImpl.java 56e4d0a 
>   server/src/com/cloud/api/query/dao/VolumeJoinDaoImpl.java e27e2d9 
>   server/src/com/cloud/api/query/vo/DiskOfferingJoinVO.java 6d3cdcb 
>   server/src/com/cloud/api/query/vo/ServiceOfferingJoinVO.java e87a101 
>   server/src/com/cloud/api/query/vo/VolumeJoinVO.java 6ef8c91 
>   server/src/com/cloud/configuration/Config.java 5ee0fad 
>   server/src/com/cloud/configuration/ConfigurationManager.java 8db037b 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
>   server/src/com/cloud/storage/StorageManager.java d49a7f8 
>   server/src/com/cloud/storage/StorageManagerImpl.java d38b35e 
>   server/src/com/cloud/storage/VolumeManagerImpl.java 43f3681 
>   server/src/com/cloud/test/DatabaseConfig.java 70c8178 
>   server/test/com/cloud/vpc/MockConfigurationManagerImpl.java 21b3590 
>   setup/db/db/schema-410to420.sql bcfbcc9 
>   ui/dictionary.jsp a5f0662 
>   ui/scripts/configuration.js cb15598 
>   ui/scripts/instances.js 7149815 
> 
> Diff: https://reviews.apache.org/r/11782/diff/
> 
> 
> Testing
> ---
> 
> testing ok.
> 
> 
> Thanks,
> 
> Wei Zhou
> 
>