Re: Review Request: Bugfix CLOUDSTACK-1594: Secondary storage host always remains Alert status

2013-06-17 Thread Nitin Mehta
Roxanne/Abhi - Thanks for following up. I guess this should be resolved by
the Object Store work going on.
Edison/Min - Would we still have an entry for the secondary storage in
the host table?

Thanks,
-Nitin

On 17/06/13 12:01 PM, "Abhinandan Prateek"  wrote:

>
>
>> On June 17, 2013, 5:06 a.m., Abhinandan Prateek wrote:
>> > This is an old patch, is it still valid?
>> 
>> Prasanna Santhanam wrote:
>> Hmm - quite embarrassing that it is over 3 months old. We may have
>>missed a contributor :(
>> 
>> However, it should still be valid because the secondary storage host
>>still shows 'Alert' on all installs.
>> 
>> roxanne chang wrote:
>> It would be nice if the changes help. And... is the new design
>>finished?
>
>Roxanne,  you may have to redo the patch on the current master and
>resubmit as it fails to apply now.
>
>
>- Abhinandan
>
>
>---
>This is an automatically generated e-mail. To reply, visit:
>https://reviews.apache.org/r/9818/#review21971
>---
>
>
>On May 31, 2013, 1:01 a.m., roxanne chang wrote:
>> 
>> ---
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/9818/
>> ---
>> 
>> (Updated May 31, 2013, 1:01 a.m.)
>> 
>> 
>> Review request for cloudstack, Abhinandan Prateek and edison su.
>> 
>> 
>> Description
>> ---
>> 
>> Bugfix CLOUDSTACK-1594: Secondary storage host always remains Alert
>>status
>> [https://issues.apache.org/jira/browse/CLOUDSTACK-1594]
>> 
>> In file SecondaryStorageManagerImpl.java, function
>>generateSetupCommand, if the host type is Secondary Storage VM, the
>>logic sets up the secondary storage host; at this point the secondary
>>storage host status should become Up.
>> 
>> The secondary storage host always remains in Alert status because the
>>secondary storage host is created before the secondary storage VM is
>>deployed. The workaround (at the end of file AgentManagerImpl.java,
>>function notifyMonitorsOfConnection) tries to disconnect the secondary
>>storage, so the secondary storage host becomes Alert. The code should
>>take the SSVM into consideration, not only the Answer response.
>> 
>> In file ResourceManagerImpl.java, function discoverHostsFull calls
>>discoverer.postDiscovery at the end. In
>>SecondaryStorageDiscoverer.postDiscovery, the condition _userServiceVM
>>is not needed, since its purpose of making the secondary storage host
>>wait for the SSVM is already handled in SecondaryStorageManagerImpl.
>>This is why the secondary storage host always remains in Alert status.
>> 
>> 
>> This addresses bug
>>https://issues.apache.org/jira/browse/CLOUDSTACK-1594.
>> 
>> 
>> Diffs
>> -
>> 
>>   server/src/com/cloud/agent/manager/AgentManagerImpl.java c1bbb58
>>   
>>server/src/com/cloud/storage/secondary/SecondaryStorageDiscoverer.java
>>3ca74a3 
>>   
>>server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java
>>46ac7af 
>> 
>> Diff: https://reviews.apache.org/r/9818/diff/
>> 
>> 
>> Testing
>> ---
>> 
>> Tested on 4.0.0 and 4.2.0 in basic mode; works well.
>> 
>> 
>> Thanks,
>> 
>> roxanne chang
>> 
>>
>
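For context, the check described in the review above boils down to something
like the following sketch. The class, enum and method names here are
illustrative assumptions, not the actual CloudStack 4.x code:

    // Illustrative sketch only -- the type and method names are assumptions,
    // not the real CloudStack classes. The point from the description: when
    // deciding a host's status after setup, a secondary storage host whose
    // SSVM has not come up yet should keep waiting instead of being pushed
    // into Alert just because no successful Answer has arrived.
    public class SecondaryStorageStatusSketch {

        enum HostType { Routing, SecondaryStorage, SecondaryStorageVM }
        enum HostStatus { Up, Alert, Connecting }

        HostStatus decideStatus(HostType type, boolean answerOk, boolean ssvmRunning) {
            if (answerOk) {
                return HostStatus.Up;            // normal success path
            }
            if (type == HostType.SecondaryStorage && !ssvmRunning) {
                // SSVM not deployed yet: keep waiting rather than disconnecting,
                // otherwise the secondary storage host stays in Alert forever.
                return HostStatus.Connecting;
            }
            return HostStatus.Alert;             // genuine failure
        }
    }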



Re: Object based Secondary storage.

2013-06-17 Thread Thomas O'Dowd
Thanks Min - I filed 3 small issues today. I've a couple more, but I want
to try and reproduce them again before I file them and I've no time right
now. Please let me know if you need any further detail on any of these.

https://issues.apache.org/jira/browse/CLOUDSTACK-3027
https://issues.apache.org/jira/browse/CLOUDSTACK-3028
https://issues.apache.org/jira/browse/CLOUDSTACK-3030

An example of the other issues I'm running into is that when I upload
a .gz template on regular NFS storage, it is automatically decompressed
for me, whereas with S3 the template remains a .gz file. Is this
correct or not? Also, perhaps related: after successfully uploading
the template to S3 and then trying to start an instance using it, I can
select it and go all the way to the last screen where I think the action
button says launch instance or something, and it fails with a resource
unreachable error. I'll have to dig up the error later and file the bug
as my machine got rebooted over the weekend.

The multipart upload looks like it is working correctly though, and I can
verify that the checksums etc. match what they should be.

Tom.

On Fri, 2013-06-14 at 16:55 +, Min Chen wrote:
> HI Tom,
> 
>   You can file a JIRA ticket for the object_store branch by prefixing your
> bug with "Object_Store_Refactor" and mentioning that it is using a build
> from object_store. Here is an example bug filed by Sangeetha against an
> object_store branch build:
> https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
>   If you use devcloud for testing, you may run into an issue where the ssvm
> cannot access a public URL when you register a template, so template
> registration will fail. You may have to set up an internal web server inside
> devcloud and post the template to be registered there, to give a URL that
> devcloud can access. We mainly used devcloud to run our TestNG automation
> tests earlier, and then switched to a real hypervisor for real testing.
>   Thanks
>   -min
> 
> On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:
> 
> >Edison,
> >
> >I've got devcloud running along with the object_store branch and I've
> >finally been able to test a bit today.
> >
> >I found some issues (or things that I think are bugs) and would like to
> >file a few issues. I know where the bug database is and I have an
> >account but what is the best way to file bugs against this particular
> >branch? I guess I can select "Future" as the version? How else are
> >feature branches usually identified in issues? Perhaps in the subject?
> >Please let me know the preference.
> >
> >Also, can you describe (or point me at a document) the best way to
> >test against the object_store branch? So far I have been doing the
> >following, but I'm not sure it is the best approach:
> >
> > a) setup devcloud.
> > b) stop any instances on devcloud from previous runs
> >  xe vm-shutdown --multiple
> > c) check out and update the object_store branch.
> > d) clean build as described in devcloud doc (ADIDD for short)
> > e) deploydb (ADIDD)
> > f) start management console (ADIDD) and wait for it.
> > g) deploysvr (ADIDD) in another shell.
> > h) on devcloud machine use xentop to wait for 2 vms to launch.
> >(I'm not sure what the nfs vm is used for here??)
> > i) login on gui -> infra -> secondary and remove nfs secondary storage
> > j) add s3 secondary storage (using cache of old secondary storage?)
> >
> >Then rest of testing starts from here... (and also perhaps in step j)
> >
> >Thanks,
> >
> >Tom.
> >-- 
> >Cloudian KK - http://www.cloudian.com/get-started.html
> >Fancy 100TB of full featured S3 Storage?
> >Checkout the Cloudian® Community Edition!
> >
> 

-- 
Cloudian KK - http://www.cloudian.com/get-started.html
Fancy 100TB of full featured S3 Storage?
Checkout the Cloudian® Community Edition!



Re: Upgrade failure. 2.2.14 to 4.1.0

2013-06-17 Thread Wei ZHOU
The fix for Bug CLOUDSTACK-3005 may help you.

-Wei


2013/6/16 Glen Baars 

> Hello Cloudstack dev,
>
> Just wanted to share my experience of upgrading 2.2.14 to 4.1.0 with 6
> XenServer 5.6SP2 hosts.
>
> The documentation doesn't work - so many errors. After 25 hours I ended up
> restoring back to 2.2.14.
>
> I am willing to edit the docs when I get the upgrade to work.
>
> I would suggest the following changes:
>
> 1.
> Automatically take a mysqldump prior to the database upgrade.
> If it fails, revert to the dump and log.
>
> This would be faster to diag / fix and leave the db in a usable state. I
> know you should take a db backup first anyway, but this would be cleaner.
>
> 2.
> The vhd-util download isn't on the upgrade guides.
>
> 3.
> The upgrade guide for 4.1.0 incorrectly lists the repos in the examples and
> 4.0
>
> 4.
> The upgrade guide gets cloud and Cloudstack folders wrong all the time
>
> 5.
> I had an issue with an SR-BACKEND failure that ended up being a corrupt
> vhd. This was throwing all kinds of errors with the solution. I know this
> is a XenServer issue; it just might help others with SR-BACKEND failures.
>
> 6.
> 4.0.2 Ubuntu repo install is missing the cloud-setup-encryption binaries.
>
> 7.
> Can't upgrade from 2.2.14 to 4.0.2 due to a known bug that is listed as
> solved due to 4.1.0 being released. This doesn't help when you can't get
> 4.1.0 to work. (The bug is about the database conversion missing 4.0.2
> schema files.) If the upgrades don't work, can we put that on the upgrade
> guide / release notes?
>
> This brings me to the issue that I am having.
>
> After the upgrade, I can't deploy any VDIs into the SRs. The secondary
> storage is not getting mounted on the XenServers.
>
> The errors creating the secondary storage VM are below.
>
> http://pastebin.com/H4jZutuP
>
> Any ideas? I put the vhd-util in the correct location and set the
> permissions.
>
> Just to confirm my upgrade steps
>
> 1. Upgrade my single Ubuntu 12.04 LTS management server from 2.2.14 to 4.1.0
> 2. Rolling pool upgrade of XenServer from 5.6SP2 to 6.1
> 3. Run the script to reboot all virtual routers and system VMs
>
> I got to step three and no routers came back up :( due to the secondary
> storage / SR issues.
>
> I have been looking for a way to contribute, maybe I can do Xenserver
> upgrade testing. I am setting up a test lab for this issue currently.
>
> Regards,
>
> Glen Baars
>
>


Re: Cloud Usage and API

2013-06-17 Thread Wei ZHOU
1. No. APIs are for users, not for cloudstack. There is a thread running at
the scheduled time.
2. No, the generateUsageRecords API adds a usage job to the database table
cloud_usage.usage_job.
3. There is a thread named Heartbeat in the usage server which runs every
minute. It checks the database and generates the usage records since the
last usage job.
4. See 3.
5. I do not know.

-Wei


2013/6/16 CK 

> Can anyone help with my queries...?
>
>
> On 10 June 2013 09:21, CK  wrote:
>
> > Can someone please provide further information on what the API call:
> > generateUsageRecords actually does?
> >
> > I know it generates usage records, but:
> >
> > 1. Does the Usage Server service call the generateUsageRecords API when
> > it runs at the scheduled time(=usage.stats.job.exec.time) or does the
> > service do more than that?
> >
> > 2. Does the generateUsageRecords API execute the Usage Server service on
> > demand to generate the usage records?
> >
> > 3. For what period does it generate the usage records eg.
> > i. from the last time this API function was called
> > ii. from the last time the usage server ran
> >
> > 4. If I create a VM and the corresponding ACS events are created in the
> > Cloud DB, and I then run generateUsageRecords soon after, will the usage
> > records be generated up to that point?
> >
> > 5. Are there any other documented details available on this API command
> > other than the API document?
> >
>
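For anyone who wants to trigger this by hand, a hedged sketch of a signed
generateUsageRecords call is below. The endpoint, keys and dates are
placeholders, and the signing steps (sort the parameters, URL-encode the
values, lowercase, HMAC-SHA1, Base64, URL-encode the signature) follow the
usual CloudStack API signing recipe. As Wei describes, the call only queues a
usage job that the Heartbeat thread picks up later:

    import java.net.URLEncoder;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Hedged sketch: builds a signed generateUsageRecords request URL.
    // The endpoint, keys and dates are placeholders.
    public class GenerateUsageRecordsExample {
        public static void main(String[] args) throws Exception {
            String apiKey = "YOUR_API_KEY";      // placeholder
            String secretKey = "YOUR_SECRET";    // placeholder

            Map<String, String> params = new TreeMap<String, String>(); // sorted by key
            params.put("command", "generateUsageRecords");
            params.put("startdate", "2013-06-01");
            params.put("enddate", "2013-06-17");
            params.put("response", "json");
            params.put("apikey", apiKey);

            StringBuilder query = new StringBuilder();
            for (Map.Entry<String, String> e : params.entrySet()) {
                if (query.length() > 0) {
                    query.append('&');
                }
                query.append(e.getKey()).append('=')
                     .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            }

            // Standard CloudStack signing: HMAC-SHA1 over the lowercased,
            // sorted query string, Base64-encoded, then URL-encoded.
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
            byte[] digest = mac.doFinal(query.toString().toLowerCase().getBytes("UTF-8"));
            String signature = URLEncoder.encode(
                    Base64.getEncoder().encodeToString(digest), "UTF-8");

            // This only queues a usage job; the usage server's Heartbeat
            // thread generates the records later, as described above.
            System.out.println("http://localhost:8080/client/api?"
                    + query + "&signature=" + signature);
        }
    }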


[GSOC] A short description about CloudStack Networking plugin

2013-06-17 Thread Nguyen Anh Tu
Hi all,

I made a wiki entry about the CloudStack networking design. I think it's
useful for all network plugins to follow. It's located in my GSoC project
about improving the native SDN controller. Take a look at it:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Add+Xen+and+XCP+support+for+GRE+SDN+controller

Thanks,

-- 

N.g.U.y.e.N.A.n.H.t.U


Re: [GSoC] End of bonding period, start of 'Work Period'

2013-06-17 Thread Ian Duffy
Hi Sebastian,

I have updated the JIRA case for my project to add more steps so it is
like Nguyen's one.

Just after looking through JIRA, results are as follows:

(Resolved) CLOUDSTACK-2287 - Automation:LDAP: Appears to be solved.
tsp added tests for it.

(Resolved) CLOUDSTACK-1172 - Ldap enhancements: Marked as resolved by abhi.

(Blocked) CLOUDSTACK-1540 - LdapRemove within ui: Blocked due to
CLOUDSTACK-2168.

(Resolved) CLOUDSTACK-1495 - Change UI field name "bind username"

(Open) CLOUDSTACK-2168 - configured LDAP values not shown in UI.
Commenter says it's a regression issue. When I tested this my values
disappeared and a message "No data to show" was given. Querying the
API directly still showed my LDAP configuration.

(Open) CLOUDSTACK-430 - Authentication should support multiple LDAP servers.

(Resolved) CLOUDSTACK-1069 - Workaround for CS and LDAP users to login
simultaneously. Marked as unresolved, appears to be resolved based on
comments. Appears to no longer be an issue in 4.1 and 4.2 based on
comments on CLOUDSTACK-1930.

(Open) CLOUDSTACK-1062 - LDAP integration with ACS user management,
detailed at 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/ACS+integration+with+User+LDAP+base

(Open) CLOUDSTACK-1213 - LDAP SSL auth fails to set up. On first look
and reading of the error message, it appears to be a user issue with
the trustedstore certificate not existing.

(Fixed) CLOUDSTACK-1142 - due to "%" being an illegal character. Marked
as bug fixed.

(Fixed) CLOUDSTACK-1398 - Failed to update job status. Marked fixed.

(Fixed) CLOUDSTACK-1494 - Showing wrong warning messages. Marked as Fixed.


Re: jenkins jobs for new docs guide

2013-06-17 Thread Prasanna Santhanam
On Sun, Jun 16, 2013 at 05:10:09PM -0400, Sebastien Goasguen wrote:
> Hi,
> 
> I have been working on some new doc guides:
> 
> -the gsoc one is in master : docs/publican-gsoc-2013.cfg
> 
> -in the ACS101 branch under docs/acs101/publican.cfg (tons of new goodies in 
> there libcloud, jclouds-cli, knife-cs, whir etc)
> 
> Hugo mentioned to me that we should have some jenkins jobs to build
> those, any takers? (I know squat about jenkins)

It's quite easy. Once you are logged in, go to the view under which
you want to create the job. Then click "New Job" and select "copy from
existing job". In your case you can use the doc jobs that Hugo set up
for midonet/nicira. Then tweak the configuration shell script to build
acs101!

Easy! :)

-- 
Prasanna.,


Powered by BigRock.com



Re: [GSoC] End of bonding period, start of 'Work Period'

2013-06-17 Thread Sebastien Goasguen

On Jun 17, 2013, at 4:54 AM, Ian Duffy  wrote:

> Hi Sebastian,
> 
> I have updated the JIRA case for my project to add more steps so it is
> like Nguyen's one.
> 
> Just after looking through JIRA, results are as follows:
> 
> (Resolved) CLOUDSTACK-2287 - Automation:LDAP: Appears to be solved.
> tsp added tests for it.
> 
> (Resolved) CLOUDSTACK-1172 - Ldap enhancements: Marked of by abhi as resolved.
> 
> (Blocked) CLOUDSTACK-1540 - LdapRemove within ui: Blocked due to
> CLOUDSTACK-2168.
> 
> (Resolved) CLOUDSTACK-1495 - Change UI field name "bind username"
> 
> (Open) CLOUDSTACK-2168 - configured LDAP values not shown in UI.
> Commenter says its a regression issues. When I tested this my values
> disappeared and a message "No data to show" was given. Querying the
> API directly still showed my LDAP configuration.
> 
> (Open) CLOUDSTACK-430 - Authentication should support multiple LDAP servers.
> 
> (Resolved) CLOUDSTACK-1069 - Workaround for CS and LDAP users to login
> simultaneously. Marked as unresolved, appears to be resolved based on
> comments. Appears to no longer be an issue in 4.1 and 4.2 based on
> comments on CLOUDSTACK-1930.
> 
> (Open) CLOUDSTACK-1062 - LDAP integration with ACS user management,
> detailed at 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/ACS+integration+with+User+LDAP+base
> 
> (Open) Cloudstack-1213 - LDAP SSL auth failed to setup. On first look
> and reading of the error message It appears to be a user issue with
> the trustedstore certificate not existing.
> 
> (Fixed) Cloudstack-1142 - due to "%" being illegal character. Marked
> as big fixed.
> 
> (Fixed) CLOUDSTACK-1398 - Failed to update job status. Marked fixed.
> 
> (Fixed) CLOUDSTACK-1494 - Showing wrong warning messages. Markd as Fixed.

Ian thanks for this,

Where it makes sense, feel free to comment and test. 

If some of those open bugs fit into your project plan, make sure to link to 
those JIRA bugs.
If you think you can fix them, feel free to claim the bugs and assign them to 
yourself.

-Sebastien




Re: [GSoC] End of bonding period, start of 'Work Period'

2013-06-17 Thread Abhinandan Prateek


On 17/06/13 2:34 PM, "Sebastien Goasguen"  wrote:

>
>On Jun 17, 2013, at 4:54 AM, Ian Duffy  wrote:
>
>> Hi Sebastian,
>> 
>> I have updated the JIRA case for my project to add more steps so it is
>> like Nguyen's one.
>> 
>> Just after looking through JIRA, results are as follows:
>> 
>> (Resolved) CLOUDSTACK-2287 - Automation:LDAP: Appears to be solved.
>> tsp added tests for it.
>> 
>> (Resolved) CLOUDSTACK-1172 - Ldap enhancements: Marked of by abhi as
>>resolved.
>> 
>> (Blocked) CLOUDSTACK-1540 - LdapRemove within ui: Blocked due to
>> CLOUDSTACK-2168.
>> 
>> (Resolved) CLOUDSTACK-1495 - Change UI field name "bind username"
>> 
>> (Open) CLOUDSTACK-2168 - configured LDAP values not shown in UI.
>> Commenter says its a regression issues. When I tested this my values
>> disappeared and a message "No data to show" was given. Querying the
>> API directly still showed my LDAP configuration.
>> 
>> (Open) CLOUDSTACK-430 - Authentication should support multiple LDAP
>>servers.
>> 
>> (Resolved) CLOUDSTACK-1069 - Workaround for CS and LDAP users to login
>> simultaneously. Marked as unresolved, appears to be resolved based on
>> comments. Appears to no longer be an issue in 4.1 and 4.2 based on
>> comments on CLOUDSTACK-1930.
>> 
>> (Open) CLOUDSTACK-1062 - LDAP integration with ACS user management,
>> detailed at 
>>https://cwiki.apache.org/confluence/display/CLOUDSTACK/ACS+integration+wi
>>th+User+LDAP+base
>> 
>> (Open) Cloudstack-1213 - LDAP SSL auth failed to setup. On first look
>> and reading of the error message It appears to be a user issue with
>> the trustedstore certificate not existing.
>> 
>> (Fixed) Cloudstack-1142 - due to "%" being illegal character. Marked
>> as big fixed.
>> 
>> (Fixed) CLOUDSTACK-1398 - Failed to update job status. Marked fixed.
>> 
>> (Fixed) CLOUDSTACK-1494 - Showing wrong warning messages. Markd as
>>Fixed.
>
>Ian thanks for this,
>
>Where it makes sense, feel free to comment and test.
>
>If some of those open bugs fit into your project plan, make sure to link
>to those jira bugs
>If you think you can fix them, feel free to claim the bugs and assign
>them to you.
>
>-Sebastien
>
>
If some of these bugs do not make sense, do discuss them with me.

-abhi
>




Re: Review Request: Cloudstack-2621 [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C

2013-06-17 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11435/
---

(Updated June 17, 2013, 10:01 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Changes
---

rebased with master.


Description
---

[Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C
https://issues.apache.org/jira/browse/CLOUDSTACK-2621


This addresses bug Cloudstack-2621.


Diffs (updated)
-

  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 111586d 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
db4786a 

Diff: https://reviews.apache.org/r/11435/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: [DISCUSS] Issue with cloudmonkey-4.1.0-0 on pypi

2013-06-17 Thread Prasanna Santhanam
On Sun, Jun 09, 2013 at 10:26:43AM -0400, David Nalley wrote:
> On Sun, Jun 9, 2013 at 7:51 AM, Rohit Yadav  wrote:
> > Hi,
> >
> > I was about to test CloudStack but the cloudmonkey-4.1.0-0 release on pypi
> > does not bundle failsafe api cache so when I install it I don't get any api
> > commands. The autodiscovery using sync is useful but only with the
> > ApiDiscovery plugin which works only for 4.2 and later. For 4.1 and below I
> > think we should, in that case, bundle the cache for all the apis. Or maybe
> > just oss components/plugins?
> >
> > I'll wait for Chip and others to comment if we want to ship it as it is or
> > bundle the cache against 4.1 release?
> >
> > Cheers.
> 
> Honestly - this is exactly why I've been suggesting[1] that we break
> CloudMonkey (and Marvin) out of the main repo and giving it it's own
> lifecycle. It's far easier/faster to iterate cloudmonkey than all of
> CloudStack and tying it to the slower lifecycle of ACS will continue
> to trouble it IMO.
> 
> --David
> 
> [1] http://markmail.org/message/wir5vfawex3y22ot

I haven't given breaking out the project much thought. But it's
certainly a possibility:

a) However, there are parts of the codebase (checkin tests) that depend
on marvin.

b) I need to come up with an easier way to update marvin across
cloudstack providers to enable auto-updating marvin's libraries like
cloudmonkey can. For this I've made a couple of enhancements to
apidiscovery but it's not in master yet and I don't have it fully
figured out.

Need some time to think through this.

-- 
Prasanna.,


Powered by BigRock.com



Re: [DISCUSS} review flow

2013-06-17 Thread Prasanna Santhanam
On Mon, Jun 17, 2013 at 09:07:54AM +, Daan Hoogland wrote:
> H,
> 
> Even though the rebases of both patches were without conflicts for a
> change this morning, I would really like to have them 'shipped'. Of
> course I don't mind doing more work on them as I am taking
> responsibility for the code but I am not looking forward to a
> permanent rebase re-apply job to keep my users satisfied.
> 
> More generally: What is the problem I should be addressing?
> 
> 1.  Is there something in my way of working that obstructs a
> ready acceptance of the code?
> 
> 2.  Is this a common problem with a certain type of issues that
> I happen to have addressed twice now?
> 
> 3.  Is a workflow for closing (accepting/refusing) review
> requests missing?
> On the third possibility: I noticed that review requests dating as
> far back as September 11th (no pun intended) are open for cloudstack. Some
> are not updated for more than 6 months. I don't mind if you guys
> refuse my code, but I don't want open ends like that. Do you have a
> policy or any ideas on that?
> 
> Please note that this is not a complaint about the people who have
> commented on my code. I have appreciated and seriously addressed
> their input.
> 

Hi,

Thanks for your patience. But reviewboard hasn't been receiving much
love, possibly because folks are busy with Collab presentations etc.
And 4.2 is looming around the corner.

This isn't an excuse though. Sometimes there are only a few folks who
understand the full scale of changes to specific parts of cloudstack,
because those critical pieces are poorly documented. In such cases
reviews get delayed.

Usually, reminding the list when your review hasn't received
attention should get someone to take a look at it [1], just as you've
done here.

As for improving the process for speedier reviews, I think it's great if
more eyeballs are laid on patches. It doesn't have to be just
committers taking a look. So feel free to jump in!


Appreciate your thoughts on improving the process!

[1] https://cwiki.apache.org/confluence/x/sSbVAQ

-- 
Prasanna.,


Powered by BigRock.com




Re: jenkins jobs for new docs guide

2013-06-17 Thread Sebastien Goasguen

On Jun 17, 2013, at 5:03 AM, Prasanna Santhanam  wrote:

> On Sun, Jun 16, 2013 at 05:10:09PM -0400, Sebastien Goasguen wrote:
>> Hi,
>> 
>> I have been working on some new doc guides:
>> 
>> -the gsoc one is in master : docs/publican-gsoc-2013.cfg
>> 
>> -in the ACS101 branch under docs/acs101/publican.cfg (tons of new goodies in 
>> there libcloud, jclouds-cli, knife-cs, whir etc)
>> 
>> Hugo mentioned to me that we should have some jenkins jobs to build
>> those, any takers ? ( I no squat about jenkins)
> 
> It's quite easy. Once you are logged in. Go to the view under which
> you create a job. Then click "New Job", then select "copy from
> existing job". In your case you can use the doc jobs that Hugo setup
> for midonet/nicira. Then tweak the configuration shell script to build
> acs101!
> 
> Easy! :)
> 

Indeed:

http://jenkins.cloudstack.org/job/docs-4.3-gsoc-guide/

thanks vogxn for creating an account for me.


> -- 
> Prasanna.,
> 
> 
> Powered by BigRock.com
> 



Review Request: CLOUDSTACK-1047: tracking in logs using job id

2013-06-17 Thread Sanjay Tripathi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11906/
---

Review request for cloudstack, Devdeep Singh, Nitin Mehta, and Sateesh 
Chodapuneedi.


Description
---

CLOUDSTACK-1047: tracking in logs using job id

https://issues.apache.org/jira/browse/CLOUDSTACK-1047


This addresses bug CLOUDSTACK-1047.


Diffs
-

  server/src/com/cloud/async/AsyncJobManagerImpl.java 0101a8a 
  server/src/com/cloud/storage/VolumeManagerImpl.java 4297efb 

Diff: https://reviews.apache.org/r/11906/diff/


Testing
---

Tests:
1. Deploy an Instance.
2. In the Management Server logs, check the async job description; it should be 
something like: job-[ 22 ] = [ 1075d499-03a8-44c3-ac9e-348dc5b32ba1 ]


Thanks,

Sanjay Tripathi
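The general technique behind this kind of change is log4j's nested diagnostic
context; a minimal sketch is below. It is not the actual patch, and the class,
method and variable names are assumptions:

    import org.apache.log4j.Logger;
    import org.apache.log4j.NDC;

    // Hedged sketch of the general log4j NDC technique, not the actual patch:
    // push a job marker before running the async job so every log line emitted
    // while it runs carries the job id, then pop it afterwards.
    public class JobLoggingSketch {
        private static final Logger s_logger = Logger.getLogger(JobLoggingSketch.class);

        public void runJob(long jobId, String jobUuid, Runnable work) {
            NDC.push("job-" + jobId + " = [ " + jobUuid + " ]");
            try {
                s_logger.debug("Executing async job");
                work.run();
            } finally {
                NDC.pop();   // always restore the logging context
            }
        }
    }

With a %x conversion character in the log4j pattern layout, each line logged
during the job then shows the marker, matching the job-[ 22 ] = [ uuid ]
format mentioned in the testing notes.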



Re: committer wanted for review

2013-06-17 Thread John Burwell
All,

Please see my comments in-line below.

Thanks,
-John

On Jun 15, 2013, at 6:11 AM, Hiroaki KAWAI  wrote:

> Probably we've agreed that a double slash should not be
> generated by cloudstack.
> 
> If something went wrong and a double slash was passed to a
> Windows based NFS, the reason may be A) there was other
> code that generates a double slash, B) cloudstack configuration
> or some user input was bad, C) some path components became
> empty strings because of a database error or something unexpected,
> D) cloudstack is really being attacked, etc.

A indicates that we are adding technical debt and later defects to the system.  We 
need to fix upstream for correctness before it rots further.  B sounds like a 
case for stronger input validation rather than a "fix up" on the backend.  C 
seems like we need to be more careful in how we persist and retrieve the 
information from the database.  The more we discuss this solution, the more 
this feels like a front-end input validation and database persistence issue.  
Treating it this way would obviate any security issues or logging needs.

> 
> Anyway, double slash should not happen and the admins should be
> able to know when the NFS layer got that sequence.
> I'd prefer WARN for this reason, but INFO may do as well.
> I don't have strong opinion on log level.


If it shouldn't happen then we should be rejecting the data as part of input 
validation and not allowing it to be persisted.

> 
> In addition to that, "auto-fix" may not be a "fix" for example in
> case "C". I don't want to see autofix code in many places,
> "auto-fix" might be a "fix" where the path is really passed to
> NFS layer.
> 
> Another approach to double-slash is just reject the input and raise
> a CloudstackRuntimeException.
> But I'd prefer auto-fix because of case "A" at this moment…

Originally, I thought this fix was the equivalent of escaping a URL or HTML 
string.  Now that I understand it more fully, I believe we need to throw a 
CloudRuntimeException to ferret out code generating incorrectly formatted 
input.  

> 
> 
> (2013/06/15 18:01), Daan Hoogland wrote:
>> H John,
>> 
>> Yes, actually I was going to make it info level but you swept me off my
>> feet with your remark.
>> 
>> The point is that a mixed posix-paths/UNC system triggered this fix. A
>> double slash has a double meaning in such an environment. However the error,
>> be it human or system generated, does not destabilize cloudstack in any
>> way, so I will stick with the info. It is certainly not debug in my
>> opinion. It is not a bug that needs debugging.
>> 
>> Of course a deeper understanding of cloudstack might change my position on
>> the issue.
>> 
>> regards,
>> Daan
>> 
>> 
>> On Fri, Jun 14, 2013 at 5:58 PM, John Burwell  wrote:
>> 
>>> Daan,
>>> 
>>> Since a WARN indicates a condition that could lead to system instability,
>>> many folks configure their log analysis to trigger notifications on WARN
>>> and INFO.  Does escaping a character in a path meet that criteria?
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 14, 2013, at 11:52 AM, Daan Hoogland 
>>> wrote:
>>> 
 H John,
 
 I browsed through your comments and most I will apply. There is one where
 you contradict Hiroaki. This is about the logging level for reporting a
 changed path. I am going to follow my heart at this unless there is a
 project directive on it.
 
 regards,
 Daan
 
 
 On Fri, Jun 14, 2013 at 5:25 PM, John Burwell 
>>> wrote:
 
> Daan,
> 
> I just looked through the review request, and published my comments.
> 
> Thanks,
> -John
> 
> On Jun 14, 2013, at 10:27 AM, Daan Hoogland 
> wrote:
> 
>> Hiroaki,
>> 
>> - auto-fix may happen where it is really required
>>> 
>> I do not have a clear view on this, so I took the approach of better
>>> safe
>> than sorry. What was submitted is what works. I don't see how the auto-fix
>> should ever be needed if the source is fixed. Hope you can live with
> this.
>> 
>>> - and if auto-fix happens, it should log it with
>>> WARN level.
>> 
>> Applied
>> 
>> 
>> regards,
>> 
>> 
>> On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland <
>>> daan.hoogl...@gmail.com
>> wrote:
>> 
>>> Thanks Hiroaki,
>>> 
>>> On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI <
> ka...@stratosphere.co.jp>wrote:
>>> 
 I'd suggest:
 - fix the generation of double slash itself
 
>>> Is in the patch
>>> 
 - auto-fix may happen where it is really required
 - and if auto-fix happens, it should log it with
 WARN level.
>>> 
>>> Good point, I will up the level in an update.
>>> 
 
 
 
 (2013/06/13 21:15), Daan Hoogland wrote:
 
> H,
> 
> Can someone look at Review Request #11861 org/r/11861/ 
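For reference, the two options being debated in this thread look roughly like
the sketch below. It is not the code under review; it assumes a plain
filesystem path (it deliberately ignores the nfs:// URI and UNC-prefix cases,
where a leading "//" can be intentional), and only CloudRuntimeException and
the log4j Logger are real CloudStack/log4j classes:

    import org.apache.log4j.Logger;
    import com.cloud.utils.exception.CloudRuntimeException;

    // Hedged sketch of the two options discussed above, not the code under
    // review: either reject a path containing "//" outright, or normalize it
    // and log the event so admins can see that it happened.
    public class NfsPathSketch {
        private static final Logger s_logger = Logger.getLogger(NfsPathSketch.class);

        // Option 1: strict input validation -- fail fast so the upstream
        // producer of the malformed path gets fixed.
        public static String validate(String path) {
            if (path.contains("//")) {
                throw new CloudRuntimeException("Malformed NFS path: " + path);
            }
            return path;
        }

        // Option 2: auto-fix plus a log entry so admins can see it happened
        // (WARN versus INFO being the open question in the thread).
        public static String normalize(String path) {
            if (path.contains("//")) {
                s_logger.warn("Collapsing repeated slashes in NFS path: " + path);
                path = path.replaceAll("/{2,}", "/");
            }
            return path;
        }
    }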

Re: Review Request: (CLOUDSTACK-1301) VM Disk I/O Throttling

2013-06-17 Thread Wei Zhou

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11782/
---

(Updated June 17, 2013, 12:03 p.m.)


Review request for cloudstack, Wido den Hollander and John Burwell.


Changes
---

According to John's comments,
(1) change default value of rates from 0 to null.
Thanks, John.


Description
---

The patch for VM Disk I/O throttling based on commit 
3f3c6aa35f64c4129c203d54840524e6aa2c4621


This addresses bug CLOUDSTACK-1301.


Diffs (updated)
-

  api/src/com/cloud/agent/api/to/VolumeTO.java 4cbe82b 
  api/src/com/cloud/offering/DiskOffering.java dd77c70 
  api/src/com/cloud/vm/DiskProfile.java e3a3386 
  api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
  
api/src/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
 aa11599 
  
api/src/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
 4c54a4e 
  api/src/org/apache/cloudstack/api/response/DiskOfferingResponse.java 377e66e 
  api/src/org/apache/cloudstack/api/response/ServiceOfferingResponse.java 
31533f8 
  api/src/org/apache/cloudstack/api/response/VolumeResponse.java 21d7d1a 
  client/WEB-INF/classes/resources/messages.properties 2b17359 
  core/src/com/cloud/agent/api/AttachVolumeCommand.java 302b8f8 
  engine/schema/src/com/cloud/storage/DiskOfferingVO.java 909d7fe 
  
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
 f90edd8 
  
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
 b8645e1 
  
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java 
e91e347 
  server/src/com/cloud/api/query/dao/DiskOfferingJoinDaoImpl.java 283181f 
  server/src/com/cloud/api/query/dao/ServiceOfferingJoinDaoImpl.java 56e4d0a 
  server/src/com/cloud/api/query/dao/VolumeJoinDaoImpl.java e27e2d9 
  server/src/com/cloud/api/query/vo/DiskOfferingJoinVO.java 6d3cdcb 
  server/src/com/cloud/api/query/vo/ServiceOfferingJoinVO.java e87a101 
  server/src/com/cloud/api/query/vo/VolumeJoinVO.java 6ef8c91 
  server/src/com/cloud/configuration/Config.java 5ee0fad 
  server/src/com/cloud/configuration/ConfigurationManager.java 8db037b 
  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 131d340 
  server/src/com/cloud/storage/StorageManager.java d49a7f8 
  server/src/com/cloud/storage/StorageManagerImpl.java d38b35e 
  server/src/com/cloud/storage/VolumeManagerImpl.java 4297efb 
  server/src/com/cloud/test/DatabaseConfig.java 70c8178 
  server/test/com/cloud/vpc/MockConfigurationManagerImpl.java 21b3590 
  setup/db/db/schema-410to420.sql bcfbcc9 
  ui/dictionary.jsp a5f0662 
  ui/scripts/configuration.js cadde8c 

Diff: https://reviews.apache.org/r/11782/diff/


Testing
---

testing ok.


Thanks,

Wei Zhou
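The change note above ("default value of rates from 0 to null") comes down to
using a boxed Long so that "not set" can be told apart from "explicitly zero".
A minimal sketch, with illustrative field names that are not the actual
DiskOffering code:

    // Illustrative only -- field and method names are assumptions, not the
    // DiskOffering code itself. With a primitive long, "no rate configured"
    // is indistinguishable from an explicit 0; a boxed Long can stay null
    // until a rate is really set, which is the point of the change above.
    public class DiskOfferingRateSketch {
        private Long bytesReadRate;   // null == no throttling configured

        public void setBytesReadRate(Long rate) {
            this.bytesReadRate = rate;
        }

        public boolean isReadThrottled() {
            return bytesReadRate != null && bytesReadRate > 0;
        }
    }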



Re: Infra Issues from the IRC meeting (Wed, Jun 12)

2013-06-17 Thread Prasanna Santhanam
On Sat, Jun 15, 2013 at 01:48:57PM -0400, Chip Childers wrote:
> > There's also a couple of issues here -
> > 1. Does everyone know where the tests run?
> Nope
> > 2. Do people know how to spot the failures?
> Nope
> > 3. Do people know how to find the logs for the failures?
> Nope
> > 
> > If the answer is no to all this, I have more documentation on my
> > hands.

I'll have the documentation draft up soon. Thanks for pointing this
out. All the logs show up under the test-matrix(-extended) job on the
cloudstack-qa view. You can drill down from the "Test Result" shown by
jenkins to see the stacktrace of the failure. For the management
server log, it's a little hidden - it goes under the profile
(hypervisor, ms-distro). For now I'm pulling in management server
logs. Will expose the kvm agent debug logs too.

> > Ideally, I'd like those interested in infra activities to form a
> > separate group for cloudstack-infra related topics. The group's focus
> > will be to maintain, test and add to the infrastructure of this
> > project. But that's in the future. Without such a group, building an
> > IaaS cloud is not much fun :)
> 
> +1 - and at least for now, perhaps we start getting more organized
> around this via dev@cs.a.o using [INFRA] tags.

Will start using the tag as a start.

> 
> Some thoughts I have are: I know that some stuff is being put to use for
> the project in Fremont, but I don't know what it is.  I also don't
> know what hardware donations might be helpful for the environment, so
> that perhaps I could help find something.
> 

Since every $company deploys cloudstack a different way, ideally the
environment should be a small mirror of what is used in production by
$company. That environment can be behind a firewall. What is required
is a jenkins slave that can be either hooked in through jnlp or SSH to
the jenkins.buildacloud.org instance. It will be labelled as a test
slave there, and we can utilize it when we need to run tests.  The
auth keys can be shared among those interested to
work towards maintaining that infra.

> In all seriousness, if there is a need, I could take up the question at
> $dayjob to provide some testing resources within one of our labs as
> well.  I actually think this would be easier to do then a "donation" of
> hardware that's not really a "donation" to the ASF.  The question is:
> *what's needed* that we don't have already?
> 

Right - donations are (IIUC) only required if the ASF infra is going to
manage this. But if there's a group of people within the project
managing this infra without it flouting any infra rules, we're good
to go and can get started independently on this.

We have a single dedicated environment that I cycle through the deployment
styles that are often used within Citrix. But obviously others are using
it differently - with perhaps RBD/Ceph, object stores, OVS, Nicira,
etc. These are not tested.

For specifics on setup and internal resources like - NFS, code
repositories, images repositories, pypi mirrors/caches, log gathering
etc - we can start a separate thread if there is interest.

> > 
> > > 17:44:17 [topcloud]: i can't imagine apache wanting bvt to only run 
> > > inside citrix all the time.
> > It doesn't run within Citrix. It runs in a DC in Fremont. There are
> > other environments within Citrix however that run their own tests for
> > their needs - eg: object_store tests, cloudplatform tests, customer
> > related issue tests etc.
> > 
> > /me beginning to think more doc work is on my way :/
> 
> Well, really, the key is for us to all know about which infra is being
> shared for the use of the project.  Stuff that's inside a corp that we
> can't all see isn't worth documenting for the project itself.
> 
But it should be, if the infra is exposing all the troubleshooting tools and
logs needed to fix cloudstack bugs. If it's running custom builds etc, then I
agree it would not be of much use.

-- 
Prasanna.,


Powered by BigRock.com



Re: Review Request: Selenium Headless configuration using PhantomJS

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10846/#review21978
---


Was this committed?  I see a "Ship It" from Edison.

- Chip Childers


On May 2, 2013, 12:08 a.m., Parth Jagirdar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10846/
> ---
> 
> (Updated May 2, 2013, 12:08 a.m.)
> 
> 
> Review request for cloudstack, David Nalley, Chip Childers, and edison su.
> 
> 
> Description
> ---
> 
> Selenium Headless configuration using PhantomJS.
> 
> Fixed Readme Typos, and added an extra field for PhantomJS and How to 
> configure Management Server IP.
> 
> 
> This addresses bug Cloudstack-2282.
> 
> 
> Diffs
> -
> 
>   test/selenium/ReadMe.txt 30b0e0d 
>   test/selenium/lib/initialize.py e8cc49a 
>   test/selenium/smoke/Login_and_Accounts.py c5132d9 
>   test/selenium/smoke/main.py 86bb930 
> 
> Diff: https://reviews.apache.org/r/10846/diff/
> 
> 
> Testing
> ---
> 
> NA.
> 
> 
> Thanks,
> 
> Parth Jagirdar
> 
>



Re: Review Request: Documentation changes for VMware dvSwitch and Nexus dvSwitch

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10366/#review21979
---


Radhika,

Was this committed?  If so, can you please close this review?  If not, what's 
needed to help get it done?

- Chip Childers


On April 10, 2013, 7:05 a.m., Radhika PC wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10366/
> ---
> 
> (Updated April 10, 2013, 7:05 a.m.)
> 
> 
> Review request for cloudstack, David Nalley, Chip Childers, Jessica Tomechak, 
> Pranav Saxena, Sateesh Chodapuneedi, and ilya musayev.
> 
> 
> Description
> ---
> 
> Documentation on Distributed Switches: nexus and dvSwitch.
> Prerequisites part of VMware dvSwitch is still unclear. Please provide 
> necessary suggestions.
> 
> 
> This addresses bug CLOUDSTACK-772.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Book_Info.xml c125ab8 
>   docs/en-US/add-clusters-vsphere.xml 6b2dff2 
>   docs/en-US/images/add-cluster.png 383f375ebedd62d9b294a56f777ed4b8c0d92e10 
>   docs/en-US/images/dvswitch-config.png PRE-CREATION 
>   docs/en-US/images/dvswitchconfig.png PRE-CREATION 
>   docs/en-US/vmware-cluster-config-dvswitch.xml PRE-CREATION 
>   docs/en-US/vmware-install.xml 467e135 
> 
> Diff: https://reviews.apache.org/r/10366/diff/
> 
> 
> Testing
> ---
> 
> Publican builds, patch applies.
> 
> 
> Thanks,
> 
> Radhika PC
> 
>



Review Request: Fix CLOUDSTACK-2168

2013-06-17 Thread Ian Duffy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11907/
---

Review request for cloudstack and Sebastien Goasguen.


Description
---

CLOUDSTACK-2168 detailed a bug that caused the ldap configuration to not appear.

This bug occurred due to the UI expecting a "list" of ldap configurations back 
from the API.

I have modified the API command to return a "list" like format, but since 
cloudstack only currently supports authentication against one ldap server it 
will only return 1 configuration.

Whoever takes up CLOUDSTACK-430 
https://issues.apache.org/jira/browse/CLOUDSTACK-430 - Add support for multiple 
ldap servers will have to expand on this output. 


This addresses bug CLOUDSTACK-2168.


Diffs
-

  api/src/org/apache/cloudstack/api/command/admin/ldap/LDAPConfigCmd.java 
2726f84 

Diff: https://reviews.apache.org/r/11907/diff/


Testing
---

Build the code with the changes, confirmed the API still returned the expected 
results.
Confirmed that the UI showed the configuration.


Thanks,

Ian Duffy
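The "list"-like format described above is the usual CloudStack pattern of
wrapping responses in a ListResponse, roughly as sketched below. This is not
the actual patch; the generic helper and the exact ListResponse API details
are assumptions:

    import java.util.Collections;
    import org.apache.cloudstack.api.BaseResponse;
    import org.apache.cloudstack.api.response.ListResponse;

    // Hedged sketch of the pattern described above, not the actual patch:
    // even a single LDAP configuration is handed back inside a list-shaped
    // response so the UI's list-handling code can consume it. The generic
    // helper and the assumed ListResponse API are illustrative only.
    public class LdapListResponseSketch {
        public static <T extends BaseResponse> ListResponse<T> wrapSingle(T single) {
            ListResponse<T> response = new ListResponse<T>();
            response.setResponses(Collections.singletonList(single));
            return response;
        }
    }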



Re: Review Request: remove dead allocations

2013-06-17 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11324/
---

(Updated June 17, 2013, 1:40 p.m.)


Review request for cloudstack and Devdeep Singh.


Changes
---

devdeep (ping)


Description
---

code allocates a collection, and then immediately overwrites the reference 
variable holding that collection. That original collection is just a dead 
allocation, and is not needed - the patch removes it.


Diffs
-

  
api/src/org/apache/cloudstack/api/command/admin/host/FindHostsForMigrationCmd.java
 b2d77b8 
  api/src/org/apache/cloudstack/api/command/admin/host/ListHostsCmd.java 
69c6980 
  
api/src/org/apache/cloudstack/api/command/admin/region/ListPortableIpRangesCmd.java
 75bcce0 
  api/src/org/apache/cloudstack/api/command/user/iso/ListIsosCmd.java f872c12 
  api/src/org/apache/cloudstack/api/command/user/template/ListTemplatesCmd.java 
f0fc241 

Diff: https://reviews.apache.org/r/11324/diff/


Testing
---


Thanks,

Dave Brosius
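The pattern being removed looks like the sketch below (the DAO and method
names are made up for illustration); the first allocation is dead because the
reference is overwritten before the collection is ever used:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the pattern the review describes, with made-up names. The
    // first allocation is dead: the reference is overwritten immediately,
    // so the list created on the first line is never used.
    public class DeadAllocationSketch {
        interface HostDao {
            List<String> listHosts();
        }

        List<String> before(HostDao dao) {
            List<String> hosts = new ArrayList<String>(); // dead allocation
            hosts = dao.listHosts();                      // overwrites it right away
            return hosts;
        }

        List<String> after(HostDao dao) {
            return dao.listHosts();                       // just use the real result
        }
    }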



Re: committer wanted for review

2013-06-17 Thread Daan Hoogland
John,

If I understand it correctly, you are stating that my take on the solution
is 'not done/not the way to go'?

For the record, the case I solved was an instance of A, but I would not call
it adding technical debt. A arose from existing code in combination with a
requirement to work with a non-POSIX-path-compliant (but UNC) NFS server.

regards,


On Mon, Jun 17, 2013 at 2:01 PM, John Burwell  wrote:

> All,
>
> Please see my comments in-line below.
>
> Thanks,
> -John
>
> On Jun 15, 2013, at 6:11 AM, Hiroaki KAWAI 
> wrote:
>
> > Probably we've agreed on that double slash should not
> > generated by cloudstack.
> >
> > If something went wrong and double slash was passed to
> > Winfows based NFS, the reason may A) there was another
> > code that generates double slash B) cloudstack configuration
> > or something user input was bad C) some path components became
> > empty string because of database error or something unexpeceted
> > D) cloudstack is really being attacked etc.,
>
> A indicates that we adding technical debt and later defects to the system.
>  We need to fix upstream for correctness before it rots further.  B sound
> like a case for stronger input validation rather than a "fix up" on the
> backend.  C seems like we need to be more careful in how we persist and
> retrieve the information from the database.  The more we discuss this
> solution, the more this feels like a front-end input validation and
> database persistence issue.  Treating it this way would obviate any
> security issues or logging needs.
>
> >
> > Anyway, double slash should not happen and the admins should be
> > able to know when the NFS layer got that sequence.
> > I'd prefer WARN for this reason, but INFO may do as well.
> > I don't have strong opinion on log level.
>
>
> If it shouldn't happen then we should be rejecting the data as part of
> input validation and no allowing it to be persisted.
>
> >
> > In addition to that, "auto-fix" may not be a "fix" for example in
> > case "C". I don't want to see autofix code in many places,
> > "auto-fix" might be a "fix" where the path is really passed to
> > NFS layer.
> >
> > Another approach to double-slash is just reject the input and raise
> > a CloudstackRuntimeException.
> > But I'd prefer auto-fix because of case "A" at this moment…
>
> Originally, I thought this fix was the equivalent of escaping a URL or
> HTML string.  Now that I understand it more fully, I believe we need to
> throw a CloudRuntimeException to ferret out code generating incorrectly
> formatted input.
>
> >
> >
> > (2013/06/15 18:01), Daan Hoogland wrote:
> >> H John,
> >>
> >> Yes, actually I was going to make it info level but you swapped me of my
> >> feet with your remark.
> >>
> >> The point is that a mixed posix-paths/UNC system triggered this fix. A
> >> double slash has double meaning in such an environment. However the
> error,
> >> be it human or system generated, does not destabalize cloudstack in any
> >> way, so I will stick with the info. It is certainly not debug in my
> >> opinion. It is not a bug that needs debugging.
> >>
> >> Of course a deeper understanding of cloudstack might change my position
> on
> >> the issue.
> >>
> >> regards,
> >> Daan
> >>
> >>
> >> On Fri, Jun 14, 2013 at 5:58 PM, John Burwell 
> wrote:
> >>
> >>> Daan,
> >>>
> >>> Since a WARN indicates a condition that could lead to system
> instability,
> >>> many folks configure their log analysis to trigger notifications on
> WARN
> >>> and INFO.  Does escaping a character in a path warrant meet that
> criteria?
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Jun 14, 2013, at 11:52 AM, Daan Hoogland 
> >>> wrote:
> >>>
>  H John,
> 
>  I browsed through your comments and most I will apply. There is one
> where
>  you contradict Hiroaki. This is about the logging level for reporting
> a
>  changed path. I am going to follow my heart at this unless there is a
>  project directive on it.
> 
>  regards,
>  Daan
> 
> 
>  On Fri, Jun 14, 2013 at 5:25 PM, John Burwell 
> >>> wrote:
> 
> > Daan,
> >
> > I just looked through the review request, and published my comments.
> >
> > Thanks,
> > -John
> >
> > On Jun 14, 2013, at 10:27 AM, Daan Hoogland  >
> > wrote:
> >
> >> Hiroaki,
> >>
> >> - auto-fix may happen where it is really required
> >>>
> >> I do not have a clear view on this, so I took the approach of better
> >>> safe
> >> then sorry. The submitted is what works. I don't see how the
> auto-fix
> >> should ever be needed if the source is fixed. Hope you can live with
> > this.
> >>
> >>> - and if auto-fix happens, it should log it with
> >>> WARN level.
> >>
> >> Applied
> >>
> >>
> >> regards,
> >>
> >>
> >> On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland <
> >>> daan.hoogl...@gmail.com
> >> wrote:
> >>
> >>> Thanks Hiroaki,
> >>>
> >

Re: Review Request: set rpcProvider field correctly in constructor

2013-06-17 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11325/#review21980
---

Ship it!


3a02942

- Prasanna Santhanam


On May 22, 2013, 6:25 a.m., Dave Brosius wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11325/
> ---
> 
> (Updated May 22, 2013, 6:25 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> The code does
> 
> rpcProvider = rpcProvider; which is a NOP due to the missing 'this.' - the patch 
> adds that.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/src/org/apache/cloudstack/storage/HypervisorHostEndPointRpcServer.java
>  bc21776 
> 
> Diff: https://reviews.apache.org/r/11325/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dave Brosius
> 
>
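A minimal illustration of the bug and the fix, using a placeholder type rather
than the actual HypervisorHostEndPointRpcServer code:

    // Minimal illustration of the bug and the fix. Without "this.", the
    // constructor parameter is assigned to itself and the field silently
    // stays null.
    public class RpcServerSketch {
        private Object rpcProvider;

        // Buggy version (commented out): self-assignment, a no-op.
        // public RpcServerSketch(Object rpcProvider) { rpcProvider = rpcProvider; }

        // Fixed version: qualify the field with "this.".
        public RpcServerSketch(Object rpcProvider) {
            this.rpcProvider = rpcProvider;
        }
    }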



Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread Nils Vogels

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/
---

Review request for cloudstack.


Description
---

This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories with 
4.1 for the 4.1 release.


Diffs
-

  docs/en-US/Release_Notes.xml 2ae8732 
  docs/en-US/configure-package-repository.xml c8ba48f 
  docs/pot/configure-package-repository.pot e915358 

Diff: https://reviews.apache.org/r/11908/diff/


Testing
---

Compiled docs with changes applied


Thanks,

Nils Vogels



Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread Nils Vogels

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/
---

(Updated June 17, 2013, 1:58 p.m.)


Review request for cloudstack.


Description
---

This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories with 
4.1 for the 4.1 release.


This addresses bug CLOUDSTACK-2902.


Diffs
-

  docs/en-US/Release_Notes.xml 2ae8732 
  docs/en-US/configure-package-repository.xml c8ba48f 
  docs/pot/configure-package-repository.pot e915358 

Diff: https://reviews.apache.org/r/11908/diff/


Testing
---

Compiled docs with changes applied


Thanks,

Nils Vogels



Re: Review Request: Fix CLOUDSTACK-2168

2013-06-17 Thread Ian Duffy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11907/
---

(Updated June 17, 2013, 2 p.m.)


Review request for cloudstack and Sebastien Goasguen.


Description
---

CLOUDSTACK-2168 detailed a bug that caused the ldap configuration to not appear.

This bug occurred due to the UI expecting a "list" of ldap configurations back 
from the API.

I have modified the API command to return a "list" like format, but since 
cloudstack only currently supports authentication against one ldap server it 
will only return 1 configuration.

Whoever takes up CLOUDSTACK-430 
https://issues.apache.org/jira/browse/CLOUDSTACK-430 - Add support for multiple 
ldap servers will have to expand on this output. 


This addresses bug CLOUDSTACK-2168.


Diffs (updated)
-

  api/src/org/apache/cloudstack/api/command/admin/ldap/LDAPConfigCmd.java 
2726f84 

Diff: https://reviews.apache.org/r/11907/diff/


Testing
---

Build the code with the changes, confirmed the API still returned the expected 
results.
Confirmed that the UI showed the configuration.


Thanks,

Ian Duffy



Review Request: Automation: (vpc network pf and lb rules) - Corrected code related to cleanup.

2013-06-17 Thread Gaurav Aradhye

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11909/
---

Review request for cloudstack and Prasanna Santhanam.


Description
---

Corrected code related to cleanup of the VPC offering. self.vpc_off was added twice 
instead of vpc_off. That may lead to errors in the cleanup process, eventually 
leading to accounts and VPC offerings not being cleaned up.


Diffs
-

  test/integration/component/test_vpc_network_lbrules.py 66d6c4d 
  test/integration/component/test_vpc_network_pfrules.py 92b04ad 

Diff: https://reviews.apache.org/r/11909/diff/


Testing
---


Thanks,

Gaurav Aradhye



Re: Review Request: Automation: (vpc network pf and lb rules) - Corrected code related to cleanup.

2013-06-17 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11909/#review21981
---

Ship it!


Applied but the test will fail because of a regression in the NetworkACL code. 
Details in CLOUDSTACK-2915

- Prasanna Santhanam


On June 17, 2013, 2:03 p.m., Gaurav Aradhye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11909/
> ---
> 
> (Updated June 17, 2013, 2:03 p.m.)
> 
> 
> Review request for cloudstack and Prasanna Santhanam.
> 
> 
> Description
> ---
> 
> Corrected code related to cleanup of VPC offering. self.vpc_off was added 
> twice instead of vpc_off. That may lead to error in the cleanup process 
> eventually leading to accounts and vpc offerings not being cleaned up.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_vpc_network_lbrules.py 66d6c4d 
>   test/integration/component/test_vpc_network_pfrules.py 92b04ad 
> 
> Diff: https://reviews.apache.org/r/11909/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Gaurav Aradhye
> 
>



[GSOC] apache whirr startup docs

2013-06-17 Thread Sebastien Goasguen
Especially for Meng,

Check out the pdf at:
http://jenkins.cloudstack.org/job/docs-4.3-clients-wrappers-guide/

There are some basic Apache Whirr docs, which I tested on cloudstack.

I also entered a bug on whirr at:
https://issues.apache.org/jira/browse/WHIRR-725

A fix should be in jclouds 1.6.1, and once it gets released the trunk of whirr 
should use jclouds 1.6.1. 
Hopefully this will fix it.

However you should dive into jclouds, understand how it supports cloudstack, 
then see how whirr uses it.

thanks,

-sebastien

Review Request: Fix for CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable dynamic scaling of vm

2013-06-17 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11910/
---

Review request for cloudstack, Abhinandan Prateek and Nitin Mehta.


Description
---

CLOUDSTACK-2987: Ensure XS tools are present in the template in order to enable 
dynamic scaling of a VM.

CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of 
XS tools in the template.
This should also take care of updating the VM after XS tools are installed in 
it, and set memory values accordingly to support dynamic scaling after a 
stop/start of the VM.


This addresses bugs CLOUDSTACK-2987 and CLOUDSTACK-3042.


Diffs
-

  api/src/com/cloud/agent/api/to/VirtualMachineTO.java 46ee01b 
  api/src/com/cloud/template/VirtualMachineTemplate.java cedc793 
  api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
  api/src/org/apache/cloudstack/api/BaseUpdateTemplateOrIsoCmd.java 6fd9773 
  api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java 
284d553 
  
api/src/org/apache/cloudstack/api/command/user/template/RegisterTemplateCmd.java
 c9da0c2 
  api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java 2860283 
  api/src/org/apache/cloudstack/api/response/TemplateResponse.java 896154a 
  api/src/org/apache/cloudstack/api/response/UserVmResponse.java 1f9eb1a 
  core/src/com/cloud/agent/api/ScaleVmCommand.java b361485 
  engine/schema/src/com/cloud/storage/VMTemplateVO.java e643d75 
  engine/schema/src/com/cloud/vm/VMInstanceVO.java fbe03dc 
  
engine/storage/src/org/apache/cloudstack/storage/image/TemplateEntityImpl.java 
4d162bb 
  plugins/hypervisors/xen/src/com/cloud/hypervisor/XenServerGuru.java 8c38a69 
  
plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
 5e8283a 
  
plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/XenServer56FP1Resource.java
 8e37809 
  server/src/com/cloud/api/ApiResponseHelper.java 94c5d6c 
  server/src/com/cloud/api/query/dao/UserVmJoinDaoImpl.java dbfe94d 
  server/src/com/cloud/api/query/vo/UserVmJoinVO.java 8ad0fdd 
  server/src/com/cloud/hypervisor/HypervisorGuruBase.java 1ad9a1f 
  server/src/com/cloud/server/ManagementServerImpl.java 96c72e4 
  server/src/com/cloud/storage/TemplateProfile.java 0b55f1f 
  server/src/com/cloud/template/TemplateAdapter.java 9a2d877 
  server/src/com/cloud/template/TemplateAdapterBase.java 0940d3e 
  server/src/com/cloud/vm/UserVmManagerImpl.java 1c8ab75 
  server/src/com/cloud/vm/VirtualMachineManagerImpl.java f946cd1 
  server/test/com/cloud/vm/VirtualMachineManagerImplTest.java 8715c9e 
  setup/db/db/schema-410to420.sql 272fc42 

Diff: https://reviews.apache.org/r/11910/diff/


Testing
---

Tested locally


Thanks,

Harikrishna Patnala



Re: Review Request: Fix for CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable dynamic scaling of vm

2013-06-17 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11910/#review21982
---


Since XCP shares the same resource (XcpOssResource<-CitrixResourceBase) can the 
command ScaleVmCommand be implemented for XCP too? 



server/src/com/cloud/vm/UserVmManagerImpl.java


Can this be made case insensitive, so comparisons can use 
equalsIgnoreCase?




server/test/com/cloud/vm/VirtualMachineManagerImplTest.java


Can you remove the wildcard import?


- Prasanna Santhanam


On June 17, 2013, 2:44 p.m., Harikrishna Patnala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11910/
> ---
> 
> (Updated June 17, 2013, 2:44 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Nitin Mehta.
> 
> 
> Description
> ---
> 
> CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable 
> dynamic scaling of vm 
> 
> CLOUDSTACK-3042 - handle Scaling up of vm memory/CPU based on the presence of 
> XS tools in the template
> This should also take care of updation of VM after XS tools are installed in 
> the vm and set memory values accordingly to support dynamic scaling after 
> stop start of VM
> 
> 
> This addresses bugs CLOUDSTACK-2987 and CLOUDSTACK-3042.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/VirtualMachineTO.java 46ee01b 
>   api/src/com/cloud/template/VirtualMachineTemplate.java cedc793 
>   api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
>   api/src/org/apache/cloudstack/api/BaseUpdateTemplateOrIsoCmd.java 6fd9773 
>   api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java 
> 284d553 
>   
> api/src/org/apache/cloudstack/api/command/user/template/RegisterTemplateCmd.java
>  c9da0c2 
>   api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java 2860283 
>   api/src/org/apache/cloudstack/api/response/TemplateResponse.java 896154a 
>   api/src/org/apache/cloudstack/api/response/UserVmResponse.java 1f9eb1a 
>   core/src/com/cloud/agent/api/ScaleVmCommand.java b361485 
>   engine/schema/src/com/cloud/storage/VMTemplateVO.java e643d75 
>   engine/schema/src/com/cloud/vm/VMInstanceVO.java fbe03dc 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/TemplateEntityImpl.java
>  4d162bb 
>   plugins/hypervisors/xen/src/com/cloud/hypervisor/XenServerGuru.java 8c38a69 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
>  5e8283a 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/XenServer56FP1Resource.java
>  8e37809 
>   server/src/com/cloud/api/ApiResponseHelper.java 94c5d6c 
>   server/src/com/cloud/api/query/dao/UserVmJoinDaoImpl.java dbfe94d 
>   server/src/com/cloud/api/query/vo/UserVmJoinVO.java 8ad0fdd 
>   server/src/com/cloud/hypervisor/HypervisorGuruBase.java 1ad9a1f 
>   server/src/com/cloud/server/ManagementServerImpl.java 96c72e4 
>   server/src/com/cloud/storage/TemplateProfile.java 0b55f1f 
>   server/src/com/cloud/template/TemplateAdapter.java 9a2d877 
>   server/src/com/cloud/template/TemplateAdapterBase.java 0940d3e 
>   server/src/com/cloud/vm/UserVmManagerImpl.java 1c8ab75 
>   server/src/com/cloud/vm/VirtualMachineManagerImpl.java f946cd1 
>   server/test/com/cloud/vm/VirtualMachineManagerImplTest.java 8715c9e 
>   setup/db/db/schema-410to420.sql 272fc42 
> 
> Diff: https://reviews.apache.org/r/11910/diff/
> 
> 
> Testing
> ---
> 
> Tested locally
> 
> 
> Thanks,
> 
> Harikrishna Patnala
> 
>



[ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Chip Childers
The Project Management Committee (PMC) for Apache CloudStack
has asked Jayapal Reddy Uradi to become a committer and we are 
pleased to announce that they have accepted.

Being a committer allows many contributors to contribute more
autonomously. For developers, it makes it easier to submit changes and
eliminates the need to have contributions reviewed via the patch
submission process. Whether contributions are development-related or
otherwise, it is a recognition of a contributor's participation in the
project and commitment to the project and the Apache Way.

Please join me in congratulating Jayapal!

-chip
on behalf of the CloudStack PMC


RE: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Rajesh Battala
Hearty Congratulations Jayapal Reddy :) 

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Monday, June 17, 2013 9:00 PM
> To: dev@cloudstack.apache.org
> Subject: [ANNOUNCE] New committer: Jayapal Reddy Uradi
> 
> The Project Management Committee (PMC) for Apache CloudStack has asked
> Jayapal Reddy Uradi to become a committer and we are pleased to
> announce that they have accepted.
> 
> Being a committer allows many contributors to contribute more
> autonomously. For developers, it makes it easier to submit changes and
> eliminates the need to have contributions reviewed via the patch
> submission process. Whether contributions are development-related or
> otherwise, it is a recognition of a contributor's participation in the project
> and commitment to the project and the Apache Way.
> 
> Please join me in congratulating Jayapal!
> 
> -chip
> on behalf of the CloudStack PMC


Re: enableStorageMaintenance

2013-06-17 Thread La Motta, David
Along the same lines… is there a REST command coming in 4.2 to quiesce one or 
multiple virtual machines?


David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com



On Jun 14, 2013, at 10:53 AM, "La Motta, David"  wrote:

…works great for putting down the storage into maintenance mode (looking 
forward seeing this for secondary storage as well!).

Now the question is, after I've run it… how do I know when it is done so I can 
operate on the volume?

Poll using updateStoragePool and query the state for "Maintenance"?  What about 
introducing the ability to pass in callback URLs to the REST call?

Thx.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com






Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Prasanna Santhanam
On Mon, Jun 17, 2013 at 11:30:16AM -0400, Chip Childers wrote:
> The Project Management Committee (PMC) for Apache CloudStack
> has asked Jayapal Reddy Uradi to become a committer and we are 
> pleased to announce that they have accepted.
> 
> Being a committer allows many contributors to contribute more
> autonomously. For developers, it makes it easier to submit changes and
> eliminates the need to have contributions reviewed via the patch
> submission process. Whether contributions are development-related or
> otherwise, it is a recognition of a contributor's participation in the
> project and commitment to the project and the Apache Way.
> 
> Please join me in congratulating Jayapal!
> 
> -chip
> on behalf of the CloudStack PMC

Congrats Jayapal!

-- 
Prasanna.,


Powered by BigRock.com



RE: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Saksham Srivastava
Congrats Jayapal.

-Original Message-
From: Chip Childers [mailto:chip.child...@sungard.com] 
Sent: Monday, June 17, 2013 9:00 PM
To: dev@cloudstack.apache.org
Subject: [ANNOUNCE] New committer: Jayapal Reddy Uradi

The Project Management Committee (PMC) for Apache CloudStack has asked Jayapal 
Reddy Uradi to become a committer and we are pleased to announce that they have 
accepted.

Being a committer allows many contributors to contribute more autonomously. For 
developers, it makes it easier to submit changes and eliminates the need to 
have contributions reviewed via the patch submission process. Whether 
contributions are development-related or otherwise, it is a recognition of a 
contributor's participation in the project and commitment to the project and 
the Apache Way.

Please join me in congratulating Jayapal!

-chip
on behalf of the CloudStack PMC


Re: systemvm.iso not updated in packages

2013-06-17 Thread Chip Childers
On Mon, Jun 17, 2013 at 11:05:43AM +0530, Prasanna Santhanam wrote:
> Applied yet another fix for this from Rajesh:
> 
> commit 6d140538c5efc394fda8a4ddc7cb72832470d0b3
> Author: Rajesh Battala 
> Date:   Sat Jun 15 11:21:46 2013 +0530
> 
> CLOUDSTACK-3004: remove duplicate ssvm-check.sh
> 
> ssvm_check.sh remove the duplicate file from consoleproxy and include the
> script from secondary storage folder while packing iso
> 
> Signed-off-by: Prasanna Santhanam 

Should this go into 4.1?



Re: Object based Secondary storage.

2013-06-17 Thread Min Chen
Hi Tom,

Thanks for your testing. Glad to hear that multipart is working fine
using Cloudian. Regarding your questions about the .gz template, that behavior
is as expected: we upload it to S3 in its .gz format. Only when the
template is used and downloaded to primary storage do we use the staging
area to decompress it.
We will look at the bugs you filed and update them accordingly.

-min

On 6/17/13 12:31 AM, "Thomas O'Dowd"  wrote:

>Thanks Min - I filed 3 small issues today. I've a couple more but I want
>to try and repeat them again before I file them and I've no time right
>now. Please let me know if you need any further detail on any of these.
>
>https://issues.apache.org/jira/browse/CLOUDSTACK-3027
>https://issues.apache.org/jira/browse/CLOUDSTACK-3028
>https://issues.apache.org/jira/browse/CLOUDSTACK-3030
>
>An example of the other issues I'm running into are that when I upload
>an .gz template on regular NFS storage, it is automatically decompressed
>for me where as with S3 the template remains as a .gz file. Is this
>correct or not? Also, perhaps related but after successfully uploading
>the template to S3 and then trying to start an instance using it, I can
>select it and go all the way to the last screen where I think the action
>button says launch instance or something and it fails with a resource
>unreachable error. I'll have to dig up the error later and file the bug
>as my machine got rebooted over the weekend.
>
>The multipart upload looks like it is working correctly though and I can
>verify the checksums etc are correct with what they should be.
>
>Tom.
>
>On Fri, 2013-06-14 at 16:55 +, Min Chen wrote:
>> HI Tom,
>> 
>>  You can file JIRA ticket for object_store branch by prefixing your bug
>> with "Object_Store_Refactor" and mentioning that it is using build from
>> object_store. Here is an example bug filed from Sangeetha against
>> object_store branch build:
>> https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
>>  If you use devcloud for testing, you may run into an issue where ssvm
>> cannot access public url when you register a template, so register
>> template will fail. You may have to set up internal web server inside
>> devcloud and post template to be registered there to give a URL that
>> devcloud can access. We mainly used devcloud to run our TestNG
>>automation
>> test earlier, and then switched to real hypervisor for real testing.
>>  Thanks
>>  -min
>> 
>> On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:
>> 
>> >Edison,
>> >
>> >I've got devcloud running along with the object_store branch and I've
>> >finally been able to test a bit today.
>> >
>> >I found some issues (or things that I think are bugs) and would like to
>> >file a few issues. I know where the bug database is and I have an
>> >account but what is the best way to file bugs against this particular
>> >branch? I guess I can select "Future" as the version? What other way
>>are
>> >feature branches usually identified in issues? Perhaps in the subject?
>> >Please let me know the preference.
>> >
>> >Also, can you describe (or point me at a document) what the best way to
>> >test against the object_store branch is? So far I have been doing the
>> >following but I'm not sure it is the best?
>> >
>> > a) setup devcloud.
>> > b) stop any instances on devcloud from previous runs
>> >  xe vm-shutdown --multiple
>> > c) check out and update the object_store branch.
>> > d) clean build as described in devcloud doc (ADIDD for short)
>> > e) deploydb (ADIDD)
>> > f) start management console (ADIDD) and wait for it.
>> > g) deploysvr (ADIDD) in another shell.
>> > h) on devcloud machine use xentop to wait for 2 vms to launch.
>> >(I'm not sure what the nfs vm is used for here??)
>> > i) login on gui -> infra -> secondary and remove nfs secondary storage
>> > j) add s3 secondary storage (using cache of old secondary storage?)
>> >
>> >Then rest of testing starts from here... (and also perhaps in step j)
>> >
>> >Thanks,
>> >
>> >Tom.
>> >-- 
>> >Cloudian KK - http://www.cloudian.com/get-started.html
>> >Fancy 100TB of full featured S3 Storage?
>> >Checkout the Cloudian® Community Edition!
>> >
>> 
>
>-- 
>Cloudian KK - http://www.cloudian.com/get-started.html
>Fancy 100TB of full featured S3 Storage?
>Checkout the Cloudian® Community Edition!
>



Re: [GSOC] A short description about CloudStack Networking plugin

2013-06-17 Thread Chip Childers
On Mon, Jun 17, 2013 at 03:41:58PM +0700, Nguyen Anh Tu wrote:
> Hi all,
> 
> I made an wiki entry about the CloudStack networking design. I think it's
> useful for all network plugins can follow. It's located in my gsoc project
> about improving the native SDN controller. Take a look on it
> 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Add+Xen+and+XCP+support+for+GRE+SDN+controller
> 
> Thanks,
> 
> -- 
> 
> N.g.U.y.e.N.A.n.H.t.U

Good work! Thanks.


Re: Object based Secondary storage.

2013-06-17 Thread John Burwell
Min,

Why are objects being compressed before being sent to S3?

Thanks,
-John

On Jun 17, 2013, at 12:24 PM, Min Chen  wrote:

> Hi Tom,
> 
>   Thanks for your testing. Glad to hear that multipart is working fine by
> using Cloudian. Regarding your questions about .gz template, that behavior
> is as expected. We will upload it to S3 as its .gz format. Only when the
> template is used and downloaded to primary storage, we will use staging
> area to decompress it.
>   We will look at the bugs you filed and update them accordingly.
> 
>   -min
> 
> On 6/17/13 12:31 AM, "Thomas O'Dowd"  wrote:
> 
>> Thanks Min - I filed 3 small issues today. I've a couple more but I want
>> to try and repeat them again before I file them and I've no time right
>> now. Please let me know if you need any further detail on any of these.
>> 
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3027
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3028
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3030
>> 
>> An example of the other issues I'm running into are that when I upload
>> an .gz template on regular NFS storage, it is automatically decompressed
>> for me where as with S3 the template remains as a .gz file. Is this
>> correct or not? Also, perhaps related but after successfully uploading
>> the template to S3 and then trying to start an instance using it, I can
>> select it and go all the way to the last screen where I think the action
>> button says launch instance or something and it fails with a resource
>> unreachable error. I'll have to dig up the error later and file the bug
>> as my machine got rebooted over the weekend.
>> 
>> The multipart upload looks like it is working correctly though and I can
>> verify the checksums etc are correct with what they should be.
>> 
>> Tom.
>> 
>> On Fri, 2013-06-14 at 16:55 +, Min Chen wrote:
>>> HI Tom,
>>> 
>>> You can file JIRA ticket for object_store branch by prefixing your bug
>>> with "Object_Store_Refactor" and mentioning that it is using build from
>>> object_store. Here is an example bug filed from Sangeetha against
>>> object_store branch build:
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
>>> If you use devcloud for testing, you may run into an issue where ssvm
>>> cannot access public url when you register a template, so register
>>> template will fail. You may have to set up internal web server inside
>>> devcloud and post template to be registered there to give a URL that
>>> devcloud can access. We mainly used devcloud to run our TestNG
>>> automation
>>> test earlier, and then switched to real hypervisor for real testing.
>>> Thanks
>>> -min
>>> 
>>> On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:
>>> 
 Edison,
 
 I've got devcloud running along with the object_store branch and I've
 finally been able to test a bit today.
 
 I found some issues (or things that I think are bugs) and would like to
 file a few issues. I know where the bug database is and I have an
 account but what is the best way to file bugs against this particular
 branch? I guess I can select "Future" as the version? What other way
>>> are
 feature branches usually identified in issues? Perhaps in the subject?
 Please let me know the preference.
 
 Also, can you describe (or point me at a document) what the best way to
 test against the object_store branch is? So far I have been doing the
 following but I'm not sure it is the best?
 
 a) setup devcloud.
 b) stop any instances on devcloud from previous runs
 xe vm-shutdown --multiple
 c) check out and update the object_store branch.
 d) clean build as described in devcloud doc (ADIDD for short)
 e) deploydb (ADIDD)
 f) start management console (ADIDD) and wait for it.
 g) deploysvr (ADIDD) in another shell.
 h) on devcloud machine use xentop to wait for 2 vms to launch.
   (I'm not sure what the nfs vm is used for here??)
 i) login on gui -> infra -> secondary and remove nfs secondary storage
 j) add s3 secondary storage (using cache of old secondary storage?)
 
 Then rest of testing starts from here... (and also perhaps in step j)
 
 Thanks,
 
 Tom.
 -- 
 Cloudian KK - http://www.cloudian.com/get-started.html
 Fancy 100TB of full featured S3 Storage?
 Checkout the Cloudian® Community Edition!
 
>>> 
>> 
>> -- 
>> Cloudian KK - http://www.cloudian.com/get-started.html
>> Fancy 100TB of full featured S3 Storage?
>> Checkout the Cloudian® Community Edition!
>> 
> 



Re: Object based Secondary storage.

2013-06-17 Thread Min Chen
John,

Let me clarify: we don't do any extra compression before sending to S3. Only
when the user provides a URL pointing to a compressed template during
registration do we download that template to S3 without decompressing it
afterwards, as we currently do for NFS. If the URL the user provides is not in
a compressed format, we simply send the uncompressed version to S3.

Thanks
-min

On 6/17/13 9:45 AM, "John Burwell"  wrote:

>Min,
>
>Why are objects being compressed before being sent to S3?
>
>Thanks,
>-John
>
>On Jun 17, 2013, at 12:24 PM, Min Chen  wrote:
>
>> Hi Tom,
>> 
>>  Thanks for your testing. Glad to hear that multipart is working fine by
>> using Cloudian. Regarding your questions about .gz template, that
>>behavior
>> is as expected. We will upload it to S3 as its .gz format. Only when the
>> template is used and downloaded to primary storage, we will use staging
>> area to decompress it.
>>  We will look at the bugs you filed and update them accordingly.
>> 
>>  -min
>> 
>> On 6/17/13 12:31 AM, "Thomas O'Dowd"  wrote:
>> 
>>> Thanks Min - I filed 3 small issues today. I've a couple more but I
>>>want
>>> to try and repeat them again before I file them and I've no time right
>>> now. Please let me know if you need any further detail on any of these.
>>> 
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-3027
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-3028
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-3030
>>> 
>>> An example of the other issues I'm running into are that when I upload
>>> an .gz template on regular NFS storage, it is automatically
>>>decompressed
>>> for me where as with S3 the template remains as a .gz file. Is this
>>> correct or not? Also, perhaps related but after successfully uploading
>>> the template to S3 and then trying to start an instance using it, I can
>>> select it and go all the way to the last screen where I think the
>>>action
>>> button says launch instance or something and it fails with a resource
>>> unreachable error. I'll have to dig up the error later and file the bug
>>> as my machine got rebooted over the weekend.
>>> 
>>> The multipart upload looks like it is working correctly though and I
>>>can
>>> verify the checksums etc are correct with what they should be.
>>> 
>>> Tom.
>>> 
>>> On Fri, 2013-06-14 at 16:55 +, Min Chen wrote:
 HI Tom,
 
You can file JIRA ticket for object_store branch by prefixing your
bug
 with "Object_Store_Refactor" and mentioning that it is using build
from
 object_store. Here is an example bug filed from Sangeetha against
 object_store branch build:
 https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
If you use devcloud for testing, you may run into an issue where ssvm
 cannot access public url when you register a template, so register
 template will fail. You may have to set up internal web server inside
 devcloud and post template to be registered there to give a URL that
 devcloud can access. We mainly used devcloud to run our TestNG
 automation
 test earlier, and then switched to real hypervisor for real testing.
Thanks
-min
 
 On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:
 
> Edison,
> 
> I've got devcloud running along with the object_store branch and I've
> finally been able to test a bit today.
> 
> I found some issues (or things that I think are bugs) and would like
>to
> file a few issues. I know where the bug database is and I have an
> account but what is the best way to file bugs against this particular
> branch? I guess I can select "Future" as the version? What other way
 are
> feature branches usually identified in issues? Perhaps in the
>subject?
> Please let me know the preference.
> 
> Also, can you describe (or point me at a document) what the best way
>to
> test against the object_store branch is? So far I have been doing the
> following but I'm not sure it is the best?
> 
> a) setup devcloud.
> b) stop any instances on devcloud from previous runs
> xe vm-shutdown --multiple
> c) check out and update the object_store branch.
> d) clean build as described in devcloud doc (ADIDD for short)
> e) deploydb (ADIDD)
> f) start management console (ADIDD) and wait for it.
> g) deploysvr (ADIDD) in another shell.
> h) on devcloud machine use xentop to wait for 2 vms to launch.
>   (I'm not sure what the nfs vm is used for here??)
> i) login on gui -> infra -> secondary and remove nfs secondary
>storage
> j) add s3 secondary storage (using cache of old secondary storage?)
> 
> Then rest of testing starts from here... (and also perhaps in step j)
> 
> Thanks,
> 
> Tom.
> -- 
> Cloudian KK - http://www.cloudian.com/get-started.html
> Fancy 100TB of full featured S3 Storage?
> Checkout

Re: committer wanted for review

2013-06-17 Thread John Burwell
Daan,

Please see my comments in-line below.

Thanks,
-John

On Jun 17, 2013, at 9:40 AM, Daan Hoogland  wrote:

> John,
> 
> If I understand it correctly, you are stating that my take on the solution
> is 'not done/not the way to go'?

> 
> For the record the case I solved was an instance of A, but I would not call
> it adding technical debt. A arose from existing code in combination of a
> requirement to work with a non-posix-path compliant (but unc) nfs server.

From my perspective, it is technical debt because the solution, as implemented, 
is masking/compensating for underlying defects.  I think we should fix the 
underlying defects (input validation and value persistence) rather than trying 
to compensate for them in the storage layer.  We also likely need some kind of 
utility or upgrade tooling to identify invalid path data in 
existing installations for correction.

> 
> regards,
> 
> 
> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell  wrote:
> 
>> All,
>> 
>> Please see my comments in-line below.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 15, 2013, at 6:11 AM, Hiroaki KAWAI 
>> wrote:
>> 
>>> Probably we've agreed on that double slash should not
>>> generated by cloudstack.
>>> 
>>> If something went wrong and double slash was passed to
>>> Winfows based NFS, the reason may A) there was another
>>> code that generates double slash B) cloudstack configuration
>>> or something user input was bad C) some path components became
>>> empty string because of database error or something unexpeceted
>>> D) cloudstack is really being attacked etc.,
>> 
>> A indicates that we adding technical debt and later defects to the system.
>> We need to fix upstream for correctness before it rots further.  B sound
>> like a case for stronger input validation rather than a "fix up" on the
>> backend.  C seems like we need to be more careful in how we persist and
>> retrieve the information from the database.  The more we discuss this
>> solution, the more this feels like a front-end input validation and
>> database persistence issue.  Treating it this way would obviate any
>> security issues or logging needs.
>> 
>>> 
>>> Anyway, double slash should not happen and the admins should be
>>> able to know when the NFS layer got that sequence.
>>> I'd prefer WARN for this reason, but INFO may do as well.
>>> I don't have strong opinion on log level.
>> 
>> 
>> If it shouldn't happen then we should be rejecting the data as part of
>> input validation and no allowing it to be persisted.
>> 
>>> 
>>> In addition to that, "auto-fix" may not be a "fix" for example in
>>> case "C". I don't want to see autofix code in many places,
>>> "auto-fix" might be a "fix" where the path is really passed to
>>> NFS layer.
>>> 
>>> Another approach to double-slash is just reject the input and raise
>>> a CloudstackRuntimeException.
>>> But I'd prefer auto-fix because of case "A" at this moment…
>> 
>> Originally, I thought this fix was the equivalent of escaping a URL or
>> HTML string.  Now that I understand it more fully, I believe we need to
>> throw a CloudRuntimeException to ferret out code generating incorrectly
>> formatted input.
>> 
>>> 
>>> 
>>> (2013/06/15 18:01), Daan Hoogland wrote:
 H John,
 
 Yes, actually I was going to make it info level but you swapped me of my
 feet with your remark.
 
 The point is that a mixed posix-paths/UNC system triggered this fix. A
 double slash has double meaning in such an environment. However the
>> error,
 be it human or system generated, does not destabalize cloudstack in any
 way, so I will stick with the info. It is certainly not debug in my
 opinion. It is not a bug that needs debugging.
 
 Of course a deeper understanding of cloudstack might change my position
>> on
 the issue.
 
 regards,
 Daan
 
 
 On Fri, Jun 14, 2013 at 5:58 PM, John Burwell 
>> wrote:
 
> Daan,
> 
> Since a WARN indicates a condition that could lead to system
>> instability,
> many folks configure their log analysis to trigger notifications on
>> WARN
> and INFO.  Does escaping a character in a path warrant meet that
>> criteria?
> 
> Thanks,
> -John
> 
> On Jun 14, 2013, at 11:52 AM, Daan Hoogland 
> wrote:
> 
>> H John,
>> 
>> I browsed through your comments and most I will apply. There is one
>> where
>> you contradict Hiroaki. This is about the logging level for reporting
>> a
>> changed path. I am going to follow my heart at this unless there is a
>> project directive on it.
>> 
>> regards,
>> Daan
>> 
>> 
>> On Fri, Jun 14, 2013 at 5:25 PM, John Burwell 
> wrote:
>> 
>>> Daan,
>>> 
>>> I just looked through the review request, and published my comments.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 14, 2013, at 10:27 AM, Daan Hoogland >> 
>>> wrote:
>>> 
 Hiroaki,
 
 -

Re: systemvm.iso not updated in packages

2013-06-17 Thread Prasanna Santhanam
On Mon, Jun 17, 2013 at 12:08:54PM -0400, Chip Childers wrote:
> On Mon, Jun 17, 2013 at 11:05:43AM +0530, Prasanna Santhanam wrote:
> > Applied yet another fix for this from Rajesh:
> > 
> > commit 6d140538c5efc394fda8a4ddc7cb72832470d0b3
> > Author: Rajesh Battala 
> > Date:   Sat Jun 15 11:21:46 2013 +0530
> > 
> > CLOUDSTACK-3004: remove duplicate ssvm-check.sh
> > 
> > ssvm_check.sh remove the duplicate file from consoleproxy and include 
> > the
> > script from secondary storage folder while packing iso
> > 
> > Signed-off-by: Prasanna Santhanam 
> 
> Should this go into 4.1?

Yes it should; I put in 4.1.1 as the fix version for that bug. The test
has run, but it doesn't seem to have fixed the issue in ssvm-check.sh.
Will take a look tomorrow after a couple more runs.

-- 
Prasanna.,


Powered by BigRock.com



RE: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Koushik Das
Congrats Jayapal

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Monday, June 17, 2013 9:02 PM
> To: dev@cloudstack.apache.org
> Subject: [ANNOUNCE] New committer: Jayapal Reddy Uradi
> 
> The Project Management Committee (PMC) for Apache CloudStack has
> asked Jayapal Reddy Uradi to become a committer and we are pleased to
> announce that they have accepted.
> 
> Being a committer allows many contributors to contribute more
> autonomously. For developers, it makes it easier to submit changes and
> eliminates the need to have contributions reviewed via the patch submission
> process. Whether contributions are development-related or otherwise, it is a
> recognition of a contributor's participation in the project and commitment to
> the project and the Apache Way.
> 
> Please join me in congratulating Jayapal!
> 
> -chip
> on behalf of the CloudStack PMC


Re: Object based Secondary storage.

2013-06-17 Thread John Burwell
Min,

Cool.  I just wanted to make sure we weren't compressing the template and 
template.properties …

Thanks for the clarification,
-John

On Jun 17, 2013, at 12:49 PM, Min Chen  wrote:

> John,
> 
>   Let me clarify, we didn't do extra compression before sending to S3. 
> Only
> when user provides a URL pointing to a compressed template during
> registering, we will just download that template to S3 without
> decompressing it afterwards as we did for NFS currently. If the register
> url provided user is not compressed format, we will just send uncompressed
> version to S3.
> 
>   Thanks
>   -min
> 
> On 6/17/13 9:45 AM, "John Burwell"  wrote:
> 
>> Min,
>> 
>> Why are objects being compressed before being sent to S3?
>> 
>> Thanks,
>> -John
>> 
>> On Jun 17, 2013, at 12:24 PM, Min Chen  wrote:
>> 
>>> Hi Tom,
>>> 
>>> Thanks for your testing. Glad to hear that multipart is working fine by
>>> using Cloudian. Regarding your questions about .gz template, that
>>> behavior
>>> is as expected. We will upload it to S3 as its .gz format. Only when the
>>> template is used and downloaded to primary storage, we will use staging
>>> area to decompress it.
>>> We will look at the bugs you filed and update them accordingly.
>>> 
>>> -min
>>> 
>>> On 6/17/13 12:31 AM, "Thomas O'Dowd"  wrote:
>>> 
 Thanks Min - I filed 3 small issues today. I've a couple more but I
 want
 to try and repeat them again before I file them and I've no time right
 now. Please let me know if you need any further detail on any of these.
 
 https://issues.apache.org/jira/browse/CLOUDSTACK-3027
 https://issues.apache.org/jira/browse/CLOUDSTACK-3028
 https://issues.apache.org/jira/browse/CLOUDSTACK-3030
 
 An example of the other issues I'm running into are that when I upload
 an .gz template on regular NFS storage, it is automatically
 decompressed
 for me where as with S3 the template remains as a .gz file. Is this
 correct or not? Also, perhaps related but after successfully uploading
 the template to S3 and then trying to start an instance using it, I can
 select it and go all the way to the last screen where I think the
 action
 button says launch instance or something and it fails with a resource
 unreachable error. I'll have to dig up the error later and file the bug
 as my machine got rebooted over the weekend.
 
 The multipart upload looks like it is working correctly though and I
 can
 verify the checksums etc are correct with what they should be.
 
 Tom.
 
 On Fri, 2013-06-14 at 16:55 +, Min Chen wrote:
> HI Tom,
> 
>   You can file JIRA ticket for object_store branch by prefixing your
> bug
> with "Object_Store_Refactor" and mentioning that it is using build
> from
> object_store. Here is an example bug filed from Sangeetha against
> object_store branch build:
> https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
>   If you use devcloud for testing, you may run into an issue where ssvm
> cannot access public url when you register a template, so register
> template will fail. You may have to set up internal web server inside
> devcloud and post template to be registered there to give a URL that
> devcloud can access. We mainly used devcloud to run our TestNG
> automation
> test earlier, and then switched to real hypervisor for real testing.
>   Thanks
>   -min
> 
> On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:
> 
>> Edison,
>> 
>> I've got devcloud running along with the object_store branch and I've
>> finally been able to test a bit today.
>> 
>> I found some issues (or things that I think are bugs) and would like
>> to
>> file a few issues. I know where the bug database is and I have an
>> account but what is the best way to file bugs against this particular
>> branch? I guess I can select "Future" as the version? What other way
> are
>> feature branches usually identified in issues? Perhaps in the
>> subject?
>> Please let me know the preference.
>> 
>> Also, can you describe (or point me at a document) what the best way
>> to
>> test against the object_store branch is? So far I have been doing the
>> following but I'm not sure it is the best?
>> 
>> a) setup devcloud.
>> b) stop any instances on devcloud from previous runs
>>xe vm-shutdown --multiple
>> c) check out and update the object_store branch.
>> d) clean build as described in devcloud doc (ADIDD for short)
>> e) deploydb (ADIDD)
>> f) start management console (ADIDD) and wait for it.
>> g) deploysvr (ADIDD) in another shell.
>> h) on devcloud machine use xentop to wait for 2 vms to launch.
>>  (I'm not sure what the nfs vm is used for here??)
>> i) login on gui -> infra -> secondary and remove nfs secondary
>> storage

Re: systemvm.iso not updated in packages

2013-06-17 Thread Chip Childers
On Mon, Jun 17, 2013 at 10:22:44PM +0530, Prasanna Santhanam wrote:
> On Mon, Jun 17, 2013 at 12:08:54PM -0400, Chip Childers wrote:
> > On Mon, Jun 17, 2013 at 11:05:43AM +0530, Prasanna Santhanam wrote:
> > > Applied yet another fix for this from Rajesh:
> > > 
> > > commit 6d140538c5efc394fda8a4ddc7cb72832470d0b3
> > > Author: Rajesh Battala 
> > > Date:   Sat Jun 15 11:21:46 2013 +0530
> > > 
> > > CLOUDSTACK-3004: remove duplicate ssvm-check.sh
> > > 
> > > ssvm_check.sh remove the duplicate file from consoleproxy and include 
> > > the
> > > script from secondary storage folder while packing iso
> > > 
> > > Signed-off-by: Prasanna Santhanam 
> > 
> > Should this go into 4.1?
> 
> Yes it should, I put in 4.1.1 as a fix version for that bug - the test
> has run but it doesn't seem to have fixed the issue in ssvm-check.sh.
> Will take a look tomorrow after couple more runs.

OK - feel free to cherry-pick appropriate commits into the 4.1 branch as
you see fit for this bug fix.

Thanks!


Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Abhinandan Prateek
Congrats Jayapal ! Well deserved.

On 17/06/13 9:00 PM, "Chip Childers"  wrote:

>The Project Management Committee (PMC) for Apache CloudStack
>has asked Jayapal Reddy Uradi to become a committer and we are
>pleased to announce that they have accepted.
>
>Being a committer allows many contributors to contribute more
>autonomously. For developers, it makes it easier to submit changes and
>eliminates the need to have contributions reviewed via the patch
>submission process. Whether contributions are development-related or
>otherwise, it is a recognition of a contributor's participation in the
>project and commitment to the project and the Apache Way.
>
>Please join me in congratulating Jayapal!
>
>-chip
>on behalf of the CloudStack PMC




Re: [MERGE] Merge VMSync improvement branch into master

2013-06-17 Thread Chip Childers
On Mon, Jun 17, 2013 at 04:59:00PM +, Kelven Yang wrote:
> I'd like to kick off the official merge process. We will start the merge
> process after the branch has passed necessary tests
> 
> Kelven

Can you share what testing is being run against the branch?


Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/#review21985
---


Commit 76d3c27bf4c0ab3690840e56ca162935cea91d48 in branch refs/heads/master 
from Nils
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=76d3c27 ]

CLOUDSTACK-2902: Fixing references to 4.1 repository for this release


- ASF Subversion and Git Services


On June 17, 2013, 1:58 p.m., Nils Vogels wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11908/
> ---
> 
> (Updated June 17, 2013, 1:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories to 
> 4.1 for the 4.1 release
> 
> 
> This addresses bug CLOUDSTACK-2902.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Release_Notes.xml 2ae8732 
>   docs/en-US/configure-package-repository.xml c8ba48f 
>   docs/pot/configure-package-repository.pot e915358 
> 
> Diff: https://reviews.apache.org/r/11908/diff/
> 
> 
> Testing
> ---
> 
> Compiled docs with changes applied
> 
> 
> Thanks,
> 
> Nils Vogels
> 
>



[MERGE] Merge VMSync improvement branch into master

2013-06-17 Thread Kelven Yang
I'd like to kick off the official merge process. We will start the merge
process after the branch has passed the necessary tests.

Kelven

On 6/10/13 2:51 PM, "Kelven Yang"  wrote:

>Hi there,
>
>Alex Huang and I are targeting to finish the debugging process on VMsync
>improvement by the end of this week. I'd like to encourage those who are
>interested or having concerns about this  project to review it as soon as
>possible on branch vmsync. We are going to propose the official merge
>request soon after it has passed our internal test.
>
>Some details have been posted to wiki a while ago at
>https://cwiki.apache.org/confluence/display/CLOUDSTACK/FS+-+VMSync+improve
>ment.  Here is a summary about this change.
>
>
>  1.  We changed the underlying VM state sync modeling.
>
>For those who are familiar with the old Microsoft COM model, they may
>recall "Free threading model" and "Thread-apartment model",  VM state
>sync modeling change is similar to switching from free-threading to
>thread-apartment modeling.  Previously, VM state changes are reported and
>processed in management server in a "free-threading" fashion, regardless
>whether or not there is active process with the subject VM, the state
>sync process is always executed in place.  This approach has issues with
>the concurrency complexity by nature, since all sync-process has been
>concentrated into one place and caused complex code logic that is hard to
>change and maintain.
>
>A major modeling shift is introduced in this change, we now switch to an
>approach which we can call it "job-apartment" model, comparable to
>Microsoft's COM "thread-apartment model", that is, making the sync logic
>within the process context and de-centralize it across the board.  This
>approach can simplify VM state sync logic individually and leave the
>complexity to underlying framework, which in the future, the framework
>can be optimized separately without affecting business layer (separating
>of concerns at architecture level)
>
>2. De-couple hypervisor resource agent from managing VM state in Cloud
>layer
>
>We also changed the way on how resource agent is involved in the overall
>VM state sync process. Previously, resource agent needs to participate VM
>state management in the Cloud layer closely, this requirement is removed
>and resource agent is no longer required to help maintain "delta" state
>in the overall VM state management, all it needs is to report what it
>knows about the VM state at virtualization layer, leaving all the
>handling to CloudStack management server.
>
>The reason for this change is to simplify the architecture between agent
>resource and management server, de-coupling in this way can lower the
>requirement for developers to write a new hypervisor resource agent and
>also give room for management server developers to optimize sync logic
>independently. (Again, separating of concerns at architecture level)
>
>
>3. Job framework has been improved
>
>To make the proposal possible, job framework has been refactored to
>support more explicit management of jobs,  job joining, wake-up
>scheduling and serializing job execution has been added together with a
>topic-based message bus facility.
>
>4. Compile-time strong typing of Java generic usage in
>VirtualMachineManagerImpl
>
>Job scheduling change require more flexible run-time handling, however,
>previously VirtualMachineManagerImpl has a heavy-weight usage of Java
>generic to take advantage of compile-time strong typing provided by Java,
>this has brought some troubles with object serialization the occurs
>between boundaries of "job-apartments", VirtualMachineManagerImpl has
>been refactored because of that.
>
>Flames and Comment? all are welcome.
>
>Kelven
>



Query String Request Authentication(QSRA) support by S3 providers

2013-06-17 Thread Min Chen
Tom filed a very good bug about the ACL change made on the S3 object when users issue 
the extractTemplate API (https://issues.apache.org/jira/browse/CLOUDSTACK-3030), 
and his recommendation of using the Query String Request Authentication (QSRA) 
alternative sounds like the right approach to fix this bug. Before implementing 
it, I would like to confirm whether QSRA is supported by all S3 providers that 
claim to be AWS S3 compatible. If so, we will make this assumption 
in our code. According to Tom, Cloudian supports it. How about RiakCS, John?

Thanks
-min
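
For reference, a minimal sketch of QSRA, assuming the AWS SDK for Java v1; the 
bucket name, object key, endpoint, and credentials below are placeholders, and 
this is not CloudStack's actual extractTemplate code path. generatePresignedUrl() 
returns a time-limited GET URL, so the object ACL can stay private instead of 
being switched to public-read:

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUrlSketch {
    public static void main(String[] args) {
        // Placeholder credentials and endpoint for any S3-compatible store.
        AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        s3.setEndpoint("https://s3.example.com");

        // The signed URL stays valid for one hour, then the download link expires.
        Date expiration = new Date(System.currentTimeMillis() + 60L * 60L * 1000L);
        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(
                "templates-bucket", "template/2/200/template.vhd.gz", HttpMethod.GET)
                .withExpiration(expiration);

        // This URL is what extractTemplate could hand back, with the object ACL left private.
        URL url = s3.generatePresignedUrl(request);
        System.out.println(url);
    }
}

The mechanism is part of the standard S3 REST authentication scheme, but whether 
every provider that claims S3 compatibility honors it is exactly the question above.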



RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
I think it's due to this 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
There are zone-wide storages that may only work with one particular hypervisor. For 
example, a data store created on vCenter can be shared by all the clusters in 
a DC, but only for VMware. And CloudStack supports multiple hypervisors in one 
zone, so we somehow need a way to tell the management server, for a particular 
zone-wide storage, which hypervisors it can work with.
You can treat the hypervisor type on the storage pool as another tag that helps the 
storage pool allocator find a proper storage pool. But it seems hypervisor type 
is not enough for your case, as your storage pool can work with both 
VMware and XenServer, just not with other hypervisors (that is a limitation of your 
current code's implementation, not of the storage itself). 
So I'd think you need to extend ZoneWideStoragePoolAllocator, maybe with a new 
allocator called solidfirezonewidestoragepoolAllocator, and replace the 
following line in applicationContext.xml:
  <bean class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator" />
with your solidfirezonewidestoragepoolAllocator.
It also means that, for each CloudStack management server deployment, the admin needs to 
configure applicationContext.xml for their needs.
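
To make that concrete, here is a purely illustrative sketch (hypothetical class and 
type names, not CloudStack's actual StoragePoolAllocator interface) of the kind of 
filtering such a SolidFire-specific zone-wide allocator would do: accept any pool 
whose backing storage supports the requesting VM's hypervisor, rather than only 
pools bound to exactly one hypervisor type.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class SolidFireZoneWideAllocatorSketch {

    enum Hypervisor { XenServer, VMware, KVM }

    // Hypothetical stand-in for a zone-wide primary storage pool record.
    static class ZoneWidePool {
        final String name;
        final Set<Hypervisor> supportedHypervisors;

        ZoneWidePool(String name, Set<Hypervisor> supportedHypervisors) {
            this.name = name;
            this.supportedHypervisors = supportedHypervisors;
        }
    }

    // Keep any pool whose storage can serve the requesting VM's hypervisor,
    // instead of requiring the pool to be tied to a single hypervisor type.
    static List<ZoneWidePool> select(List<ZoneWidePool> candidates, Hypervisor vmHypervisor) {
        List<ZoneWidePool> suitable = new ArrayList<ZoneWidePool>();
        for (ZoneWidePool pool : candidates) {
            if (pool.supportedHypervisors.contains(vmHypervisor)) {
                suitable.add(pool);
            }
        }
        return suitable;
    }

    public static void main(String[] args) {
        ZoneWidePool sanPool = new ZoneWidePool("solidfire-zone-pool",
                EnumSet.of(Hypervisor.XenServer, Hypervisor.VMware));
        // A VMware or XenServer deployment finds the pool; a KVM deployment would not.
        System.out.println(select(Arrays.asList(sanPool), Hypervisor.VMware).size()); // prints 1
    }
}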

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Saturday, June 15, 2013 11:34 AM
> To: dev@cloudstack.apache.org
> Subject: Hypervisor Host Type Required at Zone Level for Primary Storage?
> 
> Hi,
> 
> I recently updated my local repo and noticed that we now require a
> hypervisor type to be associated with zone-wide primary storage.
> 
> I was wondering what the motivation for this might be?
> 
> In my case, my zone-wide primary storage represents a SAN. Volumes are
> carved out of the SAN as needed and can currently be utilized on both Xen
> and VMware (although, of course, once you've used a given volume on one
> hypervisor type or the other, you can only continue to use it with that
> hypervisor type).
> 
> I guess the point being my primary storage can be associated with more than
> one hypervisor type because of its dynamic nature.
> 
> Can someone fill me in on the reasons behind this recent change and
> recommendations on how I should proceed here?
> 
> Thanks!
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*


Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
Hi Edison,

I haven't looked into this much, so maybe what I suggest here won't make
sense, but here goes:

What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not be the
name of the enumeration... I forget)? The ZoneWideStoragePoolAllocator could
use this to be less choosy about whether a storage pool qualifies to be used.

Does that make any sense?

Thanks!


On Mon, Jun 17, 2013 at 11:28 AM, Edison Su  wrote:

> I think it's due to this
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
> There are zone-wide storages, may only work with one particular
> hypervisor. For example, the data store created on VCenter can be shared by
> all the clusters in a DC, but only for vmware. And, CloudStack supports
> multiple hypervisors in one Zone, so, somehow, need a way to tell mgt
> server, for a particular zone-wide storage, which can only work with
> certain hypervisors.
> You can treat hypervisor type on the storage pool, is another tag, to help
> storage pool allocator to find proper storage pool. But seems hypervisor
> type is not enough for your case, as your storage pool can work with both
> vmware/xenserver, but not for other hypervisors(that's your current code's
> implementation limitation, not your storage itself can't do that).
> So I'd think you need to extend ZoneWideStoragePoolAllocator, maybe, a new
> allocator called: solidfirezonewidestoragepoolAllocator. And, replace the
> following line in applicationContext.xml:
>class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator"
> />
> With your solidfirezonewidestoragepoolAllocator
> It also means, for each CloudStack mgt server deployment, admin needs to
> configure applicationContext.xml for their needs.
>
> > -Original Message-
> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > Sent: Saturday, June 15, 2013 11:34 AM
> > To: dev@cloudstack.apache.org
> > Subject: Hypervisor Host Type Required at Zone Level for Primary Storage?
> >
> > Hi,
> >
> > I recently updated my local repo and noticed that we now require a
> > hypervisor type to be associated with zone-wide primary storage.
> >
> > I was wondering what the motivation for this might be?
> >
> > In my case, my zone-wide primary storage represents a SAN. Volumes are
> > carved out of the SAN as needed and can currently be utilized on both Xen
> > and VMware (although, of course, once you've used a given volume on one
> > hypervisor type or the other, you can only continue to use it with that
> > hypervisor type).
> >
> > I guess the point being my primary storage can be associated with more
> than
> > one hypervisor type because of its dynamic nature.
> >
> > Can someone fill me in on the reasons behind this recent change and
> > recommendations on how I should proceed here?
> >
> > Thanks!
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *(tm)*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: [MERGE] Merge VMSync improvement branch into master

2013-06-17 Thread Kelven Yang
Low-level classes were tested in unit tests (MessageBus, job framework, job
dispatchers, etc.). Interface-layer changes are guarded by matching the
old semantics, but those changes are tested manually; we are planning to put
this part of the testing through the BVT system after we have rebased onto the
latest master.

Kelven 

On 6/17/13 10:01 AM, "Chip Childers"  wrote:

>On Mon, Jun 17, 2013 at 04:59:00PM +, Kelven Yang wrote:
>> I'd like to kick off the official merge process. We will start the merge
>> process after the branch has passed necessary tests
>> 
>> Kelven
>
>Can you share what testing is being run against the branch?



Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/#review21988
---


Commit c1bb2a561b9a445241e02402232a29d75f612fde in branch refs/heads/4.1 from 
Chip Childers
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=c1bb2a5 ]

CLOUDSTACK-2902: Updating repository refs


- ASF Subversion and Git Services


On June 17, 2013, 1:58 p.m., Nils Vogels wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11908/
> ---
> 
> (Updated June 17, 2013, 1:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories to 
> 4.1 for the 4.1 release
> 
> 
> This addresses bug CLOUDSTACK-2902.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Release_Notes.xml 2ae8732 
>   docs/en-US/configure-package-repository.xml c8ba48f 
>   docs/pot/configure-package-repository.pot e915358 
> 
> Diff: https://reviews.apache.org/r/11908/diff/
> 
> 
> Testing
> ---
> 
> Compiled docs with changes applied
> 
> 
> Thanks,
> 
> Nils Vogels
> 
>



Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/#review21989
---


Commit 9ef366e4b3d7e237503d7dd6a1f4b7af4b74b445 in branch refs/heads/master 
from Chip Childers
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9ef366e ]

CLOUDSTACK-2902: Updating repository refs

Conflicts:

docs/zh-TW/configure-package-repository.po


- ASF Subversion and Git Services


On June 17, 2013, 1:58 p.m., Nils Vogels wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11908/
> ---
> 
> (Updated June 17, 2013, 1:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories to 
> 4.1 for the 4.1 release
> 
> 
> This addresses bug CLOUDSTACK-2902.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Release_Notes.xml 2ae8732 
>   docs/en-US/configure-package-repository.xml c8ba48f 
>   docs/pot/configure-package-repository.pot e915358 
> 
> Diff: https://reviews.apache.org/r/11908/diff/
> 
> 
> Testing
> ---
> 
> Compiled docs with changes applied
> 
> 
> Thanks,
> 
> Nils Vogels
> 
>



Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/#review21990
---


Commit 25726277975fb9e39ade7c08d680a93d33dc16b2 in branch refs/heads/master 
from Chip Childers
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2572627 ]

Revert "CLOUDSTACK-2902: Fixing references to 4.1 repository for this release"

This reverts commit 76d3c27bf4c0ab3690840e56ca162935cea91d48.


- ASF Subversion and Git Services


On June 17, 2013, 1:58 p.m., Nils Vogels wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11908/
> ---
> 
> (Updated June 17, 2013, 1:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories to 
> 4.1 for the 4.1 release
> 
> 
> This addresses bug CLOUDSTACK-2902.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Release_Notes.xml 2ae8732 
>   docs/en-US/configure-package-repository.xml c8ba48f 
>   docs/pot/configure-package-repository.pot e915358 
> 
> Diff: https://reviews.apache.org/r/11908/diff/
> 
> 
> Testing
> ---
> 
> Compiled docs with changes applied
> 
> 
> Thanks,
> 
> Nils Vogels
> 
>



Re: Review Request: CLOUDSTACK-2902: Updating repository refs

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11908/#review21991
---


Thank you for submitting this patch!

I've actually done the changes via edits locally, excluding some of the issues 
listed below.  I wanted to get back to you on these, so you knew what that text 
was for.

Master and 4.1 branches should be correct now.


docs/en-US/Release_Notes.xml


This line is actually part of a description of "previously configured" repo 
data.  See the line immediately following, where it suggests changing the value.



docs/en-US/Release_Notes.xml


This line is actually part of a description of "previously configured" repo 
data.  See the line immediately following, where it suggests changing the value.



docs/en-US/Release_Notes.xml


This line is actually part of a description of "previously configured" repo 
data.  See the line immediately following, where it suggests changing the value.


- Chip Childers


On June 17, 2013, 1:58 p.m., Nils Vogels wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11908/
> ---
> 
> (Updated June 17, 2013, 1:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> This solves CLOUDSTACK-2902, replacing all references to 4.0 repositories to 
> 4.1 for the 4.1 release
> 
> 
> This addresses bug CLOUDSTACK-2902.
> 
> 
> Diffs
> -
> 
>   docs/en-US/Release_Notes.xml 2ae8732 
>   docs/en-US/configure-package-repository.xml c8ba48f 
>   docs/pot/configure-package-repository.pot e915358 
> 
> Diff: https://reviews.apache.org/r/11908/diff/
> 
> 
> Testing
> ---
> 
> Compiled docs with changes applied
> 
> 
> Thanks,
> 
> Nils Vogels
> 
>



RE: enableStorageMaintenance

2013-06-17 Thread Edison Su


> -Original Message-
> From: La Motta, David [mailto:david.lamo...@netapp.com]
> Sent: Friday, June 14, 2013 7:54 AM
> To: 
> Subject: enableStorageMaintenance
> 
> ...works great for putting down the storage into maintenance mode (looking
> forward seeing this for secondary storage as well!).
> 
> Now the question is, after I've run it... how do I know when it is done so I 
> can
> operate on the volume?

enableStorageMaintenance will return a job id, which can be used in 
queryAsyncJobResult. Here is the doc:
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Developers_Guide/asynchronous-commands.html
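
For example, a rough polling sketch, assuming the unauthenticated integration API 
port is enabled (global setting integration.api.port, commonly 8096) so no request 
signing is needed; "mgmt-server" and JOB_ID are placeholders, and real code should 
parse the JSON rather than string matching:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class PollStorageMaintenance {
    public static void main(String[] args) throws Exception {
        String jobId = "JOB_ID"; // the jobid returned by enableStorageMaintenance
        while (true) {
            URL url = new URL("http://mgmt-server:8096/client/api"
                    + "?command=queryAsyncJobResult&response=json&jobid=" + jobId);
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            String compact = body.toString().replaceAll("\\s", "");
            // jobstatus: 0 = still in progress, 1 = completed, 2 = failed
            if (!compact.contains("\"jobstatus\":0")) {
                System.out.println("Job finished: " + compact);
                break;
            }
            Thread.sleep(5000); // then confirm the pool state shows Maintenance, e.g. via listStoragePools
        }
    }
}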


> 
> Poll using updateStoragePool and query the state for "Maintenance"?  What
> about introducing the ability to pass in callback URLs to the REST call?


> 
> Thx.
> 
> 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com
> 
> 



Re: [MERGE] Merge VMSync improvement branch into master

2013-06-17 Thread Chip Childers
On Mon, Jun 17, 2013 at 05:40:36PM +, Kelven Yang wrote:
> Low level classes were tested in unit tests(MessageBus, Job framework, Job
> dispatchers etc), interface layer changes are guarded through matching the
> old semantics, but changes are tested manually, we are planning to get
> this part of testing through BVT system after we have re-based the latest
> master. 
> 
> Kelven 

Fantastic.  BVT was what I was looking for primarily.  Thanks Kelven!

> 
> On 6/17/13 10:01 AM, "Chip Childers"  wrote:
> 
> >On Mon, Jun 17, 2013 at 04:59:00PM +, Kelven Yang wrote:
> >> I'd like to kick off the official merge process. We will start the merge
> >> process after the branch has passed necessary tests
> >> 
> >> Kelven
> >
> >Can you share what testing is being run against the branch?
> 
> 


Re: Review Request: CLOUDSTACK-869-nTier-Apps-2.0_Support-NetScalar-as-external-LB-provider

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10858/#review21993
---


Commit a2c7d3a8a75b2cec266cef566b8828be7a1ebc72 in branch refs/heads/master 
from Jessica Wang
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a2c7d3a ]

CLOUDSTACK-869: Add VPC dialog - add Public LB Provider dropdown, remove VPC 
Offering dropdown. When Public LB Provider is selected as Netscaler, pass 
"Default VPC offering with Netscaler" to createVPC API. When Public LB Provider 
is selected as VpcVirtualRouter, pass "Default VPC Offering" to createVPC API.


- ASF Subversion and Git Services


On May 8, 2013, 1:39 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10858/
> ---
> 
> (Updated May 8, 2013, 1:39 p.m.)
> 
> 
> Review request for cloudstack, Kishan Kavala, Murali Reddy, Alena 
> Prokharchyk, Vijay Venkatachalam, and Ram Ganesh.
> 
> 
> Description
> ---
> 
> This feature will introduce Netscaler as external LB provider in VPC.
> As of now only 1 tier is support for external LB.
> A new VPC offering will be created "Default VPC Offering with NS" with all 
> the services provided by VPCVR and LB service with NetScaler.
> Existing NetscalerElement is used and implements VpcProvider.
> In VpcManager, Netscaler is added as one of the supported providers.
> Netscaler will be dedicated to the vpc.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/network/vpc/VpcOffering.java 3961d0a 
>   
> plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
>  7bd9c2e 
>   server/pom.xml 808dd3e 
>   server/src/com/cloud/network/NetworkServiceImpl.java 5e8be92 
>   server/src/com/cloud/network/guru/ExternalGuestNetworkGuru.java b1606db 
>   server/src/com/cloud/network/vpc/VpcManagerImpl.java a7f06e9 
>   server/test/com/cloud/vpc/VpcTest.java PRE-CREATION 
>   
> server/test/org/apache/cloudstack/networkoffering/CreateNetworkOfferingTest.java
>  cbb6c00 
> 
> Diff: https://reviews.apache.org/r/10858/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing:
> ==
> 1. Creation of Vpc with the default offering with NS is created successfully. 
> ( Enable Netscaler provider in network service providers)
> 2. Deletion of Vpc with the default offering with NS is deleted successfully.
> 3. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> dedicated mode is created successfully.
> 4. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> shared mode should throw exception.
> 5. Creation of tier (webtier) with the created Vpcnetscaler offering is 
> created successfully.
> 6. Verified Only one tier with netscaler as LB provider can be created. 
> 7. Verified deploying Instance in the tier is successful.
> 8. Verified a new nic got created with gateway ip from the tier cidr.
> 9. Verified deployed instance should get the ip from the specified tier cidr 
> range.
> 10. Acquire public ip in the vpc.
> 11. Verified creation on LB rule, is selecting only free dedicated Netscaler 
> device and necessary configuration is created and LB rule is created on NS
> 12. Deletion of LB rule is successful.
> 13. Modification of LB rule is successful
> 14. Creation of LB Health Check of TCP type is successful.
> 15. Deletion of LB Health Check of TCP type is successful.
> 16. Creation of LB Health Check of HTTP type is successful.
> 17. Deletion of LB Health Check of HTTP type is successful.
> 18. IpAssoc command is executed successful on Netscaler.
> 19. Deletion of tier will delete the tier and config on netscaler is cleared
> 20. Deletion of tier will mark the netscaler to be in free mode.
> 
> 
> Unit Test:
> ===
> Created VpcManger tests and added few tests to createNetworkOfferingTest
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



RE: enableStorageMaintenance

2013-06-17 Thread Edison Su


> -Original Message-
> From: La Motta, David [mailto:david.lamo...@netapp.com]
> Sent: Monday, June 17, 2013 8:37 AM
> To: 
> Subject: Re: enableStorageMaintenance
> 
> Along the same lines... is there a REST command coming in 4.2 to quiesce one
> or multiple virtual machines?

Quiesce means quiescing the guest VM file system? Like 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015180
4.2 will support VM snapshots for VMware, but I don't know whether it sets the 
quiesce flag to 1 or not.

 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com
> 
> 
> 
> On Jun 14, 2013, at 10:53 AM, "La Motta, David"
> mailto:david.lamo...@netapp.com>> wrote:
> 
> ...works great for putting down the storage into maintenance mode (looking
> forward seeing this for secondary storage as well!).
> 
> Now the question is, after I've run it... how do I know when it is done so I 
> can
> operate on the volume?
> 
> Poll using updateStoragePool and query the state for "Maintenance"?  What
> about introducing the ability to pass in callback URLs to the REST call?
> 
> Thx.
> 
> 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com etapp.com>
> 
> 
> 



Re: Review Request: Removed String instantiation

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11751/#review21996
---

Ship it!


Applied to master: ce8ada030d3150087357d7135c3877c25a4702c2

Thanks for the patch!

- Chip Childers


On June 8, 2013, 9:13 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11751/
> ---
> 
> (Updated June 8, 2013, 9:13 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> String instantiation and redundant method call replaced with a constant.
> 
> 
> Diffs
> -
> 
>   core/src/com/cloud/network/HAProxyConfigurator.java 29fdf4a 
> 
> Diff: https://reviews.apache.org/r/11751/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>



Re: Review Request: String instantiation is not needed

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11750/#review21997
---

Ship it!


Applied to master: 564013bec0d4356232d93ac52a3e44638578bff0

Thanks for the patch!

- Chip Childers


On June 8, 2013, 9:15 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11750/
> ---
> 
> (Updated June 8, 2013, 9:15 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> Removed a String instantiation, test case added
> 
> 
> Diffs
> -
> 
>   utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 
>   utils/test/com/cloud/utils/net/NetUtilsTest.java 16d3402 
> 
> Diff: https://reviews.apache.org/r/11750/diff/
> 
> 
> Testing
> ---
> 
> Test included
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>
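For context on what the two string-instantiation patches above remove, a tiny
illustration (not the actual diffs):

    public class StringInstantiationExample {
        public static void main(String[] args) {
            String a = "haproxy";              // reuses the interned literal
            String b = new String("haproxy");  // redundant: allocates a second object with the same contents

            System.out.println(a == b);         // false - different objects
            System.out.println(a.equals(b));    // true  - equal contents
        }
    }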



Re: enableStorageMaintenance

2013-06-17 Thread La Motta, David
Yep.  The purpose of quiescing is exactly as described in that document:  
taking a backup without powering off the VM.


David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com



On Jun 17, 2013, at 2:09 PM, Edison Su 
mailto:edison...@citrix.com>>
 wrote:



-Original Message-
From: La Motta, David [mailto:david.lamo...@netapp.com]
Sent: Monday, June 17, 2013 8:37 AM
To: mailto:dev@cloudstack.apache.org>>
Subject: Re: enableStorageMaintenance

Along the same lines... is there a REST command coming in 4.2 to quiesce one
or multiple virtual machines?

Quiesce means quiescing the guest VM file system? Like 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015180
4.2 will support VM snapshots for VMware, but I don't know whether it sets the 
quiesce flag to 1 or not.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com



On Jun 14, 2013, at 10:53 AM, "La Motta, David"
mailto:david.lamo...@netapp.com>>
 wrote:

...works great for putting down the storage into maintenance mode (looking
forward seeing this for secondary storage as well!).

Now the question is, after I've run it... how do I know when it is done so I can
operate on the volume?

Poll using updateStoragePool and query the state for "Maintenance"?  What
about introducing the ability to pass in callback URLs to the REST call?

Thx.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com>







Re: Review Request: use commons-lang StringUtils

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11767/#review21998
---

Ship it!


Applied to master: c88d8fb3a2f6c418c6c7af8ff702a93bcdb2d752

Thanks for the patch!

- Chip Childers


On June 15, 2013, 4:32 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11767/
> ---
> 
> (Updated June 15, 2013, 4:32 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> commons-lang is already a transitive dependency of the utils project, which 
> allows removing some duplicated functionality.
> This patch replaces StringUtils.join(String, Object...) with its 
> commons-lang counterpart.
> It also replaces calls to String join(Iterable, String) in 
> cases where an array already exists and is only wrapped into a List.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/storage/s3/S3ManagerImpl.java 61e5573 
>   
> services/secondary-storage/src/org/apache/cloudstack/storage/resource/NfsSecondaryStorageResource.java
>  e7fa5b2 
>   utils/src/com/cloud/utils/S3Utils.java b7273a1 
>   utils/src/com/cloud/utils/StringUtils.java 14ff4b1 
>   utils/test/com/cloud/utils/StringUtilsTest.java 3c162c7 
> 
> Diff: https://reviews.apache.org/r/11767/diff/
> 
> 
> Testing
> ---
> 
> - Unit test added
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>
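As a small illustration of the kind of replacement described in this review (a
sketch, not the actual patch; it assumes the commons-lang 2.x package name):

    import org.apache.commons.lang.StringUtils;

    public class JoinExample {
        public static void main(String[] args) {
            String[] parts = { "10.1.1.1", "10.1.1.2", "10.1.1.3" };

            // Hand-rolled join of the kind being removed.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < parts.length; i++) {
                if (i > 0) {
                    sb.append(",");
                }
                sb.append(parts[i]);
            }

            // commons-lang equivalent; no need to wrap the array in a List first.
            String joined = StringUtils.join(parts, ",");

            System.out.println(sb.toString().equals(joined));  // true
        }
    }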



Re: Review Request: removed 3 NumbersUtils methods

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11894/#review21999
---


I get the following conflicts when attempting to apply this:

git am ~/patches/11894.patch
Applying: removed 3 NumbersUtils methods
/home/sg-user/incubator-cloudstack/.git/rebase-apply/patch:1152: space before 
tab in indent.
_numRetries = NumberUtils.toInt((String) 
params.get("numretries"), 1);
/home/sg-user/incubator-cloudstack/.git/rebase-apply/patch:1735: space before 
tab in indent.
_clusterRequestTimeoutSeconds = NumberUtils.toInt(value, 
DEFAULT_REQUEST_TIMEOUT);
/home/sg-user/incubator-cloudstack/.git/rebase-apply/patch:2689: space before 
tab in indent.
_capacityPerSSVM = 
NumberUtils.toInt(_configDao.getValue(Config.SecStorageSessionMax.key()), 
DEFAULT_SS_VM_CAPACITY);
/home/sg-user/incubator-cloudstack/.git/rebase-apply/patch:3199: space before 
tab in indent.
int ramSize = 
NumberUtils.toInt(_configDao.getValue("ssvm.ram.size"), DEFAULT_SS_VM_RAMSIZE);
/home/sg-user/incubator-cloudstack/.git/rebase-apply/patch:3200: space before 
tab in indent.
int cpuFreq = 
NumberUtils.toInt(_configDao.getValue("ssvm.cpu.mhz"), DEFAULT_SS_VM_CPUMHZ);
error: patch failed: 
plugins/affinity-group-processors/host-anti-affinity/src/org/apache/cloudstack/affinity/HostAntiAffinityProcessor.java:25
error: 
plugins/affinity-group-processors/host-anti-affinity/src/org/apache/cloudstack/affinity/HostAntiAffinityProcessor.java:
 patch does not apply
error: patch failed: 
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java:41
error: server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java: patch 
does not apply
Patch failed at 0001 removed 3 NumbersUtils methods
When you have resolved this problem run "git am --resolved".
If you would prefer to skip this patch, instead run "git am --skip".
To restore the original branch and stop patching run "git am --abort".

- Chip Childers


On June 15, 2013, 9:29 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11894/
> ---
> 
> (Updated June 15, 2013, 9:29 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> Removed parseInt, parseLong and parseFloat and replaced with calls to 
> commons-lang NumberUtils
> 
> 
> Diffs
> -
> 
>   agent/src/com/cloud/agent/AgentShell.java cf454b8 
>   agent/src/com/cloud/agent/VmmAgentShell.java 190d116 
>   agent/src/com/cloud/agent/resource/consoleproxy/ConsoleProxyResource.java 
> 991764c 
>   
> core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
> dae1c85 
>   core/src/com/cloud/storage/template/TemplateLocation.java 58d023a 
>   
> engine/orchestration/src/org/apache/cloudstack/engine/datacenter/entity/api/db/dao/EngineDataCenterDaoImpl.java
>  f99bc6c 
>   engine/schema/src/com/cloud/dc/dao/DataCenterDaoImpl.java 4d9d010 
>   engine/schema/src/com/cloud/upgrade/dao/Upgrade218to22.java 2ef842a 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/strategy/AncientSnapshotStrategy.java
>  4aba3d9 
>   
> engine/storage/src/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java
>  5326701 
>   
> engine/storage/src/org/apache/cloudstack/storage/allocator/LocalStoragePoolAllocator.java
>  632ba43 
>   
> engine/storage/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  a6880c3 
>   
> plugins/affinity-group-processors/host-anti-affinity/src/org/apache/cloudstack/affinity/HostAntiAffinityProcessor.java
>  6c3f57f 
>   
> plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
>  c321b22 
>   
> plugins/deployment-planners/implicit-dedication/src/com/cloud/deploy/ImplicitDedicationPlanner.java
>  be016cb 
>   
> plugins/deployment-planners/user-dispersing/src/com/cloud/deploy/UserDispersingPlanner.java
>  2b0b158 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/BridgeVifDriver.java
>  b897df2 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  f90edd8 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java
>  eac3248 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareManagerImpl.java
>  a604392 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareStorageManagerImpl.java
>  4ae0f30 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/discoverer/XcpServerDiscoverer.java
>  5b6b546 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
>  5e8283a 
>   
> plugins/network-elements/cisco-vnmc/sr
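For readers unfamiliar with the commons-lang call used in the hunks above, a
minimal illustration of NumberUtils.toInt with a default value (commons-lang 2.x
package assumed):

    import org.apache.commons.lang.math.NumberUtils;

    public class ToIntExample {
        public static void main(String[] args) {
            System.out.println(NumberUtils.toInt("42", 1));    // 42 - parses normally
            System.out.println(NumberUtils.toInt(null, 1));    // 1  - null falls back to the default
            System.out.println(NumberUtils.toInt("oops", 1));  // 1  - unparseable input falls back too
        }
    }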

Re: Review Request: NPE fix in StoragePoolJoinDaoImpl

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11899/#review22000
---

Ship it!


Applied to master: 202cd1529054fe60acce0cce54686268797b65bd

Thanks for the patch!

- Chip Childers


On June 16, 2013, 8:51 a.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11899/
> ---
> 
> (Updated June 16, 2013, 8:51 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> Fixes an NPE in StoragePoolJoinDaoImpl
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/api/query/dao/StoragePoolJoinDaoImpl.java 6d0cde1 
> 
> Diff: https://reviews.apache.org/r/11899/diff/
> 
> 
> Testing
> ---
> 
> yes
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>



Re: Review Request: CLOUDSTACK-869-nTier-Apps-2.0_Support-NetScalar-as-external-LB-provider

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10858/#review22001
---


Commit 3e3e5830b45eeba16b3dacfb5475d53d8d2dee27 in branch 
refs/heads/master-6-17-stable from Jessica Wang
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3e3e583 ]

CLOUDSTACK-869: Add VPC dialog - add Public LB Provider dropdown, remove VPC 
Offering dropdown. When Public LB Provider is selected as Netscaler, pass 
"Default VPC offering with Netscaler" to createVPC API. When Public LB Provider 
is selected as VpcVirtualRouter, pass "Default VPC Offering" to createVPC API.


- ASF Subversion and Git Services


On May 8, 2013, 1:39 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10858/
> ---
> 
> (Updated May 8, 2013, 1:39 p.m.)
> 
> 
> Review request for cloudstack, Kishan Kavala, Murali Reddy, Alena 
> Prokharchyk, Vijay Venkatachalam, and Ram Ganesh.
> 
> 
> Description
> ---
> 
> This feature will introduce Netscaler as an external LB provider in VPC.
> As of now only one tier is supported for external LB.
> A new VPC offering will be created "Default VPC Offering with NS" with all 
> the services provided by VPCVR and LB service with NetScaler.
> Existing NetscalerElement is used and implements VpcProvider.
> In VpcManager, Netscaler is added as one of the supported providers.
> Netscaler will be dedicated to the vpc.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/network/vpc/VpcOffering.java 3961d0a 
>   
> plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
>  7bd9c2e 
>   server/pom.xml 808dd3e 
>   server/src/com/cloud/network/NetworkServiceImpl.java 5e8be92 
>   server/src/com/cloud/network/guru/ExternalGuestNetworkGuru.java b1606db 
>   server/src/com/cloud/network/vpc/VpcManagerImpl.java a7f06e9 
>   server/test/com/cloud/vpc/VpcTest.java PRE-CREATION 
>   
> server/test/org/apache/cloudstack/networkoffering/CreateNetworkOfferingTest.java
>  cbb6c00 
> 
> Diff: https://reviews.apache.org/r/10858/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing:
> ==
> 1. Creation of Vpc with the default offering with NS is created successfully. 
> ( Enable Netscaler provider in network service providers)
> 2. Deletion of Vpc with the default offering with NS is deleted successfully.
> 3. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> dedicated mode is created successfully.
> 4. Creation of new Vpc Network Offering with Netscaler as LB provider with 
> shared mode should throw exception.
> 5. Creation of tier (webtier) with the created Vpcnetscaler offering is 
> created successfully.
> 6. Verified Only one tier with netscaler as LB provider can be created. 
> 7. Verified deploying Instance in the tier is successful.
> 8. Verified a new nic got created with gateway ip from the tier cidr.
> 9. Verified deployed instance should get the ip from the specified tier cidr 
> range.
> 10. Acquire public ip in the vpc.
> 11. Verified creation on LB rule, is selecting only free dedicated Netscaler 
> device and necessary configuration is created and LB rule is created on NS
> 12. Deletion of LB rule is successful.
> 13. Modification of LB rule is successful
> 14. Creation of LB Health Check of TCP type is successful.
> 15. Deletion of LB Health Check of TCP type is successful.
> 16. Creation of LB Health Check of HTTP type is successful.
> 17. Deletion of LB Health Check of HTTP type is successful.
> 18. IpAssoc command is executed successful on Netscaler.
> 19. Deletion of tier will delete the tier and config on netscaler is cleared
> 20. Deletion of tier will mark the netscaler to be in free mode.
> 
> 
> Unit Test:
> ===
> Created VpcManger tests and added few tests to createNetworkOfferingTest
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: Fix for CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable dynamic scaling of vm

2013-06-17 Thread Harikrishna Patnala


> On June 17, 2013, 3:12 p.m., Prasanna Santhanam wrote:
> > server/src/com/cloud/vm/UserVmManagerImpl.java, line 1818
> > 
> >
> > can this be made case insensitive? so comparisons can compare as 
> > equalIgnoreCase?
> >

Hi Prasanna,
There are no comparisons for this, as IsScalable is the name of a value. We 
retrieve the value corresponding to the name "IsScalable" and do comparisons on 
that value in some places.  


> On June 17, 2013, 3:12 p.m., Prasanna Santhanam wrote:
> > server/test/com/cloud/vm/VirtualMachineManagerImplTest.java, line 77
> > 
> >
> > Can you remove the wildcard import?

This is due to auto import; I'll fix this.
Thank you


- Harikrishna


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11910/#review21982
---


On June 17, 2013, 2:44 p.m., Harikrishna Patnala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11910/
> ---
> 
> (Updated June 17, 2013, 2:44 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Nitin Mehta.
> 
> 
> Description
> ---
> 
> CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable 
> dynamic scaling of vm 
> 
> CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of 
> XS tools in the template.
> This should also take care of updating the VM after XS tools are installed in 
> the VM and setting memory values accordingly to support dynamic scaling after a 
> stop/start of the VM.
> 
> 
> This addresses bugs CLOUDSTACK-2987 and CLOUDSTACK-3042.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/VirtualMachineTO.java 46ee01b 
>   api/src/com/cloud/template/VirtualMachineTemplate.java cedc793 
>   api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
>   api/src/org/apache/cloudstack/api/BaseUpdateTemplateOrIsoCmd.java 6fd9773 
>   api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java 
> 284d553 
>   
> api/src/org/apache/cloudstack/api/command/user/template/RegisterTemplateCmd.java
>  c9da0c2 
>   api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java 2860283 
>   api/src/org/apache/cloudstack/api/response/TemplateResponse.java 896154a 
>   api/src/org/apache/cloudstack/api/response/UserVmResponse.java 1f9eb1a 
>   core/src/com/cloud/agent/api/ScaleVmCommand.java b361485 
>   engine/schema/src/com/cloud/storage/VMTemplateVO.java e643d75 
>   engine/schema/src/com/cloud/vm/VMInstanceVO.java fbe03dc 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/TemplateEntityImpl.java
>  4d162bb 
>   plugins/hypervisors/xen/src/com/cloud/hypervisor/XenServerGuru.java 8c38a69 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
>  5e8283a 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/XenServer56FP1Resource.java
>  8e37809 
>   server/src/com/cloud/api/ApiResponseHelper.java 94c5d6c 
>   server/src/com/cloud/api/query/dao/UserVmJoinDaoImpl.java dbfe94d 
>   server/src/com/cloud/api/query/vo/UserVmJoinVO.java 8ad0fdd 
>   server/src/com/cloud/hypervisor/HypervisorGuruBase.java 1ad9a1f 
>   server/src/com/cloud/server/ManagementServerImpl.java 96c72e4 
>   server/src/com/cloud/storage/TemplateProfile.java 0b55f1f 
>   server/src/com/cloud/template/TemplateAdapter.java 9a2d877 
>   server/src/com/cloud/template/TemplateAdapterBase.java 0940d3e 
>   server/src/com/cloud/vm/UserVmManagerImpl.java 1c8ab75 
>   server/src/com/cloud/vm/VirtualMachineManagerImpl.java f946cd1 
>   server/test/com/cloud/vm/VirtualMachineManagerImplTest.java 8715c9e 
>   setup/db/db/schema-410to420.sql 272fc42 
> 
> Diff: https://reviews.apache.org/r/11910/diff/
> 
> 
> Testing
> ---
> 
> Tested locally
> 
> 
> Thanks,
> 
> Harikrishna Patnala
> 
>



Re: Review Request: Fix for CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable dynamic scaling of vm

2013-06-17 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11910/
---

(Updated June 17, 2013, 6:37 p.m.)


Review request for cloudstack, Abhinandan Prateek and Nitin Mehta.


Changes
---

Updated patch with fixes that are due to auto import.


Description
---

CLOUDSTACK-2987 Ensure XStools to be there in template inorder to enable 
dynamic scaling of vm 

CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of 
XS tools in the template.
This should also take care of updating the VM after XS tools are installed in 
the VM and setting memory values accordingly to support dynamic scaling after a 
stop/start of the VM.


This addresses bugs CLOUDSTACK-2987 and CLOUDSTACK-3042.


Diffs (updated)
-

  api/src/com/cloud/agent/api/to/VirtualMachineTO.java 46ee01b 
  api/src/com/cloud/template/VirtualMachineTemplate.java cedc793 
  api/src/org/apache/cloudstack/api/ApiConstants.java ab1402c 
  api/src/org/apache/cloudstack/api/BaseUpdateTemplateOrIsoCmd.java 6fd9773 
  api/src/org/apache/cloudstack/api/command/user/iso/RegisterIsoCmd.java 
284d553 
  
api/src/org/apache/cloudstack/api/command/user/template/RegisterTemplateCmd.java
 c9da0c2 
  api/src/org/apache/cloudstack/api/command/user/vm/UpdateVMCmd.java 2860283 
  api/src/org/apache/cloudstack/api/response/TemplateResponse.java 896154a 
  api/src/org/apache/cloudstack/api/response/UserVmResponse.java 1f9eb1a 
  core/src/com/cloud/agent/api/ScaleVmCommand.java b361485 
  engine/schema/src/com/cloud/storage/VMTemplateVO.java e643d75 
  engine/schema/src/com/cloud/vm/VMInstanceVO.java fbe03dc 
  
engine/storage/src/org/apache/cloudstack/storage/image/TemplateEntityImpl.java 
4d162bb 
  plugins/hypervisors/xen/src/com/cloud/hypervisor/XenServerGuru.java 8c38a69 
  
plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
 5e8283a 
  
plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/XenServer56FP1Resource.java
 8e37809 
  server/src/com/cloud/api/ApiResponseHelper.java 94c5d6c 
  server/src/com/cloud/api/query/dao/UserVmJoinDaoImpl.java dbfe94d 
  server/src/com/cloud/api/query/vo/UserVmJoinVO.java 8ad0fdd 
  server/src/com/cloud/hypervisor/HypervisorGuruBase.java 1ad9a1f 
  server/src/com/cloud/server/ManagementServerImpl.java 96c72e4 
  server/src/com/cloud/storage/TemplateProfile.java 0b55f1f 
  server/src/com/cloud/template/TemplateAdapter.java 9a2d877 
  server/src/com/cloud/template/TemplateAdapterBase.java 0940d3e 
  server/src/com/cloud/vm/UserVmManagerImpl.java 1c8ab75 
  server/src/com/cloud/vm/VirtualMachineManagerImpl.java f946cd1 
  server/test/com/cloud/vm/VirtualMachineManagerImplTest.java 8715c9e 
  setup/db/db/schema-410to420.sql 272fc42 

Diff: https://reviews.apache.org/r/11910/diff/


Testing
---

Tested locally


Thanks,

Harikrishna Patnala



Re: Review Request: CLOUDSTACK-1623 MySQL Database connection can fail to "localhost" on a V6 enabled host

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9852/#review22005
---

Ship it!


Applied to master: d4477ba8da60290d21afd58ce0901d12c85de3a9

Thanks for the patch!

- Chip Childers


On March 11, 2013, 1:31 p.m., Shanker Balan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/9852/
> ---
> 
> (Updated March 11, 2013, 1:31 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> MySQL Database connection can fail to "localhost" on a V6 enabled host
> 
> 
> Diffs
> -
> 
>   docs/en-US/management-server-install-db-local.xml 918cdc0 
> 
> Diff: https://reviews.apache.org/r/9852/diff/
> 
> 
> Testing
> ---
> 
> No. Am yet to figure out how to regen publican docs
> 
> 
> Thanks,
> 
> Shanker Balan
> 
>



Re: Review Request: CLOUDSTACK-1623 MySQL Database connection can fail to "localhost" on a V6 enabled host

2013-06-17 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9852/#review22004
---


Commit d4477ba8da60290d21afd58ce0901d12c85de3a9 in branch refs/heads/master 
from Shanker Balan
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d4477ba ]

CLOUDSTACK-1623: Update documentation to check hosts entry for correct loopback 
interface setup to fix cloud-setup-databases issues during setup


- ASF Subversion and Git Services


On March 11, 2013, 1:31 p.m., Shanker Balan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/9852/
> ---
> 
> (Updated March 11, 2013, 1:31 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> MySQL Database connection can fail to "localhost" on a V6 enabled host
> 
> 
> Diffs
> -
> 
>   docs/en-US/management-server-install-db-local.xml 918cdc0 
> 
> Diff: https://reviews.apache.org/r/9852/diff/
> 
> 
> Testing
> ---
> 
> No. Am yet to figure out how to regen publican docs
> 
> 
> Thanks,
> 
> Shanker Balan
> 
>



Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Harikrishna Patnala
Congratulations Jayapal.

On 17-Jun-2013, at 10:29 PM, Abhinandan Prateek 
 wrote:

> Congrats Jayapal ! Well deserved.
> 
> On 17/06/13 9:00 PM, "Chip Childers"  wrote:
> 
>> The Project Management Committee (PMC) for Apache CloudStack
>> has asked Jayapal Reddy Uradi to become a committer and we are
>> pleased to announce that they have accepted.
>> 
>> Being a committer allows many contributors to contribute more
>> autonomously. For developers, it makes it easier to submit changes and
>> eliminates the need to have contributions reviewed via the patch
>> submission process. Whether contributions are development-related or
>> otherwise, it is a recognition of a contributor's participation in the
>> project and commitment to the project and the Apache Way.
>> 
>> Please join me in congratulating Jayapal!
>> 
>> -chip
>> on behalf of the CloudStack PMC
> 
> 



Re: doc hacking at Hack Day at CCC13

2013-06-17 Thread Joe Brockmeier


On Sat, Jun 15, 2013, at 09:27 AM, Daan Hoogland wrote:
> To all of you planning to hack away at documentation:
> 
> There is quite a lot of commented-out code. And there is quite a lot of code
> uncommented; classes and public/protected and package-scope methods without
> description. I have not taken up the job of improving on this. Instead I
> have been swearing at others for it, for which I apologise.
> 
> Both Joe Brockmeier and Mike Tutkowski have useful proposals for doc
> improvement and I want to call on them to devise doc-from-code generation
> tooling to achieve their goals. (but I call not just on them of course)

Can you expand on this? 

We already generate API docs from the code. Generally when I talk about
docs, I'm referring to implementation docs - e.g. "how do I set up,
install, and manage CloudStack" docs.

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: Review Request: remove duplicated VPC router in return value of DomainRouterDaoImpl.listByStateAndNetworkType

2013-06-17 Thread Chip Childers

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10062/#review22007
---


Wei,

Should this still be applied?  If so, can you go ahead and do it (remember, 4.0 
is not going to see another non-security release).

If not, can you please close this review?

-chip

- Chip Childers


On March 22, 2013, 6:22 a.m., Wei Zhou wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10062/
> ---
> 
> (Updated March 22, 2013, 6:22 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> The return value of listByStateAndNetworkType function in 
> server/src/com/cloud/vm/dao/DomainRouterDaoImpl.java contains duplicated VPC 
> router.
> For example, if a VPC contains 3 tiers, the return value contains 3 VPC 
> routers.
> 
> We need to get only one VPC router for a VPC.
> 
> This patch applies on 4.0.1. I will create a patch for master/4.1 later.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/vm/dao/DomainRouterDaoImpl.java 175d3f2 
> 
> Diff: https://reviews.apache.org/r/10062/diff/
> 
> 
> Testing
> ---
> 
> Testing manually ok.
> 
> 
> Thanks,
> 
> Wei Zhou
> 
>



Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Ahmad Emneina
Awe yeah. Good work Jaypal. Thanks for the contributions so far and look 
forward to more!

Ahmad

On Jun 17, 2013, at 8:30 AM, Chip Childers  wrote:

> The Project Management Committee (PMC) for Apache CloudStack
> has asked Jayapal Reddy Uradi to become a committer and we are 
> pleased to announce that they have accepted.
> 
> Being a committer allows many contributors to contribute more
> autonomously. For developers, it makes it easier to submit changes and
> eliminates the need to have contributions reviewed via the patch
> submission process. Whether contributions are development-related or
> otherwise, it is a recognition of a contributor's participation in the
> project and commitment to the project and the Apache Way.
> 
> Please join me in congratulating Jayapal!
> 
> -chip
> on behalf of the CloudStack PMC


Re: Git Push Summary

2013-06-17 Thread David Nalley
What is this branch for?
On Jun 17, 2013 1:27 PM,  wrote:

> Updated Branches:
>   refs/heads/master-6-17-stable [created] fc16e29f9
>


Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread John Burwell
Congrats and welcome, Jayapal.

On Jun 17, 2013, at 12:55 PM, Koushik Das  wrote:

> Congrats Jayapal
> 
>> -Original Message-
>> From: Chip Childers [mailto:chip.child...@sungard.com]
>> Sent: Monday, June 17, 2013 9:02 PM
>> To: dev@cloudstack.apache.org
>> Subject: [ANNOUNCE] New committer: Jayapal Reddy Uradi
>> 
>> The Project Management Committee (PMC) for Apache CloudStack has
>> asked Jayapal Reddy Uradi to become a committer and we are pleased to
>> announce that they have accepted.
>> 
>> Being a committer allows many contributors to contribute more
>> autonomously. For developers, it makes it easier to submit changes and
>> eliminates the need to have contributions reviewed via the patch submission
>> process. Whether contributions are development-related or otherwise, it is a
>> recognition of a contributor's participation in the project and commitment to
>> the project and the Apache Way.
>> 
>> Please join me in congratulating Jayapal!
>> 
>> -chip
>> on behalf of the CloudStack PMC



Re: [MERGE] Merge VMSync improvement branch into master

2013-06-17 Thread John Burwell
Kelven,

Did this patch get pushed to Review Board?  If so, what is the URL?

Thanks.
-John

On Jun 17, 2013, at 1:40 PM, Kelven Yang  wrote:

> Low level classes were tested in unit tests(MessageBus, Job framework, Job
> dispatchers etc), interface layer changes are guarded through matching the
> old semantics, but changes are tested manually, we are planning to get
> this part of testing through BVT system after we have re-based the latest
> master. 
> 
> Kelven 
> 
> On 6/17/13 10:01 AM, "Chip Childers"  wrote:
> 
>> On Mon, Jun 17, 2013 at 04:59:00PM +, Kelven Yang wrote:
>>> I'd like to kick off the official merge process. We will start the merge
>>> process after the branch has passed necessary tests
>>> 
>>> Kelven
>> 
>> Can you share what testing is being run against the branch?
> 



Re: [ANNOUNCE] New committer: Jayapal Reddy Uradi

2013-06-17 Thread Joe Brockmeier
On Mon, Jun 17, 2013, at 10:30 AM, Chip Childers wrote:
> Please join me in congratulating Jayapal!

Woot! Congrats!

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: Git Push Summary

2013-06-17 Thread Chiradeep Vittal
David, this is a temporary scratch branch to perform some integration
testing since atm the master looks stable.

On 6/17/13 11:57 AM, "David Nalley"  wrote:

>What is this branch for?
>On Jun 17, 2013 1:27 PM,  wrote:
>
>> Updated Branches:
>>   refs/heads/master-6-17-stable [created] fc16e29f9
>>



Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
Hi Edison,

How's about if I add this logic into ZoneWideStoragePoolAllocator (below)?

After filtering storage pools by tags, it saves off the ones that are for
any hypervisor.

Next, we filter the list down more by hypervisor.

Then, we add the storage pools back into the list that were for any
hypervisor.

    @Override
    protected List<StoragePool> select(DiskProfile dskCh,
            VirtualMachineProfile vmProfile,
            DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
        s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
        List<StoragePool> suitablePools = new ArrayList<StoragePool>();

        List<StoragePoolVO> storagePools =
                _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(), dskCh.getTags());

        if (storagePools == null) {
            storagePools = new ArrayList<StoragePoolVO>();
        }

        List<StoragePoolVO> anyHypervisorStoragePools = new ArrayList<StoragePoolVO>();

        for (StoragePoolVO storagePool : storagePools) {
            if (storagePool.getHypervisor().equals(HypervisorType.Any)) {
                anyHypervisorStoragePools.add(storagePool);
            }
        }

        List<StoragePoolVO> storagePoolsByHypervisor =
                _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(), dskCh.getHypervisorType());

        storagePools.retainAll(storagePoolsByHypervisor);

        storagePools.addAll(anyHypervisorStoragePools);

        // add remaining pools in zone, that did not match tags, to avoid set
        List<StoragePoolVO> allPools =
                _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(), null);

        allPools.removeAll(storagePools);

        for (StoragePoolVO pool : allPools) {
            avoid.addPool(pool.getId());
        }

        for (StoragePoolVO storage : storagePools) {
            if (suitablePools.size() == returnUpTo) {
                break;
            }

            StoragePool pol = (StoragePool)this.dataStoreMgr.getPrimaryDataStore(storage.getId());

            if (filter(avoid, pol, dskCh, plan)) {
                suitablePools.add(pol);
            } else {
                avoid.addPool(pol.getId());
            }
        }

        return suitablePools;
    }


On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi Edison,
>
> I haven't looked into this much, so maybe what I suggest here won't make
> sense, but here goes:
>
> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not be
> the name of the enumeration...I forget). The ZoneWideStoragePoolAllocator
> could use this to be less choosy about if a storage pool qualifies to be
> used.
>
> Does that make any sense?
>
> Thanks!
>
>
> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su  wrote:
>
>> I think it's due to this
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
>> There are zone-wide storages, may only work with one particular
>> hypervisor. For example, the data store created on VCenter can be shared by
>> all the clusters in a DC, but only for vmware. And, CloudStack supports
>> multiple hypervisors in one Zone, so, somehow, need a way to tell mgt
>> server, for a particular zone-wide storage, which can only work with
>> certain hypervisors.
>> You can treat hypervisor type on the storage pool, is another tag, to
>> help storage pool allocator to find proper storage pool. But seems
>> hypervisor type is not enough for your case, as your storage pool can work
>> with both vmware/xenserver, but not for other hypervisors(that's your
>> current code's implementation limitation, not your storage itself can't do
>> that).
>> So I'd think you need to extend ZoneWideStoragePoolAllocator, maybe, a
>> new allocator called: solidfirezonewidestoragepoolAllocator. And, replace
>> the following line in applicationContext.xml:
>>   > class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator"
>> />
>> With your solidfirezonewidestoragepoolAllocator
>> It also means, for each CloudStack mgt server deployment, admin needs to
>> configure applicationContext.xml for their needs.
>>
>> > -Original Message-
>> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>> > Sent: Saturday, June 15, 2013 11:34 AM
>> > To: dev@cloudstack.apache.org
>> > Subject: Hypervisor Host Type Required at Zone Level for Primary
>> Storage?
>> >
>> > Hi,
>> >
>> > I recently updated my local repo and noticed that we now require a
>> > hypervisor type to be associated with zone-wide primary storage.
>> >
>> > I was wondering what the motivation for this might be?
>> >
>> > In my case, my zone-wide primary storage represents a SAN. Volumes are
>> > carved out of the SAN as needed and can currently be utilized on both
>> Xen
>> > and VMware (although, of course, once you've used a given volume on one
>> > hypervisor type or the other, you can only continue to use it with that
>> > hypervisor type).
>> >
>> > I guess the point being my primary storage can be associated with more
>> than
>> > one hypervisor type because of its dynamic na

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread John Burwell
Mike,

I know my thoughts will come as a galloping shock, but the idea of a hypervisor 
type being attached to a volume is the type of dependency I think we need to 
remove from the Storage layer.  What attributes of a DataStore/StoragePool 
require association to a hypervisor type?  My thought is that we should expose 
query methods that allow the Hypervisor layer to determine if a
DataStore/StoragePool requires such a reservation, and we track that 
reservation in the Hypervisor layer.

Thanks,
-John

On Jun 17, 2013, at 3:48 PM, Mike Tutkowski  
wrote:

> Hi Edison,
> 
> How's about if I add this logic into ZoneWideStoragePoolAllocator (below)?
> 
> After filtering storage pools by tags, it saves off the ones that are for
> any hypervisor.
> 
> Next, we filter the list down more by hypervisor.
> 
> Then, we add the storage pools back into the list that were for any
> hypervisor.
> 
> @Override
> 
> protected List select(DiskProfile dskCh,
> 
> VirtualMachineProfile vmProfile,
> 
> DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> 
>s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
> 
> List suitablePools = new ArrayList();
> 
> 
>List storagePools =
> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> dskCh.getTags());
> 
> 
>if (storagePools == null) {
> 
>storagePools = new ArrayList();
> 
>}
> 
> 
>List anyHypervisorStoragePools =
> newArrayList();
> 
> 
>for (StoragePoolVO storagePool : storagePools) {
> 
>if (storagePool.getHypervisor().equals(HypervisorType.Any)) {
> 
>anyHypervisorStoragePools.add(storagePool);
> 
>}
> 
>}
> 
> 
>List storagePoolsByHypervisor =
> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(),
> dskCh.getHypervisorType());
> 
> 
>storagePools.retainAll(storagePoolsByHypervisor);
> 
> 
>storagePools.addAll(anyHypervisorStoragePools);
> 
> 
>// add remaining pools in zone, that did not match tags, to avoid
> set
> 
>List allPools =
> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> null);
> 
>allPools.removeAll(storagePools);
> 
>for (StoragePoolVO pool : allPools) {
> 
>avoid.addPool(pool.getId());
> 
>}
> 
> 
>for (StoragePoolVO storage : storagePools) {
> 
>if (suitablePools.size() == returnUpTo) {
> 
>break;
> 
>}
> 
>StoragePool pol = (StoragePool)this.dataStoreMgr
> .getPrimaryDataStore(storage.getId());
> 
>if (filter(avoid, pol, dskCh, plan)) {
> 
>suitablePools.add(pol);
> 
>} else {
> 
>avoid.addPool(pol.getId());
> 
>}
> 
>}
> 
>return suitablePools;
> 
>}
> 
> 
> On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
> 
>> Hi Edison,
>> 
>> I haven't looked into this much, so maybe what I suggest here won't make
>> sense, but here goes:
>> 
>> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not be
>> the name of the enumeration...I forget). The ZoneWideStoragePoolAllocator
>> could use this to be less choosy about if a storage pool qualifies to be
>> used.
>> 
>> Does that make any sense?
>> 
>> Thanks!
>> 
>> 
>> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su  wrote:
>> 
>>> I think it's due to this
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
>>> There are zone-wide storages, may only work with one particular
>>> hypervisor. For example, the data store created on VCenter can be shared by
>>> all the clusters in a DC, but only for vmware. And, CloudStack supports
>>> multiple hypervisors in one Zone, so, somehow, need a way to tell mgt
>>> server, for a particular zone-wide storage, which can only work with
>>> certain hypervisors.
>>> You can treat hypervisor type on the storage pool, is another tag, to
>>> help storage pool allocator to find proper storage pool. But seems
>>> hypervisor type is not enough for your case, as your storage pool can work
>>> with both vmware/xenserver, but not for other hypervisors(that's your
>>> current code's implementation limitation, not your storage itself can't do
>>> that).
>>> So I'd think you need to extend ZoneWideStoragePoolAllocator, maybe, a
>>> new allocator called: solidfirezonewidestoragepoolAllocator. And, replace
>>> the following line in applicationContext.xml:
>>>  >> class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator"
>>> />
>>> With your solidfirezonewidestoragepoolAllocator
>>> It also means, for each CloudStack mgt server deployment, admin needs to
>>> configure applicationContext.xml for their needs.
>>> 
 -Original Message-
 From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
 Sent: Saturday, June 15, 2013 11:34 AM
 To: dev@cloudstack.apache.o

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
I figured you might have something to say about this, John. :)

Yeah, I have no idea behind the motivation for this change other than what
Edison just said in a recent e-mail.

It sounds like this change went in so that the allocators could look at the
VM characteristics and see the hypervisor type. With this info, the
allocator can decide if a particular zone-wide storage is acceptable. This
doesn't apply for my situation as I'm dealing with a SAN, but some
zone-wide storage is static (just a volume "out there" somewhere). Once
this volume is used for, say, XenServer purposes, it can only be used for
XenServer going forward.

For more details, I would recommend Edison comment.


On Mon, Jun 17, 2013 at 2:01 PM, John Burwell  wrote:

> Mike,
>
> I know my thoughts will come as a galloping shock, but the idea of a
> hypervisor type being attached to a volume is the type of dependency I
> think we need to remove from the Storage layer.  What attributes of a
> DataStore/StoragePool require association to a hypervisor type?  My thought
> is that we should expose query methods allow the Hypervisor layer to
> determine if a DataStore/StoragePool requires such a reservation, and we
> track that reservation in the Hypervisor layer.
>
> Thanks,
> -John
>
> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski 
> wrote:
>
> > Hi Edison,
> >
> > How's about if I add this logic into ZoneWideStoragePoolAllocator
> (below)?
> >
> > After filtering storage pools by tags, it saves off the ones that are for
> > any hypervisor.
> >
> > Next, we filter the list down more by hypervisor.
> >
> > Then, we add the storage pools back into the list that were for any
> > hypervisor.
> >
> > @Override
> >
> > protected List select(DiskProfile dskCh,
> >
> > VirtualMachineProfile vmProfile,
> >
> > DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> >
> >s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
> >
> > List suitablePools = new ArrayList();
> >
> >
> >List storagePools =
> > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> > dskCh.getTags());
> >
> >
> >if (storagePools == null) {
> >
> >storagePools = new ArrayList();
> >
> >}
> >
> >
> >List anyHypervisorStoragePools =
> > newArrayList();
> >
> >
> >for (StoragePoolVO storagePool : storagePools) {
> >
> >if (storagePool.getHypervisor().equals(HypervisorType.Any)) {
> >
> >anyHypervisorStoragePools.add(storagePool);
> >
> >}
> >
> >}
> >
> >
> >List storagePoolsByHypervisor =
> >
> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(),
> > dskCh.getHypervisorType());
> >
> >
> >storagePools.retainAll(storagePoolsByHypervisor);
> >
> >
> >storagePools.addAll(anyHypervisorStoragePools);
> >
> >
> >// add remaining pools in zone, that did not match tags, to avoid
> > set
> >
> >List allPools =
> > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> > null);
> >
> >allPools.removeAll(storagePools);
> >
> >for (StoragePoolVO pool : allPools) {
> >
> >avoid.addPool(pool.getId());
> >
> >}
> >
> >
> >for (StoragePoolVO storage : storagePools) {
> >
> >if (suitablePools.size() == returnUpTo) {
> >
> >break;
> >
> >}
> >
> >StoragePool pol = (StoragePool)this.dataStoreMgr
> > .getPrimaryDataStore(storage.getId());
> >
> >if (filter(avoid, pol, dskCh, plan)) {
> >
> >suitablePools.add(pol);
> >
> >} else {
> >
> >avoid.addPool(pol.getId());
> >
> >}
> >
> >}
> >
> >return suitablePools;
> >
> >}
> >
> >
> > On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> Hi Edison,
> >>
> >> I haven't looked into this much, so maybe what I suggest here won't make
> >> sense, but here goes:
> >>
> >> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not be
> >> the name of the enumeration...I forget). The
> ZoneWideStoragePoolAllocator
> >> could use this to be less choosy about if a storage pool qualifies to be
> >> used.
> >>
> >> Does that make any sense?
> >>
> >> Thanks!
> >>
> >>
> >> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su 
> wrote:
> >>
> >>> I think it's due to this
> >>>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
> >>> There are zone-wide storages, may only work with one particular
> >>> hypervisor. For example, the data store created on VCenter can be
> shared by
> >>> all the clusters in a DC, but only for vmware. And, CloudStack supports
> >>> multiple hypervisors in one Zone, so, somehow, need a way to tell mgt
> >>> server, for a particular zone-wide storage, which can only work with
> >>> certain hypervisors.
> >>> You can treat hypervisor type on the storage pool, 

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Marcus Sorensen
I can understand the intention; for example, templates are tied to a
hypervisor because the installed OS works with that hypervisor (drivers,
etc.), and templates end up on primary storage.

To some extent what's on the volume is hypervisor dependent, AND the
storage technology is possibly hypervisor dependent. But I agree that it
doesn't sit well to have the dependency.
On Jun 17, 2013 3:12 PM, "Mike Tutkowski" 
wrote:

> I figured you might have something to say about this, John. :)
>
> Yeah, I have no idea behind the motivation for this change other than what
> Edison just said in a recent e-mail.
>
> It sounds like this change went in so that the allocators could look at the
> VM characteristics and see the hypervisor type. With this info, the
> allocator can decide if a particular zone-wide storage is acceptable. This
> doesn't apply for my situation as I'm dealing with a SAN, but some
> zone-wide storage is static (just a volume "out there" somewhere). Once
> this volume is used for, say, XenServer purposes, it can only be used for
> XenServer going forward.
>
> For more details, I would recommend Edison comment.
>
>
> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell  wrote:
>
> > Mike,
> >
> > I know my thoughts will come as a galloping shock, but the idea of a
> > hypervisor type being attached to a volume is the type of dependency I
> > think we need to remove from the Storage layer.  What attributes of a
> > DataStore/StoragePool require association to a hypervisor type?  My
> thought
> > is that we should expose query methods allow the Hypervisor layer to
> > determine if a DataStore/StoragePool requires such a reservation, and we
> > track that reservation in the Hypervisor layer.
> >
> > Thanks,
> > -John
> >
> > On Jun 17, 2013, at 3:48 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> > wrote:
> >
> > > Hi Edison,
> > >
> > > How's about if I add this logic into ZoneWideStoragePoolAllocator
> > (below)?
> > >
> > > After filtering storage pools by tags, it saves off the ones that are
> for
> > > any hypervisor.
> > >
> > > Next, we filter the list down more by hypervisor.
> > >
> > > Then, we add the storage pools back into the list that were for any
> > > hypervisor.
> > >
> > > @Override
> > >
> > > protected List select(DiskProfile dskCh,
> > >
> > > VirtualMachineProfile vmProfile,
> > >
> > > DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> > >
> > >s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
> > >
> > > List suitablePools = new ArrayList();
> > >
> > >
> > >List storagePools =
> > > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> > > dskCh.getTags());
> > >
> > >
> > >if (storagePools == null) {
> > >
> > >storagePools = new ArrayList();
> > >
> > >}
> > >
> > >
> > >List anyHypervisorStoragePools =
> > > newArrayList();
> > >
> > >
> > >for (StoragePoolVO storagePool : storagePools) {
> > >
> > >if (storagePool.getHypervisor().equals(HypervisorType.Any))
> {
> > >
> > >anyHypervisorStoragePools.add(storagePool);
> > >
> > >}
> > >
> > >}
> > >
> > >
> > >List storagePoolsByHypervisor =
> > >
> >
> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(),
> > > dskCh.getHypervisorType());
> > >
> > >
> > >storagePools.retainAll(storagePoolsByHypervisor);
> > >
> > >
> > >storagePools.addAll(anyHypervisorStoragePools);
> > >
> > >
> > >// add remaining pools in zone, that did not match tags, to
> avoid
> > > set
> > >
> > >List allPools =
> > > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
> > > null);
> > >
> > >allPools.removeAll(storagePools);
> > >
> > >for (StoragePoolVO pool : allPools) {
> > >
> > >avoid.addPool(pool.getId());
> > >
> > >}
> > >
> > >
> > >for (StoragePoolVO storage : storagePools) {
> > >
> > >if (suitablePools.size() == returnUpTo) {
> > >
> > >break;
> > >
> > >}
> > >
> > >StoragePool pol = (StoragePool)this.dataStoreMgr
> > > .getPrimaryDataStore(storage.getId());
> > >
> > >if (filter(avoid, pol, dskCh, plan)) {
> > >
> > >suitablePools.add(pol);
> > >
> > >} else {
> > >
> > >avoid.addPool(pol.getId());
> > >
> > >}
> > >
> > >}
> > >
> > >return suitablePools;
> > >
> > >}
> > >
> > >
> > > On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Hi Edison,
> > >>
> > >> I haven't looked into this much, so maybe what I suggest here won't
> make
> > >> sense, but here goes:
> > >>
> > >> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not
> be
> > >> the name of the enumeration...I forget). The
> > ZoneWideStoragePoolAllocator
> > >> could use this to be less choosy

RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
There are storages which can only work with one hypervisor.
For example, Ceph currently only works with KVM, and a data store created in 
vCenter only works with VMware.



> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Monday, June 17, 2013 1:12 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
> 
> I figured you might have something to say about this, John. :)
> 
> Yeah, I have no idea behind the motivation for this change other than what
> Edison just said in a recent e-mail.
> 
> It sounds like this change went in so that the allocators could look at the VM
> characteristics and see the hypervisor type. With this info, the allocator can
> decide if a particular zone-wide storage is acceptable. This doesn't apply for
> my situation as I'm dealing with a SAN, but some zone-wide storage is static
> (just a volume "out there" somewhere). Once this volume is used for, say,
> XenServer purposes, it can only be used for XenServer going forward.
> 
> For more details, I would recommend Edison comment.
> 
> 
> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> wrote:
> 
> > Mike,
> >
> > I know my thoughts will come as a galloping shock, but the idea of a
> > hypervisor type being attached to a volume is the type of dependency I
> > think we need to remove from the Storage layer.  What attributes of a
> > DataStore/StoragePool require association to a hypervisor type?  My
> > thought is that we should expose query methods allow the Hypervisor
> > layer to determine if a DataStore/StoragePool requires such a
> > reservation, and we track that reservation in the Hypervisor layer.
> >
> > Thanks,
> > -John
> >
> > On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> > 
> > wrote:
> >
> > > Hi Edison,
> > >
> > > How's about if I add this logic into ZoneWideStoragePoolAllocator
> > (below)?
> > >
> > > After filtering storage pools by tags, it saves off the ones that
> > > are for any hypervisor.
> > >
> > > Next, we filter the list down more by hypervisor.
> > >
> > > Then, we add the storage pools back into the list that were for any
> > > hypervisor.
> > >
> > > @Override
> > > protected List<StoragePool> select(DiskProfile dskCh,
> > >         VirtualMachineProfile vmProfile,
> > >         DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> > >     s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
> > >
> > >     List<StoragePool> suitablePools = new ArrayList<StoragePool>();
> > >
> > >     List<StoragePoolVO> storagePools = _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(), dskCh.getTags());
> > >
> > >     if (storagePools == null) {
> > >         storagePools = new ArrayList<StoragePoolVO>();
> > >     }
> > >
> > >     // save off the tagged pools that are open to any hypervisor
> > >     List<StoragePoolVO> anyHypervisorStoragePools = new ArrayList<StoragePoolVO>();
> > >
> > >     for (StoragePoolVO storagePool : storagePools) {
> > >         if (storagePool.getHypervisor().equals(HypervisorType.Any)) {
> > >             anyHypervisorStoragePools.add(storagePool);
> > >         }
> > >     }
> > >
> > >     // narrow the tagged pools to the VM's hypervisor, then add the
> > >     // any-hypervisor pools back in
> > >     List<StoragePoolVO> storagePoolsByHypervisor = _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(), dskCh.getHypervisorType());
> > >
> > >     storagePools.retainAll(storagePoolsByHypervisor);
> > >
> > >     storagePools.addAll(anyHypervisorStoragePools);
> > >
> > >     // add remaining pools in zone, that did not match tags, to avoid set
> > >     List<StoragePoolVO> allPools = _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(), null);
> > >
> > >     allPools.removeAll(storagePools);
> > >
> > >     for (StoragePoolVO pool : allPools) {
> > >         avoid.addPool(pool.getId());
> > >     }
> > >
> > >     for (StoragePoolVO storage : storagePools) {
> > >         if (suitablePools.size() == returnUpTo) {
> > >             break;
> > >         }
> > >
> > >         StoragePool pol = (StoragePool)this.dataStoreMgr.getPrimaryDataStore(storage.getId());
> > >
> > >         if (filter(avoid, pol, dskCh, plan)) {
> > >             suitablePools.add(pol);
> > >         } else {
> > >             avoid.addPool(pol.getId());
> > >         }
> > >     }
> > >
> > >     return suitablePools;
> > > }
> > >
> > >
> > > On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Hi Edison,
> > >>
> > >> I haven't looked into this much, so maybe what I suggest here won't
> > >> make sense, but here goes:
> > >>
> > >> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might
> > >> not be the name of the enumeration...I forget). The
> > ZoneWideStoragePoolAllocator
> > >> could use this to be less choosy ab

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
What do we do, though, if the storage can only work on a subset of the ones
listed in the enum?

For example, XenServer and VMware.


On Mon, Jun 17, 2013 at 2:27 PM, Edison Su  wrote:

> There are storages which can only work with one hypervisor,
>  e.g. Currently, Ceph can only work on KVM. And the data store created in
> VCenter, can only work with Vmware.
>
>
>
> > -Original Message-
> > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > Sent: Monday, June 17, 2013 1:12 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> Storage?
> >
> > I figured you might have something to say about this, John. :)
> >
> > Yeah, I have no idea behind the motivation for this change other than
> what
> > Edison just said in a recent e-mail.
> >
> > It sounds like this change went in so that the allocators could look at
> the VM
> > characteristics and see the hypervisor type. With this info, the
> allocator can
> > decide if a particular zone-wide storage is acceptable. This doesn't
> apply for
> > my situation as I'm dealing with a SAN, but some zone-wide storage is
> static
> > (just a volume "out there" somewhere). Once this volume is used for, say,
> > XenServer purposes, it can only be used for XenServer going forward.
> >
> > For more details, I would recommend Edison comment.
> >
> >
> > On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> > wrote:
> >
> > > Mike,
> > >
> > > I know my thoughts will come as a galloping shock, but the idea of a
> > > hypervisor type being attached to a volume is the type of dependency I
> > > think we need to remove from the Storage layer.  What attributes of a
> > > DataStore/StoragePool require association to a hypervisor type?  My
> > > thought is that we should expose query methods allow the Hypervisor
> > > layer to determine if a DataStore/StoragePool requires such a
> > > reservation, and we track that reservation in the Hypervisor layer.
> > >
> > > Thanks,
> > > -John
> > >
> > > On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> > > 
> > > wrote:
> > >
> > > > Hi Edison,
> > > >
> > > > How's about if I add this logic into ZoneWideStoragePoolAllocator
> > > (below)?
> > > >
> > > > After filtering storage pools by tags, it saves off the ones that
> > > > are for any hypervisor.
> > > >
> > > > Next, we filter the list down more by hypervisor.
> > > >
> > > > Then, we add the storage pools back into the list that were for any
> > > > hypervisor.
> > > >
> > > > @Override
> > > >
> > > > protected List select(DiskProfile dskCh,
> > > >
> > > > VirtualMachineProfile vmProfile,
> > > >
> > > > DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> > > >
> > > >s_logger.debug("ZoneWideStoragePoolAllocator to find storage
> > > > pool");
> > > >
> > > > List suitablePools = new ArrayList();
> > > >
> > > >
> > > >List storagePools =
> > > >
> > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
> > > > ),
> > > > dskCh.getTags());
> > > >
> > > >
> > > >if (storagePools == null) {
> > > >
> > > >storagePools = new ArrayList();
> > > >
> > > >}
> > > >
> > > >
> > > >List anyHypervisorStoragePools =
> > > > newArrayList();
> > > >
> > > >
> > > >for (StoragePoolVO storagePool : storagePools) {
> > > >
> > > >if
> > > > (storagePool.getHypervisor().equals(HypervisorType.Any)) {
> > > >
> > > >anyHypervisorStoragePools.add(storagePool);
> > > >
> > > >}
> > > >
> > > >}
> > > >
> > > >
> > > >List storagePoolsByHypervisor =
> > > >
> > >
> > _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCent
> > e
> > > rId(),
> > > > dskCh.getHypervisorType());
> > > >
> > > >
> > > >storagePools.retainAll(storagePoolsByHypervisor);
> > > >
> > > >
> > > >storagePools.addAll(anyHypervisorStoragePools);
> > > >
> > > >
> > > >// add remaining pools in zone, that did not match tags, to
> > > > avoid set
> > > >
> > > >List allPools =
> > > >
> > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
> > > > ),
> > > > null);
> > > >
> > > >allPools.removeAll(storagePools);
> > > >
> > > >for (StoragePoolVO pool : allPools) {
> > > >
> > > >avoid.addPool(pool.getId());
> > > >
> > > >}
> > > >
> > > >
> > > >for (StoragePoolVO storage : storagePools) {
> > > >
> > > >if (suitablePools.size() == returnUpTo) {
> > > >
> > > >break;
> > > >
> > > >}
> > > >
> > > >StoragePool pol = (StoragePool)this.dataStoreMgr
> > > > .getPrimaryDataStore(storage.getId());
> > > >
> > > >if (filter(avoid, pol, dskCh, plan)) {
> > > >
> > > >suitablePools.add(pol);
> > > >
> > > >} else {
> > > >
> > > >avoid.addPool(pol.getId());
> > > >
> > > >}
> > > >
> > > >}
> > > >
> > > >   

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread John Burwell
Edison,

For me, this issue comes back to the whole notion of the overloaded 
StoragePoolType.  A hypervisor plugin should declare a method akin to 
getSupportedStorageProtocols() : ImmutableSet<StoragePoolType>, which the 
Hypervisor layer can use to filter the available DataStores from the Storage 
subsystem.  For example, as RBD support expands to other hypervisors, we should 
only have to modify those hypervisor plugins -- not the Hypervisor 
orchestration components or any aspect of the Storage layer.
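
A minimal sketch of what such a declaration could look like, assuming a
hypothetical HypervisorPlugin interface, a DataStoreView interface, and a
trimmed-down, illustrative StoragePoolType enum (this is not the actual
CloudStack API):

import com.google.common.collect.ImmutableSet;
import java.util.ArrayList;
import java.util.List;

public class ProtocolFilterSketch {

    // Illustrative protocol enum; CloudStack's real StoragePoolType has more values.
    enum StoragePoolType { NetworkFilesystem, IscsiLUN, RBD, VMFS }

    // Hypothetical hypervisor-plugin contract: the plugin declares which
    // storage protocols it can consume.
    interface HypervisorPlugin {
        ImmutableSet<StoragePoolType> getSupportedStorageProtocols();
    }

    // Hypothetical minimal view of a data store exposing only its protocol.
    interface DataStoreView {
        StoragePoolType getPoolType();
    }

    // The Hypervisor layer filters candidate stores itself; the Storage layer
    // never needs to know which hypervisor is asking.
    static List<DataStoreView> filterByProtocol(HypervisorPlugin plugin,
                                                List<DataStoreView> candidates) {
        List<DataStoreView> usable = new ArrayList<DataStoreView>();
        for (DataStoreView store : candidates) {
            if (plugin.getSupportedStorageProtocols().contains(store.getPoolType())) {
                usable.add(store);
            }
        }
        return usable;
    }
}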

Thanks,
-John

On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:

> There are storages which can only work with one hypervisor,
> e.g. Currently, Ceph can only work on KVM. And the data store created in 
> VCenter, can only work with Vmware.
> 
> 
> 
>> -Original Message-
>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>> Sent: Monday, June 17, 2013 1:12 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
>> 
>> I figured you might have something to say about this, John. :)
>> 
>> Yeah, I have no idea behind the motivation for this change other than what
>> Edison just said in a recent e-mail.
>> 
>> It sounds like this change went in so that the allocators could look at the 
>> VM
>> characteristics and see the hypervisor type. With this info, the allocator 
>> can
>> decide if a particular zone-wide storage is acceptable. This doesn't apply 
>> for
>> my situation as I'm dealing with a SAN, but some zone-wide storage is static
>> (just a volume "out there" somewhere). Once this volume is used for, say,
>> XenServer purposes, it can only be used for XenServer going forward.
>> 
>> For more details, I would recommend Edison comment.
>> 
>> 
>> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
>> wrote:
>> 
>>> Mike,
>>> 
>>> I know my thoughts will come as a galloping shock, but the idea of a
>>> hypervisor type being attached to a volume is the type of dependency I
>>> think we need to remove from the Storage layer.  What attributes of a
>>> DataStore/StoragePool require association to a hypervisor type?  My
>>> thought is that we should expose query methods allow the Hypervisor
>>> layer to determine if a DataStore/StoragePool requires such a
>>> reservation, and we track that reservation in the Hypervisor layer.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
>>> 
>>> wrote:
>>> 
 Hi Edison,
 
 How's about if I add this logic into ZoneWideStoragePoolAllocator
>>> (below)?
 
 After filtering storage pools by tags, it saves off the ones that
 are for any hypervisor.
 
 Next, we filter the list down more by hypervisor.
 
 Then, we add the storage pools back into the list that were for any
 hypervisor.
 
 @Override
 
 protected List select(DiskProfile dskCh,
 
 VirtualMachineProfile vmProfile,
 
 DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
 
   s_logger.debug("ZoneWideStoragePoolAllocator to find storage
 pool");
 
 List suitablePools = new ArrayList();
 
 
   List storagePools =
 
>> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
 ),
 dskCh.getTags());
 
 
   if (storagePools == null) {
 
   storagePools = new ArrayList();
 
   }
 
 
   List anyHypervisorStoragePools =
 newArrayList();
 
 
   for (StoragePoolVO storagePool : storagePools) {
 
   if
 (storagePool.getHypervisor().equals(HypervisorType.Any)) {
 
   anyHypervisorStoragePools.add(storagePool);
 
   }
 
   }
 
 
   List storagePoolsByHypervisor =
 
>>> 
>> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCent
>> e
>>> rId(),
 dskCh.getHypervisorType());
 
 
   storagePools.retainAll(storagePoolsByHypervisor);
 
 
   storagePools.addAll(anyHypervisorStoragePools);
 
 
   // add remaining pools in zone, that did not match tags, to
 avoid set
 
   List allPools =
 
>> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
 ),
 null);
 
   allPools.removeAll(storagePools);
 
   for (StoragePoolVO pool : allPools) {
 
   avoid.addPool(pool.getId());
 
   }
 
 
   for (StoragePoolVO storage : storagePools) {
 
   if (suitablePools.size() == returnUpTo) {
 
   break;
 
   }
 
   StoragePool pol = (StoragePool)this.dataStoreMgr
 .getPrimaryDataStore(storage.getId());
 
   if (filter(avoid, pol, dskCh, plan)) {
 
   suitablePools.add(pol);
 
   } else {
 
   avoid.addPool(pol.getId());
 
   }
 
  

Re: [MERGE] disk_io_throttling to MASTER

2013-06-17 Thread Mike Tutkowski
FYI: I added the IOPS-capacity parameter and related code over the weekend.

The final bit of work for me comes when Wei's code is merged into master
and I pull it down.

At that point, I need to add the GUI and API logic to support mutual
exclusion of our features.
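
As a rough illustration of the kind of mutual-exclusion check meant here, a
hypothetical validation helper (the parameter names and surrounding API are
assumptions, not the final implementation):

public class DiskOfferingValidationSketch {

    // Hypothetical validation: a disk offering should carry either
    // hypervisor-side throttling limits (Wei's feature) or storage-side
    // provisioned IOPS (the SolidFire-style feature), but not both.
    static void validateIopsSettings(Long bytesReadRate, Long iopsReadRate,
                                     Long minProvisionedIops, Long maxProvisionedIops) {
        boolean hypervisorThrottling = bytesReadRate != null || iopsReadRate != null;
        boolean storageProvisioned = minProvisionedIops != null || maxProvisionedIops != null;

        if (hypervisorThrottling && storageProvisioned) {
            throw new IllegalArgumentException(
                "Hypervisor I/O throttling and storage-provisioned IOPS are mutually exclusive");
        }
    }
}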


On Sat, Jun 15, 2013 at 11:12 AM, Chip Childers
wrote:

> On Fri, Jun 14, 2013 at 05:56:27PM -0400, John Burwell wrote:
> > Mike,
> >
> > Awesome.  +1.
> >
> > Thanks for your patience with the back and forth,
> > -John
>
> +1 - this now looks like a great approach.
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread John Burwell
Marcus,

I am coming to the viewpoint that ImageService (ISOs and Templates), hypervisor 
snapshotting, and DataMotionService should be moved from the Storage layer into 
the Hypervisor layer for the following reasons:

1. The storage layer should treat the data it stores as opaque. These services 
deal with content, not data management, in a manner that is specific to one or 
more hypervisors. The Storage layer should simply provide operations to read as 
a stream, read through a file handle, write through a stream, write through a 
file handle, list contents, and delete data based on a logical URI. These 
higher-level, content-oriented services then compose those lower-level 
primitive operations to operate on content (see the sketch below).
2. These elements are hypervisor specific. Therefore, tracking their storage 
location and their association with a hypervisor should be part of the 
hypervisor layer.

As I have said in numerous threads (so I apologize for the repetition), we have 
to break this cyclic dependency for a whole range of good reasons. I am 
beginning to think that until these services are moved to the Hypervisor layer, 
we won't be able to break it.
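
A hedged sketch of the content-agnostic primitives described in point 1 above;
the OpaqueDataStore interface and its signatures are purely illustrative, not
an existing CloudStack API:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.List;

// Illustrative only: the Storage layer addresses opaque data by logical URI
// and never interprets the bytes as a template, ISO, or snapshot.
public interface OpaqueDataStore {

    // Read the object at the given logical URI as a stream.
    InputStream openRead(URI logicalUri);

    // Write (or overwrite) the object at the given logical URI as a stream.
    OutputStream openWrite(URI logicalUri);

    // List the logical URIs stored under the given prefix.
    List<URI> list(URI prefix);

    // Delete the object at the given logical URI.
    void delete(URI logicalUri);
}

// A hypervisor-aware ImageService or snapshot service would then compose
// these primitives (copy, import, export) without the store ever knowing
// what the content means.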

Thanks,
-John

On Jun 17, 2013, at 4:23 PM, Marcus Sorensen  wrote:

> I can understand the intention, for example templates are tied to a
> hypervisor because the OS installed works with that hypervisor (drivers,
> etc), and templates end up on primary storage.
> 
> To some extent what's on the volume is hypervisor dependent, AND the
> storage technology is possibly hypervisor dependent. But I agree that it
> doesn't sit well to have the dependency.
> On Jun 17, 2013 3:12 PM, "Mike Tutkowski" 
> wrote:
> 
>> I figured you might have something to say about this, John. :)
>> 
>> Yeah, I have no idea behind the motivation for this change other than what
>> Edison just said in a recent e-mail.
>> 
>> It sounds like this change went in so that the allocators could look at the
>> VM characteristics and see the hypervisor type. With this info, the
>> allocator can decide if a particular zone-wide storage is acceptable. This
>> doesn't apply for my situation as I'm dealing with a SAN, but some
>> zone-wide storage is static (just a volume "out there" somewhere). Once
>> this volume is used for, say, XenServer purposes, it can only be used for
>> XenServer going forward.
>> 
>> For more details, I would recommend Edison comment.
>> 
>> 
>> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell  wrote:
>> 
>>> Mike,
>>> 
>>> I know my thoughts will come as a galloping shock, but the idea of a
>>> hypervisor type being attached to a volume is the type of dependency I
>>> think we need to remove from the Storage layer.  What attributes of a
>>> DataStore/StoragePool require association to a hypervisor type?  My
>> thought
>>> is that we should expose query methods allow the Hypervisor layer to
>>> determine if a DataStore/StoragePool requires such a reservation, and we
>>> track that reservation in the Hypervisor layer.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com>
>>> wrote:
>>> 
 Hi Edison,
 
 How's about if I add this logic into ZoneWideStoragePoolAllocator
>>> (below)?
 
 After filtering storage pools by tags, it saves off the ones that are
>> for
 any hypervisor.
 
 Next, we filter the list down more by hypervisor.
 
 Then, we add the storage pools back into the list that were for any
 hypervisor.
 
 @Override
 
 protected List select(DiskProfile dskCh,
 
 VirtualMachineProfile vmProfile,
 
 DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
 
   s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
 
 List suitablePools = new ArrayList();
 
 
   List storagePools =
 _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
 dskCh.getTags());
 
 
   if (storagePools == null) {
 
   storagePools = new ArrayList();
 
   }
 
 
   List anyHypervisorStoragePools =
 newArrayList();
 
 
   for (StoragePoolVO storagePool : storagePools) {
 
   if (storagePool.getHypervisor().equals(HypervisorType.Any))
>> {
 
   anyHypervisorStoragePools.add(storagePool);
 
   }
 
   }
 
 
   List storagePoolsByHypervisor =
 
>>> 
>> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(),
 dskCh.getHypervisorType());
 
 
   storagePools.retainAll(storagePoolsByHypervisor);
 
 
   storagePools.addAll(anyHypervisorStoragePools);
 
 
   // add remaining pools in zone, that did not match tags, to
>> avoid
 set
 
   List allPools =
 _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
 null);
 
   allPools.removeAll(storagePools

Re: [MERGE] disk_io_throttling to MASTER

2013-06-17 Thread John Burwell
Mike,

Great news.  Let me know when Review Board is updated, and I will perform a 
second round review.

Thanks,
-John

On Jun 17, 2013, at 4:48 PM, Mike Tutkowski  
wrote:

> FYI: I added the IOPS-capacity parameter and related code over the weekend.
> 
> The final bit of work for me comes when Wei's code is merged into master
> and I pull it down.
> 
> At that point, I need to add the GUI and API logic to support mutual
> exclusion of our features.
> 
> 
> On Sat, Jun 15, 2013 at 11:12 AM, Chip Childers
> wrote:
> 
>> On Fri, Jun 14, 2013 at 05:56:27PM -0400, John Burwell wrote:
>>> Mike,
>>> 
>>> Awesome.  +1.
>>> 
>>> Thanks for your patience with the back and forth,
>>> -John
>> 
>> +1 - this now looks like a great approach.
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*



Re: [MERGE] disk_io_throttling to MASTER

2013-06-17 Thread Mike Tutkowski
Sounds good

I have a little bit of clean-up work to do and then I must figure out how
to generate a patch (since I was using merge rather than rebase during
development), so it will probably be tomorrow.


On Mon, Jun 17, 2013 at 2:50 PM, John Burwell  wrote:

> Mike,
>
> Great news.  Let me know when Review Board is updated, and I will perform
> a second round review.
>
> Thanks,
> -John
>
> On Jun 17, 2013, at 4:48 PM, Mike Tutkowski 
> wrote:
>
> > FYI: I added the IOPS-capacity parameter and related code over the
> weekend.
> >
> > The final bit of work for me comes when Wei's code is merged into master
> > and I pull it down.
> >
> > At that point, I need to add the GUI and API logic to support mutual
> > exclusion of our features.
> >
> >
> > On Sat, Jun 15, 2013 at 11:12 AM, Chip Childers
> > wrote:
> >
> >> On Fri, Jun 14, 2013 at 05:56:27PM -0400, John Burwell wrote:
> >>> Mike,
> >>>
> >>> Awesome.  +1.
> >>>
> >>> Thanks for your patience with the back and forth,
> >>> -John
> >>
> >> +1 - this now looks like a great approach.
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
But currently there is no such hypervisor layer yet, and to me this is related to 
storage, not to the hypervisor: it is a property of a storage to support one 
hypervisor, two hypervisors, or all hypervisors, not a property of the 
hypervisor.
I agree that adding a hypervisor type on the storagepoolcmd is not a proper 
solution; as we have already seen, it is not flexible enough for SolidFire.
How about adding a getSupportedHypervisors() on the storage plugin, which would 
return an ImmutableSet<HypervisorType>?
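
A rough sketch of how such a capability query on the storage plugin could look;
the PrimaryStorageCapabilities interface and the trimmed HypervisorType enum
below are illustrative stand-ins, not the actual plugin API:

import com.google.common.collect.ImmutableSet;

public class HypervisorCapabilitySketch {

    // Illustrative subset of hypervisor types; the real enum has more values,
    // including Any.
    enum HypervisorType { XenServer, KVM, VMware, Any }

    // Hypothetical capability query on the storage plugin: the pool itself
    // declares which hypervisors it supports (one, several, or Any).
    interface PrimaryStorageCapabilities {
        ImmutableSet<HypervisorType> getSupportedHypervisors();
    }

    // An allocator check then reduces to a membership test instead of a
    // single hypervisor column on the storage pool.
    static boolean poolSupports(PrimaryStorageCapabilities pool, HypervisorType vmHypervisor) {
        ImmutableSet<HypervisorType> supported = pool.getSupportedHypervisors();
        return supported.contains(HypervisorType.Any) || supported.contains(vmHypervisor);
    }
}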


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Monday, June 17, 2013 1:42 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
> 
> Edison,
> 
> For me, this issue comes back to the whole notion of the overloaded
> StoragePoolType.  A hypervisor plugin should declare a method akin to
> getSupportedStorageProtocols() : ImmutableSet which
> the Hypervisor layer can use to filter the available DataStores from the
> Storage subsystem.  For example, as RBD support expands to other
> hypervisors, we should only have to modify those hypervisor plugins -- not
> the Hypervisor orchestration components or any aspect of the Storage layer.
> 
> Thanks,
> -John
> 
> On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
> 
> > There are storages which can only work with one hypervisor, e.g.
> > Currently, Ceph can only work on KVM. And the data store created in
> VCenter, can only work with Vmware.
> >
> >
> >
> >> -Original Message-
> >> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >> Sent: Monday, June 17, 2013 1:12 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> Storage?
> >>
> >> I figured you might have something to say about this, John. :)
> >>
> >> Yeah, I have no idea behind the motivation for this change other than
> >> what Edison just said in a recent e-mail.
> >>
> >> It sounds like this change went in so that the allocators could look
> >> at the VM characteristics and see the hypervisor type. With this
> >> info, the allocator can decide if a particular zone-wide storage is
> >> acceptable. This doesn't apply for my situation as I'm dealing with a
> >> SAN, but some zone-wide storage is static (just a volume "out there"
> >> somewhere). Once this volume is used for, say, XenServer purposes, it
> can only be used for XenServer going forward.
> >>
> >> For more details, I would recommend Edison comment.
> >>
> >>
> >> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> >> wrote:
> >>
> >>> Mike,
> >>>
> >>> I know my thoughts will come as a galloping shock, but the idea of a
> >>> hypervisor type being attached to a volume is the type of dependency
> >>> I think we need to remove from the Storage layer.  What attributes
> >>> of a DataStore/StoragePool require association to a hypervisor type?
> >>> My thought is that we should expose query methods allow the
> >>> Hypervisor layer to determine if a DataStore/StoragePool requires
> >>> such a reservation, and we track that reservation in the Hypervisor layer.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> >>> 
> >>> wrote:
> >>>
>  Hi Edison,
> 
>  How's about if I add this logic into ZoneWideStoragePoolAllocator
> >>> (below)?
> 
>  After filtering storage pools by tags, it saves off the ones that
>  are for any hypervisor.
> 
>  Next, we filter the list down more by hypervisor.
> 
>  Then, we add the storage pools back into the list that were for any
>  hypervisor.
> 
>  @Override
> 
>  protected List select(DiskProfile dskCh,
> 
>  VirtualMachineProfile vmProfile,
> 
>  DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> 
>    s_logger.debug("ZoneWideStoragePoolAllocator to find storage
>  pool");
> 
>  List suitablePools = new ArrayList();
> 
> 
>    List storagePools =
> 
> >>
> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
>  ),
>  dskCh.getTags());
> 
> 
>    if (storagePools == null) {
> 
>    storagePools = new ArrayList();
> 
>    }
> 
> 
>    List anyHypervisorStoragePools =
>  newArrayList();
> 
> 
>    for (StoragePoolVO storagePool : storagePools) {
> 
>    if
>  (storagePool.getHypervisor().equals(HypervisorType.Any)) {
> 
>    anyHypervisorStoragePools.add(storagePool);
> 
>    }
> 
>    }
> 
> 
>    List storagePoolsByHypervisor =
> 
> >>>
> >>
> _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCent
> >> e
> >>> rId(),
>  dskCh.getHypervisorType());
> 
> 
>    storagePools.retainAll(storagePoolsByHypervisor);
> 
> 
>    storagePools.addAll(anyHypervisorStoragePools);
> 
> 

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
I think Zadara Storage will be looking to implement a plug-in in an
upcoming release.

They have a similar use case to SolidFire where, I believe, their primary
storage represents a SAN at the zone level.


On Mon, Jun 17, 2013 at 2:54 PM, Edison Su  wrote:

> But currently there is no such hypervisor layer yet, and to me it's
> related to storage, not related to hypervisor. It's a property of a storage
> to support one hypervisor, two hypervisors, or all the hypervisors, not a
> property of hypervisor.
> I agree, that add a hypervisor type on the storagepoolcmd is not a proper
> solution, as we already see, it's not flexible enough for Solidfire.
> How about add a getSupportedHypervisors on storage plugin, which will
> return ImmutableSet?
>
>
> > -Original Message-
> > From: John Burwell [mailto:jburw...@basho.com]
> > Sent: Monday, June 17, 2013 1:42 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> Storage?
> >
> > Edison,
> >
> > For me, this issue comes back to the whole notion of the overloaded
> > StoragePoolType.  A hypervisor plugin should declare a method akin to
> > getSupportedStorageProtocols() : ImmutableSet which
> > the Hypervisor layer can use to filter the available DataStores from the
> > Storage subsystem.  For example, as RBD support expands to other
> > hypervisors, we should only have to modify those hypervisor plugins --
> not
> > the Hypervisor orchestration components or any aspect of the Storage
> layer.
> >
> > Thanks,
> > -John
> >
> > On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
> >
> > > There are storages which can only work with one hypervisor, e.g.
> > > Currently, Ceph can only work on KVM. And the data store created in
> > VCenter, can only work with Vmware.
> > >
> > >
> > >
> > >> -Original Message-
> > >> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > >> Sent: Monday, June 17, 2013 1:12 PM
> > >> To: dev@cloudstack.apache.org
> > >> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> > Storage?
> > >>
> > >> I figured you might have something to say about this, John. :)
> > >>
> > >> Yeah, I have no idea behind the motivation for this change other than
> > >> what Edison just said in a recent e-mail.
> > >>
> > >> It sounds like this change went in so that the allocators could look
> > >> at the VM characteristics and see the hypervisor type. With this
> > >> info, the allocator can decide if a particular zone-wide storage is
> > >> acceptable. This doesn't apply for my situation as I'm dealing with a
> > >> SAN, but some zone-wide storage is static (just a volume "out there"
> > >> somewhere). Once this volume is used for, say, XenServer purposes, it
> > can only be used for XenServer going forward.
> > >>
> > >> For more details, I would recommend Edison comment.
> > >>
> > >>
> > >> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> > >> wrote:
> > >>
> > >>> Mike,
> > >>>
> > >>> I know my thoughts will come as a galloping shock, but the idea of a
> > >>> hypervisor type being attached to a volume is the type of dependency
> > >>> I think we need to remove from the Storage layer.  What attributes
> > >>> of a DataStore/StoragePool require association to a hypervisor type?
> > >>> My thought is that we should expose query methods allow the
> > >>> Hypervisor layer to determine if a DataStore/StoragePool requires
> > >>> such a reservation, and we track that reservation in the Hypervisor
> layer.
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> > >>> 
> > >>> wrote:
> > >>>
> >  Hi Edison,
> > 
> >  How's about if I add this logic into ZoneWideStoragePoolAllocator
> > >>> (below)?
> > 
> >  After filtering storage pools by tags, it saves off the ones that
> >  are for any hypervisor.
> > 
> >  Next, we filter the list down more by hypervisor.
> > 
> >  Then, we add the storage pools back into the list that were for any
> >  hypervisor.
> > 
> >  @Override
> > 
> >  protected List select(DiskProfile dskCh,
> > 
> >  VirtualMachineProfile vmProfile,
> > 
> >  DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> > 
> >    s_logger.debug("ZoneWideStoragePoolAllocator to find storage
> >  pool");
> > 
> >  List suitablePools = new ArrayList();
> > 
> > 
> >    List storagePools =
> > 
> > >>
> > _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
> >  ),
> >  dskCh.getTags());
> > 
> > 
> >    if (storagePools == null) {
> > 
> >    storagePools = new ArrayList();
> > 
> >    }
> > 
> > 
> >    List anyHypervisorStoragePools =
> >  newArrayList();
> > 
> > 
> >    for (StoragePoolVO storagePool : storagePools) {
> > 
> >    if
> >  (storagePool.getHypervisor().equal

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread John Burwell
Edison,

As part of the hack day discussion, I think we need to determine how to 
establish that layer and invert these dependencies.  Hypervisors must know 
about storage and network devices.  A VM is the nexus of a particular set of 
storage devices/volumes and network devices/interfaces.  From an architectural 
perspective, we sustain a system of circular dependencies between these 
layers.  Since a VM must know about storage and networking, I want to invert 
the dependencies such that storage and network are hypervisor agnostic.  I 
believe this is entirely feasible, and it will yield a more robust, 
general-purpose storage layer with wider potential use than just supporting 
hypervisors.

Thanks,
-John

On Jun 17, 2013, at 4:54 PM, Edison Su  wrote:

> But currently there is no such hypervisor layer yet, and to me it's related 
> to storage, not related to hypervisor. It's a property of a storage to 
> support one hypervisor, two hypervisors, or all the hypervisors, not a 
> property of hypervisor.
> I agree, that add a hypervisor type on the storagepoolcmd is not a proper 
> solution, as we already see, it's not flexible enough for Solidfire.
> How about add a getSupportedHypervisors on storage plugin, which will return 
> ImmutableSet?
> 
> 
>> -Original Message-
>> From: John Burwell [mailto:jburw...@basho.com]
>> Sent: Monday, June 17, 2013 1:42 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
>> 
>> Edison,
>> 
>> For me, this issue comes back to the whole notion of the overloaded
>> StoragePoolType.  A hypervisor plugin should declare a method akin to
>> getSupportedStorageProtocols() : ImmutableSet which
>> the Hypervisor layer can use to filter the available DataStores from the
>> Storage subsystem.  For example, as RBD support expands to other
>> hypervisors, we should only have to modify those hypervisor plugins -- not
>> the Hypervisor orchestration components or any aspect of the Storage layer.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
>> 
>>> There are storages which can only work with one hypervisor, e.g.
>>> Currently, Ceph can only work on KVM. And the data store created in
>> VCenter, can only work with Vmware.
>>> 
>>> 
>>> 
 -Original Message-
 From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
 Sent: Monday, June 17, 2013 1:12 PM
 To: dev@cloudstack.apache.org
 Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
>> Storage?
 
 I figured you might have something to say about this, John. :)
 
 Yeah, I have no idea behind the motivation for this change other than
 what Edison just said in a recent e-mail.
 
 It sounds like this change went in so that the allocators could look
 at the VM characteristics and see the hypervisor type. With this
 info, the allocator can decide if a particular zone-wide storage is
 acceptable. This doesn't apply for my situation as I'm dealing with a
 SAN, but some zone-wide storage is static (just a volume "out there"
 somewhere). Once this volume is used for, say, XenServer purposes, it
>> can only be used for XenServer going forward.
 
 For more details, I would recommend Edison comment.
 
 
 On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
 wrote:
 
> Mike,
> 
> I know my thoughts will come as a galloping shock, but the idea of a
> hypervisor type being attached to a volume is the type of dependency
> I think we need to remove from the Storage layer.  What attributes
> of a DataStore/StoragePool require association to a hypervisor type?
> My thought is that we should expose query methods allow the
> Hypervisor layer to determine if a DataStore/StoragePool requires
> such a reservation, and we track that reservation in the Hypervisor layer.
> 
> Thanks,
> -John
> 
> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> 
> wrote:
> 
>> Hi Edison,
>> 
>> How's about if I add this logic into ZoneWideStoragePoolAllocator
> (below)?
>> 
>> After filtering storage pools by tags, it saves off the ones that
>> are for any hypervisor.
>> 
>> Next, we filter the list down more by hypervisor.
>> 
>> Then, we add the storage pools back into the list that were for any
>> hypervisor.
>> 
>> @Override
>> 
>> protected List select(DiskProfile dskCh,
>> 
>> VirtualMachineProfile vmProfile,
>> 
>> DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
>> 
>>  s_logger.debug("ZoneWideStoragePoolAllocator to find storage
>> pool");
>> 
>> List suitablePools = new ArrayList();
>> 
>> 
>>  List storagePools =
>> 
 
>> _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(
>> ),
>> dskCh.getTags());
>> 
>> 
>>  if (storagePools == n

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
I think a hack-day session on this would be great.

To me, since we're so late in the game for 4.2, I think we need to take two
approaches here: 1) Short-term solution for 4.2 (that hopefully will not
make future refactoring work too much more difficult than it might already
be) and 2) Long-term solution such as what John is talking about.


On Mon, Jun 17, 2013 at 3:03 PM, John Burwell  wrote:

> Edison,
>
> As part of the hack day discussion, I think we need to determine how to
> establish that layer and invert these dependencies.  Hypervisors must know
> about storage and network devices.  A VM is the nexus of a particular set
> of storage devices/volumes and network devices/interfaces.  From an
> architectural perspective, we sustain a system circular dependencies
> between these layers.  Since VM must know about storage and networking, I
> want to invert the dependencies such that storage and network are
> hypervisor agnostic.  I believe it is entirely feasible, and will yield a
> more robust, general purpose storage layer with wider potential use than
> just to support hypervisors.
>
> Thanks,
> -John
>
> On Jun 17, 2013, at 4:54 PM, Edison Su  wrote:
>
> > But currently there is no such hypervisor layer yet, and to me it's
> related to storage, not related to hypervisor. It's a property of a storage
> to support one hypervisor, two hypervisors, or all the hypervisors, not a
> property of hypervisor.
> > I agree, that add a hypervisor type on the storagepoolcmd is not a
> proper solution, as we already see, it's not flexible enough for Solidfire.
> > How about add a getSupportedHypervisors on storage plugin, which will
> return ImmutableSet?
> >
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Monday, June 17, 2013 1:42 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> Storage?
> >>
> >> Edison,
> >>
> >> For me, this issue comes back to the whole notion of the overloaded
> >> StoragePoolType.  A hypervisor plugin should declare a method akin to
> >> getSupportedStorageProtocols() : ImmutableSet which
> >> the Hypervisor layer can use to filter the available DataStores from the
> >> Storage subsystem.  For example, as RBD support expands to other
> >> hypervisors, we should only have to modify those hypervisor plugins --
> not
> >> the Hypervisor orchestration components or any aspect of the Storage
> layer.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
> >>
> >>> There are storages which can only work with one hypervisor, e.g.
> >>> Currently, Ceph can only work on KVM. And the data store created in
> >> VCenter, can only work with Vmware.
> >>>
> >>>
> >>>
>  -Original Message-
>  From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>  Sent: Monday, June 17, 2013 1:12 PM
>  To: dev@cloudstack.apache.org
>  Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> >> Storage?
> 
>  I figured you might have something to say about this, John. :)
> 
>  Yeah, I have no idea behind the motivation for this change other than
>  what Edison just said in a recent e-mail.
> 
>  It sounds like this change went in so that the allocators could look
>  at the VM characteristics and see the hypervisor type. With this
>  info, the allocator can decide if a particular zone-wide storage is
>  acceptable. This doesn't apply for my situation as I'm dealing with a
>  SAN, but some zone-wide storage is static (just a volume "out there"
>  somewhere). Once this volume is used for, say, XenServer purposes, it
> >> can only be used for XenServer going forward.
> 
>  For more details, I would recommend Edison comment.
> 
> 
>  On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
>  wrote:
> 
> > Mike,
> >
> > I know my thoughts will come as a galloping shock, but the idea of a
> > hypervisor type being attached to a volume is the type of dependency
> > I think we need to remove from the Storage layer.  What attributes
> > of a DataStore/StoragePool require association to a hypervisor type?
> > My thought is that we should expose query methods allow the
> > Hypervisor layer to determine if a DataStore/StoragePool requires
> > such a reservation, and we track that reservation in the Hypervisor
> layer.
> >
> > Thanks,
> > -John
> >
> > On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> > 
> > wrote:
> >
> >> Hi Edison,
> >>
> >> How's about if I add this logic into ZoneWideStoragePoolAllocator
> > (below)?
> >>
> >> After filtering storage pools by tags, it saves off the ones that
> >> are for any hypervisor.
> >>
> >> Next, we filter the list down more by hypervisor.
> >>
> >> Then, we add the storage pools back into the list that were for any
> >> hy

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread John Burwell
Mike,

My goal is to not incur further technical debt in 4.2 by adding more 
Storage->Hypervisor dependencies that will later need to be inverted.  
Recognizing that we are close to 4.2, the question becomes: is there a simple 
approach that will permit this dependency to be inverted?  I will dig into the 
code tomorrow to see if there is something straightforward we can do for 4.2.  
I invite others to do the same ...

Thanks,
-John 

On Jun 17, 2013, at 5:09 PM, Mike Tutkowski  
wrote:

> I think a hack-day session on this would be great.
> 
> To me, since we're so late in the game for 4.2, I think we need to take two
> approaches here: 1) Short-term solution for 4.2 (that hopefully will not
> make future refactoring work too much more difficult than it might already
> be) and 2) Long-term solution such as what John is talking about.
> 
> 
> On Mon, Jun 17, 2013 at 3:03 PM, John Burwell  wrote:
> 
>> Edison,
>> 
>> As part of the hack day discussion, I think we need to determine how to
>> establish that layer and invert these dependencies.  Hypervisors must know
>> about storage and network devices.  A VM is the nexus of a particular set
>> of storage devices/volumes and network devices/interfaces.  From an
>> architectural perspective, we sustain a system circular dependencies
>> between these layers.  Since VM must know about storage and networking, I
>> want to invert the dependencies such that storage and network are
>> hypervisor agnostic.  I believe it is entirely feasible, and will yield a
>> more robust, general purpose storage layer with wider potential use than
>> just to support hypervisors.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 17, 2013, at 4:54 PM, Edison Su  wrote:
>> 
>>> But currently there is no such hypervisor layer yet, and to me it's
>> related to storage, not related to hypervisor. It's a property of a storage
>> to support one hypervisor, two hypervisors, or all the hypervisors, not a
>> property of hypervisor.
>>> I agree, that add a hypervisor type on the storagepoolcmd is not a
>> proper solution, as we already see, it's not flexible enough for Solidfire.
>>> How about add a getSupportedHypervisors on storage plugin, which will
>> return ImmutableSet?
>>> 
>>> 
 -Original Message-
 From: John Burwell [mailto:jburw...@basho.com]
 Sent: Monday, June 17, 2013 1:42 PM
 To: dev@cloudstack.apache.org
 Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
>> Storage?
 
 Edison,
 
 For me, this issue comes back to the whole notion of the overloaded
 StoragePoolType.  A hypervisor plugin should declare a method akin to
 getSupportedStorageProtocols() : ImmutableSet which
 the Hypervisor layer can use to filter the available DataStores from the
 Storage subsystem.  For example, as RBD support expands to other
 hypervisors, we should only have to modify those hypervisor plugins --
>> not
 the Hypervisor orchestration components or any aspect of the Storage
>> layer.
 
 Thanks,
 -John
 
 On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
 
> There are storages which can only work with one hypervisor, e.g.
> Currently, Ceph can only work on KVM. And the data store created in
 VCenter, can only work with Vmware.
> 
> 
> 
>> -Original Message-
>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
>> Sent: Monday, June 17, 2013 1:12 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
 Storage?
>> 
>> I figured you might have something to say about this, John. :)
>> 
>> Yeah, I have no idea behind the motivation for this change other than
>> what Edison just said in a recent e-mail.
>> 
>> It sounds like this change went in so that the allocators could look
>> at the VM characteristics and see the hypervisor type. With this
>> info, the allocator can decide if a particular zone-wide storage is
>> acceptable. This doesn't apply for my situation as I'm dealing with a
>> SAN, but some zone-wide storage is static (just a volume "out there"
>> somewhere). Once this volume is used for, say, XenServer purposes, it
 can only be used for XenServer going forward.
>> 
>> For more details, I would recommend Edison comment.
>> 
>> 
>> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
>> wrote:
>> 
>>> Mike,
>>> 
>>> I know my thoughts will come as a galloping shock, but the idea of a
>>> hypervisor type being attached to a volume is the type of dependency
>>> I think we need to remove from the Storage layer.  What attributes
>>> of a DataStore/StoragePool require association to a hypervisor type?
>>> My thought is that we should expose query methods allow the
>>> Hypervisor layer to determine if a DataStore/StoragePool requires
>>> such a reservation, and we track that reservation in the Hyper

Re: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Mike Tutkowski
Sounds good... We don't want to add technical debt if it's going to make our
work a lot harder in the future.


On Mon, Jun 17, 2013 at 3:14 PM, John Burwell  wrote:

> Mike,
>
> My goal is not incur further technical debt in 4.2 by adding more
> Storage->Hypervisor dependencies that need to be inverted.  Recognizing
> that we are close to 4.2, the question becomes is there a simple approach
> that will permit this dependency to be inverted?   I will dig into the code
> tomorrow to see if there is something straightforward we can do for 4.2.  I
> invite others to do the same ...
>
> Thanks,
> -John
>
> On Jun 17, 2013, at 5:09 PM, Mike Tutkowski 
> wrote:
>
> > I think a hack-day session on this would be great.
> >
> > To me, since we're so late in the game for 4.2, I think we need to take
> two
> > approaches here: 1) Short-term solution for 4.2 (that hopefully will not
> > make future refactoring work too much more difficult than it might
> already
> > be) and 2) Long-term solution such as what John is talking about.
> >
> >
> > On Mon, Jun 17, 2013 at 3:03 PM, John Burwell 
> wrote:
> >
> >> Edison,
> >>
> >> As part of the hack day discussion, I think we need to determine how to
> >> establish that layer and invert these dependencies.  Hypervisors must
> know
> >> about storage and network devices.  A VM is the nexus of a particular
> set
> >> of storage devices/volumes and network devices/interfaces.  From an
> >> architectural perspective, we sustain a system circular dependencies
> >> between these layers.  Since VM must know about storage and networking,
> I
> >> want to invert the dependencies such that storage and network are
> >> hypervisor agnostic.  I believe it is entirely feasible, and will yield
> a
> >> more robust, general purpose storage layer with wider potential use than
> >> just to support hypervisors.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 17, 2013, at 4:54 PM, Edison Su  wrote:
> >>
> >>> But currently there is no such hypervisor layer yet, and to me it's
> >> related to storage, not related to hypervisor. It's a property of a
> storage
> >> to support one hypervisor, two hypervisors, or all the hypervisors, not
> a
> >> property of hypervisor.
> >>> I agree, that add a hypervisor type on the storagepoolcmd is not a
> >> proper solution, as we already see, it's not flexible enough for
> Solidfire.
> >>> How about add a getSupportedHypervisors on storage plugin, which will
> >> return ImmutableSet?
> >>>
> >>>
>  -Original Message-
>  From: John Burwell [mailto:jburw...@basho.com]
>  Sent: Monday, June 17, 2013 1:42 PM
>  To: dev@cloudstack.apache.org
>  Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> >> Storage?
> 
>  Edison,
> 
>  For me, this issue comes back to the whole notion of the overloaded
>  StoragePoolType.  A hypervisor plugin should declare a method akin to
>  getSupportedStorageProtocols() : ImmutableSet which
>  the Hypervisor layer can use to filter the available DataStores from
> the
>  Storage subsystem.  For example, as RBD support expands to other
>  hypervisors, we should only have to modify those hypervisor plugins --
> >> not
>  the Hypervisor orchestration components or any aspect of the Storage
> >> layer.
> 
>  Thanks,
>  -John
> 
>  On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
> 
> > There are storages which can only work with one hypervisor, e.g.
> > Currently, Ceph can only work on KVM. And the data store created in
>  VCenter, can only work with Vmware.
> >
> >
> >
> >> -Original Message-
> >> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >> Sent: Monday, June 17, 2013 1:12 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
>  Storage?
> >>
> >> I figured you might have something to say about this, John. :)
> >>
> >> Yeah, I have no idea behind the motivation for this change other
> than
> >> what Edison just said in a recent e-mail.
> >>
> >> It sounds like this change went in so that the allocators could look
> >> at the VM characteristics and see the hypervisor type. With this
> >> info, the allocator can decide if a particular zone-wide storage is
> >> acceptable. This doesn't apply for my situation as I'm dealing with
> a
> >> SAN, but some zone-wide storage is static (just a volume "out there"
> >> somewhere). Once this volume is used for, say, XenServer purposes,
> it
>  can only be used for XenServer going forward.
> >>
> >> For more details, I would recommend Edison comment.
> >>
> >>
> >> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> >> wrote:
> >>
> >>> Mike,
> >>>
> >>> I know my thoughts will come as a galloping shock, but the idea of
> a
> >>> hypervisor type being attached to a volume is th
