NetApp & Citrix storage plugin meeting notes -- Sep 17, 2013

2013-09-19 Thread Edison Su
Meeting notes:

Date: Sep 17, 2013

Attendees:

NetApp: Chris, David

Citrix: Alex, Animesh, Edison, Srinivas



1. TakeSnapshot method on storage plugin.

Chris: The TakeSnapshot method is implemented, but backing up the snapshot fails on 
the hypervisor side, because the default backup procedure sends a 
CopyCommand to the XenServer resource, which can't recognize a snapshot created 
by the NetApp storage plugin.

   Edison: Taking a snapshot and then immediately backing it up to secondary storage 
is the default behavior in CloudStack. There is a way to customize it, by 
extending the SnapshotStrategy interface. For example, we can write a 
SnapshotStrategy that doesn't back up unless another backup-snapshot API is 
called. Or we can add a global configuration parameter to disable backup.
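
   A rough sketch of such a deferred-backup strategy, assuming the 4.2-era 
SnapshotStrategy/SnapshotService interfaces (method names and signatures are 
approximations, not the exact CloudStack API; CloudStack imports omitted):

    import javax.inject.Inject;

    // Sketch only -- interface/method signatures are assumed from the 4.2-era
    // storage subsystem and may differ in detail from the actual CloudStack code.
    public class DeferredBackupSnapshotStrategy implements SnapshotStrategy {

        @Inject
        private SnapshotService snapshotSvc; // assumed helper that drives the storage plugin

        @Override
        public boolean canHandle(Snapshot snapshot) {
            // Only claim snapshots that live on this plugin's primary storage.
            return isOnNetAppPrimaryStorage(snapshot);
        }

        @Override
        public SnapshotInfo takeSnapshot(SnapshotInfo snapshot) {
            // Let the storage driver take the snapshot on the array, but
            // deliberately do NOT kick off the copy to secondary storage here.
            snapshotSvc.takeSnapshot(snapshot);
            return snapshot;
        }

        @Override
        public SnapshotInfo backupSnapshot(SnapshotInfo snapshot) {
            // Backup runs only when an explicit backup-snapshot API call asks for it.
            return snapshotSvc.backupSnapshot(snapshot);
        }

        @Override
        public boolean deleteSnapshot(Long snapshotId) {
            // Delegate deletion to the driver; details omitted in this sketch.
            return true;
        }

        private boolean isOnNetAppPrimaryStorage(Snapshot snapshot) {
            // Placeholder: look up the snapshot's primary store and check its provider name.
            return true;
        }
    }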

   Chris: Backing up the snapshot is fine, but we need a way to customize the backup 
logic. We also need a way to let the hypervisor know about the snapshot created by NetApp: 
XenServer has an API to introduce a volume/snapshot, but VMware doesn't seem to 
have that kind of API.

   Edison: There are ways to customize the copy logic, by extending copyAsync in 
the storage plugin. Every time the mgt server sends a CopyCommand, it goes through 
both the source storage provider and the destination storage provider. If either of them 
knows how to handle the CopyCommand, the mgt server hands it to that storage 
provider; otherwise, the mgt server hands it to the DataMotionService. So there 
are two ways to customize CopyCommand handling:

  a. implement copyAsync in the storage plugin, or

  b. add a new DataMotionService.

  Method (a) is easier than method (b).

   In the copyAsync implementation, we can send the CopyCommand to the SSVM to handle 
the actual snapshot copy, instead of sending it to a hypervisor host.

   Chris: Need the details of how to implement copyAsync in a storage plugin.

  [Action]: Edison to provide detailed information on how to implement copyAsync.
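
  As a first approximation, a driver-side override might look like the sketch below. 
It assumes the 4.2/4.3 storage-subsystem interfaces (canCopy/copyAsync on the driver, 
EndPointSelector, CopyCommand); exact signatures may differ, and NetAppPrimaryDataStoreDriver 
is just an illustrative name:

    // Sketch only: signatures are assumed from the 4.2/4.3 storage subsystem
    // and may not match the actual CloudStack interfaces exactly.
    public class NetAppPrimaryDataStoreDriver implements PrimaryDataStoreDriver {

        private static final int COPY_TIMEOUT_SECONDS = 3 * 3600; // illustrative value

        @Inject
        private EndPointSelector epSelector;

        // The mgt server asks the source and destination providers whether they can
        // handle the copy before falling back to DataMotionService.
        @Override
        public boolean canCopy(DataObject srcData, DataObject destData) {
            return srcData.getType() == DataObjectType.SNAPSHOT
                    && isNetAppManaged(srcData.getDataStore());
        }

        @Override
        public void copyAsync(DataObject srcData, DataObject destData,
                AsyncCompletionCallback<CopyCommandResult> callback) {
            CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(),
                    COPY_TIMEOUT_SECONDS, true);
            // Pick an endpoint (ideally an SSVM) to run the actual copy, so that
            // XenServer/VMware never have to understand the NetApp snapshot.
            EndPoint ep = epSelector.select(srcData, destData);
            Answer answer = ep.sendMessage(cmd);
            callback.complete(new CopyCommandResult(null, answer));
        }

        private boolean isNetAppManaged(DataStore store) {
            // Placeholder: check the store's provider name.
            return true;
        }

        // Remaining driver methods (createAsync, deleteAsync, etc.) omitted.
    }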



2. RevertSnapshot

Chris: RevertSnapshot has never been called in CloudStack.

Edison: CloudStack always assumes that once a snapshot is created it will be 
backed up to the image store immediately, so there has been no need to revert a snapshot.

Chris: Need to add revert-snapshot functionality to the UI, and an API to 
implement it, as a snapshot created on NetApp can be reverted.

[Action]: Edison to implement RevertSnapshot, and to call the storage 
plugin's revertSnapshot accordingly.



3.  NetApp volume snapshot.

   David: Due to a limitation of the NetApp hardware (the number of snapshots that can 
be stored on one NetApp volume is limited, to fewer than 255?), we need to 
implement per-NetApp-volume snapshots instead of per-VM-volume snapshots.

   David: Need to change the storage plugin API to take snapshots for all the VMs 
created on the primary storage, instead of one volume at a time, for example 
takeSnapshot(List vms); then NetApp can take just one snapshot of the whole 
NetApp volume.

   Animesh: Does NetApp's hardware keep information for each of the VM's volumes, 
given that only one snapshot is taken on the primary storage?

   David: Yes, there is information per volume; we can store it in the snapshot's 
path column in the CloudStack DB.

   Alex: How about queuing the take-snapshot calls in the storage plugin? For 
example, queue the snapshot requests for several minutes, then issue only one 
take-snapshot command to the NetApp hardware.

   David: It's doable, but it may be better done at the CloudStack mgt 
server level.

   Alex: Better to send out a proposal on the mailing list and see what feedback the 
other storage vendors have. If we agree it's better to implement it at the 
mgt server level, then we can definitely implement it there.

   Alex: Due to the tight 4.3 schedule, implementing it at the storage driver 
level may be more practical.

   [Action] NetApp will send out a proposal to the mailing list.
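
   To make the proposal concrete, the API change David describes might look roughly 
like the following; none of these names exist in CloudStack today (VolumeInfo stands 
for the CloudStack volume object), they are purely illustrative:

    import java.util.List;
    import java.util.Map;

    // Purely illustrative -- this interface does not exist in CloudStack; it only
    // sketches the batched take-snapshot call proposed above.
    public interface BatchSnapshotCapableDriver {

        // Take ONE snapshot of the backing NetApp volume that covers all of the
        // given CloudStack volumes, and return per-volume metadata (for example,
        // the file path inside the array snapshot) to be stored in each
        // snapshot's "path" column in the CloudStack DB.
        Map<VolumeInfo, String> takeSnapshot(List<VolumeInfo> volumesOnThisPrimaryStorage);
    }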



4. Can't add multiple image storage providers in one CloudStack setup.

   Chris: There is no way to let customers migrate existing NFS secondary 
storage to NetApp's image store, due to the above limitation.

   Edison: The reason for allowing only one image storage provider is that whenever 
the CloudStack mgt server needs to access an image store, it has no hint for 
picking one if there are multiple image stores, since they are all in the same 
scope. Also, operations like template sync between image stores and template 
copy between zones would become much more complicated with multiple image stores.

   Edison: The proposal is to put the existing image store into maintenance 
mode, then migrate the existing templates/snapshots stored on it to the 
new image store.

   [Action] Edison to implement image store maintenance mode. NetApp may 
need to provide a way to migrate the image store.



5. How to store configuration for a storage plugin:

Animesh: Alex, is it OK to create a DB table in a storage plugin?

Alex: Better to use the *_details tables.

Chris: For configurations like NetApp serv

RE: LocalHostEndPoint seems to get called

2013-09-25 Thread Edison Su


> -Original Message-
> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> Sent: Wednesday, September 25, 2013 12:38 PM
> To: dev@cloudstack.apache.org
> Subject: LocalHostEndPoint seems to get called
> 
> While I'm doing development and restarting things and what not, it seems
> often storage commands get routed to LocalHostEndPoint.  This seems bad.  I
> don't have sudo setup for my user on my laptop, so things like "Unable to
> create local folder for:
> /mnt/secStorage/64d6e26f-e656-3ba3-908f-ce6610ede011 in order to mount
> nfs://192.168.3.134:/exports/secondary1" fail.  But the bigger problem is,
> shouldn't that not happen at all.  It seems like in a normal setup it should
> never try to use LocalHostEndpoint.  Do I have some setting flipped that is
> enabling that?

The current code has a bug: if the SSVM agent is not in the Up state, then template 
downloading will likely choose LocalHostEndPoint to download the template.
LocalHostEndPoint should only be used to download the system VM template.

> 
> Seems like with the current code you might accidentally mount secondary to
> the management server if the conditions are right...
> 
> Darren


RE: [DISCUSS] UI: New look and feel

2013-09-27 Thread Edison Su
That's nice! Do you need help setting up a demo, or with coding? I just finished reading 
< Mastering Web Application Development with AngularJS >, so I'm trying to master 
something:)

> -Original Message-
> From: Shiva Teja [mailto:shivate...@gmail.com]
> Sent: Thursday, September 26, 2013 9:53 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] UI: New look and feel
> 
> On Fri, Sep 27, 2013 at 9:28 AM, Ian Duffy  wrote:
> 
> > I think so
> > implementation of AngularJS like the way Shiva did it for his GSoC
> > project would be good.
> >
> 
> I'm trying to setup a demo for my project. This should give an idea about the
> code.
> 
> https://github.com/shivateja/cloudstack-ui/blob/angular-
> rawapi/static/js/common/resources/virtualmachines.js
> https://github.com/shivateja/cloudstack-ui/blob/angular-
> rawapi/static/js/app/instances/instances.js
> https://github.com/shivateja/cloudstack-ui/blob/angular-
> rawapi/static/js/app/instances/instances.tpl.html
> 
> Thanks,
> Shiva Teja


RE: [DISCUSS] UI: New look and feel

2013-09-27 Thread Edison Su
Does anybody like the Win8-style UI, especially for the icons? 
http://aozora.github.io/bootmetro/docs/docs-advanced-components.html

> -Original Message-
> From: Brian Federle [mailto:brian.fede...@citrix.com]
> Sent: Friday, September 27, 2013 10:53 AM
> To: dev@cloudstack.apache.org; Sonny Chhen
> Subject: RE: [DISCUSS] UI: New look and feel
> 
> Right now the plan was to remove the icons, though if people think that they
> are important to usability then we can definitely put them back in. I'm
> thinking flat icons though, which would look better with the new design. I'll
> play around with it and maybe post a screenshot with icons included.
> 
> The action icons on the detail pages will still be there, and of course if 
> plugins
> supply their own icons they will be displayed.
> 
> -Brian
> 
> 
> From: SuichII, Christopher [chris.su...@netapp.com]
> Sent: Friday, September 27, 2013 5:14 AM
> To: 
> Subject: Re: [DISCUSS] UI: New look and feel
> 
> Brian - The new style looks great, but I'd like to repeat someone else's
> question: Are we getting rid of the icons on the nav bar? As a plugin dev, it
> would be really nice to keep our company logo by our UI plugin.
> 
> Shiva & Sebastien - What impact would this angular.js project have on UI
> plugins?
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms - Cloud Solutions
> Citrix, Cisco & Red Hat
> 
> On Sep 27, 2013, at 2:44 AM, sebgoa  wrote:
> 
> >
> > On Sep 27, 2013, at 6:52 AM, Shiva Teja  wrote:
> >
> >> On Fri, Sep 27, 2013 at 9:28 AM, Ian Duffy  wrote:
> >>
> >>> I think so
> >>> implementation of AngularJS like the way Shiva did it for his GSoC
> >>> project would be good.
> >>>
> >>
> >> I'm trying to setup a demo for my project. This should give an idea
> >> about the code.
> >>
> >> https://github.com/shivateja/cloudstack-ui/blob/angular-rawapi/static
> >> /js/common/resources/virtualmachines.js
> >> https://github.com/shivateja/cloudstack-ui/blob/angular-rawapi/static
> >> /js/app/instances/instances.js
> >> https://github.com/shivateja/cloudstack-ui/blob/angular-rawapi/static
> >> /js/app/instances/instances.tpl.html
> >>
> >> Thanks,
> >> Shiva Teja
> >
> > Thanks Shiva, I was going to mention it.
> >
> > Shiva has worked on an angular.js app for a cloudstack frontend.
> > All the code has been contributed in tools/ngui
> >
> > This could easily be used with Brian new "CSS" and it would clean up all the
> javascript.
> >
> > -Sebastien
> >



RE: [DISCUSS] Breaking out Marvin from CloudStack

2013-10-02 Thread Edison Su
It seems Marvin won't depend on commands.xml any more, and thus won't depend on any 
artifacts built from CloudStack. So I am +1.

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Wednesday, October 02, 2013 10:14 AM
> To: CloudStack Dev
> Subject: [DISCUSS] Breaking out Marvin from CloudStack
> 
> I would like to seperate marvin from the main cloudstack repo. Much of
> marvin's development has little coupling with CloudStack.
> 
> Similar to CloudMonkey, marvin undergoes rapid changes and it is essential
> to provide a smooth workflow and faster releases for those working with it.
> 
> There are also a small set of people currently looking at marvin for testing
> right now. Often, their reviews and QA effort is mixed with those of
> cloudstack itself. By having a different repo I'd like to be able to provide
> commit access to those working on marvin alone quickly to help with testing.
> 
> After separating marvin
> 0. we will have a separate release cycle for marvin 1. we will have a new
> home for marvin's docs using Sphinx 2. if possible, a different criteria for
> providing commit access to marvin's repos.
> 3. all tests of cloudstack will also move to marvin's repository
> 
> Thoughts?
> 
> --
> Prasanna.,
> 
> 
> Powered by BigRock.com



RE: [DISCUSS] Breaking out Marvin from CloudStack

2013-10-02 Thread Edison Su
Any user can then simply "pip install marvin", without downloading/building the 
CloudStack source code. 

> -Original Message-
> From: Alex Huang [mailto:alex.hu...@citrix.com]
> Sent: Wednesday, October 02, 2013 12:39 PM
> To: dev@cloudstack.apache.org
> Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack
> 
> I don't really understand what purpose would this serve.  Would we ever use
> newer marvin against older CloudStack or vice versa?  What's the benefit?
> 
> I can understand it for cloudmonkey because cloudmonkey is an admin cli
> tool and reving it differently is not a bad idea.  I just don't see it for 
> marvin
> and, especially for the tests.
> 
> --Alex
> 
> > -Original Message-
> > From: Prasanna Santhanam [mailto:t...@apache.org]
> > Sent: Wednesday, October 2, 2013 10:14 AM
> > To: CloudStack Dev
> > Subject: [DISCUSS] Breaking out Marvin from CloudStack
> >
> > I would like to seperate marvin from the main cloudstack repo. Much of
> > marvin's development has little coupling with CloudStack.
> >
> > Similar to CloudMonkey, marvin undergoes rapid changes and it is
> > essential to provide a smooth workflow and faster releases for those
> working with it.
> >
> > There are also a small set of people currently looking at marvin for
> > testing right now. Often, their reviews and QA effort is mixed with
> > those of cloudstack itself. By having a different repo I'd like to be
> > able to provide commit access to those working on marvin alone quickly to
> help with testing.
> >
> > After separating marvin
> > 0. we will have a separate release cycle for marvin 1. we will have a
> > new home for marvin's docs using Sphinx 2. if possible, a different
> > criteria for providing commit access to marvin's repos.
> > 3. all tests of cloudstack will also move to marvin's repository
> >
> > Thoughts?
> >
> > --
> > Prasanna.,
> >
> > 
> > Powered by BigRock.com



RE: [PROPOSAL] Modularize Spring

2013-10-04 Thread Edison Su


> -Original Message-
> From: SuichII, Christopher [mailto:chris.su...@netapp.com]
> Sent: Thursday, October 03, 2013 9:02 PM
> To: 
> Subject: Re: [PROPOSAL] Modularize Spring
> 
> Sure - I could see that working. Anyone have thoughts whether an enum
> could be used instead of an integer? That way we can provide categories or a
> well defined scale (like 0-5)? If we give them free range of 1-100 (or any
> integer range), I imagine people will likely go to the extremes and just use 0
> for can't handle, 1 for low priority (default) and 100 for high priority 
> (storage
> providers). We still have the problem of handling conflicts, or implementers
> who return the same value. However, I'm not sure there is much we can do
> beyond selecting the first implementation that we come across with the
> highest priority. We should document and suggest that implementations
> ensure their canHandle() method is as intelligent as possible and only takes
> control of operations they are truly authorities on.

How about using the approach Spring 4.0 takes: 
https://jira.springsource.org/browse/SPR-5574
http://stackoverflow.com/questions/16967971/spring-ordered-list-of-beans
i.e., add an order to each bean.
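
For illustration, the Spring 4.0 mechanism looks roughly like this: annotate each 
strategy bean with @Order and inject them as a List, which Spring sorts by that order. 
The strategy names below are only placeholders for whatever extension interface ends up 
being prioritized:

    import java.util.List;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.core.annotation.Order;
    import org.springframework.stereotype.Component;

    // Placeholder extension point standing in for e.g. SnapshotStrategy.
    interface Strategy {
        boolean canHandle(Object subject);
    }

    @Component
    @Order(1) // plugin-provided strategies are consulted first
    class NetAppStrategy implements Strategy {
        public boolean canHandle(Object subject) { /* check provider, etc. */ return false; }
    }

    @Component
    @Order(100) // the default/"ancient" strategy is the last resort
    class DefaultStrategy implements Strategy {
        public boolean canHandle(Object subject) { return true; }
    }

    @Component
    class StrategyDispatcher {
        @Autowired
        private List<Strategy> strategies; // Spring 4.0 sorts injected lists by @Order

        public Strategy pick(Object subject) {
            for (Strategy s : strategies) {
                if (s.canHandle(subject)) {
                    return s;
                }
            }
            return null;
        }
    }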

> 
> -Chris
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms - Cloud Solutions
> Citrix, Cisco & Red Hat
> 
> On Oct 3, 2013, at 7:44 PM, Darren Shepherd 
> wrote:
> 
> > Could it be just as simple as the canhandle() returns an int and not 
> > Boolean.
> So the ancient would return 1 but if the netapp matches it returns 100.  If it
> does not handle at all you return -1.  This effectively gives you 
> prioritization.
> So the calling code would still loop through all strategies each time looking 
> for
> the best match.  I don't want the priority to be determined at load time as
> that is less flexible.
> >
> > Darren
> >
> >> On Oct 3, 2013, at 5:42 PM, "SuichII, Christopher"
>  wrote:
> >>
> >> Thanks for the quick reply.
> >>
> >> After talking with Edison, I think what we could do is allow the strategies
> to either specify a priority or 'type' and then order them by that when they
> are loaded. For example, we could have an enum of types like 'PLUGIN,
> HYPERVISOR and DEFAULT' so that we can make sure plugins are asked
> canHandle() first, then hypervisor implementations, then finally resort to the
> default/ancient implementation. This 'type' or 'category' could be specified
> as a bean parameter or as part of the strategy/provider interface. Any
> thoughts on which are better?
> >>
> >> The problem with just making canHandle() more intelligent is that we do
> need some kind of ordering. Ideally, the non-default implementations
> should be asked first, then fall back onto the stock implementations. You saw
> the problem yourself - the XenServerSnapshotStrategy just says it will handle
> all requests, so if a non-standard strategy wants to be given a chance, it 
> must
> be asked before the hypervisor or ancient implementation.
> >>
> >> Alternatively if this matches the same usage of the global configuration
> ordering system, we could add in the storage subsystem strategies and
> providers.
> >>
> >> The reason for log4j changes is that we may want to enable a different
> level of verbosity by default. Our support organization likes to have very
> verbose logs to assist them in triaging support issues. The lowest level log4j
> categories are 'com.cloud', 'org' and 'net' and those are set to DEBUG and
> INFO. Even if we add a line for 'com', the default value should not be 'TRACE'
> like we would like ours to be. I'm not all that great with log4j, though, so
> maybe I'm missing a simpler solution.
> >>
> >> I'll try to keep an eye on the commands.properties/rbac stuff - that is
> good to know.
> >>
> >> Thanks,
> >> Chris
> >> --
> >> Chris Suich
> >> chris.su...@netapp.com
> >> NetApp Software Engineer
> >> Data Center Platforms - Cloud Solutions Citrix, Cisco & Red Hat
> >>
> >>> On Oct 3, 2013, at 4:30 PM, Darren Shepherd
>  wrote:
> >>>
> >>> I forgot to mention about canHandle() ordering.  For extensions, it
> >>> should be preferred that we do not rely on loaded or configured order.
> >>> The canHandle() implementation should assume that they're may be any
> >>> order.  So having said that, I looked at XenServerSnapshotStrategy
> >>> and its canHandle() is brilliantly implemented as "return true;"
> >>>
> >>> Can we look at making the strategies order independent and not
> >>> having another order configuration parameter?
> >>>
> >>> Darren
> >>>
> >>> On Thu, Oct 3, 2013 at 4:26 PM, Darren Shepherd
> >>>  wrote:
>  Chris,
> 
>  I've updated the wiki [1].  Copying from the wiki
> 
>  "Extensions are automatically discovered based on the interfaces
>  they implement and which module is their parent. For example, if
>  you place a storage extension in a child module of the network
>  module, it will not be discovered. Additionally, depending on t

Re: Review Request 14477: Refactor Storage Related Resource Code

2013-10-04 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14477/#review26693
---

Ship it!


Ship It!

- edison su


On Oct. 4, 2013, 12:57 a.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14477/
> ---
> 
> (Updated Oct. 4, 2013, 12:57 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> These changes are a joint effort between Edison and I to refactor some of the 
> code around snapshotting VM volumes and creating templates/volumes from VM 
> volume snapshots. In general, we were working towards allowing 
> PrimaryDataStoreDrivers to create snapshots on primary storage and not 
> requiring the snapshots to be transferred to secondary storage.
> 
> High level changes:
> -Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of 
> PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a purpose
> -Initial work towards enable reverting VM volume from snapshots
> -Added hypervisor commands for introducing and forgetting new hypervisor 
> objects (snapshots, templates & volumes)
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/DataStoreTO.java 9014f8e 
>   api/src/com/cloud/agent/api/to/NfsTO.java 415c95c 
>   api/src/com/cloud/agent/api/to/SwiftTO.java 7349d77 
>   api/src/com/cloud/event/EventTypes.java ec9604e 
>   api/src/com/cloud/storage/snapshot/SnapshotApiService.java 23e6522 
>   
> api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java
>  26351bb 
>   
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
>  PRE-CREATION 
>   core/src/com/cloud/storage/resource/StorageProcessor.java 5fa9f8a 
>   core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java 
> ab9aa2a 
>   core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/command/IntroduceObjectAnswer.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/command/IntroduceObjectCmd.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java 0037ea5 
>   core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java 5e870df 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java
>  ca0cc2c 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/SnapshotService.java
>  d594a07 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/SnapshotStrategy.java
>  86ae532 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  96d1f5a 
>   
> engine/storage/image/src/org/apache/cloudstack/storage/image/store/ImageStoreImpl.java
>  855d8cb 
>   
> engine/storage/integration-test/test/org/apache/cloudstack/storage/test/SnapshotTestWithFakeData.java
>  2aaabda 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
>  3ead93f 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotStrategyBase.java
>  1b57922 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/XenserverSnapshotStrategy.java
>  60d9407 
>   
> engine/storage/src/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java
>  fdc12bf 
>   
> engine/storage/src/org/apache/cloudstack/storage/helper/HypervisorHelper.java 
> PRE-CREATION 
>   
> engine/storage/src/org/apache/cloudstack/storage/helper/HypervisorHelperImpl.java
>  PRE-CREATION 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
>  82fd2ce 
>   
> plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java
>  c7768aa 
>   
> plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
>  4982d87 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/XenServerStorageProcessor.java
>  739b974 
>   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 2297e6a 
>   
> services/secondary-storage/src/org/apache/cloudstack/storage/resource/NfsSecondaryStorageResource.java
>  3ef950b 
> 
> Diff: https://reviews.apache.org/r/14477/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



Re: Review Request 14477: Refactor Storage Related Resource Code

2013-10-04 Thread edison su


> On Oct. 4, 2013, 8:23 p.m., Chip Childers wrote:
> > api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java,
> >  line 36
> > 
> >
> > Are there any unit or integration tests for this new API call? I can't 
> > find any in this diff.

There is no implementation of revertSnapshot yet in the current storage 
drivers. If any storage vendor wants to implement this feature, they can add 
an implementation in their storage driver. Also, if anybody is interested in 
implementing this feature in the default storage driver, for example for Ceph, 
then that's the place to start. 


> On Oct. 4, 2013, 8:23 p.m., Chip Childers wrote:
> > core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java, line 24
> > 
> >
> > Are there any unit or integration tests for this new API call? I can't 
> > find any in this diff.

It's not an API; it's just a simple command that will be sent to the hypervisor host, and 
will be used by HypervisorHelper to introduce/forget an object on the hypervisor host.


> On Oct. 4, 2013, 8:23 p.m., Chip Childers wrote:
> > engine/storage/src/org/apache/cloudstack/storage/helper/HypervisorHelperImpl.java,
> >  lines 35-76
> > 
> >
> > I feel like this new code should have unit test for the if logic trees.

The logic here is very simple: just send a command to the hypervisor host. I'm not sure 
whether a unit test is needed here or not.


> On Oct. 4, 2013, 8:23 p.m., Chip Childers wrote:
> > plugins/hypervisors/simulator/src/com/cloud/resource/SimulatorStorageProcessor.java,
> >  lines 226-236
> > 
> >
> > Can we get rid of these TODOs?

Yes, I can remove the TODOs.


> On Oct. 4, 2013, 8:23 p.m., Chip Childers wrote:
> > server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java, lines 
> > 264-283
> > 
> >
> > Unit tests for this if logic?

Again, it's straightforward code; I'm not sure a unit test is needed here.


- edison


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14477/#review26694
---


On Oct. 4, 2013, 12:57 a.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14477/
> ---
> 
> (Updated Oct. 4, 2013, 12:57 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> These changes are a joint effort between Edison and I to refactor some of the 
> code around snapshotting VM volumes and creating templates/volumes from VM 
> volume snapshots. In general, we were working towards allowing 
> PrimaryDataStoreDrivers to create snapshots on primary storage and not 
> requiring the snapshots to be transferred to secondary storage.
> 
> High level changes:
> -Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of 
> PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a purpose
> -Initial work towards enable reverting VM volume from snapshots
> -Added hypervisor commands for introducing and forgetting new hypervisor 
> objects (snapshots, templates & volumes)
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/DataStoreTO.java 9014f8e 
>   api/src/com/cloud/agent/api/to/NfsTO.java 415c95c 
>   api/src/com/cloud/agent/api/to/SwiftTO.java 7349d77 
>   api/src/com/cloud/event/EventTypes.java ec9604e 
>   api/src/com/cloud/storage/snapshot/SnapshotApiService.java 23e6522 
>   
> api/src/org/apache/cloudstack/api/command/admin/storage/ListStoragePoolsCmd.java
>  26351bb 
>   
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
>  PRE-CREATION 
>   core/src/com/cloud/storage/resource/StorageProcessor.java 5fa9f8a 
>   core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java 
> ab9aa2a 
>   core/src/org/apache/cloudstack/storage/command/ForgetObjectCmd.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/command/IntroduceObjectAnswer.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/command/IntroduceObjectCmd.java 
> PRE-CREATION 
>   core/src/org/apache/cloudstack/storage/to/ImageStoreTO.java 0037ea5 
>   core/src/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java 5e870df 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/EndPointSelector.java
>  ca0cc2c 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/SnapshotService.java
>  d594a07 
>   
> engine/api/src/org/apache/clo

RE: Master doesn't build

2013-10-04 Thread Edison Su
Sorry, I forgot to check in some files.

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Friday, October 04, 2013 2:25 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Master doesn't build
> 
> Looks like the missing files were just added.
> 
> We should be OK now.
> 
> 
> On Fri, Oct 4, 2013 at 3:23 PM, Mike Tutkowski
>  > wrote:
> 
> > Here is how I tried to build it:
> >
> > mvn -P developer,systemvm clean install -Dnoredist
> >
> >
> > On Fri, Oct 4, 2013 at 3:22 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> [INFO] BUILD FAILURE
> >> [INFO]
> >> -
> >> ---
> >> [INFO] Total time: 55.164s
> >> [INFO] Finished at: Fri Oct 04 15:20:55 MDT 2013 [INFO] Final Memory:
> >> 40M/483M [INFO]
> >> -
> >> ---
> >> [ERROR] Failed to execute goal
> >> org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile
> >> (default-compile) on project cloud-core: Compilation failure:
> >> Compilation
> >> failure:
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >>
> oud/storage/resource/StorageSubsystemCommandHandlerBase.java:[27,44]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class IntroduceObjectCmd
> >> [ERROR] location: package org.apache.cloudstack.storage.command
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[26,44]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class ForgetObjectCmd
> >> [ERROR] location: package org.apache.cloudstack.storage.command
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[27,44]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class IntroduceObjectCmd
> >> [ERROR] location: package org.apache.cloudstack.storage.command
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[30,44]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class ForgetObjectCmd
> >> [ERROR] location: package org.apache.cloudstack.storage.command
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[31,44]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class IntroduceObjectCmd
> >> [ERROR] location: package org.apache.cloudstack.storage.command
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[50,27]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class IntroduceObjectCmd
> >> [ERROR] location: interface StorageProcessor [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >> oud/storage/resource/StorageProcessor.java:[51,24]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class ForgetObjectCmd
> >> [ERROR] location: interface StorageProcessor [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >>
> oud/storage/resource/StorageSubsystemCommandHandlerBase.java:[59,38]
> >> error: cannot find symbol
> >> [ERROR] symbol:   class IntroduceObjectCmd
> >> [ERROR] location: class StorageSubsystemCommandHandlerBase
> >> [ERROR]
> >>
> /Users/mtutkowski/Documents/CloudStack/src/CloudStack/core/src/com/cl
> >>
> oud/storage/resource/StorageSubsystemCommandHandlerBase.java:[60,46]
> >> error: cannot find symbol
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkow...@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the
> >> cloud
> >> *(tm)*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *(tm)*
> >
> 
> 
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*


[DISCUSS] Pluggable VM snapshot related operations?

2013-10-04 Thread Edison Su
In 4.2, we added VM snapshots for VMware/XenServer. The current workflow is as follows:
createVMSnapshot API -> VMSnapshotManagerImpl: createVMSnapshot -> send 
CreateVMSnapshotCommand to the hypervisor to create the VM snapshot.

If anybody wants to change the workflow, they need to either change 
VMSnapshotManagerImpl directly or subclass it. Neither is an 
ideal choice, as VMSnapshotManagerImpl should be able to handle different 
ways of taking a VM snapshot, instead of hard-coding one.

The requirements for pluggable VM snapshots come from:
Storage vendors may have their own optimizations, such as NetApp.
A VM snapshot can be implemented in a totally different way (for example, I could 
just send a command to the guest VM to tell my application to flush the disk and hold 
disk writes, then go to the hypervisor to take a volume snapshot). 

If we agree on enabling pluggable VM snapshots, then we can move on to discussing how to 
implement it.

The possible options:
1. Coarse-grained interface. Add a VMSnapshotStrategy interface, which has the 
following methods:
VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot);
Boolean revertVMSnapshot(VMSnapshot vmSnapshot);
Boolean deleteVMSnapshot(VMSnapshot vmSnapshot);

   The workflow will be: createVMSnapshot API -> VMSnapshotManagerImpl: 
createVMSnapshot -> VMSnapshotStrategy: takeVMSnapshot
   VMSnapshotManagerImpl will manage the VM state and do the sanity checks, then 
hand over to VMSnapshotStrategy. 
   In a VMSnapshotStrategy implementation, it may just send a 
Create/Revert/DeleteVMSnapshotCommand to the hypervisor host, or do any 
special operations.

2. Fine-grained interface. Not only add a VMSnapshotStrategy interface, but 
also add certain methods on the storage driver.
The VMSnapshotStrategy interface will be the same as in option 1.
The following methods will be added to the storage driver:
   /* volumesBelongToVM is the list of the VM's volumes created on this 
storage; the storage vendor can either take one snapshot for these volumes in one 
shot, or take a snapshot for each volume separately.
   The pre-condition: the VM is unquiesced. 
   It returns a Boolean indicating whether the VM needs to be unquiesced or not.
   The default storage driver returns false.
   */
   boolean takeVMSnapshot(List<VolumeInfo> volumesBelongToVM, VMSnapshot vmSnapshot);
   Boolean revertVMSnapshot(List<VolumeInfo> volumesBelongToVM, VMSnapshot vmSnapshot);
   Boolean deleteVMSnapshot(List<VolumeInfo> volumesBelongToVM, VMSnapshot vmSnapshot);

The workflow will be: createVMSnapshot API -> VMSnapshotManagerImpl: 
createVMSnapshot -> VMSnapshotStrategy: takeVMSnapshot -> storage driver: takeVMSnapshot
 In the implementation of VMSnapshotStrategy's takeVMSnapshot, the pseudo code 
looks like:
   HypervisorHelper.quiesceVM(vm);
   val volumes = vm.getVolumes();
   val maps = new Map[Driver, List[VolumeInfo]]();
   volumes.foreach(volume => maps.put(volume.getDriver(), volume :: maps.get(volume.getDriver())));
   var needUnquiesce = true;
   maps.foreach((driver, volumes) => needUnquiesce = needUnquiesce && driver.takeVMSnapshot(volumes, vmSnapshot));
   if (needUnquiesce) {
       HypervisorHelper.unquiesce(vm);
   }

By default, quiesceVM in HypervisorHelper will actually take a VM snapshot 
through the hypervisor. 
Does the above logic make sense?

The pros of option 1: it's simple, and there's no need to change the storage driver 
interfaces. The cons: each storage vendor needs to implement a strategy, 
and they may all end up doing the same thing.
The pros of option 2: the storage driver won't need to worry about how to 
quiesce/unquiesce the VM. The cons: it adds these methods to every 
storage driver, so it assumes this workflow will work for everybody.

So which option should we take? Or if you have other options, please let us 
know. 
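
To make option 1 a bit more concrete, a vendor implementation might look roughly like 
the sketch below; VMSnapshotStrategy does not exist yet, so every name here is hypothetical:

    // Hypothetical sketch for option 1 -- none of these interfaces/helpers exist yet.
    public class NetAppVMSnapshotStrategy implements VMSnapshotStrategy {

        @Override
        public VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot) {
            // 1. Quiesce the VM (e.g. via a hypervisor snapshot or a guest agent).
            HypervisorHelper.quiesceVM(vmSnapshot.getVmId());
            try {
                // 2. Snapshot all of the VM's volumes on the storage array in one shot.
                takeArraySnapshot(vmSnapshot);
            } finally {
                // 3. Resume normal I/O whether or not the array snapshot succeeded.
                HypervisorHelper.unquiesceVM(vmSnapshot.getVmId());
            }
            return vmSnapshot;
        }

        @Override
        public Boolean revertVMSnapshot(VMSnapshot vmSnapshot) {
            // Roll each volume back to the array snapshot; details omitted.
            return Boolean.TRUE;
        }

        @Override
        public Boolean deleteVMSnapshot(VMSnapshot vmSnapshot) {
            // Remove the array snapshot; details omitted.
            return Boolean.TRUE;
        }

        private void takeArraySnapshot(VMSnapshot vmSnapshot) {
            // Placeholder for the vendor-specific array call.
        }
    }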



   



Re: Review Request 14516: Added storage_provider_name to storage_pool_view

2013-10-07 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14516/#review26743
---

Ship it!


Ship It!

- edison su


On Oct. 7, 2013, 12:43 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14516/
> ---
> 
> (Updated Oct. 7, 2013, 12:43 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> I added the 'storage_provider_name' field to the 'storage_pool_view' just as 
> 'image_provider_name' is in 'image_store_view'.
> 
> This is my first time making a DB change, so please let me know if I did not 
> properly update the db upgrade script.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/api/query/vo/StoragePoolJoinVO.java 69f2204 
>   setup/db/db/schema-420to430.sql 653ff77 
> 
> Diff: https://reviews.apache.org/r/14516/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



RE: [DISCUSS] Pluggable VM snapshot related operations?

2013-10-07 Thread Edison Su
I created a design document page at 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Pluggable+VM+snapshot+related+operations;
feel free to add items to it.
A new branch "pluggable_vm_snapshot" has also been created. 

> -Original Message-
> From: SuichII, Christopher [mailto:chris.su...@netapp.com]
> Sent: Monday, October 07, 2013 10:02 AM
> To: 
> Subject: Re: [DISCUSS] Pluggable VM snapshot related operations?
> 
> I'm a fan of option 2 - this gives us the most flexibility (as you stated). 
> The
> option is given to completely override the way VM snapshots work AND
> storage providers are given to opportunity to work within the default VM
> snapshot workflow.
> 
> I believe this option should satisfy your concern, Mike. The snapshot and
> quiesce strategy would be in charge of communicating with the hypervisor.
> Storage providers should be able to leverage the default strategies and
> simply perform the storage operations.
> 
> I don't think it should be much of an issue that new method to the storage
> driver interface may not apply to everyone. In fact, that is already the case.
> Some methods such as un/maintain(), attachToXXX() and takeSnapshot() are
> already not implemented by every driver - they just return false when asked
> if they can handle the operation.
> 
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms - Cloud Solutions
> Citrix, Cisco & Red Hat
> 
> On Oct 5, 2013, at 12:11 AM, Mike Tutkowski 
> wrote:
> 
> > Well, my first thought on this is that the storage driver should not
> > be telling the hypervisor to do anything. It should be responsible for
> > creating/deleting volumes, snapshots, etc. on its storage system only.
> >
> >
> > On Fri, Oct 4, 2013 at 5:57 PM, Edison Su  wrote:
> >
> >> In 4.2, we added VM snapshot for Vmware/Xenserver. The current
> >> workflow will be like the following:
> >> createVMSnapshot api -> VMSnapshotManagerImpl: creatVMSnapshot ->
> >> send CreateVMSnapshotCommand to hypervisor to create vm snapshot.
> >>
> >> If anybody wants to change the workflow, then need to either change
> >> VMSnapshotManagerImpl directly or subclass VMSnapshotManagerImpl.
> >> Both are not the ideal choice, as VMSnapshotManagerImpl should be
> >> able to handle different ways to take vm snapshot, instead of hard code.
> >>
> >> The requirements for the pluggable VM snapshot coming from:
> >> Storage vendor may have their optimization, such as NetApp.
> >> VM snapshot can be implemented in a totally different way(For
> >> example, I could just send a command to guest VM, to tell my
> >> application to flush disk and hold disk write, then come to hypervisor to
> take a volume snapshot).
> >>
> >> If we agree on enable pluggable VM snapshot, then we can move on
> >> discuss how to implement it.
> >>
> >> The possible options:
> >> 1. coarse grained interface. Add a VMSnapshotStrategy interface,
> >> which has the following interfaces:
> >>VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot);
> >>Boolean revertVMSnapshot(VMSnapshot vmSnapshot);
> >>Boolean DeleteVMSnapshot(VMSnapshot vmSnapshot);
> >>
> >>   The work flow will be: createVMSnapshot api ->
> VMSnapshotManagerImpl:
> >> creatVMSnapshot -> VMSnapshotStrategy: takeVMSnapshot
> >>   VMSnapshotManagerImpl will manage VM state, do the sanity check,
> >> then will handle over to VMSnapshotStrategy.
> >>   In VMSnapshotStrategy implementation, it may just send a
> >> Create/revert/delete VMSnapshotCommand to hypervisor host, or do
> >> anything special operations.
> >>
> >> 2. fine-grained interface. Not only add a VMSnapshotStrategy
> >> interface, but also add certain methods on the storage driver.
> >>The VMSnapshotStrategy interface will be the same as option 1.
> >>Will add the following methods on storage driver:
> >>   /* volumesBelongToVM  is the list of volumes of the VM that created
> >> on this storage, storage vendor can either take one snapshot for this
> >> volumes in one shot, or take snapshot for each volume separately
> >>   The pre-condition: vm is unquiesced.
> >>   It will return a Boolean to indicate, do need unquiesce vm or not.
> >>   In the default storage driver, it will return false.
> >>*/
> >>boolean takeVMSnapshot(List volumesBelongToVM,
> >> VMSnapshot vmSnapshot);
> >>Boolean rev

RE: [DISCUSS] Breaking out Marvin from CloudStack

2013-10-07 Thread Edison Su
A few questions:
1. About the "more object-oriented" CloudStack API Python binding: is the 
proposed API good enough?
For example, the current hand-written create-virtual-machine method looks like:
class VirtualMachine(object):

@classmethod
def create(cls, apiclient, services, templateid=None, accountid=None,
domainid=None, zoneid=None, networkids=None, 
serviceofferingid=None,
securitygroupids=None, projectid=None, startvm=None,
diskofferingid=None, affinitygroupnames=None, group=None,
hostid=None, keypair=None, mode='basic', method='GET'):

the proposed api may look like:

class VirtualMachine(object):
   def create(self, apiclient, accountId, templateId, **kwargs)

The proposed API will look better than the previous one, and it's automatically 
generated, so it's easy to maintain. But as a consumer of the API, how do people 
know what kind of parameters should be passed in? Will you have online 
documentation for your API? Or do you assume people will look at the API docs 
generated by CloudStack? 
Or why not make the API itself self-documenting? For example, add docs to the 
create method:

class VirtualMachine(object):
    def create(self, apiclient, accountId, templateId, **kwargs):
        '''
        Args:
            accountId:  whatever
            templateId: whatever
            networkids: whatever
        Response:
        '''

All the API documentation should be included in API discovery already, so it should 
be easy to add it to your API binding.

2. Regarding the data factories: with the proposed factories, does the test writer 
still need to write code in each test case to get data, such as code to get an 
account during setUpClass?
I looked at some of the existing test cases; most of them have the same code 
snippet:
class Services:
def __init__(self):
self.services = {
"account": {
"email": "t...@test.com",
"firstname": "Test",
"lastname": "User",
"username": "test",
"password": "password",
},
"virtual_machine": {
"displayname": "Test VM",
"username": "root",
"password": "password",
"ssh_port": 22,
"hypervisor": 'XenServer',
"privateport": 22,
"publicport": 22,
"protocol": 'TCP',
},

With the data factories, will the code look like the following?

class TestFoo:
    def setUpClass(cls):
        account = UserAccount(apiclient)
        vm = UserVM(apiclient)

And if I want to customize the default data factories, I should be able to use 
something like UserAccount(apiclient, username='myfoo')?
And the data factories should be customizable based on the test environment, 
right? 
For example, the current ISO test cases are hard-coded to test against 
http://people.apache.org/~tsp/dummy.iso, but that won't work for DevCloud or in 
an internal network. The ISO data factory should be able to return a URL based 
on the test environment, so the ISO test cases can be reused.




> -Original Message-
> From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com]
> Sent: Monday, October 07, 2013 7:06 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack
> 
> Team,
> 
> Apart\Away from breaking out marvin from cloudstack, please check the
> other new details provided as part of the new proposal for marvin
> refactoring. Your inputs out of experience are invaluable. Any new feature
> tests for CS will be followed with the new approach, provided if we agree to
> all. Pasting the proposal link one more time below.
> 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Marvin+Refactor
> 
> Regards,
> Santhosh
> 
> From: Daan Hoogland [daan.hoogl...@gmail.com]
> Sent: Sunday, October 06, 2013 3:05 PM
> To: dev
> Subject: Re: [DISCUSS] Breaking out Marvin from CloudStack
> 
> On Sun, Oct 6, 2013 at 8:10 PM, Animesh Chaturvedi <
> animesh.chaturv...@citrix.com> wrote:
> 
> > > Yes and we will need to work down a backlog of scenarios before we
> > > ever can rely on guys like me doing that. Not because they won't but
> > > because there is to much to write tests for edging on the new
> > > features they write. Just because those tests aren't there yet. I
> > > think giving Citrix QA a repo to work on is fine but I would like to
> > > see it merged back at some point and a continued possibility to write
> them in the main tree.
> > >
> > [Animesh>] While I don't agree to a separate repo for tests (marvin
> > framework is ok) I do want to call out the proposal is not for giving
> > Citrix QA a repo to work on and I don't think Prasanna meant that way.
> >
> 
> 
> I have to apologize for the formulations I choose to express my thoughts
> with. I did not mean to talk of a dep

RE: [DISCUSS] Breaking out Marvin from CloudStack

2013-10-08 Thread Edison Su


> -Original Message-
> From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com]
> Sent: Tuesday, October 08, 2013 1:28 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack
> 
> Comments Inline.
> 
> -Original Message-
> From: Edison Su [mailto:edison...@citrix.com]
> Sent: Tuesday, October 08, 2013 4:18 AM
> To: dev@cloudstack.apache.org<mailto:dev@cloudstack.apache.org>
> Subject: RE: [DISCUSS] Breaking out Marvin from CloudStack
> 
> Few questions:
> 1. About the "more object-oriented" CloudStack API python binding: Is the
> proposed api  good enough?
> For example,
> The current hand written create virtual machine looks like:
> class VirtualMachine(object):
> 
> @classmethod
> def create(cls, apiclient, services, templateid=None, accountid=None,
> domainid=None, zoneid=None, networkids=None,
> serviceofferingid=None,
> securitygroupids=None, projectid=None, startvm=None,
> diskofferingid=None, affinitygroupnames=None, group=None,
> hostid=None, keypair=None, mode='basic', method='GET'):
> 
> the proposed api may look like:
> 
> class VirtualMachine(object):
>def create(self, apiclient, accountId, templateId, **kwargs)
> 
> The proposed api will look better than previous one, and it's automatically
> generated, so easy to maintain. But as a consumer of the api, how do people
> know what kind of parameters should be passed in? Will you have an online
> document for your api? Or you assume people will look at the api docs
> generated by CloudStack?
> Or why not make the api itself as self-contained? For example, add docs
> before create method:
> 
> class VirtualMachine(object):
>'''
>  Args:
>   accountId: what ever
>templateId: whatever
>networkids: whatever
>'''
>'''
>Response:
> '''
>def create(self, apiclient, accountId, templateId, **kwargs)
> 
> All the api documents should be included in api discovery already, so it 
> should
> be easy to add them in your api binding.
> >> [Santhosh]: Each verb as an action on entity, will have provision as 
> >> earlier
> to have all required and as well optional arguments.  Regarding doc strings, 
> If
> the API docs are having  this facilitation, we will  add them as corresponding
> doc strings during generation of python binding and as well entities. As you
> rightly mentioned, it will good to add this . We will make sure to get it.
> Adding adequate doc strings applies even while writing test feature\lib as
> well, it will improve  ease ,readability,usage etc. Anyways a wiki page, and
> additional pydoc documents posted online will be there.
That's great! The way you separate required parameters from optional parameters 
in the method signature is quite a good idea.

> 
> 2. Regarding to data factories. From the proposed factories, in each test 
> case,
> does test writer still need to write the code to get data, such as writing 
> code
> to get account during the setupclass?
> I looked at some of the existing test cases, most of them have the same code
> snippet:
> class Services:
> def __init__(self):
> self.services = {
> "account": {
> "email": "t...@test.com<mailto:t...@test.com>",
> "firstname": "Test",
> "lastname": "User",
> "username": "test",
> "password": "password",
> },
> "virtual_machine": {
> "displayname": "Test VM",
> "username": "root",
> "password": "password",
> "ssh_port": 22,
> "hypervisor": 'XenServer',
> "privateport": 22,
> "publicport": 22,
> "protocol": 'TCP',
> },
> 
> With the data factories, the code will look like the following?
> 
> Class TestFoo:
>  Def setupClass():
>   Account = UserAccount(apiclient)
>VM = UserVM(apiClient)
> 
> And if I want to customize the default data factories, I should be able to use
> something like: UserAccount(apiclient, username='myfoo')?
> And the data factories should be able to customized base

Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-08 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14522/#review26786
---



ui/scripts/storage.js
<https://reviews.apache.org/r/14522/#comment52129>

    Regarding the UI change here: is there a way to disable it in the UI if the storage 
provider is not NetApp? Or could the UI change be moved into your plugin?


- edison su


On Oct. 7, 2013, 8:26 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14522/
> ---
> 
> (Updated Oct. 7, 2013, 8:26 p.m.)
> 
> 
> Review request for cloudstack, Brian Federle and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> After the last batch of work to the revertSnapshot API, SnapshotServiceImpl 
> was not tied into the workflow to be used by storage providers. I have added 
> the logic in a similar fashion to takeSnapshot(), backupSnapshot() and 
> deleteSnapshot().
> 
> I have also added a 'Revert to Snapshot' action to the volume snapshots list 
> in the UI.
> 
> 
> Diffs
> -
> 
>   
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
>  946eebd 
>   client/WEB-INF/classes/resources/messages.properties f92b85a 
>   client/tomcatconf/commands.properties.in 58c770d 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
>  c09adca 
>   server/src/com/cloud/server/ManagementServerImpl.java 0a0fcdc 
>   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 0b53cfd 
>   ui/dictionary.jsp f93f9dc 
>   ui/scripts/storage.js 88fb9f2 
> 
> Diff: https://reviews.apache.org/r/14522/diff/
> 
> 
> Testing
> ---
> 
> I have tested all of this locally with a custom storage provider.
> 
> Unfortunately, I'm still in the middle of figuring out how to properly unit 
> test this type of code. If anyone has any recommendations, please let me know.
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



RE: [DISCUSS] Pluggable VM snapshot related operations?

2013-10-08 Thread Edison Su
> >>>
> >>> On Oct 8, 2013, at 2:24 PM, Darren Shepherd
>  wrote:
> >>>
> >>>> So in the implementation, when we say "quiesce" is that actually
> >>>> being implemented as a VM snapshot (memory and disk).  And then
> >>>> when you say "unquiesce" you are talking about deleting the VM
> snapshot?
> >>>
> >>> If the VM snapshot is not going to the hypervisor, then yes, it will
> actually be a hypervisor snapshot. Just to be clear, the unquiesce is not 
> quite
> a delete - it is a collapse of the VM snapshot and the active VM back into one
> file.
> >>>
> >>>>
> >>>> In NetApp, what are you snapshotting?  The whole netapp volume (I
> >>>> don't know the correct term), a file on NFS, an iscsi volume?  I
> >>>> don't know a whole heck of a lot about the netapp snapshot
> capabilities.
> >>>
> >>> Essentially we are using internal APIs to create file level backups - 
> >>> don't
> worry too much about the terminology.
> >>>
> >>>>
> >>>> I know storage solutions can snapshot better and faster than
> >>>> hypervisors can with COW files.  I've personally just been always
> >>>> perplexed on whats the best way to implement it.  For storage
> >>>> solutions that are block based, its really easy to have the storage
> >>>> doing the snapshot.  For shared file systems, like NFS, its seems
> >>>> way more complicated as you don't want to snapshot the entire
> >>>> filesystem in order to snapshot one file.
> >>>
> >>> With filesystems like NFS, things are certainly more complicated, but that
> is taken care of by our controller's operating system, Data ONTAP, and we
> simply use APIs to communicate with it.
> >>>
> >>>>
> >>>> Darren
> >>>>
> >>>> On Tue, Oct 8, 2013 at 11:10 AM, SuichII, Christopher
> >>>>  wrote:
> >>>>> I can comment on the second half.
> >>>>>
> >>>>> Through storage operations, storage providers can create backups
> much faster than hypervisors and over time, their snapshots are more
> efficient than the snapshot chains that hypervisors create. It is true that a 
> VM
> snapshot taken at the storage level is slightly different as it would be 
> psuedo-
> quiesced, not have it's memory snapshotted. This is accomplished through
> hypervisor snapshots:
> >>>>>
> >>>>> 1) VM snapshot request (lets say VM 'A'
> >>>>> 2) Create hypervisor snapshot (optional) -VM 'A' is snapshotted,
> >>>>> creating active VM 'A*'
> >>>>> -All disk traffic now goes to VM 'A*' and A is a snapshot of 'A*'
> >>>>> 3) Storage driver(s) take snapshots of each volume
> >>>>> 4) Undo hypervisor snapshot (optional) -VM snapshot 'A' is rolled
> >>>>> back into VM 'A*' so the hypervisor snapshot no longer exists
> >>>>>
> >>>>> Now, a couple notes:
> >>>>> -The reason this is optional is that not all users necessarily care 
> >>>>> about
> the memory or disk consistency of their VMs and would prefer faster
> snapshots to consistency.
> >>>>> -Preemptively, yes, we are actually taking hypervisor snapshots which
> means there isn't actually a performance of taking storage snapshots when
> quiescing the VM. However, the performance gain will come both during
> restoring the VM and during normal operations as described above.
> >>>>>
> >>>>> Although you can think of it as a poor man's VM snapshot, I would
> think of it more as a consistent multi-volume snapshot. Again, the difference
> being that this snapshot was not truly quiesced like a hypervisor snapshot
> would be.
> >>>>>
> >>>>> --
> >>>>> Chris Suich
> >>>>> chris.su...@netapp.com
> >>>>> NetApp Software Engineer
> >>>>> Data Center Platforms - Cloud Solutions Citrix, Cisco & Red Hat
> >>>>>
> >>>>> On Oct 8, 2013, at 1:47 PM, Darren Shepherd
>  wrote:
> >>>>>
> >>>>>> My only comment is that having the return type as boolean and
> >>>>>> using to that ind

RE: [New Feature FS] SSL Offload Support for Cloudstack

2013-10-08 Thread Edison Su
There is a command in ACS, UploadCustomCertificateCmd, which can receive an SSL 
cert, key, and chain as input. Maybe some code can be shared?

> -Original Message-
> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> Sent: Tuesday, October 08, 2013 1:54 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [New Feature FS] SSL Offload Support for Cloudstack
> 
> The API should do input validation on the SSL cert, key and chain.
> Getting those three pieces of info is usually difficult for most people to get
> right as they don't really know what those three things are.  There's about a
> 80% chance most calls will fail.  If you rely on the provider it will 
> probably just
> give back some general failure message that we won't be able to map back to
> the user as useful information.
> 
> I would implement the cert management as a separate CertificateService.
> 
> Darren
> 
> On Tue, Oct 8, 2013 at 1:31 PM, Syed Ahmed 
> wrote:
> > A question about implementation. I was looking at other commands and
> > the
> > execute() method for each of the other commands seem to call a service
> > ( _lbservice for example ) which takes care of updating the DB and
> > calling the resource layer. Should the Certificate management be
> > implemented as a service or is there something else that I can use? An
> > example would be immensely helpful.
> >
> > Thanks
> > -Syed
> >
> >
> >
> > On Tue 08 Oct 2013 03:22:14 PM EDT, Syed Ahmed wrote:
> >>
> >> Thanks for the feedback guys. Really appreciate it.
> >>
> >> 1) Changing the name to SSL Termination.
> >>
> >> I don't have a problem with that. I was looking at Netscaler all the
> >> time and they call it SSL offloading. But I agree that termination is
> >> a more general term.
> >> I have changed the name. The new page is at
> >>
> >>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Terminatio
> >> n+Support
> >>
> >>
> >> 2) Specify the protocol type.
> >>
> >> Currently the protocol type of a load balancer gets set by checking the
> >> source and destination port ( see getNetScalerProtocol() in
> >> NetscalerResouce.java ) . So, we should change that and add another
> >> optional field in the createLoadBalancerRule for protocol.
> >>
> >> 3) Certificate chain as a separate parameter.
> >>
> >> Again, I was looking at Netscaler as an example but separating the
> >> chain and certificate makes sense. I have updated the document
> >> accordingly.
> >>
> >> I was assuming that the certificate parsing/validation would be done
> >> by the device and we would just pass the certificate data as-is. But
> >> if we are adding chains separately, we should have the ability to
> >> parse and combine the chain and certificate for some devices as you
> mentioned.
> >>
> >> Thanks
> >> -Syed
> >>
> >>
> >> On Tue 08 Oct 2013 02:49:52 PM EDT, Chip Childers wrote:
> >>>
> >>>
> >>> On Tue, Oct 08, 2013 at 11:41:42AM -0700, Darren Shepherd wrote:
> 
> 
>  Technicality here, can we call the functionality SSL termination?
>  While technically we are "offloading" ssl from the VM, offloading
>  typically carries a connotation that its being done in hardware. So
>  we are really talking about SSL termination.
> >>>
> >>>
> >>>
> >>> +1 - completely agree. There's certainly the possibility of an
> >>> *implementation* being true offloading, but I'd generalize to
> >>> "termination" to account for a non-hardware offload of the crypto
> >>> processing.
> >>>
> 
> 
>  Couple comments. I wouldn't want to assume anything about SSL
> based
>  on port numbers. So instead specify the protocol
>  (http/https/ssl/tcp) for the front and back side of the load
>  balancer. Additionally, I'd prefer the chain not be in the cert.
>  When configuring some backends you need the cert and chain
>  separate. It would be easier if they were stored that way.
>  Otherwise you have to do logic of parsing all the certs in the "keystore"
> and look for the one that matches the key.
> >>>
> >>>
> >>>
> >>> Also +1 to this. Cert chains may be optional, certainly, but should
> >>> actually be separate from the actual cert in the configuration. The
> >>> implementation may need to combine them into one document, but
> >>> that's implementation specific.
> >>>
> 
> 
>  Otherwise, awesome feature. I'll tell you, from an impl
>  perspective, parsing and validating the SSL certs is a pain. I can
>  probably find some java code to help out here on this as I've done
>  this before in the past.
> >>>
> >>>
> >>>
> >>> Yes, this is a sorely needed feature. I'm happy to see this be added
> >>> to the Netscaler plugin, and await a time when HA proxy has a stable
> >>> release that includes SSL term.
> >>>
> 
> 
>  Darren
> 
>  On Tue, Oct 8, 2013 at 11:14 AM, Syed Ahmed 
>  wrote:
> >
> >
> > Hi,
> >
> > I have been working on adding SSL offload functionality to
> > cloudstack and make it work

RE: [DISCUSS] Pluggable VM snapshot related operations?

2013-10-10 Thread Edison Su
Personally, I am +1 on the coarse-grained interface, and totally agree with your 
points.
As long as we separate VMSnapshotManager and VMSnapshotStrategy, and provide 
enough helper functions (such as quiesce/un-quiesce VM) for vendors, then 
writing a new VMSnapshotStrategy should be easy.

> -Original Message-
> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> Sent: Wednesday, October 09, 2013 9:13 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Pluggable VM snapshot related operations?
> 
> Edison,
> 
> I would lean toward doing the coarse grain interface only.  I'm having a hard
> time seeing how the whole flow is generic and makes sense for everyone.
> With starting with the coarse grain you have the advantage in that you avoid
> possible upfront over engineering/over design that could wreak havoc down
> the line.  If you implement the VMSnapshotStrategy and find that it really is
> useful to other implementations, you can then implement the fine grain
> interface later to allow others to benefit from it.
> 
> Darren
> 
> On Wed, Oct 9, 2013 at 8:54 PM, Mike Tutkowski
>  wrote:
> > Hey guys,
> >
> > I haven't been giving this thread much attention, but am reviewing it
> > somewhat now.
> >
> > I'm not really clear how this would work if, say, a VM has two data
> > disks and they are not being provided by the same vendor.
> >
> > Can someone clarify that for me?
> >
> > My understanding for how this works today is that it doesn't matter.
> > For XenServer, a VDI is on an SR, which could be supported by storage
> vendor X.
> > Another VDI could be on another SR, supported by storage vendor Y.
> >
> > In this case, a new VDI appears on each SR after a hypervisor snapshot.
> >
> > Same idea for VMware.
> >
> > I don't really know how (or if) this works for KVM.
> >
> > I'm not clear how this multi-vendor situation would play out in this
> > pluggable approach.
> >
> > Thanks!
> >
> >
> > On Tue, Oct 8, 2013 at 4:43 PM, Edison Su  wrote:
> >
> >>
> >>
> >> > -Original Message-
> >> > From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> >> > Sent: Tuesday, October 08, 2013 2:54 PM
> >> > To: dev@cloudstack.apache.org
> >> > Subject: Re: [DISCUSS] Pluggable VM snapshot related operations?
> >> >
> >> > A hypervisor snapshot will snapshot memory also.  So determining
> >> > whether
> >> The memory is optional for hypervisor VM snapshots, a.k.a. the "disk-only
> >> snapshots":
> >> http://support.citrix.com/proddocs/topic/xencenter-61/xs-xc-vms-snaps
> >> hots-about.html It's supported by XenServer, KVM and VMware.
> >>
> >> > do to the hypervisor snapshot from the quiesce option does not seem
> >> > proper.
> >> >
> >> > Sorry, for all the questions, I'm trying to get to the point of
> >> understand if this
> >> > functionality makes sense at this point of code or if maybe their
> >> > is a
> >> different
> >> > approach.  This is what I'm seeing, what if we state it this way
> >> >
> >> > 1) VM snapshot, AFAIK, are not backed up today and exist solely on
> >> primary.
> >> > What if we added a backup phase to VM snapshots that can be
> >> > optionally supported by the storage providers to possibly backup
> >> > the VM snapshot volumes.
> >> It's not about backing up VM snapshots, it's about how to take VM snapshots.
> >> Usually, take/revert VM snapshot is handled by the hypervisor itself, but
> >> in the NetApp (or other storage vendor) case, they want to change the
> >> default behavior of hypervisor-based VM snapshots.
> >>
> >> Some examples:
> >> 1. take hypervisor based vm snapshots, on primary storage, hypervisor
> >> will maintain the snapshot chain.
> >> 2. take vm snapshot through NetApp:
> >>  a. First, quiesce the VM if the user specified it. There is no separate API
> >> to quiesce a VM on the hypervisor, so here we will take a VM snapshot
> >> through a hypervisor API call; the hypervisor will take a volume snapshot on
> >> each volume of the VM. Let's say, on the primary storage, the disk
> >> chain looks like:
> >>base-image
> >> |
> >> V
> >> Parent disk
> >> / \
> >>

RE: High CPU utilization on KVM hosts while doing RBD snapshot - was Re: snapshot caused host disconnected

2013-10-11 Thread Edison Su


> -Original Message-
> From: Indra Pramana [mailto:in...@sg.or.id]
> Sent: Wednesday, October 09, 2013 11:07 PM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: Re: High CPU utilization on KVM hosts while doing RBD snapshot -
> was Re: snapshot caused host disconnected
> 
> Dear all,
> 
> I and my colleague tried to scrutinize the source code and found the script
> which is performing the copying of the snapshot from primary storage to
> secondary storage on this file:
> 
> ./plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMSto
> rageProcessor.java
> 
> Specifically under this function:
> 
> @Override
> public Answer backupSnapshot(CopyCommand cmd) {
> 
> ===
> File snapDir = new File(snapshotDestPath);
> s_logger.debug("Attempting to create " +
> snapDir.getAbsolutePath() + " recursively");
> FileUtils.forceMkdir(snapDir);
> 
> File snapFile = new File(snapshotDestPath + "/" + 
> snapshotName);
> s_logger.debug("Backing up RBD snapshot to " +
> snapFile.getAbsolutePath());
> BufferedOutputStream bos = new BufferedOutputStream(new
> FileOutputStream(snapFile));
> int chunkSize = 4194304;
> long offset = 0;
> while(true) {
> byte[] buf = new byte[chunkSize];
> 
> int bytes = image.read(offset, buf, chunkSize);
> if (bytes <= 0) {
> break;
> }
> bos.write(buf, 0, bytes);
> offset += bytes;
> }
> s_logger.debug("Completed backing up RBD snapshot " +
> snapshotName + " to  " + snapFile.getAbsolutePath() + ". Bytes written: " +
> offset);
> bos.close();
> ===
> 
> (1) Is it safe to comment out the above lines and recompile/reinstall, to
> prevent CloudStack from copying the snapshots from the RBD primary
> storage to the secondary storage?
> 
> (2) What would be the impact to CloudStack operations if we leave the
> snapshots on primary storage without copying them to secondary storage?
> Are we still able to do restoration from the snapshots kept in the primary
> storage?

Yes, it's doable, but you need to write a SnapshotStrategy in the mgt server. The 
current SnapshotStrategy will always back up the snapshot to secondary storage after 
taking it.
You can write a new SnapshotStrategy that just takes the snapshot without backing it 
up to secondary storage.
After commit 180cfa19e87b909cb1c8a738359e31a6111b11c5 is checked into master, 
you will get a lot of freedom to manipulate snapshots.
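As a rough illustration only (the method signatures below approximate the 4.2-era 
SnapshotStrategy interface and the RBD check is an assumed helper, so treat this as a 
sketch rather than working code):

public class PrimaryOnlySnapshotStrategy extends XenserverSnapshotStrategy {

    @Override
    public SnapshotInfo backupSnapshot(SnapshotInfo snapshot) {
        // Skip the CopyCommand to secondary storage entirely; the snapshot
        // stays on the RBD primary storage and is only tracked in the DB.
        return snapshot;
    }

    @Override
    public boolean canHandle(Snapshot snapshot) {
        // Only claim snapshots whose volume lives on an RBD pool.
        return isOnRbdPrimaryStorage(snapshot);
    }

    // Assumed helper: a real implementation would look up the volume's
    // storage pool and check its pool type.
    private boolean isOnRbdPrimaryStorage(Snapshot snapshot) {
        return true;
    }
}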

> 
> Looking forward to your reply, thank you.
> 
> Cheers.
> 
> 
> 
> On Wed, Oct 9, 2013 at 2:36 PM, Indra Pramana  wrote:
> 
> > Hi Wido and all,
> >
> > Good day to you, and thank you for your e-mail reply.
> >
> > Yes, from the agent logs I can see that the RBD snapshot was created.
> > However, it seems that CPU utilisation goes up drastically during the
> > copying over of the snapshot from primary storage to the secondary
> storage.
> >
> > ===
> > 2013-10-08 00:01:58,765 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-5:null) Request:Seq 34-898172006:  { Cmd , MgmtId:
> > 161342671900, via: 34, Ver: v1, Flags: 100011,
> >
> [{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data"
> :{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"volume":{"uuid":"
> 0c4f8e41-dfd8-4fc2-a22e-
> 1a79738560a1","volumeType":"DATADISK","dataStore":{"org.apache.cloudst
> ack.storage.to.PrimaryDataStoreTO":{"uuid":"d433809b-01ea-3947-ba0f-
> 48077244e4d6","id":214,"poolType":"RBD","host":"
> > ceph-mon.simplercloud.com
> > ","path":"simplercloud-sg-01","port":6789}},"name":"DATA-
> 2051","size":64424509440,"path":"fc5dfa05-2431-4b42-804b-
> b2fb72e219d0","volumeId":2289,"vmName":"i-195-2051-
> VM","accountId":195,"format":"RAW","id":2289,"hypervisorType":"KVM"},"
> parentSnapshotPath":"simplercloud-sg-01/fc5dfa05-2431-4b42-804b-
> b2fb72e219d0/61042668-23ab-4f63-8a21-
> ce5a24f9c883","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataS
> toreTO":{"uuid":"d433809b-01ea-3947-ba0f-
> 48077244e4d6","id":214,"poolType":"RBD","host":"
> > ceph-mon.simplercloud.com","path":"simplercloud-sg-01","port":6789}},"
> > vmName":"i-195-2051-VM","name":"test-snapshot-and-ip-1_DATA-
> 2051_20131
> > 007160158","hypervisorType":"KVM","id":22}},"wait":0}}]
> > }
> > 2013-10-08 00:01:58,765 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-5:null) Processing command:
> > org.apache.cloudstack.storage.command.CreateObjectCommand
> > 2013-10-08 00:02:08,071 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-1:null) Request:Seq 34-898172007:  { Cmd , MgmtId:
> > 161342671900, via: 34, Ver: v1, Flags: 100011,
> >
> [{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data"
> :{"org.apache

Re: Review Request 14609: Added Categorized Sorting of SnapshotStrategy and DataMotionStrategy

2013-10-14 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14609/#review26990
---

Ship it!


Ship It!

- edison su


On Oct. 11, 2013, 7:32 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14609/
> ---
> 
> (Updated Oct. 11, 2013, 7:32 p.m.)
> 
> 
> Review request for cloudstack, Darren Shepherd and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> I have added a sorting system for SnapshotStrategies and DataMotionStrategies 
> to ensure that plugin strategies are asked to handle operations before 
> hypervisors and the default strategies.
> 
> 
> Diffs
> -
> 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/DataMotionStrategy.java
>  6deb6c1 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/SnapshotStrategy.java
>  47e595b 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/StrategyPriority.java
>  PRE-CREATION 
>   
> engine/api/test/org/apache/cloudstack/engine/subsystem/api/storage/StrategyPriorityTest.java
>  PRE-CREATION 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  fb6962a 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java
>  9f0f531 
>   
> engine/storage/integration-test/test/org/apache/cloudstack/storage/test/MockStorageMotionStrategy.java
>  6c0bd55 
>   
> engine/storage/integration-test/test/org/apache/cloudstack/storage/test/SnapshotTest.java
>  f1eed3a 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/XenserverSnapshotStrategy.java
>  aae4cde 
>   
> plugins/hypervisors/simulator/src/org/apache/cloudstack/storage/motion/SimulatorDataMotionStrategy.java
>  05b3e6b 
>   
> plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
>  bdba61b 
>   
> plugins/hypervisors/vmware/test/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategyTest.java
>  b3ea5d5 
>   
> plugins/hypervisors/xen/src/org/apache/cloudstack/storage/motion/XenServerStorageMotionStrategy.java
>  c796b69 
>   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 0b53cfd 
> 
> Diff: https://reviews.apache.org/r/14609/diff/
> 
> 
> Testing
> ---
> 
> I have added appropriate tests for the new code and updated appropriate tests 
> to handle the new sorting. However, the existing storage tests cannot even be 
> run. I am working on finding a way to get those tests to work again (a huge 
> spring dependency issue is causing the problem).
> 
> 
> Thanks,
> 
> Chris Suich
> 
>
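The ordering described in the review above could look roughly like the sketch below; 
the StrategyPriority values, the PrioritizedStrategy interface and the sort helper are 
illustrative assumptions, not the actual patch:

import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hedged sketch, not the actual patch: strategies advertise a priority and the
// service sorts them so PLUGIN strategies are asked before HYPERVISOR ones,
// with the DEFAULT strategy consulted last.
enum StrategyPriority { PLUGIN, HYPERVISOR, DEFAULT }

interface PrioritizedStrategy {
    StrategyPriority getPriority();
}

class StrategySorter {
    static void sort(List<? extends PrioritizedStrategy> strategies) {
        Collections.sort(strategies, new Comparator<PrioritizedStrategy>() {
            @Override
            public int compare(PrioritizedStrategy a, PrioritizedStrategy b) {
                // Enum ordinal encodes the desired order: PLUGIN < HYPERVISOR < DEFAULT.
                return a.getPriority().compareTo(b.getPriority());
            }
        });
    }
}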



Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-14 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14522/#review26989
---

Ship it!


Ship It!

- edison su


On Oct. 7, 2013, 8:26 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14522/
> ---
> 
> (Updated Oct. 7, 2013, 8:26 p.m.)
> 
> 
> Review request for cloudstack, Brian Federle and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> After the last batch of work to the revertSnapshot API, SnapshotServiceImpl 
> was not tied into the workflow to be used by storage providers. I have added 
> the logic in a similar fashion to takeSnapshot(), backupSnapshot() and 
> deleteSnapshot().
> 
> I have also added a 'Revert to Snapshot' action to the volume snapshots list 
> in the UI.
> 
> 
> Diffs
> -
> 
>   
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
>  946eebd 
>   client/WEB-INF/classes/resources/messages.properties f92b85a 
>   client/tomcatconf/commands.properties.in 58c770d 
>   
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
>  c09adca 
>   server/src/com/cloud/server/ManagementServerImpl.java 0a0fcdc 
>   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 0b53cfd 
>   ui/dictionary.jsp f93f9dc 
>   ui/scripts/storage.js 88fb9f2 
> 
> Diff: https://reviews.apache.org/r/14522/diff/
> 
> 
> Testing
> ---
> 
> I have tested all of this locally with a custom storage provider.
> 
> Unfortunately, I'm still in the middle of figuring out how to properly unit 
> test this type of code. If anyone has any recommendations, please let me know.
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



Re: Review Request 14577: Remove Setters from *JoinVO Classes

2013-10-14 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14577/#review26991
---

Ship it!


Ship It!

- edison su


On Oct. 14, 2013, 1:04 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14577/
> ---
> 
> (Updated Oct. 14, 2013, 1:04 p.m.)
> 
> 
> Review request for cloudstack, Koushik Das, Mike Tutkowski, and Min Chen.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Removed setters from all *JoinVO classes as they represent MySQL views which 
> are not editable.
> 
> The one exception to this was that I left setPassword(String) in 
> UserVmJoinVO. This is because the view does not actually have the user's 
> password, but it is a field in UserVmJoinVO, so it must be set manually, not 
> auto-populated from the DB.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/api/query/vo/AccountJoinVO.java fbcc934 
>   server/src/com/cloud/api/query/vo/AffinityGroupJoinVO.java ae63a8a 
>   server/src/com/cloud/api/query/vo/AsyncJobJoinVO.java c45be1c 
>   server/src/com/cloud/api/query/vo/BaseViewVO.java 6b1ddd6 
>   server/src/com/cloud/api/query/vo/DataCenterJoinVO.java c6a80e7 
>   server/src/com/cloud/api/query/vo/DiskOfferingJoinVO.java 58e8370 
>   server/src/com/cloud/api/query/vo/DomainRouterJoinVO.java bfe4486 
>   server/src/com/cloud/api/query/vo/EventJoinVO.java 12d7e5a 
>   server/src/com/cloud/api/query/vo/HostJoinVO.java cf3cfdc 
>   server/src/com/cloud/api/query/vo/ImageStoreJoinVO.java ac161af 
>   server/src/com/cloud/api/query/vo/InstanceGroupJoinVO.java 3fb4309 
>   server/src/com/cloud/api/query/vo/ProjectAccountJoinVO.java 1a8818a 
>   server/src/com/cloud/api/query/vo/ProjectInvitationJoinVO.java f6e6760 
>   server/src/com/cloud/api/query/vo/ProjectJoinVO.java 3885fa0 
>   server/src/com/cloud/api/query/vo/ResourceTagJoinVO.java 9ce9555 
>   server/src/com/cloud/api/query/vo/SecurityGroupJoinVO.java 258b613 
>   server/src/com/cloud/api/query/vo/ServiceOfferingJoinVO.java 05ff5f3 
>   server/src/com/cloud/api/query/vo/StoragePoolJoinVO.java d98bb3b 
>   server/src/com/cloud/api/query/vo/TemplateJoinVO.java bb1cfed 
>   server/src/com/cloud/api/query/vo/UserAccountJoinVO.java c44027b 
>   server/src/com/cloud/api/query/vo/UserVmJoinVO.java 745db56 
>   server/src/com/cloud/api/query/vo/VolumeJoinVO.java 9fe9fd1 
> 
> Diff: https://reviews.apache.org/r/14577/diff/
> 
> 
> Testing
> ---
> 
> There were no compile errors after the deletions, so there shouldn't be any 
> issues.
> 
> However, I did do a clean build and played around with the UI while watching 
> vmops.log to make sure there were no errors being thrown over this.
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



RE: CLVM broken on master

2013-10-17 Thread Edison Su
For CLVM, the copy-template-from-secondary-to-primary and create-volume-from-template 
logic is handled by CloudStackPrimaryDataStoreDriverImpl->copyAsync, 
not in AncientDataMotionStrategy.
You can check the code: 4fb459355337c874a10f47c0224af72d6fef1ff2.

> -Original Message-
> From: Marcus Sorensen [mailto:shadow...@gmail.com]
> Sent: Thursday, October 17, 2013 2:07 PM
> To: dev@cloudstack.apache.org
> Subject: Re: CLVM broken on master
> 
> I think if we can change this line:
> 
> if ((srcData.getObjectType() == DataObjectType.TEMPLATE) &&
> (destData.getObjectType() == DataObjectType.TEMPLATE &&
> destData.getDataStore().getRole() == DataStoreRole.Primary)) {
> 
> to something like:
> 
> if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
> srcData.getDataStore().getRole() == DataStoreRole.Image &&
> destData.getDataStore().getRole() == DataStoreRole.Primary) {
> 
> Maybe that will work? That way it's strictly secondary -> primary templates,
> not primary->primary templates.
> 
> Alternatively we could put it back to where it was:
> 
> if (srcData.getObjectType() == DataObjectType.TEMPLATE && srcDataStore
> instanceof NfsTO && destData.getDataStore().getRole() ==
> DataStoreRole.Primary) {
> 
> But your patch on the reviewboard removes NfsTO, and I'm assuming the
> idea was to work towards getting away from NFS-specific secondary storage.
> 
> On Thu, Oct 17, 2013 at 2:57 PM, Marcus Sorensen 
> wrote:
> > I ran that through my tester, it didn't like it.  That actually kept
> > the system vms from starting. Since CopyCommand is used for both
> > template to template and template to primary, it seems that the
> > original template copy is fine but now this catches the case where the
> > source template is on primary and we are making a root disk.
> > copyTemplateToPrimaryStorage has:
> >
> > if (!(imageStore instanceof NfsTO)) {
> > return new CopyCmdAnswer("unsupported protocol");
> > }
> >
> > we should be calling 'cloneVolumeFromBaseTemplate', but the original
> > if statement is now too loose.  I'll play with it a bit and see if I
> > can suggest a solution that works.
> >
> > 2013-09-17 17:58:07,178 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) Request:Seq 1-829816935:  { Cmd ,
> > MgmtId: 52241639751, via: 1, Ver: v1, Flags: 100011,
> >
> [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.
> a
> > pache.cloudstack.storage.to.TemplateObjectTO":{"path":"bf53a7c6-1fed-1
> > 1e3-a1ff-000c29d82947","origUrl":"http://download.cloud.com/templates/
> > 4.2/systemvmtemplate-2013-06-12-master-
> kvm.qcow2.bz2","uuid":"bf53a7c6
> > -1fed-11e3-a1ff-000c29d82947","id":3,"format":"QCOW2","accountId":1,"c
> >
> hecksum":"6cea42b2633841648040becb588bd8f0","hvm":false,"displayText":
> > "SystemVM Template
> > (KVM)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryData
> > StoreTO":{"uuid":"8932daaf-272c-45c9-a078-d601dfc5ca56","id":1,"poolTy
> > pe":"Filesystem","host":"172.17.10.10","path":"/var/lib/libvirt/images
> > ","port":0}},"name":"routing-3","hypervisorType":"KVM"}},"destTO":{"or
> > g.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"0c15b340-228b-
> > 48f1-88c4-
> b717ad08d4e3","volumeType":"ROOT","dataStore":{"org.apache.c
> > loudstack.storage.to.PrimaryDataStoreTO":{"uuid":"8932daaf-272c-45c9-a
> > 078-d601dfc5ca56","id":1,"poolType":"Filesystem","host":"172.17.10.10"
> > ,"path":"/var/lib/libvirt/images","port":0}},"name":"ROOT-1","size":0,
> > "volumeId":2,"vmName":"s-1-
> VM","accountId":1,"format":"QCOW2","id":2,"
> > hypervisorType":"KVM"}},"executeInSequence":false,"wait":0}}]
> > }
> >
> > 2013-09-17 17:58:07,179 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) Processing command:
> > org.apache.cloudstack.storage.command.CopyCommand
> > 2013-09-17 17:58:07,179 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) Seq 1-829816935:  { Ans: , MgmtId:
> > 52241639751, via: 1, Ver: v1, Flags: 10,
> >
> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":fals
> > e,"details":"unsupported
> > protocol","wait":0}}] }
> >
> > On Thu, Oct 17, 2013 at 1:45 PM, SuichII, Christopher
> >  wrote:
> >> Hm, interesting.
> >>
> >> Since nothing else in the if/else if series there depends on the src being 
> >> a
> template, I'd imagine it would be safe to just have the check be:
> >>
> >> } else if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
> >> destDataStore.getRole() == DataStoreRole.Primary) {
> >>
> >> In hindsight, adding the check for the destination being a template was
> just overkill and shouldn't have been added. So, if that fixes your problem, I
> believe it is in line with what Edison and I were doing with the storage
> subsystem, however, we should check with him as well.
> >>
> >> --
> >> Chris Suich
> >> chris.su...@netapp.com
> >> NetApp Software Engineer
> >> Data Center Platforms - Cloud Solutions Citrix, Cisco & Red Hat
> >>
> >> On

RE: CLVM broken on master

2013-10-17 Thread Edison Su
Marcus, would you mind checking your fix into master? Thanks.

> -Original Message-
> From: SuichII, Christopher [mailto:chris.su...@netapp.com]
> Sent: Thursday, October 17, 2013 4:35 PM
> To: 
> Subject: Re: CLVM broken on master
> 
> I've done some testing on our plugin and that appears to work fine for me.
> 
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms - Cloud Solutions
> Citrix, Cisco & Red Hat
> 
> On Oct 17, 2013, at 6:05 PM, Marcus Sorensen 
> wrote:
> 
> > The following seems to fix the issue, by the way, but again since I
> > didn't initially change the code I'd like someone else to
> > review/handle it.
> >
> > diff --git
> >
> a/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandl
> erBa
> > se.java
> >
> b/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandl
> erBa
> > se.java
> > index 002143f..3ac82e3 100644
> > ---
> >
> a/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandl
> erBa
> > se.java
> > +++
> b/core/src/com/cloud/storage/resource/StorageSubsystemCommandHandl
> > +++ erBase.java
> > @@ -68,7 +68,7 @@ public class StorageSubsystemCommandHandlerBase
> > implements StorageSubsystemComma
> > DataStoreTO srcDataStore = srcData.getDataStore();
> > DataStoreTO destDataStore = destData.getDataStore();
> >
> > -if ((srcData.getObjectType() == DataObjectType.TEMPLATE) &&
> > (destData.getObjectType() == DataObjectType.TEMPLATE &&
> > destData.getDataStore().getRole() == DataStoreRole.Primary)) {
> > +if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
> > srcData.getDataStore().getRole() == DataStoreRole.Image &&
> > destData.getDataStore().getRole() == DataStoreRole.Primary) {
> > //copy template to primary storage
> > return processor.copyTemplateToPrimaryStorage(cmd);
> > } else if (srcData.getObjectType() == DataObjectType.TEMPLATE
> > && srcDataStore.getRole() == DataStoreRole.Primary &&
> > destDataStore.getRole() == DataStoreRole.Primary) {
> >
> > On Thu, Oct 17, 2013 at 4:03 PM, Marcus Sorensen 
> wrote:
> >> All of the above mentioned is in
> >>
> core/src/com/cloud/storage/resource/StorageSubsystemCommandHandler
> Bas
> >> e.java
> >> , by the way.
> >>
> >> On Thu, Oct 17, 2013 at 4:02 PM, Marcus Sorensen
>  wrote:
> >>> Sure, but CopyCommand is being triggered in this code. I've tested
> >>> several variations to this one line, some work, some don't.
> >>>
> >>> On Thu, Oct 17, 2013 at 3:38 PM, Edison Su 
> wrote:
> >>>> For CLVM, the copy template from secondary to primary and create
> >>>> volume from template logic is handled by
> CloudStackPrimaryDataStoreDriverImpl->copyAsync, not in
> AncientDataMotionStrategy You can check the code:
> 4fb459355337c874a10f47c0224af72d6fef1ff2.
> >>>>
> >>>>> -Original Message-
> >>>>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
> >>>>> Sent: Thursday, October 17, 2013 2:07 PM
> >>>>> To: dev@cloudstack.apache.org
> >>>>> Subject: Re: CLVM broken on master
> >>>>>
> >>>>> I think if we can change this line:
> >>>>>
> >>>>> if ((srcData.getObjectType() == DataObjectType.TEMPLATE) &&
> >>>>> (destData.getObjectType() == DataObjectType.TEMPLATE &&
> >>>>> destData.getDataStore().getRole() == DataStoreRole.Primary)) {
> >>>>>
> >>>>> to something like:
> >>>>>
> >>>>> if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
> >>>>> srcData.getDataStore().getRole() == DataStoreRole.Image &&
> >>>>> destData.getDataStore().getRole() == DataStoreRole.Primary) {
> >>>>>
> >>>>> Maybe that will work? That way it's strictly secondary -> primary
> >>>>> templates, not primary->primary templates.
> >>>>>
> >>>>> Alternatively we could put it back to where it was:
> >>>>>
> >>>>> if (srcData.getObjectType() == DataObjectType.TEMPLATE &&
> >>>>> srcDataStore instanceof NfsTO && destData.getDataStore().getRole()
> >>>>> ==
> >>&

Re: Review Request 14715: [CLOUDSTACK-4887] CLVM broken

2013-10-17 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14715/#review27164
---

Ship it!


Ship It!

- edison su


On Oct. 17, 2013, 6:59 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14715/
> ---
> 
> (Updated Oct. 17, 2013, 6:59 p.m.)
> 
> 
> Review request for cloudstack, edison su and Marcus Sorensen.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fixed a bug with backup snapshot condition logic causing snapshots to not be 
> backed up and the API to fail.
> 
> 
> Diffs
> -
> 
>   core/src/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java 
> 002143f 
> 
> Diff: https://reviews.apache.org/r/14715/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



[Merge]pluggable_vm_snapshot branch

2013-10-21 Thread Edison Su
Hi All,
Per discussion on the thread http://markmail.org/message/ybw2yy5snkvkuc57, 
we decided to use a coarse-grained interface to make taking VM snapshots pluggable. 
On the pluggable_vm_snapshot branch, a new interface is added:

public interface VMSnapshotStrategy {
VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot);
boolean deleteVMSnapshot(VMSnapshot vmSnapshot);
boolean revertVMSnapshot(VMSnapshot vmSnapshot);
boolean canHandle(VMSnapshot vmSnapshot);
}

Any vendor can implement this interface to customize the VM snapshot procedure.
Unit tests are added.
If there are no objections, I'd like to merge it into master. Comments/feedback are 
welcome, thanks!
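A minimal sketch of what a vendor implementation against this interface could look 
like; the quiesce helpers and the array-side snapshot calls below are assumptions for 
illustration, not part of the branch:

public class VendorVMSnapshotStrategy implements VMSnapshotStrategy {

    @Override
    public boolean canHandle(VMSnapshot vmSnapshot) {
        // Only claim VMs whose volumes live on this vendor's primary storage.
        return vmIsOnVendorStorage(vmSnapshot);
    }

    @Override
    public VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot) {
        quiesceVm(vmSnapshot);                   // assumed helper supplied by the framework
        try {
            takeArraySideSnapshots(vmSnapshot);  // one storage-level snapshot per volume
        } finally {
            unquiesceVm(vmSnapshot);             // assumed helper supplied by the framework
        }
        return vmSnapshot;
    }

    @Override
    public boolean revertVMSnapshot(VMSnapshot vmSnapshot) {
        return revertArraySideSnapshots(vmSnapshot);
    }

    @Override
    public boolean deleteVMSnapshot(VMSnapshot vmSnapshot) {
        return deleteArraySideSnapshots(vmSnapshot);
    }

    // --- assumed helpers, details omitted ---
    private boolean vmIsOnVendorStorage(VMSnapshot s) { return true; }
    private void quiesceVm(VMSnapshot s) { }
    private void unquiesceVm(VMSnapshot s) { }
    private void takeArraySideSnapshots(VMSnapshot s) { }
    private boolean revertArraySideSnapshots(VMSnapshot s) { return true; }
    private boolean deleteArraySideSnapshots(VMSnapshot s) { return true; }
}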


RE: [Merge]pluggable_vm_snapshot branch

2013-10-22 Thread Edison Su
I tested locally with the existing Marvin test case test_vm_snapshots.py on 
XenServer; it works.

> -Original Message-
> From: David Nalley [mailto:da...@gnsa.us]
> Sent: Tuesday, October 22, 2013 6:07 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [Merge]pluggable_vm_snapshot branch
> 
> Has any testing been run against this?
> 
> --David
> 
> On Mon, Oct 21, 2013 at 6:26 PM, Edison Su  wrote:
> > Hi All,
> > Per discussion on the thread:
> http://markmail.org/message/ybw2yy5snkvkuc57, we decide to use coarse-
> graind interface to make taking vm snapshot pluggable. On
> pluggable_vm_snapshot, a new interface is added:
> >
> > public interface VMSnapshotStrategy {
> > VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot);
> > boolean deleteVMSnapshot(VMSnapshot vmSnapshot);
> > boolean revertVMSnapshot(VMSnapshot vmSnapshot);
> > boolean canHandle(VMSnapshot vmSnapshot); }
> >
> > Any vendor can implement this interface to customize vm snapshot
> procedure.
> > Unit test is added.
> > If no objection, I'd like to merge it into master. Comments/feedback are
> welcome, thanks!


RE: Question about StoragePoolHostVO

2013-10-22 Thread Edison Su
The storage_pool_host_ref table is usually used only to build the relationship 
between a storage pool and a hypervisor host. The local path in this table is 
not used by other code any more.
If you have a specific requirement, such as storing a path in this table, we can 
add it to DefaultHostListener, or you can subclass DefaultHostListener.
HypervisorHostListener is a callback you can use in case a host goes up or down. 
If you don't care about these events, you don't need to use the 
DefaultHostListener at all.
From CloudStack's point of view, the only requirement is that there should be 
one entry in storage_pool_host_ref for each host and its primary storage. As 
you pointed out, we should remove the entry when hostDisconnected is called.
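As a rough sketch of that subclassing (the callback signatures and DAO calls are 
approximations based on the snippet quoted below, and the injected storagePoolHostDao 
field is assumed to be visible to subclasses):

public class ZoneWideHostListener extends DefaultHostListener {

    @Override
    public boolean hostConnect(long hostId, long poolId) {
        StoragePoolHostVO entry = storagePoolHostDao.findByPoolHost(poolId, hostId);
        if (entry == null) {
            // Zone-wide storage has no host-specific local path, so store a placeholder.
            storagePoolHostDao.persist(new StoragePoolHostVO(poolId, hostId, ""));
        }
        return true;
    }

    @Override
    public boolean hostDisconnected(long hostId, long poolId) {
        StoragePoolHostVO entry = storagePoolHostDao.findByPoolHost(poolId, hostId);
        if (entry != null) {
            storagePoolHostDao.remove(entry.getId());  // assumed GenericDao-style remove
        }
        return true;
    }
}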
From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
Sent: Tuesday, October 22, 2013 12:28 PM
To: dev@cloudstack.apache.org; Edison Su
Subject: Question about StoragePoolHostVO

Hi Edison (or anyone else viewing this :) ),

I'm looking at the DefaultHostListener class you had written up a while ago.

A couple questions:

1) In the "else" below, it does not appear we update the DB with the local 
path. Is this correct behavior (this is just a snippet of the method, of 
course, but I didn't see any DB update related to this)?

2) As you know, my storage is zone wide. Should I be adding a row to this table 
for each host that connects to the management server that's in the same zone as 
my storage? Assuming I should, should I implement the hostDisconnected method 
to remove the row? The DefaultHostListener just returns true for the 
hostDisconnected method.

StoragePoolHostVO poolHost = 
storagePoolHostDao.findByPoolHost(pool.getId(), hostId);
if (poolHost == null) {
poolHost = new StoragePoolHostVO(pool.getId(), hostId, 
mspAnswer.getPoolInfo().getLocalPath()
.replaceAll("//", "/"));
storagePoolHostDao.persist(poolHost);
} else {

poolHost.setLocalPath(mspAnswer.getPoolInfo().getLocalPath().replaceAll("//", 
"/"));
}
Thanks!

--
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkow...@solidfire.com<mailto:mike.tutkow...@solidfire.com>
o: 303.746.7302
Advancing the way the world uses the 
cloud<http://solidfire.com/solution/overview/?video=play>(tm)


Re: Review Request 15250: Fix Hyper-V plugin RAT issue

2013-11-05 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15250/#review28235
---

Ship it!


Ship It!

- edison su


On Nov. 5, 2013, 11:08 p.m., Donal Lafferty wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15250/
> ---
> 
> (Updated Nov. 5, 2013, 11:08 p.m.)
> 
> 
> Review request for cloudstack and Rayees Namathponnan.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> git config files were missing the ASF license
> 
> 
> Diffs
> -
> 
>   plugins/hypervisors/hyperv/DotNet/ServerResource/.gitignore 
> 99afc0b89f247f97ff133e48fbf0746306cc8c9e 
> 
> Diff: https://reviews.apache.org/r/15250/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Donal Lafferty
> 
>



Re: Review Request 15326: Added option to reload VM during in RevertToVMSnapshotCommand

2013-11-07 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15326/#review28456
---

Ship it!


Ship It!

- edison su


On Nov. 7, 2013, 9 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15326/
> ---
> 
> (Updated Nov. 7, 2013, 9 p.m.)
> 
> 
> Review request for cloudstack and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Added option to reload VM during in RevertToVMSnapshotCommand. For now, only 
> in VMWare as I don't believe that XenServer supports anything similar.
> 
> This gives plugins which may have unique snapshot strategies the ability to 
> request the hypervisor to reload the VM's state before attempting to revert 
> to a snapshot.
> 
> 
> Diffs
> -
> 
>   core/src/com/cloud/agent/api/RevertToVMSnapshotCommand.java 1e5fd6c 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareStorageManagerImpl.java
>  0e2423e 
> 
> Diff: https://reviews.apache.org/r/15326/diff/
> 
> 
> Testing
> ---
> 
> This has been tested locally with a custom plugin. However, the changes are 
> not used by the existing CloudStack codebase and should not impact any 
> existing workflows or use cases. They are simply for plugins to take 
> advantage of.
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



Re: Review Request 15313: Updated VMSnapshotDetails* to match *Details* pattern (e.g. UserVMDetails*)

2013-11-07 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15313/#review28457
---

Ship it!


Ship It!

- edison su


On Nov. 7, 2013, 9 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15313/
> ---
> 
> (Updated Nov. 7, 2013, 9 p.m.)
> 
> 
> Review request for cloudstack and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> VMSnapshotDetailsDao/Impl/VO were not using the ResourceDetails/Dao/Base 
> pattern which caused issues with adding details and did not give access to 
> useful methods like the detail getters/deleters.
> 
> 
> Diffs
> -
> 
>   engine/schema/src/com/cloud/vm/snapshot/VMSnapshotDetailsVO.java 934dd92 
>   engine/schema/src/com/cloud/vm/snapshot/dao/VMSnapshotDetailsDao.java 
> e84178c 
>   engine/schema/src/com/cloud/vm/snapshot/dao/VMSnapshotDetailsDaoImpl.java 
> b528b39 
>   
> engine/storage/integration-test/test/com/cloud/vm/snapshot/dao/VmSnapshotDaoTest.java
>  fc52f89 
> 
> Diff: https://reviews.apache.org/r/15313/diff/
> 
> 
> Testing
> ---
> 
> This has been tested locally with a custom plugin. However, the changes are 
> not used by the existing CloudStack codebase and should not impact any 
> existing workflows or use cases. They are simply for plugins to take 
> advantage of.
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



Re: Review Request 15309: Fixed bug with deleting VMWare VM Snapshots

2013-11-07 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15309/#review28458
---

Ship it!


Ship It!

- edison su


On Nov. 7, 2013, 9:30 p.m., Chris Suich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/15309/
> ---
> 
> (Updated Nov. 7, 2013, 9:30 p.m.)
> 
> 
> Review request for cloudstack and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> A bug was introduced in a6ce66e55a65eb0fbae9ead92de6ceac7a87c531 which does 
> not allow VMWare VM snapshots to be deleted.
> 
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commitdiff;h=a6ce66e55a65eb0fbae9ead92de6ceac7a87c531
> 
> In refactoring, volumeTo.getPoolUuid() was supposed to be changed to 
> store.getUuid() but was actually changed to volumeTo.getUuid().
> 
> 
> Diffs
> -
> 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareStorageManagerImpl.java
>  0e2423e 
> 
> Diff: https://reviews.apache.org/r/15309/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chris Suich
> 
>



RE: [VOTE] Release Apache CloudStack 4.2.1

2013-11-15 Thread Edison Su
+1(binding)
Tested on my local machine with one KVM host, basic marvin vm test passed.

> -Original Message-
> From: Abhinandan Prateek [mailto:abhinandan.prat...@citrix.com]
> Sent: Tuesday, November 12, 2013 7:53 AM
> To: CloudStack Dev
> Subject: [VOTE] Release Apache CloudStack 4.2.1
> 
> 
>This vote is to approve the current RC build for 4.2.1 maintenance release.
> For this particular release various upgrade paths have been tested apart from
> regression tests and BVTs.
> Around 175 bugs have been fixed some new features added (see CHANGES).
> 
> Following are the particulars for this release:
> 
> https://git-wip-
> us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.2
> commit: 0b9eadaf14513f5c72de672963b0e2f12ee7206f
> 
> List of changes:
> https://git-wip-
> us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.2.
> 1
> 
> Source release revision 3492 (checksums and signatures are available at the
> same location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/
> 
> PGP release keys (signed using RSA Key ID = 42443AA1):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> Vote will be open for 72 hours (until 11/15 End of day PST).
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)


RE: Windows support for KVM

2013-11-18 Thread Edison Su
Certain Windows Server editions hard-code how many sockets they support: 
http://www.openwebit.com/c/how-to-run-windows-vm-on-more-than-2-cores-under-kvm/

> -Original Message-
> From: Chiradeep Vittal [mailto:chiradeep.vit...@citrix.com]
> Sent: Monday, November 18, 2013 1:53 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Windows support for KVM
> 
> I don't really understand the issue.
> What is the difference between
> [libvirt XML stripped by the archive: an 8-vCPU <vcpu> definition with the
> default topology vs. the same vCPU count with an explicit single-socket
> <cpu><topology .../> element]
> 
> Why does Windows see only 4 cores in the first case? Is it because the 8
> cores are split across physical sockets?
> 
> 
> On 11/18/13 6:55 AM, "Arnaud Gaillard" 
> wrote:
> 
> >Hello,
> >
> >A few days ago I created a new Jira ticket for the support of topology
> >in network offering for KVM. This is needed in order to support Windows
> >VMs in KVM (currently the limitations are such that it is not really
> >possible to deploy real Windows VMs with this configuration).
> >
> >The JIRA is
> >CLOUDSTACK-5071
> >and
> >is referring to a bug opened before:
> >CLOUDSTACK-904.
> >
> >As I have received no comment on it, I would like to know if the
> >support of topology in service offering was considered as a priority,
> >and if the impact on the GUI was studied?
> >
> >Cheers!



RE: Windows support for KVM

2013-11-18 Thread Edison Su


> -Original Message-
> From: Arnaud Gaillard [mailto:arnaud.gaill...@xtendsys.net]
> Sent: Monday, November 18, 2013 6:56 AM
> To: dev@cloudstack.apache.org
> Subject: Windows support for KVM
> 
> Hello,
> 
> A few days ago I created a new Jira ticket for the support of topology in
> network offering for KVM. This is needed in order to support Windows VMs in
> KVM (currently the limitations are such that it is not really possible to deploy
> real Windows VMs with this configuration).
> 
> The JIRA is CLOUDSTACK-
> 5071
> and
> is referring to a bug opened before:
> CLOUDSTACK-904
> .
> 
> As I have received no comment on it, I would like to know if the support of
> topology in service offering was considered as a priority, and if the impact 
> on
> the GUI was studied?

Using whatever VMware supports should be good enough, e.g. add coresPerSocket to the 
service offering and UI: 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010184

> 
> Cheers!


RE: [MERGE]object_store branch into master

2013-05-20 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Monday, May 20, 2013 12:56 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE]object_store branch into master
> 
> All,
> 
> Since this change is so large, it makes reviewing and commenting in detail
> extremely difficult.  Would it be possible to push this patch through Review
> Board to ease comprehension and promote a conversation about this patch?

We can try to push it into Review Board.

> 
> Reading through the FS, I have the following questions regarding the
> operation of the NFS cache:
> 
> What happens if/when the disk space of the NFS cache is exhausted?  What
> are the sizing recommendations/guidelines for it?
> What strategy is used to age files out of the NFS cache?
As usual, an admin can have multiple NFS secondary storages, and an admin can also 
add multiple NFS cache storages. The capacity planning for NFS cache storage will be 
the same as for NFS secondary storage.
If there are multiple NFS cache storages, the current strategy will randomly choose 
one of them. Currently, no clean-up/aging-out strategy is implemented yet.
But the situation can be improved: most cached objects can be deleted after being 
accessed once. Take a template as an example: if zone-wide storage is used, putting 
the template on cache storage has little value, as once the template is downloaded 
into primary storage, all the hypervisor hosts can access it.
I think a simple LRU algorithm to delete cached objects should be enough. It can be 
added later; the cache storage has its own pom project, which is the place to add 
more intelligence. 
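Not part of the branch, but a minimal sketch of the kind of LRU bookkeeping that could 
decide which cached objects to evict first (the file-deletion hook is only hinted at in 
a comment):

import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: track cache entries in access order and drop the least-recently-used
// one once the cache exceeds a configured size.
public class CacheLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public CacheLru(int maxEntries) {
        super(16, 0.75f, true);   // access-order, so gets refresh an entry's recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // When this returns true, LinkedHashMap drops the least-recently-used entry;
        // a real implementation would also delete the backing file on the cache store.
        return size() > maxEntries;
    }
}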

> If two processes, process1 and process2, are both using a template,
> templateA, will both processes reference the same file in the NFS cache?  If
It's possible that one template can be downloaded into cache storage twice, in the 
case of concurrent access by two processes. The current implementation is that if two 
processes want to download the same template from S3 into one primary storage at the 
same time, only one copy of the template will be downloaded into cache storage. 
However, if two processes want to download the same template into different primary 
storages, the template will be cached twice. 
> they are reading from the same file and process1 finishes before process2,
> will process1 attempt to delete process2?

There is no way to delete while reading, as each cached object has its own state 
machine. If it's accessed by one process, the state will be changed to 
"Copying", and you can't delete an object while it's in the "Copying" state.

> If a file transfer from the NFS cache to the object store fails, what is the
> recovery/retry strategy?  What durability guarantees will CloudStack supply
> when a snapshot, template, or ISO is in the cache, but can't be written to the
> object store?

The error handling with cache storage shouldn't be different than without cache 
storage. For example, consider backing up a snapshot directly from primary storage to 
S3, without cache storage: if the backup fails, the whole process fails and the user 
needs to do it again through the CloudStack API. So in the cache storage case, if 
pushing the object from cache storage to S3 fails, the whole backup process fails.

> What will be the migration strategy for the objects contained in S3
> buckets/Swift containers from pre-4.2.0 instances?  Currently, CloudStack
> tracks a mapping between these objects and templates/ISOs in the
> template_switt_ref and template_s3_ref table.

We need to migrate the DB from the existing template_s3_ref table to 
template_store_ref, and put all the S3 information into the image_store and 
image_store_details tables.

> 
> Finally, does the S3 implementation use multi-part upload to transfer files to
> the object store?  If not, the implementation will be limited to storing 
> files no
> larger than 5GB in size.
Oh, this is something we don't know yet. We haven't tried to upload a template 
larger than 5GB, so we haven't hit this issue.
Could you help to hack it up? :)
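For what it's worth, a minimal sketch of how the AWS SDK's TransferManager could be 
used here; it performs multi-part uploads for large files, which avoids the 5GB 
single-PUT limit. The bucket, key and credentials below are placeholders, and this is 
not wired into the object_store branch:

import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class MultipartUploadExample {
    public static void main(String[] args) throws Exception {
        TransferManager tm = new TransferManager(
                new BasicAWSCredentials("accessKey", "secretKey"));
        try {
            // TransferManager splits large files into parts and uploads them in parallel.
            Upload upload = tm.upload("my-bucket", "template/2/200/template.qcow2",
                    new File("/tmp/template.qcow2"));
            upload.waitForCompletion();   // blocks until all parts are uploaded
        } finally {
            tm.shutdownNow();
        }
    }
}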

> 
> Thanks,
> -John
> 
> On May 20, 2013, at 1:52 PM, Chip Childers 
> wrote:
> 
> > On Fri, May 17, 2013 at 08:19:57AM -0400, David Nalley wrote:
> >> On Fri, May 17, 2013 at 4:11 AM, Edison Su  wrote:
> >>> Hi all,
> >>> Min and I worked on object_store branch during the last one and half
> month. We made a lot of refactor on the storage code, mostly related to
> secondary storage, but also on the general storage framework. The following
> goals are made:
> >>>
> >>> 1.   An unified storage framework. Both secondary
> storages(nfs/s3/swift etc) and primary storages will share the same plugin
> model, the same interface. Add any other new storages into cloudstack will
> much easier and straightforward.
> &

Review Request: object storage refactor

2013-05-20 Thread edison su
/cloud/storage/VolumeManagerImpl.java 2f4b2c8 
  server/src/com/cloud/storage/dao/GuestOSHypervisorDao.java PRE-CREATION 
  server/src/com/cloud/storage/dao/GuestOSHypervisorDaoImpl.java PRE-CREATION 
  server/src/com/cloud/storage/download/DownloadAbandonedState.java 200683c 
  server/src/com/cloud/storage/download/DownloadActiveState.java f2cd5af 
  server/src/com/cloud/storage/download/DownloadCompleteState.java 6e8edcb 
  server/src/com/cloud/storage/download/DownloadErrorState.java 0fdfd52 
  server/src/com/cloud/storage/download/DownloadListener.java 1d48803 
  server/src/com/cloud/storage/download/DownloadMonitor.java 897befa 
  server/src/com/cloud/storage/download/DownloadMonitorImpl.java 220cbff 
  server/src/com/cloud/storage/download/DownloadState.java 471ab61 
  server/src/com/cloud/storage/download/NotDownloadedState.java 7752173 
  server/src/com/cloud/storage/listener/StoragePoolMonitor.java f957ca3 
  server/src/com/cloud/storage/listener/StorageSyncListener.java d9282a3 
  server/src/com/cloud/storage/resource/DummySecondaryStorageResource.java 
8f25514 
  server/src/com/cloud/storage/s3/S3Manager.java 0f74e43 
  server/src/com/cloud/storage/s3/S3ManagerImpl.java 61e5573 
  server/src/com/cloud/storage/secondary/SecondaryStorageListener.java 6635b3c 
  server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java 
3cf9a7e 
  server/src/com/cloud/storage/secondary/SecondaryStorageVmManager.java d315d22 
  server/src/com/cloud/storage/snapshot/SnapshotManager.java 8181330 
  server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java 26aae48 
  server/src/com/cloud/storage/swift/SwiftManagerImpl.java 5a7f01a 
  server/src/com/cloud/storage/upload/UploadListener.java ee13cf9 
  server/src/com/cloud/storage/upload/UploadMonitor.java 1c3590e 
  server/src/com/cloud/storage/upload/UploadMonitorImpl.java 77f0d20 
  server/src/com/cloud/template/HypervisorTemplateAdapter.java 322f32e 
  server/src/com/cloud/template/TemplateAdapter.java 9a2d877 
  server/src/com/cloud/template/TemplateAdapterBase.java 0940d3e 
  server/src/com/cloud/template/TemplateManager.java 19ba3b5 
  server/src/com/cloud/template/TemplateManagerImpl.java a8729e1 
  server/src/com/cloud/vm/UserVmManagerImpl.java 0f6adc0 
  server/src/com/cloud/vm/VirtualMachineManagerImpl.java 521b5e0 
  server/src/com/cloud/vm/VirtualMachineProfileImpl.java 24f44cb 
  server/test/com/cloud/agent/MockAgentManagerImpl.java 7e3462d 
  server/test/com/cloud/resource/MockResourceManagerImpl.java 5202c31 
  server/test/org/apache/cloudstack/networkoffering/ChildTestConfiguration.java 
6f52397 
  
services/secondary-storage/src/org/apache/cloudstack/storage/resource/CifsSecondaryStorageResource.java
 de4cfe0 
  
services/secondary-storage/src/org/apache/cloudstack/storage/resource/LocalNfsSecondaryStorageResource.java
 PRE-CREATION 
  
services/secondary-storage/src/org/apache/cloudstack/storage/resource/LocalSecondaryStorageResource.java
 b904254 
  
services/secondary-storage/src/org/apache/cloudstack/storage/resource/NfsSecondaryStorageResource.java
 e7fa5b2 
  
services/secondary-storage/src/org/apache/cloudstack/storage/resource/SecondaryStorageDiscoverer.java
 d3af792 
  
services/secondary-storage/src/org/apache/cloudstack/storage/template/DownloadManager.java
 3e5072a 
  
services/secondary-storage/src/org/apache/cloudstack/storage/template/DownloadManagerImpl.java
 a9d23cb 
  setup/db/db/schema-410to420.sql 334aae7 
  test/integration/smoke/test_volumes.py 4bf8203 
  tools/apidoc/gen_toc.py 8b6460e 
  tools/devcloud/devcloud.cfg e6ab71b 
  tools/devcloud/devcloud_s3.cfg PRE-CREATION 
  tools/marvin/marvin/configGenerator.py 4e82bbe 
  tools/marvin/marvin/deployDataCenter.py 7059059 
  ui/scripts/cloudStack.js b943a94 
  ui/scripts/system.js 0164e21 
  ui/scripts/zoneWizard.js 9b28c32 
  utils/src/com/cloud/utils/S3Utils.java b7273a1 
  utils/src/com/cloud/utils/UriUtils.java 3bcee7a 
  utils/src/com/cloud/utils/script/Script.java 3632bf5 

Diff: https://reviews.apache.org/r/11277/diff/


Testing
---


Thanks,

edison su



RE: [MERGE]object_store branch into master

2013-05-20 Thread Edison Su


> -Original Message-
> From: Edison Su [mailto:edison...@citrix.com]
> Sent: Monday, May 20, 2013 2:30 PM
> To: dev@cloudstack.apache.org
> Subject: RE: [MERGE]object_store branch into master
> 
> 
> 
> > -Original Message-
> > From: John Burwell [mailto:jburw...@basho.com]
> > Sent: Monday, May 20, 2013 12:56 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [MERGE]object_store branch into master
> >
> > All,
> >
> > Since this change is so large, it makes reviewing and commenting in
> > detail extremely difficult.  Would it be possible to push this patch
> > through Review Board to ease comprehension and promote a conversation
> about this patch?
> 
> We can try to push it into Review Board.

The review board url is: https://reviews.apache.org/r/11277/, 25 pages...

> 
> >
> > Reading through the FS, I have the following questions regarding the
> > operation of the NFS cache:
> >
> > What happens if/when the disk space of the NFS cache is exhausted?
> > What are the sizing recommendations/guidelines for it?
> > What strategy is used to age files out of the NFS cache?
> As usual, admin can have multiple NFS secondary storages, admin can also
> add multiple NFS cache storages. The NFS cache storage capacity plan will be
> the same as NFS secondary storage.
> If there multiple NFS cache storages, the current strategy will randomly
> choose one of them. Currently, no clean up/aging out strategy implemented
> yet But the situation can be improved: most of cached object can be deleted
> after accessed once. Take template as example, if zone wide storage is used,
> put template on cache storage has little value, as once the template is
> downloaded into primary storage, suddenly all the hypervisor host can access
> it.
> I think the simple LRU algorithm to delete cached objects should be enough.
> It can be added later, the cache storage has its own pom project, it's place 
> to
> add more intelligence.
> 
> > If two processes, process1 and process2, are both using a template,
> > templateA, will both processes reference the same file in the NFS
> > cache?  If
> It's possible, that one template can be downloaded into cache storage twice,
> in case of concurrent accessed by two processes. The current
> implementation is that, if two processes want to download the same
> template from s3 into one primary storage at the same time, there is only
> one template will be downloaded into cache storage. While, if two processes
> want to download the same template into different primary storage, the
> template will be cached twice.
> > they are reading from the same file and process1 finishes before
> > process2, will process1 attempt to delete process2?
> 
> There is no way to delete while read, as each cached object has its own state
> machine. If it's accessed by one process, the state will be changed to
> "Copying", you can't delete an object when it's in "Copying" state.
> 
> > If a file transfer from the NFS cache to the object store fails, what
> > is the recovery/retry strategy?  What durability guarantees will
> > CloudStack supply when a snapshot, template, or ISO is in the cache,
> > but can't be written to the object store?
> 
> The error handling of cache storage shouldn't be different than without
> cache storage. For example, directly backup snapshot from primary storage
> to S3, without cache storage. If backup failed, then the whole process failed,
> user needs to do it again through cloudstack API. So in cache storage case, if
> push object from cache storage to s3 failed, then the whole backup process
> failed.
> 
> > What will be the migration strategy for the objects contained in S3
> > buckets/Swift containers from pre-4.2.0 instances?  Currently,
> > CloudStack tracks a mapping between these objects and templates/ISOs
> > in the template_switt_ref and template_s3_ref table.
> 
> We need to migrate DB from existing template_s3_ref to
> template_store_ref, and put all the s3 information into image_store and
> image_store_details tables.
> 
> >
> > Finally, does the S3 implementation use multi-part upload to transfer
> > files to the object store?  If not, the implementation will be limited
> > to storing files no larger than 5GB in size.
> Oh, this is something we don't know yet. We haven't try to upload a
> template which is large than 5GB, so haven't met this issue.
> Could you help to hack it up?:)
> 
> >
> > Thanks,
> > -John
> >
> > On May 20, 2013, at 1:52 PM, Chip Childers 
> > wrote:
> >
&

RE: Using rados-java as a new Maven dependency for KVM

2013-05-21 Thread Edison Su
Just curious: do you plan to implement (or have you already implemented) librbd API 
calls with a progress callback? Storage operations usually take a long time; without 
a progress indicator, it's very difficult to know what's going on at the storage 
side. 
> -Original Message-
> From: Wido den Hollander [mailto:w...@widodh.nl]
> Sent: Tuesday, May 21, 2013 12:16 PM
> To: dev@cloudstack.apache.org
> Subject: Using rados-java as a new Maven dependency for KVM
> 
> Hi,
> 
> In the rbd-snap-clone [0] branch I'm working on the new RBD features like
> snapshotting, cloning and deploying System VMs on RBD.
> 
> To do this correctly I wrote Java bindings for librbd and librados (part of 
> the
> Ceph project).
> 
> These bindings [1] are just like libvirt-java just JNA bindings for these 
> libraries.
> Since these bindings aren't in Maven central I created a Maven repository on
> Ceph.com [2] and I added it to the pom.xml of the KVM plugin for the Agent.
> 
> Can we accept this as a dependency? It's just a Maven dependency which
> doesn't include any binary code into the Git repo.
> 
> The bindings are currently GPLv2 licensed since that's what Ceph uses, but
> does this conflict with the Apache project? I want to make sure it will be
> included in the OSS builds of CloudStack, so I can change the license if
> required.
> 
> I'm not a lawyer, so I'm not so sure about this.
> 
> Thoughts?
> 
> Wido
> 
> [0]:
> https://git-wip-
> us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/rbd-
> snap-clone
> [1]: https://github.com/wido/rados-java
> [2]: http://ceph.com/maven/


RE: [MERGE]object_store branch into master

2013-05-22 Thread Edison Su


> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Wednesday, May 22, 2013 1:26 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE]object_store branch into master
> 
> On Wed, May 22, 2013 at 08:15:41PM +, Animesh Chaturvedi wrote:
> >
> >
> > > -Original Message-
> > > From: Chip Childers [mailto:chip.child...@sungard.com]
> > > Sent: Wednesday, May 22, 2013 12:08 PM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: [MERGE]object_store branch into master
> > >
> > > On Wed, May 22, 2013 at 07:00:51PM +, Animesh Chaturvedi wrote:
> > > >
> > > >
> > > > > -Original Message-
> > > > > From: John Burwell [mailto:jburw...@basho.com]
> > > > > Sent: Tuesday, May 21, 2013 8:51 AM
> > > > > To: dev@cloudstack.apache.org
> > > > > Subject: Re: [MERGE]object_store branch into master
> > > > >
> > > > > Edison,
> > > > >
> > > > > Thanks, I will start going through it today.  Based on other
> > > > > $dayjob responsibilities, it may take me a couple of days.
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > [Animesh>] John we are just a few days away  from 4.2 feature
> > > > freeze, can
> > > you provide your comments by Friday 5/24.   I would like all feature
> threads
> > > to be resolved sooner so that we don't have last minute rush.
> > >
> > > I'm just going to comment on this, but not take it much further...
> > > this type of change is an "architectural" change.  We had previously
> > > discussed (on several
> > > threads) that the appropriate time for this sort of thing to hit
> > > master was
> > > *early* in the release cycle.  Any reason that that consensus
> > > doesn't apply here?
> > [Animesh>] Yes it is an architectural change and discussion on this started 
> > a
> few weeks back already, Min and Edison wanted to get it in sooner by  4/30
> but it took longer than anticipated in  preparing for merge and testing on
> feature branch.
> >
> >
> 
> You're not following me I think.  See this thread on the Javelin merge:
> 
> http://markmail.org/message/e6peml5ddkqa6jp4
> 
> We have discussed that our preference is for architectural changes to hit
> master shortly after a feature branch is cut.  Why are we not doing that here?

This kind of refactor takes time, a lot of time. I worked on merging the 
primary storage refactor into master, plus bug fixes, during 
March (http://comments.gmane.org/gmane.comp.apache.cloudstack.devel/14469), then 
started to work on the secondary storage refactor in 
April (http://markmail.org/message/cspb6xweeupfvpit). Min and I finished the 
coding at the end of April, then tested for two weeks and sent out the merge 
request in the middle of May.
With the refactor, the storage code will be much cleaner, the performance of 
S3 will be improved, and integration with other storage vendors will be much 
easier; the quality is OK (33 bugs filed, only 5 left: 
https://issues.apache.org/jira/issues/?jql=text%20~%20%22Object_Store_Refactor%22).
Anyway, it's up to the community to decide whether to merge it or not; we have 
already tried our best to get it done ASAP.



RE: [MERGE]object_store branch into master

2013-05-23 Thread Edison Su
We are focusing on how to get the flow right and how to make the code cleaner 
and more maintainable on the mgt server side. We are not S3 experts, so I would 
say the "naïve" S3 implementation is a bug, and if you can help us get it 
right (on the object_store branch or after it is merged into master), that 
would be great! :)
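
For reference, a minimal sketch of the TransferManager approach John describes
below (assuming the AWS SDK for Java 1.x; exact method names may differ between
SDK versions, and the bucket/key/paths are just examples):

import java.io.File;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class S3TransferSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3 = new AmazonS3Client();   // credentials from the default chain
        TransferManager tm = new TransferManager(s3);

        // Size of an existing object without downloading it (no HttpClient needed).
        ObjectMetadata meta = s3.getObjectMetadata("imagestore", "tmpl/1/1/routing-1/template.vhd");
        System.out.println("remote size: " + meta.getContentLength() + " bytes");

        // TransferManager switches to multi-part upload automatically for large
        // files, so the 5 GB single-PUT limit does not apply.
        PutObjectRequest request = new PutObjectRequest(
                "imagestore", "tmpl/1/1/routing-1/template.vhd",
                new File("/mnt/secondary/template.vhd"));
        Upload upload = tm.upload(request);
        while (!upload.isDone()) {
            // TransferManager also exposes TransferProgress and progress listeners
            // for a real progress bar; polling is enough for this sketch.
            System.out.println(upload.getDescription());
            Thread.sleep(2000);
        }
        upload.waitForCompletion();                 // rethrows any transfer failure
        tm.shutdownNow();
    }
}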

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, May 23, 2013 1:56 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE]object_store branch into master
> 
> Min,
> 
> Yes, I feel that class needs to be fixed before it can be merged into master.
> 
> Thanks,
> -John
> 
> On May 23, 2013, at 4:54 PM, Min Chen  wrote:
> 
> > Thanks for your suggestions, John. Current code is working fine
> > functionality-wise, can we do such code cleanup and optimization
> > post-merge?
> >
> > -min
> >
> > On 5/23/13 12:54 PM, "John Burwell"  wrote:
> >
> >> Min,
> >>
> >> TL;DR There are rare circumstances where directly access the AWS HTTP
> >> API makes sense, but generally, you should use a client.
> >>
> >> If you need the size of an S3 object in advance, I recommend using
> >> AmazonS3Client#getObjectMeta method.  Also, large swathes of the
> >> class appear to be copied directly from HTTPTemplateDownloader and
> >> don't make sense when interacting S3.  For example, S3 does not
> >> support chunked HTTP operations or HTTP authentication.  As such,
> >> this class could be greatly simplified to operate in terms of S3Utils
> >> (likely with a few additional
> >> methods) to get this information.
> >>
> >> In terms of progress tracking of downloads and transfer operations, I
> >> recommend retrofitting S3Utils to leverage the TransferManager
> mechanism.
> >> In addition to providing progress information (TransferProgress), it
> >> also provides various operation optimizations (particularly for uploads).
> >> It also simplifies the interaction tremendously (e.g. it builds the
> >> PutRequest objects for you).
> >>
> >> Thanks,
> >> -John
> >>
> >> On May 23, 2013, at 12:12 PM, Min Chen  wrote:
> >>
> >>> John, sorry for my ignorance. In S3TemplateDownloader.download, we
> >>> are only using HttpClient method to get the size and InputStream
> >>> from a given URL, then we are invoking S3 client library
> >>> putObject(PutObjectRequest) to download the content to S3. By using
> >>> PutObjectRequest, I can set ProgressListener to show download
> >>> progress to end user. What is your recommendation in this case?
> >>>
> >>> Thanks
> >>> -min
> >>>
> >>> On 5/23/13 7:20 AM, "John Burwell"  wrote:
> >>>
> >>>> Min,
> >>>>
> >>>> The com.cloud.storage.template.S3TemplateDownloader is directly
> >>>> accessing the S3 API using HTTP client.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On May 22, 2013, at 5:57 PM, Min Chen  wrote:
> >>>>
> >>>>> John,
> >>>>>
> >>>>> Can you clarify a bit on your last comment about directly
> >>>>> accessing
> >>>>> S3
> >>>>> HTTP API? We are only invoking routines in S3Utils to perform
> >>>>> operations with S3, not invoke any REST api if that is what you
> >>>>> meant.
> >>>>>
> >>>>> Thanks
> >>>>> -min
> >>>>>
> >>>>> On 5/22/13 2:49 PM, "John Burwell"  wrote:
> >>>>>
> >>>>>> Edison,
> >>>>>>
> >>>>>> For changes that take as long as described, it should be expected
> >>>>>> that the review will take a proportional amount of time.  In
> >>>>>> future releases, we should think through ways to divide changes
> >>>>>> such as these into a set of smaller patches submitted throughout
> >>>>>> the course of the release cycle.
> >>>>>>
> >>>>>> So far, I can say I am very concerned about failure scenarios and
> >>>>>> potential race conditions around the NFS cache. However, I am
> >>>>>> only a quarter of the way through the code so my concerns may be
> >>>>>>

RE: [MERGE]object_store branch into master

2013-05-28 Thread Edison Su
them is arguable. I prefer fixing them after the merge. Here is the reason:
if we agree that the issues you pointed out also exist on master, then do we 
want to fix them in 4.2 or not? If the answer is yes, then who will fix them, 
and how long will it take to fix them on master? If I offer to fix these issues 
after merging object_store into master, will that be acceptable? If the answer 
is no, then there is no technical reason not to merge object_store into master, 
as these issues were not introduced by the object_store branch.


> master.  In my opinion, the object_store represents a good first step, but it
> needs a few more review/refinement iterations before it will be ready for a
> master merge.
> 
> Thanks,
> -John
> 
> On May 28, 2013, at 10:12 AM, Nitin Mehta  wrote:
> 
> > Agree with Wido. This would be a great feature to have in 4.2. Yes,
> > its a lot of change and probably needs more education from Edison and
> > Min maybe through code walkthroughs, documentation, IRC meetings but I
> > am +1 for this to make it to 4.2 and would go as far to say that I
> > would even volunteer for any bug fixes required.
> >
> > I would say its not too bad to merge it now as most of the features
> > for
> > 4.2 are merged by now and not a lot of them would be blocked because
> > of this. Yes, the master would be unstable but it would be even if we
> > merge it post cutting 4.2 branch. I would rather see this coming in
> > 4.2 than wait for another 6 months or so for it. Yes, this is an
> > architectural change and we are learning as a community to time these kind
> of changes.
> > We should also try and raise alarms for these changes much early when
> > the FS was proposed rather than when its done, probably a learning for
> > all of us :)
> >
> > Thanks,
> > -Nitin
> >
> > On 28/05/13 4:23 PM, "Wido den Hollander"  wrote:
> >
> >>
> >>
> >> On 05/23/2013 06:35 PM, Chip Childers wrote:
> >>> On Wed, May 22, 2013 at 09:25:10PM +, Edison Su wrote:
> >>>>
> >>>>
> >>>>> -Original Message-
> >>>>> From: Chip Childers [mailto:chip.child...@sungard.com]
> >>>>> Sent: Wednesday, May 22, 2013 1:26 PM
> >>>>> To: dev@cloudstack.apache.org
> >>>>> Subject: Re: [MERGE]object_store branch into master
> >>>>>
> >>>>> On Wed, May 22, 2013 at 08:15:41PM +, Animesh Chaturvedi
> wrote:
> >>>>>>
> >>>>>>
> >>>>>>> -Original Message-
> >>>>>>> From: Chip Childers [mailto:chip.child...@sungard.com]
> >>>>>>> Sent: Wednesday, May 22, 2013 12:08 PM
> >>>>>>> To: dev@cloudstack.apache.org
> >>>>>>> Subject: Re: [MERGE]object_store branch into master
> >>>>>>>
> >>>>>>> On Wed, May 22, 2013 at 07:00:51PM +, Animesh Chaturvedi
> wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> -Original Message-
> >>>>>>>>> From: John Burwell [mailto:jburw...@basho.com]
> >>>>>>>>> Sent: Tuesday, May 21, 2013 8:51 AM
> >>>>>>>>> To: dev@cloudstack.apache.org
> >>>>>>>>> Subject: Re: [MERGE]object_store branch into master
> >>>>>>>>>
> >>>>>>>>> Edison,
> >>>>>>>>>
> >>>>>>>>> Thanks, I will start going through it today.  Based on other
> >>>>>>>>> $dayjob responsibilities, it may take me a couple of days.
> >>>>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>> -John
> >>>>>>>> [Animesh>] John we are just a few days away  from 4.2 feature
> >>>>>>>> freeze, can
> >>>>>>> you provide your comments by Friday 5/24.   I would like all feature
> >>>>> threads
> >>>>>>> to be resolved sooner so that we don't have last minute rush.
> >>>>>>>
> >>>>>>> I'm just going to comment on this, but not take it much further...
> >>>>>>> this type of change is an "architectural" change.  We had
> >>>>>>> previously discussed (on several
> >>>>>>> threads)

RE: [VOTE] Release Apache CloudStack 4.1.0 (fifth round)

2013-05-28 Thread Edison Su
+1, tested on the same environment and at the same time.

> -Original Message-
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> Sent: Tuesday, May 28, 2013 6:08 PM
> To: dev@cloudstack.apache.org
> Subject: RE: [VOTE] Release Apache CloudStack 4.1.0 (fifth round)
> 
> +1
> 
> Tested with Management Server on CentOS 6.4 and Xen
> 
> > -Original Message-
> > From: Chip Childers [mailto:chip.child...@sungard.com]
> > Sent: Tuesday, May 28, 2013 6:48 AM
> > To: dev@cloudstack.apache.org
> > Subject: [VOTE] Release Apache CloudStack 4.1.0 (fifth round)
> >
> > Hi All,
> >
> > I've created a 4.1.0 release, with the following artifacts up for a
> > vote.
> >
> > The changes from round 4 are related to DEB packaging, some
> > translation strings, and a functional patch to make bridge type
> > optional during the agent setup (for backward compatibility).
> >
> > Git Branch and Commit SH:
> > https://git-wip-
> > us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
> > Commit: a5214bee99f6c5582d755c9499f7d99fd7b5b701
> >
> > List of changes:
> > https://git-wip-
> >
> us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1
> >
> > Source release (checksums and signatures are available at the same
> > location):
> > https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/
> >
> > PGP release keys (signed using A99A5D58):
> > https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> >
> > Testing instructions are here:
> >
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+pr
> > oc
> > edure
> >
> > Vote will be open for 72 hours.
> >
> > For sanity in tallying the vote, can PMC members please be sure to
> > indicate "(binding)" with their vote?
> >
> > [ ] +1  approve
> > [ ] +0  no opinion
> > [ ] -1  disapprove (and reason why)


RE: [MERGE]object_store branch into master

2013-05-29 Thread Edison Su


> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Wednesday, May 29, 2013 1:41 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE]object_store branch into master
> 
> On Tue, May 28, 2013 at 11:42:49PM +, Edison Su wrote:
> >
> >
> > > -Original Message-
> > > From: John Burwell [mailto:jburw...@basho.com]
> > > Sent: Tuesday, May 28, 2013 8:43 AM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: [MERGE]object_store branch into master
> > >
> > > All,
> > >
> > > I have gone a through a large chunk of this patch, and published my
> review
> > > thus far (https://reviews.apache.org/r/11277/).   TL;DR is that this patch
> has a
> > > number of significant issues which can be summarized as follows:
> >
> >
> > Thanks for your time to review this patch.
> >
> > >
> > > 1. While it appears that the requirement of NFS for secondary storage
> > > has largely been removed, it has basically been blocked out with if
> > > statements instead of pushed down as a detail choice in the physical layer.
> > > Rather than exploiting polymorphism to vary behavior through a set of
> > > higher level abstractions, the orchestration performs instanceof
> > > NFSTO checks. The concern is that future evolution of the secondary storage
> > > layer will still have dependencies on NFS.
> >
> > As long as the mgt server doesn't explicitly depend on NFS secondary
> > storage during the orchestration, and the NFS storage can be used based on
> > different configurations, then the resource-side "NFSTO checks" are not
> > much of a concern to me at the current stage. We can add a "polymorphic
> > high level abstraction" at the resource side, as you suggested before, in
> > the future. The current code change is already huge; adding a new level of
> > abstraction would make it even bigger. But adding this kind of "high level
> > abstraction" is not hard any more, as all the storage related commands only
> > deal with DataTO/DataStoreTO. Right now there are no read/write data
> > operations on these *TOs; one possible way to add a "high level
> > abstraction" would be to add read/write data operations at each hypervisor
> > resource based on the DataTO and DataStoreTO.
> > My point is that refactoring the mgt server code is much, much harder than
> > the resource code. If we can get the mgt server right, then we can move on
> > to refactoring the resource code in the future.
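
As a very rough sketch, the resource-side abstraction being discussed above
could be as small as something like this (the interface and method names are
made up; only the two TOs are existing types from the branch):

import java.io.IOException;
import java.io.InputStream;

// Existing transfer objects from the branch; adjust packages if they moved.
import com.cloud.agent.api.to.DataStoreTO;
import com.cloud.agent.api.to.DataTO;

public interface HypervisorDataAccess {
    /** Open the object described by the TO for reading, wherever it lives. */
    InputStream openForRead(DataTO data, DataStoreTO store) throws IOException;

    /** Write the given content to the object described by the TO. */
    void write(DataTO data, DataStoreTO store, InputStream content, long length)
            throws IOException;
}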
> >
> > >
> > > 2. In some scenarios, NFS is still a requirement for secondary
> > > storage.  In particular, Xen users will still have to maintain an
> > > "NFS cache".  Given the degradation of capability, I think it would
> > > be helpful to put a few more eyeballs on the Xen implementation to
> > > determine if we could avoid using an intermediate file system.
> >
> > NFS will not disappear soon for some hypervisors and some types of storage.
> > But as long as the mgt server doesn't have this requirement, people can add
> > new types of hypervisors (e.g. Hyper-V) and new types of storage (Ceph,
> > SolidFire, etc.) which don't need NFS. The optimization for a particular
> > hypervisor can be done by other people, who may be familiar with a certain
> > type of storage or a certain hypervisor.
> >
> > > 3. I have the following concerns regarding potential race conditions
> > > and resource exhaustion in the NFS cache implementation.
> > >
> > > - The current random allocator algorithm does not account for
> > > the amount of space that will be required for the operation (i.e.
> > > checking to ensure that the cache it picks has enough space to
> > > hold the object being
> > > downloaded) nor does it reserve the space. Given the long (in compute
> > > time) running nature of these processes, the available space in a
> > > cache could be exhausted by a number of concurrently transferring
> > > templates and/or snapshots.  By reserving space before the transfer,
> > > the allocator would be able to account for both pending operations
> > > and the current contents of the cache.
> >
> > 1. It's the current behavior of the 4.1 and master secondary storage code.
> > We didn't introduce a new issue here.
> > 2. Adding a new cache storage allocator is much easier here than on the
> > master branch.
> > 3. I treat it as a bug, and plan to fix it after the merge.
> 
> We should not be merging in major changes with *known* bugs like this.

first and foremost, it's

RE: 4.2 snapshot: setting up ISOs - no go

2013-05-29 Thread Edison Su
Can you try: 
select status from host where type = 'SecondaryStorageVM';
in db?

Make sure the agent in the SSVM can connect back to the mgt server.

> -Original Message-
> From: La Motta, David [mailto:david.lamo...@netapp.com]
> Sent: Wednesday, May 29, 2013 2:28 PM
> To: 
> Subject: 4.2 snapshot: setting up ISOs - no go
> 
> So my 4.2 instance is up and running, system VMs are running (secondary
> storage VM and console proxy VM), primary and secondary storage are up
> on NFS... all things considered and at a glance, things look good.  However,
> when I try to add an ISO so that I can provision a VM, I am getting:
> 
> WARN  [storage.download.DownloadMonitorImpl] (1018141210@qtp-
> 2013781391-38:) There is no secondary storage VM for secondary storage
> host nfs://172.20.39.74/SnapCreator_secondary
> 
> Which is throwing me off, because the secondary storage VM seems to be
> up and happy.  I can see it on XenCenter alive and well, too.  When I go
> through the ISO registration wizard, registration seems to be successful in
> the UI, since the ISOs get added to the templates table--except I get the
> warning in the console above.
> 
> If I try to add another secondary storage, I get:
> 
> ERROR [cloud.api.ApiServer] (126087@qtp-2013781391-40:) unhandled
> exception executing api command: addSecondaryStorage
> com.cloud.utils.exception.CloudRuntimeException: Cannot transit agent
> status with event AgentDisconnected for host 9, mangement server id is
> 271423992217301,Unable to transition to a new state from Creating via
> AgentDisconnected at
> com.cloud.agent.manager.AgentManagerImpl.agentStatusTransitTo(Agent
> ManagerImpl.java:1436)
> 
> 
> When in doubt, reboot, right?  So I shutdown my CS mgmt server and
> XenServer host and tried again.  I even added more secondary storage
> without anything in it (the first NFS volume had the system VM template on
> it).  Still, I get the same behavior as described above:  the UI says the ISO 
> was
> added, but the log file warns about the lack of a secondary storage VM.
> 
> If I attempt to go through the instance provisioning wizard, lo and behold,
> the ISO is not listed in the "My ISOs" tab.
> 
> 
> There is also something strange that I've noticed (slightly off topic): if I 
> have
> primary storage allocated and resize the underlying volume on the storage
> controller, CS doesn't see the new size.  Is there any way to force that to 
> take
> place from the UI?  Clicking on refresh did nothing about it.
> 
> Thanks!
> 
> 
> 
> David La Motta
> Technical Marketing Engineer - Citrix
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com
> 
> 



RE: [MERGE]object_store branch into master

2013-05-30 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, May 30, 2013 7:43 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE]object_store branch into master
> 
> Edison,
> 
> Please see my comment in-line.
> 
> Thanks,
> -John
> 
> On May 29, 2013, at 5:15 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Chip Childers [mailto:chip.child...@sungard.com]
> >> Sent: Wednesday, May 29, 2013 1:41 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: [MERGE]object_store branch into master
> >>
> >> On Tue, May 28, 2013 at 11:42:49PM +, Edison Su wrote:
> >>>
> >>>
> >>>> -Original Message-
> >>>> From: John Burwell [mailto:jburw...@basho.com]
> >>>> Sent: Tuesday, May 28, 2013 8:43 AM
> >>>> To: dev@cloudstack.apache.org
> >>>> Subject: Re: [MERGE]object_store branch into master
> >>>>
> >>>> All,
> >>>>
> >>>> I have gone a through a large chunk of this patch, and published my
> >> review
> >>>> thus far (https://reviews.apache.org/r/11277/).   TL;DR is that this 
> >>>> patch
> >> has a
> >>>> number of significant issues which can be summarized as follows:
> >>>
> >>>
> >>> Thanks for your time to review this patch.
> >>>
> >>>>
> >>>> 1. While it appeas that the requirement of NFS for secondary
> >>>> storage has largely been removed, it has basically been if blocked
> >>>> out instead of pushed down as an detail choice in the physical layer.
> >>>> Rather than exploiting polymorpish to vary behavior through a set
> >>>> of higher level abstracttions, the orchestration performs
> >>>> instanceof NFSTO checks. The concern is that future evolution of
> >>>> secondary storage
> >> layer will still have dependencies on NFS.
> >>>
> >>> As long as mgt server doesn't explicitly depend on nfs secondary
> >>> storage
> >> during the orchestration, and the nfs storage can be used based on
> >> different configurations, then the resource side "NFSTO checks" is
> >> not much concern to me at the current stage. We can add a "
> >> polymorpish high level abstraction"  at the resource side, as you
> suggested before, in the future.
> >> The current code change is already so huge now, add a new level a
> >> abstraction will make the change even much bigger. But adding this
> >> kind of "high level abstraction" is not hard any more, as all the
> >> storage related commands, are only dealing with DataTO/DataStoreTO.
> >> Right now, there is no read/write data operations on this *TO, one
> >> possible way to add a "high level abstraction" would be add one
> >> read/write data operations at each hypervisor resource based on the
> DataTO and DataStoreTO.
> >>> My point is that refactor mgt server code is much much harder than
> >>> the
> >> resource code. If we can get the mgt server right, then we can move
> >> on to refactor resource code, in the future.
> 
> Due to their impact on dependent subsystems and lower layers, these types
> of design questions need to be addressed before a merge to master.
> 
> >>>
> >>>>
> >>>> 2. In some sceanrios, NFS is still a requirement for secondary
> >>>> storage.  In particular, Xen users will still have to maintain an
> >>>> "NFS cache".  Given the degradation of capability, I think it would
> >>>> be helpful to put a few more eyeballs on the Xen implementation to
> >>>> determine if we could avoid using an intermediate file system.
> >>>
> >>> Nfs will not disappear soon, for some hypervisors, for some type of
> >> storages. But as long as the mgt server doesn't have this
> >> requirement, then people can add new type of hypervisors(e.g.
> >> hyperv), new type of storages(ceph, solidfire etc) which don't need
> >> NFS. The optimization for a particular hypervisor can be done by
> >> other people, who may be familiar with some type of storage, or a certain
> hypervisor.
> 
> It feels like we have jumped to a solution without completely understanding
> the scope of the problem and the associated assumptions.  We have a
> community of hypervisor experts w

RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Edison Su
+1[binding] on pushing back feature freeze date.

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In out project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change the
> current plan.  We require a 72 hour window for this vote, so IMO we are in an
> odd position where the feature freeze date is at least extended until
> Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
> Release
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release from
> today (2013-05-31) to 2013-06-28.  All other dates following the feature
> freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


[DISCUSS] NFS cache storage issue on object_store

2013-06-03 Thread Edison Su
Let's start a new thread about NFS cache storage issues on object_store.
First, I'll go through how NFS storage works on master branch, then how it 
works on object_store branch, then let's talk about the "issues".

0.   Why do we need NFS secondary storage? NFS secondary storage is used as a 
place to store templates/snapshots etc.; it's zone wide, and it's widely 
supported by most hypervisors (except Hyper-V). NFS storage has existed in 
CloudStack since 1.x. With the rise of object storage, like S3/Swift, 
CloudStack added support for Swift in 3.x and S3 in 4.0. You may wonder: if 
S3/Swift is used as the place to store templates/snapshots, then why do we 
still need NFS secondary storage?

There are two reasons for that:

a.   CloudStack storage code is tightly coupled with NFS secondary storage, 
so when adding Swift/S3 support, it was easier to take a shortcut and leave 
NFS secondary storage as it is.

b.  Certain hypervisors, and certain storage related operations, cannot 
directly operate on object storage.
Examples:

b.1 When backing up a snapshot (a snapshot taken on the XenServer hypervisor) 
from primary storage to S3:

If there are snapshot chains on the volume, and we want to coalesce the 
snapshot chains into a new disk and then copy it to S3, we either coalesce the 
snapshot chains on primary storage, or on an extra storage repository (SR) 
supported by XenServer.

If we coalesce on primary storage, we may blow up the primary storage, as the 
coalesced new disk may need a lot of space (think about it: the new disk will 
contain all the content from the leaf snapshot all the way up to the base 
template), but the primary storage is not planned for this operation (the 
CloudStack mgt server is unaware of this operation, so it may think the 
primary storage still has enough space to create volumes).

XenServer doesn't have an API to coalesce snapshots directly to S3, so we have 
to use another storage that is supported by XenServer; that's why the NFS 
storage is used during snapshot backup. So what we did is: first call the 
XenServer API to coalesce the snapshot to NFS storage, then copy the newly 
created file into S3. This is what we do on both the master branch and the 
object_store branch.
   b.2 When creating a volume from a snapshot, if the snapshot is stored on 
S3:

If the snapshot is a delta snapshot, we need to coalesce the chain into a new 
volume. We can't coalesce snapshots directly on S3, AFAIK, so we have to 
download the snapshot and its parents somewhere, then coalesce them with 
XenServer's tools. Again, there are two options: download all the snapshots 
into primary storage, or download them into NFS storage.

If we download all the snapshots into primary storage directly from S3, then 
first we need to find a way to import a snapshot from S3 into primary storage 
(if primary storage is a block device, extra care is needed) and then coalesce 
them. If we go this way, we need to find a primary storage with enough space, 
and even worse, if the primary storage is not zone-wide, then later on we may 
need to copy the volume from one primary storage to another, which is time 
consuming.

If we download all the snapshots into NFS storage from S3 and coalesce them 
there, we can then copy the volume to primary storage. As the NFS storage is 
zone wide, you can copy the volume into whatever primary storage you like, 
without an extra copy. This is what we do on both the master branch and the 
object_store branch.
  b.3 Some hypervisors, or some storages, do not support directly importing a 
template into primary storage from a URL. For example, if Ceph is used as 
primary storage, when importing a template into RBD we need to transform a 
QCOW2 image into a RAW disk, and then into RBD format 2. In order to transform 
an image from QCOW2 into a RAW disk, you need an extra file system: either a 
local file system (this is what other stacks do, which is not scalable to me), 
or an NFS storage (this is what can be done on both master and object_store). 
Alternatively, one could modify the hypervisor or the storage to support 
directly importing a template from S3 into RBD. Here is the link 
(http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg14411.html) that 
Wido posted.
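
For example, the agent-side staging step for the Ceph case could look roughly 
like the sketch below (illustrative only; it assumes qemu-img and the rbd CLI 
are installed, and that the staging directory is the NFS "cache" mount; the 
pool and image names are made up):

import java.io.IOException;

public class CephTemplateImportSketch {
    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String staging = "/mnt/sec-staging";   // the zone-wide NFS staging mount
        // 1. Convert the downloaded QCOW2 template to a RAW file on the staging store.
        run("qemu-img", "convert", "-f", "qcow2", "-O", "raw",
            staging + "/template.qcow2", staging + "/template.raw");
        // 2. Import the RAW file into the RBD pool as a format 2 image.
        run("rbd", "import", "--image-format", "2",
            staging + "/template.raw", "cloudstack/routing-template");
    }
}
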
Anyway, there are so many combinations of hypervisors and storages: for some 
hypervisors with zone-wide file-system-based storage (e.g. KVM + gluster/NFS 
as primary storage), you don't need extra NFS storage. Also, if you are using 
VMware or Hyper-V, which can import a template from a URL regardless of which 
storage you are using, then you don't need extra NFS storage. But if you are 
using XenServer, in order to create a volume from a delta snapshot you will 
need an NFS storage, and if you are using KVM + Ceph, you may also need an NFS 
storage.
Due to ab

[DISCUSS] The plan for object_store branch

2013-06-04 Thread Edison Su
We sent out the following threads to discuss the issues of the object_store branch:
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C77B337AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCDD22955.3DDDC%25min.chen%40citrix.com%3E
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCDD2300D.3DE0C%25min.chen%40citrix.com%3E
Comments/feedback are welcome.  Min and I will fix the issues listed in the 
above threads by the end of this week.  We want to send out the 2nd round of 
review requests by the end of this week, which will fix the above issues and 
also address some of the comments in https://reviews.apache.org/r/11277.

If you don't agree with what we are doing, please send out your comments as 
soon as possible.
If you want to collaborate on the coding, please do; code is always welcome.
If you want to have a weekly IRC meeting on these subjects, we can definitely 
do that.


RE: KVM development, libvirt

2013-06-05 Thread Edison Su
I think we are missing a VOTE from Jenkins; the vote from Jenkins should be 
taken as the highest priority in each release. This kind of regression should 
be easily identified by Jenkins (if we have a regression test for each 
environment).

> -Original Message-
> From: Marcus Sorensen [mailto:shadow...@gmail.com]
> Sent: Wednesday, June 05, 2013 9:03 AM
> To: dev@cloudstack.apache.org
> Subject: KVM development, libvirt
> 
>It looks like a bug was probably introduced into 4.1, where stock Ubuntu
> 12.04 doesn't support part of the libvirt xml format for system vms. I feel 
> bad
> that it got in there, but I think it highlights an issue that needs to be
> addressed within our development.  Libvirt versioning is somewhat of a
> moving target. Features are introduced rapidly, and versions vary quite a bit
> between distributions and releases of those distributions. Despite this,
> we've largely ignored what libvirt we are targeting, assuming "whatever is on
> the distribution". There is the occasional discussion about this or that being
> available in libvirt x.x.x during the development cycle, but when it comes to
> qualifying the release we don't pay attention to it.
> We should. Looking at the vote for 4.1.0, several people call out which
> OS/distribution they use, but I'd like to see the libvirt version as well.
> 
> Here are some initial thoughts, please add to these if you can:
> 
> 1) When voting for a release, should we require a minimum number of votes
> FOR EACH supported OS? Meaning that we require positive test results from
> every OS listed as supported? In retrospect this seems like a no-brainer,
> however it may change the bylaws.
> 
> 
> 2) Do we want to pull libvirt out as a standalone dependency? Meaning that
> we code to a specific version and make that more visible. This could be a
> "least common denominator" type thing where we pick the lowest version
> from all supported OSes, or it could be independent of distribution,
> whatever we decide, but we would make an effort to call out the version and
> treat it independently of OS.
> 
> 3) I can think of a few things we could do in packaging to help catch
> versioning, but I'm not sure they would entirely address the issues.


RE: [DISCUSS] NFS cache storage issue on object_store

2013-06-05 Thread Edison Su
se the least
> recently used resource may be still be in use by the system.  I think we
> should look to a reservation model with reference counting where files are
> deleted when once no processes are accessing them.  The following is a
> (handwave-handwave) overview of the process I think would meet these
> requirements:
> 
>   1. Request a reservation for the maximum size of the file(s) that will
> be processed in the staging area.
>   - If the file is already in the staging area, increase its
> reference count
>   - If the reservation can not be fulfilled, we can either drop
> the process in a retry queue or reject it.
>   2. Perform work and transfer file(s) to/from the object store
>   3. Release the file(s) -- decrementing the reference count.  When
> the reference count is <= 0, delete the file(s) from the staging area

I assume the reference count is stored in memory and inside the SSVM?
The reference count may not work properly in the case of multiple secondary 
storage VMs and multiple mgt servers. And there may be a lot of places other 
than the SSVM that can directly use the cached object.
If we store the reference count on a file system, then we need to take a lock 
(such as an NFS lock, or a lock file) to update it, and the lock can fail to be 
released for all kinds of reasons (such as a network failure).

I thought about it yesterday, about how to implement LRU. Originally, I thought 
we could eliminate the race condition and track who is using objects stored on 
cache storage by using a state machine.
For example, whenever the mgt server wants to use a cached object, it can 
change the state of the cached object to "Copying" (there is a DB entry for 
each cached object); after the copy is finished, it changes the state to 
"Ready" and also updates the "updated" column. This eliminates the race 
condition, as only one thread can access the cached object and change its 
state. But the problem with this approach is that there are cases where 
multiple reader threads may want to read the cached object at the same time, 
e.g. copying the same cached template to multiple primary storages at once.

In order to accommodate multiple readers, I am trying to add a new DB table to 
track the users of the cached object.
The flow will be like the following (see the sketch after this list):
1. The mgt server wants to use the cached object; first, it needs to check the 
state of the cached object, which must be in the Ready state.
2. The mgt server writes an entry into the DB; the entry contains the id of 
the cached object, the id of the cache storage, and the issued time. The entry 
also contains a state, which can be initial/processing/finished/failed. The 
mgt server sets the state to "processing".
3. The mgt server finishes the operation related to the cached object, then 
marks the state of the above entry as "finished" and also updates the time 
column of the entry.
4. The above entries will be removed if they have not been in the "processing" 
state for a while (let's say one week?), or if they have been stuck in the 
"processing" state for a while (let's say one day). In this way, the mgt 
server can easily know which cached objects have or have not been used 
recently by looking at this table.
5. If the mgt server finds that a cached object has not been used (there is no 
entry for it in the above table) for a while (let's say one week), it changes 
the state of the cached object to "destroying", then sends a command to the 
SSVM to destroy the object.
6. There is a small window where the mgt server is changing the state of the 
cached object to "destroying" (no entry is in the "processing" state in the 
above table) while another thread is trying to copy it (as the cached object's 
state is still Ready); both DB operations would succeed. To close this window, 
we can hold a DB lock on the cached object entry before both DB operations.
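
A rough sketch of steps 1, 2 and 6 (the table and column names are made up for 
illustration; the real DAO layer would look different):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CacheUsageSketch {
    /** Returns the id of the new usage row, or throws if the object isn't Ready. */
    public static long markInUse(Connection conn, long cacheObjectId) throws Exception {
        conn.setAutoCommit(false);
        // Step 6: lock the cache entry row so the state check and the insert are
        // atomic with respect to the "destroying" path.
        try (PreparedStatement lock = conn.prepareStatement(
                "SELECT state FROM object_cache WHERE id = ? FOR UPDATE")) {
            lock.setLong(1, cacheObjectId);
            try (ResultSet rs = lock.executeQuery()) {
                if (!rs.next() || !"Ready".equals(rs.getString(1))) {   // step 1
                    conn.rollback();
                    throw new IllegalStateException("cached object is not Ready");
                }
            }
        }
        // Step 2: record that this mgt server is using the cached object.
        try (PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO object_cache_usage (cache_id, state, issued) "
                        + "VALUES (?, 'processing', NOW())",
                Statement.RETURN_GENERATED_KEYS)) {
            ins.setLong(1, cacheObjectId);
            ins.executeUpdate();
            try (ResultSet keys = ins.getGeneratedKeys()) {
                keys.next();
                long usageId = keys.getLong(1);
                conn.commit();
                return usageId;
            }
        }
    }
}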

What do you think?
 
> 
> We would also likely want to consider a TTL to purge files after a 
> configurable
> period of inactivity as a backstop against crashed processes failing to 
> properly
> decrementing the reference count.  In this model, we will either defer or
> reject work if resources are not available, and we properly bound resources.

Yes, it should be taken into consideration for all the time consuming 
operations.

> 
> Finally, in terms of decoupling the decision to use of this mechanism by
> hypervisor plugins from the storage subsystem, I think we should expose
> methods on the secondary storage services that allow clients to explicitly
> request or create resources using files (i.e. java.io.File) instead of streams
> (e.g. createXXX(File) or readXXXAsFile).  These interfaces would provide the
> storage subsystem with the hint that the client requires file access to the
> request resource.   For object store plugins, this hint would be used to wrap
> the reso

RE: [DISCUSS] NFS cache storage issue on object_store

2013-06-05 Thread Edison Su
TL to purge files after a
> configurable period of inactivity as a backstop against crashed processes
> failing to properly decrementing the reference count.  In this model, we will
> either defer or reject work if resources are not available, and we properly
> bound resources.
> >
> > Finally, in terms of decoupling the decision to use of this mechanism by
> hypervisor plugins from the storage subsystem, I think we should expose
> methods on the secondary storage services that allow clients to explicitly
> request or create resources using files (i.e. java.io.File) instead of streams
> (e.g. createXXX(File) or readXXXAsFile).  These interfaces would provide the
> storage subsystem with the hint that the client requires file access to the
> request resource.   For object store plugins, this hint would be used to wrap
> the resource in an object that would transfer in and/out of the staging area.
> >
> > Thoughts?
> > -John
> >
> > On Jun 3, 2013, at 7:17 PM, Edison Su  wrote:
> >
> >> Let's start a new thread about NFS cache storage issues on object_store.
> >> First, I'll go through how NFS storage works on master branch, then how it
> works on object_store branch, then let's talk about the "issues".
> >>
> >> 0.   Why we need NFS secondary storage? Nfs secondary storage is used
> as a place to store templates/snapshots etc, it's zone wide, and it's widely
> supported by most of hypervisors(except HyperV). NFS storage exists in
> CloudStack since 1.x. With the rising of object storage, like S3/Swift,
> CloudStack adds the support of Swift in 3.x, and S3 in 4.0. You may wonder, if
> S3/Swift is used as the place to store templates/snapshots, then why we still
> need NFS secondary storage?
> >>
> >> There are two reasons for that:
> >>
> >> a.   CloudStack storage code is tightly coupled with NFS secondary
> storage, so when adding Swift/S3 support, it's likely to take shortcut, leave
> NFS secondary storage as it is.
> >>
> >> b.  Certain hypervisors, and certain storage related operations, can 
> >> not
> directly operate on object storage.
> >> Examples:
> >>
> >> b.1 When backing up snapshot(the snapshot taken from xenserver
> >> hypervisor) from primary storage to S3 in xenserver
> >>
> >> If there are snapshot chains on the volume, and if we want to coalesce
> the snapshot chains into a new disk, then copy it to S3, we either, coalesce
> the snapshot chains on primary storage, or on an extra storage repository (SR)
> that supported by Xenserver.
> >>
> >> If we coalesce it on primary storage, then may blow up the primary
> storage, as the coalesced new disk may need a lot of space(thinking about,
> the new disk will contain all the content in from leaf snapshot, all the way 
> up
> to base template), but the primary storage is not planned to this
> operation(cloudstack mgt server is unaware of this operation, the mgt server
> may think the primary storage still has enough space to create volumes).
> >>
> >> While xenserver doesn't have API to coalesce snapshots directly to S3, so
> we have to use other storages that supported by Xenserver, that's why the
> NFS storage is used during snapshot backup. So what we did is that first call
> xenserver api to coalesce the snapshot to NFS storage, then copy the newly
> created file into S3. This is what we did on both master branch and
> object_store branch.
> >>  b.2 When create volume from snapshot if the 
> >> snapshot is
> stored on S3.
> >>If the snapshot is a delta 
> >> snapshot, we need to
> coalesce them into a new volume. We can't coalesce snapshots directly on S3,
> AFAIK, so we have to download the snapshot and its parents into
> somewhere, then coalesce them with xenserver's tools. Again, there are two
> options, one is to download all the snapshots into primary storage, or
> download them into NFS storage:
> >>   If we download all the 
> >> snapshots into primary
> storage directly from S3, then first we need find a way import snapshot from
> S3 into Primary storage(if primary storage is a block device, then need extra
> care) and then coalesce them. If we go this way, need to find a primary
> storage with enough space, and even worse, if the primary storage is not
> zone-wide, then later on, we may need to copy the volume from one
> primary storage to another, which is time consuming.
> >> 

RE: [DISCUSS] NFS cache storage issue on object_store

2013-06-06 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, June 06, 2013 7:47 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
> 
> Edison,
> 
> Please my comments in-line below.
> 
> Thanks,
> -John
> 
> On Jun 5, 2013, at 6:55 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Wednesday, June 05, 2013 1:04 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
> >>
> >> Edison,
> >>
> >> You have provided some great information below which helps greatly to
> >> understand the role of the "NFS cache" mechanism.  To summarize, this
> >> mechanism is only currently required for Xen snapshot operations
> >> driven by Xen's coalescing operations.  Is my understanding correct?
> >> Just out of
> >
> > I think Ceph may still need "NFS cache", for example, during delta snapshot
> backup:
> > http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
> > You need to create a delta snapshot into a file, then upload the file into 
> > S3.
> >
> > For KVM, if the snapshot is taken on qcow2, then need to copy the
> snapshot into a file system, then backup it to S3.
> >
> > Another usage case for "NFS cache " is to cache template stored on S3, if
> there is no zone-wide primary storage. We need to download template from
> S3 into every primary storage, if there is no cache, each download will take a
> while: comparing download template directly from S3(if the S3 is region wide)
> with download from a zone wide "cache" storage, I would say, the download
> from zone wide cache storage should be faster than from region wide S3. If
> there is no zone wide primary storage, then we will download the template
> from S3 several times, which is quite time consuming.
> >
> >
> > There may have other places to use "NFS cache", but the point is as
> > long as mgt server can be decoupled from this "cache" storage, then we
> can decide when/how to use cache storage based on different kind of
> hypervisor/storage combinations in the future.
> 
> I think we would do well to re-orient the way we think about roles and
> requirements.  Ceph doesn't need a file system to perform a delta snapshot
> operation.  Xen, KVM, and/or VMWare need access to a file system to

For the Ceph delta snapshot case, it's Ceph that has the requirement for a file 
system to perform a delta snapshot (http://ceph.com/docs/next/man/8/rbd/):

export-diff [image-name] [dest-path] [--from-snap snapname]
Exports an incremental diff for an image to dest path (use - for stdout). If an 
initial snapshot is specified, only changes since that snapshot are included; 
otherwise, any regions of the image that contain data are included. The end 
snapshot is specified using the standard --snap option or @snap syntax (see 
below). The image diff format includes metadata about image size changes, and 
the start and end snapshots. It efficiently represents discarded or 'zero' 
regions of the image.

The dest-path is either a file or stdout; if using stdout, we would need a lot 
of memory. If using the hypervisor's local file system, the local file system 
may not have enough space to store the delta diff.

> perform these operations.  The hypervisor plugin should request a
> reservation of x size as a file handle from the Storage subsystem.  The Ceph
> driver implements this request by using a staging area + transfer operation.
> This approach encapsulates the operation/rules around the staging area from
> clients, protects against concurrent requests flooding a resource, and allows
> hypervisor-specific behavior/rules to encapsulated in the appropriate plugin.
> 
> >
> >> curiosity, is their a Xen expert on the list who can provide a
> >> high-level description of the coalescing operation -- in particular,
> >> the way it interacts with storage?  I have Googled a bit, and found very
> little information about it.
> >> Has the object_store branch been tested with VMWare and KVM?  If so,
> >> what operations on these hypervisors have been tested?
> >
> > Both vmware and KVM is tested, but without S3 support. Haven't have
> time to take a look at how to use S3 in both hypervisors yet.
> > For example, we should take a look at how to import a template from url
> into vmware data store, thus, we can eliminate "NFS cache" during template
> import.
> 
> Given the releas

RE: Object based Secondary storage.

2013-06-06 Thread Edison Su
The ETag created by RIAK CS and the one created by Amazon S3 seem to be a 
little bit different in the case of multi-part upload.

Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
Test environment:
S3cmd: version: version 1.5.0-alpha1
Riak cs:
Name: riak
Arch: x86_64
Version : 1.3.1
Release : 1.el6
Size: 40 M
Repo: installed
From repo   : basho-products

The command I used to put:
s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d

The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==

EBUG: Sending request method_string='POST', 
uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
 headers={'content-length': '309', 'Authorization': 'AWS 
OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 Jun 
2013 22:54:28 +'}, body=(309 bytes)
DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 22:40:09 
GMT', 'content-length': '326', 'content-type': 'application/xml', 'server': 
'Riak CS'}, 'reason': 'OK', 'data': 'http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}

While the etag created by Amazon S3 is: 
"70e1860be687d43c039873adef4280f2-3"

DEBUG: Sending request method_string='POST', 
uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
 
DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 'x-amz-request-id': 
'8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate', 'date': 'Thu, 06 Jun 
2013 22:39:47 GMT', 'content-type': 'application/xml'}, 'reason': 'OK', 'data': 
'\n\nhttp://s3.amazonaws.com/doc/2006-03-01/";>http://fdfdfdfdfdfdfKey>fixes/icecake/systemvm.iso1"70e1860be687d43c039873adef4280f2-3"'}

So the ETag created on Amazon S3 has a "-" (dash) in it, but there is only a 
"_" (underscore) in the one from Riak CS.

Do you know the reason? What do we need to do to make it compatible with the 
Amazon S3 SDK?
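
One thing we could maybe do, independent of the ETag format: record our own MD5 
as user metadata at upload time and compare that after download, instead of 
trusting the ETag. A sketch only, assuming the AWS SDK for Java 1.x; the 
bucket/key names and metadata key are just examples:

import java.io.File;
import java.io.FileInputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;

public class ChecksumMetadataSketch {
    static String md5Hex(File file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (DigestInputStream in = new DigestInputStream(new FileInputStream(file), md)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the stream to update the digest */ }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void upload(TransferManager tm, File file) throws Exception {
        ObjectMetadata meta = new ObjectMetadata();
        meta.addUserMetadata("cloudstack-md5", md5Hex(file));   // our own checksum
        tm.upload(new PutObjectRequest("imagestore", "tmpl/1/1/routing-1/test", file)
                .withMetadata(meta)).waitForCompletion();
    }

    public static String storedChecksum(AmazonS3Client s3) {
        // Compare this against a locally computed MD5 after download, not the ETag.
        return s3.getObjectMetadata("imagestore", "tmpl/1/1/routing-1/test")
                 .getUserMetadata().get("cloudstack-md5");
    }
}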

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, June 06, 2013 2:03 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Object based Secondary storage.
> 
> Min,
> 
> Are you calculating the MD5 or letting the Amazon client do it?
> 
> Thanks,
> -John
> 
> On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:
> 
> > Thanks Tom. Indeed I have a S3 question that need some advise from
> > some S3 experts. To support upload object > 5G, I have used
> > TransferManager.upload to upload object to S3, upload went fine and
> > object are successfully put to S3. However, later on when I am using
> > "s3cmd get " to retrieve this object, I always got this 
> > exception:
> >
> > "MD5 signatures do not match: computed=Y, received="X"
> >
> > It seems that Amazon S3 kept a different Md5 sum for the multi-part
> > uploaded object. We have been using Riak CS for our S3 testing. If I
> > changed to not using multi-part upload and directly invoking S3
> > putObject, I will not run into this issue. Do you have such experience
> before?
> >
> > -min
> >
> > On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
> >
> >> Thanks Min. I've printed out the material and am reading new threads.
> >> Can't comment much yet until I understand things a bit more.
> >>
> >> Meanwhile, feel free to hit me up with any S3 questions you have. I'm
> >> looking forward to playing with the object_store branch and testing
> >> it out.
> >>
> >> Tom.
> >>
> >> On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
> >>> Welcome Tom. You can check out this FS
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backu
> >>> p+Obj
> >>> ec
> >>> t+Store+Plugin+Framework for secondary storage architectural work
> >>> t+Store+Plugin+done
> >>> in
> >>> object_store branch.You may also check out the following recent
> >>> threads regarding 3 major technical questions raised by community as
> >>> well as our answers and clarification.
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3C77
> >>> B3
> >>>
> 37AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3CCD
> >>> D2
> >>> 2955.3DDDC%25min.chen%40citrix.com%3E
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3CCD
> >>> D2
> >>> 300D.3DE0C%25min.chen%40citrix.com%3E
> >>>
> >>>
> >>> That branch is mainly worked on by Edison and me, and we are at PST
> >>> timezone.
> >>>
> >>> Thanks
> >>> -min
> >> --
> >> Cloudian KK - http://www.cloudian.com/get-started.html
> >> Fancy 100TB of full featured S3 Storage?
> >> Checkout the Cloudian(r) Community Edition!
> >>
> >



RE: Object based Secondary storage.

2013-06-07 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Friday, June 07, 2013 7:54 AM
> To: dev@cloudstack.apache.org
> Cc: Kelly McLaughlin
> Subject: Re: Object based Secondary storage.
> 
> Thomas,
> 
> The AWS API explicitly states the ETag is not guaranteed to be an integrity
> hash [1].  According to RFC 2616 [2], clients should not infer any meaning to
> the content of an ETag.  Essentially, it is an opaque version identifier which
> should only be compared for equality to another ETag value to detect a
> resource change.  As such, I agree with your assessment that s3cmd is
> making an invalid assumption regarding the value of the ETag.


Not only s3cmd, but the Amazon S3 Java SDK also makes the "invalid" assumption.
What's your opinion on how to solve the SDK incompatibility issue?

> 
> Min, could you please send the stack trace you receiving from
> TransferManager?  Also, could send a reference to the code in the Git repo?
> With that information, we can start run down the source of the problem.
> 
> Thanks,
> -John
> 
> [1]: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
> [2]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
> 
> On Jun 7, 2013, at 1:08 AM, Thomas O'Dowd 
> wrote:
> 
> > Min,
> >
> > This looks like an s3cmd problem. I just downloaded the latest s3cmd
> > to check the source code.
> >
> > In S3/FileLists.py:
> >
> >compare_md5 = 'md5' in cfg.sync_checks
> ># Multipart-uploaded files don't have a valid md5 sum - it ends
> > with "...-nn"
> >if compare_md5:
> >if (src_remote == True and src_list[file]['md5'].find("-")
> >> = 0) or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):
> >
> > Basically, s3cmd is trying to verify that the checksum of the data
> > that it downloads is the same as the etag unless the etag ends with "-YYY".
> > This is an AWS convention (as I mentioned in an earlier mail) so it
> > works but it seems that RiakCS has a different ETAG format which
> > doesn't match -YYY so s3cmd assumes the other type of ETAG which is
> > the same as the MD5 checksum. For RiakCS however, this is not the
> > case. This is why you get the checksum error.
> >
> > Chances are that Riak is doing the right thing here and the data file
> > will be the same as what you uploaded. You could change the s3cmd code
> > to be more lenient for Riak. The Basho guys might either like to
> > change their format or talk to the different tool vendors about
> > changing the tools to work with Riak. For Cloudian, we choose to try
> > to keep it similar to AWS so we could avoid stuff like this.
> >
> > Tom.
> >
> > On Fri, 2013-06-07 at 04:02 +, Min Chen wrote:
> >> John,
> >>  We are not able to successfully download file that was uploaded to Riak
> CS with TransferManager using S3cmd. Same error as we encountered using
> amazon s3 java client due to the incompatible ETAG format ( - and _
> difference).
> >>
> >> Thanks
> >> -min
> >>
> >>
> >>
> >> On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:
> >>
> >>> Edison,
> >>>
> >>> Riak CS and S3 seed their hashes differently -- causing the form to
> appear slightly different.  In particular, Riak CS uses URI-safe base64 
> encoding
> which explains why the ETag values contain "-"s instead of "_"s.  From a 
> client
> perspective, the ETags are treated as opaque strings that are passed through
> to the server for processing and compared strictly for equality.  Therefore,
> the form of the hash will not cause the client to choke, and the Riak CS
> behavior you are seeing is S3 API compatible (see
> http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for
> more details).
> >>>
> >>> Were you able to successfully download the file from Riak CS using
> s3cmd?
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>>
> >>> On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
> >>>
> >>>> The Etag created by both RIAK CS and Amazon S3 seems a little bit
> different, in case of multi part upload.
> >>>>
> >>>> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
> >>>> Test environment:
> >>>> S3cmd: version: version 1.5.0-alpha1 Riak cs:
> >>>> Name: riak
> >>&g

RE: [MERGE] disk_io_throttling to MASTER

2013-06-10 Thread Edison Su


> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Monday, June 10, 2013 4:07 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE] disk_io_throttling to MASTER
> 
> That's a good point about the "managed" property being storable in the
> storage_pool_details table. We don't need another column for it in the
> storage_pool table. In the current url field is where this kind of information
> can be passed along and it can be stored in the storage_pool_details table, if
> the plug-in wants to do so.

+1. The CreateStoragePoolCmd->details field is the place to pass extra info about a 
storage pool to the storage provider.
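
For example, a plug-in could pick its own settings out of that map (a sketch only;
the "managed" key is just the example from this thread, and the helper below is
hypothetical):

    import java.util.Map;

    // Sketch: read plug-in specific settings from the details map passed down by
    // createStoragePool; nothing here is an existing CloudStack class.
    public class StoragePoolDetailsSketch {
        public static boolean isManaged(Map<String, String> details) {
            if (details == null) {
                return false; // default when the admin didn't pass the flag
            }
            return Boolean.parseBoolean(details.get("managed"));
        }
    }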

> 
> Let's decide on how the hypervisor code should operate, though. Should we
> pass in info to the "attach" command to have it create an SR if need be?
> This is sort of how I did it in the patch code I submitted (but I used the
> StoragePoolType.Dynamic field, which we don't have to use).

The master branch doesn't carry a VolumeTO in the attach command, which means there 
is no place for a storage provider to pass extra info down to the hypervisor 
resource.
The object_store branch, on the other hand, does have a place to hook in the storage 
provider.
So there are two options:
1. Use your StoragePoolType.Dynamic as it is, and merge into the master branch. 
After your patch is merged, and the object_store branch is also merged, we can 
remove StoragePoolType.Dynamic entirely.
2. Rebase your patch on the object_store branch, and merge your patch after 
object_store. It will take extra time for you, and I don't think it's worth the 
effort, as I can consolidate your patch with object_store during the object_store 
merge into master.

So I'd prefer option 1: since you only made a few lines of changes to the mgt server 
common code (iops aside), it should be easy for me to consolidate your patch with 
the object_store branch.





RE: [MERGE] disk_io_throttling to MASTER

2013-06-11 Thread Edison Su
Will you be on today's GoToMeeting? We can talk about your stuff.

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Tuesday, June 11, 2013 10:20 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [MERGE] disk_io_throttling to MASTER
> 
> I am OK with it either way.
> 
> Edison, do you still have a preference?
> 
> Thanks!
> 
> 
> On Tue, Jun 11, 2013 at 11:14 AM, John Burwell 
> wrote:
> 
> > Mike,
> >
> > So my dependency graph below is incorrect.  If there is no dependency
> > between object_store and solidfire, why wouldn't merge them
> > separately?  I ask because the object_store patch is already very
> > large.  As a reviewer try to comprehend the changes, a series of
> > smaller of patches is easier to digest .
> >
> > Thanks,
> > -John
> >
> > On Jun 11, 2013, at 1:10 PM, Mike Tutkowski
> > 
> > wrote:
> >
> > > Hey John,
> > >
> > > The SolidFire patch does not depend on the object_store branch, but
> > > - as Edison mentioned - it might be easier if we merge the SolidFire
> > > branch
> > into
> > > the object_store branch before object_store goes into master.
> > >
> > > I'm not sure how the disk_io_throttling fits into this merge strategy.
> > > Perhaps Wei can chime in on that.
> > >
> > >
> > > On Tue, Jun 11, 2013 at 11:07 AM, John Burwell 
> > wrote:
> > >
> > >> Mike,
> > >>
> > >> We have a delicate merge dance to perform.  The disk_io_throttling,
> > >> solidfire, and object_store appear to have a number of overlapping
> > >> elements.  I understand the dependencies between the patches to be
> > >> as
> > >> follows:
> > >>
> > >>object_store <- solidfire -> disk_io_throttling
> > >>
> > >> Am I correct that the device management aspects of SolidFire are
> > additive
> > >> to the object_store branch or there are circular dependency between
> > >> the branches?  Once we understand the dependency graph, we can
> > >> determine the best approach to land the changes in master.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >>
> > >> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > >> wrote:
> > >>
> > >>> Also, if we are good with Edison merging my code into his branch
> > >>> before going into master, I am good with that.
> > >>>
> > >>> We can remove the StoragePoolType.Dynamic code after his merge
> and
> > >>> we
> > can
> > >>> deal with Burst IOPS then, as well.
> > >>>
> > >>>
> > >>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> > >>> mike.tutkow...@solidfire.com> wrote:
> > >>>
> >  Let me make sure I follow where we're going here:
> > 
> >  1) There should be NO references to hypervisor code in the
> >  storage plug-ins code (this includes the default storage plug-in,
> >  which
> > >> currently
> >  sends several commands to the hypervisor in use (although it does
> >  not
> > >> know
> >  which hypervisor (XenServer, ESX(i), etc.) is actually in use))
> > 
> >  2) managed=true or managed=false can be placed in the url field
> >  (if
> > not
> >  present, we default to false). This info is stored in the
> >  storage_pool_details table.
> > 
> >  3) When the "attach" command is sent to the hypervisor in
> >  question, we pass the managed property along (this takes the
> >  place of the StoragePoolType.Dynamic check).
> > 
> >  4) execute(AttachVolumeCommand) in the hypervisor checks for the
> > managed
> >  property. If true for an attach, the necessary hypervisor data
> > >> structure is
> >  created and the rest of the attach command executes to attach the
> > >> volume.
> > 
> >  5) When execute(AttachVolumeCommand) is invoked to detach a
> >  volume,
> > the
> >  same check is made. If managed, the hypervisor data structure is
> > >> removed.
> > 
> >  6) I do not see an clear way to support Burst IOPS in 4.2 unless
> >  it is stored in the volumes and disk_offerings table. If we have
> >  some idea, that'd be cool.
> > 
> >  Thanks!
> > 
> > 
> >  On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> >  mike.tutkow...@solidfire.com> wrote:
> > 
> > > "+1 -- Burst IOPS can be implemented while avoiding
> > > implementation attributes.  I always wondered about the details
> > > field.  I think we
> > >> should
> > > beef up the description in the documentation regarding the
> > > expected
> > >> format
> > > of the field.  In 4.1, I noticed that the details are not
> > > returned on
> > >> the
> > > createStoratePool updateStoragePool, or listStoragePool response.
> >  Why
> > > don't we return it?  It seems like it would be useful for
> > > clients to
> > be
> > > able to inspect the contents of the details field."
> > >
> > > Not sure how this would work storing Burst IOPS here.
> > >
> > > Burst IOPS need to be variable on a Disk Offering-by-Disk
> > > Offering basis. For eac

RE: PCI-Passthrough with CloudStack

2013-06-11 Thread Edison Su


> -Original Message-
> From: Marcus Sorensen [mailto:shadow...@gmail.com]
> Sent: Tuesday, June 11, 2013 12:10 PM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano; Kelven Yang
> Subject: Re: PCI-Passthrough with CloudStack
> 
> What we need is some sort of plugin system for the libvirt guest agent,
> where people can inject their own additions to the xml. So we pass the VM
> parameters (including name, os, nics, volumes etc) to your plugin, and it
> returns either nothing, or some xml. Or perhaps an object that defines
> additional xml for various resources.
> 
> Or maybe we just pass the final cloudstack-generated XML to your plugin,
> the external plugin processes it and returns it, complete with whatever
> modifications it wants before cloudstack starts the VM. That would actually
> be very simple to put in. Via the KVM host's agent.properties file we could
> point to an external script. That script could be in whatever language, as 
> long
> as it's executable. It filters the XML and returns new XML which is used to
> start the VM.

If changing the VM's XML is enough, then how about using libvirt's hook system:
http://www.libvirt.org/hooks.html

I think the real issue is how to let CloudStack create only one VM per KVM host, or 
only a few VMs per host (based on the available PCI devices on the host).
If we think PCI devices are a resource CloudStack should take care of during 
resource allocation, then we need a framework:
1. During host discovery, the host can report whatever resources it can detect to 
the mgt server. RAM/CPU frequency/local storage are the resources currently 
supported by the KVM agent; here we may need to add PCI devices as another resource. 
For example, the KVM agent host returns a StartupAuxiliaryDevicesReportCmd along 
with the other StartupRoutingCmd/StartupStorage*Cmd etc. during startup.
2. There will be a listener on the mgt server which listens for 
StartupAuxiliaryDevicesReportCmd and records the available PCI devices into the DB, 
e.g. a host_pci_device_ref table.
3. Extend FirstFitAllocator to take PCI devices into account as another resource 
during allocation. We also need a place to mark a PCI device as used in 
host_pci_device_ref, so the same device won't be allocated to more than one VM 
(see the sketch below).
4. Add an API to create a customized compute offering; the offering can contain 
info about PCI devices, such as how many PCI devices should be plugged into a VM.
5. If a user chooses the above customized compute offering during VM deployment, 
the allocator in step 3 will be triggered and will choose a KVM host that has 
enough PCI devices to fulfill the offering.
6. The command the mgt server sends to the KVM host to start the VM should contain 
the PCI devices allocated to that VM.
7. In the KVM agent code, change the VM's XML properly based on that command.

What do you think?
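
To make step 3 a bit more concrete, a rough sketch (all class, table and method
names here are hypothetical, just to show the shape of the check):

    // Hypothetical sketch of the extra filter from step 3: before handing a host to
    // the planner, check that it still has enough free PCI devices for the offering.
    public class PciAwareHostFilterSketch {

        private final PciDeviceDao pciDeviceDao; // assumed DAO over host_pci_device_ref

        public PciAwareHostFilterSketch(PciDeviceDao pciDeviceDao) {
            this.pciDeviceDao = pciDeviceDao;
        }

        public boolean hasEnoughPciDevices(long hostId, int requestedDevices) {
            return pciDeviceDao.countFreeDevices(hostId) >= requestedDevices;
        }

        public interface PciDeviceDao {
            int countFreeDevices(long hostId);

            // called once the VM is placed, so the same device is never handed out twice
            void markUsed(long hostId, String pciAddress, long vmId);
        }
    }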

> 
> On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus 
> wrote:
> > We're working with 'a very large broadcasting company' who are using
> > Cavium cards for SSL offload in all of their hosts
> >
> > We need to add:
> >
> > <hostdev mode='subsystem' type='pci'>
> >   <source>
> >     <address domain='...' bus='...' slot='...' function='0x1'/>
> >   </source>
> > </hostdev>
> >
> > Into the xml definition of the guest VMs
> >
> > I'm very interested in working with you guys to make this an integrated
> > part of CloudStack
> >
> > Interestingly cavium card drivers can present a number of virtual interfaces
> specifically designed to be passed through to guest vms, but these must be
> addressed separately so a single 'stock' xml definition wouldn't be flexible
> enough to fully utilise the card.
> >
> >
> > Regards,
> >
> > Paul Angus
> > S: +44 20 3603 0540 | M: +447711418784 paul.an...@shapeblue.com
> >
> > -Original Message-
> > From: Kelven Yang [mailto:kelven.y...@citrix.com]
> > Sent: 11 June 2013 18:10
> > To: dev@cloudstack.apache.org
> > Cc: Ryousei Takano
> > Subject: Re: PCI-Passthrough with CloudStack
> >
> >
> >
> > On 6/11/13 12:52 AM, "Pawit Pornkitprasan"  wrote:
> >
> >>Hi,
> >>
> >>I am implementing PCI-Passthrough to use with CloudStack for use with
> >>high-performance networking (10 Gigabit Ethernet/Infiniband).
> >>
> >>The current design is to attach a PCI ID (from lspci) to a compute
> >>offering. (Not a network offering since from CloudStack's point of
> >>view, the pass through device has nothing to do with network and may
> >>as well be used for other things.) A host tag can be used to limit
> >>deployment to machines with the required PCI device.
> >
> >
> >>
> >>Then, when starting the virtual machine, the PCI ID is passed into
> >>VirtualMachineTO to the agent (currently using KVM) and the agent
> >>creates a corresponding  (
> >>http://libvirt.org/guide/html/Application_Development_Guide-
> Device_Con
> >>f
> >>ig-
> >>PCI_Pass.html)
> >>tag and then libvirt will handle the rest.
> >
> >
> > VirtualMachineTO.params is designed to carry generic VM specific
> configurations, these configuration parameters can either be statically link

Re: Review Request: Protect VNC port with password on KVM.

2013-06-13 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11058/#review21882
---

Ship it!


Ship It!

- edison su


On May 13, 2013, 10:32 p.m., Fang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11058/
> ---
> 
> (Updated May 13, 2013, 10:32 p.m.)
> 
> 
> Review request for cloudstack, edison su and Kelven Yang.
> 
> 
> Description
> ---
> 
> Add the protection for VNC port with password on KVM. 
> 
> 
> Diffs
> -
> 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  8fe8c88 
> 
> Diff: https://reviews.apache.org/r/11058/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Fang Wang
> 
>



RE: [MERGE] disk_io_throttling to MASTER (Second Round)

2013-06-13 Thread Edison Su
How about setting hypervisorType to Any? I haven't taken a look at the master 
change yet.

From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
Sent: Thursday, June 13, 2013 1:41 PM
To: dev@cloudstack.apache.org
Cc: Edison Su
Subject: Re: [MERGE] disk_io_throttling to MASTER (Second Round)

Actually, I am noticing some new behavior around picking a storage pool for 
zone-wide storage.

The current implementation that I've brought down from master no longer finds 
storage for me because my plug-in is zone wide and not associated with a 
hypervisor.

Edison?

On Thu, Jun 13, 2013 at 1:13 PM, Mike Tutkowski 
mailto:mike.tutkow...@solidfire.com>> wrote:
Hi Edison,

I notice after I updated from master that Hypervisor Type is now a required 
parameter for creating a storage pool.

Should I just use HypervisorType.Any?

Thanks!

On Thu, Jun 13, 2013 at 12:21 PM, John Burwell 
mailto:jburw...@basho.com>> wrote:
Wei,

I published my review.  I didn't see any code to validating the rate values 
(e.g. values greater than 0, values less than an maximum value).  Did I miss it?

I also noticed that 0 is being used when no value has been specified.  I 
recommend using the Long type rather primitive long in order to use null to 
represent unspecified values rather than a magic value.

Thanks,
-John

On Jun 13, 2013, at 11:34 AM, Wei ZHOU 
mailto:ustcweiz...@gmail.com>> wrote:

> John,
>
> Please review the code on https://reviews.apache.org/r/11782
> The storage provisioned IOPS does not affect hypervisor throttled I/O, I
> think.
> Mike may change UI and java code for storage provisioned IOPS after the
> merge.
>
> -Wei
>
>
> 2013/6/13 John Burwell mailto:jburw...@basho.com>>
>
>> Wei,
>>
>> There are open questions on the thread regarding mutual exclusion of
>> hypervisor throttled I/O and storage provisioned IOPS.  We need to
>> understand how and where it will be implemented in both the UI and
>> service layers.  Also, can you resend the Review Board review?  My
>> email search skills have failed to find it.
>>
>> Thanks,
>> -John
>>
>>



--
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkow...@solidfire.com<mailto:mike.tutkow...@solidfire.com>
o: 303.746.7302
Advancing the way the world uses the 
cloud<http://solidfire.com/solution/overview/?video=play>(tm)



--
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkow...@solidfire.com<mailto:mike.tutkow...@solidfire.com>
o: 303.746.7302
Advancing the way the world uses the 
cloud<http://solidfire.com/solution/overview/?video=play>(tm)


[MERGE]object_store branch into master [Second round]

2013-06-14 Thread Edison Su
Hi all, 
 The second round call for merging the object_store branch into master is coming!
  The issues fixed:
   1. All the major issues raised by John are fixed:
1.1 A cache storage replacement algorithm is added: 
StorageCacheReplacementAlgorithmLRU, based on reference count and least recently 
used (a rough sketch of the idea is included right after this list). 
1.2 A new S3 transport is added that can upload templates larger than 5 GB 
into S3 directly.
1.3 Retry if an S3 upload fails.
1.4 Some comments from https://reviews.apache.org/r/11277/ (mostly coding 
style) are addressed, and some unused code is cleaned up.
 2. DB upgrade path from 4.1 to 4.2
 3. Bug fixes
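
To illustrate the idea behind 1.1 (a sketch of the policy only, not the actual
StorageCacheReplacementAlgorithmLRU code; the field names are made up): among
cached copies that nothing references any more, evict the one used least recently.

    import java.util.Comparator;
    import java.util.Date;
    import java.util.List;
    import java.util.Optional;

    // Sketch of the replacement policy: zero reference count first, then oldest use.
    public class LruCacheReplacementSketch {

        public static class CacheEntry {
            long id;
            int refCount;  // how many in-flight operations still use this cached copy
            Date lastUsed; // updated whenever the cached copy is read
        }

        // Returns the entry to evict, or empty if everything is still referenced.
        public static Optional<CacheEntry> pickVictim(List<CacheEntry> entries) {
            return entries.stream()
                    .filter(e -> e.refCount == 0)
                    .min(Comparator.comparing(e -> e.lastUsed));
        }
    }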

The size of the patch is even bigger now, around 10 LOC; you can find the patch at 
https://reviews.apache.org/r/11277/diff/2/. 
 Comments/feedback are welcome. Thanks.

> -Original Message-
> From: Edison Su [mailto:edison...@citrix.com]
> Sent: Friday, May 17, 2013 1:11 AM
> To: dev@cloudstack.apache.org
> Subject: [MERGE]object_store branch into master
> 
> Hi all,
>  Min and I worked on object_store branch during the last one and half
> month. We made a lot of refactor on the storage code, mostly related to
> secondary storage, but also on the general storage framework. The following
> goals are made:
> 
> 1.   An unified storage framework. Both secondary storages(nfs/s3/swift
> etc) and primary storages will share the same plugin model, the same
> interface. Add any other new storages into cloudstack will much easier and
> straightforward.
> 
> 2.   The storage interface  between mgt server and resource is unified,
> currently there are only 5 commands send out by mgt server:
> copycommand/createobjectcommand/deletecommand/attachcommand/de
> ttachcommand, and each storage vendor can decode/encode all the
> entities(volume/snapshot/storage pool/ template etc) by its own.
> 
> 3.   NFS secondary storage is not explicitly depended on by other
> components. For example, when registering template into S3, template will
> be write into S3 directly, instead of storing into nfs secondary storage, then
> push to S3. If s3 is used as secondary storage, then nfs storage will be used 
> as
> cache storage, but from other components point of view, cache storage is
> invisible. So, it's possible to make nfs storage as optional if s3 is used for
> certain hypervisors.
> The detailed FS is at
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup
> +Object+Store+Plugin+Framework
> The test we did:
> 
> 1.   We modified marvin to use new storage api
> 
> 2.   Test_volume and test_vm_life_cycle, test_template under smoke test
> folder are executed against xenserver/kvm/vmware and devcloud, some of
> them are failed, it's partly due to bugs introduced by our code, partly master
> branch itself has issue(e.g. resizevolume doesn't work). We want to fix these
> issues after merging into master.
> 
> The basic follow does work: create user vm, attach/detach volume, register
> template, create template from volume/snapshot, take snapshot, create
> volume from snapshot.
>   It's a huge change, around 60k LOC patch, to review the code, you can try:
> git diff master..object_store, will show all the diff.
>   Comments/feedback are welcome. Thanks.
> 



RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
I think it's due to this:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
There are zone-wide storages that may only work with one particular hypervisor. For 
example, the data store created on VCenter can be shared by all the clusters in a 
DC, but only for VMware. And since CloudStack supports multiple hypervisors in one 
zone, we somehow need a way to tell the mgt server that a particular zone-wide 
storage can only work with certain hypervisors.
You can treat the hypervisor type on the storage pool as another tag that helps the 
storage pool allocator find a proper storage pool. But it seems the hypervisor type 
is not enough for your case, as your storage pool can work with both VMware and 
XenServer, just not with other hypervisors (that's a limitation of your current 
code's implementation, not of the storage itself).
So I think you need to extend ZoneWideStoragePoolAllocator, maybe with a new 
allocator called SolidFireZoneWideStoragePoolAllocator, and replace the following 
line in applicationContext.xml:
  <bean class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator" />
with your SolidFireZoneWideStoragePoolAllocator.
It also means that for each CloudStack mgt server deployment, the admin needs to 
configure applicationContext.xml for their needs.
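
A minimal sketch of the selection logic such an allocator could implement (the types
below are stand-ins so the snippet is self-contained; a real allocator would extend
ZoneWideStoragePoolAllocator and keep its select() signature):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: a pool is eligible if its hypervisor attribute matches the VM's
    // hypervisor or is "Any"; CloudStack's real StoragePool/ExcludeList types
    // are replaced by a stand-in here.
    public class ZoneWidePoolSelectionSketch {

        public static class Pool {
            String name;
            String hypervisorType; // e.g. "XenServer", "VMware", "KVM", "Any"
        }

        public static List<Pool> select(List<Pool> candidates, String vmHypervisor,
                int returnUpTo) {
            List<Pool> suitable = new ArrayList<>();
            for (Pool p : candidates) {
                if (suitable.size() == returnUpTo) {
                    break;
                }
                if ("Any".equalsIgnoreCase(p.hypervisorType)
                        || p.hypervisorType.equalsIgnoreCase(vmHypervisor)) {
                    suitable.add(p);
                }
            }
            return suitable;
        }
    }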

> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> Sent: Saturday, June 15, 2013 11:34 AM
> To: dev@cloudstack.apache.org
> Subject: Hypervisor Host Type Required at Zone Level for Primary Storage?
> 
> Hi,
> 
> I recently updated my local repo and noticed that we now require a
> hypervisor type to be associated with zone-wide primary storage.
> 
> I was wondering what the motivation for this might be?
> 
> In my case, my zone-wide primary storage represents a SAN. Volumes are
> carved out of the SAN as needed and can currently be utilized on both Xen
> and VMware (although, of course, once you've used a given volume on one
> hypervisor type or the other, you can only continue to use it with that
> hypervisor type).
> 
> I guess the point being my primary storage can be associated with more than
> one hypervisor type because of its dynamic nature.
> 
> Can someone fill me in on the reasons behind this recent change and
> recommendations on how I should proceed here?
> 
> Thanks!
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *(tm)*


RE: enableStorageMaintenance

2013-06-17 Thread Edison Su


> -Original Message-
> From: La Motta, David [mailto:david.lamo...@netapp.com]
> Sent: Friday, June 14, 2013 7:54 AM
> To: 
> Subject: enableStorageMaintenance
> 
> ...works great for putting down the storage into maintenance mode (looking
> forward seeing this for secondary storage as well!).
> 
> Now the question is, after I've run it... how do I know when it is done so I 
> can
> operate on the volume?

enableStorageMaintenance will return a job id, which can be used in 
queryAsyncJobResult. Here is the doc:
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Developers_Guide/asynchronous-commands.html
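
A minimal sketch of that polling loop (assuming the usual /client/api endpoint and a
JSON response; API key signing and proper response parsing are left out, so treat
this as the shape of the loop rather than a working client):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch: poll queryAsyncJobResult until the job returned by
    // enableStorageMaintenance leaves the pending state.
    public class AsyncJobPollSketch {
        public static void main(String[] args) throws Exception {
            String endpoint = "http://mgmt-server:8080/client/api"; // assumed endpoint
            String jobId = args[0]; // job id returned by enableStorageMaintenance
            Pattern statusPattern = Pattern.compile("\"jobstatus\"\\s*:\\s*(\\d+)");
            while (true) {
                URL url = new URL(endpoint + "?command=queryAsyncJobResult&jobid="
                        + jobId + "&response=json");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                StringBuilder body = new StringBuilder();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                in.close();
                Matcher m = statusPattern.matcher(body);
                int jobStatus = m.find() ? Integer.parseInt(m.group(1)) : -1;
                if (jobStatus != 0) { // 0 means the job is still pending
                    System.out.println(body);
                    break;
                }
                Thread.sleep(5000); // poll every 5 seconds
            }
        }
    }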


> 
> Poll using updateStoragePool and query the state for "Maintenance"?  What
> about introducing the ability to pass in callback URLs to the REST call?


> 
> Thx.
> 
> 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com
> 
> 



RE: enableStorageMaintenance

2013-06-17 Thread Edison Su


> -Original Message-
> From: La Motta, David [mailto:david.lamo...@netapp.com]
> Sent: Monday, June 17, 2013 8:37 AM
> To: 
> Subject: Re: enableStorageMaintenance
> 
> Along the same lines... is there a REST command coming in 4.2 to quiesce one
> or multiple virtual machines?

Do you mean quiescing the guest VM file system? Like 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015180
4.2 will support VM snapshots for VMware, but I don't know whether it sets the 
quiesce flag to 1 or not.

 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com
> 
> 
> 
> On Jun 14, 2013, at 10:53 AM, "La Motta, David"
> mailto:david.lamo...@netapp.com>> wrote:
> 
> ...works great for putting down the storage into maintenance mode (looking
> forward seeing this for secondary storage as well!).
> 
> Now the question is, after I've run it... how do I know when it is done so I 
> can
> operate on the volume?
> 
> Poll using updateStoragePool and query the state for "Maintenance"?  What
> about introducing the ability to pass in callback URLs to the REST call?
> 
> Thx.
> 
> 
> 
> David La Motta
> Technical Marketing Engineer
> Citrix Solutions
> 
> NetApp
> 919.476.5042
> dlamo...@netapp.com etapp.com>
> 
> 
> 



RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
> > >if (suitablePools.size() == returnUpTo) {
> > >
> > >break;
> > >
> > >}
> > >
> > >StoragePool pol = (StoragePool)this.dataStoreMgr
> > > .getPrimaryDataStore(storage.getId());
> > >
> > >if (filter(avoid, pol, dskCh, plan)) {
> > >
> > >suitablePools.add(pol);
> > >
> > >} else {
> > >
> > >avoid.addPool(pol.getId());
> > >
> > >}
> > >
> > >}
> > >
> > >return suitablePools;
> > >
> > >}
> > >
> > >
> > > On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Hi Edison,
> > >>
> > >> I haven't looked into this much, so maybe what I suggest here won't
> > >> make sense, but here goes:
> > >>
> > >> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might
> > >> not be the name of the enumeration...I forget). The
> > ZoneWideStoragePoolAllocator
> > >> could use this to be less choosy about if a storage pool qualifies
> > >> to be used.
> > >>
> > >> Does that make any sense?
> > >>
> > >> Thanks!
> > >>
> > >>
> > >> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su 
> > wrote:
> > >>
> > >>> I think it's due to this
> > >>>
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-
> wide+prima
> > ry+storage+target
> > >>> There are zone-wide storages, may only work with one particular
> > >>> hypervisor. For example, the data store created on VCenter can be
> > shared by
> > >>> all the clusters in a DC, but only for vmware. And, CloudStack
> > >>> supports multiple hypervisors in one Zone, so, somehow, need a way
> > >>> to tell mgt server, for a particular zone-wide storage, which can
> > >>> only work with certain hypervisors.
> > >>> You can treat hypervisor type on the storage pool, is another tag,
> > >>> to help storage pool allocator to find proper storage pool. But
> > >>> seems hypervisor type is not enough for your case, as your storage
> > >>> pool can
> > work
> > >>> with both vmware/xenserver, but not for other hypervisors(that's
> > >>> your current code's implementation limitation, not your storage
> > >>> itself
> > can't do
> > >>> that).
> > >>> So I'd think you need to extend ZoneWideStoragePoolAllocator,
> > >>> maybe, a new allocator called:
> > >>> solidfirezonewidestoragepoolAllocator. And,
> > replace
> > >>> the following line in applicationContext.xml:
> > >>>   > >>>
> >
> class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocat
> or"
> > >>> />
> > >>> With your solidfirezonewidestoragepoolAllocator
> > >>> It also means, for each CloudStack mgt server deployment, admin
> > >>> needs
> > to
> > >>> configure applicationContext.xml for their needs.
> > >>>
> > >>>> -Original Message-
> > >>>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> > >>>> Sent: Saturday, June 15, 2013 11:34 AM
> > >>>> To: dev@cloudstack.apache.org
> > >>>> Subject: Hypervisor Host Type Required at Zone Level for Primary
> > >>> Storage?
> > >>>>
> > >>>> Hi,
> > >>>>
> > >>>> I recently updated my local repo and noticed that we now require
> > >>>> a hypervisor type to be associated with zone-wide primary storage.
> > >>>>
> > >>>> I was wondering what the motivation for this might be?
> > >>>>
> > >>>> In my case, my zone-wide primary storage represents a SAN.
> > >>>> Volumes are carved out of the SAN as needed and can currently be
> > >>>> utilized on both
> > >>> Xen
> > >>>> and VMware (although, of course, once you've used a given volume
> > >>>> on
> > one
> > >>>> hypervisor type or the other, you can only continue to use it
> > >>>> with
> > that
> > >>>> hypervisor type).
> > >>>>
> > >>>> I guess the point being my primary storage can be associated with
> > >>>> more
> > >>> than
> > >>>> one hypervisor type because of its dynamic nature.
> > >>>>
> > >>>> Can someone fill me in on the reasons behind this recent change
> > >>>> and recommendations on how I should proceed here?
> > >>>>
> > >>>> Thanks!
> > >>>>
> > >>>> --
> > >>>> *Mike Tutkowski*
> > >>>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>>> e: mike.tutkow...@solidfire.com
> > >>>> o: 303.746.7302
> > >>>> Advancing the way the world uses the
> > >>>> cloud<http://solidfire.com/solution/overview/?video=play>
> > >>>> *(tm)*
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkow...@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *(tm)*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkow...@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > *(tm)*
> >
> >
> 
> 
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *(tm)*


RE: Hypervisor Host Type Required at Zone Level for Primary Storage?

2013-06-17 Thread Edison Su
But currently there is no such hypervisor layer yet, and to me this is related to 
storage, not to the hypervisor. It's a property of a storage to support one 
hypervisor, two hypervisors, or all hypervisors; it is not a property of the 
hypervisor.
I agree that adding a hypervisor type to the storage pool cmd is not a proper 
solution; as we have already seen, it's not flexible enough for SolidFire.
How about adding a getSupportedHypervisors method on the storage plugin, which 
would return an ImmutableSet?
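
Something along these lines (just a sketch of the proposed shape -- the interface,
the stand-in enum and the example plug-in below are not existing CloudStack APIs;
ImmutableSet is Guava's):

    import com.google.common.collect.ImmutableSet;

    // Proposed shape only: let the plug-in itself declare which hypervisors
    // its pools can serve.
    public interface SupportsHypervisors {

        enum HypervisorType { XenServer, VMware, KVM, Hyperv, Any } // stand-in enum

        ImmutableSet<HypervisorType> getSupportedHypervisors();
    }

    // e.g. a pool that today works on XenServer and VMware but nothing else:
    class ExamplePlugin implements SupportsHypervisors {
        @Override
        public ImmutableSet<HypervisorType> getSupportedHypervisors() {
            return ImmutableSet.of(HypervisorType.XenServer, HypervisorType.VMware);
        }
    }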


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Monday, June 17, 2013 1:42 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
> 
> Edison,
> 
> For me, this issue comes back to the whole notion of the overloaded
> StoragePoolType.  A hypervisor plugin should declare a method akin to
> getSupportedStorageProtocols() : ImmutableSet which
> the Hypervisor layer can use to filter the available DataStores from the
> Storage subsystem.  For example, as RBD support expands to other
> hypervisors, we should only have to modify those hypervisor plugins -- not
> the Hypervisor orchestration components or any aspect of the Storage layer.
> 
> Thanks,
> -John
> 
> On Jun 17, 2013, at 4:27 PM, Edison Su  wrote:
> 
> > There are storages which can only work with one hypervisor, e.g.
> > Currently, Ceph can only work on KVM. And the data store created in
> VCenter, can only work with Vmware.
> >
> >
> >
> >> -Original Message-
> >> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >> Sent: Monday, June 17, 2013 1:12 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> Storage?
> >>
> >> I figured you might have something to say about this, John. :)
> >>
> >> Yeah, I have no idea behind the motivation for this change other than
> >> what Edison just said in a recent e-mail.
> >>
> >> It sounds like this change went in so that the allocators could look
> >> at the VM characteristics and see the hypervisor type. With this
> >> info, the allocator can decide if a particular zone-wide storage is
> >> acceptable. This doesn't apply for my situation as I'm dealing with a
> >> SAN, but some zone-wide storage is static (just a volume "out there"
> >> somewhere). Once this volume is used for, say, XenServer purposes, it
> can only be used for XenServer going forward.
> >>
> >> For more details, I would recommend Edison comment.
> >>
> >>
> >> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell 
> >> wrote:
> >>
> >>> Mike,
> >>>
> >>> I know my thoughts will come as a galloping shock, but the idea of a
> >>> hypervisor type being attached to a volume is the type of dependency
> >>> I think we need to remove from the Storage layer.  What attributes
> >>> of a DataStore/StoragePool require association to a hypervisor type?
> >>> My thought is that we should expose query methods allow the
> >>> Hypervisor layer to determine if a DataStore/StoragePool requires
> >>> such a reservation, and we track that reservation in the Hypervisor layer.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> >>> 
> >>> wrote:
> >>>
> >>>> Hi Edison,
> >>>>
> >>>> How's about if I add this logic into ZoneWideStoragePoolAllocator
> >>> (below)?
> >>>>
> >>>> After filtering storage pools by tags, it saves off the ones that
> >>>> are for any hypervisor.
> >>>>
> >>>> Next, we filter the list down more by hypervisor.
> >>>>
> >>>> Then, we add the storage pools back into the list that were for any
> >>>> hypervisor.
> >>>>
> >>>> @Override
> >>>>
> >>>> protected List select(DiskProfile dskCh,
> >>>>
> >>>> VirtualMachineProfile vmProfile,
> >>>>
> >>>> DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> >>>>
> >>>>   s_logger.debug("ZoneWideStoragePoolAllocator to find storage
> >>>> pool");
> >>>>
> >>>> List suitablePools = new ArrayList();
> >>>>
> >>>>
> >>>>   List st

RE: Hack Day at CloudStack Collaboration Conference

2013-06-17 Thread Edison Su
+1:)

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Monday, June 17, 2013 2:21 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Hack Day at CloudStack Collaboration Conference
> 
> All,
> 
> I added a Storage Architecture topic to discuss the next steps in evolving the
> Storage layer including (my hobby horse) breaking Storage->Hypervisor
> dependencies.
> 
> Thanks,
> -John
> 
> On Jun 16, 2013, at 6:30 PM, John Burwell  wrote:
> 
> > All,
> >
> > I added a topic to discuss Guava's features, and how we can exploit it
> > in CloudStack.
> >
> > Thanks,
> > -John
> >
> >
> >
> >
> > On Jun 16, 2013, at 4:53 PM, Sebastien Goasguen 
> wrote:
> >
> >> I have added three items:
> >>
> >> -libcloud: I have been submitting patches to libcloud for the
> >> cloudstack driver, it works for basic zones but we now need to
> >> improve the advanced zone support
> >>
> >> -whirr: I have tested apache whirr, it kind of works but there is an
> >> open bug
> >>
> >> -deltacloud: discuss plans to have a cloudstack driver in delta cloud. I
> believe Chip's work on the ruby driver are a step in that direction...maybe we
> can knock it down in a day.
> >>
> >> -sebastien
> >>
> >> On Jun 14, 2013, at 1:23 PM, Mike Tutkowski
>  wrote:
> >>
> >>> Sounds good
> >>>
> >>> I will update the Wiki.
> >>>
> >>>
> >>> On Fri, Jun 14, 2013 at 10:58 AM, Joe Brockmeier 
> wrote:
> >>>
>  Hi Mike,
> 
>  On Thu, Jun 13, 2013, at 03:55 PM, Mike Tutkowski wrote:
> > I was wondering if we have the following documentation (below). If
> > not, I was thinking it might be a good session to discuss and
> > start in (at a high level) on developing such documentation.
> >
> > 1) Class diagrams highlighting the main classes that make up the
> > Compute, Networking, and Storage components of CloudStack and
> how
> > they relate to each other.
> >
> > 2) Object-interaction diagrams showing how the main instances in
> > the system coordinate execution of tasks.
> >
> > 3) What kinds of threads are involved in the system (this will
> > help developers better understand what resources are shared among
> > threads and need to be locked at certain times)?
> 
>  AFAIK, this doesn't exist currently. Would love to see this - so if
>  you want to take lead, I'm happy to attend to help facilitate and
>  take down notes, etc.
> 
>  Best,
> 
>  jzb
>  --
>  Joe Brockmeier
>  j...@zonker.net
>  Twitter: @jzb
>  http://www.dissociatedpress.net/
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkow...@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the
> >>> cloud
> >>> *(tm)*
> >>



RE: NFS Cache storage query

2013-06-18 Thread Edison Su
Currently, no, I can add an API for it.

> -Original Message-
> From: Jessica Wang [mailto:jessica.w...@citrix.com]
> Sent: Tuesday, June 18, 2013 4:24 PM
> To: Min Chen
> Cc: dev@cloudstack.apache.org
> Subject: RE: NFS Cache storage query
> 
> Min,
> 
> > We may need to provide a way from UI to allow users to configure and
> display their NFS cache.
> Is there an API that list NFS cache?
> 
> Jessica
> 
> -Original Message-
> From: Min Chen
> Sent: Friday, June 14, 2013 9:26 AM
> To: dev@cloudstack.apache.org
> Cc: Jessica Wang
> Subject: Re: NFS Cache storage query
> 
> Hi Sanjeev,
> 
>   In 4.2 release, we require that a NFS cache storage has to be added if
> you choose S3 as the storage provider since we haven't refactored
> hypervisor side code to handle s3 directly by bypassing NFS caching, which is
> the goal for 4.3 release. I see an issue with current UI, where user can only
> add cache storage when he/she adds a S3 storage. We may need to provide
> a way from UI to allow users to configure and display their NFS cache. You
> can file a JIRA ticket for this UI enhancement.
> 
>   Thanks
>   -min
> 
> On 6/14/13 6:35 AM, "Chip Childers"  wrote:
> 
> >On Fri, Jun 14, 2013 at 01:06:30PM +, Sanjeev Neelarapu wrote:
> >> Hi,
> >>
> >> I have a query on how to add NFS Cache storage.
> >> Before creating a zone if we create a secondary storage with s3 as
> >>the storage provider and don't select NFS Cache Storage then we treat
> >>it as
> >>S3 at region level.
> >> Later I create a zone and at "add secondary storage" creation wizard
> >>in UI if I select NFS as secondary storage provider will it be treated
> >>as NFS Cache Storage? If not is there a way to add NFS cache storage
> >>for that zone?
> >>
> >> Thanks,
> >> Sanjeev
> >>
> >
> >Based on the thread talking about this [1], I'm not sure that it will
> >be implemented this way anymore.
> >
> >-chip
> >
> >[1] http://markmail.org/message/c73nagj45q6iktfh



RE: NFS Cache storage query

2013-06-19 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Wednesday, June 19, 2013 10:42 AM
> To: dev@cloudstack.apache.org
> Cc: Edison Su
> Subject: Re: NFS Cache storage query
> 
> Chip,
> 
> Your concern had not occurred to me -- making me realize that either destroy
> or a zone attach/detach operation for the staging/temporary area
> mechanism in 4.2.  Thinking through it, there are two types of operations
> occurring with the staging/temporary area.  The first is data being pulled 
> from
> the object store to support some activity (e.g. copying a template down to
> create a VM).  From a data integrity perspective, it is safe to kill these
> operations since the data has already been persisted to the object store.
> The second is data being pushed to the object store which are much more
> problematic.  Of particular concern would be snapshots that are in the
> staging/temporary area, but not yet transferred to the object store.
> 
> Edison/Min: Does the current implementation provide a destroy or
> attach/detach behavior?  If so, how are in-flight operations handled to
> ensure there is no data loss?

On the current master branch there is no such operation for secondary storage yet, 
and neither is there on the object_store branch.
Maybe we can discuss/implement better life cycle management of both the NFS 
secondary storage and the staging area at collab next week.

> 
> Thanks,
> -John
> 
> On Jun 19, 2013, at 1:26 PM, Chip Childers 
> wrote:
> 
> > On Wed, Jun 19, 2013 at 01:23:47PM -0400, John Burwell wrote:
> >> Chip,
> >>
> >> Good catch.  Yes, we need definitely need a create staging/temporary
> area API call.  However, destroy is a bit more complicated due in-flight
> operations.  Given the complexities and time pressures, I recommend
> supporting only create in 4.2, and addressing delete in 4.3.  Does that make
> sense?
> >>
> >
> > If the existence of the staging datastore blocks the deleting of a
> > zone, or any other entity, then that doesn't work for me.
> >
> > I'd rather give an operator the ability to decide how to best ensure
> > that in-flight operations are halted (i.e.: block users from the
> > environment or something else), than not give them a way to change
> > their configuration.
> >
> >> Thanks,
> >> -John
> >>
> >> On Jun 19, 2013, at 1:11 PM, Chip Childers 
> wrote:
> >>
> >>> On Wed, Jun 19, 2013 at 01:07:29PM -0400, John Burwell wrote:
> >>>> Nitin,
> >>>>
> >>>> If we provide any APIs for the staging/temporary area, they must be
> read-only.  Allowing any external manipulation of its content could cause
> break in-flight transfers or cause data loss.  I think we should consider the
> following APIs:
> >>>>
> >>>> List contents including name, reference count, and size Summary
> >>>> statistics for a staging area including consumed/available/total
> >>>> space and timestamp of the last garbage collection operation Force
> >>>> garbage collection/cleanup operation
> >>>>
> >>>> I think we should these are new API calls specific to the
> staging/temporary area mechanism rather than trying to overload existing
> API calls.
> >>>>
> >>>> Thanks,
> >>>> -John
> >>>
> >>> What about creating / destroying them?
> >>
> >>



RE: NFS Cache storage query

2013-06-19 Thread Edison Su
Yes and No:)
Yes, as all the hypervisors (KVM/VMware/XenServer) still need NFS in 4.2, even when 
S3 is used as the place to store templates.
No, as we make NFS optional if you don't want to use it. E.g. the Hyper-V 
implementation will not depend on NFS. 

In 4.3 we can start work on the hypervisor-side refactor to eliminate the dependence 
on NFS as much as possible; then we can truly make the statement that S3 is the 
"secondary storage".

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Wednesday, June 19, 2013 11:00 AM
> To: Edison Su
> Cc: dev@cloudstack.apache.org
> Subject: Re: NFS Cache storage query
> 
> Edison,
> 
> For 4.2, are we going to state that the object store is just a backup of NFS 
> (i.e.
> the same as 4.1)?
> 
> Thanks,
> -John
> 
> On Jun 19, 2013, at 1:58 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Wednesday, June 19, 2013 10:42 AM
> >> To: dev@cloudstack.apache.org
> >> Cc: Edison Su
> >> Subject: Re: NFS Cache storage query
> >>
> >> Chip,
> >>
> >> Your concern had not occurred to me -- making me realize that either
> >> destroy or a zone attach/detach operation for the staging/temporary
> >> area mechanism in 4.2.  Thinking through it, there are two types of
> >> operations occurring with the staging/temporary area.  The first is
> >> data being pulled from the object store to support some activity
> >> (e.g. copying a template down to create a VM).  From a data integrity
> >> perspective, it is safe to kill these operations since the data has already
> been persisted to the object store.
> >> The second is data being pushed to the object store which are much
> >> more problematic.  Of particular concern would be snapshots that are
> >> in the staging/temporary area, but not yet transferred to the object store.
> >>
> >> Edison/Min: Does the current implementation provide a destroy or
> >> attach/detach behavior?  If so, how are in-flight operations handled
> >> to ensure there is no data loss?
> >
> > The current mater branch, there is no such operation for secondary storage
> yet, so does the object_store branch.
> > Maybe we can discuss/implement a better life cycle management of both
> nfs secondary storage and staging area in collab next week.
> >
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 19, 2013, at 1:26 PM, Chip Childers
> >> 
> >> wrote:
> >>
> >>> On Wed, Jun 19, 2013 at 01:23:47PM -0400, John Burwell wrote:
> >>>> Chip,
> >>>>
> >>>> Good catch.  Yes, we need definitely need a create
> >>>> staging/temporary
> >> area API call.  However, destroy is a bit more complicated due
> >> in-flight operations.  Given the complexities and time pressures, I
> >> recommend supporting only create in 4.2, and addressing delete in
> >> 4.3.  Does that make sense?
> >>>>
> >>>
> >>> If the existence of the staging datastore blocks the deleting of a
> >>> zone, or any other entity, then that doesn't work for me.
> >>>
> >>> I'd rather give an operator the ability to decide how to best ensure
> >>> that in-flight operations are halted (i.e.: block users from the
> >>> environment or something else), than not give them a way to change
> >>> their configuration.
> >>>
> >>>> Thanks,
> >>>> -John
> >>>>
> >>>> On Jun 19, 2013, at 1:11 PM, Chip Childers
> >>>> 
> >> wrote:
> >>>>
> >>>>> On Wed, Jun 19, 2013 at 01:07:29PM -0400, John Burwell wrote:
> >>>>>> Nitin,
> >>>>>>
> >>>>>> If we provide any APIs for the staging/temporary area, they must
> >>>>>> be
> >> read-only.  Allowing any external manipulation of its content could
> >> cause break in-flight transfers or cause data loss.  I think we
> >> should consider the following APIs:
> >>>>>>
> >>>>>> List contents including name, reference count, and size Summary
> >>>>>> statistics for a staging area including consumed/available/total
> >>>>>> space and timestamp of the last garbage collection operation
> >>>>>> Force garbage collection/cleanup operation
> >>>>>>
> >>>>>> I think we should these are new API calls specific to the
> >> staging/temporary area mechanism rather than trying to overload
> >> existing API calls.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> -John
> >>>>>
> >>>>> What about creating / destroying them?
> >>>>
> >>>>
> >



RE: NFS Cache storage query

2013-06-19 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Wednesday, June 19, 2013 11:43 AM
> To: Edison Su
> Cc: dev@cloudstack.apache.org
> Subject: Re: NFS Cache storage query
> 
> Edison,
> 
> Based on the stance you have outlined, the usage of NFS in the object_store
> branch and 4.1 are not comparable.  In 4.1, NFS is the store of record for 
> data.
> Therefore, presence of the file in the NFS volume indicates that the data is
> permanently stored.  However, in object_store, presence in NFS in the
> object_store branch means that the data may be awaiting permanent stage
> (dependent on the type of pending transfer operation).  As such, I think we
> will need to provide insurance that in-flight operations are completed before
> a staging/temporary area is destroyed.  Another option I can see is to change
> the way these staging/temporary areas are associated with zones.  If we
> approached them as standalone entities that are attached or detached from
> a zone, we could define the detach operation as only disassociation from a
> zone without impacting in-flight operations.  This solution would allow zones
> to be deleted without impacting in-flight operations.

The interface is there: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreLifeCycle.java;h=1e893db6bb5b1dbae0142e8a26019ae34d44320a;hb=refs/heads/object_store
The admin should be able to put the data store into maintenance mode and then delete 
it, but the implementation is not there yet for either NFS secondary storage or the 
staging area.
For NFS and S3 secondary storage, we should at least implement 
maintenance/cancelMaintain in 4.2.
For the NFS staging area, we should at least implement maintenance/cancelMaintain 
in 4.2, and the delete method in 4.3.
What do you think?
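
A rough sketch of what the missing piece could look like (the types and state values
below are stand-ins, and the authoritative method signatures are in the
DataStoreLifeCycle interface linked above):

    // Sketch only: enter maintenance mode only when nothing is still being pushed
    // to the object store, which addresses the in-flight-snapshot concern above.
    public class StagingStoreMaintenanceSketch {

        enum State { Up, Maintenance }

        static class DataStore {
            State state = State.Up;
            boolean hasInFlightTransfers; // e.g. snapshots not yet in the object store
        }

        public boolean maintain(DataStore store) {
            if (store.hasInFlightTransfers) {
                return false; // refuse until pending uploads have completed
            }
            store.state = State.Maintenance;
            return true;
        }

        public boolean cancelMaintain(DataStore store) {
            store.state = State.Up;
            return true;
        }
    }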


> 
> Thanks,
> -John
> 
> On Jun 19, 2013, at 2:15 PM, Edison Su  wrote:
> 
> > Yes and No:)
> > Yes, as all the hypervisors(KVM/Vmware/Xenserver) still need NFS in 4.2,
> even S3 is used as the place to store templates.
> > No, we make NFS is optional, if you don't want to use NFS. E.g the HyperV
> implementation will not depend on NFS.
> >
> > In 4.3, we can start work on the hypervisor side refactor, to eliminate the
> dependence on NFS as much as possible, then we may can truly make the
> statement that, S3 will be the "secondary storage".
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Wednesday, June 19, 2013 11:00 AM
> >> To: Edison Su
> >> Cc: dev@cloudstack.apache.org
> >> Subject: Re: NFS Cache storage query
> >>
> >> Edison,
> >>
> >> For 4.2, are we going to state that the object store is just a backup of 
> >> NFS
> (i.e.
> >> the same as 4.1)?
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 19, 2013, at 1:58 PM, Edison Su  wrote:
> >>
> >>>
> >>>
> >>>> -Original Message-
> >>>> From: John Burwell [mailto:jburw...@basho.com]
> >>>> Sent: Wednesday, June 19, 2013 10:42 AM
> >>>> To: dev@cloudstack.apache.org
> >>>> Cc: Edison Su
> >>>> Subject: Re: NFS Cache storage query
> >>>>
> >>>> Chip,
> >>>>
> >>>> Your concern had not occurred to me -- making me realize that
> >>>> either destroy or a zone attach/detach operation for the
> >>>> staging/temporary area mechanism in 4.2.  Thinking through it,
> >>>> there are two types of operations occurring with the
> >>>> staging/temporary area.  The first is data being pulled from the
> >>>> object store to support some activity (e.g. copying a template down
> >>>> to create a VM).  From a data integrity perspective, it is safe to
> >>>> kill these operations since the data has already
> >> been persisted to the object store.
> >>>> The second is data being pushed to the object store which are much
> >>>> more problematic.  Of particular concern would be snapshots that
> >>>> are in the staging/temporary area, but not yet transferred to the object
> store.
> >>>>
> >>>> Edison/Min: Does the current implementation provide a destroy or
> >>>> attach/detach behavior?  If so, how are in-flight operations
> >>>> handled to ensure there is no data loss?
> >>>
> >>> The current mater branch, there is no such operation for secondary
> >>

RE: fixPath (was: committer wanted for review)

2013-06-19 Thread Edison Su
The double slash can happen everywhere. There is a bug fix from a long time ago 
(http://bugs.cloud.com/show_bug.cgi?id=14066) which tries to change all of the path 
concatenation; unfortunately, it was never checked into CloudStack.
I am +1 on your patch, which at least fixes this particular issue without 
changing too much code.
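
For reference, the kind of helper that patch circles around is tiny -- a sketch, not
the code from bug 14066 or from your patch:

    // Sketch of a safe path join that avoids the double-slash problem discussed in
    // this thread.
    public final class PathJoinSketch {

        private PathJoinSketch() {
        }

        public static String join(String... parts) {
            StringBuilder sb = new StringBuilder();
            for (String part : parts) {
                if (part == null || part.isEmpty()) {
                    continue;
                }
                String trimmed = part.replaceAll("^/+", "").replaceAll("/+$", "");
                if (trimmed.isEmpty()) {
                    continue;
                }
                if (sb.length() > 0) {
                    sb.append('/');
                }
                sb.append(trimmed);
            }
            // keep an absolute path absolute
            if (parts.length > 0 && parts[0] != null && parts[0].startsWith("/")) {
                return "/" + sb;
            }
            return sb.toString();
        }
    }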

From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
Sent: Wednesday, June 19, 2013 12:38 PM
To: dev
Subject: fixPath (was: committer wanted for review)

John and others,
I have been looking for the point where the wrong path is instantiated. After 
analyses I came to the DownloadAnswer class which contains the original fixPath 
method that I c&p'ed to do my thing. I cannot support this with logging 
however. Where the DownloadAnswer is created eludes me however. I got trapped 
between DownloadManagerImpl and DownloadListener. The creation of the answer 
was as far as I could tell in handleDownloadProgressCmd but again I can not 
support this with logging.
As the reason I suspect Eclipse class-file caching, but this theory also seems not 
to work.
Anyone's got a good clue for me?

I have burned too much time on this, so I will save the patch in case it is 
locally needed by users, but I still want to find the best solution.
As for John's argument of creating technical debt, I am now convinced that I am 
not adding but only using the present debt that is in there. The 
DownloadAnswer.fixup() method is doing the same on a more obscure place then my 
solution.

Also, if we decide to apply my changes anyway I think we should up the loglevel 
at least as much as acceptable as to keep pointing to the technical debt that 
we have at the moment.

On Tue, Jun 18, 2013 at 5:28 PM, Daan Hoogland 
mailto:daan.hoogl...@gmail.com>> wrote:
On 2013/6/18 16:49 , John Burwell wrote:

Second, please don't take my feedback as passing judgements such as things 
being ugly or
don't worry, I like the discussion and i don't mind losing an argument if the 
best solution arises from it. Let's see about that.
--



Re: Review Request: FileUtil simplified

2013-06-19 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11980/#review22133
---



plugins/hypervisors/kvm/test/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java
<https://reviews.apache.org/r/11980/#comment45535>

The unit test will fail when the code is compiled on Mac/Windows. How about adding 
something like: 
http://www.thekua.com/atwork/2008/09/running-tests-on-a-specific-os-under-junit4/
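
For example (sketch only, in the spirit of the linked post): guard the Linux-specific
tests with a JUnit assumption so they are skipped rather than failed on Mac/Windows.

    import static org.junit.Assume.assumeTrue;

    import org.junit.Before;
    import org.junit.Test;

    // Sketch: JUnit 4 skips (rather than fails) these tests on non-Linux platforms.
    public class LinuxOnlyTestSketch {

        @Before
        public void onlyOnLinux() {
            assumeTrue(System.getProperty("os.name").toLowerCase().contains("linux"));
        }

        @Test
        public void somethingThatOnlyWorksOnLinux() {
            // test body runs only on Linux; elsewhere JUnit reports it as skipped
        }
    }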


- edison su


On June 19, 2013, 9:03 p.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11980/
> ---
> 
> (Updated June 19, 2013, 9:03 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> - writeToFile removed since no references to it
> - readFileAsString replaced with FileUtils.readFileToString
> - minor code duplication removed in dependent method getNicStats
> - unit test added
> 
> 
> Diffs
> -
> 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  7d90f6a 
>   
> plugins/hypervisors/kvm/test/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java
>  c82c31f 
>   utils/src/com/cloud/utils/FileUtil.java 74f4088 
> 
> Diff: https://reviews.apache.org/r/11980/diff/
> 
> 
> Testing
> ---
> 
> test included
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>



RE: TemplateAdapterBase broken

2013-06-20 Thread Edison Su
I am fixing bugs related to the object_store merge; I wasn't aware that so many 
changes related to the storage code went into master just this Monday.
Sorry guys.

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Thursday, June 20, 2013 7:15 AM
> To: dev@cloudstack.apache.org
> Subject: Re: TemplateAdapterBase broken
> 
> On Thu, Jun 20, 2013 at 03:01:28PM +0530, Prasanna Santhanam wrote:
> > On Thu, Jun 20, 2013 at 11:20:54AM +0200, Daan Hoogland wrote:
> > > On Thu, Jun 20, 2013 at 10:50 AM, Prasanna Santhanam 
> wrote:
> > >
> > > > He's fixed it in the subsequent
> > > > commit however
> > > >
> > >
> > > Yeah, I apologized for the early noise making ;)
> >
> > Not at all. Prompted me to look at other broken stuff post the
> > object_store merge. :)
> >
> > usual suspects: apidocs, simulator, marvin are broken
> >
> 
> Fixed the simulator to get systemVMs running, checkin tests are however
> still blocked because the built-in template doesn't go to Ready state. Still a
> TODO.
> 
> 
> 
> Powered by BigRock.com



[DISCUSS] Do we need code review process for code changes related to storage subsystem?

2013-06-20 Thread Edison Su
For interface/API changes, we'd better have a code review, as more storage vendors 
and more developers outside Citrix are contributing code to the CloudStack storage 
subsystem. Code changes should then hold fewer surprises for everybody who cares 
about the storage subsystem.


RE: fixPath (was: committer wanted for review)

2013-06-20 Thread Edison Su
I uploaded the patch to Dropbox: 
https://www.dropbox.com/s/d9fn17xmho19fdc/cloud-3.0.1-bug14066.patch; can you 
access it? The patch was cooked by Fred Wittekind one year ago.

> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> Sent: Thursday, June 20, 2013 12:13 AM
> To: dev
> Subject: Re: fixPath (was: committer wanted for review)
> 
> I am not authorized to access the link you are sending Edison, but interested
> in the contents. Could you send it please?
> 
> 
> On Wed, Jun 19, 2013 at 10:26 PM, Edison Su  wrote:
> 
> > The double slash can happen in every where, there is bug fix long time
> > ago( http://bugs.cloud.com/show_bug.cgi?id=14066), which tries to
> > change all the path concatenation, unfortunately, it's not been
> > checked into cloudstack.
> > I am +1 with your patch, which at least fixes this particular issue,
> > without changing too much code.
> >
> > From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> > Sent: Wednesday, June 19, 2013 12:38 PM
> > To: dev
> > Subject: fixPath (was: committer wanted for review)
> >
> > John and others,
> > I have been looking for the point where the wrong path is instantiated.
> > After analyses I came to the DownloadAnswer class which contains the
> > original fixPath method that I c&p'ed to do my thing. I cannot support
> > this with logging however. Where the DownloadAnswer is created eludes
> > me however. I got trapped between DownloadManagerImpl and
> DownloadListener.
> > The creation of the answer was as far as I could tell in
> > handleDownloadProgressCmd but again I can not support this with logging.
> > As the reason I suppect eclipse class-file caching but also this
> > theory seems to not work.
> > Anyone's got a good clue for me?
> >
> > I have burned to much time on this so I will save the patch for if  it
> > is locally needed by users, but I still want to find the best solution.
> > As for John's argument of creating technical debt, I am now convinced
> > that I am not adding but only using the present debt that is in there.
> > The
> > DownloadAnswer.fixup() method is doing the same on a more obscure
> > place then my solution.
> >
> > Also, if we decide to apply my changes anyway I think we should up the
> > loglevel at least as much as acceptable as to keep pointing to the
> > technical debt that we have at the moment.
> >
> > On Tue, Jun 18, 2013 at 5:28 PM, Daan Hoogland
> > mailto:daan.hoogl...@gmail.com>> wrote:
> > On 2013/6/18 16:49 , John Burwell wrote:
> >
> > Second, please don't take my feedback as passing judgements such as
> > things being ugly or don't worry, I like the discussion and i don't
> > mind losing an argument if the best solution arises from it. Let's see
> > about that.
> > --
> >
> >


RE: fixPath (was: committer wanted for review)

2013-06-20 Thread Edison Su


> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> Sent: Thursday, June 20, 2013 1:03 PM
> To: dev
> Subject: Re: fixPath (was: committer wanted for review)
> 
> Yes, got it. It is quite big. It is dealing with more than just a path 
> format, isn't it?
> 
Yeah, I agree the patch changes too much (joining paths and removing double quotes, 
all over the place).
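To make the path-joining part concrete, here is a small, self-contained sketch of the 
kind of helper such a patch introduces. The class and method names are made up for 
illustration; this is not code from the actual patch.

// Illustrative sketch only -- not the actual bug-14066 patch. It shows the kind
// of join helper that replaces ad-hoc string concatenation so that
// "/export/secondary" + "/template/1/" never produces a double slash.
public final class PathJoiner {
    private PathJoiner() {
    }

    public static String join(String... segments) {
        StringBuilder sb = new StringBuilder();
        for (String segment : segments) {
            if (segment == null || segment.isEmpty()) {
                continue;
            }
            // add exactly one separator between non-empty segments
            if (sb.length() > 0 && sb.charAt(sb.length() - 1) != '/') {
                sb.append('/');
            }
            // drop a leading slash on the segment to avoid "//"
            if (sb.length() > 0 && segment.startsWith("/")) {
                segment = segment.substring(1);
            }
            sb.append(segment);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints /export/secondary/template/1/file.vhd -- no "//" anywhere
        System.out.println(join("/export/secondary", "/template/1/", "file.vhd"));
    }
}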

> 
> On Thu, Jun 20, 2013 at 8:53 PM, Edison Su  wrote:
> 
> > I uploaded the patch to dropbox:
> > https://www.dropbox.com/s/d9fn17xmho19fdc/cloud-3.0.1-
> bug14066.patch,
> > can you access it? The patch is cooked by Fred Wittekind <
> > r...@twister.dyndns.org> one year ago.
> >
> > > -Original Message-
> > > From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> > > Sent: Thursday, June 20, 2013 12:13 AM
> > > To: dev
> > > Subject: Re: fixPath (was: committer wanted for review)
> > >
> > > I am not authorized to access the link you are sending Edison, but
> > interested
> > > in the contents. Could you send it please?
> > >
> > >
> > > On Wed, Jun 19, 2013 at 10:26 PM, Edison Su 
> > wrote:
> > >
> > > > The double slash can happen in every where, there is bug fix long
> > > > time ago( http://bugs.cloud.com/show_bug.cgi?id=14066), which
> > > > tries to change all the path concatenation, unfortunately, it's
> > > > not been checked into cloudstack.
> > > > I am +1 with your patch, which at least fixes this particular
> > > > issue, without changing too much code.
> > > >
> > > > From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
> > > > Sent: Wednesday, June 19, 2013 12:38 PM
> > > > To: dev
> > > > Subject: fixPath (was: committer wanted for review)
> > > >
> > > > John and others,
> > > > I have been looking for the point where the wrong path is instantiated.
> > > > After analyses I came to the DownloadAnswer class which contains
> > > > the original fixPath method that I c&p'ed to do my thing. I cannot
> > > > support this with logging however. Where the DownloadAnswer is
> > > > created eludes me however. I got trapped between
> > > > DownloadManagerImpl and
> > > DownloadListener.
> > > > The creation of the answer was as far as I could tell in
> > > > handleDownloadProgressCmd but again I can not support this with
> > logging.
> > > > As the reason I suppect eclipse class-file caching but also this
> > > > theory seems to not work.
> > > > Anyone's got a good clue for me?
> > > >
> > > > I have burned to much time on this so I will save the patch for if
> > > > it is locally needed by users, but I still want to find the best 
> > > > solution.
> > > > As for John's argument of creating technical debt, I am now
> > > > convinced that I am not adding but only using the present debt that is 
> > > > in
> there.
> > > > The
> > > > DownloadAnswer.fixup() method is doing the same on a more obscure
> > > > place then my solution.
> > > >
> > > > Also, if we decide to apply my changes anyway I think we should up
> > > > the loglevel at least as much as acceptable as to keep pointing to
> > > > the technical debt that we have at the moment.
> > > >
> > > > On Tue, Jun 18, 2013 at 5:28 PM, Daan Hoogland
> > > > mailto:daan.hoogl...@gmail.com>>
> wrote:
> > > > On 2013/6/18 16:49 , John Burwell wrote:
> > > >
> > > > Second, please don't take my feedback as passing judgements such
> > > > as things being ugly or don't worry, I like the discussion and i
> > > > don't mind losing an argument if the best solution arises from it.
> > > > Let's see about that.
> > > > --
> > > >
> > > >
> >


Re: Review Request: CLOUDSTACK-2571 ZWPS issues with Enabling/Clearing the Maintenance State of the Storage

2013-06-20 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11992/#review22204
---



server/src/com/cloud/storage/StoragePoolAutomationImpl.java
<https://reviews.apache.org/r/11992/#comment45626>

Could you put the code that checks how many storage pools are available in a 
zone or cluster into StoragePoolAutomationImpl? I think different vendors may 
want to do storage maintenance differently; StoragePoolAutomationImpl is the 
default implementation of how storage maintenance is done. 


- edison su


On June 20, 2013, 11:53 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11992/
> ---
> 
> (Updated June 20, 2013, 11:53 a.m.)
> 
> 
> Review request for cloudstack, Sateesh Chodapuneedi, edison su, Alex Huang, 
> and Ram Ganesh.
> 
> 
> Description
> ---
> 
> Issue : NPE's are happening when ZWPS is put in maintenance, removed from 
> maintenance.
> 
> Fixed:
> 1. Added ZONE scope storage handling in StorageManagerImpl and 
> StoragePoolAutomationImpl 
> 2. Modified PrimaryDataStoreDao, listBy method to take poolid to Wrapper 
> class of long instead of primitive. Modified associated DaoImpl.
> 3. StoragePoolAutomationImpl, when storage is setting to Maintenance mode, 
> handled the case for ZONE wide scope of storage. 
>if the storage is zone wide, get all the hosts(kvm, vmware) in zone and 
> send the ModifyStoragePool command (with false)
> 4. When the user cancels maintenance mode, the ZONE-wide scope of the storage 
> pool is handled.
> 5. Once the Storage is in maintenance, Deletion of the Storage will remove 
> the mount points from all the hosts.
> 
> This patch will solve all the issues pertaining to keeping/cancelling the 
> ZONE wide primary storage.
> 
> 
> This addresses bug CLOUDSTACK-2571.
> 
> 
> Diffs
> -
> 
>   
> engine/api/src/org/apache/cloudstack/storage/datastore/db/PrimaryDataStoreDao.java
>  d436762 
>   
> engine/api/src/org/apache/cloudstack/storage/datastore/db/PrimaryDataStoreDaoImpl.java
>  d461d58 
>   server/src/com/cloud/storage/StorageManagerImpl.java 20b435c 
>   server/src/com/cloud/storage/StoragePoolAutomationImpl.java 9bba979 
> 
> Diff: https://reviews.apache.org/r/11992/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing
> = 
> 1. Enable maintenance mode of Zone wide storage , There were no NPE's 
> happening and successfully kept the storage in maintenance mode. Verified DB 
> status.
> 2. Cancel maintenance mode of Zone wide storage, There were no NPE's 
> happening and successfully kept the storage in UP state.
> 3. Enable maintenance mode of zone wide, once successful then Delete the 
> storage, Storage got deleted successfuly. Verify the hosts,  Storage got 
> unmounted and verified the DB status.
> Addition Tests (As the common code path is modified):
> 1. Add the Cluster scope of primary storage (kvm , xenserver). Adding the 
> storage in both clusters is successful.
> kvm specific:
> 
> 2. Enable Maintenance Mode of cluster scope kvm storage. Successfully enabled 
> the storage in maintenance state. 
> 3. Cancel the Maintenance Mode of cluster scope kvm storage. Successfully 
> enabled the storage in UP state.
> 4. Enable Maintenance Mode of cluster scope kvm storage. Delete the storage. 
> Storage got successfully deleted, unmounted from hosts and from db.
> 
> Xenserver specific:
> ===
> 5. Enable Maintenance Mode of cluster scope Xenserver storage. Successfully 
> enabled the storage in maintenance state. 
> 6. Cancel the Maintenance Mode of cluster scope Xenserver storage. 
> Successfully enabled the storage in UP state.
> 7. Enable Maintenance Mode of cluster scope Xenserver storage. Delete the 
> storage. Storage got successfully deleted, unmounted from hosts and from db.
> 
> ZWPS is supported in KVM and VMware, the common code is modified. It should 
> work of VMWare as well without any issues
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: Fix primary datastore NPE/incorrect db entry/exception propagation for KVM on cloudstack

2013-06-20 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11984/#review22205
---



engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreLifeCycle.java
<https://reviews.apache.org/r/11984/#comment45627>

If the storage manager wants to delete a datastore, then just call deleteDataStore; 
there is no need to add another API. In the implementation of the deleteDataStore 
method, you can add a conditional check: if the datastore's state is not Ready, 
there is no need to send a DeleteStoragePoolCommand to the hypervisor, just delete 
the DB entry instead.
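A minimal sketch of that suggestion follows; the class, enum and helper names below 
are placeholders, not the real DataStoreLifeCycle API.

// Placeholder sketch of the review suggestion: one deleteDataStore() entry point
// that skips the hypervisor round-trip when the pool never became Ready.
enum StoragePoolState { CREATING, READY, MAINTENANCE }

class DataStoreLifeCycleSketch {

    boolean deleteDataStore(long poolId, StoragePoolState state) {
        if (state != StoragePoolState.READY) {
            // The pool was never successfully attached to any host, so there
            // is nothing to clean up on the hypervisor side.
            removeDbEntry(poolId);
            return true;
        }
        // Normal path: tell the hosts to forget the pool, then clean the DB.
        sendDeleteStoragePoolCommand(poolId);
        removeDbEntry(poolId);
        return true;
    }

    private void sendDeleteStoragePoolCommand(long poolId) {
        System.out.println("DeleteStoragePoolCommand -> pool " + poolId);
    }

    private void removeDbEntry(long poolId) {
        System.out.println("deleting storage_pool row " + poolId);
    }
}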



plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
<https://reviews.apache.org/r/11984/#comment45628>

No need to check the exception's error message here; if adding the storage pool 
failed, it simply failed, rethrow the exception to the upper layer.



plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
<https://reviews.apache.org/r/11984/#comment45629>

How about logging the exception stack trace as well?
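Taken together, the two comments above amount to something like the following sketch; 
the class and helper names are hypothetical, not the actual LibvirtStorageAdaptor code.

// Sketch only: do not branch on the exception message; log the stack trace and
// rethrow so the upper layer sees the original failure.
import org.apache.log4j.Logger;

public class StoragePoolCreationSketch {
    // mirrors CloudStack's s_logger convention
    private static final Logger s_logger = Logger.getLogger(StoragePoolCreationSketch.class);

    public String createStoragePoolOrFail(String poolXml) {
        try {
            return defineAndStartPool(poolXml);   // placeholder for the real libvirt call
        } catch (Exception e) {
            // log the full stack trace, then rethrow instead of parsing e.getMessage()
            s_logger.error("Failed to create storage pool from " + poolXml, e);
            throw new RuntimeException("Failed to create storage pool", e);
        }
    }

    // Stand-in for the libvirt interaction; not part of the suggestion itself.
    private String defineAndStartPool(String poolXml) throws Exception {
        return "pool-uuid-for-" + poolXml.hashCode();
    }
}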



server/src/com/cloud/storage/StorageManagerImpl.java
<https://reviews.apache.org/r/11984/#comment45630>

    Call lifecycle->deletedatastore


- edison su


On June 20, 2013, 2:42 a.m., Venkata Siva Vijayendra Bhamidipati wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11984/
> ---
> 
> (Updated June 20, 2013, 2:42 a.m.)
> 
> 
> Review request for cloudstack, Chip Childers, edison su, and Min Chen.
> 
> 
> Description
> ---
> 
> Patch for fixes for issues detected while working on bug CLOUDSTACK-1510 
> (https://issues.apache.org/jira/browse/CLOUDSTACK-1510).
> 
> 
> This addresses bug CLOUDSTACK-1510.
> 
> 
> Diffs
> -
> 
>   
> api/src/org/apache/cloudstack/api/command/admin/storage/CreateStoragePoolCmd.java
>  74eb2b9 
>   
> engine/api/src/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreLifeCycle.java
>  cb46709 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
>  89e22c8 
>   
> plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/lifecycle/CloudStackPrimaryDataStoreLifeCycleImpl.java
>  fb37e8f 
>   server/src/com/cloud/storage/StorageManagerImpl.java 20b435c 
> 
> Diff: https://reviews.apache.org/r/11984/diff/
> 
> 
> Testing
> ---
> 
> Deploy KVM cluster in cloudstack. Attempt to add a primary NFS datastore 
> using an invalid path. NPE is not encountered anymore. If KVM host is down or 
> the cloud-agent on the KVM host is down, the primary datastore (whether valid 
> or otherwise) is not logged to the db's storage_pool table. So invalid 
> datastores do not show up in the GUI when listing the primary datastores 
> available. Also, exception is propagated to GUI.
> 
> 
> Thanks,
> 
> Venkata Siva Vijayendra Bhamidipati
> 
>



Re: Review Request: Bugfix CLOUDSTACK-1594: Secondary storage host always remains Alert status

2013-06-20 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9818/#review22206
---


NFS secondary storage is not stored in the host table any more. Yes, finally, it has 
been removed.

- edison su


On June 19, 2013, 6:50 a.m., roxanne chang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/9818/
> ---
> 
> (Updated June 19, 2013, 6:50 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek, Nitin Mehta, and edison su.
> 
> 
> Description
> ---
> 
> Bugfix CLOUDSTACK-1594: Secondary storage host always remains Alert status
> [https://issues.apache.org/jira/browse/CLOUDSTACK-1594]
> 
> In file SecondaryStorageManagerImpl.java, function generateSetupCommand: if 
> the host type is Secondary Storage VM, the logic sets up the secondary storage 
> host, and at this point the secondary storage host status should become Up.
> 
> The secondary storage host always remains in Alert status because the 
> secondary storage host is created before the secondary storage VM is deployed. 
> The tricky part (at the end of file AgentManagerImpl.java, function 
> NotifiMonitorsOfConnection) will try to disconnect secondary storage, 
> therefore the secondary storage host goes to Alert status. The code should 
> take the SSVM into consideration, not only the Answer response.
> 
> In file ResourceManagerImpl.java, function discoverHostsFull will in the end 
> call discoverer.postDiscovery; in file 
> SecondaryStorageDiscoverer.postDiscovery, the condition _userServiceVM is not 
> needed, since its purpose of making the secondary storage host wait for the 
> SSVM is already handled in SecondaryStorageManagerImpl. This is why the 
> secondary storage host always remains in Alert status.
> 
> 
> This addresses bug https://issues.apache.org/jira/browse/CLOUDSTACK-1594.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/agent/manager/AgentManagerImpl.java 6baeecf 
>   server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java 
> c343286 
>   
> services/secondary-storage/src/org/apache/cloudstack/storage/resource/SecondaryStorageDiscoverer.java
>  d3af792 
> 
> Diff: https://reviews.apache.org/r/9818/diff/
> 
> 
> Testing
> ---
> 
> Test 4.0.0, 4.2.0 in basic mode, works well.
> 
> 
> Thanks,
> 
> roxanne chang
> 
>



RE: [DISCUSS] Do we need code review process for code changes related to storage subsystem?

2013-06-21 Thread Edison Su


> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Thursday, June 20, 2013 5:42 PM
> To: dev@cloudstack.apache.org
> Cc: Edison Su
> Subject: Re: [DISCUSS] Do we need code review process for code changes
> related to storage subsystem?
> 
> On Thu, Jun 20, 2013 at 05:59:01PM +, Edison Su wrote:
> > For interface/API changes, we'd better have a code review, as more
> storage vendors and more developers outside Citrix are contributing code to
> CloudStack storage subsystem. The code change should have less surprise
> for everybody who cares about storage subsystem.
> 
> I'm not following what you are saying Edison.  What are we not doing in this
> regard right now?  I'm also not getting the "Citrix" point of reference here.

We don't have a code review process for each commit currently. The result is that, 
as the code evolves, people keep adding more code, features, bug fixes and so on, 
and maybe one month later, when you take a look at the code, it may be quite 
different from what you knew. So I want to add a code review process here, maybe 
starting with the storage subsystem first.
The reason I mention "Citrix" here: let's take a look at what happened in the last 
month:
Mike, from SolidFire, is asking why there is a hypervisor field in the storage 
pool; simply put, the hypervisor field breaks his code.
And I don't understand why there is a column called dynamicallyScalable in the 
vm_template table.
I think you will understand my problem here...



> 
> -chip


RE: [DISCUSS] Do we need code review process for code changes related to storage subsystem?

2013-06-21 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Friday, June 21, 2013 11:43 AM
> To: dev@cloudstack.apache.org
> Cc: 'Chip Childers'
> Subject: Re: [DISCUSS] Do we need code review process for code changes
> related to storage subsystem?
> 
> Edison,
> 
> My understanding of our process is that the merges of significant changes
> should be proposed to the mailing list with the "[MERGE]" tag and wait up to
> 72 hours for feedback.  I consider interface changes to meet that criteria
> given the potential to break other folks work.  It sounds like we had a change
> that inadvertently slipped through without notice to list.  Going forward, I

The problem with the current merge request process is that you don't know what kind 
of changes a merge request makes unless you dig into the code.
Let's say there is a merge request whose code touches both VM and storage code; 
how do I know, just by looking at the request, that the storage part of the code 
has been changed?
As there are a lot of merge requests, a single person can't review them all, so a 
change can easily slip through without being noticed by the people who want to 
review storage-related code, even if the merge request is sent out to the list.

Maybe we should ask that each merge request include a list of the files that were 
changed, like the output of "git diff --stat"?

> propose that we follow this process for significant patches and, additionally,
> push them to Review Board.  As a matter of collaboration, it might also be a
> good idea to open a "[DISCUSS]" thread during the design/planning stages,
> but I don't think we need to require it.
> 
> Do you think this approach will properly balance to our needs to
> coordinate/review work with maintaining a smooth flow?
> 
> Thanks,
> -John
> 
> 
> On Jun 21, 2013, at 2:15 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Chip Childers [mailto:chip.child...@sungard.com]
> >> Sent: Thursday, June 20, 2013 5:42 PM
> >> To: dev@cloudstack.apache.org
> >> Cc: Edison Su
> >> Subject: Re: [DISCUSS] Do we need code review process for code
> >> changes related to storage subsystem?
> >>
> >> On Thu, Jun 20, 2013 at 05:59:01PM +, Edison Su wrote:
> >>> For interface/API changes, we'd better have a code review, as more
> >> storage vendors and more developers outside Citrix are contributing
> >> code to CloudStack storage subsystem. The code change should have
> >> less surprise for everybody who cares about storage subsystem.
> >>
> >> I'm not following what you are saying Edison.  What are we not doing
> >> in this regard right now?  I'm also not getting the "Citrix" point of
> reference here.
> >
> > We don't have a code review process for each commit currently, the result
> of that, as the code evolving, people add more and more code, features, bug
> fixes etc, etc. Then maybe one month later, when you take a look at the
> code, which may be quite different than what you known about. So I want to
> add a code review process here, maybe start from storage subsystem at first.
> > The reason I add "Citrix" here, let's take a look at what happened in the 
> > last
> month:
> > Mike, from SolidFire,  is asking why there is a hypervisor field in the 
> > storage
> pool, simply, the hypervisor field breaks his code.
> > And I don't understand why there is a column, called  dynamicallyScalable,
> in vm_template table.
> > I think you will understand my problem here...
> >
> >
> >
> >>
> >> -chip



RE: [DISCUSS] Do we need code review process for code changes related to storage subsystem?

2013-06-21 Thread Edison Su
How about this: for all interface and DB schema changes related to the storage 
subsystem, send out a merge request and push the patches into Review Board? Not 
only for feature development, but also for bug fixes.
I am not sure how many people want to review changes related to the storage 
subsystem, though. If I am the only one interested in it, then there is no need to 
do that :)

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Friday, June 21, 2013 1:00 PM
> To: dev@cloudstack.apache.org
> Cc: 'Chip Childers'
> Subject: Re: [DISCUSS] Do we need code review process for code changes
> related to storage subsystem?
> 
> Edison,
> 
> The person pushing the merge request should highlight that it includes
> interface changes (regardless of whether or not it is a storage patch).  I 
> also
> think that we should be pushing all patches that rise to merge requests into
> Review Board to allow potential reviewers a place to cleanly communicate
> and discuss issues.
> 
> Thanks,
> -John
> 
> On Jun 21, 2013, at 3:53 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Friday, June 21, 2013 11:43 AM
> >> To: dev@cloudstack.apache.org
> >> Cc: 'Chip Childers'
> >> Subject: Re: [DISCUSS] Do we need code review process for code
> >> changes related to storage subsystem?
> >>
> >> Edison,
> >>
> >> My understanding of our process is that the merges of significant
> >> changes should be proposed to the mailing list with the "[MERGE]" tag
> >> and wait up to
> >> 72 hours for feedback.  I consider interface changes to meet that
> >> criteria given the potential to break other folks work.  It sounds
> >> like we had a change that inadvertently slipped through without
> >> notice to list.  Going forward, I
> >
> > The problem of current merge request, is that, you don't know what kind
> of change the merge request did, unless you dig into the code.
> > Let's say, there is a merge request, the code touches both vm and storage
> code, then how do I know, by just taking look at the request, that, the
> storage part of code is been changed.
> > As there are lot of merge requests, a single person can't review all the
> merge requests, then likely, the change is slipped without the notice to other
> people who wants to review storage related code, even if the merge request
> is send out to the list.
> >
> > Maybe, we should ask for each merge request, need to add a list of files
> been changed: like the output of "git diff --stat"?
> >
> >> propose that we follow this process for significant patches and,
> >> additionally, push them to Review Board.  As a matter of
> >> collaboration, it might also be a good idea to open a "[DISCUSS]"
> >> thread during the design/planning stages, but I don't think we need to
> require it.
> >>
> >> Do you think this approach will properly balance to our needs to
> >> coordinate/review work with maintaining a smooth flow?
> >>
> >> Thanks,
> >> -John
> >>
> >>
> >> On Jun 21, 2013, at 2:15 PM, Edison Su  wrote:
> >>
> >>>
> >>>
> >>>> -Original Message-
> >>>> From: Chip Childers [mailto:chip.child...@sungard.com]
> >>>> Sent: Thursday, June 20, 2013 5:42 PM
> >>>> To: dev@cloudstack.apache.org
> >>>> Cc: Edison Su
> >>>> Subject: Re: [DISCUSS] Do we need code review process for code
> >>>> changes related to storage subsystem?
> >>>>
> >>>> On Thu, Jun 20, 2013 at 05:59:01PM +, Edison Su wrote:
> >>>>> For interface/API changes, we'd better have a code review, as more
> >>>> storage vendors and more developers outside Citrix are contributing
> >>>> code to CloudStack storage subsystem. The code change should have
> >>>> less surprise for everybody who cares about storage subsystem.
> >>>>
> >>>> I'm not following what you are saying Edison.  What are we not
> >>>> doing in this regard right now?  I'm also not getting the "Citrix"
> >>>> point of
> >> reference here.
> >>>
> >>> We don't have a code review process for each commit currently, the
> >>> result
> >> of that, as the code evolving, people add more and more code,
> >> features, bug fixes etc, etc. Then maybe one month later, when you
> >> take a look at the code, which may be quite different than what you
> >> known about. So I want to add a code review process here, maybe start
> from storage subsystem at first.
> >>> The reason I add "Citrix" here, let's take a look at what happened
> >>> in the last
> >> month:
> >>> Mike, from SolidFire,  is asking why there is a hypervisor field in
> >>> the storage
> >> pool, simply, the hypervisor field breaks his code.
> >>> And I don't understand why there is a column, called
> >>> dynamicallyScalable,
> >> in vm_template table.
> >>> I think you will understand my problem here...
> >>>
> >>>
> >>>
> >>>>
> >>>> -chip
> >



RE: [DISCUSS] Do we need code review process for code changes related to storage subsystem?

2013-06-21 Thread Edison Su


> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, June 21, 2013 1:22 PM
> To: 
> Subject: Re: [DISCUSS] Do we need code review process for code changes
> related to storage subsystem?
> 
> On Jun 21, 2013, at 4:18 PM, Edison Su  wrote:
> 
> > How about, for all the interfaces, DB schema changes, related to storage
> subsystem, need to send out a merge request and push the patches into
> review board? It's not only for feature development, but also for bug fixes.
> > I am not sure how many people want to review the changes related to
> > storage subsystem, though. If only I am interested in it, then don't
> > need to do that:)
> 
> I don't understand why storage is different from the rest of the code.

Because nobody has called for reviewing such changes before. If we can make it a 
standard process for all changes related to interfaces and DB schema in the 
CloudStack code, and there are people who would like to review the changes, 
then let's do it.

> 
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Friday, June 21, 2013 1:00 PM
> >> To: dev@cloudstack.apache.org
> >> Cc: 'Chip Childers'
> >> Subject: Re: [DISCUSS] Do we need code review process for code
> >> changes related to storage subsystem?
> >>
> >> Edison,
> >>
> >> The person pushing the merge request should highlight that it
> >> includes interface changes (regardless of whether or not it is a
> >> storage patch).  I also think that we should be pushing all patches
> >> that rise to merge requests into Review Board to allow potential
> >> reviewers a place to cleanly communicate and discuss issues.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 21, 2013, at 3:53 PM, Edison Su  wrote:
> >>
> >>>
> >>>
> >>>> -Original Message-
> >>>> From: John Burwell [mailto:jburw...@basho.com]
> >>>> Sent: Friday, June 21, 2013 11:43 AM
> >>>> To: dev@cloudstack.apache.org
> >>>> Cc: 'Chip Childers'
> >>>> Subject: Re: [DISCUSS] Do we need code review process for code
> >>>> changes related to storage subsystem?
> >>>>
> >>>> Edison,
> >>>>
> >>>> My understanding of our process is that the merges of significant
> >>>> changes should be proposed to the mailing list with the "[MERGE]"
> >>>> tag and wait up to
> >>>> 72 hours for feedback.  I consider interface changes to meet that
> >>>> criteria given the potential to break other folks work.  It sounds
> >>>> like we had a change that inadvertently slipped through without
> >>>> notice to list.  Going forward, I
> >>>
> >>> The problem of current merge request, is that, you don't know what
> >>> kind
> >> of change the merge request did, unless you dig into the code.
> >>> Let's say, there is a merge request, the code touches both vm and
> >>> storage
> >> code, then how do I know, by just taking look at the request, that,
> >> the storage part of code is been changed.
> >>> As there are lot of merge requests, a single person can't review all
> >>> the
> >> merge requests, then likely, the change is slipped without the notice
> >> to other people who wants to review storage related code, even if the
> >> merge request is send out to the list.
> >>>
> >>> Maybe, we should ask for each merge request, need to add a list of
> >>> files
> >> been changed: like the output of "git diff --stat"?
> >>>
> >>>> propose that we follow this process for significant patches and,
> >>>> additionally, push them to Review Board.  As a matter of
> >>>> collaboration, it might also be a good idea to open a "[DISCUSS]"
> >>>> thread during the design/planning stages, but I don't think we need
> >>>> to
> >> require it.
> >>>>
> >>>> Do you think this approach will properly balance to our needs to
> >>>> coordinate/review work with maintaining a smooth flow?
> >>>>
> >>>> Thanks,
> >>>> -John
> >>>>
> >>>>
> >>>> On Jun 21, 2013, at 2:15 PM, Edison Su  wrote:
> >>>>

Re: Review Request: CLOUDSTACK-2304 [ZWPS]NPE while migrating volume from one zone wide primary to another

2013-06-21 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12025/#review22276
---



engine/schema/src/com/cloud/storage/VolumeVO.java
<https://reviews.apache.org/r/12025/#comment45766>

I don't get it for this change. if that.getPodId() is null, then this.podId 
= this.getPodId() will still work, right?



engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
<https://reviews.apache.org/r/12025/#comment45767>

Again, if pool.getPodId() == null, then newVol.setPodId(pool.getPodId()) 
will work.


- edison su


On June 21, 2013, 12:32 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12025/
> ---
> 
> (Updated June 21, 2013, 12:32 p.m.)
> 
> 
> Review request for cloudstack, Sateesh Chodapuneedi, edison su, Alex Huang, 
> and Ram Ganesh.
> 
> 
> Description
> ---
> 
> Issue : while migrating the volume from one ZWPS to another ZWPS then NPE is 
> having which is failing the migration of volume
> Fixed: The issue is, if the volume is in ZWPS then the volume object won't be 
> having podid. 
>while volume migration, ZWPS specific volume cases are not handled.
> Fixed the issues by adding a new constructor in VolumeVO and taken 
> care in VolumeServiceImpl to handle ZWPS volume while migration.
> 
> 
> This addresses bug CLOUDSTACK-2304.
> 
> 
> Diffs
> -
> 
>   engine/schema/src/com/cloud/storage/VolumeVO.java 02c09a2 
>   
> engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
>  1d36f93 
> 
> Diff: https://reviews.apache.org/r/12025/diff/
> 
> 
> Testing
> ---
> 
> Setup:
> Create a KVM cluster and add zwps to the primary storage. ZWPS got mounted on 
> KVM. Created instances in KVM.
> 1. Create a Volume and attach the volume to the running VM. volume got 
> successfully attached to the VM. 
> 2. Detach the Volume and then try to Migrate the Volume to another ZWPS added 
> to the ZONE. volume got migrated successfully to another ZWPS.
>Observed Volume got copied to the new ZWPS and then the old volume is 
> deleted.
>Verified db, the volume uuid got updated and necessary fields.
> 
> Addition Testing:
>  
> Created Xenserver cluster add cluster scope storage.
> 1. create a volume and attach the instance running in Xenserver. Success.
> 2. detach the volume and try to migrate the volume to another cluster scope 
> storage. Volume got successfully migrate to another storage. 
>Observed Volume got copied to the new PS and then the old volume is 
> deleted.
>Verified db, the volume uuid got updated and necessary fields.
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



RE: Resource Management/Allocation for CS

2013-06-21 Thread Edison Su
Found this paper, which sounds interesting: 
http://www.sigmetrics.org/sigmetrics2013/pdfs/p67.pdf

"The physical infrastructure is divided into a large pool of compute and 
storage servers. The former are organized into clusters consisting of tens of 
servers (typically 32 or so). A public cloud may contain hundreds of such 
clusters to get a large-scale deployment. The VMs from a single tenant may span 
an arbitrary set of clusters. This architecture exists for most of the 
deployments based on solutions from VMware
vSphere [27], Microsoft SCVMM [15] and others. In this environment it is 
infeasible to simply extend currently existing resource allocation mechanisms. 
The state-of-the-art today includes cluster management solutions like DRS [12] 
that collect information about VMs from each server in the cluster, and 
allocate CPU and memory resources based on the demand. This clustered model 
has certain advantages like facilitating VM migrations between servers if the 
total allocation to VMs on a server exceeds its physical capacity. However, 
when a tenant's VMs are spread across multiple clusters, a centralized strategy 
becomes impractical, since it requires dynamic per VM information to be made 
available at a cloud-level database shared among hundreds of clusters. Not only 
does this require a large amount of information to be frequently exchanged 
between clusters, but the centralized algorithms will be CPU intensive due to 
the large number of VMs it needs to consider.

The problem of scalable dynamic resource  and we are not aware of any practical 
existing solution. We envision our algorithm to run at the cluster-level and 
allow distributed clusters to work together to provide the customer with the 
abstraction of buying bulk capacity. 
"

Haven't read the whole paper yet, but the above problem statement resonates 
with me. Our current centralized allocation algorithm may have problems in the 
case of a large number of concurrent VM allocations. 



> -Original Message-
> From: Linux TUX [mailto:azgil.i...@yahoo.com]
> Sent: Friday, June 21, 2013 2:27 PM
> To: Prachi Damle; dev@cloudstack.apache.org
> Subject: Re: Resource Management/Allocation for CS
> 
> HiPrachi,
> 
> Thank you for your reply. Surely, this helps. I will work on these
> implementations, and then report my ideas back to the community.
> 
> Thanks,
> Pouya
> 
> 
> 
> 
>  From: Prachi Damle 
> To: "dev@cloudstack.apache.org" ; Linux TUX
> 
> Sent: Saturday, 22 June 2013, 1:28
> Subject: RE: Resource Management/Allocation for CS
> 
> 
> Hi Pouya,
> 
> All of CloudStack's RA heuristics are applied by deployment planner modules
> and host/storagepool allocators to decide the order in which
> resource(pods,clusters,hosts,storage pools) will be considered for a VM
> deployment/migration.
> 
> Following are the existing flavors of these modules:
> random: This just shuffles the list of clusters/hosts/pools that is returned 
> by
> the DB lookup. Random does not mean round-robin - So if you are looking for
> a new host being picked up on every deployment - that may not happen.
> firstfit:  This makes sure that clusters are ordered by available capacity and
> first hosts/pools having enough capacity is chosen within a cluster.
> userdispersing: For a given account, this makes sure that VM's are
> dispersed  - so clusters/hosts with minimum number of running VM's for that
> account are chosen first. Storage Pool with minimum number of Ready
> storage pools for the account is chosen first.
> Userconcentratedpod: Always choose the pod/cluster with max. number of
> VMs for the account - concentrating VM's at one pod.
> 
> You can find the source code related to above under:
> server/src/com/cloud/deploy, plugins/deployment-planners, plugins/host-
> allocators, plugins/storage-allocators
> 
> Hope this helps.
> 
> Thanks,
> Prachi
> 
> -Original Message-
> From: Linux TUX [mailto:azgil.i...@yahoo.com]
> Sent: Friday, June 21, 2013 5:48 AM
> To: dev@cloudstack.apache.org
> Subject: Resource Management/Allocation for CS
> 
> Hi All,
> 
> Does anybody can tell me which parts of CloudStack's source code are
> related to its Resource Allocation (RA) process?
> By RA, I mean the part of code that is responsible for VM migration or
> process migration, if there is any.
> As you know, there are two kinds of RA, to wit: 1) server side such as VM
> migration, or 2) client side such as clients' proprietary schedulers.
> Furthermore, client side's RA's success is dependent on knowing sever side's
> RA very well.
> 
> So, since i am interested to work on RA of CloudStack, if, with regard to the
> above information, you have any idea, please tell me?
> Also, if your are working on it, please let me know. Finally, it would be 
> really
> approciated if you tell me which parts of the source code is related to
> implementation of CloudStack's RA algorithms.
> 
> Best regards,
> Pouya


RE: How does the plugin model work for storage providers?

2013-06-25 Thread Edison Su
I would say the adapter mechanism is legacy stuff; all the storage plugins will be 
managed by DataStoreProviderManager. But as you said, we can group a vendor's 
plugin beans into one place using Spring's syntax.


> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Tuesday, June 25, 2013 7:29 AM
> To: dev@cloudstack.apache.org
> Cc: Edison Su; John Burwell
> Subject: Re: How does the plugin model work for storage providers?
> 
> Understood, I've done that. But I was wondering if there was a generic way
> to group all components (driver, lifecycle, provider) of a vendor-
> implementation into a logical spring context like say:
> 
> In componentContext.xml.in
> 
> 
>   
> 
>   
>   
>   
>   
>   
>   
>   
> 
>   
> 
> On Tue, Jun 25, 2013 at 08:19:40AM -0600, Mike Tutkowski wrote:
> > Hi,
> >
> > Yeah, John Burwell is finishing up the review process for the
> > SolidFire plug-in, so - at present - the code is not in master.
> >
> > To try to answer your question, I had to modify the
> > applicationContext.xml.in file.
> >
> > Here is the line I added:
> >
> > <bean class="org.apache.cloudstack.storage.datastore.provider.SolidfirePrimaryDataStoreProvider" />
> >
> > Talk to you later!
> >
> >
> > On Tue, Jun 25, 2013 at 7:32 AM, Prasanna Santhanam 
> wrote:
> >
> > > Hi
> > >
> > > I noticed that all the storage providers are plugged in via
> > > applicationContext by default. How does one plugin a custom provider
> > > - say for example CompanyXStorageProvider?
> > >
> > > On looking at the SolidFire implementation I found the plugin
> > > doesn't actually come into play when running in either OSS or nonOSS
> mode.
> > > IOW, the plugin isn't injected in either of -
> > > componentContext.xml.in / nonossComponentContext.xml.in. Is the
> merge still in progress?
> > >
> > > Since these are not 'Adapters' so I don't know how to plugin my own
> > > storage provider into the contexts - oss/non-oss. Unlike network
> > > elements the DataStoreProvider, DataStoreLifeCycle seem independant
> > > and don't follow the plugin model. How does this work?
> > >
> > > Thanks,
> > >
> > > --
> > > Prasanna.,
> > >
> > > 
> > >
> > >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *?*
> 
> --
> Prasanna.,
> 
> 



RE: Where should a system VM image be uncompressed?

2013-06-26 Thread Edison Su


> -Original Message-
> From: Donal Lafferty [mailto:donal.laffe...@citrix.com]
> Sent: Wednesday, June 26, 2013 10:01 AM
> To: dev@cloudstack.apache.org
> Subject: Where should a system VM image be uncompressed?
> 
> I noticed that the system VM template is stored in S3 as a .bz2.  E.g. as
> a .vhd.bz2 when using a Hyper-V hypervisor.
> 
> Naturally, you can't run a bz2.  Nor can you make a thin copy of it, say if 
> it's a
> downloaded as a TEMPLATE to a primary storage pool.
> 
> Should it be uncompressed before it goes into S3, when it is copied from S3
> to primary storage, or when a volume is created from the TEMPLATE?
There are two options:
1. Put an uncompressed template URL into the CloudStack DB for the Hyper-V system VM 
template, and disable registering compressed templates/ISOs into S3 if the zone is 
for Hyper-V. If there is no staging area between S3 and primary storage, there 
is no place to uncompress the template.
2. Add a file system supported by Hyper-V as a staging area, so that we can copy the 
compressed template from S3 into that staging area, uncompress it, and import it 
into Hyper-V. Currently we can add NFS as a staging area; possibly you can add CIFS 
as a staging area as well.



Re: Review Request 11992: CLOUDSTACK-2571 ZWPS issues with Enabling/Clearing the Maintenance State of the Storage

2013-06-27 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11992/#review22473
---



server/src/com/cloud/storage/StoragePoolAutomationImpl.java
<https://reviews.apache.org/r/11992/#comment46021>

What I am trying to say is to move the following code into 
StoragePoolAutomationImpl:

List<StoragePoolVO> spes = _storagePoolDao.listBy(primaryStorage.getDataCenterId(),
        primaryStorage.getPodId(), primaryStorage.getClusterId(), ScopeType.CLUSTER);
for (StoragePoolVO sp : spes) {
    if (sp.getStatus() == StoragePoolStatus.PrepareForMaintenance) {
        throw new CloudRuntimeException("Only one storage pool in a cluster can be in PrepareForMaintenance mode, "
                + sp.getId() + " is already in PrepareForMaintenance mode");
    }
}




- edison su


On June 27, 2013, 12:39 p.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11992/
> ---
> 
> (Updated June 27, 2013, 12:39 p.m.)
> 
> 
> Review request for cloudstack, Alex Huang, edison su, Ram Ganesh, and Sateesh 
> Chodapuneedi.
> 
> 
> Bugs: CLOUDSTACK-2571
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Issue : NPE's are happening when ZWPS is put in maintenance, removed from 
> maintenance.
> 
> Fixed:
> 1. Added ZONE scope storage handling in StorageManagerImpl and 
> StoragePoolAutomationImpl 
> 2. Modified PrimaryDataStoreDao, listBy method to take poolid to Wrapper 
> class of long instead of primitive. Modified associated DaoImpl.
> 3. StoragePoolAutomationImpl, when storage is setting to Maintenance mode, 
> handled the case for ZONE wide scope of storage. 
>if the storage is zone wide, get all the hosts(kvm, vmware) in zone and 
> send the ModifyStoragePool command (with false)
> 4. When the user cancels maintenance mode, the ZONE-wide scope of the storage 
> pool is handled.
> 5. Once the Storage is in maintenance, Deletion of the Storage will remove 
> the mount points from all the hosts.
> 
> This patch will solve all the issues pertaining to keeping/cancelling the 
> ZONE wide primary storage.
> 
> 
> Diffs
> -
> 
>   
> engine/api/src/org/apache/cloudstack/storage/datastore/db/PrimaryDataStoreDao.java
>  99b7b9c 
>   
> engine/api/src/org/apache/cloudstack/storage/datastore/db/PrimaryDataStoreDaoImpl.java
>  8f7826f 
>   server/src/com/cloud/storage/StorageManagerImpl.java b3e8b96 
>   server/src/com/cloud/storage/StoragePoolAutomationImpl.java 4001775 
> 
> Diff: https://reviews.apache.org/r/11992/diff/
> 
> 
> Testing
> ---
> 
> Manual Testing
> = 
> 1. Enable maintenance mode of Zone wide storage , There were no NPE's 
> happening and successfully kept the storage in maintenance mode. Verified DB 
> status.
> 2. Cancel maintenance mode of Zone wide storage, There were no NPE's 
> happening and successfully kept the storage in UP state.
> 3. Enable maintenance mode of zone wide, once successful then Delete the 
> storage, Storage got deleted successfuly. Verify the hosts,  Storage got 
> unmounted and verified the DB status.
> Addition Tests (As the common code path is modified):
> 1. Add the Cluster scope of primary storage (kvm , xenserver). Adding the 
> storage in both clusters is successful.
> kvm specific:
> 
> 2. Enable Maintenance Mode of cluster scope kvm storage. Successfully enabled 
> the storage in maintenance state. 
> 3. Cancel the Maintenance Mode of cluster scope kvm storage. Successfully 
> enabled the storage in UP state.
> 4. Enable Maintenance Mode of cluster scope kvm storage. Delete the storage. 
> Storage got successfully deleted, unmounted from hosts and from db.
> 
> Xenserver specific:
> ===
> 5. Enable Maintenance Mode of cluster scope Xenserver storage. Successfully 
> enabled the storage in maintenance state. 
> 6. Cancel the Maintenance Mode of cluster scope Xenserver storage. 
> Successfully enabled the storage in UP state.
> 7. Enable Maintenance Mode of cluster scope Xenserver storage. Delete the 
> storage. Storage got successfully deleted, unmounted from hosts and from db.
> 
> ZWPS is supported in KVM and VMware, the common code is modified. It should 
> work of VMWare as well without any issues
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request 11980: FileUtil simplified

2013-06-27 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11980/#review22474
---

Ship it!


Ship It!

- edison su


On June 27, 2013, 6:30 a.m., Laszlo Hornyak wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11980/
> ---
> 
> (Updated June 27, 2013, 6:30 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> - writeToFile removed since no references to it
> - readFileAsString replaced with FileUtils.readFileToString
> - minor code duplication removed in dependent method getNicStats
> - unit test added
> 
> 
> Diffs
> -
> 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  b1bc99d 
>   
> plugins/hypervisors/kvm/test/com/cloud/hypervisor/kvm/resource/LibvirtComputingResourceTest.java
>  c82c31f 
>   utils/src/com/cloud/utils/FileUtil.java 74f4088 
> 
> Diff: https://reviews.apache.org/r/11980/diff/
> 
> 
> Testing
> ---
> 
> test included
> 
> 
> Thanks,
> 
> Laszlo Hornyak
> 
>



Re: Review Request 12131: CLOUDSTACK-3215 Cannot Deploy VM when using S3 object store without NFS Cache

2013-06-27 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12131/#review22478
---



engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
<https://reviews.apache.org/r/12131/#comment46022>


In most cases (VMware/Xen/KVM), the current code requires cache storage when S3 is 
used. We can't simply skip the cache; if there is no cache storage available 
in the system for those hypervisors, we should throw an exception immediately. 

Better to add code in needCacheStorage(), or subclass 
AncientDataMotionStrategy for Hyper-V.

For example, you can add the following code in needCacheStorage():

if (srcData.getType() == DataObjectType.Template) {
    TemplateInfo template = (TemplateInfo) srcData;
    if (template.getHypervisorType() == HypervisorType.Hyperv) {
        return false;
    }
}



- edison su


On June 27, 2013, 10:27 a.m., Donal Lafferty wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12131/
> ---
> 
> (Updated June 27, 2013, 10:27 a.m.)
> 
> 
> Review request for cloudstack, edison su and Min Chen.
> 
> 
> Bugs: CLOUDSTACK-3215
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fix https://issues.apache.org/jira/browse/CLOUDSTACK-3215 by changing code to 
> not use a cache for image transfer if one can't be found.  Previously, the 
> management server entered a failure state.
> Also, added addition debug logging.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/cache/src/org/apache/cloudstack/storage/cache/manager/StorageCacheManagerImpl.java
>  4b4e52106ffbf70bcf2f6a656a8b8e4cacd6f91e 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  631de6a47a3eff510c84aa275fd87f8fa2f7780b 
> 
> Diff: https://reviews.apache.org/r/12131/diff/
> 
> 
> Testing
> ---
> 
> Code executed on deployement using S3 and no NFS cache.  Did not have 
> facilities to test on S3 with a cache. 
> 
> 
> Thanks,
> 
> Donal Lafferty
> 
>



Re: Review Request 12106: Removed Dead Code from Management Server Hyper-V 2012 Support

2013-06-27 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12106/#review22480
---

Ship it!


Ship It!

- edison su


On June 27, 2013, 8:56 a.m., Donal Lafferty wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12106/
> ---
> 
> (Updated June 27, 2013, 8:56 a.m.)
> 
> 
> Review request for cloudstack, Alex Huang, Chip Childers, and edison su.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Update ImageFormat enum to include VHDX format introduced with Hyper-V Server 
> 2012.
> Remove existing Hyper-V plugin, because it does not work and is dead code.
> Remove references to existing Hyper-V plugin from config files.
> Remove Hypervisor.HypervisorType.Hyperv special cases from manager code that 
> are unused or unsupported.
> Specifically, there is no CIFS secondary storage class 
> "CifsSecondaryStorageResource".  Also, the Hyper-V plugin's ServerResource is 
> contacted by the management server and not the other way around.
> Add Hyperv-V support to ListHypervisorsCmd API call
> 
> 
> Diffs
> -
> 
>   agent/src/com/cloud/agent/VmmAgentShell.java 
> 190d1168284243f9e860677a03311da9db31c745 
>   api/src/com/cloud/storage/Storage.java 
> 9a50ffa786c9a1b516baa7b8bce04de7d3c7b2bc 
>   client/tomcatconf/applicationContext.xml.in 
> ac1f3e46fdb43b4a5e38f9f2b3c498a2c258a0aa 
>   core/src/com/cloud/hypervisor/hyperv/resource/HypervResource.java 
> 725f0cc1ae214447f20f5cd51c40e7e9128f0506 
>   engine/storage/integration-test/test/resource/component.xml 
> 5ba87e8ebe9a682b320c60caf0e0057d2eb92027 
>   plugins/network-elements/dns-notifier/resources/components-example.xml 
> 717c5e063fe0271eeb1143215a493c6342c5811e 
>   server/src/com/cloud/configuration/Config.java 
> ba508495c350714fff90da0923b046783d42b1c3 
>   server/src/com/cloud/hypervisor/guru/HypervGuru.java 
> 630080e21cbb6a501d67bda41179feb278b3d79c 
>   server/src/com/cloud/hypervisor/hyperv/HypervServerDiscoverer.java 
> 06658b7f3e2ea5e80edf012dcaf29d980758084d 
>   server/src/com/cloud/resource/ResourceManagerImpl.java 
> fe91cb337d0f5901012ca45e131d9cb2f7c54cf2 
>   server/src/com/cloud/storage/secondary/SecondaryStorageManagerImpl.java 
> 954c7e970f02e7ba0eae6e8e1616d9a08c9168b0 
> 
> Diff: https://reviews.apache.org/r/12106/diff/
> 
> 
> Testing
> ---
> 
> Compiled using mvn clean install, used in integration test on cshv3 branch 
> from github mirror.
> 
> 
> Thanks,
> 
> Donal Lafferty
> 
>



Re: Review Request 12131: CLOUDSTACK-3215 Cannot Deploy VM when using S3 object store without NFS Cache

2013-06-27 Thread edison su


> On June 27, 2013, 6:36 p.m., edison su wrote:
> > engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java,
> >  line 170
> > <https://reviews.apache.org/r/12131/diff/1/?file=312666#file312666line170>
> >
> > 
> > In most of the cases(vmware/xen/kvm), must to have cache storage if S3 
> > is used, in the current code. We can't say, if there is no cache storage 
> > available in the system for those hypervisors, we should throw exception 
> > immediately. 
> > 
> > Better to add code in needCacheStorage(), or subclass 
> > ancientDataMotionStrategy for hyperV.
> > 
> > For example, you can add following code in needcachestorage():
> > 
> > if (srcData.getType() == DataObjectType.Template) {
> >TemplateInfo template = (TemplateInfo)srcData;
> >if (template.getHypervisorType() == HypervisorType.HperV) {
> >   return false; 
> >}
> > }
> > }
> > 
> >
> 
> Donal Lafferty wrote:
> Sure, what do you want to throw?  E.g. what object type do you want to 
> throw.

In StorageCacheManagerImpl.createCacheObject(DataObject data, Scope scope) we should 
have the following code:

DataStore cacheStore = this.getCacheStorage(scope);
if (cacheStore == null) {
    s_logger.debug("no cache storage available for scope " + scope);
    throw new CloudRuntimeException("no cache storage available");
}


- edison


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12131/#review22478
---


On June 27, 2013, 10:27 a.m., Donal Lafferty wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12131/
> ---
> 
> (Updated June 27, 2013, 10:27 a.m.)
> 
> 
> Review request for cloudstack, edison su and Min Chen.
> 
> 
> Bugs: CLOUDSTACK-3215
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fix https://issues.apache.org/jira/browse/CLOUDSTACK-3215 by changing code to 
> not use a cache for image transfer if one can't be found.  Previously, the 
> management server entered a failure state.
> Also, added addition debug logging.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/cache/src/org/apache/cloudstack/storage/cache/manager/StorageCacheManagerImpl.java
>  4b4e52106ffbf70bcf2f6a656a8b8e4cacd6f91e 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  631de6a47a3eff510c84aa275fd87f8fa2f7780b 
> 
> Diff: https://reviews.apache.org/r/12131/diff/
> 
> 
> Testing
> ---
> 
> Code executed on deployement using S3 and no NFS cache.  Did not have 
> facilities to test on S3 with a cache. 
> 
> 
> Thanks,
> 
> Donal Lafferty
> 
>



RE: [MERGE] Simulator Storage Fixes to master (was Re: How does the plugin model work for storage providers?)

2013-06-27 Thread Edison Su
Hi Prasanna, thanks for your great effort to get the simulator working. I reviewed 
all your changes and they look good to me; there is only one thing I don't like, the 
implementation of SimulatorImageStoreDriverImpl, so I rewrote it with a simpler 
implementation. The commit is 
a2ec1daf8422d0a1c789e331a3261b05a4501060, please double check.
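The actual rewrite is in the commit above; purely as an illustration of the idea 
being discussed (not the committed code, and every name below is made up), a 
simulator-style image store driver moves no bytes at all and simply acknowledges 
each requested object as already present, so templates and volumes can go straight 
to Ready.

// Illustration only -- not the code in the commit referenced above.
import java.util.function.Consumer;

public class SimulatorImageStoreDriverSketch {

    public static class CreateResult {
        public final boolean success;
        public final String path;
        CreateResult(boolean success, String path) {
            this.success = success;
            this.path = path;
        }
    }

    /** Pretend to download/copy the object and immediately report success. */
    public void createAsync(String objectUuid, Consumer<CreateResult> callback) {
        String fakePath = "template/tmpl/1/" + objectUuid;  // hypothetical layout
        callback.accept(new CreateResult(true, fakePath));   // nothing is actually transferred
    }

    public static void main(String[] args) {
        new SimulatorImageStoreDriverSketch().createAsync("routing-1",
                r -> System.out.println("ready at " + r.path + " (success=" + r.success + ")"));
    }
}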

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Thursday, June 27, 2013 8:07 AM
> To: CloudStack Dev
> Cc: Edison Su; Min Chen
> Subject: [MERGE] Simulator Storage Fixes to master (was Re: How does the
> plugin model work for storage providers?)
> 
> On Wed, Jun 26, 2013 at 05:18:14PM +0530, Prasanna Santhanam wrote:
> > Reason I needed to do this is because I need to get the simulator to
> > use a separate spring context and override the default
> > CloudstackImageStore bean. The simulator bean has to intercept the
> > template download calls to say everything is present.
> >
> > But looks like the dependency graph for this goes up to the
> > DataMotionService -> DataMotionStrategy -> DataStoreManager ->
> > DataStoreProviderManager -> DataStoreProvider.
> >
> > I will push the changes to a branch for review.
> >
> 
> This is a MERGE request to include the new storage subsystem support for
> the simulator. I'm including this as a merge since it touches some storage
> code (minor setters here and there) and reorganizes the Spring beans in to
> different bean contexts. With this 4.2 can run simulator and we can help
> jclouds run their live tests against the simulator end points as described in 
> [1].
> And Ian to setup his pipeline for the LDAP testing [2]. And of course to get
> checkin tests working again [3]
> 
> I got spring contexts working by overriding the beans loaded by management
> server. I've separated the beans such that OSS hypervisor related beans go
> into componentContext.xml and the VmWare and Solidfire related beans go
> into nonossComponentContext.xml.
> 
> 
> The deployVM, registerTemplate, downloadVolume operations work fine on
> the simulator now along with the checkin tests. I will run some more tests on
> the branch later with devcloud and real hypervisors tomorrow and merge it
> with master if there are no objections.
> 
> branch: simulatorStorageFixes
> rebased on top of current master , will re-rebase if there are any other
> changes/reviews. The review-chunks are in the following commits:
> 
> a6bb56b10dd25b2b248644e0dbb2a0f394499cee Set all templates/volumes to
> Ready in the simulator
> 373dd2b8b9db50c4c5b63ed9ddf419f1296977c1 Don't report back resource
> state to ResourceManagerImpl
> 2488694ca8ed2ff5f54dce26e518be9c2e920aa1 Group storage subsystem
> components for spring
> 724b3f423016546116a1a0fb499fab9f90658523 DataStore - provider, lifecycle,
> driver implementations for simulator
> 
> [1] http://cloudstack.markmail.org/thread/ume76fqka7fi5nfz
> [2] http://markmail.org/message/hpy6vrbmxrpcqywx
> [3] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Marvin+-
> +Testing+with+Python#Marvin-TestingwithPython-
> 
> Thanks,
> 
> --
> Prasanna.,
> 
> 



Re: Review Request 12144: Create template from snapshot, failed due to the vmdk file failure.

2013-06-27 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12144/#review22492
---

Ship it!


Ship It!

- edison su


On June 27, 2013, 11:10 p.m., Fang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12144/
> ---
> 
> (Updated June 27, 2013, 11:10 p.m.)
> 
> 
> Review request for cloudstack and edison su.
> 
> 
> Bugs: CLOUDSTACK-2384
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> This is for master-6-17-stable. Fix the bug cloudstack-2384
> 
> 
> Diffs
> -
> 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareStorageManagerImpl.java
>  4ae0f30 
> 
> Diff: https://reviews.apache.org/r/12144/diff/
> 
> 
> Testing
> ---
> 
> Create snapshot;
> Create template of the snapshot, now the vmdk file is correctly created in 
> template directory.
> 
> [root@nfs2 209]# ls -lt
> total 410724
> -rw-r--r-- 1 root root 420134912 Jun 27 14:19 
> 585fcc4b-a00b-4dc3-a03b-cab09c41cd15-disk0.vmdk
> -rw-r--r-- 1 root root   239 Jun 27 14:19 
> d2d4d8f2-2916-4034-8053-3ed011d1be6e.ova.meta
> -rw-r--r-- 1 root root   299 Jun 27 14:19 template.properties
> -rw-r--r-- 1 root root  3839 Jun 27 14:19 
> 585fcc4b-a00b-4dc3-a03b-cab09c41cd15.ovf
> 
> 
> Thanks,
> 
> Fang Wang
> 
>



RE: Review Request 12144: Create template from snapshot, failed due to the vmdk file failure.

2013-06-27 Thread Edison Su
The bug seems to have been introduced in 
13691048fb622467271b75fc7b64298e3afe9912, which was committed on March 16, so 
4.1 should have the same issue.

> -Original Message-
> From: Musayev, Ilya [mailto:imusa...@webmd.net]
> Sent: Thursday, June 27, 2013 4:13 PM
> To: dev@cloudstack.apache.org; Fang Wang; Edison Su
> Cc: cloudstack
> Subject: RE: Review Request 12144: Create template from snapshot, failed
> due to the vmdk file failure.
> 
> Is this applicable to 4.1?
> 
> > -Original Message-
> > From: Fang Wang [mailto:nore...@reviews.apache.org] On Behalf Of Fang
> > Wang
> > Sent: Thursday, June 27, 2013 7:11 PM
> > To: edison su
> > Cc: cloudstack; Fang Wang
> > Subject: Review Request 12144: Create template from snapshot, failed
> > due to the vmdk file failure.
> >
> >
> > ---
> > This is an automatically generated e-mail. To reply, visit:
> > https://reviews.apache.org/r/12144/
> > ---
> >
> > Review request for cloudstack and edison su.
> >
> >
> > Bugs: CLOUDSTACK-2384
> >
> >
> > Repository: cloudstack-git
> >
> >
> > Description
> > ---
> >
> > This is for master-6-17-stable. Fix the bug cloudstack-2384
> >
> >
> > Diffs
> > -
> >
> >
> >
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/
> > VmwareStorageManagerImpl.java 4ae0f30
> >
> > Diff: https://reviews.apache.org/r/12144/diff/
> >
> >
> > Testing
> > ---
> >
> > Create snapshot;
> > Create template of the snapshot, now the vmdk file is correctly
> > created in template directory.
> >
> > [root@nfs2 209]# ls -lt
> > total 410724
> > -rw-r--r-- 1 root root 420134912 Jun 27 14:19 585fcc4b-a00b-4dc3-a03b-
> > cab09c41cd15-disk0.vmdk
> > -rw-r--r-- 1 root root   239 Jun 27 14:19 d2d4d8f2-2916-4034-8053-
> > 3ed011d1be6e.ova.meta
> > -rw-r--r-- 1 root root   299 Jun 27 14:19 template.properties
> > -rw-r--r-- 1 root root  3839 Jun 27 14:19 585fcc4b-a00b-4dc3-a03b-
> > cab09c41cd15.ovf
> >
> >
> > Thanks,
> >
> > Fang Wang



RE: Uncompressed SystemVM URL

2013-06-27 Thread Edison Su
I created an uncompressed vhd at 
http://jenkins.cloudstack.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-06-27-master-hyperv.vhd
You can find the latest system VM template at 
http://jenkins.cloudstack.org/view/master/job/build-systemvm-master/


> -Original Message-
> From: Donal Lafferty [mailto:donal.laffe...@citrix.com]
> Sent: Thursday, June 27, 2013 8:56 AM
> To: dev@cloudstack.apache.org
> Subject: Uncompressed SystemVM URL
> 
> Anyone know an uncompressed template url?  (vhd format)


RE: Master broken on systemVM start

2013-06-28 Thread Edison Su
Devcloud has a trick for cleaning up systemvm.iso:
There is a bug in xapi on Ubuntu/Debian (not sure whether the bug is still 
there in the latest Debian) that prevents attaching an ISO to a VM. I worked 
around it by creating a VDI on primary storage, copying systemvm.iso into that 
VDI, and then attaching the VDI to the system VM. The downside of this 
workaround is that the VDI is not kept in sync with systemvm.iso.
In order to clean up devcloud, you need to either clean up primary storage or 
restore devcloud from a snapshot for each run in your test infra.

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Friday, June 28, 2013 10:22 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Master broken on systemVM start
> 
> I deleted all traces of systemvm.iso from my codebase:
> $ find . -name systemvm.iso | xargs -n1 rm -f
> 
> Then reverted my devcloud snapshot to old state and don't see this error
> again.
> 
> I wonder why mvn doesn't do a good job of cleaning though :/
> 
> Thanks for the fixes to the bugs and the upcoming fixes for the new ones. I'll
> be switching over the test infra to run object store backed secondary only
> from next week so we should have more results.
> 
> --
> Prasanna.,
> 
> On Fri, Jun 28, 2013 at 04:35:29PM +, Min Chen wrote:
> > Prasanna, glad to know that CLOUDSTACK-3137 is resolved. I fixed
> > CLOUDSTACK-3249 yesterday for a corner case reported by Alena, but I
> > could not reproduce CLOUDSTACK-3137 in my environment at all. For the
> > JSON issue, I can only tell that it is an inconsistency between the
> > systemvm code and the management server. In 4.2, we changed
> > ArrayTypeAdaptor to return the fully qualified command class name (a
> > rough sketch of that wrapping follows this thread), so I don't quite get
> > why you are still seeing the abbreviated command name in your log. Yes,
> > StartupSecondaryStorageCommand is still there. From your log, it seems
> > that your ssvm has the old code.
> >
> > Thanks
> > -min
> >
> > On 6/28/13 12:03 AM, "Prasanna Santhanam"  wrote:
> >
> > >On Fri, Jun 28, 2013 at 12:17:55PM +0530, Prasanna Santhanam wrote:
> > >> On Tue, Jun 25, 2013 at 04:07:08PM +, Min Chen wrote:
> > >> > The JSON deserialization issue is caused by an out-of-date
> > >> > systemvm.iso on your hypervisor host. You need to rebuild
> > >> > systemvm.iso and deploy it to your hypervisor host.
> > >> >
> > >>
> > >> CLOUDSTACK-3137 exists and seems to be a problem with HA and
> > >> systemVMs. The JSON serialization issue also exists for me despite
> > >> rebuilding the systemvm with -Dsystemvm.
> > >
> > >Ok - CLOUDSTACK-3137 _may_ have been fixed. The last few test-matrix
> > >jobs have succeeded. Perhaps related to the fix for CLOUDSTACK-3249.
> > >
> > >>
> > >> I see the deserialization error when the ssvm agent sends this to the
> > >> management server:
> > >>
> > >> 2013-06-28 06:42:50,609 DEBUG [cloud.agent.Agent] (Agent-Handler-1:)
> > >> Sending Startup: Seq -1-2:  { Cmd , MgmtId: -1, via: -1, Ver: v1, Flags: 101,
> > >> [{"StartupSecondaryStorageCommand":{"type":"SecondaryStorage",
> > >> "dataCenter":"1","pod":"1","guid":"s-1-VM-NfsSecondaryStorageResource",
> > >> "name":"s-1-VM","version":"4.2.0-SNAPSHOT","iqn":"NoIqn",
> > >> "publicIpAddress":"192.168.56.144","publicNetmask":"255.255.255.0",
> > >> "publicMacAddress":"06:5c:ac:00:00:42","privateIpAddress":"192.168.56.217",
> > >> "privateMacAddress":"06:aa:84:00:00:12","privateNetmask":"255.255.255.0",
> > >> "storageIpAddress":"192.168.56.217","storageNetmask":"255.255.255.0",
> > >> "storageMacAddress":"06:aa:84:00:00:12",
> > >> "resourceName":"NfsSecondaryStorageResource","wait":0}}] }
> > >>
> > >> This happens both when I have an S3 store added and when I have regular
> > >> NFS storage added. Is the startup command still required?
> > >>
> > >> --
> > >> Prasanna.,
> > >>
> > >> 
> > >> Powered by BigRock.com
> > >
> > >--
> > >Prasanna.,
> > >
> > >
> > >Powered by BigRock.com
> > >
> 
> 
> 
> Powered by BigRock.com
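
To picture the fully qualified wrapping Min Chen mentions above: each command 
in the agent's JSON payload is keyed by its command class name, and 4.2 
switched that key from the simple name to the fully qualified name, so an old 
ssvm and a new management server no longer agree on the keys. A rough Gson 
sketch (the command class and wrapping shape here are simplified assumptions, 
not the real ArrayTypeAdaptor code):

    import com.google.gson.Gson;
    import com.google.gson.JsonObject;

    public class CommandWrappingSketch {
        // Stand-in for a CloudStack agent command; real commands carry many
        // more fields than this.
        static class StartupSecondaryStorageCommand {
            String type = "SecondaryStorage";
        }

        public static void main(String[] args) {
            Gson gson = new Gson();
            StartupSecondaryStorageCommand cmd = new StartupSecondaryStorageCommand();

            // Pre-4.2 agents key the payload by the simple class name...
            JsonObject oldStyle = new JsonObject();
            oldStyle.add(cmd.getClass().getSimpleName(), gson.toJsonTree(cmd));

            // ...while a 4.2 management server expects the fully qualified
            // name, so both sides must run matching code to agree on keys.
            JsonObject newStyle = new JsonObject();
            newStyle.add(cmd.getClass().getName(), gson.toJsonTree(cmd));

            System.out.println(oldStyle);
            System.out.println(newStyle);
        }
    }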



Re: Review Request 12131: CLOUDSTACK-3215 Cannot Deploy VM when using S3 object store without NFS Cache

2013-06-28 Thread edison su

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12131/#review22548
---

Ship it!


Ship It!

- edison su


On June 28, 2013, 7:46 p.m., Donal Lafferty wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12131/
> ---
> 
> (Updated June 28, 2013, 7:46 p.m.)
> 
> 
> Review request for cloudstack, edison su and Min Chen.
> 
> 
> Bugs: CLOUDSTACK-3215
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Fix https://issues.apache.org/jira/browse/CLOUDSTACK-3215 by changing the code
> to not use a cache for image transfer if one can't be found (a simplified
> sketch of that fallback follows this thread). Previously, the management
> server entered a failure state. Also added additional debug logging.
> 
> 
> Diffs
> -
> 
>   
> engine/storage/cache/src/org/apache/cloudstack/storage/cache/manager/StorageCacheManagerImpl.java
>  4b4e52106ffbf70bcf2f6a656a8b8e4cacd6f91e 
>   
> engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  631de6a47a3eff510c84aa275fd87f8fa2f7780b 
> 
> Diff: https://reviews.apache.org/r/12131/diff/
> 
> 
> Testing
> ---
> 
> Code executed on a deployment using S3 and no NFS cache. Did not have 
> facilities to test S3 with a cache. 
> 
> 
> Thanks,
> 
> Donal Lafferty
> 
>
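
The shape of the fix described above is essentially a fallback for when no 
cache store exists in the scope. A simplified sketch (the type and method names 
are assumptions, not the actual StorageCacheManagerImpl or 
AncientDataMotionStrategy code):

    // Simplified illustration of the cache-miss fallback; names are
    // assumptions rather than the real CloudStack classes.
    class ImageTransferSketch {
        interface CacheStore {
            String stagePath(String objectUri);
        }

        private final CacheStore cache; // null when no NFS cache is configured

        ImageTransferSketch(CacheStore cache) {
            this.cache = cache;
        }

        String resolveSourcePath(String objectUri) {
            if (cache == null) {
                // No cache store in this scope: log it and transfer directly
                // from the object store instead of failing the operation.
                System.out.println("No NFS cache found, direct transfer for " + objectUri);
                return objectUri;
            }
            // Otherwise stage the object on the NFS cache first.
            return cache.stagePath(objectUri);
        }
    }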


