[GitHub] cloudstack issue #1639: CLOUDSTACK-9453: WIP : Marvin optimizations and fixe...

2016-09-09 Thread jburwell
Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1639
  
@abhinandanprateek could you please investigate the cause of the Jenkins 
failure?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack issue #1658: Added an additional JSON diff output to the ApiXmlDo...

2016-09-09 Thread pdion891
Github user pdion891 commented on the issue:

https://github.com/apache/cloudstack/pull/1658
  
LGTM, that's useful.




[GitHub] cloudstack issue #1658: Added an additional JSON diff output to the ApiXmlDo...

2016-09-09 Thread swill
Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/1658
  
@jburwell can we merge this?  Not sure what process you have in place right 
now.  Thx...




[GitHub] cloudstack issue #1658: Added an additional JSON diff output to the ApiXmlDo...

2016-09-09 Thread jburwell
Github user jburwell commented on the issue:

https://github.com/apache/cloudstack/pull/1658
  
@swill any committer may merge so long as there is at least one code review 
LGTM, one test LGTM, and no -1s.  I see two code review LGTMs.  Since this PR is 
for docs, is there a way to test it?  If not, LGTM.




[GitHub] cloudstack issue #1658: Added an additional JSON diff output to the ApiXmlDo...

2016-09-09 Thread swill
Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/1658
  
Well, it is to help generate the docs.  I have included the `txt` and the 
new `json` output produced by this addition in the original post (OP).  I have 
also used this code to create the release notes linked in the OP, so I would 
consider the code tested.
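For illustration, a diff of this kind could be sketched as follows (the function name, input shape, and JSON layout here are hypothetical and simplified, not the actual ApiXmlDocWriter output format):

```python
# Hypothetical sketch: compare the API commands of two releases and
# record what was added, removed, or changed, as JSON suitable for
# release notes. Illustrative only; not CloudStack's real code.
import json

def diff_api_commands(old, new):
    """Return a JSON-serializable diff of two {name: description} maps."""
    old_names, new_names = set(old), set(new)
    return {
        "added":   sorted(new_names - old_names),
        "removed": sorted(old_names - new_names),
        "changed": sorted(n for n in old_names & new_names
                          if old[n] != new[n]),
    }

old = {"deployVirtualMachine": "Creates a VM", "oldApi": "Legacy call"}
new = {"deployVirtualMachine": "Creates and starts a VM", "newApi": "New call"}
print(json.dumps(diff_api_commands(old, new), indent=2))
```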

I have your blessing to merge this then?




[GitHub] cloudstack issue #872: Strongswan vpn feature

2016-09-09 Thread swill
Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/872
  
Is anyone working on this right now?  

Having reviewed this thread, I believe the following pieces are still 
outstanding:
- fix merge conflicts.
- potentially: upgrade the VR to use Debian 8 (since we will be removing 
OpenSwan, which blocked that upgrade previously).
- update implementation to use 5.x to better support NATed connections.
- build a new systemvmtemplate from this branch on master.
- test site-to-site vpn functionality.
-- create ACS side first, then remote side, then connect.
-- create remote side, then ACS side, then connect.
-- break the connection from each side to verify that it is renegotiated and 
re-established.
- test client-to-site vpn functionality.
-- test from: Mac, Windows, and Ubuntu.
-- test from behind NATed connection.
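For the site-to-site tests above, a minimal strongSwan connection definition might look like the following (the connection name, addresses, and subnets are placeholders, not what the VR actually generates; a matching pre-shared key would also be needed in `/etc/ipsec.secrets`):

```
# /etc/ipsec.conf -- hypothetical site-to-site connection (placeholders)
conn acs-to-remote
    keyexchange=ikev2
    authby=secret
    left=203.0.113.10          # ACS-side public IP
    leftsubnet=10.1.1.0/24     # guest network behind the VR
    right=198.51.100.20        # remote gateway public IP
    rightsubnet=10.2.2.0/24    # remote private subnet
    auto=start
```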

What am I missing?  I am looking at potentially picking this up to try to 
get it fixed and ready to merge, so any feedback from the people who have 
reviewed this so far would be appreciated.

@jburwell, @jayapalu, @rhtyd, @pdion891, @remibergsma 




[GitHub] cloudstack issue #872: Strongswan vpn feature

2016-09-09 Thread swill
Github user swill commented on the issue:

https://github.com/apache/cloudstack/pull/872
  
@jayapalu are you active enough that, if I make pull requests against your 
branch, you can make the changes available in this PR?  Or should I start from 
your work, develop and test in my own branch, and create a new PR when it is 
ready for community testing?




[GitHub] cloudstack issue #1660: CLOUDSTACK-9470: [BLOCKER] Bug in SshHelper affectin...

2016-09-09 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1660
  
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has 
enough of everything. Can one of the committers merge it? 




[GitHub] cloudstack issue #1651: Marvin Tests: fix expected return string for success...

2016-09-09 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1651
  
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has 
enough of everything. Can one of the committers merge it? 




[GitHub] cloudstack issue #1605: CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve p...

2016-09-09 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/1605
  
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has 
enough of everything. Can one of the committers merge it? 




Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
I am not a Java developer, so I am at a total loss on Mike’s approach. How 
would end users choose this new storage pool allocator from the UI when 
provisioning a new instance?

My hope is that if the feature is added to ACS, end users can assign an 
anti-storage affinity group to VM instances, just as they assign anti-host 
affinity groups from the UI or API, either at VM creation time or by updating 
assignments for existing instances (along with any necessary VM stop/start, 
storage migration actions, etc.).

Obviously, this feature is useful only when more than one primary storage 
device is available for the same cluster or zone (in the case of zone-wide 
primary storage volumes).

Just curious, how many primary storage volumes are available for your 
clusters/zones? 

Regards,
Yiping

On 9/8/16, 6:04 PM, "Tutkowski, Mike"  wrote:

Personally, I think the most flexible way is if you have a developer write 
a storage-pool allocator to customize the placement of virtual disks as you see 
fit.

You extend the StoragePoolAllocator class, write your logic, and update a 
config file so that Spring is aware of the new allocator and creates an 
instance of it when the management server is started up.

You might even want to extend ClusterScopeStoragePoolAllocator (instead of 
directly implementing StoragePoolAllocator) as it possibly provides some useful 
functionality for you already.
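As a rough illustration of the kind of logic such an allocator could apply (this is schematic Python pseudologic, not CloudStack's actual StoragePoolAllocator interface), one simple policy is to try the least-loaded candidate pool first:

```python
# Schematic sketch of custom pool-selection logic: order candidate
# pools so the one with the fewest volumes already allocated is tried
# first, spreading virtual disks roughly evenly. Illustrative only;
# not CloudStack's real allocator API.
def allocate(candidate_pools, volumes_per_pool):
    """candidate_pools: list of pool names; volumes_per_pool: {name: count}."""
    return sorted(candidate_pools,
                  key=lambda p: volumes_per_pool.get(p, 0))

pools = ["primary1", "primary2"]
usage = {"primary1": 10, "primary2": 3}
print(allocate(pools, usage))  # least-loaded pool comes first
```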

From: Marty Godsey 
Sent: Thursday, September 8, 2016 6:27 PM
To: dev@cloudstack.apache.org
Subject: RE: storage affinity groups

So what would be the best way to do it? I use templates to make it simple 
for my users so that, for example, the Xen tools are already installed.

Regards,
Marty Godsey

-Original Message-
From: Yiping Zhang [mailto:yzh...@marketo.com]
Sent: Thursday, September 8, 2016 7:55 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

Well, using tags leads to a proliferation of templates, service offerings, 
etc. It is not very scalable and gets out of hand very quickly.

Yiping

On 9/8/16, 4:25 PM, "Marty Godsey"  wrote:

I do this by using storage tags. As an example, I have some templates 
that are created on either SSD or magnetic storage. The template has a storage 
tag associated with it, and then I assign the appropriate storage tag to the 
primary storage.

Regards,
Marty Godsey

-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
Sent: Thursday, September 8, 2016 7:16 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

If one doesn't already exist, you can write a custom storage allocator 
to handle this scenario.

> On Sep 8, 2016, at 4:25 PM, Yiping Zhang  wrote:
>
> Hi,  Devs:
>
> We all know how (anti)-host affinity groups work in CloudStack. I am 
wondering if there is a similar concept for an (anti)-storage affinity group?
>
> The use case is this:  in a setup with just one (somewhat) 
unreliable primary storage, if the primary storage is off line, then all VM 
instances would be impacted. Now if we have two primary storage volumes for the 
cluster, then when one of them goes offline, only half of VM instances would be 
impacted (assuming the VM instances are evenly distributed between the two 
primary storage volumes).  Thus, the (anti)-storage affinity groups would make 
sure that instance's disk volumes are distributed among available primary 
storage volumes just like (anti)-host affinity groups would distribute 
instances among hosts.
>
> Does anyone else see the benefits of anti-storage affinity groups?
>
> Yiping






Re: storage affinity groups

2016-09-09 Thread Tutkowski, Mike
Hi Yiping,

Reading your most recent e-mail, it seems like you are looking for a feature 
that does more than simply make sure virtual disks are roughly allocated 
equally across the primary storages of a given cluster.

At first, that is what I imagined your request to be.

From this e-mail, though, it looks like this is something you'd like users to 
be able to personally choose (ex. a user might want virtual disk 1 on 
different storage than virtual disk 2).

Is that a fair representation of your request?

If so, I believe storage tagging (as was mentioned by Marty) is the only way to 
do that at present. It does, as you indicated, lead to a proliferation of 
offerings, however.

As for how I personally solve this issue: I do not run a cloud. I work for a 
storage vendor. In our situation, the clustered SAN that we develop is highly 
fault tolerant. If the SAN is offline, then it probably means your entire 
datacenter is offline (ex. power loss of some sort).

Talk to you later,
Mike


Re: storage affinity groups

2016-09-09 Thread Will Stevens
My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure that one
of his web servers is on Primary1 and the other is on Primary2.  This means
that if he loses a primary storage for some reason, he only loses one of
his load balanced web servers.

Does that sound about right?
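The behavior described above could be sketched like this (illustrative only; CloudStack has no such storage affinity group today, and the names are made up):

```python
# Illustrative anti-affinity check: when placing a new volume for a VM
# in a hypothetical storage anti-affinity group, prefer pools not
# already used by other members of the group.
def pick_pool(candidate_pools, group_member_pools):
    """group_member_pools: pools already used by VMs in the same group."""
    preferred = [p for p in candidate_pools if p not in group_member_pools]
    # Fall back to any candidate if every pool is already in use.
    return (preferred or candidate_pools)[0]

# web1 sits on primary1, so web2's volume should land on primary2.
print(pick_pool(["primary1", "primary2"], {"primary1"}))  # primary2
```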

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_


Re: storage affinity groups

2016-09-09 Thread Tutkowski, Mike
Yep, based on the recent e-mail Yiping sent, I would agree, Will.

For the time being, you have two options: (1) storage tagging, or (2) 
fault-tolerant primary storage, such as a SAN.


Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
Will described my use case perfectly.

Ideally, the underlying storage technology used for the cloud should provide 
the reliability required.  But not every company has the money for the best 
storage technology on the market. So the next best thing is to provide some 
fault-tolerance redundancy through the application, while keeping it easy to 
use for end users and administrators alike.

Regards,

Yiping


Re: storage affinity groups

2016-09-09 Thread Tutkowski, Mike
With CloudStack as it currently stands, I believe you will need to resort to 
storage tagging for your use case then.

From: Yiping Zhang 
Sent: Friday, September 9, 2016 1:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

Will described my use case perfectly.

Ideally, the underlying storage technology used for the cloud should provide 
the reliability required.  But not every company has the money for the best 
storage technology on the market. So the next best thing is to provide some 
fault tolerance redundancy through the app and at the same time make it easy to 
use for end users and administrators alike.

Regards,

Yiping

On 9/9/16, 11:49 AM, "Tutkowski, Mike"  wrote:

Yep, based on the recent e-mail Yiping sent, I would agree, Will.

At the time being, you have two options: 1) storage tagging 2) 
fault-tolerant primary storage like a SAN.

From: williamstev...@gmail.com  on behalf of Will 
Stevens 
Sent: Friday, September 9, 2016 12:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure that one
of his web servers is on Primary1 and the other is on Primary2.  This means
that if he loses a primary storage for some reason, he only loses one of
his load balanced web servers.

Does that sound about right?

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike 
wrote:

> Hi Yiping,
>
> Reading your most recent e-mail, it seems like you are looking for a
> feature that does more than simply makes sure virtual disks are roughly
> allocated equally across the primary storages of a given cluster.
>
> At first, that is what I imagined your request to be.
>
> From this e-mail, though, it looks like this is something you'd like users
> to be able to personally choose (ex. a user might want virtual disk 1 on
> different storage than virtual disk 2).
>
> Is that a fair representation of your request?
>
> If so, I believe storage tagging (as was mentioned by Marty) is the only
> way to do that at present. It does, as you indicated, lead to a
> proliferation of offerings, however.
>
> As for how I personally solve this issue: I do not run a cloud. I work for
> a storage vendor. In our situation, the clustered SAN that we develop is
> highly fault tolerant. If the SAN is offline, then it probably means your
> entire datacenter is offline (ex. power loss of some sort).
>
> Talk to you later,
> Mike
> 
> From: Yiping Zhang 
> Sent: Friday, September 9, 2016 11:08 AM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> I am not a Java developer, so I am at a total loss on Mike’s approach. How
> would end users choose this new storage pool allocator from UI when
> provisioning new instance?
>
> My hope is that if the feature is added to ACS, end users can assign an
> anti-storage affinity group to VM instances, just as they assign anti-host
> affinity groups from the UI or API, either at VM creation time, or update
> assignments for existing instances (along with any necessary VM 
stop/start,
> storage migration actions, etc).
>
> Obviously, this feature is useful only when there are more than one
> primary storage devices available for the same cluster or zone (in case 
for
> zone wide primary storage volumes).
>
> Just curious, how many primary storage volumes are available for your
> clusters/zones?
>
> Regards,
> Yiping
>
> On 9/8/16, 6:04 PM, "Tutkowski, Mike"  wrote:
>
> Personally, I think the most flexible way is if you have a developer
> write a storage-pool allocator to customize the placement of virtual disks
> as you see fit.
>
> You extend the StoragePoolAllocator class, write your logic, and
> update a config file so that Spring is aware of the new allocator and
> creates an instance of it when the management server is started up.
>
> You might even want to extend ClusterScopeStoragePoolAllocator
> (instead of directly implementing StoragePoolAllocator) as it possibly
> provides some useful functionality for you already.
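
[Editor's sketch] The placement rule such a custom allocator would implement can be modeled language-agnostically: skip any pool that already holds a disk belonging to another VM in the same anti-affinity group. In CloudStack proper this would be a Java class extending StoragePoolAllocator inside the management server; the plain-dict data structures below are hypothetical stand-ins, not CloudStack APIs:

```python
def allocate_pool(pools, group_members, disk_to_pool):
    """Pick the first pool not already used by any VM in the same
    anti-affinity group; fall back to any pool if none qualifies."""
    used = {disk_to_pool[vm] for vm in group_members if vm in disk_to_pool}
    for pool in pools:
        if pool not in used:
            return pool
    # Every pool is taken by a group member: degrade to best effort.
    return pools[0] if pools else None

pools = ["primary1", "primary2"]
placements = {"web1": "primary1"}  # web1's disk already lives on primary1

# web2 (same anti-affinity group as web1) lands on the other primary.
print(allocate_pool(pools, ["web1", "web2"], placements))  # -> primary2
```

A real allocator would also have to decide whether a full group (more members than pools) is a hard failure or a best-effort fallback, as sketched in the last return.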
> 
> From: Marty Godsey 
> Sent: Thursday, September 8, 2016 6:27 PM
> To: dev@cloudstack.apache.org
> Subject: RE: storage affinity groups
>
> So what would be the best way to do it? I use templates to make

Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
I wanted first to see what other people think about this feature. That’s why I 
posted it on the dev list. If enough people consider it a useful feature for 
ACS, then I can make a formal feature request.

On 9/9/16, 1:25 PM, "Tutkowski, Mike"  wrote:

With CloudStack as it currently stands, I believe you will need to resort 
to storage tagging for your use case then.

From: Yiping Zhang 
Sent: Friday, September 9, 2016 1:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

Will described my use case perfectly.

Ideally, the underlying storage technology used for the cloud should 
provide the reliability required.  But not every company has the money for the 
best storage technology on the market. So the next best thing is to provide 
some fault tolerance redundancy through the app and at the same time make it 
easy to use for end users and administrators alike.

Regards,

Yiping

On 9/9/16, 11:49 AM, "Tutkowski, Mike"  wrote:

Yep, based on the recent e-mail Yiping sent, I would agree, Will.

At the time being, you have two options: 1) storage tagging 2) 
fault-tolerant primary storage like a SAN.

From: williamstev...@gmail.com  on behalf of 
Will Stevens 
Sent: Friday, September 9, 2016 12:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure that 
one
of his web servers is on Primary1 and the other is on Primary2.  This 
means
that if he loses a primary storage for some reason, he only loses one of
his load balanced web servers.

Does that sound about right?

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike 

wrote:

> Hi Yiping,
>
> Reading your most recent e-mail, it seems like you are looking for a
> feature that does more than simply makes sure virtual disks are 
roughly
> allocated equally across the primary storages of a given cluster.
>
> At first, that is what I imagined your request to be.
>
> From this e-mail, though, it looks like this is something you'd like 
users
> to be able to personally choose (ex. a user might want virtual disk 1 
on
> different storage than virtual disk 2).
>
> Is that a fair representation of your request?
>
> If so, I believe storage tagging (as was mentioned by Marty) is the 
only
> way to do that at present. It does, as you indicated, lead to a
> proliferation of offerings, however.
>
> As for how I personally solve this issue: I do not run a cloud. I 
work for
> a storage vendor. In our situation, the clustered SAN that we develop 
is
> highly fault tolerant. If the SAN is offline, then it probably means 
your
> entire datacenter is offline (ex. power loss of some sort).
>
> Talk to you later,
> Mike
> 
> From: Yiping Zhang 
> Sent: Friday, September 9, 2016 11:08 AM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> I am not a Java developer, so I am at a total loss on Mike’s 
approach. How
> would end users choose this new storage pool allocator from UI when
> provisioning new instance?
>
> My hope is that if the feature is added to ACS, end users can assign 
an
> anti-storage affinity group to VM instances, just as assign anti-host
> affinity groups from UI or API, either at VM creation time, or update
> assignments for existing instances (along with any necessary VM 
stop/start,
> storage migration actions, etc).
>
> Obviously, this feature is useful only when there are more than one
> primary storage devices available for the same cluster or zone (in 
case for
> zone wide primary storage volumes).
>
> Just curious, how many primary storage volumes are available for your
> clusters/zones?
>
> Regards,
> Yiping
>
> On 9/8/16, 6:04 PM, "Tutkowski, Mike"  
wrote:
>
> Personally, I think the most flexible way is if you have a 
developer
> write a storage-pool allocator to customize the placement of virtual 
disks
> as you see fit.
>
> You extend the StoragePoolAllocator class, write your

Re: storage affinity groups

2016-09-09 Thread Will Stevens
I have not really thought through this use case, but off the top of my
head, you MAY be able to do something like use host anti-affinity and then
use different primary storage per host affinity.  I know this is not the
ideal solution, but it will limit the primary storage failure domain to a
set of affinity hosts.  This pushes the responsibility of HA to the
application deployer, which I think you are expecting to be the case
anyway.  You still have a single point of failure with the load balancers
unless you implement GSLB.

This will likely complicate your capacity management, but it may be a short
term solution for your problem until a better solution is developed.

If I think of other potential solutions I will post them, but that is what
I have for right now.
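
[Editor's sketch] Under the assumption that each host (or host group) is backed by exactly one primary storage, the workaround can be sanity-checked in a few lines: host anti-affinity then implies storage anti-affinity, because spreading the VMs across hosts also spreads their disks across primaries. All names below are illustrative:

```python
# One primary storage per host: separating VMs by host separates their disks.
host_to_primary = {"host1": "primary1", "host2": "primary2"}

def place(vms, hosts):
    """Host anti-affinity: round-robin each VM onto a distinct host."""
    return {vm: hosts[i % len(hosts)] for i, vm in enumerate(vms)}

placement = place(["web1", "web2"], list(host_to_primary))
primaries = {host_to_primary[h] for h in placement.values()}

print(placement)          # -> {'web1': 'host1', 'web2': 'host2'}
print(sorted(primaries))  # -> ['primary1', 'primary2']
```

The catch, as noted above, is capacity management: each host group can only grow as far as its dedicated primary storage allows.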

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang  wrote:

> Will described my use case perfectly.
>
> Ideally, the underlying storage technology used for the cloud should
> provide the reliability required.  But not every company has the money for
> the best storage technology on the market. So the next best thing is to
> provide some fault tolerance redundancy through the app and at the same
> time make it easy to use for end users and administrators alike.
>
> Regards,
>
> Yiping
>
> On 9/9/16, 11:49 AM, "Tutkowski, Mike"  wrote:
>
> Yep, based on the recent e-mail Yiping sent, I would agree, Will.
>
> At the time being, you have two options: 1) storage tagging 2)
> fault-tolerant primary storage like a SAN.
> 
> From: williamstev...@gmail.com  on behalf
> of Will Stevens 
> Sent: Friday, September 9, 2016 12:44 PM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> My understanding is that he wants to do anti-affinity across primary
> storage endpoints.  So if he has two web servers, it would ensure that
> one
> of his web servers is on Primary1 and the other is on Primary2.  This
> means
> that if he loses a primary storage for some reason, he only loses one
> of
> his load balanced web servers.
>
> Does that sound about right?
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > Hi Yiping,
> >
> > Reading your most recent e-mail, it seems like you are looking for a
> > feature that does more than simply makes sure virtual disks are
> roughly
> > allocated equally across the primary storages of a given cluster.
> >
> > At first, that is what I imagined your request to be.
> >
> > From this e-mail, though, it looks like this is something you'd like
> users
> > to be able to personally choose (ex. a user might want virtual disk
> 1 on
> > different storage than virtual disk 2).
> >
> > Is that a fair representation of your request?
> >
> > If so, I believe storage tagging (as was mentioned by Marty) is the
> only
> > way to do that at present. It does, as you indicated, lead to a
> > proliferation of offerings, however.
> >
> > As for how I personally solve this issue: I do not run a cloud. I
> work for
> > a storage vendor. In our situation, the clustered SAN that we
> develop is
> > highly fault tolerant. If the SAN is offline, then it probably means
> your
> > entire datacenter is offline (ex. power loss of some sort).
> >
> > Talk to you later,
> > Mike
> > 
> > From: Yiping Zhang 
> > Sent: Friday, September 9, 2016 11:08 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: storage affinity groups
> >
> > I am not a Java developer, so I am at a total loss on Mike’s
> approach. How
> > would end users choose this new storage pool allocator from UI when
> > provisioning new instance?
> >
> > My hope is that if the feature is added to ACS, end users can assign
> an
> > anti-storage affinity group to VM instances, just as assign anti-host
> > affinity groups from UI or API, either at VM creation time, or update
> > assignments for existing instances (along with any necessary VM
> stop/start,
> > storage migration actions, etc).
> >
> > Obviously, this feature is useful only when there are more than one
> > primary storage devices available for the same cluster or zone (in
> case for
> > zone wide primary storage volumes).
> >
> > Just curious, how many primary storage volumes are available for your
> > clusters/zones?
> >
> > Regards,
> > Yiping
> >
> > On 9/8/16, 6:04 PM, "Tutkowski, Mike" 
> wrote:
> >

RE: storage affinity groups

2016-09-09 Thread Simon Weller
Why not just use different primary storage per cluster. You then can control 
your storage failure domains on a cluster basis.

Simon Weller/ENA
(615) 312-6068

-Original Message-
From: Will Stevens [wstev...@cloudops.com]
Received: Friday, 09 Sep 2016, 5:46PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: storage affinity groups

I have not really thought through this use case, but off the top of my
head, you MAY be able to do something like use host anti-affinity and then
use different primary storage per host affinity.  I know this is not the
ideal solution, but it will limit the primary storage failure domain to a
set of affinity hosts.  This pushes the responsibility of HA to the
application deployer, which I think you are expecting to be the case
anyway.  You still have a single point of failure with the load balancers
unless you implement GSLB.

This will likely complicate your capacity management, but it may be a short
term solution for your problem until a better solution is developed.

If I think of other potential solutions I will post them, but that is what
I have for right now.

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang  wrote:

> Will described my use case perfectly.
>
> Ideally, the underlying storage technology used for the cloud should
> provide the reliability required.  But not every company has the money for
> the best storage technology on the market. So the next best thing is to
> provide some fault tolerance redundancy through the app and at the same
> time make it easy to use for end users and administrators alike.
>
> Regards,
>
> Yiping
>
> On 9/9/16, 11:49 AM, "Tutkowski, Mike"  wrote:
>
> Yep, based on the recent e-mail Yiping sent, I would agree, Will.
>
> At the time being, you have two options: 1) storage tagging 2)
> fault-tolerant primary storage like a SAN.
> 
> From: williamstev...@gmail.com  on behalf
> of Will Stevens 
> Sent: Friday, September 9, 2016 12:44 PM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> My understanding is that he wants to do anti-affinity across primary
> storage endpoints.  So if he has two web servers, it would ensure that
> one
> of his web servers is on Primary1 and the other is on Primary2.  This
> means
> that if he loses a primary storage for some reason, he only loses one
> of
> his load balanced web servers.
>
> Does that sound about right?
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > Hi Yiping,
> >
> > Reading your most recent e-mail, it seems like you are looking for a
> > feature that does more than simply makes sure virtual disks are
> roughly
> > allocated equally across the primary storages of a given cluster.
> >
> > At first, that is what I imagined your request to be.
> >
> > From this e-mail, though, it looks like this is something you'd like
> users
> > to be able to personally choose (ex. a user might want virtual disk
> 1 on
> > different storage than virtual disk 2).
> >
> > Is that a fair representation of your request?
> >
> > If so, I believe storage tagging (as was mentioned by Marty) is the
> only
> > way to do that at present. It does, as you indicated, lead to a
> > proliferation of offerings, however.
> >
> > As for how I personally solve this issue: I do not run a cloud. I
> work for
> > a storage vendor. In our situation, the clustered SAN that we
> develop is
> > highly fault tolerant. If the SAN is offline, then it probably means
> your
> > entire datacenter is offline (ex. power loss of some sort).
> >
> > Talk to you later,
> > Mike
> > 
> > From: Yiping Zhang 
> > Sent: Friday, September 9, 2016 11:08 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: storage affinity groups
> >
> > I am not a Java developer, so I am at a total loss on Mike’s
> approach. How
> > would end users choose this new storage pool allocator from UI when
> > provisioning new instance?
> >
> > My hope is that if the feature is added to ACS, end users can assign
> an
> > anti-storage affinity group to VM instances, just as assign anti-host
> > affinity groups from UI or API, either at VM creation time, or update
> > assignments for existing instances (along with any necessary VM
> stop/start,
> > storage migration actions, etc).
> >
> > Obviously, this feature is useful only when there are 

Re: storage affinity groups

2016-09-09 Thread Will Stevens
Yes, that is essentially the same thing.  You would create your
anti-affinity between clusters instead of hosts.  That is also an option...

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 7:05 PM, Simon Weller  wrote:

> Why not just use different primary storage per cluster. You then can
> control your storage failure domains on a cluster basis.
>
> Simon Weller/ENA
> (615) 312-6068
>
> -Original Message-
> From: Will Stevens [wstev...@cloudops.com]
> Received: Friday, 09 Sep 2016, 5:46PM
> To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
> Subject: Re: storage affinity groups
>
> I have not really thought through this use case, but off the top of my
> head, you MAY be able to do something like use host anti-affinity and then
> use different primary storage per host affinity.  I know this is not the
> ideal solution, but it will limit the primary storage failure domain to a
> set of affinity hosts.  This pushes the responsibility of HA to the
> application deployer, which I think you are expecting to be the case
> anyway.  You still have a single point of failure with the load balancers
> unless you implement GSLB.
>
> This will likely complicate your capacity management, but it may be a short
> term solution for your problem until a better solution is developed.
>
> If I think of other potential solutions I will post them, but that is what
> I have for right now.
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang  wrote:
>
> > Will described my use case perfectly.
> >
> > Ideally, the underlying storage technology used for the cloud should
> > provide the reliability required.  But not every company has the money
> for
> > the best storage technology on the market. So the next best thing is to
> > provide some fault tolerance redundancy through the app and at the same
> > time make it easy to use for end users and administrators alike.
> >
> > Regards,
> >
> > Yiping
> >
> > On 9/9/16, 11:49 AM, "Tutkowski, Mike" 
> wrote:
> >
> > Yep, based on the recent e-mail Yiping sent, I would agree, Will.
> >
> > At the time being, you have two options: 1) storage tagging 2)
> > fault-tolerant primary storage like a SAN.
> > 
> > From: williamstev...@gmail.com  on behalf
> > of Will Stevens 
> > Sent: Friday, September 9, 2016 12:44 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: storage affinity groups
> >
> > My understanding is that he wants to do anti-affinity across primary
> > storage endpoints.  So if he has two web servers, it would ensure
> that
> > one
> > of his web servers is on Primary1 and the other is on Primary2.  This
> > means
> > that if he loses a primary storage for some reason, he only loses one
> > of
> > his load balanced web servers.
> >
> > Does that sound about right?
> >
> > *Will STEVENS*
> > Lead Developer
> >
> > *CloudOps* *| *Cloud Solutions Experts
> > 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> > w cloudops.com *|* tw @CloudOps_
> >
> > On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
> > mike.tutkow...@netapp.com>
> > wrote:
> >
> > > Hi Yiping,
> > >
> > > Reading your most recent e-mail, it seems like you are looking for
> a
> > > feature that does more than simply makes sure virtual disks are
> > roughly
> > > allocated equally across the primary storages of a given cluster.
> > >
> > > At first, that is what I imagined your request to be.
> > >
> > > From this e-mail, though, it looks like this is something you'd
> like
> > users
> > > to be able to personally choose (ex. a user might want virtual disk
> > 1 on
> > > different storage than virtual disk 2).
> > >
> > > Is that a fair representation of your request?
> > >
> > > If so, I believe storage tagging (as was mentioned by Marty) is the
> > only
> > > way to do that at present. It does, as you indicated, lead to a
> > > proliferation of offerings, however.
> > >
> > > As for how I personally solve this issue: I do not run a cloud. I
> > work for
> > > a storage vendor. In our situation, the clustered SAN that we
> > develop is
> > > highly fault tolerant. If the SAN is offline, then it probably
> means
> > your
> > > entire datacenter is offline (ex. power loss of some sort).
> > >
> > > Talk to you later,
> > > Mike
> > > 
> > > From: Yiping Zhang 
> > > Sent: Friday, September 9, 2016 11:08 AM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: storage affinity groups
> > >
> > > I am not a Java developer, so I am at 

Re: storage affinity groups

2016-09-09 Thread Will Stevens
Hang on, can you do cluster anti-affinity?  I know you can with hosts, but
I don't remember if you can do the same thing with clusters...

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 7:09 PM, Will Stevens  wrote:

> Yes, that is essentially the same thing.  You would create your
> anti-affinity between clusters instead of hosts.  That is also an option...
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 7:05 PM, Simon Weller  wrote:
>
>> Why not just use different primary storage per cluster. You then can
>> control your storage failure domains on a cluster basis.
>>
>> Simon Weller/ENA
>> (615) 312-6068
>>
>> -Original Message-
>> From: Will Stevens [wstev...@cloudops.com]
>> Received: Friday, 09 Sep 2016, 5:46PM
>> To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
>> Subject: Re: storage affinity groups
>>
>> I have not really thought through this use case, but off the top of my
>> head, you MAY be able to do something like use host anti-affinity and then
>> use different primary storage per host affinity.  I know this is not the
>> ideal solution, but it will limit the primary storage failure domain to a
>> set of affinity hosts.  This pushes the responsibility of HA to the
>> application deployer, which I think you are expecting to be the case
>> anyway.  You still have a single point of failure with the load balancers
>> unless you implement GSLB.
>>
>> This will likely complicate your capacity management, but it may be a
>> short
>> term solution for your problem until a better solution is developed.
>>
>> If I think of other potential solutions I will post them, but that is what
>> I have for right now.
>>
>> *Will STEVENS*
>> Lead Developer
>>
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>>
>> On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang  wrote:
>>
>> > Will described my use case perfectly.
>> >
>> > Ideally, the underlying storage technology used for the cloud should
>> > provide the reliability required.  But not every company has the money
>> for
>> > the best storage technology on the market. So the next best thing is to
>> > provide some fault tolerance redundancy through the app and at the same
>> > time make it easy to use for end users and administrators alike.
>> >
>> > Regards,
>> >
>> > Yiping
>> >
>> > On 9/9/16, 11:49 AM, "Tutkowski, Mike" 
>> wrote:
>> >
>> > Yep, based on the recent e-mail Yiping sent, I would agree, Will.
>> >
>> > At the time being, you have two options: 1) storage tagging 2)
>> > fault-tolerant primary storage like a SAN.
>> > 
>> > From: williamstev...@gmail.com  on behalf
>> > of Will Stevens 
>> > Sent: Friday, September 9, 2016 12:44 PM
>> > To: dev@cloudstack.apache.org
>> > Subject: Re: storage affinity groups
>> >
>> > My understanding is that he wants to do anti-affinity across primary
>> > storage endpoints.  So if he has two web servers, it would ensure
>> that
>> > one
>> > of his web servers is on Primary1 and the other is on Primary2.
>> This
>> > means
>> > that if he loses a primary storage for some reason, he only loses
>> one
>> > of
>> > his load balanced web servers.
>> >
>> > Does that sound about right?
>> >
>> > *Will STEVENS*
>> > Lead Developer
>> >
>> > *CloudOps* *| *Cloud Solutions Experts
>> > 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> > w cloudops.com *|* tw @CloudOps_
>> >
>> > On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
>> > mike.tutkow...@netapp.com>
>> > wrote:
>> >
>> > > Hi Yiping,
>> > >
>> > > Reading your most recent e-mail, it seems like you are looking
>> for a
>> > > feature that does more than simply makes sure virtual disks are
>> > roughly
>> > > allocated equally across the primary storages of a given cluster.
>> > >
>> > > At first, that is what I imagined your request to be.
>> > >
>> > > From this e-mail, though, it looks like this is something you'd
>> like
>> > users
>> > > to be able to personally choose (ex. a user might want virtual
>> disk
>> > 1 on
>> > > different storage than virtual disk 2).
>> > >
>> > > Is that a fair representation of your request?
>> > >
>> > > If so, I believe storage tagging (as was mentioned by Marty) is
>> the
>> > only
>> > > way to do that at present. It does, as you indicated, lead to a
>> > > proliferation of offerings, however.
>> > >
>> > > As for how I personally solve this issue: I do not run a cloud. I
>> > work for
>> > > a storage vendor. In our situation, the clustered SAN that we
>> > develop i

Re: storage affinity groups

2016-09-09 Thread Tutkowski, Mike
I suppose you could just make sure all of your hosts in a given cluster are in 
a given affinity group.

I think if you did that, then your idea would work.

> On Sep 9, 2016, at 5:11 PM, Will Stevens  wrote:
> 
> Hang on, can you do cluster anti-affinity?  I know you can with hosts, but
> I don't remember if you can do the same thing with clusters...
> 
> *Will STEVENS*
> Lead Developer
> 
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
> 
>> On Fri, Sep 9, 2016 at 7:09 PM, Will Stevens  wrote:
>> 
>> Yes, that is essentially the same thing.  You would create your
>> anti-affinity between clusters instead of hosts.  That is also an option...
>> 
>> *Will STEVENS*
>> Lead Developer
>> 
>> *CloudOps* *| *Cloud Solutions Experts
>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>> w cloudops.com *|* tw @CloudOps_
>> 
>>> On Fri, Sep 9, 2016 at 7:05 PM, Simon Weller  wrote:
>>> 
>>> Why not just use different primary storage per cluster. You then can
>>> control your storage failure domains on a cluster basis.
>>> 
>>> Simon Weller/ENA
>>> (615) 312-6068
>>> 
>>> -Original Message-
>>> From: Will Stevens [wstev...@cloudops.com]
>>> Received: Friday, 09 Sep 2016, 5:46PM
>>> To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
>>> Subject: Re: storage affinity groups
>>> 
>>> I have not really thought through this use case, but off the top of my
>>> head, you MAY be able to do something like use host anti-affinity and then
>>> use different primary storage per host affinity.  I know this is not the
>>> ideal solution, but it will limit the primary storage failure domain to a
>>> set of affinity hosts.  This pushes the responsibility of HA to the
>>> application deployer, which I think you are expecting to be the case
>>> anyway.  You still have a single point of failure with the load balancers
>>> unless you implement GSLB.
>>> 
>>> This will likely complicate your capacity management, but it may be a
>>> short
>>> term solution for your problem until a better solution is developed.
>>> 
>>> If I think of other potential solutions I will post them, but that is what
>>> I have for right now.
>>> 
>>> *Will STEVENS*
>>> Lead Developer
>>> 
>>> *CloudOps* *| *Cloud Solutions Experts
>>> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
>>> w cloudops.com *|* tw @CloudOps_
>>> 
 On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang  wrote:
 
 Will described my use case perfectly.
 
 Ideally, the underlying storage technology used for the cloud should
 provide the reliability required.  But not every company has the money
>>> for
 the best storage technology on the market. So the next best thing is to
 provide some fault tolerance redundancy through the app and at the same
 time make it easy to use for end users and administrators alike.
 
 Regards,
 
 Yiping
 
> On 9/9/16, 11:49 AM, "Tutkowski, Mike" 
 wrote:
 
Yep, based on the recent e-mail Yiping sent, I would agree, Will.
 
At the time being, you have two options: 1) storage tagging 2)
 fault-tolerant primary storage like a SAN.

From: williamstev...@gmail.com  on behalf
 of Will Stevens 
Sent: Friday, September 9, 2016 12:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups
 
My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure
>>> that
 one
of his web servers is on Primary1 and the other is on Primary2.
>>> This
 means
that if he loses a primary storage for some reason, he only loses
>>> one
 of
his load balanced web servers.
 
Does that sound about right?
 
*Will STEVENS*
Lead Developer
 
*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_
 
On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
 mike.tutkow...@netapp.com>
wrote:
 
> Hi Yiping,
> 
> Reading your most recent e-mail, it seems like you are looking
>>> for a
> feature that does more than simply makes sure virtual disks are
 roughly
> allocated equally across the primary storages of a given cluster.
> 
> At first, that is what I imagined your request to be.
> 
> From this e-mail, though, it looks like this is something you'd
>>> like
 users
> to be able to personally choose (ex. a user might want virtual
>>> disk
 1 on
> different storage than virtual disk 2).
> 
> Is that a fair representation of your request?
> 
> If so, I believe storage tagging (as was mentioned by Marty) is
>>> the
 only
> way to do that at present. It does, as you indicated, lead to a
>