Github user jburwell commented on the issue:
https://github.com/apache/cloudstack/pull/1639
@abhinandanprateek could you please investigate the cause of the Jenkins
failure?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well
Github user pdion891 commented on the issue:
https://github.com/apache/cloudstack/pull/1658
LGTM, that's useful.
Github user swill commented on the issue:
https://github.com/apache/cloudstack/pull/1658
@jburwell can we merge this? Not sure what process you have in place right
now. Thx...
Github user jburwell commented on the issue:
https://github.com/apache/cloudstack/pull/1658
@swill any committer may merge so long as there is at least 1 code review
LGTM, 1 test LGTM, and no -1s. I see 2 code review LGTMs. Since this PR is
for docs, is there a way to test it? If
Github user swill commented on the issue:
https://github.com/apache/cloudstack/pull/1658
Well, it is to help generate the docs. I have included the `txt` and the
new `json` output produced by this addition in the original post (OP). I have
also used this code to create the release n
Github user swill commented on the issue:
https://github.com/apache/cloudstack/pull/872
Is anyone working on this right now?
Having reviewed this thread, I believe the following pieces are still
outstanding:
- fix merge conflicts.
- potentially: upgrade the VR to use
Github user swill commented on the issue:
https://github.com/apache/cloudstack/pull/872
@jayapalu are you active enough that if I make pull requests against your
branch, you can make the changes available in this PR? Or should I just start
from your work and develop and test in my own
Github user serg38 commented on the issue:
https://github.com/apache/cloudstack/pull/1660
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has
enough of everything. Can one of the committers merge it?
Github user serg38 commented on the issue:
https://github.com/apache/cloudstack/pull/1651
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has
enough of everything. Can one of the committers merge it?
Github user serg38 commented on the issue:
https://github.com/apache/cloudstack/pull/1605
@rhtyd @jburwell @swill @koushik-das @rafaelweingartner @wido This PR has
enough of everything. Can one of the committers merge it?
I am not a Java developer, so I am at a total loss on Mike’s approach. How
would end users choose this new storage pool allocator from the UI when
provisioning a new instance?
My hope is that if the feature is added to ACS, end users can assign a
storage anti-affinity group to VM instances, just as
Hi Yiping,
Reading your most recent e-mail, it seems like you are looking for a feature
that does more than simply make sure virtual disks are allocated roughly
equally across the primary storages of a given cluster.
At first, that is what I imagined your request to be.
From this e-mail, tho
My understanding is that he wants to do anti-affinity across primary
storage endpoints. So if he has two web servers, it would ensure that one
of his web servers is on Primary1 and the other is on Primary2. This means
that if he loses a primary storage for some reason, he only loses one of
his lo
Yep, based on the recent e-mail Yiping sent, I would agree, Will.
For the time being, you have two options: 1) storage tagging, or 2)
fault-tolerant primary storage, such as a SAN.
From: williamstev...@gmail.com on behalf of Will
Stevens
Sent: Friday, September
Will described my use case perfectly.
Ideally, the underlying storage technology used for the cloud should provide
the reliability required. But not every company has the money for the best
storage technology on the market. So the next best thing is to provide some
fault tolerance redundancy t
With CloudStack as it currently stands, I believe you will need to resort to
storage tagging for your use case then.
From: Yiping Zhang
Sent: Friday, September 9, 2016 1:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups
Will describ
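To make the storage-tagging workaround above concrete, here is a minimal
sketch of how tag matching narrows pool selection. The pool names and tags
are hypothetical; in CloudStack, tags live on the primary storage and are
matched against the tags of a disk or service offering, so tagging each web
server's offering differently pins it to a distinct pool:

```python
# Sketch (with hypothetical names) of CloudStack-style storage tag
# matching: a pool is eligible only if its tags cover every tag the
# offering requires.

def eligible_pools(pools, offering_tags):
    """Return names of pools whose tag set covers all offering tags."""
    required = set(offering_tags)
    return [p["name"] for p in pools if required <= set(p["tags"])]

pools = [
    {"name": "Primary1", "tags": {"ssd", "web"}},
    {"name": "Primary2", "tags": {"ssd", "db"}},
]

# An offering tagged "web" can only land on Primary1; one tagged "db"
# only on Primary2. That is the manual anti-affinity tagging gives you.
print(eligible_pools(pools, {"web"}))  # ['Primary1']
print(eligible_pools(pools, {"db"}))   # ['Primary2']
```

The obvious downside, as discussed in this thread, is that the operator has
to pre-plan the tag-to-pool mapping per VM rather than having the allocator
spread disks automatically.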
I wanted first to see what other people think about this feature. That’s why I
posted it on the dev list. If enough people consider it a useful feature for
ACS, then I can make a formal feature request.
On 9/9/16, 1:25 PM, "Tutkowski, Mike" wrote:
With CloudStack as it currently stands, I b
I have not really thought through this use case, but off the top of my
head, you MAY be able to do something like use host anti-affinity and then
use a different primary storage per host affinity group. I know this is not
the ideal solution, but it will limit the primary storage failure domain to a
set of
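A rough sketch of the placement idea suggested above (all names here are
hypothetical, and this is a simulation of the intent, not CloudStack's
actual allocator): group the hosts of each primary storage, then spread
anti-affine VMs round-robin across the groups so no two of them share a
storage failure domain.

```python
# Simulated placement: each (hypothetical) host group is backed by one
# primary storage; anti-affine VMs are rotated across groups so losing
# one primary storage takes out at most one VM of the set.
from itertools import cycle

def place(vms, host_groups):
    """host_groups: {group_name: primary_storage}; returns {vm: storage}."""
    rotation = cycle(sorted(host_groups))
    return {vm: host_groups[next(rotation)] for vm in vms}

host_groups = {"groupA": "Primary1", "groupB": "Primary2"}
placement = place(["web-1", "web-2"], host_groups)
print(placement)  # {'web-1': 'Primary1', 'web-2': 'Primary2'}
```

This only limits the failure domain rather than eliminating it, which
matches the caveat in the message above.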
Why not just use different primary storage per cluster. You then can control
your storage failure domains on a cluster basis.
Simon Weller/ENA
(615) 312-6068
-----Original Message-----
From: Will Stevens [wstev...@cloudops.com]
Received: Friday, 09 Sep 2016, 5:46PM
To: dev@cloudstack.apache.org
Yes, that is essentially the same thing. You would create your
anti-affinity between clusters instead of hosts. That is also an option...
*Will STEVENS*
Lead Developer
*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_
On Fri
Hang on, can you do cluster anti-affinity? I know you can with hosts, but
I don't remember if you can do the same thing with clusters...
I suppose you could just make sure all of your hosts in a given cluster are in
a given affinity group.
I think if you did that, then your idea would work.
> On Sep 9, 2016, at 5:11 PM, Will Stevens wrote:
>
> Hang on, can you do cluster anti-affinity? I know you can with hosts, but
> I don't