> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: 19 August 2014 17:50
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource
> Tracking
>
> On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
> > On the
> On Wed, Aug 20, 2014 at 08:33:31AM -0400, Jay Pipes wrote:
> > On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
> > >On 08/20/2014 08:27 AM, Joe Gordon wrote:
> > >>On Aug 19, 2014 10:45 AM, "Day, Phil" > >><mailto:philip@hp.com>>
>Adding in such case more bureaucracy (specs) is not the best way to resolve
>team throughput issues...
I’d argue that if fundamental design disagreements can be surfaced and debated
at the design stage rather than first emerging on patch set XXX of an
implementation, and be used to then prior
Needing 3 out of 19 instead of 3 out of 20 isn't an order of magnitude
according to my calculator. It's much closer/fairer than making it 2/19 vs
3/20.
If a change is borderline in that it can only get 2 other cores maybe it
doesn't have a strong enough case for an exception.
Phil
Sent from
> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: 03 September 2014 10:50
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
> Juno
>
> I will follow up with a more detailed email about what
> >
> > One final note: the specs referenced above didn't get approved until
> > Spec Freeze, which seemed to leave me with less time to implement
> > things. In fact, it seemed that a lot of specs didn't get approved
> > until spec freeze. Perhaps if we had more staggered approval of
> > specs,
Hi Daniel,
Thanks for putting together such a thoughtful piece - I probably need to
re-read it few times to take in everything you're saying, but a couple of
thoughts that did occur to me:
- I can see how this could help where a change is fully contained within a virt
driver, but I wonder ho
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for server
groups.
Server groups (which landed in Icehouse) provide a really useful anti-affinity
filter for scheduling that a lot of customers would like to use, but without
some form of quota control to limit the amount
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 05 September 2014 11:49
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out
> virt drivers
>
> On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
> >
> >
ber 2014 17:05
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] FFE server-group-quotas
>
> On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
> > 2014-09-05 21:56 GMT+09:00 Day, Phil :
> >> Hi,
> >>
> >> I'd like to
I think in the hopefully not too distant future we'll be able to make the v1_1
client handle both V2 and V2.1 (who knows, maybe we can even rename it v2) -
and that's what we should do because it will prove if we have full
compatibility or not.
Right now the two things holding that back are the
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 12 September 2014 19:37
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Expand resource name allowed
> characters
>
> Had to laugh about the PILE OF POO character :) Comments inline...
> -Original Message-
> From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
> Sent: 18 September 2014 02:44
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient
> v3 shell or what?
>
>
> > -O
> >
> > DevStack doesn't register v2.1 endpoint to keystone now, but we can use
> > it with calling it directly.
> > It is true that it is difficult to use v2.1 API now and we can check
> > its behavior via v3 API instead.
>
> I posted a patch[1] for registering v2.1 endpoint to keystone, and I co
Hi Folks,
I'm a bit confused about the expectations of a manager class to be able to
receive and process messages from a previous RPC version. I thought the
objective was to always make changes such that the manager can process any
previous version of the call that could come from the last re
Hi,
I like the concept of allowing users to request a CPU topology, but I have a
few questions / concerns:
>
> The host is exposing info about vCPU count it is able to support and the
> scheduler picks on that basis. The guest image is just declaring upper limits
> on
> topology it can support.
+1 from me - would much prefer to be able to pick this on an individual basis.
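The constraint described in the quote above - the image declares only upper limits on topology, while the scheduler picks hosts on vCPU count alone - can be illustrated with a small enumeration. This is a hypothetical sketch, not the nova implementation; the function name and limit parameters are my own:

```python
import itertools

def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    """Yield (sockets, cores, threads) tuples that multiply out to the
    flavor's vCPU count while respecting the image-declared upper limits."""
    for s, c, t in itertools.product(range(1, max_sockets + 1),
                                     range(1, max_cores + 1),
                                     range(1, max_threads + 1)):
        if s * c * t == vcpus:
            yield (s, c, t)
```

For example, a 4-vCPU flavor with image limits of 2 sockets, 4 cores and 1 thread admits both (1, 4, 1) and (2, 2, 1) - the guest image never pins one exact topology, which is what makes the per-host scheduling question above interesting.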
Could kind of see a case for keeping reset_network and inject_network_info
together - but don't have a strong feeling about it (as we don't use them)
> -Original Message-
> From: Andrew Laski [mailto:andrew.la
the flavour is still valid, just not with this particular image
- and that feels like a case that should fail validation at the API layer, not
down on the compute node where the only option is to reschedule or go into an
Error state.
Phil
> -Original Message-
> From: Day, Phil
>
Hi Nova cores,
As per the discussion at the Summit I need two (or more) nova cores to sponsor
the BP that allows Guests a chance to shutdown cleanly rather than just yanking
the virtual power cord out - which is approved and targeted for I2
https://review.openstack.org/#/c/35303/
The Non API a
Hi Cores,
The "Stop, Rescue, and Delete should give guest a chance to shutdown" change
https://review.openstack.org/#/c/35303/ was approved a couple of days ago, but
failed to merge because the RPC version had moved on. It's rebased and sitting
there with one +2 and a bunch of +1s - would be r
cember 2013 14:38
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] All I want for Christmas is one more +2
> ...
>
> On 12/12/2013 09:22 AM, Day, Phil wrote:
> > Hi Cores,
> >
> >
> >
> > The "Stop, Rescue, and
+1, I would make the 14:00 meeting. I often have good intentions of making the
21:00 meeting, but it's tough to work in around family life
Sent from Samsung Mobile
Original message
From: Joe Gordon
Date:
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [
Hi Folks,
I know it may seem odd to be arguing for slowing down a part of the review
process, but I'd like to float the idea that there should be a minimum review
period for patches that change existing functionality in a way that isn't
backwards compatible.
The specific change that got me thi
> -Original Message-
> From: Robert Collins [mailto:robe...@robertcollins.net]
> Sent: 29 December 2013 05:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] minimum review period for functional
> changes that break backwards compatib
at break backwards compatibility
>
> On 29 December 2013 04:21, Day, Phil wrote:
> > Hi Folks,
> >
> >
> >
> > I know it may seem odd to be arguing for slowing down a part of the
> > review process, but I'd like to float the idea that there shoul
Hi Folks,
As highlighted in the thread "minimal review period for functional changes" I'd
like to propose that change https://review.openstack.org/#/c/63209/ is
reverted because:
- It causes inconsistent behaviour in the system, as any existing
"default" backing files will have ex
that break backwards compatibility
On 12/29/2013 03:06 AM, Day, Phil wrote:
>> Basically, I'm not sure what problem you're trying to solve - lets tease that
>> out, and then talk about how to solve it. "Backwards incompatible change
>> landed" might be
Sent from Samsung Mobile
Original message
From: Pádraig Brady
Date:
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: "Day, Phil"
Subject: Re: [openstack-dev] [nova] - Revert change of default ephemeral fs to
ext4
ves the ephemeral backing files
though at the moment.
Phil
Sent from Samsung Mobile
Original message
From: Pádraig Brady
Date:
To: "OpenStack Development Mailing List (not for usage questions)"
,"Day, Phil"
Subject: Re: [openstack-dev] [nova] - Revert
Subject: Re: [openstack-dev] [nova] minimum review period for functional
changes that break backwards compatibility
On 29 December 2013 21:06, Day, Phil wrote:
>> What is the minimum review period intended to accomplish? I mean:
>> everyone that reviewed this *knew* it changed a def
Hi Sean, and Happy New Year :-)
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 30 December 2013 22:05
> To: Day, Phil; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] minimum review period for funct
I don't really see why this thread seems to keep coming back to a position of
improvements to the review process vs changes to automated testing - to my mind
both are equally important and complementary parts of the solution:
- Automated tests are strong for objective examination of particular p
Hi Thierry,
Thanks for a great summary.
I don't really share your view that there is a "us vs them" attitude emerging
between operators and developers (but as someone with a foot in both camps
maybe I'm just thinking that because otherwise I'd become even more bi-polar
:-)
I would suggest t
Change to revert the default to ext3: https://review.openstack.org/#/c/64666/
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 31 December 2013 01:31
> To: openstack-dev
> Subject: Re: [openstack-dev] [nova] - Revert change of default ephemeral fs
> to ext4
>
Would be nice in this specific example though if the actual upgrade impact was
explicitly called out in the commit message.
From the DocImpact it looks as if some Neutron config options are changing
names - in which case the impact would seem to be that running systems have
until the end of
> -Original Message-
> From: Robert Collins [mailto:robe...@robertcollins.net]
> Sent: 10 January 2014 08:54
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] where to expose network quota
>
> On 8 January 2014 03:01, Christopher Yeo
Hi Folks,
The original (and fairly simple) driver behind whole-host-allocation
(https://wiki.openstack.org/wiki/WholeHostAllocation) was to enable users to
get guaranteed isolation for their instances. This then grew somewhat along
the lines of "If they have in effect a dedicated host then wo
> > > So, I actually don't think the two concepts (reservations and
> > > "isolated instances") are competing ideas. Isolated instances are
> > > actually not reserved. They are simply instances that have a
> > > condition placed on their assignment to a particular compute node
> > > that the node
> Hi Phil and Jay,
>
>Phil, maybe you remember I discussed with you about the possibility of using
>pclouds with Climate, but we finally ended up using Nova aggregates and a
>dedicated filter.
>That works pretty fine. We don't use instance_properties
>but rather aggregate metadata but the idea
> >
> > I think there is clear water between this and the existing aggregate based
> isolation. I also think this is a different use case from reservations.
> It's
> *mostly* like a new scheduler hint, but because it has billing impacts I
> think it
> needs to be more than just that - for exa
> -Original Message-
> From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
> Sent: 21 January 2014 14:21
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
>
>
> > Exactly - that's why I
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 22 January 2014 02:01
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
>
> On Tue, 2014-01-21 at 14:21 +, Khanh-Toan Tran wrote:
> > > Exactly
> -Original Message-
> From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
> Sent: 22 January 2014 10:24
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
>
> Le 22/01/2014 02:50, Jay Pipes a écr
Jay Pipes wrote on 01/21/2014 08:50:36 PM:
...
> On Tue, 2014-01-21 at 14:28 +0000, Day, Phil wrote:
> > > >
> > > > I think there is clear water between this and the existing aggregate
> > > > based
> > > isolation. I also think this is a dif
> >
> > Cool. I like this a good bit better as it avoids the reboot. Still, this is
> > a rather
> large amount of data to copy around if I'm only changing a single file in
> Nova.
> >
>
> I think in most cases transfer cost is worth it to know you're deploying what
> you tested. Also it is pret
> On 01/22/2014 12:17 PM, Dan Prince wrote:
> > I've been thinking a bit more about how TripleO updates are developing
> specifically with regards to compute nodes. What is commonly called the
> "update story" I think.
> >
> > As I understand it we expect people to actually have to reboot a compute
Hi Sylvain,
The change only makes the user have to supply a network ID if there is more
than one private network available (and the issue there is that otherwise the
assignment order in the Guest is random, which normally leads to all sorts of
routing problems).
I'm running a standard Devstack
Hi Justin,
I can see the value of this, but I'm a bit wary of the metadata service
extending into a general API - for example I can see this extending into a
debate about what information needs to be made available about the instances
(would you always want all instances exposed, all details, e
I agree it's oddly inconsistent (you'll get used to that over time ;-) - but to
me it feels more like the validation is missing on the attach than that the
create should allow two VIFs on the same network. Since these are both
virtualised (i.e. share the same bandwidth, don't provide any additi
the networks in the
order that they want them to be attached to.
Am I still missing something ?
Cheers,
Phil
From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
Sent: 24 January 2014 14:02
To: OpenStack Development Mailing List (not for usage questions)
Cc: Day, Phil
Subject: Re: [openstack-
"fixed",
"version": 4
}
],
"version": 4
},
{
"cidr": null,
"ips": [],
"version
>
> Good points - thank you. For arbitrary operations, I agree that it would be
> better to expose a token in the metadata service, rather than allowing the
> metadata service to expose unbounded amounts of API functionality. We
> should therefore also have a per-instance token in the metadata,
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 24 January 2014 21:09
> To: openstack-dev
> Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
> through metadata service
>
> Excerpts from Justin Santa Barbara's message of 2014-01-24 12:2
>>
>> What worried me most, I think, is that if we make this part of the standard
>> metadata then everyone would get it, and that raises a couple of concerns:
>>
>> - Users with lots of instances (say 1000's) but who weren't trying to run any
>> form of discovery would start getting a lot more m
> -Original Message-
> From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
> Sent: 28 January 2014 20:17
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
> through metadata service
>
> Tha
):
> i = self.get_instance_by_ip(ip)
> +mpi = self._get_mpi_data(i['project_id'])
> if i is None:
> return None
> if i['key_name']:
> @@ -135,7 +148,8 @@ class CloudController(object):
> 'pu
Hi,
There were a few related blueprints which were looking to add various
additional types of resource to the scheduler - all of which will now be
implemented on top of a new generic mechanism covered by:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
> -Original
Hi Folks,
I could do with some pointers on config value deprecation.
All of the examples in the code and documentation seem to deal with the case
of "old_opt" being replaced by "new_opt" but still returning the same value
Here using deprecated_name and / or deprecated_opts in the definition of
ide any deprecated options
present. If the new option is not present and multiple
deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.
I hope that it'll help you.
Best regards,
Denis Makogon.
On Wed, Feb 26
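The fallback order described in the quoted documentation - the new option wins if present, otherwise the option corresponding to the first deprecated name found is used - can be sketched in plain Python. This is a hypothetical helper that only mimics the documented oslo.config behaviour over a plain dict; it is not oslo.config itself:

```python
def resolve_opt(conf, new_name, deprecated_names):
    """Return the value for new_name from conf (a plain dict here),
    falling back to the first deprecated name that is present -
    mirroring the deprecated_opts ordering described above."""
    if new_name in conf:
        return conf[new_name]
    for old in deprecated_names:
        if old in conf:
            return conf[old]
    return None

# With only the old name set, the deprecated value is picked up;
# once new_opt appears it takes precedence.
```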
> -Original Message-
> From: Chris Behrens [mailto:cbehr...@codestud.com]
> Sent: 26 February 2014 22:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
>
>
> This thread is many messages deep now and I'm busy
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 24 February 2014 23:49
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
>
>
> > Similarly with a Xen vs KVM situation I don't think its an extension
> > related i
e) for not adding lots of completely new features
into V2 if V3 was already available in a stable form - but V2 already provides
nearly complete support for nova-net features on top of Neutron. I fail to
see what is wrong with continuing to improve that.
Phil
> -Original Message-
Hi Folks,
Is there any support yet in novaclient for requesting a specific microversion ?
(looking at the final leg of extending clean-shutdown to the API, and
wondering how to test this in devstack via the novaclient)
Phil
___
Hi Jay,
I'm going to disagree with you on this one, because:
i) This is a feature that was discussed in at least one if not two Design
Summits and went through a long review period, it wasn't one of those changes
that merged in 24 hours before people could take a good look at it. Whatever
you
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 25 April 2014 23:29
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: remove the server groups
> feature
>
> On Fri, 2014-04-25 at 22:00 +, Day
> >
> > In the original API there was a way to remove members from the group.
> > This didn't make it into the code that was submitted.
>
> Well, it didn't make it in because it was broken. If you add an instance to a
> group after it's running, a migration may need to take place in order to keep
>Nova now can detect host unreachable. But it cannot distinguish between host
>isolation, host death, and the nova-compute service being down. When host
>unreachable is reported, users have to find out the exact state by themselves
>and then take the appropriate measure to recover. Therefore we'd like to improve the ho
> -Original Message-
> From: Tripp, Travis S
> Sent: 07 May 2014 18:06
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use
> cases for volume's admin_metadata, metadata and glance_image_metadata
>
> >
Hi Vish,
I think quota classes have been removed from Nova now.
Phil
Sent from Samsung Mobile
Original message
From: Vishvananda Ishaya
Date:27/05/2014 19:24 (GMT+00:00)
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [nova] no
Could we replace the refresh from the periodic task with a timestamp in the
network cache of when it was last updated, so that we refresh it only when it's
accessed if older than X ?
From: Aaron Rosen [mailto:aaronoro...@gmail.com]
Sent: 29 May 2014 01:47
To: Assaf Muller
Cc: OpenStack Development
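The suggestion above - stamp the cache on update and refresh lazily on access rather than from a periodic task - might look something like this sketch. The class and attribute names are illustrative only, not nova code:

```python
import time

class LazyNetworkCache:
    """Refresh cached network info on access, only when it is older
    than max_age seconds - instead of from a periodic task."""

    def __init__(self, fetch, max_age):
        self._fetch = fetch        # callable returning fresh network info
        self._max_age = max_age
        self._value = None
        self._updated = 0.0        # timestamp of the last refresh

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._value is None or (now - self._updated) > self._max_age:
            self._value = self._fetch()
            self._updated = now
        return self._value
```

The trade-off is that the first access after a quiet period pays the refresh cost inline, but idle instances no longer generate periodic refresh traffic at all.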
faults are, but it looks like this isn't
the case.
Unfortunately the API removal in Nova was followed by similar changes in
novaclient and Horizon, so fixing Icehouse at this point is probably going to
be difficult.
[Day, Phil] I think we should revert the changes in all three systems then.
Hi Jay,
> * Host aggregates may also have a separate allocation ratio that overrides
> any configuration setting that a particular host may have
So with your proposal would the resource tracker be responsible for picking and
using override values defined as part of an aggregate that includes the
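A sketch of how the resource tracker might pick such an override from aggregate metadata, assuming it has the host's aggregates available. Taking the smallest value when several aggregates set the key is my own conservative assumption, not established nova behaviour:

```python
def effective_allocation_ratio(conf_ratio, host_aggregates,
                               key='cpu_allocation_ratio'):
    """Return the allocation ratio for a host: an aggregate-metadata
    override if any aggregate containing the host defines one,
    otherwise the config default."""
    overrides = [float(agg['metadata'][key])
                 for agg in host_aggregates
                 if key in agg.get('metadata', {})]
    # Assumption: with conflicting overrides, be conservative.
    return min(overrides) if overrides else conf_ratio
```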
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
[2] https://bugs.launchpad.net/nova/+bug/1299517
> The patch [2] proposes changing the default DNS driver from
> 'nova.network.noop_dns_driver.NoopDNSDriver' to other that verifies if
> DNS entries already exists before adding them, such as the
> 'nova.network.minidns.MiniDNS'.
Changing a default setting in a way that isn't backwards compatible
Hi Folks,
I've been working on a change to make the user_data field an optional part of
the Instance object, since passing it around everywhere seems a bad idea because:
- It can be huge
- It's only used when getting metadata
- It can contain user sensitive data
-
http
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 04 June 2014 19:23
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
> allocation ratio out of scheduler
>
> On 06/04/2014 1
Hi Dan,
>
> > On a compute manager that is still running the old version of the code
> > (i.e using the previous object version), if a method that hasn't yet
> > been converted to objects gets a dict created from the new version of
> > the object (e.g. rescue, get_console_output), then object_co
mes you do want to lie to your
users!
[Day, Phil] I agree that there is a problem with having every new option we add
in extra_specs leading to a new set of flavors. There are a number of
changes up for review to expose more hypervisor capabilities via extra_specs
that also have this potenti
that also have this potenti
mes you do want to lie to your
users!
[Day, Phil] BTW you might be able to (nearly) do this already if you define
aggregates for the two QoS pools, and limit which projects can be scheduled
into those pools using the AggregateMultiTenancyIsolation filter. I say
nearly because as pointed out by
Hi Joe,
Can you give some examples of what that data would be used for ?
It sounds on the face of it that what you’re looking for is pretty similar to
what Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050
https://review.openstack.org/#/c/71557)
Phil
From:
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 09 June 2014 19:03
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
> allocation ratio out of scheduler
>
> On 06/09/2014 12:32 PM, Chris Friesen wrote:
>
Hi Chris,
>The documentation is NOT the canonical source for the behaviour of the API;
>currently the code should be seen as the reference. We've run into issues
>before where people have tried to align code to fit the documentation and
>made backwards incompatible changes (although this is
I agree that we need to keep a tight focus on all API changes.
However, was the problem with the floating IP change just to do with the
implementation in Nova, or the frequency with which Ceilometer was calling it?
Whatever guidelines we follow on API changes themselves it's pretty hard to
p
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral disks are smaller than the current
ones - which seems fair enough I guess (you can't drop arbitrary disk content on
resize), except that because the check is in
Theoretically impossible to reduce disk unless you have some really nasty guest
additions.
additions.
On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil
mailto:philip@hp.com>> wrote:
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemer
ki [mailto:andrew.la...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as
part of resize ?
On 06/13/2014 08:03 AM, Day, Phil wrote:
>Theoretically impossible to
Hi Folks,
A recent change introduced a unit test to "warn/notify developers" when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok - so my change (https://review.openstack.org/#/c/68942) broke it as it adds
some extra parameter
Beyond what is and isn’t technically possible at the file system level there is
always the problem that the user may have more data than can fit into the
reduced disk.
I don’t want to take away useful functionality from folks if there are cases
where it already works – mostly I just want to imp
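The API-layer check argued for in these threads - rejecting a shrinking resize up front rather than letting it fail on the compute node with only reschedule or an Error state as outcomes - could be sketched as follows. Flavors are shown as plain dicts for illustration; this is not the nova API code:

```python
def validate_resize(current_flavor, new_flavor):
    """Reject a resize that would shrink the root or ephemeral disk,
    before the request ever reaches a compute node."""
    for field in ('root_gb', 'ephemeral_gb'):
        if new_flavor[field] < current_flavor[field]:
            raise ValueError(
                "cannot shrink %s from %dGB to %dGB on resize"
                % (field, current_flavor[field], new_flavor[field]))
```

Failing here returns a clean 4xx to the user at request time, instead of surfacing the same constraint later as a reschedule or an instance stuck in Error.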
contract
Hi!
On Fri, Jun 13, 2014 at 9:30 AM, Day, Phil
mailto:philip@hp.com>> wrote:
Hi Folks,
A recent change introduced a unit test to “warn/notify developers” when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok –
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 17 June 2014 15:57
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
> as part of resize ?
>
> On 06/17/2014 10:43 A
> -Original Message-
> From: Ahmed RAHAL [mailto:ara...@iweb.com]
> Sent: 18 June 2014 01:21
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] locked instances and snaphot
>
> Hi there,
>
> Le 2014-06-16 15:28, melanie witt a écrit :
> > Hi all,
> >
> [...]
> >
gt;
> On Wed, Jun 18, 2014 at 11:05:01AM +, Day, Phil wrote:
> > > -Original Message-
> > > From: Russell Bryant [mailto:rbry...@redhat.com]
> > > Sent: 17 June 2014 15:57
> > > To: OpenStack Development Mailing List (not for usage questions)
>
I guess that would depend on whether the flavour has any ephemeral storage in
addition to the boot volume.
The block migration should work in this case - have you tried that?
Sent from Samsung Mobile
Original message
From: Chris Friesen
Date:08/03/2014 06:16 (GMT+00:00)
To:
Sorry if I'm coming late to this thread, but why would you define AZs to cover
"orthogonal zones" ?
AZs are a very specific form of aggregate - they provide a particular isolation
semantic between the hosts (i.e. physical hosts are never in more than one AZ)
- hence the "availability" in the nam
>
> The need arises when you need a way for both zones to be used for
> scheduling when no specific zone is specified. The only way to do that is
> either to have an AZ which is a superset of the two AZs, or the other way
> could be if the default_scheduler_zone can take a list of zones inste
> -Original Message-
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: 26 March 2014 20:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
>
>
> On Mar 26, 2014,
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 27 March 2014 18:15
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
>
> On 03/27/2014 1
>> Personally, I feel it is a mistake to continue to use the Amazon concept
>> of an availability zone in OpenStack, as it brings with it the
>> connotation from AWS EC2 that each zone is an independent failure
>> domain. This characteristic of EC2 availability zones is not enforced in
>> OpenStack
Hi Sylvain,
There was a similar thread on this recently - which might be worth reviewing:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html
Some interesting use cases were posted, and I don't think a conclusion was
reached, which seems to suggest this might be a good
I can see the case for Trove being to create an instance within a customer's
tenant (if nothing else it would make adding it onto their Neutron network a
lot easier), but I'm wondering why it really needs to be hidden from them ?
If the instances have a name that makes it pretty obvious that Tro