Thanks for letting us know Michael, and thanks for doing it in such a moving
way. Sad news indeed.
Phil
From: Michael Still [mailto:mi...@stillhq.com]
Sent: 08 April 2015 05:49
To: OpenStack Development Mailing List
Subject: [openstack-dev] In loving memory of Chris Yeoh
It is my sad duty
Hi Folks,
Is there any support yet in novaclient for requesting a specific microversion ?
(looking at the final leg of extending clean-shutdown to the API, and
wondering how to test this in devstack via the novaclient)
Phil
Hi,
Your problem is that you still have the original ram filter configured, so it's
still removing all of the hosts. Try removing that and you should be OK. Note
though that then any hosts not in an aggregate with a ram ratio set won't have
a ram limit at all.
You might also find the aggregate
I think it's counted by tenant not user, so can you re-run the db query based
on tenant ?
Sent from Samsung Mobile
Original message
From: Don Waterloo
Date:05/11/2014 17:29 (GMT+01:00)
To: openstack@lists.openstack.org
Subject: [Openstack] nova absolute-limits versus usage
M
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
>
> Hi Phil,
>
> Thanks for your feedback, and for your patience in reading this long history :) See
> comments inline.
>
> -Original Message-
> From: henry hly [mailto:henry4...@gmail.com]
> Sent: 08 October 2014 09:16
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
>
> Hi,
>
> Good questions: why
I think the expectation is that if a user is already interacting with Neutron
to create ports then they should do the security group assignment in Neutron as
well.
The trouble I see with supporting this way of assigning security groups is what
should the correct behavior be if the user passes m
> > Hi Jay,
> >
> > So just to be clear, are you saying that we should generate 2
> > notification messages on Rabbit for every DB update? That feels
> > like a big overkill for me. If I follow that logic then the current
> > state transition notifications should also be changed to "Starting to
> >
> > I think we should aim to /always/ have 3 notifications using a pattern
> > of
> >
> >try:
> > ...notify start...
> >
> > ...do the work...
> >
> > ...notify end...
> >except:
> > ...notify abort...
>
> Precisely my viewpoint as well. Unless we standardize on
tions in all cases ?
>
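The start/end/abort pattern quoted above can be sketched as a small Python decorator. This is a minimal illustration only: the list-based "notifier" and the event names are hypothetical, not Nova's actual notification API.

```python
# Minimal sketch of the start/end/abort notification pattern.
# "Notifier" here is just a list; real code would emit RPC notifications.
# Event names are illustrative, not Nova's actual notification topics.

def notified(notifier, event):
    """Wrap a function so it emits <event>.start/.end/.abort messages."""
    def decorator(fn):
        def inner(*args, **kwargs):
            notifier.append(event + ".start")
            try:
                result = fn(*args, **kwargs)
            except Exception:
                notifier.append(event + ".abort")
                raise
            notifier.append(event + ".end")
            return result
        return inner
    return decorator

events = []

@notified(events, "instance.resize")
def resize(ok):
    if not ok:
        raise RuntimeError("resize failed")

resize(True)                  # emits .start then .end
try:
    resize(False)             # emits .start then .abort, then re-raises
except RuntimeError:
    pass
```

A decorator keeps the three notifications consistent across operations instead of hand-writing the try/except in every method.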
> On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:
> > Hi Folks,
> >
> > I'd like to get some opinions on the use of pairs of notification
> > messages for simple events. I get that for complex operations on
> > an instance (crea
Hi Folks,
I'd like to get some opinions on the use of pairs of notification messages for
simple events. I get that for complex operations on an instance (create,
rebuild, etc) a start and end message are useful to help instrument progress
and how long the operations took. However we also u
> >
> > DevStack doesn't register v2.1 endpoint to keystone now, but we can use
> > it with calling it directly.
> > It is true that it is difficult to use v2.1 API now and we can check
> > its behavior via v3 API instead.
>
> I posted a patch[1] for registering v2.1 endpoint to keystone, and I co
> -Original Message-
> From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
> Sent: 18 September 2014 02:44
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient
> v3 shell or what?
>
>
> > -O
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 12 September 2014 19:37
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Expand resource name allowed
> characters
>
> Had to laugh about the PILE OF POO character :) Comments inline...
I think in the hopefully not too distant future we'll be able to make the v1_1
client handle both V2 and V2.1 (who knows. Maybe we can even rename it v2) -
and that's what we should do because it will prove if we have full
compatibility or not.
Right now the two things holding that back are the
ber 2014 17:05
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] FFE server-group-quotas
>
> On 09/05/2014 11:28 AM, Ken'ichi Ohmichi wrote:
> > 2014-09-05 21:56 GMT+09:00 Day, Phil :
> >> Hi,
> >>
> >> I'd like to
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 05 September 2014 11:49
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out
> virt drivers
>
> On 09/05/2014 03:02 AM, Sylvain Bauza wrote:
> >
> >
Hi,
I'd like to ask for a FFE for the 3 patchsets that implement quotas for server
groups.
Server groups (which landed in Icehouse) provides a really useful anti-affinity
filter for scheduling that a lot of customers would like to use, but without
some form of quota control to limit the amount
Hi Daniel,
Thanks for putting together such a thoughtful piece - I probably need to
re-read it a few times to take in everything you're saying, but a couple of
thoughts that did occur to me:
- I can see how this could help where a change is fully contained within a virt
driver, but I wonder ho
> >
> > One final note: the specs referenced above didn't get approved until
> > Spec Freeze, which seemed to leave me with less time to implement
> > things. In fact, it seemed that a lot of specs didn't get approved
> > until spec freeze. Perhaps if we had more staggered approval of
> > specs,
> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: 03 September 2014 10:50
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Feature Freeze Exception process for
> Juno
>
> I will follow up with a more detailed email about what
Needing 3 out of 19 instead of 3 out of 20 isn't an order of magnitude
according to my calculator. It's much closer/fairer than making it 2/19 vs
3/20.
If a change is borderline in that it can only get 2 other cores maybe it
doesn't have a strong enough case for an exception.
Phil
Sent from
>Adding in such case more bureaucracy (specs) is not the best way to resolve
>team throughput issues...
I’d argue that if fundamental design disagreements can be surfaced and debated
at the design stage rather than first emerging on patch set XXX of an
implementation, and be used to then prior
> On Wed, Aug 20, 2014 at 08:33:31AM -0400, Jay Pipes wrote:
> > On 08/20/2014 04:48 AM, Nikola Đipanov wrote:
> > >On 08/20/2014 08:27 AM, Joe Gordon wrote:
> > >>On Aug 19, 2014 10:45 AM, "Day, Phil" <mailto:philip@hp.com>
> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: 19 August 2014 17:50
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Scheduler split wrt Extensible Resource
> Tracking
>
> On 08/19/2014 06:39 PM, Sylvain Bauza wrote:
> > On the
2014 at 4:14 AM, Daniel P. Berrange
> wrote:
> > On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
> >> Hi Folks,
> >>
> >> I'd like to propose the following as an exception to the spec freeze, on
> >> the
> basis that it addresses
Hi Folks,
I'd like to propose the following as an exception to the spec freeze, on the
basis that it addresses a potential data corruption issue in the Guest.
https://review.openstack.org/#/c/89650
We were pretty close to getting acceptance on this before, apart from a debate
over whether one
>>
>> Sorry, forgot to put this in my previous message. I've been advocating the
>> ability to use names instead of UUIDs for server groups pretty much since I
>> saw them last year.
>>
>> I'd like to just enforce that server group names must be unique within a
>> tenant, and then allow names t
Hi Folks,
I noticed a couple of changes that have just merged to allow the server group
hints to be specified by name (some legacy behavior around automatically
creating groups).
https://review.openstack.org/#/c/83589/
https://review.openstack.org/#/c/86582/
But group names aren't constrained
Hi Folks,
Working on the server groups quotas I hit an issue with the limits API which I
wanted to get feedback on.
Currently this always shows just the project level quotas and usage, which can
be confusing if there is a lower user specific quota. For example:
Project Quota = 10
User Quota =
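The confusion described in that snippet is that the enforced limit is the most restrictive of the two scopes, while the limits API reports only the project value. A minimal sketch (function name and numbers are illustrative):

```python
# Sketch of the quota confusion described above: the limit actually
# enforced is the minimum of the project- and user-level quotas, but
# the limits API reports only the project-level value.

def effective_limit(project_quota, user_quota=None):
    """Return the limit actually enforced for a user."""
    if user_quota is None:
        return project_quota
    return min(project_quota, user_quota)

# A user with a project quota of 10 but a user quota of 2 is limited
# to 2, even though the limits API would report 10.
```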
Hi Melanie,
I have a BP (https://review.openstack.org/#/c/89650) and the first couple of
bits of implementation (https://review.openstack.org/#/c/68942/
https://review.openstack.org/#/c/99916/) out for review on this very topic ;-)
Phil
> -Original Message-
> From: melanie witt [mailt
> -Original Message-
> From: Ahmed RAHAL [mailto:ara...@iweb.com]
> Sent: 25 June 2014 20:25
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
> "nova list/show"?
>
> Le 2
Hi Phil,
thanks for your reply. So do I need to submit a patch/spec to add it now?
On Wed, Jun 25, 2014 at 5:53 PM, Day, Phil
<philip@hp.com> wrote:
Looking at this a bit deeper the comment in _start_building() says that it's
doing this to “Save the host and launched_on fields
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 25 June 2014 11:49
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
> "nova list/show"?
>
> On 06/25/2014 04:28 AM, Belm
I think there's a bit more to it than just having an aggregate:
- Ironic provides its own version of the Host manager class for the
scheduler, I’m not sure if that is fully compatible with the non-ironic case.
Even in the BP for merging the Ironic driver back into Nova it still looks
this whole update
might just be not needed – although I still like the idea of a state to show
that the request has been taken off the queue by the compute manager.
From: Day, Phil
Sent: 25 June 2014 10:35
To: OpenStack Development Mailing List
Subject: RE: [openstack-dev] [nova] Why is there a
Hi WingWJ,
I agree that we shouldn’t have a task state of None while an operation is in
progress. I’m pretty sure back in the day this didn’t use to be the case and
task_state stayed as Scheduling until it went to Networking (now of course
networking and BDM happen in parallel, so you have to
is down now?
>
> Michael
>
> On Wed, Jun 25, 2014 at 12:53 AM, Day, Phil wrote:
> >> -Original Message-
> >> From: Russell Bryant [mailto:rbry...@redhat.com]
> >> Sent: 24 June 2014 13:08
> >> To: openstack-dev@lists.openstack.org
> &
>
> > Cheers,
> > Michael
> >
> > On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil wrote:
> >> Hi Michael,
> >>
> >> Not sure I understand the need for a gap between "Juno Spec approval
> freeze" (Jul 10th) and "K opens for spec proposa
Hi Michael,
Not sure I understand the need for a gap between "Juno Spec approval freeze"
(Jul 10th) and "K opens for spec proposals" (Sep 4th). I can understand that
K specs won't get approved in that period, and may not get much feedback from
the cores - but I don't see the harm in letting
> On 18 June 2014 21:57, Jay Pipes wrote:
> > On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:
> >>
> >> On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:
> >>>
> >>> On 06/13/2014 02:22 PM, Day, Phil wrote:
> >>>>
>
The basic framework for supporting this kind of resource scheduling is the
extensible-resource-tracker:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://review.openstack.org/#/c/86050/
https://review.openstack.org/#/c/71557/
Once that lands, being able to schedule on
>
> On Wed, Jun 18, 2014 at 11:05:01AM +, Day, Phil wrote:
> > > -Original Message-
> > > From: Russell Bryant [mailto:rbry...@redhat.com]
> > > Sent: 17 June 2014 15:57
> > > To: OpenStack Development Mailing List (not for usage questions)
>
> -Original Message-
> From: Ahmed RAHAL [mailto:ara...@iweb.com]
> Sent: 18 June 2014 01:21
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] locked instances and snaphot
>
> Hi there,
>
> Le 2014-06-16 15:28, melanie witt a écrit :
> > Hi all,
> >
> [...]
> >
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 17 June 2014 15:57
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction
> as part of resize ?
>
> On 06/17/2014 10:43 A
contract
Hi!
On Fri, Jun 13, 2014 at 9:30 AM, Day, Phil
<philip@hp.com> wrote:
Hi Folks,
A recent change introduced a unit test to “warn/notify developers” when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok –
Beyond what is and isn’t technically possible at the file system level there is
always the problem that the user may have more data than can fit into the
reduced disk.
I don’t want to take away useful functionality from folks if there are cases
where it already works – mostly I just want to imp
Hi Folks,
A recent change introduced a unit test to "warn/notify developers" when they
make a change which will break the out of tree Ironic virt driver:
https://review.openstack.org/#/c/98201
Ok - so my change (https://review.openstack.org/#/c/68942) broke it as it adds
some extra parameter
ki [mailto:andrew.la...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as
part of resize ?
On 06/13/2014 08:03 AM, Day, Phil wrote:
>Theoretically impossible to
Theoretically impossible to reduce disk unless you have some really nasty guest
additions.
On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil
<philip@hp.com> wrote:
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemer
Hi Folks,
I was looking at the resize code in libvirt, and it has checks which raise an
exception if the target root or ephemeral disks are smaller than the current
ones - which seems fair enough I guess (you can't drop arbitrary disk content on
resize), except that because the check is in
I agree that we need to keep a tight focus on all API changes.
However was the problem with the floating IP change just to do with the
implementation in Nova or the frequency with which Ceilometer was calling it ?
Whatever guidelines we follow on API changes themselves it's pretty hard to
p
Hi Chris,
>The documentation is NOT the canonical source for the behaviour of the API,
>currently the code should be seen as the reference. We've run into issues
>before where people have tried to align code to fit the documentation and
>made backwards incompatible changes (although this is
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 09 June 2014 19:03
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
> allocation ratio out of scheduler
>
> On 06/09/2014 12:32 PM, Chris Friesen wrote:
>
Hi Joe,
Can you give some examples of what that data would be used for ?
It sounds on the face of it that what you’re looking for is pretty similar to
what Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050
https://review.openstack.org/#/c/71557)
Phil
From:
Sometimes you do want to lie to your
users!
[Day, Phil] BTW you might be able to (nearly) do this already if you define
aggregates for the two QoS pools, and limit which projects can be scheduled
into those pools using the AggregateMultiTenancyIsolation filter. I say
nearly because as pointed out by
Sometimes you do want to lie to your
users!
[Day, Phil] I agree that there is a problem with having every new option we add
in extra_specs leading to a new set of flavors. There are a number of
changes up for review to expose more hypervisor capabilities via extra_specs
that also have this potenti
Hi Dan,
>
> > On a compute manager that is still running the old version of the code
> > (i.e using the previous object version), if a method that hasn't yet
> > been converted to objects gets a dict created from the new version of
> > the object (e.g. rescue, get_console_output), then object_co
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 04 June 2014 19:23
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory
> allocation ratio out of scheduler
>
> On 06/04/2014 1
Hi Folks,
I've been working on a change to make the user_data field an optional part of
the Instance object, since passing it around everywhere seems a bad idea:
- It can be huge
- It's only used when getting metadata
- It can contain user sensitive data
-
http
> The patch [2] proposes changing the default DNS driver from
> 'nova.network.noop_dns_driver.NoopDNSDriver' to other that verifies if
> DNS entries already exists before adding them, such as the
> 'nova.network.minidns.MiniDNS'.
Changing a default setting in a way that isn't backwards compatible
Hi Jay,
> * Host aggregates may also have a separate allocation ratio that overrides
> any configuration setting that a particular host may have
So with your proposal would the resource tracker be responsible for picking and
using override values defined as part of an aggregate that includes the
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027574.html
[2] https://bugs.launchpad.net/nova/+bug/1299517
faults are, but it looks like this isn't
the case.
Unfortunately the API removal in Nova was followed by similar changes in
novaclient and Horizon, so fixing Icehouse at this point is probably going to
be difficult.
[Day, Phil] I think we should revert the changes in all three systems then.
Could we replace the refresh from the period task with a timestamp in the
network cache of when it was last updated so that we refresh it only when it’s
accessed if older than X ?
From: Aaron Rosen [mailto:aaronoro...@gmail.com]
Sent: 29 May 2014 01:47
To: Assaf Muller
Cc: OpenStack Development
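The refresh-on-access idea above can be sketched as a cache that stores a last-updated timestamp and refetches only when stale. This is a hypothetical sketch, not Nova's actual network-info cache code; all names are illustrative.

```python
import time

# Sketch of "refresh on access if stale": keep a last-updated timestamp
# with the cached network info and refetch only when the cache is older
# than max_age, instead of refreshing from a periodic task.

class NetworkInfoCache:
    def __init__(self, refresh_fn, max_age=60.0, clock=time.monotonic):
        self._refresh_fn = refresh_fn    # fetches fresh network info
        self._max_age = max_age          # staleness threshold ("X")
        self._clock = clock
        self._data = None
        self._updated_at = None

    def get(self):
        """Return cached data, refreshing it first if it is stale."""
        now = self._clock()
        if self._updated_at is None or now - self._updated_at > self._max_age:
            self._data = self._refresh_fn()
            self._updated_at = now
        return self._data
```

Injecting the clock makes the staleness logic easy to test without sleeping.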
Hi Vish,
I think quota classes have been removed from Nova now.
Phil
Sent from Samsung Mobile
Original message
From: Vishvananda Ishaya
Date:27/05/2014 19:24 (GMT+00:00)
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [nova] no
> -Original Message-
> From: Tripp, Travis S
> Sent: 07 May 2014 18:06
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use
> cases for volume's admin_metadata, metadata and glance_image_metadata
>
> >
>Nova now can detect host unreachable. But it fails to make out host isolation,
>host dead and nova compute service down. When host unreachable is reported,
>users have to find out the exact state by himself and then take the
>appropriate measure to recover. Therefore we'd like to improve the ho
> >
> > In the original API there was a way to remove members from the group.
> > This didn't make it into the code that was submitted.
>
> Well, it didn't make it in because it was broken. If you add an instance to a
> group after it's running, a migration may need to take place in order to keep
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 25 April 2014 23:29
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: remove the server groups
> feature
>
> On Fri, 2014-04-25 at 22:00 +, Day
Hi Jay,
I'm going to disagree with you on this one, because:
i) This is a feature that was discussed in at least one if not two Design
Summits and went through a long review period, it wasn't one of those changes
that merged in 24 hours before people could take a good look at it. Whatever
you
I would like to announce my TC candidacy.
I work full time for HP where I am the architect and technical lead for the
core OpenStack Engineering team, with responsibility for the architecture and
deployment of the OpenStack Infrastructure projects (Nova, Neutron, Cinder,
Glance, Swift) across
> On 04/15/2014 11:01 AM, Brian Elliott wrote:
> >> * specs review. The new blueprint process is a work of genius, and I
> >> think its already working better than what we've had in previous
> >> releases. However, there are a lot of blueprints there in review, and
> >> we need to focus on making s
>
> > Is that right, and any reason why the default for
> > vif_plugging_is_fatal shouldn't be False instead of True to make this
> > sequence less dependent on matching config changes ?
>
> Yes, because the right approach to a new deployment is to have this
> enabled. If it was disabled by defau
Hi Folks,
Sorry for being a tad slow on the uptake here, but I'm trying to understand the
sequence of updates required to move from a system that doesn't have external
events configured between Neutron and Nova and one that does (The new
nova-specs repo would have captured this as part of th
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 09 April 2014 15:37
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
> possible or not ?
>
> On 04/09/2014 0
7:25 AM, Jay Pipes wrote:
> > On Tue, 2014-04-08 at 10:49 +, Day, Phil wrote:
> >> On a large cloud you're protected against this to some extent if the
> >> number of servers is >> number of instances in the quota.
> >>
> >> However it does feel
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 08 April 2014 14:25
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
> possible or not ?
>
> On Tue, 2014-04-08 at 10:4
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 08 April 2014 13:13
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
> element, bug or feature ?
>
> On 04/08/2014 0
On a large cloud you're protected against this to some extent if the number of
servers is >> number of instances in the quota.
However it does feel that there are a couple of things missing to really
provide some better protection:
- A quota value on the maximum size of a server group
Trove, as Admin, could “unlock” those Instances, make the
> modification and then “lock” them after it is complete.
>
> Thanks,
>
> Justin Hopper
> Software Engineer - DBaaS
> irc: juice | gpg: EA238CF3 | twt: @justinhopper
>
>
>
>
> On 4/7/14, 10:01
> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 07 April 2014 19:12
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
> element, bug or feature ?
>
>
...
> I consider it a complete working fea
: juice | gpg: EA238CF3 | twt: @justinhopper
On 4/7/14, 10:01, "Day, Phil" <philip@hp.com>
wrote:
>I can see the case for Trove being to create an instance within a
>customer's tenant (if nothing else it would make adding it onto their
>Neutron network
> Sent: 07 April 2014 20:38
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Server Groups are not an optional
> element, bug or feature ?
>
> On 04/07/2014 02:12 PM, Russell Bryant wrote:
> > On 04/07/2014 01:43 PM, Day, Phil wrote:
> >> Gen
> -Original Message-
> From: Robert Collins [mailto:robe...@robertcollins.net]
> Sent: 07 April 2014 21:01
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [TripleO] config options, defaults, oh my!
>
> So one interesting thing from the influx of new reviews is lots of p
Hi Folks,
Generally the scheduler's capabilities that are exposed via hints can be
enabled or disabled in a Nova install by choosing the set of filters that are
configured. However the server group feature doesn't fit that pattern -
even if the affinity filter isn't configured the anti-affi
I can see the case for Trove being to create an instance within a customer's
tenant (if nothing else it would make adding it onto their Neutron network a
lot easier), but I'm wondering why it really needs to be hidden from them ?
If the instances have a name that makes it pretty obvious that Tro
Hi Sylvain,
There was a similar thread on this recently - which might be worth reviewing:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031006.html
Some interesting use cases were posted, and I don't think a conclusion was
reached, which seems to suggest this might be a good
>> Personally, I feel it is a mistake to continue to use the Amazon concept
>> of an availability zone in OpenStack, as it brings with it the
>> connotation from AWS EC2 that each zone is an independent failure
>> domain. This characteristic of EC2 availability zones is not enforced in
>> OpenStack
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 27 March 2014 18:15
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
>
> On 03/27/2014 1
> -Original Message-
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: 26 March 2014 20:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
>
>
> On Mar 26, 2014,
>
> The need arises when you need a way to use both the zones to be used for
> scheduling when no specific zone is specified. The only way to do that is
> either have an AZ which is a superset of the two AZs or the other way could be
> if the default_scheduler_zone can take a list of zones inste
Sorry if I'm coming late to this thread, but why would you define AZs to cover
"orthogonal zones" ?
AZs are a very specific form of aggregate - they provide a particular isolation
semantic between the hosts (i.e. physical hosts are never in more than one AZ)
- hence the "availability" in the nam
I guess that would depend on whether the flavour has any ephemeral storage in
addition to the boot volume.
The block migration should work in this case - have you tried that?
Sent from Samsung Mobile
Original message
From: Chris Friesen
Date:08/03/2014 06:16 (GMT+00:00)
To:
e) for not adding lots of completely new features
into V2 if V3 was already available in a stable form - but V2 already provides
nearly complete support for nova-net features on top of Neutron. I fail to
see what is wrong with continuing to improve that.
Phil
> -Original Message-
&g
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 24 February 2014 23:49
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
>
>
> > Similarly with a Xen vs KVM situation I don't think it's an extension
> > related i
> -Original Message-
> From: Chris Behrens [mailto:cbehr...@codestud.com]
> Sent: 26 February 2014 22:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
>
>
> This thread is many messages deep now and I'm busy
ide any deprecated options
present. If the new option is not present and multiple
deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.
I hope that it'll help you.
Best regards,
Denis Makogon.
On Wed, Feb 26
Hi Folks,
I could do with some pointers on config value deprecation.
All of the examples in the code and documentation seem to deal with the case
of "old_opt" being replaced by "new_opt" but still returning the same value
Here using deprecated_name and/or deprecated_opts in the definition of
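The resolution order quoted from the oslo.config documentation (the new option wins; otherwise the first deprecated option present is used) can be modelled in plain Python. This is an illustrative model of the rule, not the oslo.config implementation, and the option names are made up.

```python
# Model of the oslo.config deprecation resolution rule quoted above:
# the new option overrides any deprecated options; if the new option is
# absent and several deprecated options are set, the one listed first
# in deprecated_opts wins.

def resolve_option(values, new_name, deprecated_names):
    """Return the effective value, given a dict of explicitly set options."""
    if new_name in values:
        return values[new_name]
    for name in deprecated_names:   # first deprecated option present wins
        if name in values:
            return values[name]
    return None
```

So with both "new_opt" and "old_opt" set, "new_opt" is used; with only deprecated options set, the first one listed takes precedence.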
Hi,
There were a few related blueprints which were looking to add various
additional types of resource to the scheduler - all of which will now be
implemented on top of a new generic mechanism covered by:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
> -Original