for instances produced without the Magnum virt
> driver when forming or scaling Bays. I suppose a scheduling hint
> might be adequate for this.
>
> Adrian
>
> > On May 17, 2015, at 11:48 AM, Matt Riedemann wrote:
> >
> >
> >
> > On 5/16
If system containers are a viable use case for Nova, and if Magnum is
aiming at both application containers and system containers, would it make
sense to have a new virt driver in Nova that would invoke the Magnum API for
container provisioning and life cycle? This would avoid (some of the) code
duplication
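For illustration only, a very rough sketch of the delegation idea (names and
signatures here are hypothetical -- the real nova.virt.driver.ComputeDriver
interface has many more methods and different signatures, and the Magnum
client calls are invented):

class MagnumClient(object):
    """Placeholder for a client that would talk to the Magnum API."""
    def create_container(self, name, image, memory_mb):
        raise NotImplementedError
    def delete_container(self, container_id):
        raise NotImplementedError

class MagnumVirtDriver(object):
    """Nova-side driver that delegates instance life cycle to Magnum."""
    def __init__(self, magnum_client):
        self.magnum = magnum_client

    def spawn(self, instance, image_ref, flavor):
        # Instead of talking to libvirt/KVM, hand the request to Magnum.
        return self.magnum.create_container(name=instance['uuid'],
                                            image=image_ref,
                                            memory_mb=flavor['ram'])

    def destroy(self, instance):
        self.magnum.delete_container(instance['uuid'])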
Tom Fifield wrote on 25/02/2015 06:46:13 AM:
> On 24/02/15 19:27, Daniel P. Berrange wrote:
> > On Tue, Feb 24, 2015 at 12:05:17PM +0100, Thierry Carrez wrote:
> >> Daniel P. Berrange wrote:
> >>> [...]
>
> > I'm not familiar with how the translations work, but if they are
> > waiting until the
This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1]. Talking to another
OpenStack cloud should be similar to talking to vCenter. The idea was that the
Cells support could be refactored around this notion as well.
Not sure whether the
>> So maybe the problem isn't having the flavors so much, but in how the
user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the
system would find a "best match" based on criteria set by the cloud admin
then would t
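A minimal sketch of what such a "best match" lookup could look like on the
client side (the flavor attributes and ranking policy below are assumptions,
not an existing API):

def best_match_flavor(flavors, vcpus, ram_mb, disk_gb):
    """Return the smallest flavor satisfying the requested minimums, or None."""
    candidates = [f for f in flavors
                  if f['vcpus'] >= vcpus and f['ram'] >= ram_mb
                  and f['disk'] >= disk_gb]
    # "Best" here means least RAM, then fewest vCPUs -- this ranking is
    # exactly the kind of criteria the cloud admin would have to define.
    return min(candidates, key=lambda f: (f['ram'], f['vcpus'], f['disk']),
               default=None)

flavors = [{'name': 'm1.small', 'vcpus': 1, 'ram': 2048, 'disk': 20},
           {'name': 'm1.medium', 'vcpus': 2, 'ram': 4096, 'disk': 40},
           {'name': 'm1.large', 'vcpus': 4, 'ram': 8192, 'disk': 80}]
print(best_match_flavor(flavors, vcpus=2, ram_mb=3000, disk_gb=20)['name'])
# -> m1.medium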
It seems that there are also issues around scheduling in environments that
comprise non-flat/homogeneous groups of hosts. Perhaps this is related to the
'clustered hypervisor support in Nova' proposal (
http://summit.openstack.org/cfp/details/145). Not sure whether we need a
separate slot for this or not -
A Heat template orchestrates user actions, while management of flavors is
typically the admin's job (due to their tight link to the physical hardware
configuration, which is unknown to a regular user).
Regards,
Alex
From: "ELISHA, Moshe (Moshe)"
To: "openstack-dev@lists.openstack.org"
,
Date: 09
Similar capabilities are being introduced here:
https://review.openstack.org/#/c/61839/
Regards,
Alex
From: Kenichi Oomichi
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 03/02/2014 11:48 AM
Subject: [openstack-dev] [Nova] bp: nova-ecu-support
Maybe we can also briefly discuss the status of
https://review.openstack.org/#/q/topic:bp/multiple-scheduler-drivers,n,z
-- now that a revised implementation is available for review (broken into
4 small patches), and people are back from vacations, it would be good to
get some attention from relev
Great initiative!
I would certainly be interested in taking part in this -- although I wouldn't
necessarily claim to be among "people with the know-how to design and
implement it well". For sure this is going to be a painful but exciting
process.
Regards,
Alex
From: Robert Collins
To: Op
I think the idea in general is very good (I would be a heavy consumer of
such a thing myself). I am not sure how sustainable it is to do it manually,
though, especially for larger projects.
Maybe there is a reasonable way to automate this. For example, if we
could generate a 'dashboard' for each p
Another possible approach could be that only part of the 50 succeeds
(reported back to the user), and then a retry mechanism at a higher level
would potentially approach the other partition/scheduler - similar to
today's retries.
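A toy sketch of that higher-level retry loop (the partition objects and the
provision_batch() call are hypothetical):

def provision_with_retries(partitions, provision_batch, count, max_attempts=3):
    """Try to place 'count' instances, falling back to other
    partitions/schedulers for whatever the first one could not place."""
    remaining = count
    for _attempt in range(max_attempts):
        for partition in partitions:
            if remaining == 0:
                return 0
            # provision_batch() returns how many instances were actually
            # placed; the rest are reported back and retried elsewhere.
            remaining -= provision_batch(partition, remaining)
    return remaining  # still unplaced after all retries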
Regards,
Alex
From: Mike Wilson
To: "OpenStack Develop
Boris Pavlovic wrote on 18/11/2013 08:31:20 AM:
> Actually schedulers in nova and cinder are almost the same.
Well, this is kind of expected, since the Cinder scheduler started as a
copy-paste of the Nova scheduler :-) But they already started diverging
(not sure whether this is necessarily a bad
think you mentioned that if we have, say, 10 schedulers, we will also have
10 instances of memcached.
Regards,
Alex
> Best regards,
> Boris Pavlovic
>
>
> On Sun, Nov 10, 2013 at 4:20 PM, Alex Glikson wrote:
> Hi Boris,
>
> This is a very interesting approach.
>
Russell Bryant wrote on 15/11/2013 06:49:31 PM:
> 3) If you have work planned for Icehouse, please get your blueprints
> filed as soon as possible. Be sure to set a realistic target milestone.
> So far, *everyone* has targeted *everything* to icehouse-1, which is
> set to be released in less th
Sylvain Bauza wrote on 15/11/2013 11:13:37 AM:
> On a technical note, as a Stackforge contributor, I'm trying to
> implement best practices of Openstack coding into my own project, and
> I'm facing day-to-day issues trying to understand what Oslo libs do or
> how they can be used in a fashion
view soon.
Regards,
Alex
> From: Alex Glikson [mailto:glik...@il.ibm.com]
> Sent: Thursday, November 14, 2013 16:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Configure overcommit policy
>
> In fact, there is a blueprint which w
In fact, there is a blueprint which would enable supporting this scenario
without partitioning --
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
The idea is to annotate flavors with CPU allocation guarantees, and enable
differentiation between instances, potentially running on the
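To make this concrete, a hedged sketch using the 2013-era python-novaclient
(exact client signatures vary by release); it reuses the existing libvirt
'quota:cpu_shares' flavor extra spec as a stand-in for whatever keys the
blueprint ends up defining, and the credentials are placeholders:

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Two flavors identical in size but entitled to different shares of CPU time.
gold = nova.flavors.create('m1.medium.gold', 4096, 2, 40, flavorid='auto')
silver = nova.flavors.create('m1.medium.silver', 4096, 2, 40, flavorid='auto')
gold.set_keys({'quota:cpu_shares': '2048'})    # relative weight for the
silver.set_keys({'quota:cpu_shares': '512'})   # hypervisor CPU scheduler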
Hi,
Is there documentation somewhere on the scheduling flow with Ironic?
The reason I am asking is because we would like to get virtualized and
bare-metal workloads running in the same cloud (ideally with the ability
to repurpose physical machines between bare-metal workloads and
virtualized ones).
You can consider having a separate host aggregate for Hadoop, and use a
combination of AggregateInstanceExtraSpecsFilter (with a special flavor
mapped to this host aggregate) and AggregateCoreFilter (overriding
cpu_allocation_ratio for this host aggregate to be 1).
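Roughly, with the 2013-era python-novaclient (signatures vary by release;
the credentials, host name and flavor size are placeholders):

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Dedicated aggregate for Hadoop nodes, with no CPU overcommit.
agg = nova.aggregates.create('hadoop', None)
nova.aggregates.add_host(agg, 'compute-07')
nova.aggregates.set_metadata(agg, {'hadoop': 'true',
                                   'cpu_allocation_ratio': '1.0'})

# Flavor tied to that aggregate via its extra specs; both
# AggregateInstanceExtraSpecsFilter and AggregateCoreFilter must be enabled
# in scheduler_default_filters in nova.conf for this to take effect.
flavor = nova.flavors.create('hadoop.xlarge', 16384, 8, 160, flavorid='auto')
flavor.set_keys({'hadoop': 'true'})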
Regards,
Alex
From: John
Hi Boris,
This is a very interesting approach.
How do you envision the life cycle of such a scheduler in terms of code
repository, build, test, etc?
What kind of changes to provisioning APIs do you envision to 'feed' such a
scheduler?
Any particular reason you didn't mention Neutron?
Also, there
There is a ZK-backed driver in the Nova service heartbeat mechanism (
https://blueprints.launchpad.net/nova/+spec/zk-service-heartbeat) -- it would
be interesting to know whether it is widely used (might be worth asking on
the general ML, or user groups). There have also been discussions on using
it fo
Russell Bryant wrote on 30/10/2013 10:20:34 AM:
> On 10/30/2013 03:13 AM, Alex Glikson wrote:
> > Maybe a more appropriate approach could be to have a tool/script that
> > does it, as a one time thing.
> > For example, it could make sense in a scenario when Nova DB gets lost
Maybe a more appropriate approach could be to have a tool/script that does
it, as a one-time thing.
For example, it could make sense in a scenario where the Nova DB gets lost or
corrupted, a new Nova controller is deployed, and the DB needs to be
recreated. Potentially, since the Nova DB is primarily a c
vironments
and different kinds of workload mix (for example, as you pointed out, in
an environment with a flat network and centralized storage, the sharing can
be rather minimal).
> Alex Glikson asked why not go directly to holistic if there is no
> value in doing Nova-only. Yathi replie
topic).
Thanks,
Yathi.
On 10/29/13, 2:14 PM, "Andrew Laski" wrote:
On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson wrote on 10/29/2013 03:37:41 AM:
1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This i
Andrew Laski wrote on 29/10/2013 11:14:03 PM:
> [...]
> Having Nova call into Heat is backwards IMO. If there are specific
> pieces of information that Nova can expose, or API capabilities to help
> with orchestration/placement that Heat or some other service would like
> to use then let's loo
Nice example. I think this is certainly a step in the right direction.
However, I am a bit lost when trying to figure out how this kind of API
(which makes perfect sense at the conceptual level) can be implemented.
IMO, when we make the attempt to design the actual implementation that
would be
+1
Regards,
Alex
Joshua Harlow wrote on 26/10/2013 09:29:03 AM:
>
> An idea that others and I are having for a similar use case in
> cinder (or it appears to be similar).
>
> If there was a well defined state machine/s in nova with well
> defined and managed transitions between states then
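A toy sketch of what "well-defined and managed transitions" could look like
(the states below are a simplified, made-up subset, not the real Nova or
Cinder state model):

ALLOWED = {
    'BUILDING': {'ACTIVE', 'ERROR'},
    'ACTIVE':   {'STOPPED', 'ERROR', 'DELETED'},
    'STOPPED':  {'ACTIVE', 'DELETED'},
    'ERROR':    {'DELETED'},
}

def transition(current, new):
    # Reject anything that is not an explicitly allowed transition.
    if new not in ALLOWED.get(current, set()):
        raise ValueError('illegal transition %s -> %s' % (current, new))
    return new

state = 'BUILDING'
state = transition(state, 'ACTIVE')     # fine
# transition(state, 'BUILDING')         # would raise: not a managed transition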
Hi Caitlin,
Caitlin Bestler wrote on 21/10/2013 06:51:36 PM:
> On 10/21/2013 2:34 AM, Avishay Traeger wrote:
> >
> > Hi all,
> > We (IBM and Red Hat) have begun discussions on enabling Disaster
Recovery
> > (DR) in OpenStack.
> >
> > We have created a wiki page with our initial thoughts:
> > ht
This sounds very similar to
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
We worked on it in Havana, learned a lot from the feedback during the review
cycle, and hopefully will finalize the details at the summit and will be
able to continue & finish the implementation in
I would suggest not generalizing too much; e.g., restrict the discussion
to PlacementPolicy. If anyone else would want to use a similar construct
for other purposes -- it can be generalized later.
For example, the notion of 'policy' already exists in other places in
OpenStack in the context of
IMO, the three themes make sense, but I would suggest waiting until the
submission deadline and discussing at the following IRC meeting on the 22nd.
Maybe there will be more relevant proposals to consider.
Regards,
Alex
P.S. I plan to submit a proposal regarding scheduling policies, and maybe
one
all the above.
Regards,
Alex
[1]
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit
[2] https://wiki.openstack.org/wiki/Heat/PolicyExtension
====
Alex Glikson
M
48
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] automatically evacuate instances on
compute failure
>
> On 10/08/2013 03:20 PM, Alex Glikson wrote:
> > Seems that this can be broken into 3 incremental pieces. First, would
> > be great if the a
Good summary. I would also add that in A1 the schedulers (e.g., in Nova
and Cinder) could talk to each other to coordinate. Besides defining the
policy and the user-facing APIs, I think we should also outline those
cross-component APIs (need to think whether they have to be user-visible,
or ca
Seems that this can be broken into 3 incremental pieces. First, it would be
great if the ability to schedule a single 'evacuate' were finally
merged (
https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance
). Then, it would make sense to have the logic that evacuates an entire host
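For that second piece, a hedged sketch with the 2013-era python-novaclient
(signatures vary by release; the hosts and credentials are placeholders):

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

def evacuate_host(failed_host, target_host, shared_storage=True):
    """Evacuate every instance that was running on a failed compute host."""
    servers = nova.servers.list(search_opts={'host': failed_host,
                                             'all_tenants': 1})
    for server in servers:
        # Once find-host-and-evacuate is merged, target selection could be
        # left to the scheduler instead of being passed explicitly.
        nova.servers.evacuate(server, target_host,
                              on_shared_storage=shared_storage)

evacuate_host('compute-03', 'compute-04')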
Mike Spreitzer wrote on 01/10/2013 06:58:10 AM:
> Alex Glikson wrote on 09/29/2013 03:30:35 PM:
> > Mike Spreitzer wrote on 29/09/2013 08:02:00 PM:
> >
> > > Another reason to prefer host is that we have other resources to
> > > locate besides compute.
>
Mike Spreitzer wrote on 29/09/2013 08:02:00 PM:
> Another reason to prefer host is that we have other resources to
> locate besides compute.
Good point. Another approach (not necessarily contradicting) could be to
specify the location as a property of a host aggregate rather than of
individual hosts.
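A small pure-Python sketch of how rack labels attached to aggregates (the
'rack' metadata key is hypothetical) could drive rack-level anti-affinity:

aggregates = {
    'rack-r01': {'hosts': ['c01', 'c02'], 'metadata': {'rack': 'r01'}},
    'rack-r02': {'hosts': ['c03', 'c04'], 'metadata': {'rack': 'r02'}},
}

# Derive host -> rack from the aggregate metadata.
host_rack = {h: agg['metadata']['rack']
             for agg in aggregates.values() for h in agg['hosts']}

def pick_hosts_across_racks(candidates, count):
    """Pick 'count' hosts such that no two share a rack."""
    chosen, used_racks = [], set()
    for host in candidates:
        rack = host_rack.get(host)
        if rack in used_racks:
            continue   # rack already used; skip it to survive a rack failure
        chosen.append(host)
        used_racks.add(rack)
        if len(chosen) == count:
            return chosen
    raise RuntimeError('not enough distinct racks for the requested placement')

print(pick_hosts_across_racks(['c01', 'c02', 'c03'], 2))   # ['c01', 'c03']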
consolidation.
>
> I'd like to have some form of summit discussion in Hong Kong around
> these topics but it is not clear where it fits.
>
> Are there others who feel similarly ? How can we fit it in ?
>
> Tim
> [attachment "smime.p7s" deleted by Alex Glikson/H
If I understand correctly, what really matters, at least in the case of Hadoop,
is network proximity between instances.
Hence, maybe Neutron would be a better fit to provide such information. In
particular, depending on the virtual network configuration, having 2 instances
on the same node does not guarantee network proximity
I tend to agree with Jake that this check is likely to conflict with the
scheduler, and should be removed.
Regards,
Alex
From: Guangya Liu
To: openstack-dev@lists.openstack.org,
Date: 03/09/2013 02:03 AM
Subject: [openstack-dev] Questions related to live migration
without ta
Joe Gordon wrote on 28/08/2013 11:04:45 PM:
>> Well, first, at the moment each of these filters duplicates the
>> code that handles aggregate-based overrides. So, it would make sense
>> to have it in one place anyway. Second, why duplicate all the
>> filters if this can be done with a sing
It seems that the main concern was that the overridden scheduler
properties are taken from the flavor, and not from the aggregate. In fact,
there was a consensus that this is not optimal.
I think that we can still make some progress in Havana towards
per-aggregate overrides, generalizing on the
To: OpenStack Development Mailing List
Date: 27/07/2013 01:22 AM
Subject: Re: [openstack-dev] [Nova] support for multiple active
scheduler policies/drivers
On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson wrote:
Russell Bryant wrote on 24/07/2013 07:14:27 PM:
>
There are roughly three cases.
1. Multiple identical instances of the scheduler service. This is
typically done to increase scalability, and is already supported (although
it may sometimes result in provisioning failures due to race conditions
between scheduler instances). There is a single queue o
Agree. Some enhancements to Nova might still be required (e.g., to handle
resource reservations, so that there is enough capacity), but the
end-to-end framework probably should be outside of existing services,
probably talking to Nova, Ceilometer and potentially other components
(maybe Cinder,
duler policies/drivers
>
>
>
> > From: Joe Gordon [mailto:joe.gord...@gmail.com]
> > Sent: 26 July 2013 23:16
> > To: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Nova] support for multiple active
> scheduler policies/drivers
> >
Russell Bryant wrote on 24/07/2013 07:14:27 PM:
>
> I really like your point about not needing to set things up via a config
> file. That's fairly limiting since you can't change it on the fly via
> the API.
True. As I pointed out in another response, the ultimate goal would be to
have policie
> From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: 23 July 2013 22:32
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] support for multiple active
scheduler
> > policies/drivers
> >
> > On 07/23/2013 04:24 PM, Alex Gliks
Russell Bryant wrote on 23/07/2013 07:19:48 PM:
> I understand the use case, but can't it just be achieved with 2 flavors
> and without this new aggregate-policy mapping?
>
> flavor 1 with extra specs to say aggregate A and policy Y
> flavor 2 with extra specs to say aggregate B and policy Z
I
Russell Bryant wrote on 23/07/2013 05:35:18 PM:
> >> #1 - policy associated with a host aggregate
> >>
> >> This seems very odd to me. Scheduling policy is what chooses hosts, so
> >> having a subset of hosts specify which policy to use seems backwards.
> >
> > This is not what we had in mind.
Russell Bryant wrote on 23/07/2013 01:04:24 AM:
> > [1]
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
> > [3] https://review.openstack.org/#/c/37407/
>
> Thanks for bringing this up. I do have some c
Dear all,
Following the initial discussions at the last design summit, we have
published the design [2] and the first take on the implementation [3] of
the blueprint adding support for multiple active scheduler
policies/drivers [1].
In a nutshell, the idea is to allow overriding the 'default'