On Wednesday morning Jay Pipes led a double session on the work going on in the Nova scheduler. The session etherpad is here [1].

Jay started off by taking us through a high-level overview of what was completed for the quantitative changes:

1. Resource classes

2. Resource providers

3. Online data migration of compute node RAM/CPU/disk inventory.

Then Jay talked through the in-progress quantitative changes:

1. Online data migration of allocation fields for instances.

2. Modeling of generic resource pools for things like shared storage and IP subnet allocation pools (for Neutron routed networks).

3. Creating a separate placement REST API endpoint for generic resource pools.

4. Cleanup/refactor around PCI device handling.

For the quantitative changes, we still need to migrate the PCI and NUMA fields.
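To make the quantitative model concrete, here is a rough sketch of how a resource provider tracks inventory per resource class and consumes from it via allocations. This is illustrative only; the class and field names below are hypothetical and not the actual Nova objects or schema:

```python
# Illustrative sketch only -- names are made up, not the real Nova objects.
# A resource provider advertises inventory per resource class, and
# allocations consume capacity from that inventory.

from dataclasses import dataclass, field


@dataclass
class Inventory:
    resource_class: str       # e.g. 'VCPU', 'MEMORY_MB', 'DISK_GB'
    total: int
    allocation_ratio: float = 1.0   # overcommit factor

    @property
    def capacity(self):
        return int(self.total * self.allocation_ratio)


@dataclass
class ResourceProvider:
    uuid: str
    inventories: dict = field(default_factory=dict)   # resource_class -> Inventory
    allocations: list = field(default_factory=list)   # (consumer, resource_class, amount)

    def add_inventory(self, inv):
        self.inventories[inv.resource_class] = inv

    def used(self, resource_class):
        return sum(a[2] for a in self.allocations if a[1] == resource_class)

    def allocate(self, consumer, resource_class, amount):
        # Refuse the allocation if it would exceed capacity (total * ratio).
        inv = self.inventories[resource_class]
        if self.used(resource_class) + amount > inv.capacity:
            raise ValueError('insufficient %s capacity' % resource_class)
        self.allocations.append((consumer, resource_class, amount))


# A compute node as a provider of VCPU inventory with 2x overcommit:
rp = ResourceProvider(uuid='compute1')
rp.add_inventory(Inventory('VCPU', total=8, allocation_ratio=2.0))
rp.allocate('instance-a', 'VCPU', 4)
print(rp.used('VCPU'))  # 4
```

The same shape also covers generic resource pools: a pool for shared storage is just another provider whose DISK_GB inventory multiple compute nodes draw allocations from.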

In the second scheduler session we got more into the capabilities (qualitative) part of the request. For example, this could mean exposing hypervisor capabilities from the compute node through the API, or using them for scheduling.

Jay pointed out there have been several blueprints posted over time related to this.

We discussed just what a capability is, e.g. is a hypervisor version itself a capability, since certain features are only exposed with certain versions? For libvirt it isn't really: the version gates features, e.g. you can only set the admin password in a guest if you have libvirt >= 1.2.16. For Hyper-V, on the other hand, there are features supported by both older and newer versions of the hypervisor that are generally considered better or more robust in the newer version. And for some things, like supporting Hyper-V gen2 instances, we consider 'supports-hyperv-gen2' itself to be the capability, which we can further break down by value enums if we need more granular information about it.
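One way to picture the distinction between a capability and the enum values that refine it (illustrative only; these names and this matching logic are not from any spec):

```python
# Illustrative sketch -- capability names and structure are made up.
# A plain capability is just a boolean flag; a capability with value
# enums carries a set of supported values for finer-grained matching.

capabilities = {
    'supports-hyperv-gen2': True,               # the flag itself is the capability
    'cpu-features': {'avx', 'avx2', 'sse4.2'},  # enums refine a capability
}


def host_satisfies(request, caps):
    """Check requested capabilities (and optional enum values) against a host."""
    for name, wanted in request.items():
        have = caps.get(name)
        if have is None or have is False:
            return False
        # For enum-valued capabilities, every requested value must be supported.
        if isinstance(wanted, set) and not wanted <= have:
            return False
    return True


print(host_satisfies({'supports-hyperv-gen2': True}, capabilities))  # True
print(host_satisfies({'cpu-features': {'avx512'}}, capabilities))    # False
```

Under the microversion rule we later agreed on, adding a new top-level key here would need a microversion, while adding another value to the 'cpu-features' set would not.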

While informative, the session left us with some unanswered next steps:

1. Where to put the inventories/allocations tables. We have three choices: API DB, Nova (cell) DB, or a new placement DB. Leaving them in the cell DB would mean aggregating data across cells, which would be a mess. Putting them in a new placement DB would mean a second new database for deployers to manage within a short span of releases. So I believe we agreed to put them in the API DB (but if others think otherwise please speak up).

2. We talked for quite a while about what a capability is and isn't, but I didn't come away with a definitive answer. This might get teased out in Claudiu's spec [2]. Note, however, that on Friday we agreed that as far as microversions are concerned, a new capability exposed in the REST API requires a microversion. New enumerations for a capability, e.g. CPU features, do not require a microversion bump; there are just too many of them.

3. I think we're in agreement on the blueprints/roadmap that Jay has put forth, but it's unclear if we have an owner for each blueprint. Jay and Chris Dent own several of these and some others are helping out (Dan Smith has been doing a lot of the online data migration patches), but we don't have owners for everything.

4. We need to close out any obsolete blueprints from the list that Jay had in the etherpad. As already noted, several of these are older and probably superseded by current work, so the team just needs to flush these out.

5. Getting approval on the generic-resource-pools, resource-providers-allocations, and standardizing-capabilities (extra specs) specs. The first two are pretty clear at this point; the specs just need to be rebased and reviewed in the next week, and a lot of the code is already up for review. We're less clear on standardizing capabilities.

--

So the focus for the immediate future is going to have to be on completing the resource providers, inventory and allocation data migration code, and generic resource pools. That's all the quantitative work. If we can push to get a lot of that done before the midcycle meetup, that would be great; then we can see where we sit and discuss capabilities further.

Jay - please correct or add to anything above.

[1] https://etherpad.openstack.org/p/newton-nova-scheduler
[2] https://review.openstack.org/#/c/286520/

--

Thanks,

Matt Riedemann


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
