On 09/19/2017 05:21 PM, Steven D. Searles wrote:
Hello everyone, and thanks in advance. I have OpenStack Pike (KVM, FC-SAN/Cinder)
installed in our lab for testing before we upgrade, and I am seeing a possible issue
with disabling a host and live migrating the instances off via the Horizon
interface. I can migrate the instances individually via the OpenStack client
without issue, so it looks like I might be missing something relating to concurrent
jobs in my nova config. Interestingly enough, when a "migrate host" is attempted
via Horizon, the migrations all fail, while migrating a single instance through the
Horizon interface does work. Below is what I am seeing in my scheduler log on the
controller when trying to live migrate all instances from a disabled host. I
believe the last line points to the obvious issue, but I cannot find a nova option
that seems to relate to it. Can anyone point me in the right direction?
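
In case it helps, the individual migrations I am running are along these lines
(the compute host names and server UUID below are placeholders):

  # Take the source compute node out of scheduling
  openstack compute service set --disable compute01 nova-compute

  # Live migrate one instance at a time to a chosen target host
  # (in the client version I have here, --live takes the target host)
  openstack server migrate --live compute02 <server-uuid>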

There's a default limit of 1 outgoing live migration per compute node. I don't think that's the whole issue here, though.
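
If you want to allow more than one at a time, the knob for that is
max_concurrent_live_migrations in nova.conf on the compute nodes (the value
below is just an example; 0 means no limit):

  # /etc/nova/nova.conf on each compute node
  [DEFAULT]
  # Defaults to 1 outgoing live migration per host; 0 removes the limit
  max_concurrent_live_migrations = 4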

2017-09-19 19:02:30.588 19741 DEBUG nova.scheduler.filter_scheduler
[req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b
723aa12337a44f818b6d1e1a59f16e49 - default default] There are 1 hosts available
but 10 instances requested to build. select_destinations
/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101

It's unclear to me why it's trying to schedule 10 instances all at once. Did you originally create all the instances as part of a single boot request?
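
By a single boot request I mean one API call that asks for all ten instances at
once and is handled by the scheduler in one pass, e.g. something like this
(image, flavor and name are placeholders):

  # Ten instances requested in a single call
  openstack server create --image cirros --flavor m1.small \
      --min 10 --max 10 test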

Chris

