On 11/17/2016 12:27 AM, Chris Friesen wrote:
On 11/16/2016 03:55 PM, Sławek Kapłoński wrote:
As I said before, I was testing it and I didn't have instances in the Error
state. Can you maybe check it once again on the current master branch?
I don't have a master devstack handy... I'll try to set one up. I just tried on
a stable/mitaka devstack; I bumped up the quotas and ran:
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --min-count 1
--max-count 100 blah
All the instances went to the "scheduling" state; the first 21 instances
scheduled successfully, then one failed the RamFilter. I ended up with all 100
instances in the "error" state.
I located a running devstack based on master; the nova repo was at commit
633c817d from Nov 12.
It behaved the same way... I jacked up the quotas to give it space, then ran:
nova boot --flavor m1.xlarge --image cirros-0.3.4-x86_64-uec --min-count 1
--max-count 20 blah
The first nine instances scheduled successfully, the next one failed the
RamFilter, and all the instances went to the "error" state.
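As an aside, the kind of check the RamFilter performs can be sketched like this (the function name and host map are illustrative, not nova's actual code): each successful boot consumes host RAM, so eventually no host has enough free memory for the next instance and the filter rejects every host.

```python
# Illustrative sketch of a RAM-based host filter (not nova's implementation):
# a host passes only if its free RAM can accommodate the requested flavor.

def hosts_passing_ram_filter(hosts_free_ram_mb, flavor_ram_mb):
    """Return the hosts whose free RAM is at least the flavor's RAM."""
    return [host for host, free_mb in hosts_free_ram_mb.items()
            if free_mb >= flavor_ram_mb]

# Example: only node1 can still fit an m1.xlarge-sized (16384 MB) guest.
hosts = {'node1': 32768, 'node2': 8192}
print(hosts_passing_ram_filter(hosts, 16384))  # ['node1']
```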
This is what we'd expect: in ComputeTaskManager.build_instances(), if the
call to self._schedule_instances() raises an exception, we hit the "except"
clause and loop over all the instances, setting each of them to the error
state. And down in FilterScheduler.select_destinations(), we raise an
exception if we couldn't schedule all of the requested instances:
    if len(selected_hosts) < num_instances:
        <snip>
        reason = _('There are not enough hosts available.')
        raise exception.NoValidHost(reason=reason)
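Putting the two pieces together, the all-or-nothing behaviour we observed can be sketched as follows. This is a hedged, self-contained toy model (the function bodies, the host map, and the instance dicts are illustrative, not nova's actual code): one NoValidHost raised for the batch sends every instance in the request to "error", including the ones that had already scheduled successfully.

```python
# Toy model of the batch error handling described above (not nova's code).

class NoValidHost(Exception):
    pass

def select_destinations(hosts_free_ram_mb, flavor_rams_mb):
    """Pick a host for each requested instance; raise if any can't fit.

    Mirrors the FilterScheduler.select_destinations() check quoted above:
    fewer selected hosts than requested instances -> NoValidHost.
    """
    selected = []
    free = dict(hosts_free_ram_mb)
    for ram in flavor_rams_mb:
        host = next((h for h, f in free.items() if f >= ram), None)
        if host is None:
            break
        free[host] -= ram
        selected.append(host)
    if len(selected) < len(flavor_rams_mb):
        raise NoValidHost('There are not enough hosts available.')
    return selected

def build_instances(instances, hosts_free_ram_mb):
    """Mirrors the ComputeTaskManager.build_instances() "except" clause:
    one scheduling failure errors out every instance in the batch."""
    try:
        return select_destinations(
            hosts_free_ram_mb, [inst['ram'] for inst in instances])
    except NoValidHost:
        for inst in instances:
            inst['vm_state'] = 'error'  # even the ones that would have fit
        return None
```

With a single host holding 4096 MB free and ten 512 MB requests, eight would fit, but because the batch as a whole fails, all ten end up in "error".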
Chris
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev