As I understand it, the behaviour *should* be that any active nova-conductor or
nova-scheduler can process any outstanding work item pulled from the RPC queue.
I don't think nova-conductor and nova-scheduler need to be co-located.
I think you might have found a bug, though... I'm not an expert in this area of
the code, but I didn't see any checks for whether components other than
nova-compute are disabled.
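To illustrate what I mean (a rough, hypothetical sketch only, not actual nova
code): a disabled scheduler would need to consult its own service record, along
the lines below, before taking work off the queue. The novaclient calls are
real; the guard itself is the part I couldn't find.

    # Hypothetical sketch: the "am I disabled?" guard a scheduler would
    # need before pulling queued work. Not actual nova code.
    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://controller1:5000/v3",  # substitute your endpoint
        username="admin", password="secret",
        project_name="admin",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth))

    def service_is_disabled(host, binary):
        # One record per (host, binary) pair, as in `nova service-list`.
        for svc in nova.services.list(host=host, binary=binary):
            return svc.status == "disabled"
        return False

    if service_is_disabled("controller2", "nova-scheduler"):
        pass  # skip or requeue the work item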
Chris
On 05/08/2017 04:54 AM, Masha Atakova wrote:
Hi everyone,
I have a setup with 2 controllers and 2 compute nodes. For testing purposes, I
want to make sure that when I send a request to launch a new instance, it is
processed by a particular scheduler. For this I have several options:
1) ssh to the controller whose scheduler I don't want to use and power it down
2) disable the scheduler using the nova api
Option #2 seems much cleaner and more effective to me (less time needed to get
the service up and running again), but it looks like I'm missing something very
important in how nova disables a service.
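For reference, this is roughly what I run for option #2 via python-novaclient
(a minimal sketch; the credentials are placeholders for my real ones):

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://controller1:5000/v3",  # placeholder endpoint
        username="admin", password="secret",
        project_name="admin",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth))

    # Equivalent of: nova service-disable controller2 nova-scheduler --reason Test
    nova.services.disable_log_reason("controller2", "nova-scheduler", "Test")

    # To bring it back afterwards:
    # nova.services.enable("controller2", "nova-scheduler")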
My `nova service-list` gives me the following:
+----+------------------+-------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary           | Host        | Zone     | Status   | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------+----------+----------+-------+----------------------------+-----------------+
| 25 | nova-consoleauth | controller1 | internal | enabled  | up    | 2017-05-08T08:46:09.000000 | -               |
| 28 | nova-consoleauth | controller2 | internal | disabled | up    | 2017-05-08T08:46:10.000000 | -               |
| 31 | nova-scheduler   | controller1 | internal | enabled  | up    | 2017-05-08T08:46:14.000000 | -               |
| 34 | nova-scheduler   | controller2 | internal | disabled | up    | 2017-05-08T08:46:17.000000 | Test            |
| 37 | nova-conductor   | controller1 | internal | enabled  | up    | 2017-05-08T08:46:13.000000 | -               |
| 46 | nova-conductor   | controller2 | internal | disabled | up    | 2017-05-08T08:46:13.000000 | Test            |
| 55 | nova-compute     | compute1    | nova     | enabled  | up    | 2017-05-08T08:46:10.000000 | -               |
| 58 | nova-compute     | compute2    | nova     | enabled  | up    | 2017-05-08T08:46:16.000000 | -               |
+----+------------------+-------------+----------+----------+-------+----------------------------+-----------------+
But when I run a request for a new instance (from the python client, the command
line, or Horizon), I see in the logs that the nova-scheduler on controller2
processes that request about half of the time, the same behavior as if I hadn't
disabled it at all.
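To quantify that, I boot a batch of test instances with a loop like the one
below (a sketch; it assumes the `nova` client from above and that "cirros" and
"m1.tiny" exist in my test environment), then grep each controller's
nova-scheduler log for the printed instance IDs:

    # Sketch of my reproduction loop.
    flavor = nova.flavors.find(name="m1.tiny")
    image = nova.glance.find_image("cirros")

    for i in range(10):
        server = nova.servers.create(
            name="sched-test-%d" % i, image=image, flavor=flavor)
        print(server.id)  # grep both schedulers' logs for these IDs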
I've read some of the code and noticed that each nova-conductor is always paired
up with the scheduler on the same controller node, so I disabled nova-conductor
on controller2 as well, which didn't change anything.
While I'm going through the nova-api code, could you please help me understand
whether this is correct behavior or a bug?
Thanks in advance for your time.
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators