On 03/29/2018 01:19 AM, Chris Dent wrote:
> On Wed, 28 Mar 2018, iain MacDonnell wrote:
>> Looking for recommendations on tuning of nova-placement-api. I have a
>> few moderately-sized deployments (~200 nodes, ~4k instances),
>> currently on Ocata, and instance creation is getting very slow as they
>> fill up.
> This should be well within the capabilities of an appropriately
> installed placement service, so I reckon something is weird about
> your installation. More within.
>> $ time curl http://apihost:8778/
>> {"error": {"message": "The request you have made requires
>> authentication.", "code": 401, "title": "Unauthorized"}}
>> real 0m20.656s
>> user 0m0.003s
>> sys 0m0.001s
> This is a good choice for trying to determine what's up because it
> avoids any interaction with the database and most of the stack of
> code: the web server answers, runs a very small percentage of the
> placement python stack and kicks out the 401. So this mostly
> indicates that socket accept is taking forever.
Well, this test connects and gets a 400 immediately:
echo | nc -v apihost 8778
so I don't think it's at the socket level, but, I assume, the actual
WSGI app, once the socket connection is established. I did try to choose
a test that tickles the app, but doesn't "get too deep", as you say.
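One way to split the difference (a sketch; assumes curl on the client, and apihost as in the earlier test): curl's -w timers report the TCP connect time separately from time-to-first-byte, so a fast connect followed by a slow first byte points at the WSGI app rather than socket accept:

$ curl -s -o /dev/null \
    -w 'connect: %{time_connect}s  first-byte: %{time_starttransfer}s\n' \
    http://apihost:8778/

If connect stays in the low milliseconds while first-byte takes the full ~20s, the wait is in the daemon processes.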
nova-placement-api is running under mod_wsgi with the "standard"(?)
config, i.e.:
> Do you recall where this configuration comes from? The settings for
> WSGIDaemonProcess are not very good and if there is some packaging
> or documentation that is setting things this way it would be good to
> find it and fix it.
Good question. I could have sworn it was in the installation guide, but
I can't find it now. It must have come from RDO, i.e.:
https://github.com/rdo-packages/nova-distgit/blob/rpm-master/nova-placement-api.conf
> Depending on what else is on the host running placement I'd boost
> processes to number of cores divided by 2, 3 or 4 and boost threads to
> around 25. Or you can leave 'threads' off and it will default to 15
> (at least in recent versions of mod_wsgi).
>
> With the settings as below you're basically saying that you want to
> handle 3 connections at a time, which isn't great, since each of
> your compute nodes wants to talk to placement multiple times a
> minute (even when nothing is happening).
Right, that was my basic assessment too.... so now I'm trying to figure
out how it should be tuned, but had not been able to find any
guidelines, so thought of asking here. You've confirmed that I'm on the
right track (or at least "a" right track).
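For concreteness, a sketch of a tuned stanza along the lines Chris suggests; the 8-core host and the resulting numbers are assumptions to adjust per deployment:

# assuming an 8-core host dedicated to placement: 8 cores / 2 = 4 processes
WSGIDaemonProcess nova-placement-api processes=4 threads=25 user=nova group=nova

(Leaving threads off entirely would default to 15 on recent mod_wsgi.)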
> Tweaking the number of processes versus the number of threads
> depends on whether it appears that the processes are cpu or I/O
> bound. More threads helps when things are I/O bound.
Interesting. Will keep that in mind. Thanks!
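One quick way to check, a sketch assuming the daemon processes run as user nova per the config below:

$ top -b -n 1 -u nova

Daemon processes pinned near 100% CPU suggest adding more processes; processes mostly idle while requests still queue suggest I/O-bound, favouring more threads.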
>> ...
>> WSGIProcessGroup nova-placement-api
>> WSGIApplicationGroup %{GLOBAL}
>> WSGIPassAuthorization On
>> WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
>> WSGIScriptAlias / /usr/bin/nova-placement-api
>> ...
[snip]
>> Other suggestions? I'm looking at things like turning off
>> scheduler_tracks_instance_changes, since affinity scheduling is not
>> needed (at least so far), but not sure that that will help with
>> placement load (seems like it might, though?)
> This won't impact the placement service itself.
It seemed like it might be causing the compute nodes to make calls to
update allocations, so I was thinking it might reduce the load a bit,
but I didn't confirm that. This was "clutching at straws" - hopefully I
won't need to now.
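For reference, turning it off would be a one-line nova.conf change on the scheduler host. A sketch, using the legacy option name as on this Ocata deployment (newer releases rename it to track_instance_changes under [filter_scheduler]):

[DEFAULT]
scheduler_tracks_instance_changes = False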
> A while back I did some experiments with trying to overload
> placement by using the fake virt driver in devstack and wrote it up
> at https://anticdent.org/placement-scale-fun.html
>
> The gist was that with a properly tuned placement service it was
> other parts of the system that suffered first.
Interesting. Thanks for sharing that!
~iain
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators