On 02/02/2017 11:16 AM, Matthew Treinish wrote:
<snip>
> <oops, forgot to finish my thought>
>
> We definitely aren't saying running a single worker is how we recommend
> people run OpenStack by doing this. But it just adds on to the differences
> between the gate and what we expect things actually look like.
I'm all for actually getting to the bottom of this, but honestly, real memory profiling is needed here.

The growth across projects probably means that some common libraries account for part of this; the ever-growing requirements list is demonstrative of that. Code reuse is good, but if we are importing most of a library to get access to a couple of functions, we're going to take on a bunch of memory weight for that (especially if that library has friendly auto imports in its top-level __init__.py, so we can't pull in only the parts we want). Changing the worker count is just shuffling around deck chairs.

I'm not familiar enough with the memory profiling tools in Python to know the right approach we should take to get this down to the individual libraries / objects that are holding all our memory. Anyone more skilled here able to help lead the way?

	-Sean

-- 
Sean Dague
http://dague.net

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
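[Editor's note: for the "which tools" question above, one possible starting point is the stdlib tracemalloc module (Python 3.4+), which can attribute live allocations back to the source file, and therefore roughly the library, that made them. This is a minimal sketch, not an approach the thread settled on; the use of json here is only a stand-in for a real heavy dependency.]

```python
# Sketch: attribute allocations to individual modules with tracemalloc.
# `json` below merely stands in for a suspect library; in a real run you
# would start tracing, then import/exercise the service under test.
import tracemalloc

tracemalloc.start()

import json
data = [json.loads('{"a": 1}') for _ in range(10000)]

snapshot = tracemalloc.take_snapshot()

# Grouping by 'filename' rolls allocations up per source file, which maps
# closely onto "which library/module is holding all our memory".
for stat in snapshot.statistics('filename')[:10]:
    print(stat)
```

Grouping by 'lineno' or 'traceback' instead gives progressively finer detail, at the cost of more tracing overhead while the process runs.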