On 08/19/2016 11:41 AM, Derek Higgins wrote:
> On 19 August 2016 at 00:07, Sagi Shnaidman <sshna...@redhat.com> wrote:
>> Hi,
>> we have a problem again with not enough memory in the HA jobs; all of
>> them constantly fail in CI: http://status-tripleoci.rhcloud.com/
>
> Have we any idea why we need more memory all of a sudden? For months
> the overcloud nodes have had 5G of RAM, then last week [1] we bumped it
> to 5.5G; now we need it bumped to 6G.
>
> If a new service has been added that is needed on the overcloud, then
> bumping to 6G is expected and probably the correct answer, but I'd like
> to see us avoid blindly increasing the resources each time we see
> out-of-memory errors, without investigating whether there was a
> regression causing something to start hogging memory.
FWIW, one recent addition was the cinder-backup service.

This service wasn't enabled by default in Mitaka though, so with [1] we
can disable it by default for Newton as well.
1. https://review.openstack.org/#/c/357729
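For anyone curious, the mechanism is roughly the sketch below: the composable
service is mapped to OS::Heat::None in the default resource_registry so it
isn't deployed, and an optional environment file maps it back to the real
service template for deployments that want it. The file names here are
illustrative; see the review above for the actual change.

  # default resource registry entry: cinder-backup not deployed
  resource_registry:
    OS::TripleO::Services::CinderBackup: OS::Heat::None

  # environments/cinder-backup.yaml (illustrative) - include to enable it
  resource_registry:
    OS::TripleO::Services::CinderBackup: ../puppet/services/pacemaker/cinder-backup.yaml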
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente