We use 32-bit Debian Linux, so the addressable space available really seems
to be in the 3.0GB to 3.5GB range. Last night I decreased shared_buffers
from 2GB to 1GB and tried the global ANALYZE again. It ran out of memory
after 3 hours 40 minutes. That database has 12,197 schemas with 22 tables
each, which means 268,334 tables. Should I keep reducing shared_buffers
and trying again? We don't plan to run the ANALYZE every week, so we can
keep shared_buffers high most of the time and lower it only when needed. I
will try again tonight with ~512MB and see what happens. Please let me know
if you have any ideas.
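
One workaround we could also try (just a rough sketch, not something we've
tested; it assumes psycopg2 is installed and that "dbname=mydb" connects to
the database in question) would be to analyze the tables one at a time
instead of running a single database-wide ANALYZE, so memory can be
released between tables:

import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True  # each ANALYZE commits on its own

with conn.cursor() as cur:
    # List every user table, schema-qualified and safely quoted.
    cur.execute("""
        SELECT quote_ident(schemaname) || '.' || quote_ident(relname)
        FROM pg_stat_user_tables
        ORDER BY 1
    """)
    tables = [row[0] for row in cur.fetchall()]

for table in tables:
    with conn.cursor() as cur:
        cur.execute("ANALYZE %s;" % table)

conn.close()

The same loop could be done with a shell script around psql; the point is
simply to avoid one huge ANALYZE over all 268,334 tables at once.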

Thanks again!
Hugo


