Carlos Henrique Reimer <carlos.rei...@opendb.com.br> writes:
> I've been facing an out-of-memory condition after running SLONY for
> several hours to replicate a 1TB database with about 23,000 tables. The
> error occurs after about 50% of the tables have been replicated.

I'd try bringing this up with the Slony crew.

> I guess PostgreSQL is trying to perform an atomic allocation (one which
> cannot wait for reclaim) to get a contiguous memory area and is failing
> due to memory fragmentation.

This theory has nothing to do with reality.  More likely it's just a
garden-variety memory leak.  If it were an out-of-memory error reported
by Postgres, there should have been a memory statistics dump written in
the postmaster log --- can you find that and post it?
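
For reference, the dump is a per-context memory breakdown written just
before the ERROR line.  From memory it looks roughly like the excerpt
below --- the context names are real, but every number here is invented
purely for illustration:

    TopMemoryContext: 49816 total in 2 blocks; 3448 free (6 chunks); 46368 used
      CacheMemoryContext: 8192000 total in 9 blocks; 134960 free (1 chunks); 8057040 used
      ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used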

Another possible theory is that you're just looking at lots of memory
needed to hold relcache entries for all 23,000 tables :-(.  If so there
may not be any easy way around it, except perhaps replicating subsets
of the tables --- unless you can boost the memory available to the
backend.  Since this is a 64-bit build, the only reason I can see for
out-of-memory failures would be a restrictive ulimit setting.
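
To get a feel for how many relcache entries are in play, a rough count
(just a sketch; the per-entry cost varies a lot with column and index
counts, from a few KB on up) would be:

    -- count the relations a backend could end up caching; tables ('r')
    -- and their indexes ('i') each get a relcache entry once touched
    SELECT relkind, count(*)
    FROM pg_class
    WHERE relkind IN ('r', 'i')
    GROUP BY relkind;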

> After SLONY hits the out-of-memory condition, a select * from the table
> also fails:
> FiscalWeb=# select * from "8147_spunico"."sincdc";
> ERROR:  out of memory
> DETAIL:  Failed on request of size 268435456.

That's odd ... looks more like data corruption than anything else.
Does this happen even in a fresh session?  What do you have to do
to get rid of the failure?
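
If it does persist, one way to narrow it down (a sketch, assuming a
single damaged value is to blame): count(*) doesn't detoast any column
data, so it can succeed where select * dies, and then selecting the
columns one at a time should finger the bad one.  The column name below
is hypothetical:

    -- reads tuple headers only, no column detoasting
    SELECT count(*) FROM "8147_spunico"."sincdc";
    -- repeat per column until one of them errors out
    SELECT some_column FROM "8147_spunico"."sincdc";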

>  PostgreSQL 8.3.21 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
> 4.4.6 20120305 (Red Hat 4.4.6-4)

You realize, of course, that this version is years out of support, and
that even if this problem traces back to a bug in Postgres, 8.3 is not
going to get fixed.

                        regards, tom lane

