That was exactly what the process was doing, and the out-of-memory error
happened while one of the merges into set 1 was being executed.

On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera <vi...@khera.org> wrote:

>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> needed to hold relcache entries for all 23000 tables :-(.  If so there
>> may not be any easy way around it, except perhaps replicating subsets
>> of the tables.  Unless you can boost the memory available to the backend
>>
>
> I'd suggest this. Break up your replication into something like 50 sets of
> 500 tables each, then add one at a time to replication, merging it into the
> main set. Something like this:
>
> create & replicate set 1.
> create & replicate set 2.
> merge 2 into 1.
> create & replicate set 3.
> merge 3 into 1.
>
> Repeat until done. This can be scripted.
>
> Given you got about 50% done before it failed, maybe even 4 sets of 6000
> tables each may work out.
>
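For the archives, here is a rough slonik sketch of one iteration of the
create-and-merge loop Vick describes. The cluster name, node ids, conninfo
strings, set/table ids, and table name below are placeholders for
illustration, not taken from our setup; a wrapper script would generate one
such file per batch:

    # One batch: create a temporary set, subscribe it, then fold it into set 1.
    cluster name = mycluster;                           # placeholder cluster name
    node 1 admin conninfo = 'dbname=app host=master';   # origin (placeholder)
    node 2 admin conninfo = 'dbname=app host=replica';  # subscriber (placeholder)

    create set (id = 2, origin = 1, comment = 'batch 2');

    # Repeat for each of the ~500 tables in this batch; table ids must be
    # unique across the whole cluster.
    set add table (set id = 2, origin = 1, id = 501,
                   fully qualified name = 'public.table_501');

    subscribe set (id = 2, provider = 1, receiver = 2, forward = no);

    # The new subscription must be fully active before MERGE SET is allowed,
    # so generate a sync and wait for it to be confirmed first.
    sync (id = 1);
    wait for event (origin = 1, confirmed = all, wait on = 1);

    merge set (id = 1, add id = 2, origin = 1);

Scripting it is then just a loop that emits this stanza for each batch of
tables and runs slonik on it, retrying the merge if slonik complains that the
subscription is not yet active.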



-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
