Right now we are purging old LO objects because our production system ran
out of memory:
Mem: 41154296k total, 40797560k used, 356736k free, 15748k buffers
Swap: 16777208k total, 1333260k used, 15443948k free, 35304844k cached
SELECT count(*) FROM pg_largeobject;
  count
----------
 52614842
(1 row)
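
(For context, a minimal sketch of one way to purge orphaned large objects, assuming the contrib utility vacuumlo and a placeholder database name "mydb"; not necessarily the exact procedure in use here:)

vacuumlo -v -n mydb    # dry run: report orphaned large objects without removing anything
vacuumlo -l 1000 mydb  # remove them, committing after every 1000 deletions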
Tomas Vondra writes:
> On 03/21/2018 02:18 PM, Jaime Soler wrote:
>> Hi,
>>
>> We still get an out-of-memory error during pg_dump execution
>> ...
>> pg_dump: reading row security enabled for table "public.lo_table"
>> pg_dump: reading policies for table "public.lo_table"
>> pg_dump: reading publications
>> pg_dump: reading public
>> ...
>> pg_dump: reading large objects
>> out of memory
>
> Hmm ... that likely happens because of this for loop copying a lot of
> data:
>
> https://github.com/postgres/postgre
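
(One possible mitigation, sketched under the assumption of PostgreSQL 10 or later, where pg_dump has a --no-blobs switch; "mydb" and the file name are placeholders:)

# Dump schema and table data without large objects, so pg_dump need not
# track tens of millions of blob entries in memory
pg_dump --no-blobs -Fc -f mydb_no_lo.dump mydb
# Large objects can then be exported separately in batches, e.g. a script
# driving \lo_export over the OIDs listed in pg_largeobject_metadata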
Hi,
We still get an out-of-memory error during pg_dump execution
bin$ ./initdb -D /tmp/test
The files belonging to this database system will be owned by user "jsoler".
This user must also own the server process.
The database cluster will be initialized with locale "es_ES.UTF-8".
The default databas
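
(A hypothetical way to reproduce on such a throwaway cluster: create a large number of empty large objects and point pg_dump at them. lo_creat's mode argument is historical and ignored:)

createdb test
psql -d test -c "SELECT lo_creat(-1) FROM generate_series(1, 1000000);"
pg_dump -Fc -f /dev/null test   # "reading large objects" now has a million entries to track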
On 17 March 2018 at 00:47, Tom Lane wrote:
> Amit Khandekar writes:
>> If the SELECT target list expression is a join subquery, and if the
>> subquery does a hash join, then the query keeps on consuming more and
>> more memory. Below is such a query:
>
> Thanks for the report!
>
> I dug into this with valgrind, and found that the problem is that
> Exe
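
(The reporter's query itself was truncated out of the archive; the following is a hypothetical query of the shape described, a correlated target-list subquery whose inner join the planner can run as a hash join:)

SET enable_mergejoin = off;  -- nudge the planner toward a hash join
SET enable_nestloop = off;
SELECT (SELECT count(*)
        FROM generate_series(1, 1000) b
        JOIN generate_series(1, 1000) c ON b = c
        WHERE b <> a)  -- correlation re-runs the subquery, and its hash join, once per outer row
FROM generate_series(1, 100000) a;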