Replying to my own mail. Maybe we've found the root cause:

In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoid the
auto-ANALYZE via UPDATE table set by [...]
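The command that suppressed auto-ANALYZE is cut off above. As a hedged sketch on a current PostgreSQL release (the per-table mechanism has changed since 2010; the table and column names here are made up), the same effect is available through storage parameters or a cheaper per-column statistics target:

```sql
-- Hypothetical table "events" with a wide bytea column "payload".

-- Option 1: turn autovacuum (and with it auto-analyze) off for this table only.
ALTER TABLE events SET (autovacuum_enabled = false);

-- Option 2: keep auto-analyze, but sample the wide column far less
-- aggressively than the global statistics target of 500.
ALTER TABLE events ALTER COLUMN payload SET STATISTICS 10;
ANALYZE events;  -- rebuild statistics with the cheaper target
```

Option 1 also stops dead-tuple cleanup for the table, so plain VACUUM/ANALYZE would then need to be scheduled manually.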
Jakub Ouhrabka writes:
> > They clearly were: notice the reference to "Autovacuum context" in the
> > memory map. I think you are right to suspect that auto-analyze was
> > getting blown out by the wide bytea columns. Did you have any
> > expression indexes involving those columns?
>
> Yes, there are two unique btree indexes:
> (
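The index definitions are cut off at the "(". Purely as an illustration (not the poster's actual DDL): a common way to get a unique btree index over a 15kB bytea column is an expression index on a digest of the value, and it is exactly such expressions that ANALYZE has to evaluate for every sampled row, detoasting each wide value:

```sql
-- Illustrative sketch with hypothetical names ("events", "payload"):
-- enforce uniqueness via a digest rather than the raw 15kB value.
CREATE UNIQUE INDEX events_payload_md5_key ON events (md5(payload));
```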
Jakub Ouhrabka writes:
> Could it be that the failed connections were issued by autovacuum?

They clearly were: notice the reference to "Autovacuum context" in the
memory map. I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns. Did you have any
expression indexes involving those columns?
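PostgreSQL keeps separate statistics for expression-index columns, which is why ANALYZE must compute the indexed expressions over the sample. One way to check whether a database has any expression indexes (a diagnostic sketch against the system catalogs):

```sql
-- indexprs is non-NULL only for indexes with at least one expression column.
SELECT i.indexrelid::regclass AS index_name,
       i.indrelid::regclass   AS table_name,
       pg_get_indexdef(i.indexrelid) AS definition
FROM pg_index AS i
WHERE i.indexprs IS NOT NULL;
```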
2010/11/8 Jakub Ouhrabka :
> Date: Mon, 8 Nov 2010 20:05:23 +0100
> From: k...@comgate.cz
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] ERROR: Out of memory - when connecting to database
>
> Replying to my own mail. Maybe we've found the root cause:
>
> In one database there was a table with 200k records where each record
> contained a 15kB bytea field. Auto-ANALYZE was running on that table
> continuously (with statistics target 500). When we avoid the
> auto-ANALYZE via UPDATE table set by [...]
> > > what's the work_mem?
> >
> > 64MB
>
> that's *way* too much with 24GB of ram and > 1k connections. please
> lower it to 32MB or even less.

Thanks for your reply. You are generally right. But in our case most of
the backends are only waiting for notify so not consuming any work_mem.
The server is not swapping [...]
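Both points can hold at once: work_mem is a per-sort/per-hash limit, so the worst case is roughly connections × work_mem per concurrent operation (1000 × 64MB is far beyond 24GB of RAM), yet backends idling on LISTEN/NOTIFY consume none of it. A middle ground, sketched for a modern release (ALTER SYSTEM postdates this 2010 thread; the role name is hypothetical), is a low global default with targeted overrides:

```sql
-- Conservative cluster-wide default (requires a configuration reload):
ALTER SYSTEM SET work_mem = '32MB';

-- Raise it only for sessions that actually sort/hash large data sets:
ALTER ROLE reporting SET work_mem = '256MB';

-- Or just for one expensive query in the current session:
SET work_mem = '256MB';
```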
On Mon, Nov 08, 2010 at 08:04:32PM +0100, Jakub Ouhrabka wrote:
> > is it 32bit or 64bit machine?
>
> 64bit
>
> > what's the work_mem?
>
> 64MB
that's *way* too much with 24GB of ram and > 1k connections. please
lower it to 32MB or even less.
Best regards,
depesz
--
Linkedin: http://www.lin
On 8.11.2010 19:52, hubert depesz lubaczewski wrote:
> is it 32bit or 64bit machine?

64bit

> what's the work_mem?

64MB

Kuba
Hi,

we have several instances of the following error in the server log:

2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.

It's always the first log message from the backend. We're trying to
trace it down. Whether it's al[...]
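An "out of memory" ERROR is accompanied in the server log by a memory-context statistics dump, which is what later points at the "Autovacuum context" in this thread. On releases from PostgreSQL 10 on (a hedged aside; this column did not exist in 2010), one can also watch directly whether the affected backends are autovacuum workers:

```sql
-- Show active autovacuum workers and what they are processing:
SELECT pid, backend_type, query
FROM pg_stat_activity
WHERE backend_type = 'autovacuum worker';
```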