[GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka
Hi, we have several instances of the following error in the server log:

2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.

It's always the first log message from the backend. We're trying to trace it down. Whether it's al…
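A first step in tracing failures like this is usually to check the instance's memory settings and backend count; a minimal sketch using only standard commands (the values to expect are not from this thread):

    -- run in any session on the affected cluster
    SHOW work_mem;                          -- per-sort/per-hash memory budget
    SHOW shared_buffers;                    -- shared cache size
    SELECT count(*) FROM pg_stat_activity;  -- how many backends are connected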

Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka
> is it 32bit or 64bit machine?
64bit

> what's the work_mem?
64MB

Kuba

On 8.11.2010 19:52, hubert depesz lubaczewski wrote: On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote: Hi, we have several instances of the following error in the server log: 2010-11-08 18:44:18…
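Both answers can be confirmed directly on the server; a quick sketch (standard commands only):

    SELECT version();  -- the build string shows the architecture, e.g. x86_64
    SHOW work_mem;     -- reports the per-operation budget, 64MB here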

Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka
…e SET bytea_column = NULL; CLUSTER table; the problem with ERROR: out of memory went away. Could it be that the failed connections were issued by autovacuum? Thanks, Kuba

On 8.11.2010 19:19, Jakub Ouhrabka wrote: Hi, we have several instances of the following error in the server log: 2010-11-08…
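The start of the workaround is cut off by the archive; a hedged reconstruction of the pattern being described, with placeholder names for the table, column, and index:

    UPDATE some_table SET bytea_column = NULL;   -- discard the wide values
    CLUSTER some_table USING some_table_pkey;    -- rewrite the table on disk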

Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka
>> what's the work_mem? 64MB
> that's *way* too much with 24GB of ram and >1k connections. please lower it to 32MB or even less.

Thanks for your reply. You are generally right. But in our case most of the backends are only waiting for notify, so they are not consuming any work_mem. The server is not swapping…
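The sizing objection follows from worst-case arithmetic: work_mem is a per-sort/per-hash budget, not a per-server cap, so 1000 backends each running a single 64MB sort could in principle ask for roughly 64GB, far beyond 24GB of RAM. A sketch of the suggested change (session-level here; the permanent setting lives in postgresql.conf):

    SET work_mem = '32MB';  -- affects only the current session
    -- cluster-wide: set "work_mem = 32MB" in postgresql.conf and reload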

Re: [GENERAL] ERROR: Out of memory - when connecting to database

2010-11-08 Thread Jakub Ouhrabka
> They clearly were: notice the reference to "Autovacuum context" in the
> memory map. I think you are right to suspect that auto-analyze was
> getting blown out by the wide bytea columns. Did you have any
> expression indexes involving those columns?

Yes, there are two unique btree indexes: (…
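The index definitions are truncated; a hedged sketch of what a unique expression index over a bytea column can look like, with hypothetical names (it assumes an md5(bytea) function is available in the server version). The relevance is that analyze evaluates index expressions for every sampled row, which with very wide bytea values can need a lot of memory:

    -- hypothetical reconstruction, not the poster's actual definition
    CREATE UNIQUE INDEX some_table_digest_key
        ON some_table ((md5(bytea_column)));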

[GENERAL] vacuum: out of memory error

2006-11-24 Thread Jakub Ouhrabka
Hi all, I have a few of these messages in the server log:

ERROR: out of memory
DETAIL: Failed on request of size 262143996.
STATEMENT: VACUUM ANALYZE tablename

There are a few of them, always the same request size(?), but in two different databases (out of 100+) and a few different tables (pg_listener, …
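The request size is a telltale: 262143996 is exactly divisible by 6 (the size of a tuple pointer) and sits just under 262144000 bytes, i.e. a maintenance_work_mem of 256000kB, which fits VACUUM allocating its dead-tuple array in one contiguous chunk sized from that setting. A hedged first check (the 256000kB figure is an inference from the request size, not stated in the thread):

    SHOW maintenance_work_mem;  -- VACUUM sizes its dead-tuple list from this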

Re: [GENERAL] vacuum: out of memory error

2006-11-28 Thread Jakub Ouhrabka
…144000 What is the cause of the error? A contiguous block of this size can't be allocated? So maybe there's no corruption - what do you think? Regards, Kuba

Andrew Sullivan wrote: On Fri, Nov 24, 2006 at 11:59:16AM +0100, Jakub Ouhrabka wrote: I've done a little research in mai…
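If a single allocation sized from maintenance_work_mem is what fails, a hedged mitigation is to lower it just for the maintenance session (the value is illustrative):

    SET maintenance_work_mem = 131072;  -- 128MB in kB units; a smaller
                                        -- contiguous allocation for VACUUM
    VACUUM ANALYZE tablename;
    RESET maintenance_work_mem;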

[GENERAL] triggers and plpgsql

2001-08-03 Thread Jakub Ouhrabka
hi, i'm getting strange results when executing the code below. i would expect that li_count in function foo and the select after calling this function should return the same values. can anyone explain why i'm getting these results, please? thanks, kuba

example (using 7.1.2): CREATE TABLE TC01 ( T…
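The example is cut off at the table definition, so the exact code is unrecoverable; a hedged, modern-syntax sketch of one classic way to get a mismatch like this: inside a trigger, the row that fired it is not yet visible to queries, so a count taken in the function body differs from a count taken after the statement completes. Everything beyond the table name TC01 is hypothetical, and 7.1 would have needed RETURNS opaque with a single-quoted body instead:

    CREATE TABLE TC01 (tc01id integer);

    CREATE FUNCTION foo() RETURNS trigger AS $$
    DECLARE
        li_count integer;
    BEGIN
        SELECT count(*) INTO li_count FROM TC01;
        RAISE NOTICE 'count inside trigger: %', li_count;  -- new row not seen
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER tc01_before BEFORE INSERT ON TC01
        FOR EACH ROW EXECUTE PROCEDURE foo();

    INSERT INTO TC01 VALUES (1);  -- NOTICE reports 0
    SELECT count(*) FROM TC01;    -- returns 1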

[GENERAL] very slow delete

2001-09-03 Thread Jakub Ouhrabka
hi, i'm trying to tune some batches, and after some research i located the biggest problem in doing something like this:

begin;
update ts08 set ts08typ__ = …;
delete from ts08;
end;

the update takes about 1m25s (there are approx. 7 rows in ts08), but the delete then takes more than 20 minut…
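If the intent of the batch is simply to empty ts08, the UPDATE step only hurts: it writes a new version of every row, and the DELETE must then process all of those freshly updated rows, plus fire any per-row triggers or foreign-key checks on the table. A hedged sketch of alternatives, assuming no other session needs the intermediate state:

    -- skip the UPDATE if the rows are being deleted anyway
    BEGIN;
    DELETE FROM ts08;
    COMMIT;

    -- or, outside a transaction block on old releases, reclaim storage at once
    TRUNCATE TABLE ts08;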