On Wed, 2006-10-25 at 18:15 -0400, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Jeff Davis wrote:
> >> * smgrGetPendingDeletes() calls palloc()
> >> * palloc() fails, resulting in ERROR, causing infinite recursion
>
> > Hmm, maybe we could have AbortTransaction switch to ErrorContext, which
> > has some preallocated space, before calling RecordTransactionAbort.
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Jeff Davis wrote:
>> * smgrGetPendingDeletes() calls palloc()
>> * palloc() fails, resulting in ERROR, causing infinite recursion
> Hmm, maybe we could have AbortTransaction switch to ErrorContext, which
> has some preallocated space, before calling RecordTransactionAbort.
On Wed, 2006-10-25 at 16:20 -0300, Alvaro Herrera wrote:
> Jeff Davis wrote:
> > I found the root cause of the bug I reported at:
> >
> > http://archives.postgresql.org/pgsql-bugs/2006-10/msg00211.php
> >
> > What happens is this:
> > * Out of memory condition causes an ERROR
> > * ERROR triggers an AbortTransaction()
Jeff Davis wrote:
> I found the root cause of the bug I reported at:
>
> http://archives.postgresql.org/pgsql-bugs/2006-10/msg00211.php
>
> What happens is this:
> * Out of memory condition causes an ERROR
> * ERROR triggers an AbortTransaction()
> * AbortTransaction() calls RecordTransactionAbort()
I found the root cause of the bug I reported at:
http://archives.postgresql.org/pgsql-bugs/2006-10/msg00211.php
What happens is this:
* Out of memory condition causes an ERROR
* ERROR triggers an AbortTransaction()
* AbortTransaction() calls RecordTransactionAbort()
* RecordTransactionAbort() calls smgrGetPendingDeletes()
* smgrGetPendingDeletes() calls palloc()
* palloc() fails, resulting in ERROR, causing infinite recursion
Casey Duncan <[EMAIL PROTECTED]> writes:
> I haven't tried increasing the stats target. What would be a suitable
> value to try?
Try 100 (instead of the default 10) ... you can go as high as 1000,
though hopefully that's overkill.
regards, tom lane
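Tom's suggestion above translates into a per-column setting followed by a fresh ANALYZE. A minimal sketch, using the seed.st_id column discussed in this thread (the target value 100 is his suggested starting point, not a tuned figure):

```sql
-- Raise the statistics target for seed.st_id (default was 10 in this era),
-- then re-analyze so the planner sees the larger sample:
ALTER TABLE seed ALTER COLUMN st_id SET STATISTICS 100;
ANALYZE seed;
```

If estimates are still far off, the target can be raised further (up to 1000), at the cost of longer ANALYZE runs and slightly slower planning.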
On Sep 24, 2006, at 8:59 AM, Tom Lane wrote:
Casey Duncan <[EMAIL PROTECTED]> writes:
seed | st_id | 164656
I ran analyze after this, but the results were roughly the same.
What's the statistics target set to, and did you try increasing it?
Can we see the rest of the pg_stats
Casey Duncan <[EMAIL PROTECTED]> writes:
> seed | st_id | 164656
> I ran analyze after this, but the results were roughly the same.
What's the statistics target set to, and did you try increasing it?
Can we see the rest of the pg_stats row for this column (I'm mainly
interested in
I posted that in a subsequent mail, but here it is again:
I'm interested in collecting info on the distribution of data.
Can you post:
select tablename, attname, n_distinct from pg_stats
where attname = 'st_id';
 tablename | attname | n_distinct
-----------+---------+------------
 st
Casey Duncan <[EMAIL PROTECTED]> writes:
> select st_id, min(seed_id) as "initial_seed_id", count(*) as
> "seed_count" from seed group by st_id;
> The query plan and table stats are:
>QUERY PLAN
> --
On Sep 19, 2006, at 1:51 AM, Simon Riggs wrote:
On Mon, 2006-09-18 at 14:08 -0700, Casey Duncan wrote:
I've reported variants of this in the past, but this case is entirely
repeatable.
Executing this query:
select st_id, min(seed_id) as "initial_seed_id", count(*) as
"seed_count"
from seed group by st_id;
On Mon, 2006-09-18 at 14:08 -0700, Casey Duncan wrote:
> I've reported variants of this in the past, but this case is entirely
> repeatable.
>
> Executing this query:
>
> select st_id, min(seed_id) as "initial_seed_id", count(*) as
> "seed_count"
> from seed group by st_id;
>
> The query plan and table stats are:
I've reported variants of this in the past, but this case is entirely
repeatable.
Executing this query:
select st_id, min(seed_id) as "initial_seed_id", count(*) as
"seed_count"
from seed group by st_id;
The query plan and table stats are:
QUERY PLAN
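The plan output above is cut off in the archive. To compare the planner's row estimates against reality for the query in this thread, it can be run under EXPLAIN ANALYZE (a sketch; this executes the query and annotates each node with actual row counts and timings):

```sql
EXPLAIN ANALYZE
SELECT st_id,
       min(seed_id) AS initial_seed_id,
       count(*)     AS seed_count
FROM seed
GROUP BY st_id;
```

A large gap between estimated and actual rows on the aggregate node is the usual sign that the statistics target needs raising.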
Hi,
Thanks for the hint. I decreased work_mem; it's working now.
The dump written to the logfile is:
TopMemoryContext: 40960 total in 5 blocks; 11064 free (37 chunks); 29896 used
TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
Remote Con hash: 8192 total in 1 blocks; 5
"Anita Lederer" <[EMAIL PROTECTED]> writes:
> i have a statement which ends with
> ERROR: out of memory
> DETAIL: Failed on request of size 639.
Do you have work_mem set to a large value? Your query plan contains
several sorts so would potentially try to use several times work_mem
... if th
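Tom's point above is that work_mem is a per-operation limit, not a per-query one: each sort or hash node in a plan may use up to work_mem, so a plan with several such nodes can consume a multiple of it. A sketch of inspecting and lowering it for the current session (the 4MB value is illustrative, not a recommendation):

```sql
SHOW work_mem;         -- current per-sort/per-hash limit
SET work_mem = '4MB';  -- session-local; each sort or hash node may use up to this
```

Setting it in the session is a safe way to experiment before changing the server-wide value in postgresql.conf.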
Hello,
I have a statement which ends with
ERROR: out of memory
DETAIL: Failed on request of size 639.
I tried it several times; the number after "size" changes, but not the outcome.
Could someone tell me what information you need in order to tell me what's wrong?
What I have so far is:
PostgreSQL ver
On Mon, Jun 20, 2005 at 06:54:20PM -0400, Tom Lane wrote:
> "Bill Rugolsky Jr." <[EMAIL PROTECTED]> writes:
> > The PL/pgSQL FOR loop in the function consume_memory() defined below
> > will consume VM on each iteration until the process hits its ulimit.
> > The problem occurs with variables of ROWTYPE; there is no unbounded
> > allocation when using simple types such as integer or varchar.
"Bill Rugolsky Jr." <[EMAIL PROTECTED]> writes:
> The PL/pgSQL FOR loop in the function consume_memory() defined below
> will consume VM on each iteration until the process hits its ulimit.
> The problem occurs with variables of ROWTYPE; there is no unbounded
> allocation when using simple types such as integer or varchar.
Hello,
The PL/pgSQL FOR loop in the function consume_memory() defined below
will consume VM on each iteration until the process hits its ulimit.
The problem occurs with variables of ROWTYPE; there is no unbounded
allocation when using simple types such as integer or varchar. Before I
delve into t
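The consume_memory() definition referenced in the report is cut off in the archive. A hypothetical reconstruction of the kind of loop being described might look like the following; the table name t and the iteration bound are invented for illustration, not taken from the original report:

```sql
-- Hypothetical sketch of the reported pattern: a PL/pgSQL loop that
-- repeatedly assigns to a %ROWTYPE variable. The report says memory
-- grows on each iteration with ROWTYPE, but not with simple types.
CREATE FUNCTION consume_memory(n integer) RETURNS void AS $$
DECLARE
    r t%ROWTYPE;  -- ROWTYPE variable: the reported leak trigger
BEGIN
    FOR i IN 1..n LOOP
        SELECT * INTO r FROM t LIMIT 1;  -- repeated row assignment
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Per the report, swapping r for an integer or varchar variable avoids the unbounded growth, which points at per-iteration row/tuple allocations not being freed until the function exits.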
Logged by: Lutischan Ferenc
Email address: yoursoft ( at ) freemail ( dot ) hu
PostgreSQL version: 7.4.3, 7.4.5
Operating system: Linux: SuSE9.0, SLES9
Description: Out of memory
Details:
Dear Developer Team!
I use PostgreSQL 7.4.5 on SLES9.0 and PostgreSQL 7.4.3 on SuSE9.0
Robert Osowiecki <[EMAIL PROTECTED]> writes:
> While doing some large update on table with over 1 million records:
> HashBatchContext: 360701952 total in 52 blocks; 7158680 free (140
> chunks); 353543272 used
Evidently this hashtable got out of hand :-(
> Query is EXPLAIN-ed as follows:
> Hash
While doing some large update on table with over 1 million records:
update tordspecif set
sp_vat=vv_vat,
sp_vat_opis=vv_vat_opis,
sp_ar_sww=vv_sww
from varticlevat
where sp_az_artsize=vv_artsize
PostgreSQL 8.0.0beta5 on i686-pc-linux-gnu, compiled by GCC 2.96
reported
Greetings,
During testing of our application we ran into a very odd error:
very randomly during the test, the PostgreSQL log file showed "ERROR:
53200: out of memory".
We changed the logging configuration to log statements causing errors
and found the error to be caused by a SELECT;
here is t
On Tue, 8 Jul 2003, Ivan Boscaino wrote:
> Hi,
> I'm not sure if you know this problem.
>
> Create an empty function with 26 parameters:
>
> create function test (INTEGER,TEXT,TEXT,TEXT,CHAR(16),CHAR(11),
> BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,
> BOOLEAN,BOOLEAN,BOOLEAN,CHAR(2),TEXT,TEXT,TEXT,TEXT,
Hi,
I'm not sure if you know this problem.
Create an empty function with 26 parameters:
create function test (INTEGER,TEXT,TEXT,TEXT,CHAR(16),CHAR(11),
BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,BOOLEAN,
BOOLEAN,BOOLEAN,BOOLEAN,CHAR(2),TEXT,TEXT,TEXT,TEXT,
CHAR(5),TEXT,TEXT,TEXT,TEXT,TEXT )
returns i