Lowering shared_buffers lets the process get a little further, but I've reached
the minimum (128kB with max_connections = 2) without reaching the end.
Is there any chance of going below the 128kB limit?
Or do I need to break this process into two smaller parts (not easy for me)?
The procedure is create_accessors_methods in the dbi_link package, which you
can find at:
http://pgfoundry.org/projects/dbi-link/
I've slightly modified the code to adapt it better to Oracle.
Basically, it is a procedure that builds a large number of views and tables
based on remote objects (synonyms, in my case).
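
For illustration only, one of the generated views might look roughly like the
sketch below. remote_query() is a hypothetical stand-in for whatever dbi_link
call actually runs the statement on the Oracle side, and EMP_SYN and its
columns are made-up names; this is not the verified dbi_link API.

-- Purely illustrative sketch of the kind of view such a procedure generates.
-- remote_query() is a hypothetical helper, not a real dbi_link function.
CREATE VIEW emp_syn AS
SELECT *
FROM remote_query('SELECT empno, ename, sal FROM EMP_SYN')
     AS t(empno integer, ename text, sal numeric);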
On 30/12/2009 6:35 PM, Nicola Farina wrote:
> Hello
> I am using PostgreSQL 8.3.7, compiled by Visual C++ build 1400 under
> win32, on a PC with 2 GB of RAM.
> I need to use a long-running plperlu stored procedure that seems to make
> PostgreSQL consume a lot of memory, to the point where it crashes.
Can
Hello
I am using PostgreSQL 8.3.7, compiled by Visual C++ build 1400 under
win32, on a PC with 2 GB of RAM.
I need to use a long-running plperlu stored procedure that seems to make
PostgreSQL consume a lot of memory, to the point where it crashes.
I have a log with these messages:
<<
Out of m
Tom Lane wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
>
>> Have you run ANALYZE recently? You might be running into the well-known
>> problem that hashed aggregation can consume an arbitrary amount of
>> memory -- posting the EXPLAIN for the query would confirm that.
>
> It would be usef
> Have you run ANALYZE recently? You might be running into the well-known
> problem that hashed aggregation can consume an arbitrary amount of
> memory -- posting the EXPLAIN for the query would confirm that.
>
> -Neil

Yes, I ran VACUUM ANALYZE VERBOSE and then ran the query,
and finally got the o
Neil Conway <[EMAIL PROTECTED]> writes:
> Have you run ANALYZE recently? You might be running into the well-known
> problem that hashed aggregation can consume an arbitrary amount of
> memory -- posting the EXPLAIN for the query would confirm that.
It would be useful to confirm whether this behavi
laser wrote:
> SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
> desc limit 1000;
> and the query ran out of memory; the log file is attached.
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
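
A minimal sketch of those two steps, using the table and query exactly as they
appear in the original post (the url and ct columns are assumed to be part of
the full, truncated table definition):

ANALYZE ctrraw;

EXPLAIN
SELECT url, sum(ct) AS ctperkw
FROM ctrraw
GROUP BY url
ORDER BY ctperkw DESC
LIMIT 1000;

-- If the plan shows a HashAggregate with a badly underestimated group count,
-- one possible workaround (a guess, not something confirmed in this thread) is:
-- SET enable_hashagg = off;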
Hi,
we are using PostgreSQL to analyze our web log; we have a 6M table,
and while running the query:
SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;
the table structure is:
CREATE TABLE ctrRAW
(
cdate date,
ip inet,
kw varchar(128),
prd varchar(6),
pos int,
Pruteanu Dragos wrote:
> Hi all,
> I am running Postgres on a machine with
> 4G of memory.
> When I run
> dbvlm=> SELECT u.email, g.email FROM dom_user u,
> shared_buffers = 20
> sort_mem = 819200
> vacuum_mem = 819200
What process led you to choose these values? Do you
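
For context on the units (assuming a 7.x/8.0-era server, given the
sort_mem/vacuum_mem parameter names): sort_mem is expressed in kilobytes and
applies to each sort or hash operation separately, so 819200 is roughly 800 MB
per operation and a single query can run several at once, while shared_buffers
here is a count of 8 kB buffers, so 20 is only about 160 kB. A hedged
per-session experiment with a smaller value might look like this (65536, i.e.
64 MB, is purely an illustrative figure):

SET sort_mem = 65536;   -- 64 MB per sort/hash operation; illustrative value only
SHOW sort_mem;          -- confirm the per-session setting before re-running the query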
Hi all,
I am running Postgres on a machine with
4G of memory.
When I run
dbvlm=> SELECT u.email, g.email FROM dom_user u,
dom_member m, dom_group g
dbvlm-> WHERE u.userid=m.userid and
m.groupid=g.groupid and g.iso_language='de' and
dbvlm-> m.type='n' limit 1000;
ERROR: out of memory
DETAIL: Fa
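
Echoing the EXPLAIN advice given elsewhere in this thread, a minimal way to see
how the planner intends to execute the failing query (statement reproduced from
the post, with the psql prompts stripped):

EXPLAIN
SELECT u.email, g.email
FROM dom_user u, dom_member m, dom_group g
WHERE u.userid = m.userid
  AND m.groupid = g.groupid
  AND g.iso_language = 'de'
  AND m.type = 'n'
LIMIT 1000;
-- Plain EXPLAIN only plans the query without executing it, so it will not
-- itself run out of memory; the plan shows which join and sort methods are chosen.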