On Sat, 2005-02-05 at 11:25, Dirk Lutzebaeck wrote:
> Hi,
>
> here is a query which produces over 1G temp file in pgsql_tmp. This
> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB
> sort_mem and 320MB shared_mem.
First step, upgrade to the latest 7.4.x version. 7.4.2 is an OLD
release; the later 7.4.x point releases fix a number of known bugs.
He is probably using IPOT (IP Over Time): http://kadreg.free.fr/ipot/ :-)
(sorry, the page is in French only)
On Oct 27, 2006, at 16:33, Bricklen Anderson wrote:
Merlin Moncure wrote:
On 2/5/05, Dirk Lutzebaeck <[EMAIL PROTECTED]> wrote:
Was the original message actually from 2/5/05?
Hi,
I'm sorry, but it looks like my computer has resent older posts from me,
sigh...
Dirk
Alexander Staubo wrote:
While I can't explain why PostgreSQL would use that much memory, I
recommend looking into tweaking the work_mem parameter. This setting
specifies how much memory PostgreSQL may use for certain temporary data
structures (hash tables, sort vectors) until it starts using
temporary files. Quoting the docs:

work_mem (integer)
    Specifies the amount of memory to be used by internal sort
    operations and hash tables before switching to temporary disk files.
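A minimal sketch of that tuning step (the 512MB figure is illustrative,
not from the thread; note that on 7.4 the parameter is still called
sort_mem, and work_mem replaced it in 8.0):

-- Raise the per-sort memory budget for the current session only,
-- re-run the problem query, and check whether the sort still spills
-- to pgsql_tmp. The unit suffix needs 8.2+; on older releases give a
-- plain number of kilobytes instead.
SET work_mem = '512MB';
-- EXPLAIN ANALYZE <the problem query>;
RESET work_mem;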
Merlin Moncure wrote:
On 2/5/05, Dirk Lutzebaeck <[EMAIL PROTECTED]> wrote:
Was the original message actually from 2/5/05?
Hi,
here is a query which produces over 1G temp file in pgsql_tmp. This
is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB
sort_mem and 320MB shared_mem.
Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All
tables have been analyzed before.
Can someone please explain why the temp file is so huge? I understand
there are a lot of rows.
I'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and
then ANALYZE.
I'd suggest once an hour on any resonably active database...
Chris
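As a sketch of what that maintenance amounts to (bi and df are the two
tables from the query; the per-table form shown here is just one way to
schedule it):

-- Reclaim dead rows and refresh the planner's statistics in one pass.
VACUUM ANALYZE bi;
VACUUM ANALYZE df;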
Tom,
the original query has more output columns; I reduced it for readability.
Specifically, it returns a persistent object (flatobj column) which needs
to be processed by the application as the returned result. The problem
of the huge sort space usage seems to be that the flatobj is part of the
rows being sorted.
Hi John,
thanks very much for your analysis. I'll probably need to reorganize
some things.
Regards,
Dirk
Greg Stark wrote:
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually but the combination is inaccurate then you have a more
difficult problem.
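The kind of probe Greg describes, sketched with the one literal we know
from the thread (other columns would be tested the same way, with real
values substituted):

-- Compare the planner's "rows=" estimate against the actual count;
-- a large mismatch on a single-column condition means that column's
-- statistics are off.
EXPLAIN ANALYZE SELECT count(*) FROM bi WHERE rc = 130170467;
-- repeat per filtered column, e.g.:
-- EXPLAIN ANALYZE SELECT count(*) FROM df WHERE val_9 = <real value>;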
John,
I'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and
then ANALYZE.
Dirk
Greg,
Thanks for your analysis, but things don't get any better after bumping
the STATISTICS target from 10 to 200.
explain analyze shows that the optimizer is still way off estimating the
rows. Is this normal? It still produces a 1 GB temp file.
I simplified the query a bit; now only two tables are involved.
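For reference, what that bump looks like in SQL (the choice of df.val_9
is an assumption; any column with bad estimates is a candidate):

-- Raise the per-column statistics target, then re-sample; the new
-- target only takes effect at the next ANALYZE.
ALTER TABLE df ALTER COLUMN val_9 SET STATISTICS 200;
ANALYZE df;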
[EMAIL PROTECTED] (Dirk Lutzebaeck) writes:
> SELECT DISTINCT ON (df.val_9, df.created, df.flatid) df.docindex,
> df.flatobj, bi.oid, bi.en
> FROM bi,df
> WHERE bi.rc=130170467
> ...
> ORDER BY df.val_9 ASC, df.created DESC
> LIMIT 1000 OFFSET 0
Just out of curiosity, what is this query supposed to do?
Greg Stark <[EMAIL PROTECTED]> writes:
> Am I wrong? I thought the disk sort algorithm was the polyphase tape sort from
> Knuth which is always reading two tapes and writing to a third.
It is a polyphase sort, but we recycle the input "tapes" as fast as we
use them, so that the maximum disk space usage stays close to the actual
data volume rather than a multiple of it.
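A side note on observing these spills: 7.4 has no direct knob for it,
but releases from 8.3 on can log each temporary file, e.g.:

-- Log any temp file larger than 100 MB (value in kB; 0 logs them all).
-- Not available in 7.4; log_temp_files was added in 8.3. It is a
-- superuser-only session setting, normally set in postgresql.conf.
SET log_temp_files = 102400;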
Tom Lane <[EMAIL PROTECTED]> writes:
> It was 700k rows to sort, not 22k.
Oops, missed that.
> > The temporary table does need to store three copies of the records at
> > a given time, but still it sounds like an awful lot.
>
> Huh?
Am I wrong? I thought the disk sort algorithm was the polyphase tape
sort from Knuth, which is always reading two tapes and writing to a third.
Greg Stark <[EMAIL PROTECTED]> writes:
> Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
>> Can someone please explain why the temp file is so huge? I understand
>> there are a lot of rows.
> Well that I can't explain. 22k rows of width 1361 doesn't sound so big to me
> either.
It was 700k rows to sort, not 22k.
Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All
> tables have been analyzed before.
Really? A lot of the estimates are very far off. If you really just analyzed
these tables immediately prior to the query then perhaps you should try
raising the statistics targets on the columns involved.