Tom Lane mentioned :
=> Not if you haven't got the RAM to support it :-(
=>
=> Another thing you might look at is ANALYZEing the tables again after
=> you've loaded all the new data. The row-count estimates seem way off
=> in these plans. You might need to increase the statistics target,
=> too,
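(For anyone following along, raising the target and re-analyzing looks roughly like this on 7.3; the table and column names below are made up, and 100 is just an example value -- the default per-column target is 10:)

-- Raise the per-column statistics target before re-ANALYZEing;
-- the default of 10 histogram buckets is often too coarse for large,
-- skewed tables, which makes the planner's row-count estimates way off.
ALTER TABLE big_table ALTER COLUMN some_key SET STATISTICS 100;

-- Recompute planner statistics after the bulk load.
ANALYZE big_table;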
Stef wrote:
Christopher Kings-Lynne mentioned :
=> sort_mem = 4096
Reducing sort_mem to 4096 seems to make it run in a reasonable time
again. Any idea why? The database does a whole lot of huge sorts
every day, so I thought upping this parameter would help.
A couple of queries do seem to run slower
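(Worth remembering: sort_mem in 7.3 is allocated per sort step, per backend, so a big global value gets multiplied by however many sorts are running at once. One way to have it both ways -- sketched below with an illustrative value -- is to keep the global setting small and raise it only in the session that runs the nightly batch:)

-- Keep the global sort_mem modest and raise it only for the batch session.
-- The value is in kB; 65536 (64 MB) here is just an example.
SET sort_mem = 65536;
-- ... run the big sorts/reports ...
RESET sort_mem;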
For starters,
shared_buffers = 110592
wal_buffers = 400
sort_mem = 30720
vacuum_mem = 10240
checkpoint_segments = 30
commit_delay = 5000
commit_siblings = 100
effective_cache_size = 201413
Try more like this:
shared_buffers =
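(The suggested numbers got cut off above, but as a rough sanity check against the ~1.8 GB of RAM shown by free further down: with 7.3's 8 kB buffer pages the posted settings work out roughly as follows, which lines up with Tom's point about not having the RAM to back a big sort_mem:)

-- Rough arithmetic on the posted values (7.3 uses 8 kB buffer pages):
SHOW shared_buffers;        -- 110592 * 8 kB ~= 864 MB of shared memory
SHOW sort_mem;              -- 30720 kB ~= 30 MB per sort step, per backend
SHOW effective_cache_size;  -- 201413 * 8 kB ~= 1.5 GB assumed OS cache
-- 864 MB of shared buffers plus a handful of concurrent 30 MB sorts
-- already crowds a machine with ~1.8 GB of physical RAM.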
Hi all,
I've attached all the queries in query.sql
I'm using postgres 7.3.4 on Linux version 2.4.26-custom
( /proc/sys/vm/overcommit_memory = 0 this time )
free :
             total       used       free     shared    buffers     cached
Mem:       1810212    1767384      42828          0