Matthew T. O'Connor wrote:
> Niblett, David A wrote:
> 
>> Wow, then I'm completely lost looking at your numbers.
>>
>> I set my shmall and shmmax to 805306368, so that the DB (which
>> is the only thing using memory on the box) could have 800M of
>> shared RAM.  Am I crazy here?
> 
> PostgreSQL uses its shared buffers for some internal operations, but it
> really relies on the OS to do most of the data file caching. I think the
> usual recommendation for the shared_buffers setting is more on the order
> of 10% of RAM or so. If you set it too high, it can cause performance
> problems of its own (this problem is greatly reduced in 8.1, but still).

Yes, the usual recommendation is 5-15% of RAM.

Another important memory option is work_mem, which is used for sort and
hash operations. If Pg has to sort a 1 MB result set but work_mem is
only 512 kB, it has to split the set, sort the pieces on disk, and merge
them. But be careful: this amount of memory can be allocated several
times, once per sort or hash operation in each concurrent query.
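You can watch this happen with EXPLAIN ANALYZE. A sketch (the table and
column names here are made up, and the exact plan output wording varies
by version; in 8.1 work_mem is set in kB):

```sql
-- Hypothetical table "orders"; names are only for illustration.
SET work_mem = 512;    -- 512 kB: the 1 MB sort spills to disk
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;

SET work_mem = 4096;   -- 4 MB: the same sort fits in memory
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
```

Newer releases print the sort method directly in the plan (e.g.
"external merge" vs. "quicksort"); on older ones you can watch the
temporary file activity under the data directory instead.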

Another memory option that speeds up vacuums is maintenance_work_mem.

Check effective_cache_size as well; it tells the planner how much data
the OS file cache is likely to hold.
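Putting the memory settings together, a minimal postgresql.conf sketch
for a dedicated 1 GB box (the numbers are assumptions chosen only to
illustrate the ratios above; in 8.1 shared_buffers and
effective_cache_size are counted in 8 kB pages, the work_mem settings
in kB):

```
shared_buffers = 10000          # ~80 MB, roughly 10% of RAM
work_mem = 4096                 # 4 MB per sort/hash operation
maintenance_work_mem = 65536    # 64 MB for VACUUM, CREATE INDEX
effective_cache_size = 65536    # ~512 MB expected in the OS cache
```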

>> My max_fsm_pages is 200000 and relations are 10000.  I'm
>> guessing that the more fsm_pages, the less you have to run
>> VACUUM FULL.  Do you think it might be better to lower the
>> shared_buffers, and increase the max_fsm_pages?
> 
> Yes, I think both of those are good to do; however, increasing the FSM
> size is only relevant if the free space isn't all fitting in the map
> already.

Pg itself can tell you the right settings: VACUUM VERBOSE reports how
many page slots and relations are actually needed. Set the values
somewhat higher than the reported numbers.
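For example, the last lines of a database-wide VACUUM VERBOSE look
roughly like this (the numbers are invented and the exact wording
varies by version):

```sql
VACUUM VERBOSE;
-- ...
-- INFO:  free space map contains 33440 pages in 74 relations
-- DETAIL:  A total of 49024 page slots are in use (including overhead).
--          123456 page slots are required to track all free space.
--          Current limits are:  200000 page slots, 1000 relations, ...
```

If the "required" number approaches or exceeds the current limit, raise
max_fsm_pages (and max_fsm_relations, if needed) above it.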

Additionally, I always increase default_statistics_target, but I have
never run systematic tests to find a good value.
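The target can be raised globally in postgresql.conf or per column; in
either case the statistics must be rebuilt with ANALYZE afterwards. A
sketch (the table and column names are hypothetical):

```sql
-- Raise the statistics target for one skewed column only.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```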


Transactions per second should increase a lot if the WAL is in memory.
In a test environment I'd move it to a tmpfs to see how the numbers
change; in a production environment a battery-backed SSD is the better
option ;-)
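A rough sketch of that experiment, for a test environment only (the
paths assume a typical Linux layout and are only examples; tmpfs
contents vanish on reboot or power loss, which destroys the cluster,
so never do this in production):

```shell
# TEST ENVIRONMENT ONLY -- WAL kept in RAM is lost on power failure.
pg_ctl stop -D /var/lib/pgsql/data
mount -t tmpfs -o size=512M tmpfs /mnt/walram
mv /var/lib/pgsql/data/pg_xlog /mnt/walram/pg_xlog
ln -s /mnt/walram/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl start -D /var/lib/pgsql/data
```

Then rerun the benchmark and compare transactions per second against
the on-disk WAL numbers.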


Thomas
