Folks,
I have a function that adds 100-900 rows to a table and then updates
them 12 times using data pulled from all over the database. I've
increased shared_buffers, sort_mem, and wal_files significantly
(to 2048, 1024, and 16 respectively), and added a few relevant indexes.
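For reference, assuming I've mapped those to the right 7.1 parameter
names, the relevant postgresql.conf lines now look roughly like this:

    shared_buffers = 2048     # shared buffer cache, in 8KB pages (~16MB)
    sort_mem = 1024           # per-sort memory, in KB
    wal_files = 16            # WAL segment files pre-allocated in advance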
However, this function does not seem to improve in response time by more
than 10%, no matter how many resources I give PostgreSQL. While the 35
seconds it takes on an otherwise idle system isn't a problem, toward the
end of the day with a full slate of users it's taking several minutes --
an unacceptable delay at this stage.
I can't help but feel that, because functions wrap everything in a
single transaction, some sort of tinkering with the xlog/WAL settings
is called for ... but I haven't found any documentation on this. Or
maybe I'm just being choked by the speed of disk access?
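For what it's worth, the only WAL-related knobs I can find in
postgresql.conf are along these lines, and I'm only guessing at which
of them (if any) actually matter for this:

    wal_buffers = 8       # in-memory WAL buffers, in 8KB pages
    wal_files = 16        # extra WAL segments created in advance
    commit_delay = 0      # microseconds to wait before flushing WAL at commit
    fsync = true          # whether WAL is fsync'd at commit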
Can anyone point me in the right direction, or should I be posting this
to a different list?
-Josh Berkus
P.S. PostgreSQL 7.1.2 running on SuSE Linux 7.2 on a Celeron 500 with
128MB RAM and an IDE HDD.
______AGLIO DATABASE SOLUTIONS___________________________
Josh Berkus
Complete information technology [EMAIL PROTECTED]
and data management solutions (415) 565-7293
for law firms, small businesses fax 621-2533
and non-profit organizations. San Francisco