I currently have a J2EE app that allows our users to upload files. The
actual file gets stored on the same disk the webserver runs on, while
the information they entered about the file gets stored in the
database. We now need to move the database to a different machine
and I'm wondering
Why couldn't Postgres learn for itself what the optimal performance
settings would be? The easy one seems to be effective_cache_size.
Top shows us this information. Couldn't Postgres read that value from
the same place top reads it, instead of relying on a config file value?
Seems like it
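For what it's worth, top's cache figure comes from /proc/meminfo on Linux, so a process could read the same number directly. A minimal sketch of the idea (assuming Linux, and PostgreSQL's 8 kB page unit for effective_cache_size; the derivation is illustrative, not something Postgres actually does):

```shell
# Read the kernel's page-cache size the same way top does,
# then express it in PostgreSQL's 8 kB pages.
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
pages=$((cached_kb / 8))
echo "suggested effective_cache_size = $pages"
```

Of course the cache figure fluctuates with load, which is one argument for leaving it as an administrator-set value rather than auto-detecting it at startup.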
On Wednesday, July 2, 2003, at 07:04 PM, Ang Chin Han wrote:
Matthew Hixson wrote:
I don't know what that is. I don't have an iostat utility on the
machine. This is a Debian Linux machine. Is there a package with
that utility in it?
apt-get install sysstat
apt-cache search ios
On Wednesday, July 2, 2003, at 01:10 PM, Rod Taylor wrote:
We have also done little to no performance tuning of Postgres'
configuration. We do have indexes on all of the important columns and
we have reindexed. Any pointers would be greatly appreciated.
Tuning will often double (if not more) performance
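As a hedged illustration of what such tuning might touch, here is a sketch of the usual postgresql.conf suspects; the values are assumptions for a modest machine of that era, not recommendations:

```
# Illustrative starting points only -- tune to the machine's RAM and workload.
shared_buffers = 4096          # 32 MB of shared buffer cache (8 kB pages)
sort_mem = 8192                # per-sort memory, in kB
effective_cache_size = 65536   # ~512 MB of OS cache, in 8 kB pages
```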
We currently have a public website that is serving customers, or at
least trying to. This machine is underpowered, but we are going to be
upgrading soon. In the meantime we need to keep the current site alive.
We are running a Java application server. It is receiving
'transaction timed out'
On Friday, June 27, 2003, at 01:17 PM, Jord Tanner wrote:
On Fri, 2003-06-27 at 12:09, Patrick Hatcher wrote:
I have a 6 GB RAM box. I've set my shmmax to 307200. The database
starts up fine without any issues. As soon as a query is run
or an FTP process to the server is done, the used memory
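One thing worth noting: shmmax is specified in bytes, so 307200 allows only about 300 kB of shared memory, which is far too little to give shared_buffers any room on a 6 GB box. A sketch of computing and applying a more generous value (512 MB here is an illustrative assumption, not a recommendation):

```shell
# shmmax is in bytes; pick a target of 512 MB of shared memory.
shmmax=$((512 * 1024 * 1024))
echo "kernel.shmmax = $shmmax"   # 536870912

# To apply it (as root), either of:
#   sysctl -w kernel.shmmax=536870912
#   echo 536870912 > /proc/sys/kernel/shmmax
```

Separately, "used" memory in top includes the kernel's disk cache, so climbing usage after queries or FTP transfers is normal Linux caching behavior, not necessarily a leak.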