Hi,
I've asked this question a couple of times before on this list, but so far nobody has pointed me in the right direction or helped me out with any information. Please bear with me, because this is a serious issue for me and I need to learn more about it. So here it is again:
I've been running PostgreSQL on my server for over a year now and the tables have become huge. I have three tables with over 10 GB of data each, and they are read very frequently: heavy searches against them come in every 2 to 3 minutes. This unfortunately gives very poor response times to the end users, so I'm now looking at alternatives.
Currently, the PostgreSQL installation is on a single disk, so all the tables are read from that one disk. Searches on different tables by multiple users at the same time are very slow, since everything is limited by the spindle speed of a single drive. I recently gained access to another server which has three SCSI disks.

I know there is a way to spread the tables across the three disks, but I'm not sure if it's as easy as symlinking the files across (or is that only for the WAL files?). Can anyone tell me what to do here and how to harness the three SCSI drives I have? Which files in the data directory need to be moved? Is it safe? Can backups still be done easily? Any information will be greatly appreciated; a rough sketch of what I have in mind is in the P.S. below. Thank you,
Steve
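
P.S. For what it's worth, here is the kind of thing I imagine, as a rough sketch only. It assumes a PostgreSQL version with tablespace support (8.0 or later), and the mount points, table names, and index name are all made up; please correct me if this is the wrong approach:

    -- create one tablespace per extra SCSI disk (mount points are hypothetical)
    CREATE TABLESPACE disk2 LOCATION '/mnt/scsi2/pgdata';
    CREATE TABLESPACE disk3 LOCATION '/mnt/scsi3/pgdata';

    -- move two of the big tables onto the new disks
    -- (big_table_a / big_table_b stand in for my real table names;
    --  this rewrites each table and locks it while it runs)
    ALTER TABLE big_table_a SET TABLESPACE disk2;
    ALTER TABLE big_table_b SET TABLESPACE disk3;

    -- indexes can be moved separately, e.g. onto a different disk than their table
    ALTER INDEX big_table_a_idx SET TABLESPACE disk3;

And for the WAL, I gather people put the pg_xlog directory on its own disk and symlink it back into the data directory, but I'm not sure whether that is the recommended way either.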