I am busy reading Gregory Smith's PostgreSQL 9.0 High Performance, and
when the book was written he seemed to me a bit sceptical about SSDs. I
suspect the reliability of SSDs has improved significantly since then.

Our present server (128 GB RAM, 2.5 TB disk space on RAID 10, and 12 CPU
cores) will become a development server, and we are going to buy a new
server.

At the moment the 'base' directory uses 1.5 TB of disk space, and there is
still more data to come.

The database contains bibliometric data that receives updates on a weekly
basis, but apart from that not much changes, except for cleaning of the data
by a few people.

Some of the queries can take many hours to finish.

On our present system there are sometimes more than 300 GB of temporary
files, which I suspect will not be the case on the new system with its much
larger RAM.
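
(To frame the question: my understanding is that those temporary files come
from sorts and hashes spilling past work_mem, so part of the gain from more
RAM would be the room to raise these settings. A rough sketch of what I have
in mind for postgresql.conf - the values are only illustrative, not tested:

    work_mem = 1GB                  # per sort/hash operation; more keeps work in RAM
    maintenance_work_mem = 8GB      # for index builds and VACUUM
    temp_file_limit = -1            # no per-session cap on temporary file space
)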

Analysis of the sar logs showed too much iowait on the CPUs of the old
system, which has lower-spec CPUs than the ones being considered for the new
system.

We are possibly looking at the following hardware:

CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores
RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768 GB

with enough disk space - about 4.8 TB on RAID 10.
My question is about the possible advantage and usage of SSDs in the
new server. At the moment I am considering using 2 x 200 GB SSDs for a
separate partition for temporary files and 2 x 100 GB SSDs for the
operating system.
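
If we do go the SSD route for temporary files, my current thinking is to
point PostgreSQL at them with a dedicated tablespace, roughly like this
(the mount point below is just a placeholder):

    -- assuming the SSD pair is mounted at /ssd/pgtemp (placeholder path)
    CREATE TABLESPACE ssdtemp LOCATION '/ssd/pgtemp';

and then in postgresql.conf:

    temp_tablespaces = 'ssdtemp'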

So my questions:

1. Will the SSDs in this case be worth the cost?
2. What would be the best way to utilize them in the system?

Regards
Johann
-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)
