> On Thu, 18 Mar 2004, Anton Nikiforov wrote:
>
>> But I'm worried about the central database mentioned, which has to
>> store 240 million records daily and collect that data for years.
>
> I have not worked with anything even remotely that big, so just a few
> thoughts. I think this is more of a hardware issue than a PostgreSQL
> issue, and a good disk subsystem will be a must. The last time I was
> evaluating large disk subsystems for my ex-employer, the one we were
> leaning towards was an IBM unit in the $100,000 range.
>
> Regardless of architecture (PC, Sun, etc.), SMP may help if you have
> concurrent users, and lots and lots of memory will help too.
>
>> And the data migration problem is still an open issue for me: how do I
>> migrate data from fast devices (a RAID array) to slower devices (an MO
>> library or something like it) while still having access to that data?
>
> I don't follow you there. Do you mean backup? You can make a pg_dump of
> the data while the database is running and then back that up.
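>
> For example, something along these lines (just a sketch; the database
> name and backup path are made up):
>
>     # take a consistent dump while the server keeps accepting queries;
>     # pg_dump works from a snapshot, so writers are not blocked
>     pg_dump -Fc bigdb > /backup/bigdb-$(date +%Y%m%d).dump
>
>     # a custom-format dump can later be restored with pg_restore
>     pg_restore -d bigdb /backup/bigdb-20040318.dump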
>
> Or were you talking about something else, like storing different data on
> media of different speeds (i.e. Hierarchical Storage Management)?

I do not exactly know how to deal with such a huge amount of data. The
disk subsystem is a must, and I do understand this; an SMP architecture
is a must as well. What I was asking is whether there is any way for the
data to migrate automatically from a fast disk subsystem to slower but
reliable storage. In Novell NetWare (I worked with it 7-8 years ago) you
could tell the system that if a file had been untouched for a month, it
should be moved from disk to magneto-optical or tape, and if that file
was requested again the OS would move it back to the operational volume.
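In database terms I imagine something like a nightly cron job (just a
sketch; the table "records", its timestamp column "stamp", and the
archive table are all invented, and "untouched for a month" is
approximated by row age, since the database does not track per-row
access):

    # cron job: move rows older than a month to an archive table
    # that lives on the slow (MO/tape-backed) volume
    psql bigdb <<'EOF'
    BEGIN;
    -- copy month-old rows to the archive table
    INSERT INTO records_archive
        SELECT * FROM records WHERE stamp < now() - interval '1 month';
    -- then remove them from the fast RAID volume
    DELETE FROM records WHERE stamp < now() - interval '1 month';
    COMMIT;
    EOF

But that only handles the one-way trip; what I am really after is
something that also brings the data back transparently when it is
requested, the way NetWare did.
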
Anton