Hello Everyone,
While talking to a friend about what his company is planning to do,
I found out that he is planning a 70TB filesystem/servers/cluster/db.
(Yes, seventy t-e-r-a-b-y-t-e...)
Apparently, he has files of up to 2GB each, and they actually require
such a horribly sized cluster.
If he wanted a PC cluster, with 200GB on each PC, he would have
350 machines to maintain. From past experience maintaining clusters,
I guarantee that he will have at least 1 box failing every other day.
And I really do not think his idea of using NFS is that good. ;-)
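Just to sanity-check the numbers: 350 machines at 70TB works out to
about 200GB per box, and the "one failure every other day" claim follows
if you assume a per-box MTBF of around 700 days (that MTBF figure is my
assumption for illustration, not something from the post):

```python
import math

# Figures from the post; per-box MTBF is an assumed round number.
TOTAL_TB = 70
PER_NODE_GB = 200          # 70 TB / 350 boxes ~= 200 GB each
MTBF_DAYS = 700            # assumed mean time between failures per box

nodes = math.ceil(TOTAL_TB * 1000 / PER_NODE_GB)
failures_per_day = nodes / MTBF_DAYS

print(nodes)               # 350 machines
print(failures_per_day)    # 0.5, i.e. roughly one failure every other day
```

Of course the real failure rate depends on the hardware, but the point
stands: at this node count, box failures become a routine event.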
Now if we were to go the high-end route (which is probably more
cost-effective), we could pick SANs, large Sun fileservers, or some such.
I still cannot picture him being able to maintain file integrity.
I say that he should split his filesystems into much smaller
chunks, say 1TB each, and set each one up as a RAID5 array.
Mirroring or other RAID configurations would prove too costly.
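To put a rough number on "too costly", here is a sketch of the disk
counts for one 1TB chunk under RAID5 versus mirroring. The member-disk
size and RAID5 group width are my assumptions for illustration only:

```python
# Assumed hardware: 100 GB member disks, 8-wide RAID5 groups
# (7 data disks + 1 parity disk per group).
DISK_GB = 100
RAID5_WIDTH = 8

def disks_needed_raid5(usable_gb):
    # Each group loses one disk's worth of capacity to parity.
    data_per_group = (RAID5_WIDTH - 1) * DISK_GB
    groups = -(-usable_gb // data_per_group)   # ceiling division
    return groups * RAID5_WIDTH

def disks_needed_mirror(usable_gb):
    # Mirroring doubles every data disk.
    data_disks = -(-usable_gb // DISK_GB)
    return data_disks * 2

print(disks_needed_raid5(1000))    # 16 disks per 1 TB chunk
print(disks_needed_mirror(1000))   # 20 disks per 1 TB chunk
```

Scaled to 70 chunks, that gap is a few hundred disks, which is why
RAID5's single-parity overhead looks a lot friendlier than mirroring
at this size.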
What would you guys do in this case? :)
--
+------------------------------------------------------------------+
| [EMAIL PROTECTED] | [EMAIL PROTECTED] |
| http://peorth.iteration.net/~keichii | Yes, BSD is a conspiracy. |
+------------------------------------------------------------------+
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message