On Mon, 2002-06-10 at 03:46, Alvin Oga wrote:

> > - and if the drives gonna fail... i say its more likely to die
> >   within the first 30 days ...
Yes. MTBF only measures how likely the drive is to fail during the middle of
its life. A good number die early (defective) and late (worn out); not many
die in the middle, and that middle is what MTBF measures.

I was speaking of the MTBF of RAID-0, where any one disk death means the
whole array is gone (rough numbers at the end of this message).

> - what's the likelyhood of 2 drives that fail ...
>   rendering the raid subsystem to be just blank disks..

Not much, especially if you replace the failed disk promptly or have a spare.

> ( hopefully one can rest a little better after the first disk
> ( dies... or is more of the same fate to happen to the rest of
> ( the disks ...

Neither. Unless the failure was due to the environment (e.g., running the
disks at 120 degrees in a paint-can shaker), having one disk fail makes the
others neither more nor less likely to fail.

> > - i still prefer 1 large disks.. instead of many small ones...

If you have many small disks and one fails, you are OK as long as you used
RAID 1 or RAID 4/5: you just replace the one failed disk. If your one large
disk fails, you're down until you restore from backups.

> > - if the server needs to stay up 24x7 ... than i'd like to have 2 or 3
> >   servers to be looking like 1 server...

Yep. This isn't always easy, though.
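Since the thread is about failure odds, here is a minimal back-of-the-envelope
sketch in Python (not from the original mail; the 3% per-disk annual failure
probability is an assumed, illustrative number) comparing a single disk, a
2-disk RAID-0 stripe, and a 2-disk RAID-1 mirror:

    # Assumed, illustrative per-disk probability of failing within one year.
    p = 0.03

    single = p                   # lone disk dies -> data gone
    raid0  = 1 - (1 - p) ** 2    # stripe is lost if *either* disk dies
    raid1  = p * p               # mirror is lost only if *both* die; this is a
                                 # crude upper bound, as if the first failed
                                 # disk were never replaced

    print(f"single disk:                   {single:.4f}")
    print(f"2-disk RAID-0:                 {raid0:.4f}")
    print(f"2-disk RAID-1 (no replacement): {raid1:.6f}")

With these assumed numbers the stripe is roughly twice as likely to lose data
as a single disk, while the mirror is far less likely, and replacing the first
failed disk promptly shrinks the mirror's window even further.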