On Sat, 16 Oct 2004, Russell Coker wrote:
> Getting servers that each have 200G or 300G of storage is easy.  Local
Make it a few TBs...

> though).  Having multiple back-end servers with local disks reduces the risks
> (IMHO).  There's less cables for idiots to trip over or otherwise break

It depends on the total number of disks you end up with in your server
room, I think.  If it grows too large, very good SAN hardware will decrease
the chances of service downtime, because the SAN hardware and disks are
better protected against catastrophic failures, and do predictive failure
analysis right.  And a big SAN is much easier to manage than hundreds of
servers, each with many disks of different types.

OTOH, any SAN hardware that does not have at least full double redundancy
is NOT a good idea at all.

> one back-end server go down and take out 1/7 of the mail boxes would be
> annoying, but a lot less annoying than a SAN problem taking it all out.

And that can happen, too.  I think it did happen to an ISP around here, to
their big, expensive-as-all-heck EMC hardware.  It is rather rare, though,
and the service contract usually states that the SAN manufacturer is liable
for damages on such long downtimes...

-- 
  "One disk to rule them all, One disk to find them.  One disk to bring
   them all and in the darkness grind them.  In the Land of Redmond where
   the shadows lie." -- The Silicon Valley Tarot

  Henrique Holschuh