As shipped, our X4500s have eight raidz sets of six disks each. If the 
failures are spaced right, you can lose one disk per raidz set without a 
pool dying. The root disk is mirrored, so losing one isn't the end of the 
world, with the exception that grub is thoroughly fraked up: if disk 0 
dies, you have to manually make the darn thing boot. You can't hot-swap 
CPUs or memory, but you can swap drives, fans, network links, and power 
supplies.
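
For reference, here's roughly what that layout looks like from the shell. 
This is only a sketch: the c#t#d# device names below are placeholders 
(check format(1M) for your real ones), and on a stock X4500 two of the 48 
slots hold the boot mirror, so you'd leave those out of the data pool:

    # One pool built from eight 6-disk raidz sets.  Device names are
    # placeholders; substitute your actual controller/target numbers
    # and exclude the two boot-mirror disks.
    zpool create tank \
        raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
        raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
        raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
        raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 \
        raidz c0t4d0 c1t4d0 c2t4d0 c3t4d0 c4t4d0 c5t4d0 \
        raidz c0t5d0 c1t5d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0 \
        raidz c0t6d0 c1t6d0 c2t6d0 c3t6d0 c4t6d0 c5t6d0 \
        raidz c0t7d0 c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0

    # And the manual grub fix if the primary boot disk dies: reinstall
    # grub on the surviving half of the root mirror (again, the device
    # name is a placeholder for whatever your second boot disk is):
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

zpool status will show you which raidz set a failed disk belongs to, which 
matters given the one-disk-per-set tolerance above.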

With the rest of the hardware redundancy built in, they have been working 
pretty well for us here. We did have some issues with a machine failure 
(software-related, not hardware), but with a decent support contract you 
should be OK.

Our Windows group purchased their BlueArc SAN and spent $100K for 15 TB 
(raw)... I spent $50K for 33 TB (usable)...


David


David Glaser
Systems Administrator
LSA Information Technology
University of Michigan

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Solaris
Sent: Thursday, October 09, 2008 4:09 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 
Servers?

I have been leading the charge in my IT department to evaluate the Sun
Fire X45x0 as a commodity storage platform, in order to leverage
capacity and cost against our current NAS solution, which is backed by
an EMC Fibre Channel SAN.  For our corporate environments, it would seem
that a single machine would supply more than triple our current usable
NAS capacity, at a significantly lower cost per GB.  I am also working
to prove that the multi-protocol shared storage capabilities of the
Thumper significantly outperform those of our current solution
(which is notoriously bad from the end-user perspective).

The EMC solution is completely redundant with no single point of
failure.  What are some good strategies for providing a Thumper
solution with no single point of failure?

The storage folks are pooh-poohing this concept because of the chance
of an operating system failure... I'd like to come up with some
reasonable methods to put them in their place :)
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss