Hi All,
I have been watching this thread for a while and thought it was time I chipped my
two cents' worth in. I have been an aggressive adopter of ZFS here across all of our
Solaris systems and have found the benefits far outweigh any small issues that have
arisen.
Currently I have many systems with LUNs provided from SAN-based storage for their
zpools. All of them are configured with mirrored vdevs, and the reliability has been
as good as, if not better than, UFS and LVM.
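For anyone wondering, there is no magic in setting one of these up. A rough sketch
(the pool name and c#t#d# device names below are just placeholders, not our actual
LUNs; substitute whatever format(1M) shows on your box):

    # One LUN from each array per mirror, so either array can go away
    zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0

    # Check both halves of each mirror are ONLINE
    zpool status tank

Because each mirror has a side on each array, losing a whole array just degrades the
pool rather than taking it down.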
My rule of thumb is to get the storage infrastructure right, as that generally leads
to the best availability. To this end, every SAN-attached system has dual paths to
separate switches, and every array has dual controllers, each pathed to a different
switch. ZFS may be more or less susceptible to any given physical infrastructure
problem, but in my experience it is on a par with UFS (and I gave up shelling out
for VxFS long ago).
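If you are doing the same on Solaris 10, MPxIO will collapse the dual paths into a
single device for you. From memory it is roughly this (treat it as a sketch rather
than gospel):

    # Enable Solaris I/O multipathing (MPxIO); requires a reboot
    stmsboot -e

    # After the reboot, verify each LUN reports two operational paths
    mpathadm list lu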
The reason for the above configuration is that our storage is evenly split between
two sites, with dark fibre between them across redundant routes. This forms a ring
of around 5 km. We have so much storage that we need this in case of a data centre
catastrophe. The business recognises that the time-to-recovery risk would be so
great that, without it, we would be out of business if one of our data centres
burned down or was hit by some other natural disaster.
I have seen other people discussing power availability on other threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't buy the comments
on UPS unreliability.
Quite frequently I have rebooted arrays and removed them from mirrored vdevs, and
have had no issues with the LUNs they provide reattaching and resilvering. Scrubs on
the pools have always been successful. The largest single mirrored pool is around
11 TB, built from two 6140 RAID 5s.
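The array reboot dance looks roughly like this (pool and device names are
placeholders again):

    # Let ZFS know the LUN is going away before bouncing the array
    zpool offline tank c3t0d0

    # When the array is back, bring the LUN back; the resilver starts automatically
    zpool online tank c3t0d0

    # Watch the resilver finish, then scrub to verify the lot
    zpool status -v tank
    zpool scrub tank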
We also use Loki boxes for very large storage pools, which are routinely filled
(I was a beta tester for Loki). I have two J4500s, one with 48 x 250 GB drives and
one with 48 x 1 TB drives. No issues there either. The 48 x 1 TB unit is used in a
disk-to-disk-to-tape configuration with an SL500 to back up our entire site. It is
routinely filled to the brim and performs admirably attached to a T5220 with 10 GbE.
All of the systems I have mentioned vary from Samba servers to compliance archives,
Oracle DB servers, Blackboard content stores, Squid web caches, LDAP directory
servers, mail stores, mail spools and calendar server databases. The list covers
60-plus systems. I have nothing older than Solaris 10. Why would you?
In short, I hope people don't hold back from adopting ZFS because they are unsure
about it. Judge for yourself as I have done, and dip your toes in at whatever rate
you are happy with. That's what I did.
/Scott.
I also use it at home, with an old D1000 attached to a V120 with 8 x 320 GB SCSI
disks in a RAIDZ2 for all our home data and home business (a printing outfit that
creates a lot of very big files on our Macs).
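For the record, that home pool is nothing more than this (device names are
placeholders for the eight disks in the D1000):

    # 8-disk RAIDZ2: keeps going with any two disks failed
    zpool create home raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t8d0 c1t9d0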
--
_______________________________________________________________________
Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services
Private Bag 94006, Manukau City, Auckland, New Zealand
Phone : +64 09 968 7611
Fax : +64 09 968 7641
Mobile : +64 27 568 7611
mailto:sc...@manukau.ac.nz
http://www.manukau.ac.nz
________________________________________________________________________
perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
________________________________________________________________________