Lloyd H. Gill wrote:
Hello folks,
I am sure this topic has been asked, but I am new to this list. I have
read a ton of docs on the web, but wanted to get some opinions from
you all. Also, if someone has a digest of the last time this was
discussed, you can just send that to me. In any case, I am reading a
lot of mixed reviews related to ZFS on HW RAID devices.
The Sun docs seem to indicate it is possible, but not a recommended
course. I realize there are some advantages, such as snapshots, etc.
But the h/w RAID will handle 'most' disk problems, basically taking
away one of the big reasons to deploy ZFS. One
suggestion would be to create the h/w RAID LUNs as usual, present them
to the OS, then do simple striping with ZFS. Here are my two
applications, where I am presented with this possibility:
Comments inline below from me, as I am a user of both of these
environments, both with ZFS. You may also want to check the iMS
archives or subscribe to the list; that is where all the Sun Messaging
Server gurus hang out. (I mostly listen. ;))
The list is info-...@arnold.com and you can get more info here:
http://mail.arnold.com/info-ims.htmlx
Sun Messaging Environment:
We currently use EMC storage. The storage team manages all Enterprise
storage. We currently have 10x300gb UFS mailstores presented to the
OS. Each LUN is a HW RAID 5 device. We will be upgrading the
application and doing a hardware refresh of this environment, which
will give us the chance to move to ZFS, but stay on EMC storage. I am
sure the storage team will not want to present us with JBOD. It is
their practice to create the HW LUNs and present them to the
application teams. I don't want to end up with a complicated scenario,
but would like to leverage ZFS as much as I can, on the EMC array as I
mentioned.
In this environment I do what Bob mentioned in his reply to you: I
provision two LUNs for each data volume and mirror them with ZFS. The
LUNs are built on RAID 5 stripes on 3510s, 3511s and 6140s. Mirroring
them with ZFS gives all of the niceties of ZFS, and it will catch the
silent data corruption issues that hardware RAID will not. My
reasoning for doing it this way goes back to Disksuite days as well
(which I no longer use; it's ZFS or nothing pretty much these days).
My setup is based on 5 x 250 GB mirrored pairs with around 3-4 million
messages per volume.
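A minimal sketch of how such a mirrored pool might be built, assuming
(hypothetically) that one array presents its LUNs to Solaris as c2t*d0
and the other as c3t*d0 -- the device and pool names here are
illustrative, not from the original post:

```shell
# Hypothetical device names: c2t*d0 from array A, c3t*d0 from array B;
# each LUN is itself a HW RAID 5 stripe on its array.
zpool create mailstore \
    mirror c2t0d0 c3t0d0 \
    mirror c2t1d0 c3t1d0

# ZFS checksums every block, so corruption on one side of a mirror is
# detected and repaired from the other side on read or during a scrub.
zpool scrub mailstore
zpool status mailstore
```

Each top-level mirror then has one side in each data centre, so losing
an entire array still leaves a complete, checksummed copy of the data.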
The two LUNs I mirror are *always* provisioned from two separate
arrays in different data centres. This also means that in the case of
a massive catastrophe at one data centre, I should have a good copy
from the 'mirror of last resort' that I can use to get our business
back up and running quickly.
Another advantage is that this also allows for relatively easy array
maintenance and upgrades. ZFS only resilvers changed blocks rather
than doing a complete block resync like Disksuite does. This allows
for very fast convergence times on the likes of file servers, where
change is relatively light, albeit continuous. Mirrors here are super
quick to reconverge in my experience, a little quicker than RAIDZs.
(I don't have data to back this up, just a casual observation.)
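The array-maintenance pattern this enables might be sketched as
follows (hypothetical pool and device names again); offline/online is
used rather than detach, so ZFS only resilvers what changed while the
device was away:

```shell
# Take the mirror side on the array being serviced offline.
zpool offline mailstore c3t0d0

# ... perform the array maintenance or firmware upgrade ...

# Bring it back; ZFS resilvers only the blocks written while c3t0d0
# was offline, so a lightly-changing file server reconverges quickly.
zpool online mailstore c3t0d0
zpool status mailstore
```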
In some respects, speaking as both a storage guy and a systems guy:
sometimes the storage people need to get with the program a bit. :P If
you use ZFS with one of its redundant forms (mirrors or RAIDZs), then
JBOD presentation will be fine.
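As a sketch (hypothetical device names), building either redundant
form on plain JBOD presentation is a one-liner:

```shell
# Mirrored pairs, good for random-IO mail stores:
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# ...or, alternatively, single-parity RAIDZ for better capacity
# (one disk's worth of parity across the stripe):
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
```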
Sun Directory Environment:
The directory team is running HP DL385 G2, which also has a built-in
HW RAID controller for 5 internal SAS disks. The team currently has
DS5.2 deployed on RHEL3, but as we move to DS6.3.1, they may want to
move to Solaris 10. We have an opportunity to move to ZFS in this
environment, but I am curious how best to leverage ZFS capabilities in
this scenario. JBOD is very clear, but a lot of manufacturers out
there are still offering HW RAID technologies, with high-speed caches.
Using ZFS with these is not very clear to me, and as I mentioned,
there are very mixed reviews, not on ZFS features, but how it's used
in HW RAID settings.
The Sun Directory environment generally isn't very IO intensive,
except during massive data reloads or indexing operations. Other than
that it is an ideal candidate for ZFS and its rather nice ARC cache.
Memory is cheap on a lot of boxes, and it will make read-mostly file
systems fly. I imagine your actual live LDAP data set on disk probably
won't be larger than 10 GB or so? I have around 400K objects in mine
and it's only about 2 GB or so, including all our indexes. I tend to
tune DS up so that everything it needs is in RAM anyway. As far as
Directory Server goes, are you using the 64-bit version on Linux? If
not, you should be.
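One way to check whether the ARC is actually holding the DS working
set on Solaris 10 is the arcstats kstat; capping the ARC is an
/etc/system tunable (the size below is an example value, not a
recommendation):

```shell
# Show ARC size and hit/miss counters.
kstat -n arcstats | egrep 'size|hits|misses'

# To cap the ARC on a box sharing RAM with DS caches, add to
# /etc/system and reboot -- 0x100000000 bytes = 4 GB, example only:
#   set zfs:zfs_arc_max = 0x100000000
```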
Thanks for any observations.
Lloyd
------------------------------------------------------------------------
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
_________________________________________________________________________
Scott Lawson
Systems Architect
Information Communication Technology Services
Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand
Phone : +64 09 968 7611
Fax : +64 09 968 7641
Mobile : +64 27 568 7611
mailto:sc...@manukau.ac.nz
http://www.manukau.ac.nz
__________________________________________________________________________
perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
__________________________________________________________________________