Hello Jeffery,

Friday, January 26, 2007, 3:16:44 PM, you wrote:

JM> Hi Folks,

JM> I am currently in the midst of setting up a completely new file
JM> server using a pretty well loaded Sun T2000 (8x1GHz, 16GB RAM)
JM> connected to an Engenio 6994 product (I work for LSI Logic so
JM> Engenio is a no brainer).  I have configured a couple of zpools
JM> from Volume groups on the Engenio box - 1x2.5TB and 1x3.75TB.  I
JM> then created sub zfs systems below that and set quotas and
JM> sharenfs'd them so that it appears that these "file systems" are
JM> dynamically shrinkable and growable.  It looks very good...  I can
JM> see the correct file system sizes on all types of machines (Linux
JM> 32/64bit and of course Solaris boxes) and if I resize the quota
JM> it's picked up in NFS right away.  But I would be the first in our
JM> organization to use this in an enterprise system so I definitely
JM> have some concerns that I'm hoping someone here can address.

JM> 1.  How stable is ZFS?  The Engenio box is completely configured
JM> for RAID5 with hot spares and write cache (8GB) has battery backup
JM> so I'm not too concerned from a hardware side.  I'm looking for an
JM> idea of how stable ZFS itself is in terms of corruptability, uptime and
JM> OS stability.

When it comes to uptime, OS stability, or data corruption - no problems
here.
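
If you want to convince yourself about on-disk integrity, you can run
periodic scrubs and check the pool status; the pool name below is just an
example:

  # zpool scrub tank        # re-read every block and verify it against its checksum
  # zpool status -v tank    # scrub progress/result plus any READ/WRITE/CKSUM errors

Anything the array, cabling or HBA silently corrupts shows up there as
CKSUM errors.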

However, if you give ZFS entire LUNs - IIRC on Engenio arrays the cache
flushes that ZFS issues are actually honored by the array, and that can
hurt performance. There's a way to set up the array to ignore flush
commands, or you can put ZFS on an SMI-labeled slice. You'd have to check
whether this problem was really with Engenio - I'm not sure.
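
Roughly like this, just as a sketch - the device name is made up, and
check whether the zfs_nocacheflush tunable even exists in your release
before relying on it (it's a fairly recent addition):

  # zpool create tank c3t0d0s0    # pool on an SMI-labeled slice rather than the whole LUN

or, if the array can't be told to ignore flushes:

  # echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system
    (takes effect after a reboot; only sane with battery-backed array cache)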

However, depending on the workload, consider doing RAID in ZFS instead of
on the array, especially because you get self-healing from ZFS then.

At least striping across several RAID5 LUNs would be a good idea.
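
For example (the LUN names are obviously made up):

  # zpool create tank c4t0d0 c4t1d0 c4t2d0          # dynamic stripe over several RAID5 LUNs

or, doing the redundancy on the ZFS side instead:

  # zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 spare c4t4d0

The raidz variant is what buys you self-healing - ZFS then has parity of
its own to reconstruct a block from when a checksum doesn't match.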


JM> 2.  Recommended config.  Above, I have a fairly simple setup.  In
JM> many of the examples the granularity is home directory level and
JM> when you have many many users that could get to be a bit of a
JM> nightmare administratively.  I am really only looking for high
JM> level dynamic size adjustability and am not interested in its
JM> built in RAID features.  But given that, any real world recommendations?

Depending on how many users you have, consider creating a file system for
each user, or at least one per group of users if you can group them.
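
Roughly like this (pool and user names are just placeholders):

  # zfs create tank/home
  # zfs set sharenfs=rw tank/home      # descendant file systems inherit sharenfs
  # zfs create tank/home/jeff
  # zfs set quota=10g tank/home/jeff

With inherited properties it's only a couple of commands per user, which
is easy to script.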


JM> 3.  Caveats?  Anything I'm missing that isn't in the docs that could
JM> turn into a BIG gotchya?

The write cache problem I mentioned above - but check whether it was
really Engenio; anyway, there are simple workarounds.

There are some performance issues in corner cases - hopefully you won't
hit one. Use at least S10U3 or Nevada (there are people using Nevada in
production :)).


JM> 4.  Since all data access is via NFS we are concerned that 32 bit
JM> systems (Mainly Linux and Windows via Samba) will not be able to
JM> access all the data areas of a 2TB+ zpool even if the zfs quota on
JM> a particular share is less then that.  Can anyone comment?

If there's a quota on a file system then the NFS client will see that
quota as the file system size, IIRC, so it shouldn't be a problem. But
that does mean a file system per user.
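
A quick way to check, with made-up names:

  server# zfs set quota=500g tank/export/proj
  client$ df -h /mnt/proj     # should report a ~500G file system, not the full multi-TB pool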


JM> The bottom line is that with anything new there is cause for
JM> concern.  Especially if it hasn't been tested within our
JM> organization.  But the convenience/functionality factors are way too
JM> hard to ignore.


ZFS is new, that's right. There are some problems, mostly related to
performance and hot spare support (when doing RAID in ZFS). Other than
that you should be OK. Quite a lot of people are using ZFS in production.
I've had ZFS in production myself for years, right now with well over
100TB of data on it across different storage arrays, and I'm still
migrating more and more data. I've never lost any data on ZFS - at least
not that I know of :)))))



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
