Hi Jeff,

We're running a FLX210, which I believe is an Engenio 2884. In our case
it is also attached to a T2000. ZFS has run VERY stably for us, with no
data integrity issues at all.

We did have a significant latency problem caused by ZFS flushing the
write cache on the array after every write, but that can be fixed by
configuring your array to ignore cache flushes. The instructions for
Engenio products are here: http://blogs.digitar.com/jjww/?itemid=44
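(If changing the array config isn't an option, some Solaris builds also expose a host-side tunable that stops ZFS from issuing cache flushes at all. Treat the tunable name below as an assumption about your build, and note it's only safe when the write cache is battery-backed, as it is on these arrays:)

```shell
# /etc/system fragment -- tells ZFS not to send SYNCHRONIZE CACHE
# commands at all. Only safe with a battery-backed write cache;
# requires a reboot to take effect.
set zfs:zfs_nocacheflush = 1
```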

We use the config for a production database, so I can't speak to the
NFS issues. All I would mention is to watch the RAM consumption by
ZFS.

Does anyone on the list have a recommendation for ARC sizing with NFS?
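For anyone following along: the ARC can be capped with the zfs_arc_max tunable in /etc/system. A minimal sketch of working out the value; the 4 GiB cap is an illustrative guess for a 16 GB box, not a tested recommendation:

```shell
# Sketch: compute a 4 GiB ARC cap in bytes and print the /etc/system
# line to append. The figure is illustrative, not a recommendation.
arc_max=$((4 * 1024 * 1024 * 1024))    # 4 GiB in bytes
echo "set zfs:zfs_arc_max = $arc_max"  # append this line to /etc/system
```

The setting takes effect on the next reboot.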

Best Regards,
Jason


On 1/26/07, Jeffery Malloch <[EMAIL PROTECTED]> wrote:
Hi Folks,

I am currently in the midst of setting up a completely new file server using a pretty 
well loaded Sun T2000 (8x1GHz, 16GB RAM) connected to an Engenio 6994 product (I work for 
LSI Logic so Engenio is a no brainer).  I have configured a couple of zpools from Volume 
groups on the Engenio box - one 2.5TB and one 3.75TB. I then created child ZFS file 
systems below them, set quotas, and shared them via sharenfs, so these "file 
systems" appear dynamically shrinkable and growable. It looks very good...  I can see 
the correct file system sizes on all types of machines (Linux 32/64bit and of course 
Solaris boxes) and if I resize the quota it's picked up in NFS right away.  But I would 
be the first in our organization to use this in an enterprise system so I definitely have 
some concerns that I'm hoping someone here can address.
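For concreteness, a sketch of that kind of setup with hypothetical pool, file system, and device names (the real pools here were carved from Engenio volume groups; the names and sizes below are placeholders, not from the post):

```shell
# Hypothetical names/devices -- substitute your array LUNs.
zpool create tank c2t0d0            # pool on top of an array LUN
zfs create tank/projects            # child file system under the pool
zfs set quota=500G tank/projects    # cap its apparent size
zfs set sharenfs=on tank/projects   # export it over NFS
zfs set quota=750G tank/projects    # grow it later; NFS clients see
                                    # the new size right away
```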

1.  How stable is ZFS?  The Engenio box is completely configured for RAID5 with 
hot spares and write cache (8GB) has battery backup so I'm not too concerned 
from a hardware side.  I'm looking for an idea of how stable ZFS itself is in 
terms of corruptibility, uptime, and OS stability.

2.  Recommended config.  Above, I have a fairly simple setup.  In many of the 
examples the granularity is home directory level and when you have many many 
users that could get to be a bit of a nightmare administratively.  I am really 
only looking for high level dynamic size adjustability and am not interested in 
its built in RAID features.  But given that, any real world recommendations?

3.  Caveats?  Anything I'm missing that isn't in the docs that could turn into 
a BIG gotcha?

4.  Since all data access is via NFS we are concerned that 32 bit systems 
(Mainly Linux and Windows via Samba) will not be able to access all the data 
areas of a 2TB+ zpool even if the zfs quota on a particular share is less than 
that.  Can anyone comment?

The bottom line is that with anything new there is cause for concern.  
Especially if it hasn't been tested within our organization.  But the 
convenience/functionality factors are way too hard to ignore.

Thanks,

Jeff


This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
