Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-27 Thread Rocky Shek
Pasi, I have not tried the OpenSolaris FMA yet. But we have developed a tool called DSM that allows users to locate disk drives, identify failed drives, and check FRU part status. http://dataonstorage.com/dataon-products/dsm-30-for-nexentastor.html We also spent time in the past to make sure SES
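
For anyone who does want to try the FMA route Pasi asked about, a minimal sketch using the stock Solaris/illumos fault-management CLI (output varies per system, and fmtopo's path can differ between releases):

    # list current faults; the output includes the affected FRU and its location
    fmadm faulty

    # dump the hardware topology (enclosure/bay for each disk), if fmtopo is present
    /usr/lib/fm/fmd/fmtopo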

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-27 Thread Eff Norwood
They have been incredibly reliable with zero downtime or issues. As a result, we use 2 in every system striped. For one application outside of VDI, we use a pair of them mirrored, but that is very unusual and driven by the customer and not us.
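
For reference, the two layouts described above can be sketched with zpool (the pool and device names below are placeholders):

    # two slog devices listed separately are striped
    zpool add tank log c4t0d0 c4t1d0

    # the same pair added as a mirrored slog
    zpool add tank log mirror c4t0d0 c4t1d0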

Re: [zfs-discuss] Changed ACL behavior in snv_151 ?

2011-01-27 Thread Garrett D'Amore
We are working on a change to illumos (and NexentaStor) to revive acl_mode... lots and lots of people have had very bad experiences as a result of that particular change. - Garrett
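
For context, the behaviour in question maps onto the ZFS aclmode property, whose removal is what this thread is about and which Garrett's change would bring back. On builds that still have the property, a sketch with a placeholder dataset:

    # check whether the property exists on this build
    zfs get aclmode tank/fs

    # preserve existing ACEs across chmod(2) instead of discarding them
    zfs set aclmode=passthrough tank/fs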

Re: [zfs-discuss] n-tiered storage?

2011-01-27 Thread David Magda
On Jan 26, 2011, at 19:48, Roy Sigurd Karlsbakk wrote: > The scenario is thus: We have a 50TB storage unit which was built to be an > archive, but lately, scientists have been using it as a fileserver for > modelling. Practically, this means 50+ processes doing more or less random > i/o t
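
One commonly suggested approximation within ZFS itself, short of real tiering, is to put the hot working set on flash in front of the slow pool; a hedged sketch with placeholder pool and device names:

    # add an SSD as a read cache (L2ARC) for the archive pool
    zpool add archive cache c5t0d0

    # add a mirrored SSD log (slog) to absorb synchronous writes
    zpool add archive log mirror c5t1d0 c5t2d0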

[zfs-discuss] Move zpool to new virtual volume

2011-01-27 Thread Willi Schiegel
Hello all, I want to reorganize the virtual disk / storage pool / volume layout on a StorageTek 6140 with two CSM200 expansion units attached (for example, to stripe LUNs across trays, which is not the case at the moment). On a data server I have a zpool "pool1" over one of the volumes on the Storage
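
In case it helps, one way to migrate a pool onto a newly laid-out volume of at least the same size without downtime, sketched with placeholder device names (c3t1d0 = old volume, c3t2d0 = new volume):

    # mirror the existing volume onto the new one and let it resilver
    zpool attach pool1 c3t1d0 c3t2d0
    zpool status pool1            # wait until resilvering finishes

    # then drop the old volume, leaving pool1 entirely on the new layout
    zpool detach pool1 c3t1d0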

Re: [zfs-discuss] n-tiered storage?

2011-01-27 Thread Roy Sigurd Karlsbakk
> > Hi all > > > > Is there anything usable for zfs/openindiana that allows for > > multi-tiered storage? > > > > The scenario is thus: We have a 50TB storage unit which was built > > to be an archive, but lately, scientists have been using this for a > > fileserver for modelling. Practically, t

Re: [zfs-discuss] Changed ACL behavior in snv_151 ?

2011-01-27 Thread Frank Lahm
2011/1/27 Garrett D'Amore : > We are working on a change to illumos (and NexentaStor) to revive > acl_mode... lots and lots of people have had very bad experiences as a > result of that particular change. We had to put a chmod() wrapper into our app (Netatalk) to work around that. Good to hear you

Re: [zfs-discuss] Changed ACL behavior in snv_151 ?

2011-01-27 Thread Frank Lahm
2011/1/27 Ryan John : >> -Original Message- >> From: Frank Lahm [mailto:frankl...@googlemail.com] >> Sent: 25 January 2011 14:50 >> To: Ryan John >> Cc: zfs-discuss@opensolaris.org >> Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ? > >> John, > >> welcome onboard! > >> 2011/1/

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-27 Thread James
Chris & Eff, Thanks for your expertise on this and other posts. Greatly appreciated. I've just been re-reading some of the great SSD-as-ZIL discussions. Chris, Cost: Our case is a bit non-representative, as we have spare P410/512s that came with ESXi hosts (USB boot), so I've budgeted them at

Re: [zfs-discuss] Best choice - file system for system

2011-01-27 Thread Tristram Scott
I don't disagree that zfs is the better choice, but... > Seriously though. UFS is dead. It has no advantage > over ZFS that I'm aware > of. > When it comes to dumping and restoring filesystems, there is still no official replacement for ufsdump and ufsrestore. The discussion has been had
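
The usual, if unofficial, stand-in is a snapshot plus zfs send/receive; a sketch with placeholder dataset and file names:

    # roughly where ufsdump would sit: serialize a snapshot to a file
    zfs snapshot tank/home@dump1
    zfs send tank/home@dump1 > /backup/tank_home.dump1.zfs

    # roughly where ufsrestore would sit: recreate the filesystem from the stream
    zfs receive tank/restored < /backup/tank_home.dump1.zfs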

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-27 Thread Eff Norwood
We tried all combinations of OCZ SSDs, including their PCI-based SSDs, and they do NOT work as a ZIL. After a very short time performance degrades horribly, and the OCZ drives eventually fail completely. We also tried Intel, which performed a little better and didn't flat out fail over time