Re: [zfs-discuss] ZFS Device fail timeout?

2008-04-01 Thread Luke Scharf
Richard Elling wrote: > In general, ZFS doesn't manage device timeouts. The lower layer drivers do. The timeout management depends on which OS, OS version, and HBA you use. A fairly extreme example may be Solaris using parallel SCSI and the sd driver, which uses a default timeout of 60 s
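
For illustration, a minimal sketch of how that sd timeout is commonly adjusted on Solaris via /etc/system (the variable and value below are the usual ones for the parallel-SCSI sd driver; treat them as an assumption and verify against your release before changing anything):

    * /etc/system -- per-command timeout for the sd driver, in seconds
    set sd:sd_io_time=0x3c      * 0x3c = 60 s (the default); lower it to give up on hung I/O sooner
                                * (ssd:ssd_io_time is the analogous tunable for the ssd driver)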

Re: [zfs-discuss] What to do about retryable write errors?

2008-04-01 Thread Richard Elling
Martin Englund wrote: > I've got a newly created zpool where I know (from the previous UFS) that one of the disks has retryable write errors. > What should I do about it now? Just leave zfs to deal with it? Repair it? Retryable write errors are not fatal, they are retried. What do you th
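
A minimal sketch of how one might keep an eye on a disk like that (the device name is the one from the question and is only illustrative):

    iostat -En c5t4d0      # per-device soft/hard/transport error counters kept by the driver
    zpool status -x        # lists only pools that ZFS itself considers unhealthy
    fmdump -e              # recent FMA error telemetry, if the retries have escalated into reports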

Re: [zfs-discuss] ZFS Device fail timeout?

2008-04-01 Thread Richard Elling
Luke Scharf wrote: > I'm running ZFS in a test-server against a bunch of drives in an Apple XRaid (configured in the JBOD mode). It works pretty well, except that when I yank one of the drives, ZFS hangs -- presumably, it's waiting for a response from the XRAID. > Is there any way to

[zfs-discuss] What to do about retryable write errors?

2008-04-01 Thread Martin Englund
I've got a newly created zpool where I know (from the previous UFS) that one of the disks has retryable write errors. What should I do about it now? Just leave zfs to deal with it? Repair it? If I should repair, is this procedure ok?
    zpool offline z2 c5t4d0
    format -d c5t4d0
    repair ...
    zpool onl
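
For reference, a sketch of that sequence with a scrub added at the end so ZFS re-verifies every block against its checksum (pool and device names are the ones from the question; the repair step is format(1M)'s own repair subcommand):

    zpool offline z2 c5t4d0     # take the suspect disk out of service
    format -d c5t4d0            # select the disk, then run its repair subcommand on the bad blocks
    zpool online z2 c5t4d0      # bring it back; ZFS resilvers whatever it missed while offline
    zpool scrub z2              # verify the whole pool afterwards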

[zfs-discuss] ZFS Device fail timeout?

2008-04-01 Thread Luke Scharf
I'm running ZFS in a test-server against a bunch of drives in an Apple XRaid (configured in the JBOD mode). It works pretty well, except that when I yank one of the drives, ZFS hangs -- presumably, it's waiting for a response from the XRAID. Is there any way to set the device-failure timeout

[zfs-discuss] How to unmount when devices write-disabled?

2008-04-01 Thread Brian Kolaci
In a recovery situation where the primary node crashed, the disks get write-disabled while the failover node takes control. How can you unmount the zpool? It panics the system and actually gets into a panic loop when it tries to mount it again on next boot. Thanks, Brian
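
A sketch of two things sometimes suggested for this situation, assuming a build recent enough to have the pool-level failmode property (the pool name is hypothetical, and whether this avoids the panic depends on the release):

    zpool set failmode=continue pool1   # return EIO on write failures rather than waiting or panicking
    zpool export -f pool1               # forcibly unmount and export before the peer node takes the disks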

[zfs-discuss] Unable to run scrub on degraded zpool

2008-04-01 Thread Robin Bowes
Hi, I've got a 10-disk raidz2 zpool with a dead drive (it's actually been physically removed from the server pending replacement). This is how it looks:
    # zpool status space
      pool: space
     state: DEGRADED
    status: One or more devices could not be used because the label is missing or
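
Once the replacement drive is physically installed, one commonly suggested path looks roughly like this (the device name is hypothetical; if the new disk shows up under a different name, both the old and new names go on the replace line):

    zpool replace space c2t3d0   # swap the new disk in for the missing one and start the resilver
    zpool status space           # watch resilver progress
    zpool scrub space            # the scrub should be accepted again once no device label is missing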

Re: [zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta

2008-04-01 Thread Michael Armbrust
On Mon, Mar 31, 2008 at 8:35 AM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote: > On Mon, 31 Mar 2008, Tim wrote: > > Perhaps someone else can correct me if I'm wrong, but if you're using the whole disk, ZFS shouldn't be displaying a slice when listing your disks, should it? I've *NEVER*
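
For what it's worth, a sketch of how to check what is actually on the disk (the device name is hypothetical; a whole-disk pool on Solaris normally shows up in zpool status without a slice suffix, even though the data itself lives in slice 0 of an EFI label):

    prtvtoc /dev/rdsk/c1t0d0s0   # an EFI-labeled whole-disk pool typically shows one large data slice plus the small reserved slice 8
    zdb -l /dev/dsk/c1t0d0s0     # dumps any ZFS vdev labels found on that slice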

Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Webmail
>> We have recently discovered the same issue on one of our internal build machines. We have a daily bringover of the Teamware onnv-gate that is snapshotted when it completes, and as such we can never run a full scrub. Given some of our storage is reaching (or past) EOSL I really want to

Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 04/01/2008 04:25:39 AM: > kristof wrote: > > I would be very happy having a filesystem-based zfs scrub. We have an 18TB zpool; it takes more than 2 days to do the scrub. Since we cannot take snapshots during the scrub, this is unacceptable. > We have

Re: [zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Ed Saipetch
Wiwat, You should make sure that you have read the Best Practices Guide and the Evil Tuning Guide for helpful information on optimizing ZFS for Oracle. There are some things you can do to tweak ZFS to get better performance, like using a separate filesystem for logs and separating the ZFS inte
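
A minimal sketch of the kind of layout those guides suggest (pool and dataset names are hypothetical, and 8k assumes the common Oracle db_block_size):

    zfs create tank/oradata
    zfs set recordsize=8k tank/oradata   # match the datafile recordsize to the database block size
    zfs create tank/oralog               # redo logs on their own filesystem, left at the default 128k recordsize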

Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Adam Leventhal
On Mar 31, 2008, at 10:41 AM, kristof wrote: > I would be very happy having a filesystem-based zfs scrub. We have an 18TB zpool; it takes more than 2 days to do the scrub. Since we cannot take snapshots during the scrub, this is unacceptable. While per-dataset scrubbing would certainly be
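
For what it's worth, a sketch of how to watch a long scrub's progress so that snapshots can at least be scheduled around the scrub window (the pool name is hypothetical):

    zpool scrub tank
    zpool status tank    # the "scrub:" line reports percent complete and an estimated time remaining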

[zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Wiwat Kiatdechawit
I implemented ZFS with Oracle, but it is much slower than UFS. Do you have any solution? Can I fix this problem with ZFS direct I/O? If so, how do I set it? Wiwat

Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Darren J Moffat
kristof wrote: > I would be very happy having a filesystem-based zfs scrub. We have an 18TB zpool; it takes more than 2 days to do the scrub. Since we cannot take snapshots during the scrub, this is unacceptable. We have recently discovered the same issue on one of our internal build m

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-01 Thread Simon Breden
If it's of interest, I've written up some articles on my experience of building a ZFS NAS box, which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set
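
The truncated command is presumably the sharenfs property; a minimal sketch (the dataset name is hypothetical):

    zfs set sharenfs=on tank/media   # share this filesystem (and, by inheritance, its children) over NFS
    zfs get sharenfs tank/media      # confirm the property took effect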