Richard Elling wrote:
> In general, ZFS doesn't manage device timeouts. The lower
> layer drivers do. The timeout management depends on which OS,
> OS version, and HBA you use. A fairly extreme example may be
> Solaris using parallel SCSI and the sd driver, which uses a default
> timeout of 60 s
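For what it's worth, if it is the Solaris sd driver in the path, that 60 second
default can be lowered through the sd_io_time tunable in /etc/system; a minimal
sketch, with an illustrative value only (a reboot is needed for it to take effect):

set sd:sd_io_time = 30

Whether a shorter timeout is safe depends on the devices and HBA behind it, so
treat this as a sketch rather than a recommendation.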
Martin Englund wrote:
> I've got a newly created zpool where I know (from the previous UFS) that one
> of the disks has retryable write errors.
>
> What should I do about it now? Just leave zfs to deal with it? Repair it?
>
Retryable write errors are not fatal; they are retried.
What do you th
Luke Scharf wrote:
> I'm running ZFS in a test-server against a bunch of drives in an Apple
> XRaid (configured in the JBOD mode). It works pretty well, except that
> when I yank one of the drives, ZFS hangs -- presumably, it's waiting
> for a response from the XRAID.
>
> Is there any way to set the device-failure timeout
I've got a newly created zpool where I know (from the previous UFS) that one of
the disks has retryable write errors.
What should I do about it now? Just leave zfs to deal with it? Repair it?
If I should repair it, is this procedure ok?
zpool offline z2 c5t4d0
format -d c5t4d0
repair ...
zpool online z2 c5t4d0
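For what it's worth, a minimal sketch of how the result could be checked
afterwards, using the pool and device names from the commands above:

# zpool status -x z2
# iostat -En c5t4d0

zpool status should show the disk resilvering once it is back online, and
iostat -En prints the soft/hard/transport error counters for the device, which
shows whether the retryable write errors are still accumulating.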
I'm running ZFS in a test-server against a bunch of drives in an Apple
XRaid (configured in the JBOD mode). It works pretty well, except that
when I yank one of the drives, ZFS hangs -- presumably, it's waiting
for a response from the XRAID.
Is there any way to set the device-failure timeout
In a recovery situation where the primary node crashed, the
disks get write-disabled while the failover node takes control.
How can you unmount the zpool? It panics the system and actually
gets into a panic loop when it tries to mount it again on next boot.
Thanks,
Brian
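If the pool version is new enough to have the failmode property, that is the
knob which chooses between hanging, continuing with EIO, and panicking when the
pool loses its devices; a sketch with a placeholder pool name:

# zpool get failmode mypool
# zpool set failmode=continue mypool

With failmode=continue, new writes fail with EIO instead of panicking the box,
which may at least let the failover complete; the default is wait. Pools
created before the property existed need a zpool upgrade first.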
Hi,
I've got a 10-disk raidz2 zpool with a dead drive (it's actually been
physically removed from the server pending replacement).
This is how it looks:
# zpool status space
pool: space
state: DEGRADED
status: One or more devices could not be used because the label is
        missing or invalid.
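Once the replacement disk is in, the usual path is zpool replace; a sketch
assuming the missing device was c5t4d0 and the new disk comes back at the same
name (the device name is illustrative, not taken from the truncated output above):

# zpool replace space c5t4d0
# zpool status space

zpool replace starts a resilver onto the new disk, status shows its progress,
and with raidz2 the pool stays usable throughout.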
On Mon, Mar 31, 2008 at 8:35 AM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Mon, 31 Mar 2008, Tim wrote:
>
> > Perhaps someone else can correct me if I'm wrong, but if you're using the
> > whole disk, ZFS shouldn't be displaying a slice when listing your disks,
> > should it? I've *NEVER*
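For what it's worth, the difference is easy to see at pool creation time; a
small sketch with made-up pool and device names:

# zpool create tank c5t4d0     (whole disk: ZFS puts an EFI label on it)
# zpool create tank c5t4d0s0   (a single slice)

A pool built from whole disks normally lists plain cXtYdZ names in zpool
status, while an sN suffix means only that slice was handed to ZFS.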
>> We have recently discovered the same issue on one of our internal build
>> machines. We have a daily bringover of the Teamware onnv-gate that is
>> snapshotted when it completes, and as such we can never run a full scrub.
>> Given some of our storage is reaching (or past) EOSL I really want to
[EMAIL PROTECTED] wrote on 04/01/2008 04:25:39 AM:
> kristof wrote:
> > I would be very happy having a filesystem-based zfs scrub.
> >
> > We have an 18TB zpool; it takes more than 2 days to do the scrub.
> >
> > Since we cannot take snapshots during the scrub, this is unacceptable
>
> We have recently discovered the same issue on one of our internal build machines.
Wiwat,
You should make sure that you have read the Best Practices Guide and the
Evil Tuning Guide for helpful information on optimizing ZFS for Oracle.
There are some things you can do to tweak ZFS to get better performance,
like using a separate filesystem for logs and separating out the ZFS intent log.
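A minimal sketch of the sort of thing the guides suggest, with assumed dataset
names, a placeholder device for the log, and the recordsize matched to an 8 KB
db_block_size:

# zfs create -o recordsize=8k tank/oradata
# zfs create tank/oralogs
# zpool add tank log c6t0d0

That is: an 8 KB recordsize on the datafile filesystem to line up with Oracle's
block size, a separate filesystem for the logs so they keep the default 128 KB
recordsize, and, on builds that support separate log devices, a dedicated
device to take the synchronous intent-log writes off the data disks.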
On Mar 31, 2008, at 10:41 AM, kristof wrote:
> I would be very happy having a filesystem-based zfs scrub.
>
> We have an 18TB zpool; it takes more than 2 days to do the scrub.
>
> Since we cannot take snapshots during the scrub, this is unacceptable
While per-dataset scrubbing would certainly be
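Until something like per-dataset scrubbing exists, the only relief I know of is
that a running scrub can be stopped to open a window for snapshots; a sketch
with placeholder pool and dataset names:

# zpool scrub -s bigpool
# zfs snapshot bigpool/build@today
# zpool scrub bigpool

The catch is that a restarted scrub begins again from the start, so on an 18TB
pool this only helps if partial scrubs are acceptable.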
I have implemented ZFS with Oracle, but it is much slower than UFS. Do you have
any solution?
Can I fix this problem with ZFS direct I/O? If so, how do I set it?
Wiwat
kristof wrote:
> I would be very happy having a filesystem-based zfs scrub.
>
> We have an 18TB zpool; it takes more than 2 days to do the scrub.
>
> Since we cannot take snapshots during the scrub, this is unacceptable
We have recently discovered the same issue on one of our internal build
machines. We have a daily bringover of the Teamware onnv-gate that is
snapshotted when it completes, and as such we can never run a full scrub.
If it's of interest, I've written up some articles on my experiences of
building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS
instead: issue the command 'zfs set sharenfs=on' on the filesystem.
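For reference, a sketch of the two sharing properties side by side, with a
placeholder dataset name:

# zfs set sharesmb=on tank/media
# zfs set sharenfs=on tank/media

Both properties are inherited by child filesystems, so setting them once near
the top of the tree is usually enough.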