On 29 October 2008, Michael Stalnaker sent me these 32K bytes:

> All;
> 
> I have a large zfs tank with four raidz2 groups in it. Each of these groups
> is 11 disks, and I have four hot spare disks in the system. The system is
> running OpenSolaris build snv_90. One of these groups has had a disk
> failure, which the OS correctly detected; it replaced the failed disk with
> one of the hot spares and began resilvering.
> 
> Now it gets interesting. The resilver runs for about 1 hour, then stops. If
> I put zpool status -v in a while loop with a 10 minute sleep, I see the
> repair proceed, then with no messages of ANY kind, it'll silently quit and
> start over. I'm attaching the output of zpool status -v from an hour ago and
> then from just now below. Has anyone seen this, or have any ideas as to the
> cause? Is there a timeout or priority I need to change in a tunable or
> something?

Snapshots every hour? That will currently restart the resilver. I think
there has been a recent fix for that, or it's coming soon.
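
If you're not sure whether something is creating hourly snapshots, a quick
check would be something like this (the SMF service name assumes you're
using the zfs-auto-snapshot package; adjust to your setup):

  # most recent snapshots with their creation times
  zfs list -t snapshot -o name,creation | tail -20

  # is an auto-snapshot SMF service enabled?
  svcs -a | grep auto-snapshot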

There has also been a bug where running 'zpool status' as root restarts the
resilver; running it as non-root doesn't.
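
So if you want to keep watching progress, run your loop as an unprivileged
user, something like this (pool name 'tank' is just a guess from your
description):

  # check resilver progress every 10 minutes, as non-root
  while true; do
      zpool status -v tank
      sleep 600
  done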

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
