What's the output of 'zfs upgrade' and 'zpool upgrade'? (I'm just
curious - I had a similar situation that seems to have been resolved
now that I've gone to Solaris 10u6 / OpenSolaris 2008.11.)
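
For reference, both commands are read-only when run with no arguments
and just report the on-disk versions; the output looks roughly like
this (the wording and version numbers below are only illustrative, not
taken from your box):

% zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.

% zfs upgrade
This system is currently running ZFS filesystem version 3.

All filesystems are formatted with the current version.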



On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller <mil...@eecis.udel.edu> wrote:
> Bug ID is 6793967.
>
> This problem just happened again.
> % zpool status pool1
>  pool: pool1
>  state: DEGRADED
>  scrub: resilver completed after 0h48m with 0 errors on Mon Jan  5 12:30:52 2009
> config:
>
>        NAME           STATE     READ WRITE CKSUM
>        pool1          DEGRADED     0     0     0
>          raidz2       DEGRADED     0     0     0
>            c4t8d0s0   ONLINE       0     0     0
>            c4t9d0s0   ONLINE       0     0     0
>            c4t10d0s0  ONLINE       0     0     0
>            c4t11d0s0  ONLINE       0     0     0
>            c4t12d0s0  REMOVED      0     0     0
>            c4t13d0s0  ONLINE       0     0     0
>
> errors: No known data errors
>
> % zpool status -x
> all pools are healthy
> %
> # zpool online pool1 c4t12d0s0
> % zpool status -x
>  pool: pool1
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>        continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress for 0h0m, 0.12% done, 2h38m to go
> config:
>
>        NAME           STATE     READ WRITE CKSUM
>        pool1          ONLINE       0     0     0
>          raidz2       ONLINE       0     0     0
>            c4t8d0s0   ONLINE       0     0     0
>            c4t9d0s0   ONLINE       0     0     0
>            c4t10d0s0  ONLINE       0     0     0
>            c4t11d0s0  ONLINE       0     0     0
>            c4t12d0s0  ONLINE       0     0     0
>            c4t13d0s0  ONLINE       0     0     0
>
> errors: No known data errors
> %
>
> Ben
>
>> I just put in a (low priority) bug report on this.
>>
>> Ben
>>
>> > This post from close to a year ago never received a response.  We
>> > just had this same thing happen to another server that is running
>> > Solaris 10 U6.  One of the disks was marked as removed and the pool
>> > degraded, but 'zpool status -x' says all pools are healthy.  After
>> > doing a 'zpool online' on the disk it resilvered fine.  Any ideas
>> > why 'zpool status -x' reports all healthy while 'zpool status'
>> > shows a pool in degraded mode?
>> >
>> > thanks,
>> > Ben
>> >
>> > > We run a cron job that does a 'zpool status -x' to check for any
>> > > degraded pools.  We just happened to find a pool degraded this
>> > > morning by running 'zpool status' by hand and were surprised that
>> > > it was degraded as we didn't get a notice from the cron job.
>> > >
>> > > # uname -srvp
>> > > SunOS 5.11 snv_78 i386
>> > >
>> > > # zpool status -x
>> > > all pools are healthy
>> > >
>> > > # zpool status pool1
>> > >   pool: pool1
>> > >  state: DEGRADED
>> > >  scrub: none requested
>> > > config:
>> > >
>> > >         NAME         STATE     READ WRITE CKSUM
>> > >         pool1        DEGRADED     0     0     0
>> > >           raidz1     DEGRADED     0     0     0
>> > >             c1t8d0   REMOVED      0     0     0
>> > >             c1t9d0   ONLINE       0     0     0
>> > >             c1t10d0  ONLINE       0     0     0
>> > >             c1t11d0  ONLINE       0     0     0
>> > >
>> > > errors: No known data errors
>> > >
>> > > I'm now going to look into why the disk is listed as removed.
>> > >
>> > > Does this look like a bug with 'zpool status -x'?
>> > >
>> > > Ben
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
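
Regarding the cron check quoted above that relies on 'zpool status -x':
until 6793967 is fixed you may want to have the job grep the full
'zpool status' output as well. Something along these lines - an
untested sketch, with the script path and mail recipient as
placeholders for whatever your site uses:

#!/bin/sh
# Don't trust 'zpool status -x' alone; also look for any non-ONLINE
# vdev states in the full 'zpool status' output.
STATUS=`/usr/sbin/zpool status 2>&1`
if echo "$STATUS" | egrep 'DEGRADED|FAULTED|OFFLINE|REMOVED|UNAVAIL' >/dev/null
then
        # mail the full status output to whoever should get the alert
        echo "$STATUS" | mailx -s "zpool problem on `hostname`" root
fi

and a crontab entry to run it every 15 minutes, e.g.:

0,15,30,45 * * * * /usr/local/bin/zpool_check.sh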
