Hours... :-(

Should have used both devices as slog, but...

Thinking... maybe I could attach the cache device as a mirror of the failing
slog, and then detach the failing disk?

I will give it a try.
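
If the box comes back responsive, that route might look something like this.
This is only a sketch: the device names c8t4d0 (cache) and c8t5d0 (faulted
slog) are taken from the zpool status output quoted below, and it is not
certain the kernel will accept an attach to a log device that is already
FAULTED — it may hang the same way the remove did.

```shell
# Sketch only: reuse the offlined cache device to mirror the failing slog,
# then drop the bad disk once the mirror is healthy.
# Device names come from the 'zpool status' output below.

# Free the cache device so it can be reused as a log mirror
pfexec zpool remove master c8t4d0

# Attach the former cache device as a mirror of the faulted slog
pfexec zpool attach master c8t5d0 c8t4d0

# After the mirror resilvers, detach the failing disk
pfexec zpool detach master c8t5d0
```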

On Mar 16, 2012, at 9:08 PM, Matt Breitbach wrote:

> How long have you let the box sit?  I had to offline the slog device, and it
> took quite a while for it to come back to life after removing the device
> (4-5 minutes).  It's a painful process, which is why I've used mirrored
> slog devices ever since.
> 
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jan Hellevik
> Sent: Friday, March 16, 2012 2:20 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Cannot remove slog device
> 
> I have a problem with my box. The slog started showing errors, so I decided
> to remove it. I have tried to offline it with the same result. Any ideas?
> 
> I have offlined the cache device, which happened immediately, but both
> offline/remove of the slog hangs and makes the box unusable. 
> 
> If I have a ssh connection open, it will allow me to run commands like top
> and dmesg, but if I try to open a new connection, it hangs after displaying
> 'Last login: .....'
> 
> I have mounted shares from the server, and I can access files (read, but not
> write) on them without any problems.
> 
> The only thing that seems to work is powercycling the machine.
> 
> Any ideas out there?
> 
> OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a    September 2011
> hellevik@xeon:~$ zpool status
>  pool: master
> state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>        Sufficient replicas exist for the pool to continue functioning in a
>        degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
>        repaired.
>  scan: scrub repaired 0 in 19h9m with 0 errors on Mon Jan 30 05:57:51 2012
> config:
> 
>        NAME        STATE     READ WRITE CKSUM
>        master      DEGRADED     0     0     0
>          mirror-0  ONLINE       0     0     0
>            c9t0d0  ONLINE       0     0     0
>            c9t5d0  ONLINE       0     0     0
>          mirror-1  ONLINE       0     0     0
>            c9t1d0  ONLINE       0     0     0
>            c9t6d0  ONLINE       0     0     0
>          mirror-2  ONLINE       0     0     0
>            c9t2d0  ONLINE       0     0     0
>            c9t7d0  ONLINE       0     0     0
>          mirror-3  ONLINE       0     0     0
>            c9t3d0  ONLINE       0     0     0
>            c9t4d0  ONLINE       0     0     0
>        logs
>          c8t5d0    FAULTED      0     0     0  too many errors
>        cache
>          c8t4d0    OFFLINE      0     0     0
> 
> errors: No known data errors
> 
>  pool: rpool
> state: ONLINE
>  scan: scrub repaired 0 in 1h33m with 0 errors on Sun Jan 29 16:37:20 2012
> config:
> 
>        NAME        STATE     READ WRITE CKSUM
>        rpool       ONLINE       0     0     0
>          mirror-0  ONLINE       0     0     0
>            c5d0s0  ONLINE       0     0     0
>            c5d1s0  ONLINE       0     0     0
> 
> errors: No known data errors
> hellevik@xeon:~$ pfexec zpool remove master c8t5d0
> <hangs>
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 
