FWIW, I strongly expect live ripping of a SATA device not to panic the disk
layer. It definitely shouldn't panic the ZFS layer, since ZFS is supposed to
be "fault-tolerant" and "a drive dropping away at any time" is a rather
expected scenario.
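
If a disk does drop out from under a pool, the worst I'd expect to see -
and this is just a sketch, with "tank" and c1t3d0 as purely illustrative
pool/device names - is the pool going DEGRADED and the device showing as
REMOVED or FAULTED, not a panic. Something along the lines of:

# zpool status tank          (pool DEGRADED, device REMOVED/FAULTED)
# zpool online tank c1t3d0   (once the device is reachable again)
# zpool clear tank           (clear the accumulated error counters)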

[I've popped disks out live plenty of times, both when I was experimenting
with ZFS+RAID-Z on various systems and, occasionally, when I've had to
replace a disk live. In the latter case, I've used cfgadm about half the
time - the rest of the time I've just ripped the disk out live and brought
it back up afterwards (roughly the sequence sketched below), and it's Just
Worked.]
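
Concretely, what the "bring it back up" half has looked like for me is
roughly this - again with tank / c1t3d0 / sata1/3 as stand-in names, so
adjust for your own layout:

# cfgadm -c configure sata1/3   (only if the slot ended up unconfigured)
# zpool online tank c1t3d0
# zpool clear tank
# zpool status tank             (confirm any resilver finished cleanly)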

- Rich

On Thu, Apr 9, 2009 at 3:21 PM, Grant Lowe <gl...@sbcglobal.net> wrote:

>
> Hi Remco.
>
> Yes, I realize that was asking for trouble.  It wasn't supposed to be a
> test of yanking a LUN.  We needed a LUN for a VxVM/VxFS system, and that
> LUN was available.  I was just surprised at the panic, since the system
> was quiesced at the time.  But a time is coming when we will be doing
> this.  Thanks for the feedback.  I appreciate it.
>
>
>
>
> ----- Original Message ----
> From: Remco Lengers <re...@lengers.com>
> To: Grant Lowe <gl...@sbcglobal.net>
> Cc: zfs-discuss@opensolaris.org
> Sent: Thursday, April 9, 2009 5:31:42 AM
> Subject: Re: [zfs-discuss] ZFS Panic
>
> Grant,
>
> Didn't see a response so I'll give it a go.
>
> Ripping a disk away and silently inserting a new one is asking for trouble,
> imho. I am not sure what you were trying to accomplish, but generally
> replacing a drive/LUN would entail commands like:
>
> # zpool offline tank c1t3d0
> # cfgadm | grep c1t3d0
> sata1/3::dsk/c1t3d0            disk         connected    configured   ok
> # cfgadm -c unconfigure sata1/3
> Unconfigure the device at: /devices/p...@0,0/pci1022,7...@2/pci11ab,1...@1:3
> This operation will suspend activity on the SATA device
> Continue (yes/no)? yes
> # cfgadm | grep sata1/3
> sata1/3                        disk         connected    unconfigured ok
> <Replace the physical disk c1t3d0>
> # cfgadm -c configure sata1/3
>
> Taken from this page:
>
> http://docs.sun.com/app/docs/doc/819-5461/gbbzy?a=view
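>
> After cfgadm brings the replacement disk in, the pool still has to be told
> to use it. A minimal sketch of that last step, assuming the pool is called
> tank and the new disk shows up under the same c1t3d0 name:
>
> # zpool replace tank c1t3d0
> # zpool status tank
>
> zpool replace with a single device argument swaps the disk in place and
> starts a resilver; zpool status lets you watch it complete.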
>
> ..Remco
>
> Grant Lowe wrote:
> > Hi All,
> >
> > Don't know if this is worth reporting, as it's human error.  Anyway, I
> > had a panic on my ZFS box.  Here's the error:
> >
> > marksburg /usr2/glowe> grep panic /var/log/syslog
> > Apr  8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 580
> > Apr  8 07:15:10 marksburg savecore: [ID 570001 auth.error] reboot after panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 580
> > marksburg /usr2/glowe>
> >
> > What we did to cause this: we pulled a LUN out from under ZFS and
> > replaced it with a new LUN.  We then tried to shut down the box, but it
> > wouldn't go down.  We had to send a break to the box and reboot.  This
> > is an Oracle sandbox, so we're not really concerned.  Ideas?
> >
>



-- 

BOFH excuse #439: Hot Java has gone cold
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
