Hi, I'm copying the list - assume you meant to send it there.

On Sun 2010-12-19 (15:52), Miles Nordin wrote:
> If 'zpool replace /dev/ad6' will not accept that the disk is a
> replacement, then you can unplug the disk, erase the label in a
> different machine using
> 
> dd if=/dev/zero of=/dev/thedisk bs=512 count=XXX            # front label
> dd if=/dev/zero of=/dev/thedisk bs=512 count=XXX seek=YYY   # back label
> 
> then plug it back into its old spot and issue 'zpool replace /dev/ad6'
> 
> XXX should be about a megabyte's worth of sectors, and YYY should be
> the LBA about a megabyte from the end of the disk.  You can read up or
> experiment to determine the exact values.  You do need to know the
> size of your disk in sectors, though.  There's a copy of the EFI label
> at the end of the disk and another at the beginning, which is why you
> have to zero both spots.

Awesome, that does the trick, thanks. I assumed it was identifying the
disk by serial number or something. I don't need to unplug the disk,
though; it works if I zero it from the same machine.
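
For anyone finding this in the archives, here's roughly what the two dd
invocations work out to. This is a sketch, not gospel: it assumes
512-byte sectors (so 2048 sectors = 1 MiB), and "thedisk" and SECTORS
are placeholders you'd fill in for your own drive:

  # disk size in sectors -- get it from diskinfo(8) on FreeBSD or
  # prtvtoc(1M)/format(1M) on Solaris; this value is just a placeholder
  SECTORS=1953525168

  # wipe ~1 MiB at the front of the disk (primary EFI label)
  dd if=/dev/zero of=/dev/thedisk bs=512 count=2048

  # wipe ~1 MiB at the back of the disk (backup EFI label)
  dd if=/dev/zero of=/dev/thedisk bs=512 count=2048 seek=$((SECTORS - 2048))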

This should probably be implemented as a zpool subcommand, if it hasn't
already been added in later versions.
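
(For the record: newer ZFS does grow a built-in for this. If your
version has the labelclear subcommand, the whole dd dance reduces to
the line below -- same caveat about double-checking you have the right
device before zapping it.)

  zpool labelclear -f /dev/ad6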

> In general, especially when a disk has corrupt data on it rather than
> unreadable sectors, it's best to do the replacement in a way that keeps
> the old and new disks available simultaneously, because ZFS will still
> read from the old disk in the places where it is correct.  If you take
> away the old disk, then it can't be used at all, even where its data
> is good, so if there are a few spots with problems on the other good
> disks in the raidz you will not be able to recover those, while with a
> suspect old disk still attached you could.  OTOH if the old disk has
> unreadable sectors, the controller and ZFS will freeze whenever they
> touch those unreadable sectors, causing the replacement to take
> forever.  This is kind of bullshit and should be solved in software
> IMNSHO, but it's how things are, so if you have a physically failing
> disk I would suggest running the replace/resilver with the physically
> failing disk physically removed (while if the disk has bad data on it
> and is not physically failing, I suggest keeping it connected somehow).
> So...yeah...if there is corrupt data on this disk, you'll have to buy
> another disk to follow the advice in this paragraph.  You can go ahead
> and break the advice, though: just wipe the label and replace.

Noted. Though if "there are a few spots where there are problems with
the other good disks", ZFS should already know about them, right?
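
(For reference, keeping the suspect disk attached during the
replacement just means using the two-device form of replace: put the
new disk on a spare port and run something like the line below, with
hypothetical pool/device names, instead of wiping the label and
reusing the old slot.)

  # ad6 stays attached and readable while ad8 resilvers in
  zpool replace tank ad6 ad8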