On Wed, Jul 21, 2010 at 2:09 AM, Joshua Boyd <boy...@jbip.net> wrote:
> On Wed, Jul 21, 2010 at 1:57 AM, alan bryan <alanbryan1...@yahoo.com> wrote:
>>
>> --- On Mon, 7/19/10, Dan Langille <d...@langille.org> wrote:
>>
>> > From: Dan Langille <d...@langille.org>
>> > Subject: Re: Problems replacing failing drive in ZFS pool
>> > To: "Freddie Cash" <fjwc...@gmail.com>
>> > Cc: "freebsd-stable" <freebsd-stable@freebsd.org>
>> > Date: Monday, July 19, 2010, 7:07 PM
>> >
>> > On 7/19/2010 12:15 PM, Freddie Cash wrote:
>> > > On Mon, Jul 19, 2010 at 8:56 AM, Garrett Moore <garrettmo...@gmail.com> wrote:
>> > >> So you think it's because when I switch from the old disk to the
>> > >> new disk, ZFS doesn't realize the disk has changed, and thinks the
>> > >> data is just corrupt now? Even if that happens, shouldn't the pool
>> > >> still be available, since it's RAIDZ1 and only one disk has gone
>> > >> away?
>> > >
>> > > I think it's because you pull the old drive, boot with the new drive,
>> > > the controller re-numbers all the devices (ie da3 is now da2, da2 is
>> > > now da1, da1 is now da0, da0 is now da6, etc), and ZFS thinks that
>> > > all the drives have changed, thus corrupting the pool.  I've had this
>> > > happen on our storage servers a couple of times before I started
>> > > using glabel(8) on all our drives (dead drive on RAID controller,
>> > > remove drive, reboot for whatever reason, all device nodes are
>> > > renumbered, everything goes kablooey).
>> >
>> > Can you explain a bit about how you use glabel(8) in conjunction with
>> > ZFS?  If I can retrofit this into an existing ZFS array to make things
>> > easier in the future...
>> >
>> > 8.0-STABLE #0: Fri Mar 5 00:46:11 EST 2010
>> >
>> > ]# zpool status
>> >   pool: storage
>> >  state: ONLINE
>> >  scrub: none requested
>> > config:
>> >
>> >         NAME        STATE     READ WRITE CKSUM
>> >         storage     ONLINE       0     0     0
>> >           raidz1    ONLINE       0     0     0
>> >             ad8     ONLINE       0     0     0
>> >             ad10    ONLINE       0     0     0
>> >             ad12    ONLINE       0     0     0
>> >             ad14    ONLINE       0     0     0
>> >             ad16    ONLINE       0     0     0
>> >
>> > > Of course, always have good backups. ;)
>> >
>> > In my case, this ZFS array is the backup. ;)
>> >
>> > But I'm setting up a tape library, real soon now....
>> >
>> > --
>> > Dan Langille - http://langille.org/
>>
>> Dan,
>>
>> Here's how to do it after the fact:
>>
>> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2009-07/msg00623.html
>>
>> --Alan Bryan
>
> [r...@foghornleghorn ~]# glabel label disk01 /dev/da0
> glabel: Can't store metadata on /dev/da0: Operation not permitted.
>
> Hrmph.
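The after-the-fact conversion I'm attempting is, as far as I understand it,
done one disk at a time, roughly like this (the pool/device/label names are
just the ones from this thread, not the exact commands from the linked post,
and each resilver should finish before the next disk is touched):

# zpool offline tank da0
# glabel label disk01 /dev/da0
# zpool replace tank da0 label/disk01
# zpool status tank    (wait for the resilver to complete, then repeat)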
Nevermind, sysctl kern.geom.debugflags=16 solves that problem, but then you
get this:

[r...@foghornleghorn ~]# zpool replace tank da0 label/disk01
cannot open 'label/disk01': no such GEOM provider
must be a full path or shorthand device name

--
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
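One guess (not verified on this box) about the "no such GEOM provider"
error: glabel only creates /dev/label/disk01 once it re-tastes the provider,
and it won't do that while something still holds da0 open, so the node may
simply not exist yet. Worth checking before the replace:

# glabel status
# ls /dev/label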