Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-11-07 Thread Krzys
I was wondering if this ever made it into ZFS as a fix for bad labels? On Wed, 7 May 2008, Jeff Bonwick wrote: > Yes, I think that would be useful. Something like 'zpool revive' or 'zpool undead'. It would not be completely general-purpose -- in a pool with multiple mirror devices, it could only

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-10-10 Thread MC
I'm wondering if this bug is fixed and, if not, what is the bug number: > If your entire pool consisted of a single mirror of two disks, A and B, and you detached B at some point in the past, you *should* be able to recover the pool as it existed when you detached B. However, I just ri

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-10-09 Thread Ron Halstead
Jeff, sorry this is so late. Thanks for the labelfix binary. I would like to have one compiled for SPARC. I tried compiling your source code, but it threw many errors. I'm not a programmer, and reading the source code means absolutely nothing to me. One error was: cc labelfix.c "labelfix.

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-08 Thread Jesus Cea
Darren J Moffat wrote: | Great tool, any chance we can have it integrated into zpool(1M) so that it can find and "fixup" on import detached vdevs as new pools? | I'd think it would be reasonable to extend the meaning of 'zpool import -D' to list

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-08 Thread Robert Milkowski
Hello Darren, on Tuesday, May 6, 2008, 11:16:25 AM, you wrote: DJM> Great tool, any chance we can have it integrated into zpool(1M) so that it can find and "fixup" on import detached vdevs as new pools? I remember some posts from a long time ago about 'zpool split', so one could split a pool in two (

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-07 Thread Darren J Moffat
Jeff Bonwick wrote: > Yes, I think that would be useful. Something like 'zpool revive' or 'zpool undead'. Why a new subcommand when 'zpool import' already has '-D' to revive destroyed pools? > It would not be completely general-purpose -- in a pool with multiple mirror devices, it could only wo

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-07 Thread Jeff Bonwick
Yes, I think that would be useful. Something like 'zpool revive' or 'zpool undead'. It would not be completely general-purpose -- in a pool with multiple mirror devices, it could only work if all replicas were detached in the same txg -- but for the simple case of a single top-level mirror vdev,

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-06 Thread Darren J Moffat
Great tool, any chance we can have it integrated into zpool(1M) so that it can find and "fixup" on import detached vdevs as new pools? I'd think it would be reasonable to extend the meaning of 'zpool import -D' to list detached vdevs as well as destroyed pools. -- Darren J Moffat

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-06 Thread Robert Milkowski
Hello Cyril, on Sunday, May 4, 2008, 11:34:28 AM, you wrote: CP> On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote: >> Oh, and here's the source code, for the curious: CP> [snipped] >> label_write(fd, offsetof(vdev_label_t, vl_uberblock), >> 1ULL <<

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Benjamin Brumaire
Well, thanks to your program, I could recover the data on the detached disk. Now I'm copying the data to other disks and will resilver it inside the pool. Warm words aren't enough to express how I feel. This community is great. Thank you very much. bbr

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Mario Goebbels
> Oh, and here's the source code, for the curious: The forensics project will be all over this, I hope, and wrap it up in a nice command line tool. -mg

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Cyril Plisko
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote: > Oh, and here's the source code, for the curious: [snipped] > label_write(fd, offsetof(vdev_label_t, vl_uberblock), > 1ULL << UBERBLOCK_SHIFT, ub); > label_write(fd, offsetof(vdev_label_t,

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
Oh, and here's the source code, for the curious: [#include lines whose header names were stripped by the archive] /* * Write a label block with a ZBT checksum. */ static void label_write(int fd, uint64_t offset, uint64_t size, void *buf) { z
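
The archive has eaten the header names from the #include list and cut the program off a few tokens into label_write(). Pieced together from what does survive in this thread (the label_write() signature, the "ZBT checksum" comment, the two label_write() calls quoted by Cyril, and the one-argument usage shown in the next entry), a tool of this shape would look roughly like the sketch below. The header paths, the libnvpair/libzpool calls, and the choice of the root block pointer's birth txg as the replacement ub_txg are reconstructions from the 2008-era OpenSolaris sources, not Jeff's confirmed code:

#include <assert.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

#include <libnvpair.h>
#include <sys/fs/zfs.h>           /* ZPOOL_CONFIG_POOL_TXG */
#include <sys/vdev_impl.h>        /* vdev_label_t, VDEV_PHYS_SIZE */
#include <sys/uberblock_impl.h>   /* uberblock_t, UBERBLOCK_SHIFT */
#include <sys/zio.h>              /* zio_block_tail_t, ZIO_SET_CHECKSUM */
#include <sys/zio_checksum.h>     /* zio_checksum() (libzpool) */

/*
 * Write a label block with a ZBT checksum.  Label blocks carry an
 * "embedded" checksum: the zio_block_tail_t at the end of the block is
 * seeded with the block's byte offset, SHA-256 is computed over the whole
 * block, and the result is stored back into the tail before writing.
 */
static void
label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
        zio_block_tail_t *zbt = (zio_block_tail_t *)((char *)buf + size) - 1;
        zio_cksum_t zc;

        ZIO_SET_CHECKSUM(&zbt->zbt_cksum, offset, 0, 0, 0);
        zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size);
        zbt->zbt_cksum = zc;

        assert(pwrite64(fd, buf, size, offset) == (ssize_t)size);
}

int
main(int argc, char **argv)
{
        vdev_label_t vl;
        nvlist_t *config;
        uberblock_t *ub = (uberblock_t *)vl.vl_uberblock;
        char *nvbuf = vl.vl_vdev_phys.vp_nvlist;
        size_t nvlen = sizeof (vl.vl_vdev_phys.vp_nvlist);
        uint64_t txg;
        int fd;

        assert(argc == 2);
        fd = open(argv[1], O_RDWR);
        assert(fd != -1);

        /* Label 0 sits at the very start of the device or slice. */
        assert(pread64(fd, &vl, sizeof (vl), 0) == sizeof (vl));

        /* A detached vdev is marked with pool_txg == 0 and ub_txg == 0 ... */
        assert(nvlist_unpack(nvbuf, nvlen, &config, 0) == 0);
        assert(nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_TXG, &txg) == 0);
        assert(txg == 0 && ub->ub_txg == 0);

        /* ... but its root block pointer is intact, so reuse its birth txg. */
        assert(ub->ub_rootbp.blk_birth != 0);
        ub->ub_txg = ub->ub_rootbp.blk_birth;
        assert(nvlist_add_uint64(config, ZPOOL_CONFIG_POOL_TXG, ub->ub_txg) == 0);
        assert(nvlist_pack(config, &nvbuf, &nvlen, NV_ENCODE_XDR, 0) == 0);

        /* Rewrite the first uberblock slot and the config nvlist of label 0. */
        label_write(fd, offsetof(vdev_label_t, vl_uberblock),
            1ULL << UBERBLOCK_SHIFT, ub);
        label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
            VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

        (void) fsync(fd);
        return (0);
}

Building something along these lines needs the private ZFS headers from the ON source tree and a link against libzpool and libnvpair, which is likely why the bare "cc labelfix.c" attempt mentioned elsewhere in this thread fails.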

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
OK, here you go. I've successfully recovered a pool from a detached device using the attached binary. You can verify its integrity against the following MD5 hash: # md5sum labelfix ab4f33d99fdb48d9d20ee62b49f11e20 labelfix It takes just one argument -- the disk to repair: # ./labelfix /dev/rd

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-03 Thread Jeff Bonwick
Oh, you're right! Well, that will simplify things! All we have to do is convince a few bits of code to ignore ub_txg == 0. I'll try a couple of things and get back to you in a few hours... Jeff On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote: > Hi, while diving deeply in

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Benjamin Brumaire
it is on x86. Does it mean that I have to split the output from digest into 4 words (each 8 bytes) and reverse each before comparing with the stored value? bbr
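
Darren's explanation in the next entry down is cut off, but the point is presumably the usual one: ZFS stores the SHA-256 as four 64-bit words (a zio_cksum_t) in the host's native byte order, while digest -a sha256 prints the hash as one big-endian hex string. On a little-endian x86 box the comparison Benjamin describes is therefore exactly right: split the printed digest into four 8-byte words and byte-reverse each one before comparing with what is on disk. A small self-contained sketch of that conversion (the digest constant is the one Benjamin computed; the zc_word[] name comes from the ZFS headers, everything else is illustrative):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse 16 hex characters (one big-endian word of the printed digest). */
static uint64_t
hex16(const char *p)
{
        char tmp[17];

        memcpy(tmp, p, 16);
        tmp[16] = '\0';
        return (strtoull(tmp, NULL, 16));
}

/* Reverse the byte order of a 64-bit word. */
static uint64_t
bswap64(uint64_t x)
{
        uint64_t r = 0;
        int i;

        for (i = 0; i < 8; i++)
                r = (r << 8) | ((x >> (8 * i)) & 0xff);
        return (r);
}

int
main(void)
{
        const char *hex =
            "710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3";
        int i;

        for (i = 0; i < 4; i++) {
                uint64_t word = hex16(hex + 16 * i);   /* big-endian digest word */

                printf("zc_word[%d]: digest %016llx -> expected on x86 disk %016llx\n",
                    i, (unsigned long long)word,
                    (unsigned long long)bswap64(word));
        }
        return (0);
}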

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Darren J Moffat
Benjamin Brumaire wrote: > I tried to calculate it, assuming only the uberblock is relevant. > # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256 > 168+0 records in > 168+0 records out > 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3 Is this on SPARC or x86? ZFS st

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-02 Thread Benjamin Brumaire
Hi, while diving deeply into ZFS in order to recover data, I found that every uberblock in label0 has the same ub_rootbp and a zeroed ub_txg. Does it mean only ub_txg was touched while detaching? Hoping that is the case, I modified ub_txg in one uberblock to match the txg from the label a
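
A quick way to see what Benjamin describes is to dump the uberblock ring of label 0 and print each slot's txg alongside the birth txg of its root block pointer. The sketch below avoids the private headers and instead hard-codes layout details from the ZFS on-disk format specification (label 0 in the first 256 KiB of the vdev, the uberblock ring in its second 128 KiB, 1 KiB per slot, magic 0x00bab10c); the offsets used for ub_rootbp and blk_birth follow the 2008-era structures and should be treated as assumptions to double-check:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define LABEL_SIZE      (256 * 1024)
#define UB_RING_OFFSET  (128 * 1024)    /* 8K pad + 8K boot header + 112K vdev_phys */
#define UB_SLOT_SIZE    1024            /* 1 << UBERBLOCK_SHIFT */
#define UB_MAGIC        0x00bab10cULL   /* "oo-ba-bloc" */
#define UB_ROOTBP_OFF   40              /* after magic/version/txg/guid_sum/timestamp */
#define BLK_BIRTH_OFF   80              /* within the 128-byte blkptr_t (assumed) */

/* Leading fields of an on-disk uberblock, in the host's native byte order. */
typedef struct ub_head {
        uint64_t ub_magic;
        uint64_t ub_version;
        uint64_t ub_txg;
        uint64_t ub_guid_sum;
        uint64_t ub_timestamp;
        /* the 128-byte root blkptr_t follows */
} ub_head_t;

int
main(int argc, char **argv)
{
        static unsigned char label[LABEL_SIZE];
        int fd, slot;

        if (argc != 2) {
                fprintf(stderr, "usage: %s /dev/rdsk/<detached-slice>\n", argv[0]);
                return (1);
        }
        if ((fd = open(argv[1], O_RDONLY)) == -1 ||
            pread(fd, label, sizeof (label), 0) != sizeof (label)) {
                perror(argv[1]);
                return (1);
        }

        for (slot = 0; slot < (LABEL_SIZE - UB_RING_OFFSET) / UB_SLOT_SIZE; slot++) {
                unsigned char *p = label + UB_RING_OFFSET + slot * UB_SLOT_SIZE;
                ub_head_t ub;
                uint64_t birth;

                memcpy(&ub, p, sizeof (ub));
                if (ub.ub_magic != UB_MAGIC)    /* empty or other-endian slot */
                        continue;

                memcpy(&birth, p + UB_ROOTBP_OFF + BLK_BIRTH_OFF, sizeof (birth));
                printf("slot %3d: txg %llu  timestamp %llu  rootbp birth %llu\n",
                    slot, (unsigned long long)ub.ub_txg,
                    (unsigned long long)ub.ub_timestamp,
                    (unsigned long long)birth);
        }
        return (0);
}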

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Benjamin Brumaire
If I understand you correctly, the steps to follow are: read each sector (is dd bs=512 count=1 skip=n enough?); decompress it (any tools implementing the lzjb algorithm?); size = 1024?; structure might be objset_phys_t?; take the oldest birth time as the root block c
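
One of those steps, lzjb decompression, is small enough to carry around as a standalone function, which answers the "any tools implementing the lzjb algorithm?" part. The sketch below is adapted from the algorithm in the OpenSolaris lzjb source (6-bit match lengths, 10-bit offsets, one copy-map byte per eight items); it is reproduced from memory for illustration, so check it against usr/src/uts/common/fs/zfs/lzjb.c before trusting it on real recovered blocks:

#include <stddef.h>
#include <stdint.h>

#define NBBY            8
#define MATCH_BITS      6
#define MATCH_MIN       3
#define OFFSET_MASK     ((1 << (16 - MATCH_BITS)) - 1)

/*
 * Decompress an lzjb stream into a buffer of known decompressed size
 * (e.g. 1024 bytes for a candidate objset_phys_t).  Returns 0 on success,
 * -1 if the stream refers back past the start of the output (i.e. this
 * was probably not lzjb data).
 */
int
lzjb_decompress(const uint8_t *src, uint8_t *dst, size_t d_len)
{
        uint8_t *d_start = dst;
        uint8_t *d_end = dst + d_len;
        uint8_t copymap = 0;
        int copymask = 1 << (NBBY - 1);         /* forces a reload first time */

        while (dst < d_end) {
                if ((copymask <<= 1) == (1 << NBBY)) {
                        copymask = 1;
                        copymap = *src++;       /* 8 literal/copy flags */
                }
                if (copymap & copymask) {
                        /* copy item: 6 bits of length, 10 bits of offset */
                        int mlen = (src[0] >> (NBBY - MATCH_BITS)) + MATCH_MIN;
                        int offset = ((src[0] << NBBY) | src[1]) & OFFSET_MASK;
                        const uint8_t *cpy;

                        src += 2;
                        if ((cpy = dst - offset) < d_start)
                                return (-1);
                        while (--mlen >= 0 && dst < d_end)
                                *dst++ = *cpy++;
                } else {
                        *dst++ = *src++;        /* literal byte */
                }
        }
        return (0);
}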

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Jeff Bonwick
Urgh. This is going to be harder than I thought -- not impossible, just hard. When we detach a disk from a mirror, we write a new label to indicate that the disk is no longer in use. As a side effect, this zeroes out all the old uberblocks. That's the bad news -- you have no uberblocks. The go

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-29 Thread Benjamin Brumaire
Jeff, thank you very much for taking the time to look at this. My entire pool consisted of a single mirror of two slices on different disks, A and B. I attached a third slice on disk C, waited for the resilver, and then detached it. Now disks A and B have burned and I have only disk C at hand. bbr

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-28 Thread Jeff Bonwick
If your entire pool consisted of a single mirror of two disks, A and B, and you detached B at some point in the past, you *should* be able to recover the pool as it existed when you detached B. However, I just tried that experiment on a test pool and it didn't work. I will investigate further and

[zfs-discuss] recovering data from a dettach mirrored vdev

2008-04-28 Thread Benjamin Brumaire
Hi, my system (Solaris b77) was physically destroyed and I lost the data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like tct http://www.porcupine.org/