On Wed, Feb 17, 2010 at 04:44:19PM -0500, Ethan wrote:
> There was no partitioning on the truecrypt disks. The truecrypt volumes
> occupied the whole raw disks (1500301910016 bytes each). The devices that I
> gave to the zpool on linux were the whole raw devices that truecrypt exposed
> (1500301647872 bytes each). There were no partition tables on either the raw
> disks or the truecrypt volumes, just truecrypt headers on the raw disk and
> zfs on the truecrypt volumes.
> I copied the data simply using
> 
> dd if=/dev/mapper/truecrypt1 of=/dev/sdb

OK, then as you noted, you want to start with the ..p0 device as the
equivalent.
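
As a quick check that the labels really are reachable there (device
name is just an example; use your own ..p0 nodes), zdb can dump them
directly:

    # all four labels (0-3) should decode when read via the whole disk
    zdb -l /dev/rdsk/c7t0d0p0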

> The labels 2 and 3 should be on the drives, but they are 262144 bytes
> further from the end of slice 2 than where zpool must be looking.

I don't think so. They're found by counting from the start; the end
can move out further (LUN expansion), and with autoexpand the vdev can
be extended (adding metaslabs), with the labels rewritten at the new
end, after the last metaslab.
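
The arithmetic from your earlier numbers is consistent with that,
assuming the usual layout of four 262144-byte labels (L0 at offset 0,
L1 at 256K, L2 and L3 in the last 512K of the vdev's recorded size,
not of the physical device):

    # raw disk:         1500301910016 bytes
    # truecrypt volume: 1500301647872 bytes
    echo $((1500301910016 - 1500301647872))    # = 262144, one label size

So labels 2 and 3 sit exactly one label short of the raw disk's
physical end - right where a vdev sized to the truecrypt volume would
put them, counting from the start.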

I think the issue is that there are no partitions on the devices that allow
import to read that far.  Fooling it into using p0 would work around this.
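
A minimal sketch of that workaround (directory, device, and pool names
are all illustrative):

    # give import a directory containing only the ..p0 nodes
    mkdir /tmp/p0
    ln -s /dev/rdsk/c7t0d0p0 /tmp/p0/
    ln -s /dev/rdsk/c7t1d0p0 /tmp/p0/
    zpool import -d /tmp/p0           # should now see the pool
    zpool import -d /tmp/p0 tank      # then import it by name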

> I could create a partition table on each drive, specifying a partition with
> the size of the truecrypt volume, and re-copy the data onto this partition
> (would have to re-copy as creating the partition table would overwrite zfs
> data, as zfs starts at byte 0). Would this be preferable?

Eventually, probably, yes - once you've confirmed all the speculation,
gotten past the partitioning issue to whatever other damage is in the
pool, resolved that, and regained some kind of access to your data.

There are other options as well, including using zpool replace one
disk at a time, or zfs send | zfs recv (sketched below).
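
Roughly, with pool and device names assumed, either of:

    # swap disks in one at a time and let resilver rewrite the data
    zpool replace tank c7t0d0p0 c7t2d0

    # or copy the datasets wholesale into a freshly created pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newtank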

> I was under some
> impression that zpool devices were preferred to be raw drives, not
> partitions, but I don't recall where I came to believe that, much less
> whether it's at all correct.

Sort of. zfs commands can be handed bare disks; internally they put an
EFI label on them automatically (though evidently the fuse variant
does not).
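
The difference shows in how you hand the device over (names
illustrative):

    zpool create tank c7t2d0      # whole disk: zfs writes the EFI label
    zpool create tank c7t2d0s0    # existing slice: label left as-is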

ZFS mostly makes partitioning go away (hooray), but it still becomes
important in cases like this - shared disks and migration between
operating systems.

> As for using import -F, I am on snv_111b, which I am not sure has -F for
> import.

Nope.
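
For reference, on builds new enough to have it, recovery import is
just (pool name assumed):

    # discard the last few transactions, rolling back to a good txg
    zpool import -F tank

snv_111b predates that, so you'd need a considerably later build.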

> I tried to update to the latest dev build (using the instructions
> at http://pkg.opensolaris.org/dev/en/index.shtml ) but things are behaving
> very strangely. I get error messages on boot - "gconf-sanity-check-2 exited
> with error status 256" - and when I dismiss this and go into gnome, terminal
> is messed up and doesn't echo anything I type, and I can't ssh in (an error
> message about not being able to allocate a TTY). Anyway, the zfs mailing
> list isn't really the place to be discussing that, I suppose.

Not really, but read the release notes.

Alternatively, if this is a new machine, you could just reinstall (or
boot from) a current livecd/usb, downloaded from genunix.org.

--
Dan.
