On Wed, Feb 17, 2010 at 16:14, Daniel Carosone <d...@geek.com.au> wrote:

> On Wed, Feb 17, 2010 at 03:37:59PM -0500, Ethan wrote:
> > On Wed, Feb 17, 2010 at 15:22, Daniel Carosone <d...@geek.com.au> wrote:
> > I have not yet successfully imported. I can see two ways of making
> > progress forward. One is forcing zpool to attempt to import using
> > slice 2 for each disk rather than slice 8. If this is how autoexpand
> > works, as you say, it seems like it should work fine for this. But I
> > don't know how, or if it is possible to, make it use slice 2.
>
> Just get rid of 8? :-)
>

That sounds like an excellent idea, but, being very new to opensolaris, I
have no idea how to do this. I'm reading through
http://multiboot.solaris-x86.org/iv/3.html at the moment. You mention the
'format' utility below, which I will read up on.
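
From what I've read so far, it looks like I can at least dump each
disk's current slice and fdisk tables to see what slices 2 and 8
actually cover - something along these lines, with cXtYdZ standing in
for each of my disks (I haven't tried these yet):

prtvtoc /dev/rdsk/cXtYdZs2
fdisk -W - /dev/rdsk/cXtYdZp0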


>
> Normally, when using the whole disk, convention is that slice 0 is
> used, and there's a small initial offset (for the EFI label).  I think
> you probably want to make a slice 0 that spans the right disk sectors.
>
> Were you using some other partitioning inside the truecrypt "disks"?
> What devices were given to zfs-fuse, and what was their starting
> offset? You may need to account for that, too.  How did you copy the
> data, and to what target device, on what platform?  Perhaps the
> truecrypt device's partition table is now at the start of the physical
> disk, but solaris can't read it properly? If that's an MBR partition
> table (which you look at with fdisk), you could try zdb -l on
> /dev/dsk/c...p[01234] as well.
>

There was no partitioning on the truecrypt disks. The truecrypt volumes
occupied the whole raw disks (1500301910016 bytes each). The devices that I
gave to the zpool on linux were the whole raw devices that truecrypt exposed
(1500301647872 bytes each). There were no partition tables on either the raw
disks or the truecrypt volumes - just truecrypt headers on the raw disks and
zfs on the truecrypt volumes.
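
So the gap between disk and volume is exactly the space truecrypt
reserves for itself:

1500301910016 - 1500301647872 = 262144 bytes (256 KiB)
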
I copied the data simply using

dd if=/dev/mapper/truecrypt1 of=/dev/sdb

on linux, where /dev/mapper/truecrypt1 is the truecrypt volume for one hard
disk (which was on /dev/sda) and /dev/sdb is a new blank drive of the same
size as the old drive (but slightly larger than the truecrypt volume). I
repeated this for each of the five drives.

Labels 2 and 3 should be on the drives, but they sit 262144 bytes further
from the end of slice 2 than where zpool must be looking.
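
If I have the label layout right - labels 0 and 1 in the first 512 KiB
of the device, labels 2 and 3 in the last 512 KiB, 256 KiB per label -
then label 2 on my disks should start 524288 bytes before the end of the
old volume size, i.e. at byte 1500301647872 - 524288 = 1500301123584,
which is sector 2930275632 at 512 bytes per sector. Something like this
on the linux box should show the label contents if they're really there
(untested):

dd if=/dev/sdb bs=512 skip=2930275632 count=512 2>/dev/null | strings | head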

I could create a partition table on each drive, specifying a partition
the size of the truecrypt volume, and re-copy the data onto this
partition (I would have to re-copy, since creating the partition table
would overwrite zfs data, as zfs starts at byte 0). Would this be
preferable? I was under the impression that raw drives were preferred
over partitions for zpool devices, but I don't recall where I came to
believe that, much less whether it's at all correct.
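
Roughly what I have in mind, if I'm reading the fmthard docs right
(untested; cXtYdZ is a placeholder for each disk, the sizes assume
512-byte sectors, and the tag/flag columns are my guess at the usual
usr/backup values):

# slice 0 sized exactly to the truecrypt volume:
#   1500301647872 / 512 = 2930276656 sectors
# slice 2 ("backup") spanning the whole disk:
#   1500301910016 / 512 = 2930277168 sectors
echo "0 4 00 0 2930276656" >  vtoc.txt
echo "2 5 01 0 2930277168" >> vtoc.txt
fmthard -s vtoc.txt /dev/rdsk/cXtYdZs2

and then re-run the dd with the slice 0 device as the target.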



>
> We're just guessing here.. to provide more concrete help, you'll need
> to show us some of the specifics, both of what you did and what you've
> ended up with. fdisk and format partition tables and zdb -l output
> would be a good start.
>
> Figuring out what is different about the disk where s2 was used would
> be handy too.  That may be a synthetic label because something is
> missing from that disk that the others have.
>
> > The other way is to make a slice that is the correct size of the
> > volumes as I had them before (262144 bytes less than the size of the
> > disk). It seems like this should cause zpool to prefer to use this
> > slice over slice 8, as it can find all 4 labels, rather than just
> > labels 0 and 1. I don't know how to go about this either, or if it's
> > possible. I have been starting to read documentation on slices in
> > solaris but haven't had time to get far enough to figure out what I
> > need.
>
> format will let you examine and edit these.  Start by making sure they
> have all the same partitioning, flags, etc.
>

I will have a look at format, but if it operates on partition tables,
well, my disks have none at the moment, so I'll have to remedy that
first.
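
My guess at the first step on x86 is to give each disk a Solaris fdisk
partition before defining any slices - something like this (untested,
cXtYdZ again a placeholder; note it writes an MBR into sector 0):

# default fdisk table: one Solaris partition spanning the whole disk
fdisk -B /dev/rdsk/cXtYdZp0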


>
> > I also have my doubts about this solving my actual issues - the ones
> > that caused me to be unable to import in zfs-fuse. But I need to solve
> > this issue before I can move forward to figuring out/solving whatever
> > that issue was.
>
> Yeah - my suspicion is that import -F may help here.  That is a pool
> recovery mode, where it rolls back progressive transactions until it
> finds one that validates correctly.  It was only added recently and is
> probably missing from the fuse version.
>
> --
> Dan.
>
>
As for using import -F: I am on snv_111b, which I am not sure has -F for
import. I tried to update to the latest dev build (using the instructions
at http://pkg.opensolaris.org/dev/en/index.shtml ) but things are behaving
very strangely. I get an error message on boot ("gconf-sanity-check-2
exited with error status 256"), and when I dismiss it and go into gnome,
the terminal is messed up and doesn't echo anything I type, and I can't
ssh in (an error message about not being able to allocate a TTY). Anyway,
the zfs mailing list isn't really the place to be discussing that, I
suppose.
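
(For whenever I do get onto a build that has it: from your description, I
take it the recovery import would be invoked along these lines, with
"tank" standing in for my pool's name:

zpool import -F tank

rolling back transactions until it finds one that validates.)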

-Ethan