On 10/15/09 23:31, Cameron Jones wrote:

> by cross-mounting do you mean mounting the drives on 2 running OS's?
> that wasn't really what i was looking for but nice to know the option
> is there, even tho not recommended!

No, since you really can't run two OSes at the same time unless you use
zones. Maybe someone more expert than I could comment on the idea
of running OpenSolaris on a Solaris 10 or sxce host - e.g., in the case
of sxce, if they were both, say, snv_124?
> my only real aim was to have the 3 disks accessible when booting into
> either OS so i could share archived data between them.

That's what you should do (and I do it all the time). Put your user
data in a separate pool and import only that pool on both OS instances.
So in your case, install OpenSolaris in a 32GB-or-larger slice 0 partition
of the mirror and put /export on (say) slice 1. My data pool is called "space",
and it has a number of file systems, most of which are mounted under
/export (e.g., /export/home/userz for user "userz"). You could do this
by taking a zfs snapshot of the OpenSolaris rpool from Solaris and then
doing a zfs recv after running format (follow the guide for restoring a
ZFS rpool at http://docs.sun.com/app/docs/doc/819-5461/ghzur?a=view).
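A rough sketch of the shared-data-pool setup, assuming hypothetical device
names (c0t0d0/c0t1d0) and my pool name "space" - substitute your own:

```shell
# Create a mirrored data pool on slice 1 of both disks
# (device names here are examples; check yours with `format`):
zpool create space mirror c0t0d0s1 c0t1d0s1

# Create a file system for user data, mounted under /export:
zfs create -o mountpoint=/export/home/userz space/home_userz

# Before rebooting into the other OS, export the pool cleanly:
zpool export space

# After booting the other OS, import it there:
zpool import space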

> it sounds like i shouldn't have any problem cold-cross-mounting :)
> although does bug 11358 only apply to opensolaris or would it also be
> possible to apply to solaris 10 too?

Not sure. sxce and OpenSolaris both do the dreaded archive update,
so AFAIK Solaris 10 would do it too, possibly with bad consequences.
A workaround would be to make sure the other rpool is not mounted
when you reboot, but one whoops and you might be toast. Better to
keep data and OS separate. Then you can do zfs snapshots for rpool
backups and something different, if you like, for user data backups.
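For example, separate backup schedules might look something like this
(snapshot names and the backup path are just illustrations):

```shell
# Recursive snapshot of the root pool for backup:
zfs snapshot -r rpool@backup-20091016

# Send the whole rpool replication stream somewhere safe,
# e.g. a file on the data pool:
zfs send -R rpool@backup-20091016 > /space/backups/rpool-20091016.zfs

# User data gets its own, independent snapshot schedule:
zfs snapshot -r space@daily-20091016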

> also i thought i read in the doco that ZFS assigns an id to each
> drive which is unique to the OS - if i try to mount it into another
> OS would this id keep changing each time i switch?

AFAIK it doesn't. I have sxce and OpenSolaris running alternately
on one host, and they mount the data pool with no problems at all. I
no longer even try to cross-mount the rpools because my OpenSolaris
installs kept getting trashed by 11358, but at that time sxce was
on UFS. I believe the ids are assigned when the pool is created,
so if you zfs recv an rpool from another host with an otherwise
identical configuration, it will try (and correctly fail) to mount a
zombie data pool when you boot it. I assume the id is ignored
on the root pool at boot time, or it wouldn't be able to boot at all.
Undoubtedly a guru will chip in here if this is incorrect :-)
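You can see those pool ids for yourself - `zpool import` with no
arguments lists exportable pools along with their numeric ids (pool
name "space" is my example again):

```shell
# List pools that are visible on attached disks but not imported;
# the output includes each pool's numeric id and last-used state:
zpool import

# If the pool was last in use by a "different system" (e.g. the other
# OS instance), a plain import is refused; -f overrides the check.
# Only do this when you're sure the other OS isn't running:
zpool import -f space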

HTH -- Frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
