I've been wrestling with implementing ZFS mounts for /var and /usr in a 
Jumpstart setup. I know that Jumpstart doesn't "know" anything about ZFS, 
in that you can't define ZFS pools or filesystems in the profile. I've 
gone ahead and let the JS do a base install into a single UFS slice, and 
then attempted to create the zpool and ZFS filesystems in the finish 
script and ufsdump|ufsrestore the data from the /usr and /var partitions 
into the new ZFS filesystems. The problem is that there doesn't seem to 
be a way to ensure that the zpool is imported into the freshly built 
system on the first reboot.
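
For concreteness, the finish-script approach I'm describing looks roughly 
like the sketch below. The slice name (c0t0d0s4) and the pool name 
(datapool) are just placeholders, and it assumes the Jumpstart target is 
mounted at /a while the finish script runs:

    # Create a pool on a spare slice (slice and pool names are placeholders).
    zpool create -f datapool c0t0d0s4

    # Create filesystems to hold /usr and /var; they mount under
    # /datapool in the install environment for the copy.
    zfs create datapool/usr
    zfs create datapool/var

    # Copy the freshly installed /usr and /var (target root is /a)
    # into the new filesystems.
    ufsdump 0f - /a/usr | (cd /datapool/usr && ufsrestore rf -)
    ufsdump 0f - /a/var | (cd /datapool/var && ufsrestore rf -)

    # The filesystems still need their mountpoints set up for the
    # installed system (e.g. legacy mountpoints plus /a/etc/vfstab
    # entries), but none of that matters if the pool isn't imported
    # at boot.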
     I see in the archives here from a few weeks ago that someone asked 
a similar question, and the suggestion was to copy the install 
environment's "/etc/zfs/zpool.cache" into the target system's 
"/etc/zfs/zpool.cache" (i.e. "/a/etc/zfs/zpool.cache" while the finish 
script is running). However, it has been my experience through some 
fairly serious testing that when creating and managing ZFS pools and 
filesystems from the Jumpstart scripts, no zpool.cache file gets created 
at all. Even including "find / -name zpool.cache" in the finish script 
returns no hits on that file name. Now, I'm aware that the zpool.cache 
file isn't really intended for administrative use, since its format and 
even its existence aren't well documented or solidified as part of the 
ZFS management framework going forward. I would however REALLY like to 
know why this file is created in every other situation where ZFS pools 
and filesystems are managed, but not in this one. I would be equally 
curious to know whether it's possible to force the creation of this 
file, or, as a last resort, whether zpool could be statically linked in 
the default Solaris distribution so that I can put a method and the 
toolchain necessary for importing pools into the early part of the SMF 
boot sequence.
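
For anyone who wants to poke at the same thing, this is roughly what 
I've been trying from the finish script; the pool name is again a 
placeholder, and the rc-script fallback at the end obviously runs far 
too late in boot to help with /usr itself, which is why I'm asking about 
the early SMF sequence:

    # Neither of these finds a cache file in my testing:
    ls -l /etc/zfs/zpool.cache
    find / -name zpool.cache -print

    # If the install environment did write one, carrying it over to
    # the target (mounted at /a) should be as simple as:
    mkdir -p /a/etc/zfs
    cp -p /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache

    # Fallback: export the pool in the finish script and import it
    # from a legacy rc script on the installed system (runs after
    # /usr is mounted, so it can find the dynamically linked zpool).
    zpool export datapool
    echo '#!/bin/sh' > /a/etc/rc3.d/S99zpoolimport
    echo 'zpool import -f datapool' >> /a/etc/rc3.d/S99zpoolimport
    chmod 755 /a/etc/rc3.d/S99zpoolimport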

Thanks in advance for any insight as to how to work this out.
