Hi ZFS team,

I am currently working on fixes for a couple of bugs in the OpenSolaris
Caiman installer. Since they are related to ZFS, I would like to kindly
ask for your help in understanding whether the issues described below
are known (whether somebody is working on them or they have already been
fixed), and whether any workarounds might be used.

Also, please let us know if a different approach (such as another or new
API, command, or subcommand) might be used to solve the problem.

Any help/suggestions/comments are much appreciated.
Thank you very much,
Jan


Let me describe the main problem we are trying to address. The fix is
tracked by the following bug:

1771 Install will fail if rpool ZFS pool already exists

When the OpenSolaris installation process starts, it creates the ZFS
root pool "rpool" on the target device. The pool is then populated with
the appropriate ZFS datasets and ZFS volumes (used for swap and dump),
and Solaris is installed there.
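
For reference, the sequence is roughly the following. This is only a
sketch: the device name, the dataset layout, and the volume sizes are
placeholders, not necessarily what the installer actually uses.

# zpool create -f rpool <device>
# zfs create rpool/ROOT
# zfs create rpool/ROOT/opensolaris
# zfs create -V 2G rpool/swap    # volume used as swap device
# zfs create -V 2G rpool/dump    # volume used as dump device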

If the installer exits, either due to some failure or to user
intervention, we would like to make sure it can be restarted. Because of
this, we need to destroy all allocated resources, which in the ZFS case
means we need to release "rpool" so that the installer can use it again.

The problem is that we don't know whether "rpool" was just created by
the installer or was imported by the user (for example, another Solaris
instance might exist on another disk). If the latter is the case, we
don't want to destroy the pool.
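
For illustration only, the kind of check we are after would look
something like the sketch below, assuming the installer could record the
GUID of the pool it creates (the file /tmp/installer_pool_guid is a
hypothetical name):

# zpool list -H -o guid rpool > /tmp/installer_pool_guid
[...installer exits, cleanup starts...]
# created_guid=`cat /tmp/installer_pool_guid`
# current_guid=`zpool list -H -o guid rpool`
# [ "$current_guid" = "$created_guid" ] && zpool destroy -f rpool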

We tried to address this problem in the following ways, but none of them
worked for us due to the issues encountered:

[1] Installing into "temporary" pool
====================================
The installer created "rpool_tmp" for the installation, and when the
installation finished successfully, the pool was "renamed" to "rpool".
If the installer failed during installation, we knew we could safely
remove "rpool_tmp", since it was only a temporary pool and thus couldn't
contain a valid Solaris instance.

We were "renaming" the pool in following way:

# zpool create -f rpool_tmp <device>
[...installation...]
# pool_id=`zpool list -H -o guid rpool_tmp`    # remember the pool GUID
# zpool export -f rpool_tmp
# zpool import -f $pool_id rpool    # re-import under the final name
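
If the installation failed before the rename, the temporary pool could
then be removed with something like:

# zpool destroy -f rpool_tmp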

However, we encountered a problem: sometimes the root "/rpool" dataset
was not mounted, for reasons we do not understand, and we failed to
access it. Please see more details in the bug report:

1350 Install without menu.lst
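
A sketch of how this shows up for us: after the import, the 'mounted'
property of the pool's root dataset sometimes comes back "no" instead of
"yes", and an explicit mount might be a workaround (we have not verified
that this is safe):

# zfs list -H -o mounted rpool
# zfs mount rpool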

[2] Exporting "rpool" if present on the system
==============================================
When the installer started, it checked whether "rpool" was present and,
if so, exported it:

# zpool export -f rpool

The problem is that this operation made "rpool" unbootable:

1365 zfs import/export will modify the boot up information of ZFS

So if the user had a Solaris instance on that pool, it could not be
booted anymore (without first importing the pool again).
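
That is, before booting, the user would first have to run something
like:

# zpool import -f rpool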
