I wanted to send out status on the effort to make a
version of Solaris install available that supports zfs
as a root file system. I've got a version of it ready
for distribution, but I'd like to test it on the Build 62
community release before I make it available.
Without the build 62 community release, I have to
test it on a build 61 image, updated with some build 62
packages. It's kind of a hack and I'm not sure it's
such a good test of the "real" procedure. I hate to
put something out there that hasn't been tested at all
on its actual target environment. So once we get a build
62 community release available, I will verify that the
kit works on it and then will make it available through
the download center.
I've attached the README file (as of now) so that anyone
who is interested can get the flavor of what the kit will
contain and how it will be used.
Lori
Introduction
------------
This toolkit contains files which enable the conversion
of a Solaris install image to support profile-based
installation of a system with a ZFS root file system.
****CAVEAT*****
---------------
This software is very preliminary. It is certain to
change, and the profile syntax is very likely to change.
There is no guarantee that systems set up with zfs root using
this procedure will be upgradable, or even migratable to
subsequent releases of this software. We hope that a more
"cooked" version of this procedure and software can be provided
soon, but for now, this is being made available for those who
are willing and interested in trying out a very preliminary
install. This version of the software isn't being provided
for the purpose of testing the install software per se, but for
testing the actual operation of ZFS as a root file system.
This install procedure is just easier than the manual setup.
Overview of Procedure
---------------------
The steps for installing a system with a zfs root file system are:
1. Build or download a full Solaris netinstall image with
bits that are build 62 or later, or have been built from
the Nevada source files that contain revision 3912 (putback
on 28-Mar-2007).
2. Run the patch_image_for_zfsboot script (provided in this
toolkit directory) on the netinstall image. This will modify
the image to contain install software that is able to install
a system with a ZFS root file system.
3. Boot a system to be installed from the netinstall image, or burn
a DVD from the image and boot a client from it.
4. Do the installation, providing it a Jumpstart profile that
contains the new keywords for defining root pools and bootable
datasets.
5. Boot the newly-installed system, now running with a zfs root.
Detailed Steps for Preparation of the Image
-------------------------------------------
1. Build or download a Solaris install image with bits that support
zfs boot (build 62 or later, or with Revision 3912).
2. Become root.
3. cd into the top level of the directory you unpacked from the
tarball (the directory where this README is found).
4. Execute this command:
# ./patch_image_for_zfsboot <root-of-install-image>
where <root-of-install-image> is the directory containing
the install image produced in step 1. This is the directory
that contains subdirectories "boot", "Solaris_11", the
file ".cdtoc", and others.
5. Now boot a machine to be installed from the netinstall image.
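Before running the patch script in step 4, it can be worth confirming that
the directory you're pointing it at really is the top of an install image.
Here's a minimal shell sketch of such a check (this is not part of the
toolkit, and the path in the example is hypothetical):

```shell
#!/bin/sh
# Sanity-check sketch (not part of the toolkit): confirm that a directory
# looks like the root of a Solaris netinstall image, i.e. that it contains
# the "boot" and "Solaris_11" subdirectories and the ".cdtoc" file.
check_image_root() {
    for entry in boot Solaris_11 .cdtoc; do
        if [ ! -e "$1/$entry" ]; then
            echo "error: $1 is missing $entry" >&2
            return 1
        fi
    done
    echo "$1 looks like a netinstall image root"
}

# Example usage (hypothetical image path):
#   check_image_root /export/install/nv62 &&
#       ./patch_image_for_zfsboot /export/install/nv62
```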
Detailed Steps for the Install
------------------------------
Netinstall or DVD install of a system with a zfs root must be done with
the profile-driven install program (pfinstall). This means that the
install is controlled by a profile (a short file describing the
desired system configuration) instead of by interactive responses.
You can do a profile-driven install either by setting your system
up for a Jumpstart install or by a fairly simple tweak to the usual
interactive netinstall procedure. Jumpstart is the standard way,
but since not everyone wants to take on its setup overhead, here's
a quick-and-dirty way to do a profile-driven install without it:
1. Boot your system off the net or from the DVD in the usual manner.
2. Select "Interactive Install". Then, at the first opportunity
to exit out of it (which will be after you've answered the
system configuration questions, such as whether you want
Kerberos and what the root password will be), exit out to a shell.
3. Create a profile for the install in /tmp/profile. (The contents
of the profile are described below).
4. Execute the following:
# pfinstall /tmp/profile
When it's done, reboot. You should get a GRUB menu. Select the
entry with the title "Solaris <release-name> X86". The failsafe
entry should work too.
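Steps 3 and 4 boil down to a couple of shell commands. Here's a sketch
(the disk names, pool name, and BE name are just example values; adjust
them for your hardware):

```shell
# Sketch of steps 3 and 4, run from the shell you exited into.  The slice
# names, pool name, and BE name below are examples, not requirements.
cat > /tmp/profile <<'EOF'
install_type initial_install
cluster SUNWCuser
filesys c0t0d0s1 auto swap
pool mypool free / mirror c0t0d0s0 c0t1d0s0
dataset mypool/BE1 auto /
dataset mypool/BE1/usr auto /usr
dataset mypool/BE1/var auto /var
EOF

# On the install client, run pfinstall against the profile, then reboot:
# pfinstall /tmp/profile
```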
Creating a profile for the install
----------------------------------
The system profile you use should look something like this:
install_type initial_install
cluster SUNWCuser
filesys c0t0d0s1 auto swap
pool mypool free / mirror c0t0d0s0 c0t1d0s0
dataset mypool/BE1 auto /
dataset mypool/BE1/usr auto /usr
dataset mypool/BE1/opt auto /opt
dataset mypool/BE1/var auto /var
dataset mypool/BE1/export auto /export
Notes on the above lines:
1) The profile must start with an "install_type initial_install" line. This
is the only kind of install supported (no upgrade or flash install yet).
2) "cluster" and "package" commands are permitted. If not specified, you
will get the SUNWCall metacluster.
3) In order to get crash dumps, you must specify a line of the form:
filesys <slice> auto swap
where <slice> is a slice name such as "c0d0s1" or "c1t0d0s1".
This is a bit misleading because this slice will actually NOT
end up being your swap space. (A zvol in the root pool will be
set up as swap). But since ZFS doesn't support doing crash dumps
into a pool yet, we need a dump slice so that we can get crash
dumps.
4) You must have exactly one line with the "pool" keyword. This entry
defines the root pool. It's of the form:
pool <poolname> free / <vdev-spec>
where <poolname> is the name you want to assign to the root pool
and <vdev-spec> defines a vdev which will contain the pool. The
<vdev-spec> entry is of the following forms:
<slice>
or
mirror <slice> [<slice>]*
The <slice> field is just the "c?d?s?" or "c?t?d?s?" part of the
device name. It must be a slice (i.e., end in "s?").
Examples of "pool" keyword lines:
pool mypool free / c0d0s0
pool roottank free / mirror c0t0d0s0 c1t0d0s0
5) You must have at least one "dataset" entry to define the root
file system, and it's best if you have entries for root, /usr,
/opt, /var, and /export. The form of a dataset entry is:
dataset <dataset-name> auto <mount-point>
Where "<dataset-name>" is the name of a dataset to be created
in the root pool (the name must begin with the name of the
root pool). <mount-point> is where the dataset will be
mounted. Although this is not enforced yet, the required
convention for dataset names is likely to be:
<pool-name>/<boot-environment-name>[/<directory in Solaris name space>]
A <boot-environment-name> is comparable to what we call a "BE"
in LiveUpgrade terminology. It's just the name you want to
assign to a particular boot environment, where a "boot environment"
is a root file system and its subordinate file systems.
Although it's possible to put the entire Solaris name space
in one dataset (mounted at "/"), it is very likely that the
recommended configuration for systems with zfs roots will be to
split the name space into separate datasets along appropriate
lines. The recommended division isn't defined yet (inputs on
what makes sense are welcome), but we can be reasonably sure
that /usr and /opt and at least some parts of /var will
be separate from root. Here's why:
1) First of all, there's no strong reason NOT to split the
name space where we think it makes sense. ZFS file systems
are cheap, right? Back in the old days of small
disks, we sometimes had to split the Solaris name space
into separate file systems. Now we don't have to, but
we can choose to, for greater administrative flexibility.
2) For boot environment cloning, sometimes you'd like part of
the Solaris name space to appear in the clone by reference,
not as a copy. Dividing up the name space into file systems
gives you that flexibility.
3) Just like in the old days, there are some good reasons to keep
the amount of space needed at early boot time as small as is
reasonably possible. The design is still being worked out,
but it's likely that keeping the root file system small will
help pave the way for booting from RAID-Z vdevs.
4) There will probably turn out to be some zone-related reasons
to split up the name space (it simplifies sharing part of the
name space between zones).
6) Most other Jumpstart keywords are not appropriate and have not
been tested. It's probably not worth experimenting with them yet
because they probably won't work right. Wait for a later version
of the install code.
_______________________________________________
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss