On Tue, 19 Aug 2008, Ivan Ordonez wrote:
>
> I would like to get your opinion on what is the best way to set up the
> machine using ZFS on all drives and partitions.  I want to use Zones,
> Live Upgrade and Virtualbox in the future.
>
> The machine is a SUN Ultra 40 M2 with 6G of RAM, two dual-core AMD CPUs, and
> three physical drives of 250G each.  I installed Solaris 10 5/08 (5.10
> Generic_137112-05) using Solaris Interactive (I chose all the default
> options and partitioning) on the first physical drive, c1d0s0.  The two
> other physical drives (c2d0s0 and c3d0s0) were formatted and partitioned
> the same way as c1d0s0.  I want to convert c1d0s0 to a ZFS filesystem and
> set up raid1 mirroring using c2d0s0.  I want to use the third drive
> (c3d0s0) for live upgrade, virtualbox and zones.  I want minimal
> downtime in case one of the drives fails and needs to be replaced.

The problem is that the advice which is best for Solaris 10U5 will be 
invalidated by Solaris 10U6, which should be available in a month or 
two.  That is because Solaris 10U6 will offer support for ZFS boot, so 
your first two drives can be in one mirrored pool.
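
If that works out as expected, turning the existing root disk into a 
mirrored ZFS root pool should amount to attaching the second disk and 
installing boot blocks on it.  This is only a rough sketch, since the 
exact 10U6 procedure is not yet known and the pool/device names here 
are placeholders:

   # attach c2d0s0 as a mirror of the existing root disk
   zpool attach rpool c1d0s0 c2d0s0
   # watch the resilver complete
   zpool status rpool
   # install GRUB on the new half of the mirror (x86)
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0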

Given this expected short-term future, the best path forward is likely 
to place an order now for a fourth disk drive.  The first drive can be 
your Solaris boot disk, without redundant data protection. Put your 
two other drives (c2d0s0 and c3d0s0) into a mirrored pool, using the 
whole disk rather than partitions.  This will be your data pool.  You 
can use the extra space on c1d0s0 for non-critical data storage using 
UFS or ZFS.
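
Creating the mirrored data pool from whole disks is a one-liner.  For 
example (the pool name 'tank' is only an example; use whatever suits 
you):

   # let ZFS label and use the whole disks, no slices needed
   zpool create tank mirror c2d0 c3d0
   # verify both halves of the mirror are ONLINE
   zpool status tank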

When Solaris 10U6 arrives, you can install the extra disk and 
re-install your system from scratch so that the first two disks are 
your ZFS root pool, and the previously existing mirror pool continues 
to function as before.  I don't know what live upgrade for Solaris 
10U6 will look like, but if we are lucky it will have the smarts to 
upgrade an existing boot disk to form a ZFS root pool.
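
The data pool itself should survive that re-install.  Roughly, and 
assuming the pool was named 'tank' as in the earlier example:

   # before wiping the boot disk, cleanly detach the data pool
   zpool export tank
   # after the fresh 10U6 install, bring it back with its data intact
   zpool import tank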

Assuming that the fourth disk shows up before Solaris 10U6, I think 
you can create a second Solaris 10U5 boot environment on that disk 
(using 'lucreate') so that if your normal root disk craters, you are 
still able to boot from the second disk (with possible data loss). 
The location of the GRUB menu is always a problem since it seems that 
there can be only one master GRUB menu, and it might be on the disk 
that craters.
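
For what it is worth, creating that alternate boot environment would 
look roughly like this.  The device name for the fourth disk is a 
guess (check 'format' for the real one), and 'lustatus' will confirm 
the result:

   # copy the running UFS root onto a slice of the fourth disk
   lucreate -n s10u5-backup -m /:/dev/dsk/c4d0s0:ufs
   # give that disk its own GRUB so it can boot if the primary dies
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0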

I have a SUN Ultra 40 M2 here as well, and since a motherboard 
replacement a month ago (requiring three motherboards before finding 
one that maybe worked), the system has just not been the same.  The 
Emulex fiber channel card now locks up (is specifically shut down with 
"Adaptor error") sometimes at the start of a 'zfs scrub'.  Yesterday I 
found that the system had mysteriously shut down at the start of a zfs 
scrub (according to 'last') without being requested to by a user, 
without any apparent panic, and without any updates to 
/var/adm/messages.  Unfortunately, I installed Solaris 10U5 shortly 
after the motherboard swap so it is difficult to tell if the strange 
behavior is due to hardware, or if Solaris 10U5 (or its Emulex device 
driver) has a dreadful bug.

My experience should be fair warning that even with a service contract 
your system can be down unexpectedly for a whole week and the 
replacement parts might not work due to a quality control problem. 
The disk drives are perhaps not the primary concern.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
