If you run Solaris or OpenSolaris, for example, you may use c0t0d0 (for a SCSI
disk) or c0d0 (for an IDE/SATA disk) as the system disk.
By default, Solaris x86 and OpenSolaris use the raw device
c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool.
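
A quick way to check which device rpool actually sits on (just a sketch; the
prompt and device names simply mirror the example above, adjust to your own
system):
mor...@egoodbrac1:~# zpool status rpool        # shows c0t0d0s0 as the pool's device
mor...@egoodbrac1:~# zpool get bootfs rpool    # shows which dataset the system boots from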

In fact, a hard disk can hold more than one Solaris2 fdisk partition, so we can
also use raw devices such as c0t0d0p1 (/dev/rdsk/c0t0d0p1) and c0t0d0p2
(/dev/rdsk/c0t0d0p2) as member devices to create a new zpool:
mor...@egoodbrac1:~# zpool create dpool raidz c0t0d0p1 c0t1d0 c0t2d0

This command successfully creates a new raidz pool named dpool.
c0t0d0p1 is the raw device of the first Solaris2 partition of the system
disk (c0t0d0); c0t1d0 and c0t2d0 are two other raw disks.
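
You can confirm the new pool looks right before putting data on it (a sketch
only; dpool is the pool name from the command above):
mor...@egoodbrac1:~# zpool status dpool    # should show one raidz vdev with the three devices
mor...@egoodbrac1:~# zpool list dpool      # shows the usable size of the raidz pool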

But if you think about it, logically the rpool member device c0t0d0s0 is
slice 0 inside the first fdisk partition (p0 means the whole disk, and p1
means the first partition), so c0t0d0s0 is a child of c0t0d0p1. If c0t0d0s0
is already a member device of a zpool, how can we still use its "parent"
c0t0d0p1 to create a new zpool?
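
You can see this parent/child layout for yourself: fdisk shows the Solaris2
partitions (p1, p2, ...) of the whole disk, while prtvtoc shows the slices
(s0, s1, ...) that live inside the Solaris2 partition. A rough sketch, using
the same example disk:
mor...@egoodbrac1:~# fdisk -W - /dev/rdsk/c0t0d0p0    # dump the fdisk partition table of the whole disk
mor...@egoodbrac1:~# prtvtoc /dev/rdsk/c0t0d0s2       # show the VTOC slices inside the Solaris2 partition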

I tried this twice, on my PC and in a VM on VirtualBox:
If you create two Solaris2 fdisk partitions on a disk, you can use the second
partition (as a raw partition), e.g. c0t0d0p2, as a member device of a new
pool.
But if you use the first partition of your system disk as a member device of
another zpool, GRUB will fail to load the boot stage when you reboot the
system.
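
For anyone who wants to reproduce the first case, the rough sequence is (a
sketch only; "tpool" is just a name I picked for the test, and the fdisk step
is interactive):
mor...@egoodbrac1:~# fdisk /dev/rdsk/c0t0d0p0     # add a second Solaris2 partition to the system disk
mor...@egoodbrac1:~# zpool create tpool c0t0d0p2  # the second partition joins a new pool with no problem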

If using c0t0d0p1 as a member device of a zpool does not break GRUB right
away, try destroying the new zpool you just created; after that the problem
is sure to appear!
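
In other words, something like this (pool name as in the example above):
mor...@egoodbrac1:~# zpool destroy dpool    # destroy the pool that was built on c0t0d0p1
mor...@egoodbrac1:~# init 6                 # on the next boot GRUB fails to load the boot stage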

The full test process is posted on ixpub.net:
http://home.ixpub.net/space.php?uid=10821989&do=blog&id=407468 
Q.Ho
21/08/2009 11:52 GMT+1