> In this setup that will install everything on the root mirror, so
> I will have to move things around later? Like /var and /usr or
> whatever I don't want on the root mirror?
Actually, you do want /usr and much of /var on the root pool: they
are integral parts of the "svc:/system/filesystem/local" service
needed to bring your system up to a usable state (regardless of
whether the other pools are working or not).
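You can check that service and its current state with, for example:

# svcs -l svc:/system/filesystem/local

If /usr or /var fail to mount, this service does not come online
and the boot stops far short of a usable system.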
 
Depending on the OS version, you can do manual data migrations
to separate datasets of the root pool, in order to keep some data
common between OEs or to enforce different quotas or compression
rules (a sketch of such a migration follows the vfstab example
below). For example, on SXCE and Solaris 10 (but not on oi_148a)
we have successfully split out many filesystems into such a layout
(the example below also illustrates multiple OEs):
 
# zfs list -o name,refer,quota,compressratio,canmount,mountpoint -t filesystem -r rpool
NAME                               REFER  QUOTA  RATIO  CANMOUNT  MOUNTPOINT
rpool                          7.92M   none  1.45x        on  /rpool
rpool/ROOT                       21K   none  1.38x    noauto  /rpool/ROOT
rpool/ROOT/snv_117              758M   none  1.00x    noauto  /
rpool/ROOT/snv_117/opt         27.1M   none  1.00x    noauto  /opt
rpool/ROOT/snv_117/usr          416M   none  1.00x    noauto  /usr
rpool/ROOT/snv_117/var          122M   none  1.00x    noauto  /var
rpool/ROOT/snv_129              930M   none  1.45x    noauto  /
rpool/ROOT/snv_129/opt          109M   none  2.70x    noauto  /opt
rpool/ROOT/snv_129/usr          509M   none  2.71x    noauto  /usr
rpool/ROOT/snv_129/var          288M   none  2.54x    noauto  /var
rpool/SHARED                     18K   none  3.36x    noauto  legacy
rpool/SHARED/var                 18K   none  3.36x    noauto  legacy
rpool/SHARED/var/adm           2.97M     5G  4.43x    noauto  legacy
rpool/SHARED/var/cores          118M     5G  3.44x    noauto  legacy
rpool/SHARED/var/crash         1.39G     5G  3.41x    noauto  legacy
rpool/SHARED/var/log            102M     5G  3.43x    noauto  legacy
rpool/SHARED/var/mail          66.4M   none  1.79x    noauto  legacy
rpool/SHARED/var/tmp             20K   none  1.00x    noauto  legacy
rpool/test                     50.5K   none  1.00x    noauto  /rpool/test
 
Mounts of /var/* components are done via /etc/vfstab lines like:
rpool/SHARED/var/adm    -       /var/adm        zfs     -       yes     -
rpool/SHARED/var/log    -       /var/log        zfs     -       yes     -
rpool/SHARED/var/mail   -       /var/mail       zfs     -       yes     -
rpool/SHARED/var/crash  -       /var/crash      zfs     -       yes     -
rpool/SHARED/var/cores  -       /var/cores      zfs     -       yes     -

The system paths /usr, /var and /opt of the current boot
environment, on the other hand, are mounted directly by SMF
services rather than via /etc/vfstab.
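A minimal sketch of migrating one such path, e.g. /var/log, into
the shared layout above (hypothetical steps and option values;
best done from single-user mode so nothing writes to /var/log
meanwhile):

# zfs create -o mountpoint=legacy -o quota=5g rpool/SHARED/var/log
# mount -F zfs rpool/SHARED/var/log /mnt
# cd /var/log && find . | cpio -pdm /mnt
# cd / && umount /mnt
# rm -rf /var/log/*

Then add the corresponding /etc/vfstab line as shown above, and
"mount /var/log" (or reboot) to switch over.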
 
 
> And then I just make a RAID10 like Jim was saying with the other
> 4x60 slices? How should I move mountpoints that aren't separate
> ZFS filesystems?
 
 

> 
> > The only conclusion you can draw from that is: First take it as
> > a given that you can't boot from a raidz volume. Given, you must
> > have one mirror.
> 
> Thanks, I will keep it in mind.
> 
> > Then you raidz all the remaining space that's capable of being
> > put into a raidz... And what you have left is a pair of unused
> > space, equal to the size of your boot volume. You either waste
> > that space, or you mirror it and put it into your tank.
...or use it as swap space :)
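For example, the two leftover 13Gb slices (hypothetical device
names below) can be added directly as raw swap devices:

# swap -a /dev/dsk/c2t0d0s1
# swap -a /dev/dsk/c2t1d0s1

with matching /etc/vfstab lines to make them permanent:

/dev/dsk/c2t0d0s1   -   -   swap   -   no   -
/dev/dsk/c2t1d0s1   -   -   swap   -   no   -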
 
> I didn't understand what you suggested about appending a 13G
> mirror to tank. Would that be something like RAID10 without
> actually being RAID10 so I could still boot from it? How would
> the system use it?
No, this would be an uneven stripe over a raid10 (or raidzN) bank
of 60Gb slices plus a 13Gb mirror. ZFS can do that too, although
such unbalanced pools are not recommended for performance reasons,
and zpool may refuse to build one unless forced (-f) on the
command line.
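Something like this sketch, with hypothetical pool and device
names (zpool may complain about the mismatched vdevs and require
the -f flag):

# zpool add -f tank mirror c2t0d0s4 c2t1d0s4

After that, ZFS simply stripes new writes over all top-level
vdevs, favoring those with more free space.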

And you cannot boot from any pool other than a mirror or a single
drive. Rationale: a single BIOS device must be sufficient to boot
the system, i.e. it must contain all the data needed to boot.
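(On x86 this also means installing the boot loader onto both
halves of the mirror, so that either disk alone can boot;
hypothetical device names:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
)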
 
> So RAID10 sounds like the only reasonable choice since there are
> an even number of slices. I mean, is RAIDZ1 even possible with 4
> slices?
Yes, it is possible with any number of slices, starting from 3.
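For example, with the four 60Gb slices (hypothetical device names)
a raidz1 pool would be:

# zpool create tank raidz1 c2t0d0s3 c2t1d0s3 c2t2d0s3 c2t3d0s3

giving roughly 3x60Gb of usable space with single-parity
redundancy.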

 
 
-- 

+============================================================+ 
|                                                            | 
| Климов Евгений,                                 Jim Klimov | 
| технический директор                                   CTO | 
| ЗАО "ЦОС и ВТ"                                  JSC COS&HT | 
|                                                            | 
| +7-903-7705859 (cellular)          mailto:jimkli...@cos.ru | 
|                        CC:ad...@cos.ru,jimkli...@gmail.com | 
+============================================================+ 
| ()  ascii ribbon campaign - against html mail              | 
| /\                        - against microsoft attachments  | 
+============================================================+