> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave U.Random
> 
> > My personal preference, assuming 4 disks, since the OS is mostly reads and
> > only a little bit of writes, is to create a 4-way mirrored 100G partition
> > for the OS, and the remaining 900G of each disk (or whatever) becomes
> > either a stripe of mirrors or raidz, as appropriate in your case, for the
> > storagepool.
> 
> Oh, you are talking about 1T drives and my servers are all 4x73G! So it's a
> fairly big deal since I have little storage to waste and still want to be
> able to survive losing one drive.

Well ...
Slice all 4 drives into a 13G slice and a 60G slice.
Use a mirror of two of the 13G slices for the rpool.
Use the 4x 60G slices in some way (raidz, or a stripe of mirrors) for tank.
Mirror the remaining two 13G slices and append that vdev to tank.
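In zpool terms, that layout might look something like the following. This is a sketch, not a recipe: the device names (c0t0d0 through c0t3d0) and slice numbers (s0 for the 13G slice, s1 for the 60G slice) are placeholders for whatever your controller and your format(1M) slicing actually produce, and on a real install the installer would create rpool for you.

```shell
# Hypothetical device names -- substitute your own.
# s0 = the 13G slice, s1 = the 60G slice, on each of the 4 disks.

# Boot pool: 2-way mirror of two 13G slices (boot must be a mirror).
zpool create rpool mirror c0t0d0s0 c0t1d0s0

# Data pool: raidz across the four 60G slices...
zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1

# ...plus the leftover pair of 13G slices as a second, mirrored vdev.
# zpool will warn about the mismatched replication level (mirror vdev
# added to a raidz pool), so -f is needed to force it.
zpool add -f tank mirror c0t2d0s0 c0t3d0s0
```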

That would use all your space as efficiently as possible while providing at
least one level of redundancy. The only sacrifice you're making is that a
raidz vdev and a mirror vdev have different performance characteristics, and
both end up in the same pool. For example, you might decide the ideal layout
for your workload is raidz... or mirrors... but your pool is a hybrid, so you
can't achieve the ideal performance characteristics no matter which type of
data workload you have.

That is a very small sacrifice, considering the constraints you're up against
as initial conditions: "I have 4x 73G disks," "I want to survive a single
disk failure," "I don't want to waste any space," and "My boot pool must be
included."

The only conclusion you can draw from that is: First, take it as a given that
you can't boot from a raidz vdev, so you must have one mirror for the boot
pool. Then you raidz all the remaining space that's capable of being put into
a raidz. What you have left is a pair of unused slices, each equal in size to
your boot volume. You either waste that space, or you mirror it and add it to
your tank.
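As a sanity check on the space math (round 13G/60G figures as above, ignoring metadata overhead):

```shell
# Usable space under the proposed layout, in GB (assumed round figures).
rpool=13                 # 2-way mirror of two 13G slices -> 13G usable
raidz_tank=$(( 3 * 60 )) # 4x 60G raidz: one slice's worth goes to parity
extra=13                 # leftover 13G pair, mirrored -> 13G usable
tank=$(( raidz_tank + extra ))
echo "rpool: ${rpool}G  tank: ${tank}G"   # rpool: 13G  tank: 193G
```

So of the raw 4x 73G = 292G, about 206G is usable, and every byte of it can survive a single disk failure.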

It's really the only solution, without changing your hardware or design
constraints.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss