Anton Rang wrote:
On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5.
This 2 TB logical drive is partitioned into 10 x 200 GB slices. I gave 4
of these slices to a Solaris 10 U2 machine and added each of them to a
concat (non-RAID) zpool as listed below:
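(For reference: a concat/stripe pool of four whole-disk LUNs like the one described above would typically be built with a single zpool create. The device names below are hypothetical, not taken from the original post.)

# each cXtYdZ stands for one 200 GB LUN exported by the 3511
zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0
# with no mirror/raidz keyword, ZFS dynamically stripes across all
# four vdevs and provides no redundancy of its own
zpool status tank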
This is certainly a supportable configuration. However, it's not an
optimal one.
What would be the optimal configuration that you recommend?
If you don't need ZFS redundancy, I would recommend taking a single
"slice" for your ZFS file system (e.g. 6 x 200 GB for other file
systems, and 1 x 800 GB for the ZFS pool). There would still be
contention between the various file systems, but at least ZFS would be
working with a single contiguous block of space on the array.
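(A sketch of the suggested layout, again with a made-up device name: present a single 800 GB slice from the array to the host and build the pool on it alone.)

# one 800 GB LUN carved from the array, used as the whole pool
zpool create tank c2t0d0
# the remaining 6 x 200 GB slices stay available for the other file systems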
Because of the implicit striping in ZFS, what you have right now is
analogous to taking a single disk, partitioning it into several
partitions, and then striping across those partitions. It works, and you
can use all of the space, but the rearrangement means that blocks which
are logically contiguous are no longer physically contiguous on disk,
which hurts performance substantially.
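(The analogy made concrete, using slices s0-s3 of one hypothetical disk c3t0d0:)

# striping a pool across four slices of the *same* physical disk
zpool create badpool c3t0d0s0 c3t0d0s1 c3t0d0s2 c3t0d0s3
# ZFS spreads writes across all four "devices", but they are just
# different regions of the same spindles, so logically sequential
# data ends up physically scattered and sequential I/O turns into seeks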
Hmm... But how is my current configuration (1 striped zpool consisting
of 4 x 200 GB LUNs from a hardware RAID 5 logical drive) "analogous to
taking a single disk, partitioning it into several partitions, then
striping across those partitions" if each 200 GB LUN is presented to
Solaris as a whole disk:
Current partition table (original):
Total disk sectors available: 390479838 + 16384 (reserved sectors)

Part         Tag   Flag     First Sector        Size    Last Sector
  0          usr    wm                34    186.20GB      390479838
  1   unassigned    wm                 0           0              0
  2   unassigned    wm                 0           0              0
  3   unassigned    wm                 0           0              0
  4   unassigned    wm                 0           0              0
  5   unassigned    wm                 0           0              0
  6   unassigned    wm                 0           0              0
  8     reserved    wm         390479839      8.00MB      390496222
Why is my current configuration not analogous to taking 4 disks and
striping across those 4 disks?
Yes, I am worried about the lack of redundancy, and I have some new
disks on order, at least one of which will be a hot spare.
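(If the spare ends up being managed by ZFS rather than by the 3511 itself, attaching it is a one-liner; the device name is hypothetical. Note that ZFS hot spares only take over for devices in redundant vdevs such as mirrors or raidz, so on a plain stripe a ZFS-level spare would be of little use.)

zpool add tank spare c2t4d0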
Glad to hear it.
Anton