My server used to use ODS/SDS/SVM/WhateverSunCallsItToday RAID 5. When
my old motherboard decided to flake out on me, SVM refused to recognize
the old RAID5 set. Fortunately, I resurrected my old parts long enough
to copy off almost all of my data onto a pair of 750GB disks.
I'm now running on ZFS, and am much happier. The motherboard is Intel
ICH7-based (so I eagerly await AHCI driver support - anyone know if it
made U4?). My boot disks are SVM-mirrored SATA disks off the
motherboard. My data array is 8 Hitachi HDS72505 SATA disks off of two
4-port SI3114-based PCI cards (I also eagerly await a supported PCI
Express internal SATA JBOD controller). The case is a Lian-Li PC-V100B,
which has 12 internal 3.5" drive bays. I'm running Solaris 10 U3 x86.
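For reference, a pool with this layout would have been created with something along these lines - this is a reconstruction from the status output below, not my actual shell history, and the mountpoint line is a guess from the /export/media path:

```shell
# Single-parity raidz across all 8 data disks; usable space is
# roughly 7/8 of raw since one disk's worth goes to parity.
zpool create media raidz1 \
    c2d0 c2d1 c3d0 c3d1 c4d0 c4d1 c5d0 c5d1

# Guessed from the /export/media path seen in the tests below.
zfs set mountpoint=/export/media media

# Off by default in Solaris 10; shown only to be explicit,
# since the write test below depends on it.
zfs set compression=off media
```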
carson:gandalf 0 $ zpool status
  pool: media
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0
            c5d0    ONLINE       0     0     0
            c5d1    ONLINE       0     0     0

errors: No known data errors
carson:gandalf 0 $ zpool list
NAME                   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
media                 3.62T  1.70T  1.92T    46%  ONLINE  -
A basic sequential write test (compression on this file system is off):
carson:gandalf 0 $ time dd if=/dev/zero of=/export/media/test bs=8192k count=1024
1024+0 records in
1024+0 records out
real 2m30.875s
user 0m0.010s
sys 0m10.853s
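Solaris dd doesn't print a rate, so for the record that's about 54 MB/s of user-visible write bandwidth - just unit conversion on the numbers above:

```python
# dd wrote 1024 records of 8192 KiB (8 MiB) = 8 GiB in 2m30.875s.
records = 1024
block_mib = 8192 / 1024           # bs=8192k -> 8 MiB per record
total_mib = records * block_mib   # 8192 MiB total
elapsed_s = 2 * 60 + 30.875       # "real" time reported by time(1)

throughput = total_mib / elapsed_s
print(f"{throughput:.1f} MB/s")   # ~54.3 MB/s
```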
During the write:
carson:gandalf 0 $ zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
media       1.71T  1.92T      0    570  21.6K  63.8M
media       1.71T  1.92T      0    592  51.1K  63.7M
media       1.71T  1.92T      0    573    613  64.9M
media       1.71T  1.92T      0    574      0  64.3M
media       1.71T  1.92T      0    573    204  64.7M
media       1.71T  1.92T      0    563      0  63.6M
media       1.71T  1.92T      0    594    613  67.5M
media       1.71T  1.92T      0    547      0  65.1M
media       1.71T  1.92T      0    558      0  59.2M
carson:gandalf 130 $ iostat -l 11 -x 5
                  extended device statistics
device     r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
cmdk0      1.8    0.0   11.2     0.0   0.0   0.0    4.8   0   1
cmdk1      0.4  200.6    7.3  9226.7  22.1   1.6  117.9  74  82
cmdk2      0.6  270.6   10.2  9339.4  26.4   1.8  104.0  85  94
cmdk3      0.6  198.4   10.3  9220.9  22.0   1.6  118.5  74  82
cmdk4      0.6  239.0   10.2  9287.0  26.1   1.8  116.4  84  92
cmdk5      0.8  200.6   23.1  9228.7  22.2   1.6  118.2  75  83
cmdk6      0.4  244.2    6.6  9271.8  26.4   1.8  115.1  84  92
cmdk7      0.6  197.8   10.3  9256.1  21.7   1.6  117.4  73  81
cmdk8      0.4  267.2    6.6  9288.6  26.1   1.8  104.3  84  92
cmdk9      0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
cmdk12     1.6    0.0   11.0     0.0   0.0   0.0    7.3   0   1
                  extended device statistics
device     r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
cmdk0      2.2    0.6   12.0     0.3   0.0   0.0    3.3   0   1
cmdk1      0.0  176.4    0.0  9350.9  22.0   1.5  133.7  74  80
cmdk2      0.0  246.2    0.0  9386.9  26.4   1.8  114.6  85  92
cmdk3      0.0  177.8    0.0  9346.9  22.2   1.5  133.4  74  80
cmdk4      0.0  223.0    0.0  9374.8  26.3   1.7  125.9  84  90
cmdk5      0.0  177.0    0.0  9317.6  22.3   1.6  134.5  75  80
cmdk6      0.0  228.8    0.0  9414.3  26.5   1.8  123.4  85  90
cmdk7      0.0  176.6    0.0  9322.8  22.1   1.5  133.7  74  80
cmdk8      0.0  247.2    0.0  9399.7  26.4   1.7  113.8  84  90
cmdk9      0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
cmdk12     2.4    0.2   15.2     0.1   0.0   0.0    4.6   0   1
                  extended device statistics
device     r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
cmdk0      0.0    1.8    0.0     1.0   0.0   0.0    0.3   0   0
cmdk1      0.0  183.4    0.0  9425.3  22.8   1.6  132.8  75  80
cmdk2      0.0  249.0    0.0  9477.2  27.2   1.8  116.4  86  92
cmdk3      0.0  183.8    0.0  9421.0  22.9   1.6  133.3  75  81
cmdk4      0.0  228.6    0.0  9473.5  27.2   1.8  126.8  85  91
cmdk5      0.0  187.4    0.0  9449.0  22.9   1.6  130.5  75  81
cmdk6      0.0  233.2    0.0  9471.1  27.3   1.8  124.6  86  91
cmdk7      0.0  183.6    0.0  9425.9  22.7   1.6  132.1  75  80
cmdk8      0.0  248.8    0.0  9480.9  27.0   1.8  115.7  85  91
cmdk9      0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
cmdk12     0.2    1.0    0.8     0.6   0.0   0.0    0.3   0   0
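As a back-of-envelope sanity check (assuming parity is spread evenly across the vdev): summing the per-disk kw/s from the second iostat interval gives the raw device-level rate, and scaling by 7/8 (one of the eight raidz1 disks' worth is parity) lands right on the ~64 MB/s that zpool iostat reports:

```python
# kw/s for cmdk1..cmdk8 from the second iostat interval above.
kw_per_disk = [9350.9, 9386.9, 9346.9, 9374.8,
               9317.6, 9414.3, 9322.8, 9399.7]

raw_kb = sum(kw_per_disk)   # total bytes/s actually hitting the disks
raw_mb = raw_kb / 1024      # ~73.2 MB/s at the device level
data_mb = raw_mb * 7 / 8    # minus the 1-in-8 parity overhead

print(f"raw: {raw_mb:.1f} MB/s, data: {data_mb:.1f} MB/s")
# raw: 73.2 MB/s, data: 64.0 MB/s
```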
carson:gandalf 0 $ vmstat 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd cd cd cd   in   sy   cs us sy id
 0 0 0 752692 123480 19 140 1 0 0 0 2 2 11 11 11 748 448 487 1 1 98
 0 0 0 662952 141588 83 925 394 0 0 0 0 44 168 177 166 18154 1425 4269 3 14 83
 0 0 0 640196 117944 0 22 69 0 0 0 0 7 246 311 253 6966 426 6150 2 18 80
 0 0 0 634956 112120 10 363 250 0 0 0 0 22 241 287 248 19963 1600 5770 5 24 71
 0 0 0 669984 146904 0 2 3 0 0 0 0 1 208 260 213 6446 400 5537 1 17 82
 0 0 0 583060 59364 48 525 530 29 29 0 0 50 229 288 228 6999 2398 5826 5 23 72
 2 0 0 583472 58980 2 50 80 0 0 0 0 12 181 239 180 8810 506 5162 2 21 77
Reads are slightly faster, but not much. I'm pretty sure I'm saturating
the el-cheapo PCI SATA controllers, but I can't _quite_ justify buying
an Areca PCI-Express 8-port controller for JBOD, and this motherboard
only has 32-bit PCI slots (though it does have two x8 and one x4 PCI
Express slots).
carson:gandalf 0 $ time dd if=/export/media/test of=/dev/null bs=8192k
1024+0 records in
1024+0 records out
real 1m48.725s
user 0m0.008s
sys 0m7.890s
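That's the same 8 GiB back in 1m48.7s, or about 75 MB/s versus 54 MB/s for writes. Both put the aggregate device traffic in the 70-75 MB/s range - below the 133 MB/s theoretical ceiling of 32-bit/33 MHz PCI, but well into the territory where a shared bus with two 4-port cards tends to top out in practice (the 133 MB/s figure is the bus spec; the saturation reading is my interpretation of the numbers):

```python
# Throughput of the two dd runs above, plus the PCI bus spec ceiling.
total_mib = 1024 * 8              # 1024 records x 8 MiB
read_s = 1 * 60 + 48.725          # dd read "real" time
write_s = 2 * 60 + 30.875         # dd write "real" time

read_mb = total_mib / read_s      # ~75.3 MB/s
write_mb = total_mib / write_s    # ~54.3 MB/s

# Theoretical maximum for 32-bit/33 MHz PCI; sustained throughput
# on a shared bus is usually well below this.
pci_ceiling_mb = 133
print(f"read {read_mb:.1f} MB/s, write {write_mb:.1f} MB/s, "
      f"PCI ceiling {pci_ceiling_mb} MB/s")
```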
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss