Hi.
A 3510 with two HW controllers, with one LUN configured as RAID-10 using 12 disks
in the head unit (73GB 15K FC-AL disks). Optimization set to random, stripe size 32KB.
Connected to a v440 over two links; however, only one link was used in the tests (no
MPxIO).
I used filebench's varmail test with default parameters and a 60s runtime; each
test was run twice.
The system is S10U2 with all available support patches, kernel rev -18.
ZFS filesystem on HW lun with atime=off:
IO Summary: 499078 ops 8248.0 ops/s, (1269/1269 r/w) 40.6mb/s, 314us cpu/op, 6.0ms latency
IO Summary: 503112 ops 8320.2 ops/s, (1280/1280 r/w) 41.0mb/s, 296us cpu/op, 5.9ms latency
Now the same LUN, but with the ZFS pool destroyed and a UFS filesystem created in its place.
UFS filesystem on HW lun with maxcontig=24 and noatime:
IO Summary: 401671 ops 6638.2 ops/s, (1021/1021 r/w) 32.7mb/s, 404us cpu/op, 7.5ms latency
IO Summary: 403194 ops 6664.5 ops/s, (1025/1025 r/w) 32.5mb/s, 406us cpu/op, 7.5ms latency
Now another v440 server (the same config) running snv_44, with several 3510 JBODs
connected over two FC loops; however, only one loop was used (no MPxIO). The same
disks (73GB 15K FC-AL).
ZFS filesystem with atime=off on a ZFS RAID-10 (six mirrored pairs) using 12 disks
from one enclosure:
IO Summary: 558331 ops 9244.1 ops/s, (1422/1422 r/w) 45.2mb/s, 312us cpu/op, 5.2ms latency
IO Summary: 537542 ops 8899.9 ops/s, (1369/1369 r/w) 43.5mb/s, 307us cpu/op, 5.4ms latency
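To put the three configurations side by side, here is a quick back-of-the-envelope
comparison of the averaged ops/s figures from the IO Summary lines above (this
calculation is mine, not part of the original results):

```shell
# Average the two 60s runs per configuration and print relative differences.
awk 'BEGIN {
    zfs_hw   = (8248.0 + 8320.2) / 2   # ZFS on the 3510 HW RAID-10 LUN
    ufs_hw   = (6638.2 + 6664.5) / 2   # UFS on the same LUN
    zfs_jbod = (9244.1 + 8899.9) / 2   # ZFS RAID-10 across 12 JBOD disks
    printf "ZFS/HW-RAID vs UFS/HW-RAID:  +%.1f%%\n", (zfs_hw / ufs_hw - 1) * 100
    printf "ZFS/JBOD vs ZFS/HW-RAID:     +%.1f%%\n", (zfs_jbod / zfs_hw - 1) * 100
}'
```

So on this workload ZFS beats UFS on the same HW LUN by roughly 25%, and ZFS on
plain JBOD disks comes out roughly another 9-10% ahead of ZFS on the HW RAID LUN.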
#### details ####
$ cat zfs-benhc.txt
v440, Generic_118833-18
filebench> set $dir=/se3510_hw_raid10_12disks/t1/
filebench> run 60
582: 42.107: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth =
0.5, mbytes=15
582: 42.108: Creating fileset bigfileset...
582: 45.262: Preallocated 812 of 1000 of fileset bigfileset in 4 seconds
582: 45.262: Creating/pre-allocating files
582: 45.262: Starting 1 filereader instances
586: 46.268: Starting 16 filereaderthread threads
582: 49.278: Running...
582: 109.787: Run took 60 seconds...
582: 109.801: Per-Operation Breakdown
closefile4 634ops/s 0.0mb/s 0.0ms/op 8us/op-cpu
readfile4 634ops/s 10.3mb/s 0.1ms/op 65us/op-cpu
openfile4 634ops/s 0.0mb/s 0.1ms/op 63us/op-cpu
closefile3 634ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile3 634ops/s 0.0mb/s 11.3ms/op 150us/op-cpu
appendfilerand3 635ops/s 9.9mb/s 0.1ms/op 132us/op-cpu
readfile3 635ops/s 10.4mb/s 0.1ms/op 66us/op-cpu
openfile3 635ops/s 0.0mb/s 0.1ms/op 63us/op-cpu
closefile2 635ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile2 635ops/s 0.0mb/s 11.9ms/op 137us/op-cpu
appendfilerand2 635ops/s 9.9mb/s 0.1ms/op 94us/op-cpu
createfile2 634ops/s 0.0mb/s 0.2ms/op 163us/op-cpu
deletefile1 634ops/s 0.0mb/s 0.1ms/op 86us/op-cpu
582: 109.801:
IO Summary: 499078 ops 8248.0 ops/s, (1269/1269 r/w) 40.6mb/s, 314us
cpu/op, 6.0ms latency
582: 109.801: Shutting down processes
filebench>
filebench> run 60
582: 190.655: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth
= 0.5, mbytes=15
582: 190.720: Removed any existing fileset bigfileset in 1 seconds
582: 190.720: Creating fileset bigfileset...
582: 193.259: Preallocated 786 of 1000 of fileset bigfileset in 3 seconds
582: 193.259: Creating/pre-allocating files
582: 193.259: Starting 1 filereader instances
591: 194.268: Starting 16 filereaderthread threads
582: 197.278: Running...
582: 257.748: Run took 60 seconds...
582: 257.761: Per-Operation Breakdown
closefile4 640ops/s 0.0mb/s 0.0ms/op 8us/op-cpu
readfile4 640ops/s 10.5mb/s 0.1ms/op 64us/op-cpu
openfile4 640ops/s 0.0mb/s 0.1ms/op 63us/op-cpu
closefile3 640ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile3 640ops/s 0.0mb/s 11.1ms/op 147us/op-cpu
appendfilerand3 640ops/s 10.0mb/s 0.1ms/op 124us/op-cpu
readfile3 640ops/s 10.5mb/s 0.1ms/op 67us/op-cpu
openfile3 640ops/s 0.0mb/s 0.1ms/op 63us/op-cpu
closefile2 640ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile2 640ops/s 0.0mb/s 11.9ms/op 139us/op-cpu
appendfilerand2 640ops/s 10.0mb/s 0.1ms/op 89us/op-cpu
createfile2 640ops/s 0.0mb/s 0.2ms/op 157us/op-cpu
deletefile1 640ops/s 0.0mb/s 0.1ms/op 87us/op-cpu
582: 257.761:
IO Summary: 503112 ops 8320.2 ops/s, (1280/1280 r/w) 41.0mb/s, 296us
cpu/op, 5.9ms latency
582: 257.761: Shutting down processes
filebench>
bash-3.00# zpool destroy se3510_hw_raid10_12disks
bash-3.00# newfs -C 24 /dev/rdsk/c3t40d0s0
newfs: construct a new file system /dev/rdsk/c3t40d0s0: (y/n)? y
Warning: 4164 sector(s) in last cylinder unallocated
/dev/rdsk/c3t40d0s0: 857083836 sectors in 139500 cylinders of 48 tracks, 128
sectors
418498.0MB in 8719 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
...............................................................................
................
super-block backups for last 10 cylinder groups at:
856130208, 856228640, 856327072, 856425504, 856523936, 856622368, 856720800,
856819232, 856917664, 857016096
bash-3.00# mount -o noatime /dev/dsk/c3t40d0s0 /mnt/
bash-3.00#
bash-3.00# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
632: 2.758: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully
loaded
632: 2.759: Usage: set $dir=<dir>
632: 2.759: set $filesize=<size> defaults to 16384
632: 2.759: set $nfiles=<value> defaults to 1000
632: 2.759: set $nthreads=<value> defaults to 16
632: 2.759: set $meaniosize=<value> defaults to 16384
632: 2.759: set $meandirwidth=<size> defaults to 1000000
632: 2.759: (sets mean dir width and dir depth is calculated as log (width,
nfiles)
632: 2.759: dirdepth therefore defaults to dir depth of 1 as in postmark
632: 2.759: set $meandir lower to increase depth beyond 1 if desired)
632: 2.759:
632: 2.759: run runtime (e.g. run 60)
632: 2.759: syntax error, token expected on line 51
filebench> set $dir=/mnt/
filebench> run 60
632: 7.699: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth =
0.5, mbytes=15
632: 7.722: Creating fileset bigfileset...
632: 10.611: Preallocated 812 of 1000 of fileset bigfileset in 3 seconds
632: 10.611: Creating/pre-allocating files
632: 10.611: Starting 1 filereader instances
633: 11.615: Starting 16 filereaderthread threads
632: 14.625: Running...
632: 75.135: Run took 60 seconds...
632: 75.149: Per-Operation Breakdown
closefile4 511ops/s 0.0mb/s 0.0ms/op 8us/op-cpu
readfile4 511ops/s 8.4mb/s 0.1ms/op 65us/op-cpu
openfile4 511ops/s 0.0mb/s 0.0ms/op 37us/op-cpu
closefile3 511ops/s 0.0mb/s 0.0ms/op 12us/op-cpu
fsyncfile3 511ops/s 0.0mb/s 9.7ms/op 168us/op-cpu
appendfilerand3 511ops/s 8.0mb/s 2.6ms/op 190us/op-cpu
readfile3 511ops/s 8.3mb/s 0.1ms/op 65us/op-cpu
openfile3 511ops/s 0.0mb/s 0.0ms/op 37us/op-cpu
closefile2 511ops/s 0.0mb/s 0.0ms/op 12us/op-cpu
fsyncfile2 511ops/s 0.0mb/s 8.4ms/op 152us/op-cpu
appendfilerand2 511ops/s 8.0mb/s 1.7ms/op 170us/op-cpu
createfile2 511ops/s 0.0mb/s 4.3ms/op 297us/op-cpu
deletefile1 511ops/s 0.0mb/s 3.1ms/op 145us/op-cpu
632: 75.149:
IO Summary: 401671 ops 6638.2 ops/s, (1021/1021 r/w) 32.7mb/s, 404us
cpu/op, 7.5ms latency
632: 75.149: Shutting down processes
filebench> run 60
632: 193.974: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth
= 0.5, mbytes=15
632: 194.874: Removed any existing fileset bigfileset in 1 seconds
632: 194.875: Creating fileset bigfileset...
632: 196.817: Preallocated 786 of 1000 of fileset bigfileset in 2 seconds
632: 196.817: Creating/pre-allocating files
632: 196.817: Starting 1 filereader instances
636: 197.825: Starting 16 filereaderthread threads
632: 200.835: Running...
632: 261.335: Run took 60 seconds...
632: 261.350: Per-Operation Breakdown
closefile4 513ops/s 0.0mb/s 0.0ms/op 8us/op-cpu
readfile4 513ops/s 8.2mb/s 0.1ms/op 64us/op-cpu
openfile4 513ops/s 0.0mb/s 0.0ms/op 38us/op-cpu
closefile3 513ops/s 0.0mb/s 0.0ms/op 12us/op-cpu
fsyncfile3 513ops/s 0.0mb/s 9.7ms/op 169us/op-cpu
appendfilerand3 513ops/s 8.0mb/s 2.7ms/op 189us/op-cpu
readfile3 513ops/s 8.3mb/s 0.1ms/op 65us/op-cpu
openfile3 513ops/s 0.0mb/s 0.0ms/op 38us/op-cpu
closefile2 513ops/s 0.0mb/s 0.0ms/op 12us/op-cpu
fsyncfile2 513ops/s 0.0mb/s 8.4ms/op 154us/op-cpu
appendfilerand2 513ops/s 8.0mb/s 1.7ms/op 165us/op-cpu
createfile2 513ops/s 0.0mb/s 4.2ms/op 301us/op-cpu
deletefile1 513ops/s 0.0mb/s 3.2ms/op 148us/op-cpu
632: 261.350:
IO Summary: 403194 ops 6664.5 ops/s, (1025/1025 r/w) 32.5mb/s, 406us
cpu/op, 7.5ms latency
632: 261.350: Shutting down processes
filebench>
v440, snv_44
bash-3.00# zpool status
pool: zfs_raid10_12disks
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zfs_raid10_12disks ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t16d0 ONLINE 0 0 0
c2t17d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t18d0 ONLINE 0 0 0
c2t19d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t20d0 ONLINE 0 0 0
c2t21d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t22d0 ONLINE 0 0 0
c2t23d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t24d0 ONLINE 0 0 0
c2t25d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c2t26d0 ONLINE 0 0 0
c2t27d0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
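For reference, the six mirrored pairs in the status output above would have been
created along these lines (a sketch, not taken from the original session; the
atime=off setting is the one mentioned earlier):

```shell
# Sketch: recreate the zfs_raid10_12disks pool layout shown by zpool status,
# six two-way mirrors striped together (ZFS "RAID-10").
zpool create zfs_raid10_12disks \
    mirror c2t16d0 c2t17d0 \
    mirror c2t18d0 c2t19d0 \
    mirror c2t20d0 c2t21d0 \
    mirror c2t22d0 c2t23d0 \
    mirror c2t24d0 c2t25d0 \
    mirror c2t26d0 c2t27d0
zfs set atime=off zfs_raid10_12disks
```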
bash-3.00# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
393: 6.283: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully
loaded
393: 6.283: Usage: set $dir=<dir>
393: 6.283: set $filesize=<size> defaults to 16384
393: 6.283: set $nfiles=<value> defaults to 1000
393: 6.283: set $nthreads=<value> defaults to 16
393: 6.283: set $meaniosize=<value> defaults to 16384
393: 6.284: set $meandirwidth=<size> defaults to 1000000
393: 6.284: (sets mean dir width and dir depth is calculated as log (width,
nfiles)
393: 6.284: dirdepth therefore defaults to dir depth of 1 as in postmark
393: 6.284: set $meandir lower to increase depth beyond 1 if desired)
393: 6.284:
393: 6.284: run runtime (e.g. run 60)
393: 6.284: syntax error, token expected on line 51
filebench> set $dir=/zfs_raid10_12disks/t1/
filebench> run 60
393: 18.766: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth =
0.5, mbytes=15
393: 18.767: Creating fileset bigfileset...
393: 23.020: Preallocated 812 of 1000 of fileset bigfileset in 5 seconds
393: 23.020: Creating/pre-allocating files
393: 23.020: Starting 1 filereader instances
394: 24.030: Starting 16 filereaderthread threads
393: 27.040: Running...
393: 87.440: Run took 60 seconds...
393: 87.453: Per-Operation Breakdown
closefile4 711ops/s 0.0mb/s 0.0ms/op 9us/op-cpu
readfile4 711ops/s 11.4mb/s 0.1ms/op 62us/op-cpu
openfile4 711ops/s 0.0mb/s 0.1ms/op 65us/op-cpu
closefile3 711ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile3 711ops/s 0.0mb/s 10.0ms/op 148us/op-cpu
appendfilerand3 711ops/s 11.1mb/s 0.1ms/op 129us/op-cpu
readfile3 711ops/s 11.6mb/s 0.1ms/op 63us/op-cpu
openfile3 711ops/s 0.0mb/s 0.1ms/op 65us/op-cpu
closefile2 711ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile2 711ops/s 0.0mb/s 10.0ms/op 115us/op-cpu
appendfilerand2 711ops/s 11.1mb/s 0.1ms/op 97us/op-cpu
createfile2 711ops/s 0.0mb/s 0.2ms/op 163us/op-cpu
deletefile1 711ops/s 0.0mb/s 0.1ms/op 89us/op-cpu
393: 87.454:
IO Summary: 558331 ops 9244.1 ops/s, (1422/1422 r/w) 45.2mb/s, 312us
cpu/op, 5.2ms latency
393: 87.454: Shutting down processes
filebench> run 60
393: 118.054: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth
= 0.5, mbytes=15
393: 118.108: Removed any existing fileset bigfileset in 1 seconds
393: 118.108: Creating fileset bigfileset...
393: 122.619: Preallocated 786 of 1000 of fileset bigfileset in 5 seconds
393: 122.619: Creating/pre-allocating files
393: 122.619: Starting 1 filereader instances
401: 123.630: Starting 16 filereaderthread threads
393: 126.640: Running...
393: 187.040: Run took 60 seconds...
393: 187.053: Per-Operation Breakdown
closefile4 685ops/s 0.0mb/s 0.0ms/op 8us/op-cpu
readfile4 685ops/s 11.1mb/s 0.1ms/op 62us/op-cpu
openfile4 685ops/s 0.0mb/s 0.1ms/op 65us/op-cpu
closefile3 685ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile3 685ops/s 0.0mb/s 10.5ms/op 150us/op-cpu
appendfilerand3 685ops/s 10.7mb/s 0.1ms/op 124us/op-cpu
readfile3 685ops/s 11.1mb/s 0.1ms/op 60us/op-cpu
openfile3 685ops/s 0.0mb/s 0.1ms/op 65us/op-cpu
closefile2 685ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
fsyncfile2 685ops/s 0.0mb/s 10.4ms/op 113us/op-cpu
appendfilerand2 685ops/s 10.7mb/s 0.1ms/op 93us/op-cpu
createfile2 685ops/s 0.0mb/s 0.2ms/op 156us/op-cpu
deletefile1 685ops/s 0.0mb/s 0.1ms/op 89us/op-cpu
393: 187.054:
IO Summary: 537542 ops 8899.9 ops/s, (1369/1369 r/w) 43.5mb/s, 307us
cpu/op, 5.4ms latency
393: 187.054: Shutting down processes
filebench>
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss