Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch logic?
These are great results for random I/O, I wonder how the sequential I/O looks? Of course you'll not get great results for sequential I/O on the 3510 :-)

- Luke

Sent from my GoodLink synchronized handheld (www.good.com)

-----Original Message-----
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 08, 2006 10:15 AM Eastern Standard Time
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

Hi.

This time, some RAID5/RAID-Z benchmarks. I connected the 3510 head unit with one link to the same server the 3510 JBODs are attached to (using a second link). snv_44 is used; the server is a v440.

I also tried increasing the maximum number of pending I/O requests for the HW RAID5 LUN and verified with DTrace that the larger value is really used - it is, but it doesn't change the benchmark numbers.

1. ZFS on HW RAID5 with 6 disks, atime=off

   IO Summary: 444386 ops 7341.7 ops/s, (1129/1130 r/w) 36.1mb/s, 297us cpu/op, 6.6ms latency
   IO Summary: 438649 ops 7247.0 ops/s, (1115/1115 r/w) 35.5mb/s, 293us cpu/op, 6.7ms latency

2. ZFS with software RAID-Z with 6 disks, atime=off

   IO Summary: 457505 ops 7567.3 ops/s, (1164/1164 r/w) 37.2mb/s, 340us cpu/op, 6.4ms latency
   IO Summary: 457767 ops 7567.8 ops/s, (1164/1165 r/w) 36.9mb/s, 340us cpu/op, 6.4ms latency

3. UFS on HW RAID5 with 6 disks, noatime

   IO Summary: 62776 ops 1037.3 ops/s, (160/160 r/w) 5.5mb/s, 481us cpu/op, 49.7ms latency
   IO Summary: 63661 ops 1051.6 ops/s, (162/162 r/w) 5.4mb/s, 477us cpu/op, 49.1ms latency

4. UFS on HW RAID5 with 6 disks, noatime, S10U2 + patches (the same filesystem mounted as in 3)

   IO Summary: 393167 ops 6503.1 ops/s, (1000/1001 r/w) 32.4mb/s, 405us cpu/op, 7.5ms latency
   IO Summary: 394525 ops 6521.2 ops/s, (1003/1003 r/w) 32.0mb/s, 407us cpu/op, 7.7ms latency

5. ZFS with software RAID-Z with 6 disks, atime=off, S10U2 + patches (the same disks as in test #2)

   IO Summary: 461708 ops 7635.5 ops/s, (1175/1175 r/w) 37.4mb/s, 330us cpu/op, 6.4ms latency
   IO Summary: 457649 ops 7562.1 ops/s, (1163/1164 r/w) 37.0mb/s, 328us cpu/op, 6.5ms latency

In this benchmark, software RAID-5 with ZFS (RAID-Z, to be precise) gives slightly better performance than hardware RAID-5. ZFS is also faster than UFS on HW RAID in both cases (HW and SW RAID). Something is wrong with UFS on snv_44 - the same UFS filesystem on S10U2 works as expected. ZFS on S10U2 gives the same results in this benchmark as on snv_44.

#### details ####

// c2t43d0 is a HW RAID5 LUN made of 6 disks
// the array is configured for random I/O

# zpool create HW_RAID5_6disks c2t43d0
#
# zpool create -f zfs_raid5_6disks raidz c3t16d0 c3t17d0 c3t18d0 c3t19d0 c3t20d0 c3t21d0
#
# zfs set atime=off zfs_raid5_6disks HW_RAID5_6disks
#
# zfs create HW_RAID5_6disks/t1
# zfs create zfs_raid5_6disks/t1
#
# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
450: 3.175: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully loaded
450: 3.199: Usage: set $dir=<dir>
450: 3.199:        set $filesize=<size>     defaults to 16384
450: 3.199:        set $nfiles=<value>      defaults to 1000
450: 3.199:        set $nthreads=<value>    defaults to 16
450: 3.199:        set $meaniosize=<value>  defaults to 16384
450: 3.199:        set $meandirwidth=<size> defaults to 1000000
450: 3.199:        (sets mean dir width and dir depth is calculated as log (width, nfiles)
450: 3.199:        dirdepth therefore defaults to dir depth of 1 as in postmark
450: 3.199:        set $meandir lower to increase depth beyond 1 if desired)
450: 3.199:
450: 3.199:        run runtime (e.g. run 60)
450: 3.199: syntax error, token expected on line 51
filebench> set $dir=/HW_RAID5_6disks/t1
filebench> run 60
450: 13.320: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
450: 13.321: Creating fileset bigfileset...
450: 15.514: Preallocated 812 of 1000 of fileset bigfileset in 3 seconds
450: 15.515: Creating/pre-allocating files
450: 15.515: Starting 1 filereader instances
451: 16.525: Starting 16 filereaderthread threads
450: 19.535: Running...
450: 80.065: Run took 60 seconds...
450: 80.079: Per-Operation Breakdown
closefile4        565ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         565ops/s   9.2mb/s    0.1ms/op     60us/op-cpu
openfile4         565ops/s   0.0mb/s    0.1ms/op     64us/op-cpu
closefile3        565ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        565ops/s   0.0mb/s   12.9ms/op    147us/op-cpu
appendfilerand3   565ops/s   8.8mb/s    0.1ms/op    126us/op-cpu
readfile3         565ops/s   9.2mb/s    0.1ms/op     60us/op-cpu
openfile3         565ops/s   0.0mb/s    0.1ms/op     63us/op-cpu
closefile2        565ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        565ops/s   0.0mb/s   12.9ms/op    102us/op-cpu
appendfilerand2   565ops/s   8.8mb/s    0.1ms/op     92us/op-cpu
createfile2       565ops/s   0.0mb/s    0.2ms/op    154us/op-cpu
deletefile1       565ops/s   0.0mb/s    0.1ms/op     86us/op-cpu
450: 80.079: IO Summary: 444386 ops 7341.7 ops/s, (1129/1130 r/w) 36.1mb/s, 297us cpu/op, 6.6ms latency
450: 80.079: Shutting down processes
filebench> run 60
450: 115.945: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
450: 115.998: Removed any existing fileset bigfileset in 1 seconds
450: 115.998: Creating fileset bigfileset...
450: 118.049: Preallocated 786 of 1000 of fileset bigfileset in 3 seconds
450: 118.049: Creating/pre-allocating files
450: 118.049: Starting 1 filereader instances
454: 119.055: Starting 16 filereaderthread threads
450: 122.065: Running...
450: 182.595: Run took 60 seconds...
450: 182.608: Per-Operation Breakdown
closefile4        557ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         557ops/s   9.0mb/s    0.1ms/op     59us/op-cpu
openfile4         557ops/s   0.0mb/s    0.1ms/op     64us/op-cpu
closefile3        557ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        557ops/s   0.0mb/s   13.0ms/op    149us/op-cpu
appendfilerand3   558ops/s   8.7mb/s    0.1ms/op    120us/op-cpu
readfile3         558ops/s   9.0mb/s    0.1ms/op     59us/op-cpu
openfile3         558ops/s   0.0mb/s    0.1ms/op     64us/op-cpu
closefile2        558ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        558ops/s   0.0mb/s   13.2ms/op    100us/op-cpu
appendfilerand2   558ops/s   8.7mb/s    0.1ms/op     90us/op-cpu
createfile2       557ops/s   0.0mb/s    0.1ms/op    151us/op-cpu
deletefile1       557ops/s   0.0mb/s    0.1ms/op     86us/op-cpu
450: 182.609: IO Summary: 438649 ops 7247.0 ops/s, (1115/1115 r/w) 35.5mb/s, 293us cpu/op, 6.7ms latency
450: 182.609: Shutting down processes
filebench> quit
# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
458: 2.590: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully loaded
458: 2.591: Usage: set $dir=<dir>
458: 2.591:        set $filesize=<size>     defaults to 16384
458: 2.591:        set $nfiles=<value>      defaults to 1000
458: 2.591:        set $nthreads=<value>    defaults to 16
458: 2.591:        set $meaniosize=<value>  defaults to 16384
458: 2.591:        set $meandirwidth=<size> defaults to 1000000
458: 2.591:        (sets mean dir width and dir depth is calculated as log (width, nfiles)
458: 2.591:        dirdepth therefore defaults to dir depth of 1 as in postmark
458: 2.592:        set $meandir lower to increase depth beyond 1 if desired)
458: 2.592:
458: 2.592:        run runtime (e.g. run 60)
458: 2.592: syntax error, token expected on line 51
filebench> set $dir=/zfs_raid5_6disks/t1
filebench> run 60
458: 9.251: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
458: 9.251: Creating fileset bigfileset...
458: 14.232: Preallocated 812 of 1000 of fileset bigfileset in 5 seconds
458: 14.232: Creating/pre-allocating files
458: 14.232: Starting 1 filereader instances
459: 15.235: Starting 16 filereaderthread threads
458: 18.245: Running...
458: 78.704: Run took 60 seconds...
458: 78.718: Per-Operation Breakdown
closefile4        582ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         582ops/s   9.6mb/s    0.1ms/op     62us/op-cpu
openfile4         582ops/s   0.0mb/s    0.1ms/op     67us/op-cpu
closefile3        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        582ops/s   0.0mb/s   12.4ms/op    206us/op-cpu
appendfilerand3   582ops/s   9.1mb/s    0.1ms/op    125us/op-cpu
readfile3         582ops/s   9.5mb/s    0.1ms/op     61us/op-cpu
openfile3         582ops/s   0.0mb/s    0.1ms/op     66us/op-cpu
closefile2        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        582ops/s   0.0mb/s   12.4ms/op    132us/op-cpu
appendfilerand2   582ops/s   9.1mb/s    0.1ms/op     94us/op-cpu
createfile2       582ops/s   0.0mb/s    0.2ms/op    160us/op-cpu
deletefile1       582ops/s   0.0mb/s    0.1ms/op     89us/op-cpu
458: 78.718: IO Summary: 457505 ops 7567.3 ops/s, (1164/1164 r/w) 37.2mb/s, 340us cpu/op, 6.4ms latency
458: 78.718: Shutting down processes
filebench> run 60
458: 98.396: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
458: 98.449: Removed any existing fileset bigfileset in 1 seconds
458: 98.449: Creating fileset bigfileset...
458: 103.837: Preallocated 786 of 1000 of fileset bigfileset in 6 seconds
458: 103.837: Creating/pre-allocating files
458: 103.837: Starting 1 filereader instances
468: 104.845: Starting 16 filereaderthread threads
458: 107.854: Running...
458: 168.345: Run took 60 seconds...
458: 168.358: Per-Operation Breakdown
closefile4        582ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         582ops/s   9.4mb/s    0.1ms/op     61us/op-cpu
openfile4         582ops/s   0.0mb/s    0.1ms/op     66us/op-cpu
closefile3        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        582ops/s   0.0mb/s   12.5ms/op    207us/op-cpu
appendfilerand3   582ops/s   9.1mb/s    0.1ms/op    124us/op-cpu
readfile3         582ops/s   9.4mb/s    0.1ms/op     61us/op-cpu
openfile3         582ops/s   0.0mb/s    0.1ms/op     66us/op-cpu
closefile2        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        582ops/s   0.0mb/s   12.3ms/op    132us/op-cpu
appendfilerand2   582ops/s   9.1mb/s    0.1ms/op     94us/op-cpu
createfile2       582ops/s   0.0mb/s    0.2ms/op    156us/op-cpu
deletefile1       582ops/s   0.0mb/s    0.1ms/op     89us/op-cpu
458: 168.359: IO Summary: 457767 ops 7567.8 ops/s, (1164/1165 r/w) 36.9mb/s, 340us cpu/op, 6.4ms latency
458: 168.359: Shutting down processes
filebench>
# zpool destroy HW_RAID5_6disks
# newfs -C 20 /dev/rdsk/c2t43d0s0
newfs: construct a new file system /dev/rdsk/c2t43d0s0: (y/n)? y
Warning: 68 sector(s) in last cylinder unallocated
/dev/rdsk/c2t43d0s0: 714233788 sectors in 116249 cylinders of 48 tracks, 128 sectors
        348747.0MB in 7266 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
..................................................................
super-block backups for last 10 cylinder groups at:
 713296928, 713395360, 713493792, 713592224, 713690656, 713789088, 713887520,
 713985952, 714084384, 714182816
#
# mount -o noatime /dev/dsk/c2t43d0s0 /mnt
#
# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
546: 2.573: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully loaded
546: 2.573: Usage: set $dir=<dir>
546: 2.573:        set $filesize=<size>     defaults to 16384
546: 2.573:        set $nfiles=<value>      defaults to 1000
546: 2.574:        set $nthreads=<value>    defaults to 16
546: 2.574:        set $meaniosize=<value>  defaults to 16384
546: 2.574:        set $meandirwidth=<size> defaults to 1000000
546: 2.574:        (sets mean dir width and dir depth is calculated as log (width, nfiles)
546: 2.574:        dirdepth therefore defaults to dir depth of 1 as in postmark
546: 2.574:        set $meandir lower to increase depth beyond 1 if desired)
546: 2.574:
546: 2.574:        run runtime (e.g. run 60)
546: 2.574: syntax error, token expected on line 51
filebench> set $dir=/mnt
filebench> run 60
546: 22.095: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
546: 22.109: Creating fileset bigfileset...
546: 24.577: Preallocated 812 of 1000 of fileset bigfileset in 3 seconds
546: 24.577: Creating/pre-allocating files
546: 24.577: Starting 1 filereader instances
548: 25.584: Starting 16 filereaderthread threads
546: 28.594: Running...
546: 89.114: Run took 60 seconds...
546: 89.128: Per-Operation Breakdown
closefile4         80ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4          80ops/s   1.5mb/s    0.1ms/op     76us/op-cpu
openfile4          80ops/s   0.0mb/s    0.0ms/op     39us/op-cpu
closefile3         80ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile3         80ops/s   0.0mb/s   29.2ms/op    107us/op-cpu
appendfilerand3    80ops/s   1.2mb/s   30.4ms/op    189us/op-cpu
readfile3          80ops/s   1.5mb/s    0.1ms/op     73us/op-cpu
openfile3          80ops/s   0.0mb/s    0.0ms/op     38us/op-cpu
closefile2         80ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile2         80ops/s   0.0mb/s   30.8ms/op    125us/op-cpu
appendfilerand2    80ops/s   1.2mb/s   22.6ms/op    173us/op-cpu
createfile2        80ops/s   0.0mb/s   37.2ms/op    224us/op-cpu
deletefile1        80ops/s   0.0mb/s   48.5ms/op    108us/op-cpu
546: 89.128: IO Summary: 62776 ops 1037.3 ops/s, (160/160 r/w) 5.5mb/s, 481us cpu/op, 49.7ms latency
546: 89.128: Shutting down processes
filebench> run 60
546: 738.541: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
546: 739.455: Removed any existing fileset bigfileset in 1 seconds
546: 739.455: Creating fileset bigfileset...
546: 741.387: Preallocated 786 of 1000 of fileset bigfileset in 2 seconds
546: 741.387: Creating/pre-allocating files
546: 741.387: Starting 1 filereader instances
557: 742.394: Starting 16 filereaderthread threads
546: 745.404: Running...
546: 805.944: Run took 60 seconds...
546: 805.958: Per-Operation Breakdown
closefile4         81ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4          81ops/s   1.5mb/s    0.1ms/op     73us/op-cpu
openfile4          81ops/s   0.0mb/s    0.0ms/op     38us/op-cpu
closefile3         81ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3         81ops/s   0.0mb/s   27.8ms/op    105us/op-cpu
appendfilerand3    81ops/s   1.3mb/s   28.6ms/op    187us/op-cpu
readfile3          81ops/s   1.4mb/s    0.1ms/op     70us/op-cpu
openfile3          81ops/s   0.0mb/s    0.0ms/op     37us/op-cpu
closefile2         81ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile2         81ops/s   0.0mb/s   29.9ms/op    124us/op-cpu
appendfilerand2    81ops/s   1.3mb/s   23.6ms/op    171us/op-cpu
createfile2        81ops/s   0.0mb/s   38.9ms/op    220us/op-cpu
deletefile1        81ops/s   0.0mb/s   47.4ms/op    109us/op-cpu
546: 805.958: IO Summary: 63661 ops 1051.6 ops/s, (162/162 r/w) 5.4mb/s, 477us cpu/op, 49.1ms latency
546: 805.958: Shutting down processes
filebench>

#### solaris 10 06/06 + patches, server with the same hardware specs ####

##### test #4 #####

# mount -o noatime /dev/dsk/c3t40d0s0 /mnt
# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
1384: 3.678: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully loaded
1384: 3.679: Usage: set $dir=<dir>
1384: 3.679:        set $filesize=<size>     defaults to 16384
1384: 3.679:        set $nfiles=<value>      defaults to 1000
1384: 3.679:        set $nthreads=<value>    defaults to 16
1384: 3.679:        set $meaniosize=<value>  defaults to 16384
1384: 3.679:        set $meandirwidth=<size> defaults to 1000000
1384: 3.679:        (sets mean dir width and dir depth is calculated as log (width, nfiles)
1384: 3.679:        dirdepth therefore defaults to dir depth of 1 as in postmark
1384: 3.679:        set $meandir lower to increase depth beyond 1 if desired)
1384: 3.680:
1384: 3.680:        run runtime (e.g. run 60)
1384: 3.680: syntax error, token expected on line 51
filebench> set $dir=/mnt
filebench> run 60
1384: 10.872: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
1384: 11.858: Removed any existing fileset bigfileset in 1 seconds
1384: 11.859: Creating fileset bigfileset...
1384: 14.221: Preallocated 812 of 1000 of fileset bigfileset in 3 seconds
1384: 14.221: Creating/pre-allocating files
1384: 14.221: Starting 1 filereader instances
1387: 15.231: Starting 16 filereaderthread threads
1384: 18.241: Running...
1384: 78.701: Run took 60 seconds...
1384: 78.715: Per-Operation Breakdown
closefile4        500ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         500ops/s   8.4mb/s    0.1ms/op     65us/op-cpu
openfile4         500ops/s   0.0mb/s    0.0ms/op     36us/op-cpu
closefile3        500ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile3        500ops/s   0.0mb/s    9.7ms/op    169us/op-cpu
appendfilerand3   500ops/s   7.8mb/s    2.6ms/op    187us/op-cpu
readfile3         500ops/s   8.3mb/s    0.1ms/op     64us/op-cpu
openfile3         500ops/s   0.0mb/s    0.0ms/op     36us/op-cpu
closefile2        500ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile2        500ops/s   0.0mb/s    8.4ms/op    154us/op-cpu
appendfilerand2   500ops/s   7.8mb/s    1.7ms/op    168us/op-cpu
createfile2       500ops/s   0.0mb/s    4.3ms/op    298us/op-cpu
deletefile1       500ops/s   0.0mb/s    3.2ms/op    144us/op-cpu
1384: 78.715: IO Summary: 393167 ops 6503.1 ops/s, (1000/1001 r/w) 32.4mb/s, 405us cpu/op, 7.5ms latency
1384: 78.715: Shutting down processes
filebench> run 60
1384: 94.146: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
1384: 95.767: Removed any existing fileset bigfileset in 2 seconds
1384: 95.768: Creating fileset bigfileset...
1384: 97.972: Preallocated 786 of 1000 of fileset bigfileset in 3 seconds
1384: 97.973: Creating/pre-allocating files
1384: 97.973: Starting 1 filereader instances
1393: 98.981: Starting 16 filereaderthread threads
1384: 101.991: Running...
1384: 162.491: Run took 60 seconds...
1384: 162.505: Per-Operation Breakdown
closefile4        502ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         502ops/s   8.1mb/s    0.1ms/op     64us/op-cpu
openfile4         502ops/s   0.0mb/s    0.0ms/op     37us/op-cpu
closefile3        502ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile3        502ops/s   0.0mb/s    9.9ms/op    172us/op-cpu
appendfilerand3   502ops/s   7.8mb/s    2.7ms/op    189us/op-cpu
readfile3         502ops/s   8.2mb/s    0.1ms/op     65us/op-cpu
openfile3         502ops/s   0.0mb/s    0.0ms/op     37us/op-cpu
closefile2        502ops/s   0.0mb/s    0.0ms/op     12us/op-cpu
fsyncfile2        502ops/s   0.0mb/s    8.6ms/op    156us/op-cpu
appendfilerand2   502ops/s   7.8mb/s    1.7ms/op    166us/op-cpu
createfile2       502ops/s   0.0mb/s    4.4ms/op    301us/op-cpu
deletefile1       502ops/s   0.0mb/s    3.2ms/op    148us/op-cpu
1384: 162.506: IO Summary: 394525 ops 6521.2 ops/s, (1003/1003 r/w) 32.0mb/s, 407us cpu/op, 7.7ms latency
1384: 162.506: Shutting down processes
filebench>

#### test 5 ####

// these are the same disks as used in test #2

# zpool create zfs_raid5_6disks raidz c2t16d0 c2t17d0 c2t18d0 c2t19d0 c2t20d0 c2t21d0
# zfs set atime=off zfs_raid5_6disks
# zfs create zfs_raid5_6disks/t1
#
# /opt/filebench/bin/sparcv9/filebench
filebench> load varmail
1437: 3.762: Varmail Version 1.24 2005/06/22 08:08:30 personality successfully loaded
1437: 3.762: Usage: set $dir=<dir>
1437: 3.762:        set $filesize=<size>     defaults to 16384
1437: 3.762:        set $nfiles=<value>      defaults to 1000
1437: 3.763:        set $nthreads=<value>    defaults to 16
1437: 3.763:        set $meaniosize=<value>  defaults to 16384
1437: 3.763:        set $meandirwidth=<size> defaults to 1000000
1437: 3.763:        (sets mean dir width and dir depth is calculated as log (width, nfiles)
1437: 3.763:        dirdepth therefore defaults to dir depth of 1 as in postmark
1437: 3.763:        set $meandir lower to increase depth beyond 1 if desired)
1437: 3.763:
1437: 3.763:        run runtime (e.g. run 60)
1437: 3.763: syntax error, token expected on line 51
filebench> set $dir=/zfs_raid5_6disks/t1
filebench> run 60
1437: 13.102: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
1437: 13.102: Creating fileset bigfileset...
1437: 20.092: Preallocated 812 of 1000 of fileset bigfileset in 7 seconds
1437: 20.092: Creating/pre-allocating files
1437: 20.092: Starting 1 filereader instances
1438: 21.095: Starting 16 filereaderthread threads
1437: 24.105: Running...
1437: 84.575: Run took 60 seconds...
1437: 84.589: Per-Operation Breakdown
closefile4        587ops/s   0.0mb/s    0.0ms/op      9us/op-cpu
readfile4         587ops/s   9.5mb/s    0.1ms/op     63us/op-cpu
openfile4         587ops/s   0.0mb/s    0.1ms/op     63us/op-cpu
closefile3        587ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        587ops/s   0.0mb/s   12.1ms/op    196us/op-cpu
appendfilerand3   587ops/s   9.2mb/s    0.1ms/op    123us/op-cpu
readfile3         587ops/s   9.5mb/s    0.1ms/op     64us/op-cpu
openfile3         587ops/s   0.0mb/s    0.1ms/op     63us/op-cpu
closefile2        587ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        587ops/s   0.0mb/s   12.6ms/op    145us/op-cpu
appendfilerand2   588ops/s   9.2mb/s    0.1ms/op     93us/op-cpu
createfile2       587ops/s   0.0mb/s    0.2ms/op    166us/op-cpu
deletefile1       587ops/s   0.0mb/s    0.1ms/op     90us/op-cpu
1437: 84.589: IO Summary: 461708 ops 7635.5 ops/s, (1175/1175 r/w) 37.4mb/s, 330us cpu/op, 6.4ms latency
1437: 84.589: Shutting down processes
filebench> run 60
1437: 136.114: Fileset bigfileset: 1000 files, avg dir = 1000000.0, avg depth = 0.5, mbytes=15
1437: 136.171: Removed any existing fileset bigfileset in 1 seconds
1437: 136.172: Creating fileset bigfileset...
1437: 141.880: Preallocated 786 of 1000 of fileset bigfileset in 6 seconds
1437: 141.880: Creating/pre-allocating files
1437: 141.880: Starting 1 filereader instances
1441: 142.885: Starting 16 filereaderthread threads
1437: 145.895: Running...
1437: 206.415: Run took 60 seconds...
1437: 206.429: Per-Operation Breakdown
closefile4        582ops/s   0.0mb/s    0.0ms/op      8us/op-cpu
readfile4         582ops/s   9.4mb/s    0.1ms/op     63us/op-cpu
openfile4         582ops/s   0.0mb/s    0.1ms/op     62us/op-cpu
closefile3        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile3        582ops/s   0.0mb/s   12.2ms/op    202us/op-cpu
appendfilerand3   582ops/s   9.1mb/s    0.1ms/op    122us/op-cpu
readfile3         582ops/s   9.4mb/s    0.1ms/op     64us/op-cpu
openfile3         582ops/s   0.0mb/s    0.1ms/op     62us/op-cpu
closefile2        582ops/s   0.0mb/s    0.0ms/op     11us/op-cpu
fsyncfile2        582ops/s   0.0mb/s   12.9ms/op    141us/op-cpu
appendfilerand2   582ops/s   9.1mb/s    0.1ms/op     91us/op-cpu
createfile2       582ops/s   0.0mb/s    0.2ms/op    157us/op-cpu
deletefile1       582ops/s   0.0mb/s    0.1ms/op     89us/op-cpu
1437: 206.429: IO Summary: 457649 ops 7562.1 ops/s, (1163/1164 r/w) 37.0mb/s, 328us cpu/op, 6.5ms latency
1437: 206.429: Shutting down processes
filebench>

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
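The filebench "IO Summary" lines in the transcripts above all have the same fixed shape, so they are easy to extract mechanically when comparing runs. A minimal sketch (the regex and field names here are my own, not part of filebench):

```python
import re

# Matches filebench "IO Summary" lines as printed in the transcripts above, e.g.
# "450: 80.079: IO Summary: 444386 ops 7341.7 ops/s, (1129/1130 r/w) 36.1mb/s, 297us cpu/op, 6.6ms latency"
SUMMARY_RE = re.compile(
    r"IO Summary:\s+(?P<ops>\d+) ops\s+(?P<ops_s>[\d.]+) ops/s,\s+"
    r"\((?P<reads>\d+)/(?P<writes>\d+) r/w\)\s+(?P<mb_s>[\d.]+)mb/s,\s+"
    r"(?P<cpu_us>\d+)us cpu/op,\s+(?P<lat_ms>[\d.]+)ms latency"
)

def parse_summary(line):
    """Return the summary fields as a dict of floats, or None if no match."""
    m = SUMMARY_RE.search(line)
    if not m:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

line = ("450: 80.079: IO Summary: 444386 ops 7341.7 ops/s, "
        "(1129/1130 r/w) 36.1mb/s, 297us cpu/op, 6.6ms latency")
print(parse_summary(line)["ops_s"])  # 7341.7
```

Feeding a whole filebench session log through `parse_summary` line by line yields one record per run, which makes averaging repeated runs trivial.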
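For a quick sense of scale, the summary figures quoted above work out roughly as follows. This is just arithmetic on the numbers reported in the thread, not part of the original measurements:

```python
# Average ops/s of the two runs reported for each snv_44/S10U2 configuration.
zfs_hw_raid5 = (7341.7 + 7247.0) / 2   # test 1: ZFS on HW RAID5, snv_44
zfs_raidz    = (7567.3 + 7567.8) / 2   # test 2: ZFS on software RAID-Z, snv_44
ufs_hw_snv44 = (1037.3 + 1051.6) / 2   # test 3: UFS on HW RAID5, snv_44
ufs_hw_s10u2 = (6503.1 + 6521.2) / 2   # test 4: UFS on HW RAID5, S10U2

# RAID-Z edges out the HW RAID5 LUN by a few percent, while UFS on snv_44
# runs at a small fraction of its S10U2 throughput - the anomaly noted above.
print(f"RAID-Z vs HW RAID5 (ZFS): {zfs_raidz / zfs_hw_raid5:.2f}x")
print(f"UFS snv_44 vs UFS S10U2:  {ufs_hw_snv44 / ufs_hw_s10u2:.2f}x")
```

The ratios come out near 1.04x for RAID-Z over HW RAID5 and about 0.16x for UFS on snv_44 versus S10U2, consistent with the conclusion drawn in the post.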