We loaded Nevada_78 on a peer T2000 unit and imported the same ZFS pool; a
cut-and-paste of my colleague's email with the results is below. I didn't even
upgrade the pool, since we wanted to be able to move it back to 10u4.
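
For reference, moving the pool between the two hosts is just an export on one
side and an import on the other; as long as you never run "zpool upgrade", the
pool keeps its older on-disk version and can still be imported back on 10u4.
A minimal sketch (the pool name "tank" is a placeholder, not our actual pool):

  # on the 10u4 host: cleanly export the pool
  zpool export tank

  # on the Nevada_78 host: import it and check its health;
  # deliberately skip "zpool upgrade tank" -- a newer on-disk version
  # would no longer import on Solaris 10u4
  zpool import tank
  zpool status tank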

Here are the latest Pepsi Challenge results.

Sol10u4 vs Nevada78. Same tuning options, same zpool, same storage, same SAN
switch - you get the idea. The only difference is the OS.
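
The per-operation breakdown below looks like filebench's stock varmail
personality; a session along these lines reproduces the same output format
(the dataset directory and run length here are assumptions, not necessarily
the exact parameters used):

  # interactive filebench session; $dir and run time are placeholders
  filebench> load varmail
  filebench> set $dir=/tank/fsbench
  filebench> run 60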

Sol10u4:
 4984: 82.878: Per-Operation Breakdown
closefile4                404ops/s   0.0mb/s      0.0ms/op       19us/op-cpu
readfile4                 404ops/s   6.3mb/s      0.1ms/op      109us/op-cpu
openfile4                 404ops/s   0.0mb/s      0.1ms/op      112us/op-cpu
closefile3                404ops/s   0.0mb/s      0.0ms/op       25us/op-cpu
fsyncfile3                404ops/s   0.0mb/s     18.7ms/op     1168us/op-cpu
appendfilerand3           404ops/s   6.3mb/s      0.2ms/op      192us/op-cpu
readfile3                 404ops/s   6.3mb/s      0.1ms/op      111us/op-cpu
openfile3                 404ops/s   0.0mb/s      0.1ms/op      111us/op-cpu
closefile2                404ops/s   0.0mb/s      0.0ms/op       24us/op-cpu
fsyncfile2                404ops/s   0.0mb/s     19.0ms/op     1162us/op-cpu
appendfilerand2           404ops/s   6.3mb/s      0.2ms/op      173us/op-cpu
createfile2               404ops/s   0.0mb/s      0.3ms/op      334us/op-cpu
deletefile1               404ops/s   0.0mb/s      0.2ms/op      173us/op-cpu

 4984: 82.879: 
IO Summary:      318239 ops 5251.8 ops/s, (808/808 r/w)  25.2mb/s,   1228us cpu/op,   9.7ms latency


Nevada78:
 1107: 82.554: Per-Operation Breakdown
closefile4               1223ops/s   0.0mb/s      0.0ms/op       22us/op-cpu
readfile4                1223ops/s  19.4mb/s      0.1ms/op      112us/op-cpu
openfile4                1223ops/s   0.0mb/s      0.1ms/op      128us/op-cpu
closefile3               1223ops/s   0.0mb/s      0.0ms/op       29us/op-cpu
fsyncfile3               1223ops/s   0.0mb/s      4.6ms/op      256us/op-cpu
appendfilerand3          1223ops/s  19.1mb/s      0.2ms/op      191us/op-cpu
readfile3                1223ops/s  19.9mb/s      0.1ms/op      116us/op-cpu
openfile3                1223ops/s   0.0mb/s      0.1ms/op      127us/op-cpu
closefile2               1223ops/s   0.0mb/s      0.0ms/op       28us/op-cpu
fsyncfile2               1223ops/s   0.0mb/s      4.4ms/op      239us/op-cpu
appendfilerand2          1223ops/s  19.1mb/s      0.1ms/op      159us/op-cpu
createfile2              1223ops/s   0.0mb/s      0.5ms/op      389us/op-cpu
deletefile1              1223ops/s   0.0mb/s      0.2ms/op      198us/op-cpu

 1107: 82.581: 
IO Summary:      954637 ops 15903.4 ops/s, (2447/2447 r/w)  77.5mb/s,    590us cpu/op,   2.6ms latency


That's roughly a 3x jump in ops/sec (5251.8 -> 15903.4) and a 4x drop in
average fsync time (~19ms -> ~4.5ms per op).


Here are the results from our UFS software mirror for comparison:
 4984: 211.056: Per-Operation Breakdown
closefile4                465ops/s   0.0mb/s      0.0ms/op       23us/op-cpu
readfile4                 465ops/s  12.6mb/s      0.1ms/op      142us/op-cpu
openfile4                 465ops/s   0.0mb/s      0.1ms/op       83us/op-cpu
closefile3                465ops/s   0.0mb/s      0.0ms/op       24us/op-cpu
fsyncfile3                465ops/s   0.0mb/s      6.0ms/op      498us/op-cpu
appendfilerand3           465ops/s   7.3mb/s      1.7ms/op      282us/op-cpu
readfile3                 465ops/s  11.1mb/s      0.1ms/op      132us/op-cpu
openfile3                 465ops/s   0.0mb/s      0.1ms/op       84us/op-cpu
closefile2                465ops/s   0.0mb/s      0.0ms/op       26us/op-cpu
fsyncfile2                465ops/s   0.0mb/s      5.9ms/op      445us/op-cpu
appendfilerand2           465ops/s   7.3mb/s      1.1ms/op      231us/op-cpu
createfile2               465ops/s   0.0mb/s      2.2ms/op      443us/op-cpu
deletefile1               465ops/s   0.0mb/s      2.0ms/op      269us/op-cpu

 4984: 211.057: 
IO Summary:      366557 ops 6049.2 ops/s, (931/931 r/w)  38.2mb/s,    912us cpu/op,   4.8ms latency


So either we're hitting a pretty serious ZFS bug, or they're purposely holding
back performance in Solaris 10 so that we all have a good reason to upgrade
to 11.  ;)
 

-Nick
 
 