On Sat, 26 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:

> 1. Reconfigure array with 12 independent disks
> 2. Allocate disks to RAIDZed pool

Using raidz will penalize your transaction performance since all disks will
need to perform I/O for each write. It is definitely better to use
load-shared mirrors for this purpose (a sketch follows below).

> 3. Fine tune the 2540 according to Bob's 2540-ZFS-Performance.pdf
>    (Thanks Bob)
> 4. Apply ZFS tunings (i.e. zfs_nocacheflush=1 etc.)

Hopefully after step #3, step #4 will not be required. Step #4 puts data at
risk if there is a system crash.
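As a minimal sketch, six load-shared two-way mirrors from your 12 disks
could be set up along these lines (the pool name and the cNtNdN device
names are invented for illustration; substitute your own):

    # Six two-way mirror vdevs; ZFS load-shares writes across all six,
    # so a given block write touches one pair rather than all 12 disks.
    zpool create dbpool \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0 \
      mirror c1t8d0 c1t9d0 \
      mirror c1t10d0 c1t11d0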
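For reference, the zfs_nocacheflush tuning that step #4 refers to is set
in /etc/system on Solaris and takes effect at the next reboot. Given the
crash risk mentioned above, only consider it if the array cache is truly
battery-backed:

    * /etc/system entry for the step #4 tuning; stops ZFS from issuing
    * cache flush requests, risking data loss on power failure or crash.
    set zfs:zfs_nocacheflush = 1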
> However, I could not find additional cards to support I/O Multipath.
> Hope that would not affect latency.

Probably not. It will affect sequential I/O performance, but latency is
primarily dependent on the disk configuration and the ZFS filesystem block
size.

I have performed some tests here of synchronous writes using iozone with
multi-threaded readers/writers. This is for the same 2540 configuration
that I wrote about earlier. For this particular test, the ZFS filesystem
blocksize is 8K and the size of the I/Os is 8K. This may not be a good
representation of your own workload since the threads are contending for
I/O with random access. In your case, it seems that writes may be written
in a sequential append mode.

I also have test results handy for similar test parameters but using
various ZFS filesystem settings (8K/128K block size, checksum
enable/disable, noatime, and sha256 checksum), and 8K or 128K I/O block
sizes. Let me know if there is something you would like me to measure. It
should be easy to simulate your application behavior using iozone.
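As a rough sketch of such a simulation (the filesystem name is a
placeholder; the iozone flags are the same ones used for the run reported
below):

    # Match the filesystem recordsize to the application write size.
    zfs set recordsize=8k dbpool/data

    # 8 threads (-t 8 -T), synchronous writes (-o), 8K records (-r 8k),
    # 2GB file per thread (-s 2G), results reported in ops/sec (-O).
    iozone -m -t 8 -T -O -r 8k -o -s 2G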
Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

        Iozone: Performance Test of File I/O
                Version $Revision: 3.283 $
                Compiled for 64 bit mode.
                Build: Solaris10gcc-64

        Contributors: William Norcott, Don Capps, Isom Crawford,
                      Kirby Collins, Al Slater, Scott Rhine, Mike Wisner,
                      Ken Goss, Steve Landherr, Brad Smith, Mark Kelly,
                      Dr. Alain CYR, Randy Dunlap, Mark Montague,
                      Dan Million, Jean-Marc Zucconi, Jeff Blomberg,
                      Benny Halevy, Erik Habbinga, Kris Strecker,
                      Walter Wong.

        Run began: Wed Jul  2 10:54:19 2008

        Multi_buffer. Work area 16777216 bytes
        OPS Mode. Output is in operations per second.
        Record Size 8 KB
        SYNC Mode.
        File size set to 2097152 KB
        Command line used: iozone -m -t 8 -T -O -r 8k -o -s 2G
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

        Throughput test with 8 threads
        Each thread writes a 2097152 Kbyte file in 8 Kbyte records

        Children see throughput for 8 initial writers  =   4315.57 ops/sec
        Parent sees throughput for 8 initial writers   =   4266.15 ops/sec
        Min throughput per thread                      =    532.18 ops/sec
        Max throughput per thread                      =    543.36 ops/sec
        Avg throughput per thread                      =    539.45 ops/sec
        Min xfer                                       = 256746.00 ops

        Children see throughput for 8 rewriters        =   2595.08 ops/sec
        Parent sees throughput for 8 rewriters         =   2595.06 ops/sec
        Min throughput per thread                      =    322.07 ops/sec
        Max throughput per thread                      =    326.15 ops/sec
        Avg throughput per thread                      =    324.38 ops/sec
        Min xfer                                       = 258867.00 ops

        Children see throughput for 8 readers          =  53462.03 ops/sec
        Parent sees throughput for 8 readers           =  53451.08 ops/sec
        Min throughput per thread                      =   6340.39 ops/sec
        Max throughput per thread                      =   6859.59 ops/sec
        Avg throughput per thread                      =   6682.75 ops/sec
        Min xfer                                       = 242368.00 ops

        Children see throughput for 8 re-readers       =  54585.11 ops/sec
        Parent sees throughput for 8 re-readers        =  54573.08 ops/sec
        Min throughput per thread                      =   6022.81 ops/sec
        Max throughput per thread                      =   7164.78 ops/sec
        Avg throughput per thread                      =   6823.14 ops/sec
        Min xfer                                       = 220373.00 ops

        Children see throughput for 8 reverse readers  =  56755.70 ops/sec
        Parent sees throughput for 8 reverse readers   =  56667.52 ops/sec
        Min throughput per thread                      =   5893.60 ops/sec
        Max throughput per thread                      =   7554.16 ops/sec
        Avg throughput per thread                      =   7094.46 ops/sec
        Min xfer                                       = 204744.00 ops

        Children see throughput for 8 stride readers   =  11964.43 ops/sec
        Parent sees throughput for 8 stride readers    =  11959.61 ops/sec
        Min throughput per thread                      =   1353.59 ops/sec
        Max throughput per thread                      =   1545.83 ops/sec
        Avg throughput per thread                      =   1495.55 ops/sec
        Min xfer                                       = 229619.00 ops

        Children see throughput for 8 random readers   =   3314.17 ops/sec
        Parent sees throughput for 8 random readers    =   3314.11 ops/sec
        Min throughput per thread                      =    367.38 ops/sec
        Max throughput per thread                      =    482.99 ops/sec
        Avg throughput per thread                      =    414.27 ops/sec
        Min xfer                                       = 199395.00 ops

        Children see throughput for 8 mixed workload   =   2438.01 ops/sec
        Parent sees throughput for 8 mixed workload    =   2414.88 ops/sec
        Min throughput per thread                      =     77.17 ops/sec
        Max throughput per thread                      =    528.42 ops/sec
        Avg throughput per thread                      =    304.75 ops/sec
        Min xfer                                       =  38284.00 ops

        Children see throughput for 8 random writers   =   3176.50 ops/sec
        Parent sees throughput for 8 random writers    =   3141.77 ops/sec
        Min throughput per thread                      =    394.89 ops/sec
        Max throughput per thread                      =    400.16 ops/sec
        Avg throughput per thread                      =    397.06 ops/sec
        Min xfer                                       = 258695.00 ops

iozone test complete.