Hello,

Short version: pool A is fast, pool B is slow. Writing to pool A is fast. Writing to pool B is slow. Writing to pool B WHILE writing to pool A is fast on both pools. Explanation?
Long version: I have an existing two-disk pool consisting of two SATA drives. Call this pool "pool1". It has always been as fast as I would have expected: 30-50 MB/s write, 105 MB/s read.

I have now added four additional drives (A, B, C and D) to the machine, which I want to use for a raidz. For initial testing I chose a striped pool, just to see what kind of performance I would get. The initial two drives (pool1) are on their own controller; A and B are on a second controller, and C and D on a third. All of the controllers are SiI3512.

Here comes the very interesting bit. For the purposes of the table below, "good" performance means 20-50 MB/s write and ~70-80 MB/s read. "Bad" performance means 3-5 MB/s (!!!!) write and ~70-80 MB/s read.

disks in pool   | other I/O       | performance
----------------+-----------------+----------------
A + B           | none            | good
C + D           | none            | good
A + B + C + D   | none            | bad
A + B + C       | none            | bad
A + B + C + D   | write to pool1  | goodish (!!!)

(some tested combinations omitted)

In other words: initially it looked like write performance went down the drain as soon as I combined drives from multiple controllers into one pool, while performance was fine as long as I stayed within a single controller. However, writing to the "slow" A+B+C+D pool *WHILE ALSO WRITING TO POOL1* actually *INCREASES* its performance. The concurrent writes to pool1 and to the otherwise "slow" pool are not quite up to the normal "good" level, but that is probably to be expected even under normal circumstances.

CPU usage during the (slow) writing is almost non-existent. There is no spike similar to what you normally seem to get every five seconds or so (during transaction commits?). Also, at least once I saw the write performance on the "slow" pool spike at 19 MB/s for a single one-second interval (zpool iostat) when I initiated the write; then it went down again and remained very constant, not really varying outside 3.4-4.5 MB/s.
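For reference, this is roughly the procedure I am running, as a sketch. The device names match my "zpool create" line further down; the file paths assume the default pool mountpoints and the block counts are arbitrary, so adjust for your own setup.

```shell
# Create the striped four-drive test pool (c4d0-c7d0 are my devices):
zpool create speedtest c4d0 c5d0 c6d0 c7d0

# Sequential write test; watch throughput in another terminal with:
#   zpool iostat speedtest 1
dd if=/dev/zero of=/speedtest/testfile bs=$((1024*1024)) count=4096

# The surprising case: the same write, run concurrently with a write
# to pool1, brings the speedtest pool back up to near-"good" speed:
dd if=/dev/zero of=/pool1/testfile bs=$((1024*1024)) count=4096 &
dd if=/dev/zero of=/speedtest/testfile bs=$((1024*1024)) count=4096
wait

# Sequential read test:
dd if=/speedtest/testfile of=/dev/null bs=$((1024*1024))
```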
Often EXACTLY at 3.96 MB/s.

"Writing" and "reading" mean dd:ing from /dev/zero and to /dev/null, respectively, with bs=$((1024*1024)). Pools were created with "zpool create speedtest c4d0 c5d0 c6d0 c7d0" and variations of that for the different combinations. The pool with all four drives is 1.16T in size.

-- 
/ Peter Schuller, InfiDyne Technologies HB

PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss