On Feb 16, 2010, at 12:39 PM, Daniel Carosone wrote:

> On Mon, Feb 15, 2010 at 09:11:02PM -0600, Tracey Bernath wrote:
>> On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone <d...@geek.com.au> wrote:
>>> Just to be clear: mirror ZIL by all means, but don't mirror l2arc, just
>>> add more devices and let them load-balance. This is especially true
>>> if you're sharing ssd writes with ZIL, as slices on the same devices.
>>
>> Well, the problem I am trying to solve is: wouldn't it read 2x faster with
>> the mirror? It seems once I can drive the single device to 10 queued
>> actions, and 100% busy, it would be more useful to have two channels to the
>> same data. Is ZFS not smart enough to understand that there are two
>> identical mirror devices in the cache to split requests between? Or are you
>> saying that ZFS is smart enough to cache it in two places, although not
>> mirrored?
>
> First, Bob is right: measurement trumps speculation. Try it.
>
> As for speculation, you're thinking only about reads. I expect
> reading from l2arc devices will be the same as reading from any other
> zfs mirror, and largely the same in both cases above: load-balanced
> across either device. In the rare case of a bad read from unmirrored
> l2arc, the data will be fetched from the pool, so mirroring l2arc doesn't
> add any resiliency benefit.
>
> However, your cache needs to be populated and maintained as well, and
> this needs writes: twice as many of them for the mirror as for the
> "stripe", and half of what is written never needs to be read again. These
> writes go to the same ssd devices you're using for ZIL. On commodity
> ssds, which are not well write-optimised, they may hurt zil
> latency by making the ssd do more writing, stealing from the total
> iops count on the channel, and (as a lesser concern) adding wear
> cycles to the device.
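For illustration, the layout Dan describes would look something like this
(the pool name "tank" and the ctd device names are made up here; s0 holds
the slog slice and s1 the cache slice on each ssd):

    # mirror the ZIL slices across the two ssds
    zpool add tank log mirror c4t0d0s0 c4t1d0s0

    # add the cache slices individually; zfs load-balances reads across them
    zpool add tank cache c4t0d0s1 c4t1d0s1

With the cache slices added separately rather than mirrored, reads still
spread over both ssds, but each cached block only has to be written once.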
The L2ARC writes are throttled to 8 MB/sec, except during cold start, when
the throttle is 16 MB/sec. This should not be noticeable on the channels.

> When you're already maxing out the IO, eliminating wasted cycles opens
> your bottleneck, even if only a little.

+1
 -- richard

> Once you reach steady state, I don't know how much turnover in l2arc
> contents you will have, and therefore how many extra writes we're
> talking about. It may not be many, but they are unnecessary ones.
>
> Normally, we'd talk about measuring a potential benefit, and then
> choosing based on the results. In this case, if I were you I'd
> eliminate the unnecessary writes, and measure the difference more as a
> matter of curiosity and research, since I was already set up to do so.
>
> --
> Dan.

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 15-17, 2010)
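For reference, the throttle Richard mentions corresponds to the
l2arc_write_max and l2arc_write_boost kernel tunables, each 8 MB by
default, with the boost only added on top while the cache is still
warming up. A sketch of how to check the current values on a live
OpenSolaris kernel, assuming the stock tunable names:

    # per-feed-interval write limits, printed as 8-byte hex values (bytes)
    echo "l2arc_write_max/J"   | mdb -k
    echo "l2arc_write_boost/J" | mdb -k

If they ever need changing, the same names can be set persistently in
/etc/system (set zfs:l2arc_write_max = ...), but the defaults already
match the 8 MB/sec and 16 MB/sec figures above.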