On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> On Tue, 29 Dec 2009, Ross Walker wrote:

>> A mirrored raidz provides redundancy at a steep cost to performance and, might I add, a high monetary cost.

> I am not sure what a "mirrored raidz" is. I have never heard of such a thing before.

>> With raid10 each mirrored pair has the IOPS of a single drive. Since these mirrors are typically 2-disk vdevs, you can have a lot more of them and thus a lot more IOPS (some people talk about using 3-disk mirrors, but that's probably no better than setting copies=2 on a regular pool of mirrors).

> This is another case where using a term like "raid10" does not make sense when discussing zfs. ZFS does not support "raid10". ZFS does not support RAID 0 or RAID 1, so it can't support RAID 1+0 (RAID 10).

Did it again... I understand the difference; I hope I didn't confuse the OP by throwing that out there. What I meant to say was a zpool of mirror vdevs.
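For the OP's benefit, a pool of mirror vdevs looks like this (device names are placeholders, substitute your own disks):

```shell
# Create a pool that load-shares across two 2-disk mirror vdevs.
# c0t0d0 etc. are placeholder device names.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# The alternative mentioned above: keep 2-way mirrors but have ZFS
# store two copies of each block, instead of using 3-way mirrors.
zfs set copies=2 tank

# Verify the layout: status should list both mirror vdevs.
zpool status tank
```

This is just a sketch of the layout being discussed, not a tuning recommendation.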

> Some important points to consider are that every write to a raidz vdev must be synchronous. In other words, the write needs to complete on all the drives in the stripe before the write may return as complete. This is also true of "RAID 1" (mirrors), which specifies that the drives are perfect duplicates of each other.

I believe mirrored vdevs can do this in parallel, though, while raidz vdevs need to do it serially due to the ordered nature of the transaction, which makes sync writes faster on the mirrors.

> However, zfs does not implement "RAID 1" either. This is easily demonstrated since you can unplug one side of the mirror and writes to the zfs mirror will still succeed, catching up the mirror which is behind as soon as it is plugged back in. When using mirrors, zfs supports logic which will catch that mirror back up (only sending the missing updates) when connectivity improves. With RAID 1 there is no way to recover a mirror other than a full copy from the other drive.
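That catch-up behavior is easy to demonstrate with zpool itself (again with placeholder device names):

```shell
# Take one side of a mirror away; the pool stays writable, just degraded.
zpool offline tank c0t1d0

# ... writes to the pool continue to succeed here ...

# Bring the device back: ZFS resilvers only the transactions the
# device missed, not the whole disk.
zpool online tank c0t1d0

# Watch the (partial) resilver progress and completion.
zpool status tank
```

The point being illustrated: only the missing updates are resent, unlike a classic RAID 1 rebuild.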

That's not completely true these days, as a lot of raid implementations use bitmaps to track changed blocks, and a raid1 continues to function when the other side disappears. The real difference is that the mirror implementation in ZFS is in the file system and not at an abstracted block-io layer, so it is more intelligent in its use and layout.
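As an example of the bitmap-based tracking mentioned above, Linux md supports a write-intent bitmap, so a re-added raid1 member resyncs only the regions dirtied while it was absent (the /dev/sdX1 names are placeholders):

```shell
# Create a raid1 array with an internal write-intent bitmap, so a
# briefly absent member only resyncs the changed regions on return.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal /dev/sda1 /dev/sdb1

# A bitmap can also be added to an existing array after the fact:
mdadm --grow /dev/md0 --bitmap=internal
```

This is the block-layer analogue of what ZFS does in the file system with its dirty-time tracking.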

> Zfs load-shares across vdevs, so it will load-share across mirror vdevs rather than striping (as RAID 10 would require).

Bob, an interesting question was brought up to me about how copies may affect random read performance. I didn't know the answer, but if ZFS knows there are additional copies, would it not spread the load across those as well, to keep the wait queues on each spindle as even as possible?

-Ross

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
