On Jun 28, 2006, at 12:32, Erik Trimble wrote:
Shouldn't that be capacity = ((N - 1) / 2)? Loss of a single disk would cause a rebuild on the R5 stripe, which could affect performance on that side of the mirror. Generally speaking, good RAID controllers dedicate processors and channels to calculating the parity and writing it out, so you're not impacted from the host-access point of view. A similar sort of CoW behaviour can happen between the array cache and the drives, but in the ideal case you're dealing with this in dedicated hardware instead of shared hardware.
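For concreteness, here's a quick sketch of the arithmetic as I read it, assuming the layout in question is a mirror of two RAID-5 sets (the disk count and size are made-up examples, not anything from the original post):

# My assumption for the layout: a mirror of two RAID-5 sets (RAID 5+1)
# built from 2 * n_per_side equal-size disks; all numbers illustrative.
def raid51_usable_gb(n_per_side, disk_gb=500):
    """Usable capacity in GB for two mirrored RAID-5 sets of n disks each."""
    if n_per_side < 3:
        raise ValueError("RAID-5 needs at least 3 disks per set")
    usable = (n_per_side - 1) * disk_gb  # each set gives up one disk to parity
    return usable                        # the mirror adds redundancy, not space

# 2 * 7 = 14 raw disks, but only 6 disks' worth of usable space:
print(raid51_usable_gb(7))  # 3000 GB usable out of 7000 GB raw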
I think you're comparing this to software R5, or at least to badly implemented array code, and divining that there is a considerable speed hit when using R5. In practice this is not always the case, provided that the response time and the interaction between the array cache and the drives are sufficient for the incoming stream. By moving your operation to software you're introducing more layers (CPU, L1/L2 cache, memory bus, and system bus) before you even get to the interconnect, plus further latencies at the storage port and the underlying device (virtualized or not). Ideally it would be nice to see ZFS-style improvements in array firmware, but given the state of embedded Solaris and the predominance of 32-bit controllers, I think we're going to have some issues. We'd also need some sort of client mechanism to interact with the array if we're talking about moving the filesystem layer out there.
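FWIW, the parity work that lands on the host CPU in the software case is just an XOR across the data stripe, and the same XOR drives the rebuild mentioned above. A minimal sketch (stripe layout and chunk contents are made up for illustration):

# Illustrative only: the per-stripe parity work a software R5 pushes onto
# the host CPU. Chunk size and contents are made up.
def xor_parity(chunks):
    """XOR equal-length chunks together (RAID-5 parity, or reconstruction)."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one data stripe across 3 disks
p = xor_parity(data)                 # parity chunk for the 4th disk

# Rebuild after losing the middle disk: XOR the survivors with the parity.
assert xor_parity([data[0], data[2], p]) == data[1]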
.. just a thought

Jon E