Howdy Robert. Robert Milkowski wrote:
You've got the same behavior with any LVM when you replace a disk, so it's not something unexpected for admins. Also, most of the time they expect the LVM to resilver ASAP. With the default setting not being 100%, you'll definitely see people complaining that ZFS is slooow, etc.
It's quite possible that I've only seen the other side of the coin, but in my past I've had support calls where customers complained that they {replaced a drive, resilvered a mirror, ...} and it knocked down the performance of everything else. My favorite was a set of A5200s on a hub: after they cranked up the I/O rate on the mirror, some other app (methinks it was Oracle) got too slow, decided there was a disk problem, crashed(!), and then initiated a cluster failover. Given that the disk group was not in perfect health... oh, the fun we had.
In any case, the key is documenting the behavior well enough that people can see what is going on, how to tune it slower or faster on the fly, etc. I'm more concerned with that than with the actual algorithm or method used.
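For what it's worth, a rough sketch of what "tune it on the fly" could look like on newer OpenSolaris bits (assuming the rewritten scan/resilver code with the zfs_resilver_delay / zfs_resilver_min_time_ms tunables; older builds won't have these, and the default values shown are from memory, so treat them as illustrative, not gospel):

    # slow the resilver down: keep the inter-I/O delay, shrink the per-txg time budget
    echo zfs_resilver_delay/W0t4 | mdb -kw
    echo zfs_resilver_min_time_ms/W0t1000 | mdb -kw

    # speed it back up: drop the delay, grow the per-txg budget
    echo zfs_resilver_delay/W0t0 | mdb -kw
    echo zfs_resilver_min_time_ms/W0t5000 | mdb -kw

    # make a choice persistent across reboots in /etc/system
    set zfs:zfs_resilver_delay = 0

Whether those are the right knobs or not, that's the sort of thing that needs to be spelled out in the docs, along with how to tell from zpool status / iostat what the throttle is actually doing to you.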