> That's the one that's been an issue for me and my customers - they
> get billed back for GB allocated to their servers by the back end
> arrays.
>
> To be more explicit about the 'self-healing properties' -
> To deal with any fs corruption situation that would traditionally
> require an fsck on UFS (SAN switch crash, multipathing issues,
> cables going flaky or getting pulled, server crash that corrupts
> fs's), ZFS needs some disk redundancy in place so it has parity and
> can recover (raidz, zfs mirror, etc.).
>
> Which means that to use ZFS a customer has to pay more to get the back
> end storage redundancy they need to recover from anything that would
> cause an fsck on UFS. I'm not saying it's a bad implementation or
> that the gains aren't worth it, just that cost-wise, ZFS is more
> expensive in this particular bill-back model.
>
> cheers,
> Brian
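For what it's worth, the redundancy Brian is describing gets chosen at pool creation time. A minimal sketch (device names are made up, just to show where the space overhead that gets billed back comes from):

    # either a 2-way mirror: usable space is half of what the array allocates...
    zpool create tank mirror c2t0d0 c2t1d0

    # ...or a 3-disk raidz: one disk's worth of capacity goes to parity
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0

    # with either layout, a scrub lets ZFS catch bad blocks via checksums
    # and rewrite them from the redundant copy or parity
    zpool scrub tank
    zpool status -v tank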
Why would the customer need to use raidz or ZFS mirroring if the array is doing it for them? As someone else posted, metadata is already redundant by default and doesn't consume a ton of space.

Some people may disagree, but the first thing I like about ZFS is the ease of pool management, and the second is the checksumming. When a customer had issues with Solaris 10 x86, VxFS and EMC PowerPath, I took them down the road of using PowerPath and ZFS. Made some tweaks so we didn't tell the array to flush to rust, and they're happy as clams.
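For the curious, the usual tweak of that sort on Solaris 10 (not necessarily the exact change made here) is the zfs_nocacheflush tunable, which stops ZFS from issuing cache-flush commands to an array whose write cache is already battery-backed. Something along these lines, with a hypothetical device name:

    # /etc/system -- don't send SYNCHRONIZE CACHE to the array; only sane
    # when the array's write cache is non-volatile (battery-backed)
    set zfs:zfs_nocacheflush = 1

    # reboot for /etc/system to take effect, then build the pool on the
    # PowerPath pseudo-device
    zpool create tank emcpower0c

Since the array's cache is already non-volatile, honoring every flush just adds latency for no gain.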