> didn't seem to we would need zfs to provide that redundancy also.

There was a time when I fell for this line of reasoning too. The problem (if you want to call it that) with zfs is that it will show you, front and center, the corruption taking place in your stack.
> Since we're on SAN with Raid internally

Your situation would suggest that your RAID silently corrupted data and didn't even know about it. Until you can trust the volumes behind zfs (and I don't trust any of them anymore, regardless of the brand name on the cabinet), give zfs at least some redundancy so that it can pick up the slack.

By the way, I used to trust storage because I didn't believe it was corrupting data, but I had no proof one way or the other, so I gave it the benefit of the doubt. Since I have been using zfs, my standards have gone up considerably. Now I trust storage because I can *prove* it's correct. If you can't prove that a volume is returning correct data, don't trust it. Let zfs manage it.

-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
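For what it's worth, here is a sketch of what "give zfs some redundancy" and "prove it's correct" look like in practice. The pool name (tank) and device paths (sanlun0, sanlun1) are placeholders for whatever your SAN presents:

```shell
# Placeholder names: tank, /dev/dsk/sanlun0, /dev/dsk/sanlun1.
# Mirror two SAN LUNs so zfs has redundant copies to repair from:
zpool create tank mirror /dev/dsk/sanlun0 /dev/dsk/sanlun1

# If you only have a single LUN, extra on-disk block copies are a
# weaker fallback (protects against bad blocks, not a dead LUN):
zfs set copies=2 tank

# "Proof": a scrub re-reads everything and verifies every checksum;
# with redundancy, zfs repairs blocks the SAN silently corrupted.
zpool scrub tank
zpool status -v tank    # CKSUM column shows detected/repaired errors
```

Even on a single unredundant LUN, the scrub will still *detect* corruption via checksums; redundancy is what lets zfs fix it rather than just report it.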