Mike Seda wrote:
> Basically, is this a supported zfs configuration?
Can't see why not, but whether it's supported is something only Sun
support can speak to, not this mailing list.
You say you lost access to the array, though. A full disk failure
shouldn't cause that if you were using RAID-5 on the array. Or perhaps
you mean you've had to take it out of production because it couldn't
keep up with the expected workload?
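To make the RAID-5 point concrete: the array keeps one XOR parity block
per stripe, so any single lost disk can be rebuilt from the survivors.
A toy sketch in Python, purely illustrative -- nothing to do with ZFS
or your array's firmware:

from functools import reduce

def parity(blocks):
    # XOR the blocks together byte-by-byte to form the parity block.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # blocks on three data disks
p = parity(data)                    # block on the parity disk

# Simulate losing disk 1: rebuild its block from the others plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]           # a single-disk failure is recoverable

So losing one drive should leave the array degraded, not offline.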
> You are gonna laugh, but do you think my zfs configuration caused the
> drive failure?
You mention this is a new array. As one Sun person (whose name I can't
remember) mentioned to me, there's a high 'infant mortality' rate among
semiconductors: components that are going to fail tend to do so within
the first 120 days or so, or else run for many years.
I'm no expert in the area and have no data to prove it, but it has felt
roughly true as I've watched new systems get set up over the years.
A quick search for "semiconductor infant mortality" turned up some
interesting results.
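For what it's worth, the reliability literature usually models this with
a Weibull failure rate. A sketch in my own notation (shape k, scale
lambda), not anything from Sun:

    h(t) = (k / lambda) * (t / lambda)^(k - 1)

With k < 1 the failure rate falls over time, which is exactly the infant
mortality pattern; k > 1 rises with age (wear-out), and the two regimes
together give the classic "bathtub curve".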
Chances are, it's something much more mundane that got your disk. ZFS
uses the same underlying software as everything else to read and write
disks on a SAN (i.e. the ssd driver and friends); it's just smarter
about it. :)
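"Smarter" here mostly means end-to-end checksums: ZFS keeps a checksum
with every block pointer and verifies it on each read, so corruption
that the ssd driver would happily pass through gets caught. A rough
sketch of the idea in Python (illustrative only; this is not ZFS's
actual on-disk format, and ZFS defaults to fletcher checksums rather
than SHA-256):

import hashlib

def write_block(store, addr, data):
    # Store the data together with a checksum of it, the way ZFS does
    # in its block pointers.
    store[addr] = (data, hashlib.sha256(data).digest())

def read_block(store, addr):
    # Verify on every read; never return silently corrupted data.
    data, cksum = store[addr]
    if hashlib.sha256(data).digest() != cksum:
        raise IOError("checksum mismatch at block %d" % addr)
    return data

store = {}
write_block(store, 0, b"important bits")

# Flip a bit behind the filesystem's back (bad cable, firmware, etc.).
data, cksum = store[0]
store[0] = (bytes([data[0] ^ 1]) + data[1:], cksum)

try:
    read_block(store, 0)
except IOError as e:
    print(e)  # the corruption is detected instead of silently returned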
Regards,
- Matt