The big problem is that if you don't do your redundancy in the zpool, then the 
loss of a single device flatlines the system. This happens with single-device 
pools, stripes, and concats. Sun support has said in support calls and Sunsolve 
docs that this is by design, but I've never seen the loss of any other 
filesystem cause a machine to halt and dump core. Multiple bus resets can 
create a condition that makes the kernel believe that the device is no longer 
available. This was a persistent problem, especially on Pillar, until I started 
tuning sd_max_throttle down.
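For reference, this is the sort of tuning I mean, a minimal sketch only; the 
right value depends on the array vendor's recommendation (20 here is just an 
example), and on fibre channel setups using the ssd driver the equivalent 
ssd_max_throttle applies instead. Set in /etc/system and reboot:

    * /etc/system -- cap the number of commands queued per LUN by the sd driver
    * (example value only; use your array vendor's recommended setting)
    set sd:sd_max_throttle=20
    * if your HBAs go through the ssd driver, the equivalent is:
    * set ssd:ssd_max_throttle=20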

 "Why on earth would I not want to make redundant devices in zfs, when it's 
reliability is so much better than other RAIDs?"
 This is the problem that says "I want the management ease of ZFS but I don't 
want to have to jump through hoops in my SAN to present LUNS when the 
reliability is basically good enough.". 
 While I can knit multiple LUNs together in Pillar (wasting space on already 
redundant storage), it's easier to manage, for, say, backup devices or small 
storage for a zone, to simply create a LUN and import it as a single zpool, 
adding space when necessary. Another great use would be to create mirrors on 
EMC and then knit those together as a stripe, taking advantage of my existing 
failover devices and ZFS speed and management all at the same time. 
Unfortunately this bug puts the kibosh on that.
 
 