Peter Eriksson wrote:
There is nothing in the ZFS FAQ about this. I also fail to see how FMA could make any difference since it seems that ZFS is deadlocking somewhere in the kernel when this happens...

Some people don't see a difference between "hung" and "patiently waiting."
There are failure modes where you would patiently wait. With full FMA
integration, the system will know that patiently waiting is futile.

It works if you wrap all the physical devices inside SVM metadevices and use
those for your zpool instead. I.e.:

metainit d101 1 1 c1t5d0s0
metainit d102 1 1 c1t5d1s0
metainit d103 1 1 c1t5d2s0
zpool create foo raidz /dev/md/dsk/d101 /dev/md/dsk/d102 /dev/md/dsk/d103

Another, unrelated observation: I've noticed that ZFS often works *faster* if I wrap a physical partition inside a metadevice and then feed that to zpool, instead of using the raw partition directly. For example, testing ZFS on a spare 40GB partition of the boot ATA disk in a Sun Ultra 10/440 gives horrible performance numbers; if I wrap that partition in a simple metadevice and feed it to ZFS, things work much faster.
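For what it's worth, the single-partition variant of the wrapping described above would look something like the following sketch. The device name (c0t0d0s7) and pool name (tank) are placeholders, not taken from the original post; substitute your own slice and pool names.

```shell
# Wrap the spare slice in a simple one-stripe, one-slice SVM metadevice
# (assumes the SVM state database replicas already exist; see metadb(1M)):
metainit d100 1 1 c0t0d0s7

# Build the pool on the metadevice rather than the raw slice:
zpool create tank /dev/md/dsk/d100
```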

More likely this is:
6421427 netra x1 slagged by NFS over ZFS leading to long spins in the ATA driver code
 -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss