Thanks for the reply and corroboration, Brent.  I just live-upgraded the
machine from Solaris 10 u5 to Solaris 10 u6, which purportedly fixes all known
issues with the Marvell device, and I'm still experiencing the hang.  So this
set of facts would seem to imply one of:

1) they missed one, or
2) it's not a Marvell-related problem.


I'm not sure where else to look for information about this.  Without further
info, I'm essentially forced to ditch production Solaris and stick with
Nevada.  But that would be a very blind, dismissive move on my part, and I'd
really rather find out what's at play here.

A little more background (and a tangent):  The other filer we're running with
this exact same feature set (simultaneous iSCSI and NFS sharing out of the
same zpool), in production, is at Nevada b91, and it has never exhibited this
flaw.  My intention was to install an officially supported Solaris release on
the new filer and zfs send everything from the old Nevada box to the new
Solaris box (a rough sketch of what I mean is below), to get to a position
where I could purchase Sun support.  But now I'm obviously thinking I can't
do that.  We have something like $12,000 worth of Sun contracts here but
haven't added the PC filers yet, because they're on Nevada and thus, I
assumed, unsupportable.  Is that correct?  Or can I put a Nevada PC on Sun
support?  (Yes, it's on the HCL.)  (Sorry for the seemingly OT question, but
I do need to find out how to get Sun support on my ZFS box, so it's at least
*arguably* on-topic :)
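
For reference, here's roughly what I mean by the feature set and the planned
migration.  The pool/dataset/host names (tank, newbox) are made up, and I'm
assuming the old-style shareiscsi property rather than anything COMSTAR-based:

    # Both NFS and iSCSI served out of the same pool
    zfs set sharenfs=on tank/exports        # NFS share of a filesystem
    zfs create -V 500g tank/iscsivol        # a zvol to export as a LUN
    zfs set shareiscsi=on tank/iscsivol     # old-style ZFS iSCSI target

    # Planned migration from the Nevada box to the new Solaris box:
    # recursive snapshot, then a full replication stream over ssh
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh newbox zfs receive -Fd tank

(One thing I haven't verified is whether a stream generated on b91 bits is
receivable by u6 bits; newer bits can emit stream formats that older ones
don't understand.)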


One last thing I noticed is that the ZFS pool version in Solaris 10 u6 is
higher than the one in u5.  Any chance that upgrading my zpool (sketched
below) would enable new features that address this issue?
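
If it's relevant, this is what I'd run to check and, if it makes sense, bump
the on-disk version; 'tank' is again a made-up pool name:

    zpool upgrade            # list pools still at an older on-disk version
    zpool upgrade -v         # list the versions these bits support
    zpool get version tank   # show one pool's current version
    zpool upgrade tank       # one-way: older bits can't import it afterward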

thx
jake