I have a volume shared via iSCSI that has become unusable.  Both target and initiator nodes are running Nevada b99.  Running "newfs" on the initiator node fails immediately with an "I/O error" (no other details).  The pool in which the "bad" volume resides includes other volumes exported via iSCSI, all of which are functioning.
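
For concreteness, the failing command on the initiator is along these lines (the device name below is just a placeholder, not the actual LUN):

    newfs /dev/rdsk/c2t1d0s2    # fails immediately with nothing more than "I/O error"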

I took a snapshot of the bad volume and one of a good volume, then cloned the snapshots.  The clones exhibited the same behavior as their origins: the clone of the bad volume was not usable, whereas the clone of the good one worked fine.
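
For reference, the snapshot/clone steps were along these lines (pool and volume names here are stand-ins, not the real ones):

    zfs snapshot tank/badvol@debug
    zfs snapshot tank/goodvol@debug
    zfs clone tank/badvol@debug tank/badvol-clone
    zfs clone tank/goodvol@debug tank/goodvol-clone
    # then shared the clones over iSCSI the same way as the originals
    # and retried newfs from the initiator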

I can, however, successfully run "newfs" on both the good and the bad /dev/zvol devices on the target node, and subsequently mount them and create files.
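
On the target node that test looks roughly like this (again, the names are stand-ins):

    newfs /dev/zvol/rdsk/tank/badvol
    mount /dev/zvol/dsk/tank/badvol /mnt
    touch /mnt/testfile    # succeeds locally on both the good and bad volumes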

"zpool status" shows no errors, and a "zpool scrub" did not fix the  
problem.
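
That is, something like:

    zpool status -v tank    # no errors reported
    zpool scrub tank        # completes, status still clean, LUN still unusable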

I've restarted the iSCSI SMF service to no avail, and a reboot didn't help either.
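
The restart was along these lines on the target node (the FMRI is from memory for the pre-COMSTAR target daemon, so it may not be exact):

    svcadm restart svc:/system/iscsitgt:default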

Some details on how I came to be in this state, which may or may not be germane.  On the initiator node the iSCSI devices were used as backing stores for VirtualBox virtual disks (vmdk's).  There are some known bugs in the VirtualBox drivers that cause a kernel panic and reboot, which I ran into.  After a reboot, VirtualBox complained about the backing store, and investigating that complaint is what led to this post.  What I've described above has happened twice.  I suspect the panic/reboot had something to do with getting me into this state, and I'm continuing to isolate the root cause.  However, it seems to me that the ZFS-backed iSCSI share should continue to function regardless.

cheers,
/Chris