This worked.
I've restarted my testing, but I've been fdisking each drive before I
add it to the pool, and so far the system is behaving as expected when
I spin a drive down, i.e., the hot spare gets used automatically. This
makes me wonder whether it's possible to ensure that the forced
addition of
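Roughly, the test setup looks like the following; the pool and device
names here are just placeholders, not my actual ones:

# fdisk -B /dev/rdsk/c1t0d0p0   (repeated for each drive before adding it)
# zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0
# zpool status tank

I then spin down one of the mirror devices and watch zpool status to
confirm that the spare shows up as INUSE.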
> You are likely hitting:
>
> 6397052 unmounting datasets should process /etc/mnttab
>         instead of traverse DSL
>
> Which was fixed in build 46 of Nevada. In the meantime, you can
> remove /etc/zfs/zpool.cache manually and reboot, which will remove
> all your pools (which you can then re-import).
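That workaround amounts to something like this (the pool name "tank"
is just an example):

# rm /etc/zfs/zpool.cache
# reboot
(after the reboot, no pools are configured)
# zpool import
# zpool import tank

The first zpool import just lists the pools that can be imported;
removing zpool.cache only makes the system forget the pool
configuration at boot, the on-disk data is untouched, which is why a
plain import brings everything back.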
Never mind:
# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error
Jim
Here's the truss output:
402:ioctl(3, ZFS_IOC_POOL_LOG_HISTORY, 0x080427B8) = 0
402:ioctl(3, ZFS_IOC_OBJSET_STATS, 0x0804192C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) Err#3 ESRCH
402:ioctl(3, ZFS_IOC_
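For reference, a trace like that can be captured with something along
these lines (I'm guessing at the exact command here, and the snapshot
name is a placeholder):

# truss -f -t ioctl zfs destroy <dataset>@<snapshot>

-t ioctl limits the trace to ioctl calls, which is how the zfs command
talks to the kernel through /dev/zfs, and -f follows any children.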
BTW, I'm also unable to export the pool -- same error.
Jim