I had a pool on an external drive. Recently the drive failed, but the pool still
shows up when I run 'zpool status'.
Any attempt to remove/delete/export the pool ends in unresponsiveness. (The
system is still up and running perfectly; it's just this specific command that
hangs, so I have to open a new ssh session.)
Thanks Cindy,
I just needed to delete all LUNs first:
sbdadm delete-lu 600144F00800270514BC4C1E29FB0001
itadm delete-target -f
iqn.1986-03.com.sun:02:f38e0b34-be30-ca29-dfbd-d1d28cd75502
And then I was able to destroy the ZFS filesystem itself.
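For anyone hitting the same hang: the teardown order that worked for me is
sketched below. The LU GUID and target IQN are the ones from my setup, and the
pool/zvol names are illustrative — list your own with 'sbdadm list-lu' and
'itadm list-target' first.

```shell
# List the logical units backed by the zvol, then delete them
sbdadm list-lu
sbdadm delete-lu 600144F00800270514BC4C1E29FB0001

# Delete the iSCSI target that exposed the LU (-f forces it offline)
itadm delete-target -f \
    iqn.1986-03.com.sun:02:f38e0b34-be30-ca29-dfbd-d1d28cd75502

# With no LUs or targets holding the zvol open, the destroy goes through
zfs destroy tank/myvol
zpool destroy tank
```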
--
This message posted from opensolaris.org
Thanks. Everything is clear now.
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks... Now I think I understand...
Let me summarize it, and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous ones, which
makes ZFS report acknowledgment before the data has actually been written to
stable storage. That improves performance, but might cause data loss (from the
application's point of view) if the system crashes or loses power.
Any arguments as to why?
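For reference, on builds of that era the ZIL was disabled globally through a
kernel tunable (later builds added a per-dataset 'sync' property). A sketch of
the /etc/system fragment:

```shell
# /etc/system fragment -- disables the ZIL globally after a reboot.
# Only do this if you can tolerate losing the last few seconds of
# acknowledged synchronous writes on a crash or power failure; on-disk
# pool consistency itself is not affected.
set zfs:zil_disable = 1
```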
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported; otherwise
it complains that the log device is missing :)
For sure I can manually remove and add it by script
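One sketch of a workaround: keep the pool exported at shutdown so it is not
auto-imported, and have a boot-time script create the ramdisk and then import
the pool explicitly. The ramdisk name and size below are made up for
illustration.

```shell
#!/sbin/sh
# Boot-time start method (rc script or SMF): create the ramdisk log
# device first, then import the pool that uses it. Assumes the pool
# was exported at shutdown so it is absent from the auto-import cache.
ramdiskadm -a zilram 512m || exit 1
zpool import tank || exit 1
```

Note the obvious caveat: a ramdisk ZIL vanishes on power loss, which defeats
the point of a log device for anything you actually care about.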
I have a pool with a zvol (OpenSolaris b134).
When I try 'zpool destroy tank' I get "pool is busy":
# zpool destroy -f tank
cannot destroy 'tank': pool is busy
When I try to destroy the zvol first I get "dataset is busy":
# zfs destroy -f tank/macbook0-data
cannot destroy 'tank/macbook0-data': dataset is busy
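A sketch of the usual suspects to check before the destroy — something still
holding the zvol open (the device path below matches my zvol name; adjust for
yours):

```shell
# Is the zvol still exported as a COMSTAR SCSI logical unit?
sbdadm list-lu

# Is it in use as a swap or dump device?
swap -l
dumpadm

# Is some process holding the block device open?
fuser /dev/zvol/rdsk/tank/macbook0-data
```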
Thank you very much for the answer.
Yeah, that's what I was afraid of.
There is something I really cannot understand about zpool structuring...
What role do these 4 drives play in the tank pool with the current
configuration?
If they are not part of the raidz3 array, what is the point for Solaris to accept
I have a zpool like that:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
This is the situation:
I've got an error on one of the drives in the 'zpool status' output:

# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
It looks like I have some leftovers of old clones that I cannot delete.
The clone name is tank/WinSrv/Latest.
I'm trying:
# zfs destroy -f -R tank/WinSrv/Latest
cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed
Please help me get rid of this garbage.
Thanks a lot.
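A sketch of what often clears this: turn sharing off on the dataset so the
destroy no longer attempts the failing unshare, then destroy it ('sharenfs'
and 'sharesmb' are the standard ZFS sharing properties).

```shell
# Stop ZFS from trying to unshare a path that no longer exists
zfs set sharenfs=off tank/WinSrv/Latest
zfs set sharesmb=off tank/WinSrv/Latest

# Now the recursive destroy should not call unshare(1M) at all
zfs destroy -R tank/WinSrv/Latest
```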