On 7/25/2010 1:58 PM, Dan Langille wrote:
I'm trying to destroy a zfs array which I recently created.  It contains
nothing of value.

# zpool status
  pool: storage
 state: ONLINE
status: One or more devices could not be used because the label is
        missing or invalid.  Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   ONLINE       0     0     0
          raidz2                  ONLINE       0     0     0
            gpt/disk01            ONLINE       0     0     0
            gpt/disk02            ONLINE       0     0     0
            gpt/disk03            ONLINE       0     0     0
            gpt/disk04            ONLINE       0     0     0
            gpt/disk05            ONLINE       0     0     0
            /tmp/sparsefile1.img  UNAVAIL      0     0     0  corrupted data
            /tmp/sparsefile2.img  UNAVAIL      0     0     0  corrupted data

errors: No known data errors

Why sparse files? See this post:

http://docs.freebsd.org/cgi/getmsg.cgi?fetch=1007077+0+archive/2010/freebsd-stable/20100725.freebsd-stable


The two tmp files were created via:

dd if=/dev/zero of=/tmp/sparsefile1.img bs=1 count=0 oseek=1862g
dd if=/dev/zero of=/tmp/sparsefile2.img bs=1 count=0 oseek=1862g
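For anyone reproducing this: the dd invocation above writes nothing (count=0) but seeks the output pointer far past the start, so the file gets a huge apparent size while occupying almost no disk blocks. A minimal sketch of the same trick, scaled down to 16 MB; FreeBSD dd spells the operand oseek=, GNU dd spells it seek=, so this tries both:

```shell
# Create a sparse file: write zero blocks, but seek the output pointer
# 16 MiB past the start before closing.  oseek= is the FreeBSD spelling,
# seek= the GNU one; fall back if the first is rejected.
dd if=/dev/zero of=/tmp/sparse-demo.img bs=1 count=0 oseek=16777216 2>/dev/null ||
dd if=/dev/zero of=/tmp/sparse-demo.img bs=1 count=0 seek=16777216 2>/dev/null

# Apparent size is 16 MiB...
ls -l /tmp/sparse-demo.img
# ...but almost no blocks are actually allocated.
du -k /tmp/sparse-demo.img
```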

And the array created with:

zpool create -f storage raidz2 gpt/disk01 gpt/disk02 gpt/disk03 \
gpt/disk04 gpt/disk05 /tmp/sparsefile1.img /tmp/sparsefile2.img

The -f flag was required to avoid this message:

invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: raidz contains both files and devices


I tried to offline one of the sparse files:

zpool offline storage /tmp/sparsefile2.img

That caused a panic: http://www.langille.org/tmp/zpool-offline-panic.jpg

After rebooting, I rm'd both /tmp/sparsefile1.img and
/tmp/sparsefile2.img, forgetting that they were still part of the zpool.
Now I am unable to destroy the pool: the system panics. I disabled ZFS
in /etc/rc.conf, rebooted, recreated the two sparse files, then did a
forcestart of zfs. Then I saw:

# zpool status
  pool: storage
 state: ONLINE
status: One or more devices could not be used because the label is
        missing or invalid.  Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   ONLINE       0     0     0
          raidz2                  ONLINE       0     0     0
            gpt/disk01            ONLINE       0     0     0
            gpt/disk02            ONLINE       0     0     0
            gpt/disk03            ONLINE       0     0     0
            gpt/disk04            ONLINE       0     0     0
            gpt/disk05            ONLINE       0     0     0
            /tmp/sparsefile1.img  UNAVAIL      0     0     0  corrupted data
            /tmp/sparsefile2.img  UNAVAIL      0     0     0  corrupted data

errors: No known data errors


Another attempt to destroy the pool caused a panic.

Suggestions as to how to remove this array and get started again?

I fixed this by:

* setting zfs_enable="NO" in /etc/rc.conf and rebooting
* removing /boot/zfs/zpool.cache
* wiping the first and last 16 KB of each partition involved in the array
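The wiping step can be scripted with dd. ZFS keeps vdev labels at both the front and the back of each device, so both ends need clearing. A sketch, run here against a scratch file standing in for the real /dev/gpt/diskNN devices; double-check the target before pointing this at real disks:

```shell
# Stand-in "disk": 16 MB of random data instead of a real /dev/gpt/diskNN.
disk=/tmp/fake-disk.img
dd if=/dev/urandom of="$disk" bs=1048576 count=16 2>/dev/null

size=$(wc -c < "$disk")

# Zero the first 16 KB (front labels)...
dd if=/dev/zero of="$disk" bs=16384 count=1 conv=notrunc 2>/dev/null
# ...and the last 16 KB (back labels), seeking in 16 KB units; conv=notrunc
# keeps dd from truncating the target at the write position.
dd if=/dev/zero of="$disk" bs=16384 count=1 conv=notrunc \
   seek=$(( size / 16384 - 1 )) 2>/dev/null
```

For completeness: ZFS actually writes four 256 KB labels, two at each end of the vdev, so wiping 512 KB at each end is the thorough version; clearing 16 KB was evidently enough here to stop zpool from recognizing them.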

Now I'm trying mdconfig instead of sparse files. Making progress, but not all the way there yet. :)
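For the record, the mdconfig approach I'm trying looks roughly like this: back the two placeholder vdevs with md(4) memory disks instead of plain files, which sidesteps the "files and devices" replication-level complaint. A sketch only (FreeBSD, needs root; the device names in the comments are assumptions):

```shell
# Create the backing files, then attach them as vnode-backed md devices.
truncate -s 1862g /tmp/sparsefile1.img
truncate -s 1862g /tmp/sparsefile2.img
md1=$(mdconfig -a -t vnode -f /tmp/sparsefile1.img)   # prints e.g. md0
md2=$(mdconfig -a -t vnode -f /tmp/sparsefile2.img)   # prints e.g. md1

# All vdevs are now devices, so no -f override should be needed.
zpool create storage raidz2 gpt/disk01 gpt/disk02 gpt/disk03 \
    gpt/disk04 gpt/disk05 "$md1" "$md2"
```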

--
Dan Langille - http://langille.org/
