Hi,
I've been looking at a raidz pool on OpenSolaris snv_111b and I've come
across something I don't quite understand. I have 5 disks (fixed-size
disk images defined in VirtualBox) in a raidz configuration, with 1 disk
marked as a spare. The disks are 100 MB in size and I wanted to simulate
data corruption on one of them and watch the hot spare kick in, but when I run
dd if=/dev/zero of=/dev/c10t0d0 ibs=1024 count=102400
the pool remains perfectly healthy:
  pool: datapool
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Wed Oct 21 17:12:11 2009
config:

        NAME         STATE     READ WRITE CKSUM
        datapool     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c10t2d0  ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
        spares
          c10t4d0    AVAIL

errors: No known data errors
I don't understand the output; I thought I would see cksum errors
against c10t0d0. I tried exporting/importing the pool and scrubbing it
in case this was a caching issue, but nothing changes.
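Specifically, what I tried was roughly this (pool name as in the output
above):

zpool export datapool
zpool import datapool
zpool scrub datapool
zpool status -v datapool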
I've tried this on all the disks in the pool with the same result, and
the datasets in the pool are uncorrupted. I guess I'm misunderstanding
something fundamental about ZFS; can anyone help me out and explain?
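For reference, the pool was created with something like the following
(reconstructed from memory; device names as in the status output above):

zpool create datapool raidz c10t0d0 c10t1d0 c10t2d0 c10t3d0 spare c10t4d0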
-Ian.