Hi All,

Just a note that after catching up with -current, my zfs pool kissed goodbye. I'll
omit the details of its last days and go straight to the final state:

Creating a pool from scratch:

#zpool create tank raidz da{1..3}
#zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

errors: No known data errors
#zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   140K  3,56T  40,0K  /tank

Let's use up some of its space.

#dd if=/dev/zero of=/tank/foo
^C250939+0 records in
250938+0 records out
128480256 bytes transferred in 30.402453 secs (4225983 bytes/sec)

Oops... (the status below is after a scrub run):

#zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 5K in 0h0m with 0 errors on Sat Jan 19 23:11:20 2013
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1     ONLINE       0     0     1
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     1

At some point (with more data copied), another scrub run is enough to trigger
new cksum errors / unrecoverable file loss.
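
The reproduction cycle is basically the following (the file name and the amount
of data are arbitrary, just enough to put more data on the pool and scrub again):

#zpool clear tank
#dd if=/dev/urandom of=/tank/bar bs=1m count=2048
#zpool scrub tank
#zpool status tank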

I do not see any error messages from the kernel, and the smartctl error
counters on the drives are all zero. A full memtest cycle looks fine.
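
The SMART check was simply the following, repeated for each of the three disks
(with -d sat added if the controller needs it):

#smartctl -a /dev/da1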

A kernel built with gcc suffers from the same symptoms.

I also tried creating a raidz pool out of files, and that worked fine (even
with one chunk placed on a UFS filesystem made out of da0).
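
That test was roughly the following (paths, sizes and the pool name are
illustrative):

#truncate -s 2g /root/vdev1 /root/vdev2
#newfs /dev/da0 && mount /dev/da0 /mnt
#truncate -s 2g /mnt/vdev3
#zpool create filetank raidz /root/vdev1 /root/vdev2 /mnt/vdev3
#dd if=/dev/urandom of=/filetank/foo bs=1m count=2048
#zpool scrub filetank
#zpool status filetank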

Any idea what it could be?

The last kernel that worked for me was from October 2012.

Thanks,
Alexander.