Hi,

Does ZFS flag blocks as bad so it knows to avoid using them in the future?

During testing I had huge numbers of unrecoverable checksum errors, which I 
resolved by disabling write caching on the disks.
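For reference, this is roughly how I disabled it (assuming ATA disks on FreeBSD; the exact sysctl may differ for other controllers, so treat this as a sketch of what I did rather than a general recipe):

```shell
# /boot/loader.conf fragment: turn off the ATA write cache at boot
# (FreeBSD; assumes ATA disks -- SCSI disks would need camcontrol instead)
hw.ata.wc=0
```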

After doing this, and confirming the errors had stopped occurring, I removed the 
test files. A few seconds later, the used space reported by 'df' dropped from 
16GB to 11GB, but it never dropped below that value.
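In case it helps, these are the commands I was comparing (the pool name 'tank' is just a placeholder for mine; I understand 'df' and the ZFS tools can account for space differently):

```shell
# Compare what df and the native ZFS tools report
# ("tank" is a placeholder pool name)
df -h /tank
zfs list tank     # used/available space as ZFS accounts for it
zpool list tank   # raw pool capacity, including parity/overhead
```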

Is this just normal filesystem overhead (this is a raidz of 8x 500GB 
drives), or has ZFS not freed some of the space that was allocated to the bad 
files?

If ZFS is holding on to this space because it thinks it might be bad, is there 
a way to tell it that it is okay to use it?
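For what it's worth, these are the commands I have been using to inspect and reset the error state ('tank' again being a placeholder); I am not sure whether clearing and scrubbing is enough to make any withheld space usable again:

```shell
# Inspect and reset accumulated error state on the pool
# ("tank" is a placeholder pool name)
zpool status -v tank   # show per-device error counts and affected files
zpool clear tank       # reset the error counters
zpool scrub tank       # re-verify all data against its checksums
```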

I am using ZFS on FreeBSD, which from what I've read required only minimal 
changes to the source to run on that platform. Unfortunately the hardware I 
boot from is not supported by Solaris, which is where the majority of ZFS 
experience lies at this point.

Thanks!
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
