On 2012-May-29 22:04:39 +1000, Edward Ned Harvey <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
>If you have a drive (or two drives) with bad sectors, they will only be
>detected if the bad sectors actually get read. Given that your pool is
>less than 100% full, you might still have bad hardware going undetected
>even if you pass your scrub.
One way around this is to 'dd' each drive to /dev/null (or run a "long"
self-test using smartmontools). This verifies that the drive can read
every sector, whether or not ZFS has allocated it.

>You might consider creating a big file (dd if=/dev/zero of=bigfile.junk
>bs=1024k) and then when you're out of disk space, scrub again. (Obviously,
>you would be unable to make new writes to the pool as long as it's filled...)

I'm not sure how ZFS handles "no large free blocks", so you might need to
repeat this more than once to actually fill the disk, and it could leave
your pool seriously fragmented. If you do try this, I'd recommend creating
a snapshot first and then rolling back to it afterwards, rather than just
deleting the junk file. Also, this (obviously) won't work at all on a
filesystem with compression enabled, since a file of zeroes compresses to
almost nothing and never fills the pool.

-- 
Peter Jeremy
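[For reference, the sequence discussed above might look like the sketch
below. Pool name ('tank') and device names ('c0t0d0', etc.) are
placeholders; substitute your own from `zpool status` and `format`.]

```shell
# 1. Surface read test: dd the whole device to /dev/null.  Any
#    unreadable sector makes dd stop with an I/O error.
dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k   # repeat for each disk

# 2. Or let the drive test itself via SMART (smartmontools):
smartctl -t long /dev/rdsk/c0t0d0s0      # start the long self-test
smartctl -l selftest /dev/rdsk/c0t0d0s0  # check results once it finishes

# 3. Fill-then-scrub: snapshot first so the junk can be discarded
#    cleanly by rollback instead of a delete.
#    (Won't work with compression enabled -- zeroes compress away.)
zfs snapshot tank@prefill
dd if=/dev/zero of=/tank/bigfile.junk bs=1024k   # runs until out of space
zpool scrub tank          # every allocated block now gets read and checked
zfs rollback tank@prefill # drop the junk file
zfs destroy tank@prefill
```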
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss