On Mon, 2012-01-30 at 01:50 +0000, Lou Picciano wrote:
> Bayard,
>
> Indeed, you did answer it - and thanks for getting back to me - your
> suggestion was spot ON!
>
> However, the simple zpool clear/scrub cycle wouldn't work in our case - at
> least initially. In fact, after multiple 'rinse/repeats', the offending file
> - or its hex representation - would reappear. In fact, the CKSUM errors
> would often mount... Logically, this seems to make some sense; that ZFS would
> attempt to reconstitute the damaged file with each scrub...(?)
As the truth is somewhere in between, I'll insert my comment accordingly. You
should only see the errors continue if there's a dataset with a reference to
the version of the file that produces them. I've seen this before: until all
of the datasets referencing the damaged blocks are deleted, the errors will
continue to be diagnosed, sometimes reported without dataset names, which
might be considered a bug (it seems wrong that you don't get a dataset name
for clones). You wouldn't happen to have preserved output that could be used
to determine if/where there's a bug?

> In any case, after gathering the nerve to start deleting old snapshots -
> including the one with the offending file - the clear/scrub process worked a
> charm. Many thanks again!
>
> Lou Picciano
>
> ----- Original Message -----
> From: "Bayard G. Bell" <buffer.g.overf...@gmail.com>
> To: z...@lists.illumos.org
> Cc: zfs-discuss@opensolaris.org
> Sent: Sunday, January 29, 2012 3:22:39 PM
> Subject: Re: [zfs] Oddly-persistent file error on ZFS root pool
>
> Lou,
>
> Tried to answer this when you asked on IRC. Try a zpool clear and scrub
> again to see if the errors persist.
>
> Cheers,
> Bayard
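For anyone landing on this thread later, the recovery sequence discussed above
(clear, scrub, inspect, destroy the snapshot that still references the damaged
version, then clear/scrub again) looks roughly like the sketch below. The pool
name "rpool" and the snapshot name are hypothetical placeholders; substitute
whatever `zpool status -v` actually reports on your system:

```shell
# Hypothetical names: pool "rpool", snapshot "rpool/ROOT/solaris@old".
POOL=rpool

if command -v zpool >/dev/null 2>&1; then
  zpool clear "$POOL"         # reset error counts and the logged error list
  zpool scrub "$POOL"         # re-verify every block's checksum in the pool
  zpool status -v "$POOL"     # list files/datasets still showing CKSUM errors
  # If a snapshot or clone still references the damaged blocks, the errors
  # will reappear after the scrub. Destroy it, then clear and scrub again:
  # zfs destroy rpool/ROOT/solaris@old
  # zpool clear "$POOL" && zpool scrub "$POOL"
fi
```

The key point from the thread: `zpool clear` only resets the error log, while a
scrub re-diagnoses any block still referenced by a live dataset, so the cycle
cannot converge until the last snapshot holding the bad version is gone.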
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss