dear all, victor,

i am most happy to report that the problems were somewhat hardware-related, 
caused by a damaged / dangling SATA cable which apparently caused long delays 
(sometimes working, disk on, disk off, ...) during normal zfs operations. Why 
the -f produced a kernel panic I'm unsure. Interestingly it all fit some 
symptoms other people have seen with a bad uberblock, a defective spanned 
metadata structure (?) detected after a scrub, etc.

anyway, great that you guys answered so quickly. there were 6 TB of data on that 
pool. I had stress-tested it for a week, and 30 minutes prior to the incident 
deleted the old RAID set ... imagine my horror ;)

have a good one
marc
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss