Hi Harry,
Generally, you need to run zpool clear to clear the pool errors, but I
can't reproduce the removed files reappearing in zpool status on my own
system when I corrupt data, so I'm not sure this will help. Some other,
larger problem is going on here...
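If you do try a clear and re-scrub, the sequence would look roughly
like this (a sketch using your pool name z3; whether it makes the stale
file list go away is exactly what I can't reproduce):

    # clear the error counters and the persistent error list for the pool
    zpool clear z3
    # re-scrub so ZFS re-reads and verifies every block in the pool
    zpool scrub z3
    # watch progress and see whether the errors and file list come back
    zpool status -v z3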
Did any hardware changes lead up to the z3 pool being in this state? I
would suspect controller/cable issues with c5d0 and c6d0 if your root
pool is running fine. Otherwise, the hardware problem might be with the
CPU, memory...
Can you review the error messages in /var/adm/messages and the output
of fmdump -eV for clues?
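For example (a rough sketch; the egrep pattern is just a suggestion for
narrowing things down to the two suspect disks):

    # check recent kernel/driver messages for disk or controller errors
    tail -100 /var/adm/messages
    egrep -i 'c5d0|c6d0' /var/adm/messages
    # dump the FMA error log verbosely; look for ereport.fs.zfs.* or
    # ereport.io.* events that name c5d0 or c6d0
    fmdump -eV | less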
Thanks,
Cindy
On 10/20/10 20:47, Harry Putnam wrote:
build 133
zpool version 22
I'm getting:
zpool status:

        NAME        STATE     READ WRITE CKSUM
        z3          DEGRADED     0     0   167
          mirror-0  DEGRADED     0     0   334
            c5d0    DEGRADED     0     0   335  too many errors
            c6d0    DEGRADED     0     0   334  too many errors
[...]
When I saw it, I deleted all the files listed in the status -v report,
as recommended by the report, and ran a new scrub.... Now I get the
same message, and it still lists all the files I deleted as being the
problem.
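(Roughly, the steps I ran were the following, with placeholder paths
rather than the real file names:)

    # remove each file named in the 'zpool status -v' error list
    rm /z3/some/damaged/file        # placeholder path
    # then start a fresh scrub and check the result when it finishes
    zpool scrub z3
    zpool status -v z3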
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss