I have registered ECC memory in the system. I will also run some memory 
diagnostics, but mentioning the power supply got me thinking: around the same 
time as the errors we had a storm and the lights dimmed in my house quite a few 
times. It was not enough of a drop to shut the system down, but perhaps it had 
something to do with it. Hopefully it is as simple as that. A UPS is now on my 
list.
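
For the memory diagnostics I plan to start with the fault manager logs, 
something along these lines (assuming the standard fmd tools are present on 
this build):

    # list any faults fmd has already diagnosed
    fmadm faulty
    # dump the raw error telemetry and look for memory/ECC events
    fmdump -eV | grep -i mem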

I took Bob's advice, added more disks, and created another pool since I do not 
trust the old pool.
I used dd with conv=noerror,sync to a new block volume and that did the trick. 
Thanks Bob, and thanks Edward for the explanation.
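
The copy itself boiled down to something like the following (the source and 
destination device names here are placeholders, not the real ones):

    # keep going on read errors (noerror) and pad short reads with zeros (sync)
    dd if=/dev/old_device of=/dev/new_device bs=1M conv=noerror,sync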

I was a bit unsure about using dd on the zvol directly, so I added another LUN 
(on the new pool) to the system's view and used Clonezilla: booted it to the 
command prompt and used dd from there to duplicate the device.
Any thoughts on directly accessing the zvol via dd? I assume it is the same as 
any other device and should not be a problem.
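
For what it is worth, the zvols do show up under /dev/zvol as ordinary device 
nodes, so I imagine the direct copy would have looked roughly like this (pool 
and volume names are made up):

    # rdsk is the raw (character) node for the zvol, dsk the block node
    dd if=/dev/zvol/rdsk/oldpool/vol1 of=/dev/zvol/rdsk/newpool/vol1 bs=1M conv=noerror,sync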

Another thing I noticed is the high percentage of wait I/O on the disks of the 
problematic pool. I am not sure if it was ever this high before. My new pool is 
on a different controller and it is a different RAID type, so I cannot compare. 
This time I selected raidz2.
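
For reference, this is roughly what I have been looking at and how the new 
pool was laid out (disk names and the number of disks are just placeholders, 
typed from memory):

    # extended per-device statistics every 5 seconds; %w and %b are the wait/busy columns
    iostat -xn 5

    # the new pool, raidz2 on the second controller (disk list is an example)
    zpool create newpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0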


Thanks for the replies, really appreciate it.