I'll be doing this over the upcoming weekend so I'll see how it goes.
Thanks for all of the suggestions.
Todd
On Jun 22, 2011, at 10:48 AM, Cindy Swearingen wrote:
Hi Todd,
Yes, I have seen zpool scrub do some miracles but I think it depends
on the amount of corruption.
A few suggestions are:
1. Identify and resolve the corruption problems on the underlying
hardware. There is no point in trying to clear the pool errors if this
problem continues.
The fmdump command can help identify the underlying hardware faults.
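Cindy's suggestion can be sketched as a shell transcript (the pool name ABC0101 comes from Todd's status output below; this is the usual diagnose-then-verify sequence, not a guaranteed fix):

```shell
# 1. Check the fault management error log for underlying hardware faults.
fmdump -eV | less          # verbose error reports (ereports) per device

# 2. Once the hardware problem is resolved, scrub the pool so ZFS
#    re-reads and verifies the checksum of every allocated block.
zpool scrub ABC0101
zpool status -v ABC0101    # watch scrub progress and per-file errors

# 3. If no new errors accumulate, reset the pool's error counters.
zpool clear ABC0101
```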
On Jun 21, 2011, at 2:54 PM, Todd Urie wrote:
> The volumes sit on HDS SAN. The only reason for the volumes is to prevent
> inadvertent import of the zpool on two nodes of a cluster simultaneously.
> Since we're on SAN with RAID internally, it didn't seem that we would need
> ZFS to provide that redundancy also.
There was a time when I fell for this line of reasoning too. The problem (if
you want to call it that) with ZFS is that it will show you, front and center,
the corruption taking place in your stack.
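The flip side, often repeated on this list, is that ZFS can only *repair* the corruption it detects when it has redundancy of its own (a mirror, raidz, or copies=2); on a single SAN LUN it can detect but not self-heal. A sketch of the options (the device names c1t0d0 and c1t1d0 are placeholders):

```shell
# Detection only: a single SAN LUN gives ZFS checksums but no second
# copy to repair from when a checksum fails.
zpool create ABC0101 c1t0d0

# Detection plus self-healing: mirror two LUNs so a block that fails its
# checksum on one side is rewritten from the good copy on read or scrub.
zpool create ABC0101 mirror c1t0d0 c1t1d0

# Middle ground: store two copies of each block on the same LUN
# (protects against localized corruption, not whole-device loss).
zfs set copies=2 ABC0101/data
```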
The volumes sit on HDS SAN. The only reason for the volumes is to prevent
inadvertent import of the zpool on two nodes of a cluster simultaneously.
Since we're on SAN with RAID internally, it didn't seem that we would need
ZFS to provide that redundancy also.
On Tue, Jun 21, 2011 at 4:17 AM, Remco Le
Todd,
Is that ZFS on top of VxVM ? Are those volumes okay? I wonder if this
is really a sensible combination?
..Remco
On 6/21/11 7:36 AM, Todd Urie wrote:
I have a zpool that shows the following from a zpool status -v <pool name>
On 21 June, 2011 - Todd Urie sent me these 5,9K bytes:
I have a zpool that shows the following from a zpool status -v <pool name>:

brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
  pool: ABC0101
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
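The "action" text points at the standard per-file recovery loop. A sketch of that sequence (the backup and file paths are placeholders; the actual corrupted paths are listed at the end of the zpool status -v output):

```shell
# List the files ZFS has flagged as having permanent errors.
zpool status -v ABC0101

# Restore each flagged file from backup (paths are placeholders).
# cp /backup/ABC0101/some/file /ABC0101/some/file

# Re-verify the whole pool, then reset the error counters so any
# remaining entries reflect only new, real problems.
zpool scrub ABC0101
zpool clear ABC0101
```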