On Feb 1, 2011, at 5:56 AM, Mike Tancsa wrote:
> On 1/31/2011 4:19 PM, Mike Tancsa wrote:
>> On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
>>> Hi Mike,
>>>
>>> Yes, this is looking much better.
>>>
>>> Some combination of removing corrupted files indicated in the zpool
>>> status -v output, running zpool scrub and then zpool clear should
>>> resolve the corruption, but it depends on how bad the corruption is.
Excellent.
I think you are good for now as long as your hardware setup is stable.
You survived a severe hardware failure so say a prayer and make sure
this doesn't happen again. Always have good backups.
Thanks,
Cindy
On 02/01/11 06:56, Mike Tancsa wrote:
On 1/31/2011 4:19 PM, Mike Tancsa wrote: [...]
On 1/31/2011 4:19 PM, Mike Tancsa wrote:
> On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
>> Hi Mike,
>>
>> Yes, this is looking much better.
>>
>> Some combination of removing corrupted files indicated in the zpool
>> status -v output, running zpool scrub and then zpool clear should
>> resolve the corruption, but it depends on how bad the corruption is.
On Jan 31, 2011, at 1:19 PM, Mike Tancsa wrote:
> On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
>> Hi Mike,
>>
>> Yes, this is looking much better.
>>
>> Some combination of removing corrupted files indicated in the zpool
>> status -v output, running zpool scrub and then zpool clear should
>> resolve the corruption, but it depends on how bad the corruption is.
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
> Hi Mike,
>
> Yes, this is looking much better.
>
> Some combination of removing corrupted files indicated in the zpool
> status -v output, running zpool scrub and then zpool clear should
> resolve the corruption, but it depends on how bad the corruption is.
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
First, I would try the least destructive method: Try [...]
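A minimal sketch of that sequence, assuming the pool is tank1 and the damaged
files can simply be deleted rather than restored (use the paths that zpool
status -v actually reports):

  # list the files affected by the corruption
  zpool status -v tank1
  # remove, or restore from backup, each file it lists, e.g.
  rm /tank1/path/to/damaged-file
  # re-read and verify every block, then clear the error counters
  zpool scrub tank1
  zpool clear tank1
  zpool status -v tank1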
On 1/29/2011 6:18 PM, Richard Elling wrote:
>
> On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
>
>> On 1/29/2011 12:57 PM, Richard Elling wrote:
0(offsite)# zpool status
pool: tank1
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
He says he's using FreeBSD. ZFS recorded names like "ada0" which always means
a whole disk.
In any case FreeBSD will search all block storage for the ZFS dev components if
the cached name is wrong: if the attached disks are connected to the system at
all, FreeBSD will find them wherever they may be.
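As a quick check of what ZFS can assemble from whatever device nodes are
present (a sketch, not from the thread; zpool import only reports pools that
are not currently imported):

  # scan /dev (the FreeBSD default) for ZFS labels and report any pool
  # that could be built from the devices it finds
  zpool import
  # the search directory can also be given explicitly
  zpool import -d /dev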
On Jan 30, 2011, at 1:09 PM, Peter Jeremy wrote:
> On 2011-Jan-30 13:39:22 +0800, Richard Elling wrote:
>> I'm not sure of the way BSD enumerates devices. Some clever person thought
>> that hiding the partition or slice would be useful.
>
> No, there's no hiding. /dev/ada0 always refers to the entire physical disk.
On 2011-Jan-30 13:39:22 +0800, Richard Elling wrote:
> I'm not sure of the way BSD enumerates devices. Some clever person thought
> that hiding the partition or slice would be useful.
No, there's no hiding. /dev/ada0 always refers to the entire physical disk.
If it had PC-style fdisk slices, ther[...]
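To illustrate the naming (my sketch, not from the thread; the ada numbers
depend on the controller): the whole disk and any slices or partitions on it
are separate nodes under /dev, and gpart shows the layout:

  # whole-disk node and any child nodes
  ls -l /dev/ada0*
  # print the partition table, if one exists; MBR slices show up as
  # ada0s1, ada0s2, ... and GPT partitions as ada0p1, ada0p2, ...
  gpart show ada0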
On Jan 30, 2011, at 4:31 AM, Mike Tancsa wrote:
> On 1/30/2011 12:39 AM, Richard Elling wrote:
>>> Hmmm, doesn't look good on any of the drives.
>>
>> I'm not sure of the way BSD enumerates devices. Some clever person thought
> that hiding the partition or slice would be useful. I don't find it useful.
On 1/30/2011 12:39 AM, Richard Elling wrote:
>> Hmmm, doesn't look good on any of the drives.
>
> I'm not sure of the way BSD enumerates devices. Some clever person thought
> that hiding the partition or slice would be useful. I don't find it useful.
> On a Solaris system, ZFS can show a disk [...]
On Jan 29, 2011, at 4:14 PM, Mike Tancsa wrote:
> On 1/29/2011 6:18 PM, Richard Elling wrote:
>>> 0(offsite)#
>>
>> The next step is to run "zdb -l" and look for all 4 labels. Something like:
>> zdb -l /dev/ada2
>>
>> If all 4 labels exist for each drive and appear intact, then look more
>> closely at how the OS locates the vdevs.
>
On 1/29/2011 6:18 PM, Richard Elling wrote:
>> 0(offsite)#
>
> The next step is to run "zdb -l" and look for all 4 labels. Something like:
> zdb -l /dev/ada2
>
> If all 4 labels exist for each drive and appear intact, then look more closely
> at how the OS locates the vdevs. If you can't so[...]
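A quick way to check all four labels on every member disk at once (a sketch;
substitute the pool's actual device names for ada0 through ada3):

  for d in ada0 ada1 ada2 ada3; do
      echo "=== /dev/${d} ==="
      # a healthy member shows LABEL 0 through LABEL 3
      zdb -l /dev/${d} | grep -E 'LABEL|failed'
  done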
On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
> On 1/29/2011 12:57 PM, Richard Elling wrote:
>>> 0(offsite)# zpool status
>>> pool: tank1
>>> state: UNAVAIL
>>> status: One or more devices could not be opened. There are insufficient
>>> replicas for the pool to continue functioning.
>>>
On 1/29/2011 11:38 AM, Edward Ned Harvey wrote:
>
> That is precisely the reason why you always want to spread your mirror/raidz
> devices across multiple controllers or chassis. If you lose a controller or
> a whole chassis, you lose one device from each vdev, and you're able to
> continue produ[...]
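One way to lay that out (my sketch with made-up pool and device names, using
mirrors for simplicity): each vdev takes exactly one disk from each
controller, so a dead controller or cage costs every vdev only one member:

  # ada* on controller 1, da* on controller 2
  zpool create example \
      mirror ada0 da0 \
      mirror ada1 da1 \
      mirror ada2 da2 \
      mirror ada3 da3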
On 1/29/2011 12:57 PM, Richard Elling wrote:
>> 0(offsite)# zpool status
>> pool: tank1
>> state: UNAVAIL
>> status: One or more devices could not be opened. There are insufficient
>> replicas for the pool to continue functioning.
> action: Attach the missing device and online it using 'zpool online'.
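Once the cage is back and the disks are visible again, the step that action
line refers to would look roughly like this (a sketch; use whatever device
names zpool status lists as missing):

  # bring a previously missing member back into the pool
  zpool online tank1 ada2
  # then check the pool state and any resilver progress
  zpool status -v tank1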
On Jan 28, 2011, at 6:41 PM, Mike Tancsa wrote:
> Hi,
> I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
> offsite storage. All was working fine for about 20min and then the new
> drive cage started to fail. Silly me for assuming new hardware would be
> fine :(
>
> The new drive cage started to fail; it hung the server and the [...]
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mike Tancsa
>
> NAME          STATE     READ WRITE CKSUM
> tank1         UNAVAIL      0     0     0  insufficient replicas
>   raidz1      ONLINE       0     0     0
>
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail; it hung the server and the [...]