To me it seems this is a special case that has not been accounted for...
While zfs seems to check the disks against the pool and handle them
nicely using labels/meta-data, even if they are mounted on different
controllers, the problem I've encountered stems from the fact that a specific
device/di…
[EMAIL PROTECTED] wrote on 01/10/2008 08:07:37 PM:
> I finally found the cause of the error.
>
> Since my disks are mounted in cassettes of four each, I had to
> disconnect all of their cables to replace the crashed disk.
>
> When re-attaching the cables I accidentally reversed their order.
I finally found the cause of the error.
Since my disks are mounted in cassettes of four each, I had to disconnect all
of their cables to replace the crashed disk.
When re-attaching the cables I accidentally reversed their order. In my early
tests this was not a problem since zfs iden…
Robert wrote:
> Ok, not a single soul knows this either; this doesn't look promising.
>
> How can I list/edit the metadata(?) that is on my disks or the pool so that I
> may see/edit what each physical disk in the pool has registered?
To view (but not edit) it you can use /usr/sbin/zdb.
--
Darren
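For example, zdb can print the on-disk labels that record what each vdev thinks its path and guid are. The device and pool names below are placeholders; substitute your own:

```shell
# Print the four ZFS labels stored on a disk; each label records the
# pool guid, the vdev guid, and the device path the vdev was last seen at.
# Whole-disk vdevs on Solaris normally keep their labels on slice 0.
zdb -l /dev/rdsk/c2d0s0

# Show the pool configuration as recorded on disk (read-only):
zdb -C tank
```

Comparing the `path` and `guid` fields in the labels across all the disks should show which physical disk the duplicated c2d0 entry really refers to.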
Robert <…telia.com> writes:
>
> I simply need to rename/remove one of the erroneous c2d0 entries/disks in
> the pool so that I can use it in full again, since at this time I can't
> reconnect the 10th disk in my raid, and if one more disk fails all my
> data will be lost (4 TB is a lot of disk to wa…
Ok, not a single soul knows this either; this doesn't look promising.
How can I list/edit the metadata(?) that is on my disks or the pool so that I
may see/edit what each physical disk in the pool has registered?
Since I don't know what I'm looking for yet, I can't be more specific in my
ques…
Since there is no answer yet, here's a simpler(?) question:
Why does zpool think that I have two c2d0 devices?
Even if all disks are offline, zpool still lists two c2d0 instead of c2d0 and
c3d0.
It seems that a logical name is confused with the physical, or something...
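One thing that might explain it (an assumption, not a confirmed diagnosis): each vdev label stores the device path it was last seen at, and a stale path from before the cables were swapped could leave two vdevs both claiming to be c2d0. Exporting and re-importing the pool forces zfs to rescan all labels and rewrite the recorded paths:

```shell
# Pool name "tank" is a placeholder; substitute your own.
zpool export tank      # release the pool and its devices
zpool import tank      # rescan disks by label, updating device paths

# If the pool is not found automatically, point the scan at the
# device directory explicitly:
# zpool import -d /dev/dsk tank
```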
This message posted from opensolaris.org
One of my disks in the zfs raidz2 pool developed a mechanical failure and
had to be replaced. It is possible that I swapped the sata cables
during the exchange, but this has never been a problem before in my previous
tests.
What concerns me is the output from zpool status for the c2d…
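For reference, a typical replacement in a raidz2 pool looks like this (pool and device names are placeholders; the single-argument form assumes the new disk occupies the failed disk's slot):

```shell
# Replace the failed disk with a new one in the same slot:
zpool replace tank c2d0

# Or, if the replacement appears on a different device node:
# zpool replace tank c2d0 c3d0

# Watch the resilver progress afterwards:
zpool status -v tank
```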