> -----Original Message-----
> From: Mark J Musante [mailto:mark.musa...@oracle.com]
> Sent: Wednesday, August 11, 2010 5:03 AM
> To: Seth Keith
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] zfs replace problems please please help
>
On Wed, 11 Aug 2010, seth keith wrote:
        NAME        STATE     READ WRITE CKSUM
        brick       DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c13d0   ONLINE       0     0     0
            c4d0
This is for newbies like myself: I was using 'zdb -l' wrong. Just using the
drive name from 'zpool status' or format, which is like c6d1, didn't work. I
needed to add s0 to the end:

zdb -l /dev/dsk/c6d1s0

gives me a good looking label (I think). The pool_guid values are the same
for all drives.
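To check every drive in one go, here is a minimal sketch -- the device names
are assumptions from this thread, so substitute your own from format:

    for d in c4d0 c5d0 c6d0 c6d1 c7d0 c8d0 c13d0; do
        echo "== $d =="
        zdb -l /dev/dsk/${d}s0 | egrep 'pool_guid|state|txg'
    done

If every drive prints the same pool_guid, the labels at least agree on which
pool they belong to.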
On Wed, 11 Aug 2010, Seth Keith wrote:
When I do a zdb -l /dev/rdsk/ I get the same output for all my
drives in the pool, but I don't think it looks right:
# zdb -l /dev/rdsk/c4d0
What about /dev/rdsk/c4d0s0?
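For contrast, running both forms side by side makes the difference obvious
(same assumed device as above):

    zdb -l /dev/rdsk/c4d0      # whole-disk node; the label here may be absent or stale
    zdb -l /dev/rdsk/c4d0s0    # slice 0 -- on this pool the real LABEL 0..3 live here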
On Tue, 10 Aug 2010, seth keith wrote:
# zpool status
  pool: brick
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to
        continue functioning.
action: Destroy and re-create the pool from a backup source.
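Before acting on that (destructive) advice, a read-only import scan can show
what ZFS is able to find on its own:

    zpool import                # scans /dev/dsk for importable pools
    zpool import -d /dev/dsk    # same scan with the directory given explicitly

Neither command changes anything until you actually import a pool by name.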
First off, double thanks for replying to my post. I tried your advice but
something is way wrong. I have all the 2TB drives disconnected, and the 7
500GB drives connected. All 7 show up in the BIOS and in format. Here are
all the original 7 500GB drives:

# format
Searching for disks...done
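(A handy non-interactive variant for pasting into mail -- a common Solaris
trick, though your shell mileage may vary:

    echo | format    # lists the disks, then exits at the "Specify disk" prompt
)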
On Tue, 10 Aug 2010, seth keith wrote:
first off I don't have the exact failure messages here, and I did not take good
notes of the failures, so I will do the best I can. Please try and give me
advice anyway.
I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all
wit
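For anyone following along, the usual way to do this is one disk at a time,
letting each resilver finish before starting the next -- a sketch with
made-up replacement device names:

    zpool replace brick c4d0 c10d0   # swap one 500GB disk for a 2TB disk
    zpool status brick               # wait here until the resilver completes
    # ...then repeat for each of the remaining six drives

The extra capacity only becomes available once every member of the raidz1
has been replaced (on builds with the autoexpand property, 'zpool set
autoexpand=on brick' picks it up automatically; otherwise an export and
re-import does it).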