I'm pretty new to OpenSolaris. I recently needed to convert a FreeBSD system to
OpenSolaris, and the only way I could get a pool to import was to give the
drives Solaris-formatted EFI labels, create the pool on them in FreeBSD, and
then copy everything from my old FreeBSD pool to this new pool. FreeBSD was
happy to do that, even though it gave me odd errors about GPT corruption, which
I assume comes from differences in how the two OSes handle EFI labeling.
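
In case it helps anyone trying the same migration, the rough sequence looked
something like this (pool and device names are placeholders, and the copy step
shows just one possible method, from memory):

  # on OpenSolaris: put an EFI label on each new drive
  format -e                       # pick the disk, then label it, choosing EFI

  # on FreeBSD: build the new raidz2 pool on those drives
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

  # copy the data across; a recursive snapshot plus send/receive is one way
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -Fd tank

  # hand the pool over to OpenSolaris
  zpool export tank               # on FreeBSD
  zpool import tank               # on OpenSolaris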

Anyway, the pool imported fine, but here's the weird thing: some of the devices
didn't show up as "raw drives".

This had me somewhat worried, as I couldn't find any info to explain it.
This is what zpool status looked like:

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c5t4d0    ONLINE       0     0     0
            c3t5d0p0  ONLINE       0     0     0
            c4t4d0    ONLINE       0     0     0
            c3t2d0p0  ONLINE       0     0     0
            c4t6d0    ONLINE       0     0     0
            c5t6d0p0  ONLINE       0     0     0
            c4t7d0p0  ONLINE       0     0     0

errors: No known data errors


See how some of the drives end in p0? What is that? I know what s0 is, but I
have no idea what p0 is.
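
In case it's useful, this is what I've been running to poke at the device nodes
for one of those disks (c3t5d0 here is just the first p0 device from the list
above, and the second prtvtoc may or may not print anything useful):

  # list every device node Solaris created for this disk
  ls /dev/dsk/c3t5d0*

  # dump the label as seen through the s0 and p0 nodes
  prtvtoc /dev/rdsk/c3t5d0s0
  prtvtoc /dev/rdsk/c3t5d0p0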

Since I couldn't get any answer about this on zfs-discuss, I thought I'd post
here, this being the "general discussion" forum.

This also brings up another question. I wasn't sure whether this zpool was OK,
so I just decided to replace the drives. I found out the hard way that if you
are replacing one drive and then start replacing a SECOND drive, the entire
process starts over (I was 95% done with a replace, started replacing a second
drive, and both restarted at 0%). That made me wonder how many drives it is
safe to replace at once; since this is raidz2, I didn't want to try more than
two.
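
For reference, each replacement was just the ordinary replace command, with
progress checked via zpool status (device names below are placeholders):

  # swap the new disk in for the old one
  zpool replace tank c3t5d0p0 c3t5d0

  # watch the resilver; this is the percentage that reset to 0%
  # when I kicked off a second replace
  zpool status tank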

Is it possible to replace more than two at once if you haven't actually FAILED
the drives?

Also, something curious I noticed: each replace seems to be getting faster and
faster. Why is this? I'm pretty sure the data was spread VERY evenly across the
drives.

Is this simply due to ARC?