I am using an LSI PCI-X dual-port HBA in a dual-processor Opteron system.
Connected to the HBA is a Sun StorageTek A1000 populated with fourteen 36 GB disks.

I have two questions that I think are related.

Initially I set up the pool with two raidz2 vdevs, one on each channel, so it looked like this:

        share        
          raidz2     
            c2t3d0   
            c2t4d0   
            c2t5d0   
            c2t6d0   
            c2t7d0   
            c2t8d0   
          raidz2     
            c3t9d0   
            c3t10d0  
            c3t11d0  
            c3t12d0  
            c3t13d0  
            c3t14d0  
        spares
          c3t15d0    AVAIL
          c2t2d0     AVAIL

With the mpt driver and alternate pathing turned on, I could sustain 100 MB/s of 
throughput to the file systems I created on it.
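
For context, I created the pool with roughly the following; I'm reconstructing the 
command from the layout above, so treat it as a sketch rather than the exact invocation:

     zpool create share \
         raidz2 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
         raidz2 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 \
         spare c3t15d0 c2t2d0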

I was learning the zpool commands and features when I unmounted the file 
systems and exported the pool. That worked, and the import (run per the 
documentation; commands below) also worked, but it brought all the disks in on c2 
instead of half on c2 and half on c3 as before. Now throughput is back down to 
40 MB/s at best.
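
For reference, the export/import I ran was just the basic form, nothing fancy:

     zpool export share
     zpool import share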

Why did it do that, and how can I, in a setup like this, export and import while 
keeping the paths the way I want them?

My next question is about a more recent issue.

I posted here asking about replacing the disk, but didn't really find out whether I 
needed to do any work on the OS side.

I had a disk fail and the hot spare took over. There was another spare disk in the 
array, so I removed it from the spares list and ran the replace using it. I then spun 
down the bad disk and popped in a replacement.

Bringing it back up, I could not add the new disk to the pool (as a replacement for 
the spare I had used in the replace), even after running the usual utilities to rescan 
the bus (they did run and complete without errors; the commands are below).
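
By "the usual utilities" I mean the standard Solaris rescan commands, something like 
this (from memory):

     devfsadm -Cv    # rebuild and clean up the /dev and /devices links
     cfgadm -al      # list attachment points and check the new disk is configured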

So I shut down and rebooted.

The system came back up fine, but before I went to add the disk I ran a zpool 
status and noticed that the disks in the pool had rearranged themselves after the 
boot.

Original zpool:
share
     raidz2
          c2t3d0
          c2t4d0
          c2t5d0
          c2t6d0
          c2t7d0
          c2t8d0  <----drive that failed
     raidz2
          c2t9d0
          c2t10d0
          c2t11d0
          c2t12d0
          c2t13d0
          c2t14d0
spares
     c2t2d0
     c2t16d0 <--------I have no idea why it isn't t15

I removed the c2t2d0 spare and ran zpool replace, using c2t2d0 to replace the 
dead c2t8d0.
I ran a scan just to be sure before I did anything, and it checked out fine.
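
In commands, that was roughly:

     zpool remove share c2t2d0           # take c2t2d0 out of the spares list
     zpool replace share c2t8d0 c2t2d0   # rebuild onto c2t2d0 in place of the dead disk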
After the reboot it showed up like this (before I added the spare back):

share
     raidz2
          c2t3d0
          c2t5d0
          c2t6d0
          c2t7d0
          c2t8d0
          c2t2d0 
     raidz2
          c2t9d0
          c2t10d0
          c2t11d0
          c2t12d0
          c2t13d0
          c2t14d0
spares
     c2t16d0 

The device designated c2t4d0, which was not touched during the replacement, is 
now missing, but c2t8d0, which had failed and been replaced, is there now. I added 
c2t4d0 as a spare with no errors, and even ran two rescans just to be sure.
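
The add itself was just the obvious one-liner:

     zpool add share spare c2t4d0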

Everything is working OK now, but I'd like to know why that happened.

I feel like trying to understand the behavior of the devices is like trying to 
map R'lyeh. I suspect I should name the server/array Cthulhu (if that will fit 
on the little LCD) or maybe Hastur (nothing like seeing that name bounce around 
on the display).
 
 