On Tue, 15 Jul 2008, Ross wrote:

> Well I haven't used a J4500, but when we had an x4500 (Thumper) on 
> loan they had Solaris pretty well integrated with the hardware. 
> When a disk failed, I used cfgadm to offline it and as soon as I did 
> that a bright blue "Ready to Remove" LED lit up on the drive tray of 
> the faulty disk, right next to the handle you need to lift to remove 
> the drive.
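
(For anyone following along, the procedure Ross describes goes roughly
like this; the attachment point "sata1/3" is only an example, and the
one for your faulty disk will differ:

   # find the attachment point of the faulty disk, then offline it
   cfgadm | grep sata
   cfgadm -c unconfigure sata1/3

After the unconfigure, the blue LED lights and the disk can be pulled.)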

That sure sounds a whole lot easier to manage than my setup, a 
StorageTek 2540 with each drive exported as a LUN.  The 2540 can 
detect a failed drive by itself and turn on an LED, but if ZFS decides 
that a drive has failed and the 2540 does not, then I have to use the 
2540's CAM administrative interface to manually take the drive out of 
service.  I very much doubt that cfgadm will communicate with the 2540 
and tell it to do anything.
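
At least the ZFS half of the dance is scriptable.  Something along 
these lines, where the pool name "tank" is made up and the device is 
the first one from the table below:

   # tell ZFS to stop using the LUN before touching CAM
   zpool offline tank c4t600A0B80003A8A0B0000096147B451BEd0
   # ... fail the drive in CAM, swap it, let the 2540 settle ...
   zpool online tank c4t600A0B80003A8A0B0000096147B451BEd0

The CAM step in the middle is still strictly manual.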

A little while back I created this table so I could understand how 
things were mapped:

Disk    Volume   LUN  WWN                                              Device                                 ZFS
======  =======  ===  ===============================================  =====================================  ====
t85d01  Disk-01  0    60:0A:0B:80:00:3A:8A:0B:00:00:09:61:47:B4:51:BE  c4t600A0B80003A8A0B0000096147B451BEd0  P3-A
t85d02  Disk-02  1    60:0A:0B:80:00:39:C9:B5:00:00:0A:9C:47:B4:52:2D  c4t600A0B800039C9B500000A9C47B4522Dd0  P6-A
t85d03  Disk-03  2    60:0A:0B:80:00:39:C9:B5:00:00:0A:A0:47:B4:52:9B  c4t600A0B800039C9B500000AA047B4529Bd0  P1-B
t85d04  Disk-04  3    60:0A:0B:80:00:3A:8A:0B:00:00:09:66:47:B4:53:CE  c4t600A0B80003A8A0B0000096647B453CEd0  P4-A
t85d05  Disk-05  4    60:0A:0B:80:00:39:C9:B5:00:00:0A:A4:47:B4:54:4F  c4t600A0B800039C9B500000AA447B4544Fd0  P2-B
t85d06  Disk-06  5    60:0A:0B:80:00:3A:8A:0B:00:00:09:6A:47:B4:55:9E  c4t600A0B80003A8A0B0000096A47B4559Ed0  P1-A
t85d07  Disk-07  6    60:0A:0B:80:00:39:C9:B5:00:00:0A:A8:47:B4:56:05  c4t600A0B800039C9B500000AA847B45605d0  P3-B
t85d08  Disk-08  7    60:0A:0B:80:00:3A:8A:0B:00:00:09:6E:47:B4:56:DA  c4t600A0B80003A8A0B0000096E47B456DAd0  P2-A
t85d09  Disk-09  8    60:0A:0B:80:00:39:C9:B5:00:00:0A:AC:47:B4:57:39  c4t600A0B800039C9B500000AAC47B45739d0  P4-B
t85d10  Disk-10  9    60:0A:0B:80:00:39:C9:B5:00:00:0A:B0:47:B4:57:AD  c4t600A0B800039C9B500000AB047B457ADd0  P5-B
t85d11  Disk-11  10   60:0A:0B:80:00:3A:8A:0B:00:00:09:73:47:B4:57:D4  c4t600A0B80003A8A0B0000097347B457D4d0  P5-A
t85d12  Disk-12  11   60:0A:0B:80:00:39:C9:B5:00:00:0A:B4:47:B4:59:5F  c4t600A0B800039C9B500000AB447B4595Fd0  P6-B
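
One handy accident of the MPxIO naming: the hex string in the device 
name is the LUN WWN with the colons removed, so the WWN column can be 
regenerated from "zpool status" alone.  A rough sketch, with "tank" 
again standing in for the real pool name:

   zpool status tank | awk '$1 ~ /^c4t/ {print $1}' | while read d; do
           wwn=`echo $d | sed -e 's/^c4t//' -e 's/d0$//' \
                   -e 's/\(..\)/\1:/g' -e 's/:$//'`
           echo "$d  $wwn"
   done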

I chose the drive pairings based on a dump from a multipath utility, 
and at the chassis level there seems to be no rhyme or reason to the 
ZFS mirror pairings.
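
For reference, mpathadm (the Solaris MPxIO administration tool) is one 
way to produce such a dump, though the exact output varies by release:

   # list all multipathed LUNs, then drill into one of them
   mpathadm list lu
   mpathadm show lu /dev/rdsk/c4t600A0B80003A8A0B0000096147B451BEd0s2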

This is an area where traditional RAID hardware makes ZFS more 
difficult to use.

Bob