On 28/02/11 12:46 PM, Dave Pooser wrote:
> On 2/27/11 4:07 PM, "James C. McPherson" <j...@opensolaris.org> wrote:
>> ...
>> PHY   iport@
>>  0       1
>>  1       2
>>  2       4
>>  3       8
>>  4      10
>>  5      20
>>  6      40
>>  7      80
> OK, bear with me for a moment because I'm feeling extra dense this
> evening. The PHY tells me which port on the HBA I'm connected to. What
> tells me which HBA?
The devinfo path that's reported in the non-MPxIO output from
format < /dev/null (see below).
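If you want to script it: the iport@ component is just a hex bitmask of
the PHYs behind that port (a wide port sets several bits), and
everything before iport@ names the HBA. A minimal sketch in Python (the
function names are mine, not anything that ships with the OS):

#!/usr/bin/env python3
# Illustrative only: split a devinfo path into its HBA, iport and
# target components, and expand the iport@ hex bitmask into PHY numbers.
import re

def phys_for_iport(iport_hex):
    """Return the PHY numbers encoded in an iport@ bitmask."""
    mask = int(iport_hex, 16)
    return [phy for phy in range(8) if mask & (1 << phy)]

def split_devinfo_path(path):
    """Split a devinfo path into (hba, iport_hex, target)."""
    m = re.match(r'(.*)/iport@([0-9a-fA-F]+)/(.*)', path)
    return m.groups()

hba, iport, target = split_devinfo_path(
    '/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@20/disk@w5000cca222e0533f,0')
print(hba)                    # /pci@0,0/pci8086,340a@3/pci1000,72@0
print(phys_for_iport(iport))  # [5], since 0x20 is 1 << 5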
> That's the information I care most about, and if that information is
> contained up there I'll do a happy dance and head on in to the office
> to start building zpools.
I've arranged these by devinfo path:
1st controller
c10t2d0 /pci@0,0/pci8086,340a@3/pci1000,72@0/iport@4/disk@p2,0
c15t5000CCA222E006B6d0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@8/disk@w5000cca222e006b6,0
c13t5000CCA222DF92A0d0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@10/disk@w5000cca222df92a0,0
c12t5000CCA222E0533Fd0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@20/disk@w5000cca222e0533f,0
The most likely reason you're seeing a c10t2d0 is that the disk fails
to respond in the required fashion to a particular SCSI INQUIRY command
when it is attached to the system, which is why it shows up with a
PHY-based disk@p2,0 address instead of a WWN-based disk@w...,0 address
like the rest.
2nd controller
c16t5000CCA222DDD7BAd0
/pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0
3rd controller
c14t5000CCA222DF8FBEd0
/pci@0,0/pci8086,340e@7/pci1000,3020@0/iport@1/disk@w5000cca222df8fbe,0
c18t5000CCA222DEAFE6d0
/pci@0,0/pci8086,340e@7/pci1000,3020@0/iport@2/disk@w5000cca222deafe6,0
c19t5000CCA222E0A3DEd0
/pci@0,0/pci8086,340e@7/pci1000,3020@0/iport@4/disk@w5000cca222e0a3de,0
c20t5000CCA222E046B7d0
/pci@0,0/pci8086,340e@7/pci1000,3020@0/iport@8/disk@w5000cca222e046b7,0
c17t5000CCA222DF3CECd0
/pci@0,0/pci8086,340e@7/pci1000,3020@0/iport@20/disk@w5000cca222df3cec,0
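Scripting that grouping is just as simple: bucket each disk by the part
of its devinfo path before iport@. Another made-up sketch, fed with two
entries from the listing above:

#!/usr/bin/env python3
# Illustrative only: group disks by HBA using their devinfo paths.
from collections import defaultdict

def group_by_hba(disks):
    """disks: iterable of (ctd, devinfo_path) pairs."""
    groups = defaultdict(list)
    for ctd, path in disks:
        hba = path.split('/iport@')[0]  # the part before iport@ is the HBA
        groups[hba].append(ctd)
    return groups

disks = [
    ('c12t5000CCA222E0533Fd0',
     '/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@20/disk@w5000cca222e0533f,0'),
    ('c16t5000CCA222DDD7BAd0',
     '/pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0'),
]
for hba, members in sorted(group_by_hba(disks).items()):
    print(hba, members)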
With the information above about the PHY/iport relationship, I hope
you can now see your physical layout more clearly. Also, please
remember that using MPxIO means you have a single virtual controller,
and the driver stack handles the translation to physical for you, so
you don't have to worry about that aspect. Of course, if you want to
worry about it, feel free.
> Well, I want to make sure that a single controller failure can't cause
> any of my RAIDz2 vdevs to fault. I know I can do that manually by
> building the vdevs in such a way that no more than two drives are on a
> single controller. If the virtual controller is smart enough to do that
> automagically (when I'm using SATA disks and a backplane that doesn't
> support multipathing), then I have no complaints and I owe you a beer
> or three the next time you're in the Dallas area. But that seems
> unlikely to me, and so I think I have to worry about it. I'd love to be
> wrong, though!
No, the controller doesn't do that for you. That's one bit of AI that
we haven't quite got to just yet. Unless you wanted to spend $LOTS on
a full frame HDS beast ;-)
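It's easy enough to do yourself, though: deal each controller's disks
round-robin across your vdevs before you run zpool create. A
back-of-the-envelope sketch with hypothetical disk and pool names (it
only prints suggestions, it never touches the system):

#!/usr/bin/env python3
# Illustrative only: deal disks across raidz2 vdevs so no single
# controller dominates any one vdev.

def deal_into_vdevs(groups, nvdevs):
    """groups: {hba_path: [ctd, ...]}; returns nvdevs lists of disks.
    Because each controller's disks get consecutive slots, a controller
    with k disks puts at most ceil(k / nvdevs) of them into one vdev."""
    vdevs = [[] for _ in range(nvdevs)]
    i = 0
    for disks in groups.values():
        for disk in disks:
            vdevs[i % nvdevs].append(disk)
            i += 1
    return vdevs

groups = {  # hypothetical, evenly populated controllers
    'hba-a': ['cA1', 'cA2', 'cA3', 'cA4'],
    'hba-b': ['cB1', 'cB2', 'cB3', 'cB4'],
    'hba-c': ['cC1', 'cC2', 'cC3', 'cC4'],
}
for vdev in deal_into_vdevs(groups, nvdevs=2):
    print('zpool add tank raidz2 ' + ' '.join(vdev))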
>> Personally, having worked on the mpt_sas(7d) project, I'm disappointed
>> that you believe the card and its driver are "a failed bit".
> I'd like to revise and extend my remarks and replace that with "a
> suboptimal choice for this project."
Not knowing your other requirements for the project, I'll settle
for this version :)
[snip]
hth,
James
--
Oracle
http://www.jmcp.homeunix.com/blog