Sorry if you got this twice, but I never saw it appear on the alias.


OK, today I played with a J4400 connected to a Txxx server running S10 10/09.
 
First off, read the release notes! I spent about 4 hours pulling my hair out because I could not get stmsboot to work, until we read in the release notes that 500GB SATA drives do not work!!!
 
Initial Setup:
A pair of dual-port SAS controllers (c4 and c5)
A J4400 with 6x 1TB SATA disks
 
The J4400 had two controllers, and these were connected to one SAS card (physical controller c4).
 
Test 1:
 
First, a reconfiguration reboot (reboot -- -r).
 
format shows 12 disks on c4 (each of the six disks appears via two paths). If you picked the same disk via both paths, ZFS stopped you doing anything stupid because it knew the disk was already in use.
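 
For example, something like this (the device names are made up; the point is that both of them are really the same physical disk seen down different paths):
 
    # create a pool using one path to a disk
    zpool create testpool c4t0d0
    # now try to build another pool using the *other* path to the same physical disk
    zpool create otherpool c4t6d0    # refused - zpool reports the device is already in use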
 
Test 2:
 
Run stmsboot -e.
 
format now shows six disks on controller c6, a new "virtual controller". The two internal disks are also now on c6, and stmsboot has done the right stuff with the rpool, so I would guess you could enable multipathing at a later date if you don't want to do it from the start, but I did not test this.
 
stmsboot -L only showed the two internal disks, not the six in the J4400. Strange, but we pressed on.
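 
For the record, the whole Test 2 sequence is only a handful of commands (a sketch; run as root, and stmsboot needs a reboot to take effect):
 
    stmsboot -e    # enable MPxIO on the supported HBA ports (it offers to reboot for you)
    stmsboot -L    # list the mapping from the old (non-multipathed) device names to the new ones
    format         # the J4400 disks now show up once each, on the new "virtual controller" c6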
 
Test 3:
 
I created a zpool (two disks mirrored) using two of the new devices on c6.
 
I created some I/O load.
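 
In other words, roughly this (the c6 device names are placeholders, not the real WWN-based names):
 
    # mirrored pool from two of the new multipathed c6 devices
    zpool create tank mirror c6t5000C500AAAAAAAAd0 c6t5000C500BBBBBBBBd0
 
    # crude I/O load: keep writing and re-reading a file while the cables get pulled
    while true; do
        dd if=/dev/urandom of=/tank/io_test bs=1024k count=512
        dd if=/tank/io_test of=/dev/null bs=1024k
    done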
 
I then unplugged one of the cables from the SAS card (physical c4).
 
Result: Nothing, everything just keeps working - cool stuff!
 
Test 4:
 
I plugged the unplugged cable into the other controller (physical c5).
 
Result: Nothing, everything just keeps working - cool stuff!
 
Test 5:
 
Being bold, I then unplugged the remaining cable from the physical c4 controller.
 
Result: Nothing, everything just keeps working - cool stuff!
 
So I had gone from dual-pathed on a single controller (c4) to single-pathed on a different controller (c5).
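 
The obvious sanity checks after each pull are just the following (zpool status stayed clean throughout; mpathadm is there too if you want to see the path counts):
 
    zpool status tank    # pool stays ONLINE, no read/write/checksum errors
    mpathadm list lu     # shows each logical unit and how many of its paths are operational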
 
 
Test 6:
 
I added the other four drives to the zpool (plain old ZFS stuff - a bit boring).
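 
I.e. nothing more exciting than this (placeholder device names again):
 
    # add the remaining four disks as two more mirrored pairs
    zpool add tank mirror c6t5000C500CCCCCCCCd0 c6t5000C500DDDDDDDDd0 \
                   mirror c6t5000C500EEEEEEEEd0 c6t5000C500FFFFFFFFd0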
 
 
Test 7:
 
I plugged in four more disks.
 
Result: Their multipathed devices just showed up in format. I added them to the pool and also added them as spares, all while the I/O load was still running. No noticeable stops or glitches.
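 
Again, just plain zpool commands (placeholder names), all run while the load carried on:
 
    zpool add tank mirror c6t5000C500A1A1A1A1d0 c6t5000C500B1B1B1B1d0   # grow the pool
    zpool add tank spare c6t5000C500C1C1C1C1d0 c6t5000C500D1D1D1D1d0    # add hot spares
    zpool status tank                                                   # the spares show up as AVAIL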
 
Conclusion:
 
If you RTFM first, then stmsboot does everything it is documented to do. You don't need to play with cfgadm or anything like that, just as I said originally (below). The multipathing stuff is easy to set up, and even a very rusty admin like me found it very easy.
 
Note: There may be patches for the 500GB SATA disks, I don't know; fortunately that's not what I've sold - phew!!
 
TTFN
Trevor
 
 
 
 
 



Trevor Pretty wrote:
Karl

Don't you just use stmsboot?

http://docs.sun.com/source/820-3223-14/SASMultipath.html#50511899_pgfId-1046940

Bruno

Next week I'm playing with an M3000 and a J4200 in the local NZ distributor's lab. I had planned to just use the latest version of S10, but if I get the time I might play with OpenSolaris as well, though I don't think there is anything radically different between the two here.

From what I've read in preparation (and I stand to be corrected):


    * Will I be able to achieve multipath support if I connect the 
      J4400 to 2 LSI HBAs in one server with SATA disks, or is this only 
      possible with SAS disks? This server will have OpenSolaris (any 
      release, I think). 

Disk type does not matter (see link above).

    * The CAM (StorageTek Common Array Manager) is only for hardware 
      management of the JBOD, leaving 
      disk/volumes/zpools/luns/whatever_name management up to the server 
      operating system, correct? 

That is my understanding, see: http://docs.sun.com/source/820-3765-11/

    * Can I put some readzillas/writezillas in the J4400 along with SATA 
      disks, and if so will I have any benefit, or should I place 
      those *zillas directly into the server's disk tray? 

On the Unified Storage products they go in both: Readzillas in the server, Logzillas in the J4400. This is quite logical: if you want to move the array between hosts, all the data needs to be in the array. Read data can always be re-created, so the closer to the CPU the better. See: http://catalog.sun.com/

    * Does anyone have experience with those JBODs? If so, are they in 
      general solid/reliable? 

No. But get a support contract!

    * The server will probably be a Sun x44xx series with 32GB RAM, but 
      for the best possible performance, should I invest in more and 
      more spindles, or a couple fewer spindles and buy some readzillas? 
      This system will be mainly used to export some volumes over iSCSI 
      to a Windows 2003 fileserver, and to hold some NFS shares. 

Check Brendan Gregg's blogs; *I think* he has done some work here, from memory.
 
 





Karl Katzke wrote:
Bruno - 

Sorry, I don't have experience with OpenSolaris, but I *do* have experience running a J4400 with Solaris 10u8. 

First off, you need an LSI HBA for the multipath support. It won't work with any others as far as I know. 

I ran into problems with the multipath support because it wouldn't allow me to manage the disks with cfgadm, and it got very confused when I'd do something as silly as replace a disk, which causes the disk's GUID (and therefore its address under the virtual multipath controller) to change. My takeaway was that Solaris 10u8 multipath support is not ready for production environments, as there are limited-to-no administration tools. This may have been fixed in recent builds of Nevada. (See a thread that started around 03Nov09 for my experiences with MPxIO.) 

At the moment, I have the J4400 split between the two controllers and simply have even-numbered disks on one and odd-numbered disks on the other. Both controllers can *see* all the disks.

You are correct about the CAM software. It also updates the firmware, though, since we commoners don't seem to have access to the serial management ports on the J4400. 

I can't speak to locating the drives -- that would be something you'd have to test. I have found increases in performance on my faster and more random array; others have found exactly the opposite. 

My configuration is as follows: 
x4250
- rpool - 2x 146GB 10k SAS
- 'hot' pool - 10x 300GB 10k SAS + 2x 32GB ZIL
j4400
- 'cold' pool - 12x 1TB 7200rpm SATA ... testing adding 2x 146GB SAS in the x4250, but haven't benchmarked yet. 

Performance on the J4400 was disappointing with just one controller to 12 disks in one RAIDZ2 and no ZIL. However, I do not know if the bottleneck was at the disk, controller, backplane, or software level... I'm too close to my deadline to do much besides randomly shotgunning different configs to see what works best! 

-K 


Karl Katzke
Systems Analyst II
TAMU - RGS



  
On 11/25/2009 at 11:13 AM, in message <4b0d65d6.4020...@epinfante.com>, Bruno Sousa <bso...@epinfante.com> wrote: 
Hello! 
 
I'm currently using an X2200 with an LSI HBA connected to a Supermicro 
JBOD chassis; however, I want more redundancy in the JBOD. 
So I have looked into the market, and into the wallet, and I think 
that the Sun J4400 suits my goals nicely. However, I have some 
concerns, and if anyone can give some suggestions I would truly appreciate it. 
And now for my questions: 
 
    * Will I be able to achieve multipath support if I connect the 
      J4400 to 2 LSI HBAs in one server with SATA disks, or is this only 
      possible with SAS disks? This server will have OpenSolaris (any 
      release, I think). 
    * The CAM (StorageTek Common Array Manager) is only for hardware 
      management of the JBOD, leaving 
      disk/volumes/zpools/luns/whatever_name management up to the server 
      operating system, correct? 
    * Can I put some readzillas/writezillas in the J4400 along with SATA 
      disks, and if so will I have any benefit, or should I place 
      those *zillas directly into the server's disk tray? 
    * Does anyone have experience with those JBODs? If so, are they in 
      general solid/reliable? 
    * The server will probably be a Sun x44xx series with 32GB RAM, but 
      for the best possible performance, should I invest in more and 
      more spindles, or a couple fewer spindles and buy some readzillas? 
      This system will be mainly used to export some volumes over iSCSI 
      to a Windows 2003 fileserver, and to hold some NFS shares. 
 
 
Thank you for all your time, 
Bruno 
 
    



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
