Karl, don't you just use stmsboot?
http://docs.sun.com/source/820-3223-14/SASMultipath.html#50511899_pgfId-1046940

Bruno, next week I'm playing with an M3000 and a J4200 in the local NZ distributor's lab. I had planned to just use the latest version of S10, but if I get the time I might play with OpenSolaris as well; I don't think there is anything radically different between the two here. From what I've read in preparation (and I stand to be corrected):

> * Will I be able to achieve multipath support if I connect the J4400 to
>   2 LSI HBAs in one server, with SATA disks, or is this only possible
>   with SAS disks? This server will have OpenSolaris (any release, I think).

Disk type does not matter (see link above).

> * The CAM (StorageTek Common Array Manager) is only for hardware
>   management of the JBOD, leaving disk/volume/zpool/LUN management up to
>   the server operating system, correct?

That is my understanding; see:
http://docs.sun.com/source/820-3765-11/

> * Can I put some Readzillas/Writezillas in the J4400 along with SATA
>   disks, and if so will I have any benefit, or should I place those
>   *zillas directly into the server's disk tray?

On the Unified Storage products they go in both: Readzillas in the server, Logzillas in the J4400. This is quite logical: if you want to move the array between hosts, all the data needs to be in the array, and read cache data can always be re-created, so for that the closer to the CPU the better. See:
http://catalog.sun.com/

> * Does anyone have experience with those JBODs? If so, are they in
>   general solid/reliable?

No. But get a support contract!

> * The server will probably be a Sun x44xx series with 32 GB RAM, but for
>   the best possible performance, should I invest in more and more
>   spindles, or a couple fewer spindles plus some Readzillas? This system
>   will be mainly used to export some volumes over iSCSI to a Windows 2003
>   file server, and to hold some NFS shares.
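For reference, the stmsboot workflow suggested above looks roughly like this on Solaris 10 / OpenSolaris. This is a sketch, not a tested procedure; the `mpt` driver name matches the LSI HBAs discussed here, but check your own controller before enabling anything:

```shell
# Enable MPxIO multipathing on ports driven by the LSI mpt driver.
# stmsboot updates /etc/vfstab and dump config and requires a reboot.
stmsboot -D mpt -e

# After the reboot, show how the old per-port device names map to the
# new single multipathed device names (one name per LUN, two paths).
stmsboot -L

# List multipathed logical units and their path states.
mpathadm list lu
```

`stmsboot -D mpt -d` reverses the change if the multipath tooling proves as troublesome as Karl describes below.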
Check Brendan Gregg's blog; *I think* he has done some work here, from memory.

Karl Katzke wrote:

Bruno - Sorry, I don't have experience with OpenSolaris, but I *do* have experience running a J4400 with Solaris 10u8.

First off, you need an LSI HBA for the multipath support. It won't work with any others as far as I know. I ran into problems with the multipath support because it wouldn't allow me to manage the disks with cfgadm, and it got very confused when I'd do something as silly as replace a disk, causing the disk's GUID (and therefore its address under the virtual multipath controller) to change. My take-away was that Solaris 10u8 multipath support is not ready for production environments, as there are limited-to-no administration tools. This may have been fixed in recent builds of Nevada. (See a thread that started around 03 Nov 09 for my experiences with MPxIO.) At the moment, I have the J4400 split between the two controllers, with even-numbered disks on one and odd-numbered disks on the other. Both controllers can *see* all the disks.

You are correct about the CAM software. It also updates the firmware, though, since us commoners don't seemingly have access to the serial management ports on the J4400.

I can't speak to locating the drives -- that would be something you'd have to test. I have found increases in performance on my faster and more random array; others have found exactly the opposite. My configuration is as follows:

x4250 - rpool       - 2x 146 GB 10k SAS
      - 'hot' pool  - 10x 300 GB 10k SAS + 2x 32 GB ZIL
j4400 - 'cold' pool - 12x 1 TB 7200 rpm SATA

... testing adding 2x 146 GB SAS in the x4250, but I haven't benchmarked that yet. Performance on the J4400 was disappointing with just one controller to 12 disks in one RAIDZ2 and no ZIL. However, I do not know if the bottleneck was at the disk, controller, backplane, or software level... I'm too close to my deadline to do much besides randomly shotgunning different configs to see what works best!
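A layout like the 'hot' pool described above (striped mirrors plus a mirrored ZIL) can be sketched as follows. The device names are entirely hypothetical, chosen only to show the shape of the command; the original post doesn't give actual controller/target numbers:

```shell
# 'hot' pool: 10x 300 GB SAS as five striped mirror pairs, plus a
# mirrored pair of 32 GB devices as a separate ZIL (log) vdev.
# c1t*/c2t* are placeholder device names, not from the original post.
zpool create hot \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0 \
  mirror c1t8d0 c1t9d0 \
  mirror c1t10d0 c1t11d0 \
  log mirror c2t0d0 c2t1d0

# Verify the vdev layout and the separate log device.
zpool status hot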
-K

Karl Katzke
Systems Analyst II
TAMU - RGS

>>> On 11/25/2009 at 11:13 AM, in message <4b0d65d6.4020...@epinfante.com>, Bruno Sousa <bso...@epinfante.com> wrote:

Hello!

I'm currently using an X2200 with an LSI HBA connected to a Supermicro JBOD chassis, but I want more redundancy in the JBOD. So I have looked into the market, and into the wallet, and I think the Sun J4400 suits my goals nicely. However, I have some concerns, and if anyone can give some suggestions I would truly appreciate it.

And now for my questions:

* Will I be able to achieve multipath support if I connect the J4400 to 2 LSI HBAs in one server, with SATA disks, or is this only possible with SAS disks? This server will have OpenSolaris (any release, I think).
* The CAM (StorageTek Common Array Manager) is only for hardware management of the JBOD, leaving disk/volume/zpool/LUN management up to the server operating system, correct?
* Can I put some Readzillas/Writezillas in the J4400 along with SATA disks, and if so will I have any benefit, or should I place those *zillas directly into the server's disk tray?
* Does anyone have experience with those JBODs? If so, are they in general solid/reliable?
* The server will probably be a Sun x44xx series with 32 GB RAM, but for the best possible performance, should I invest in more and more spindles, or a couple fewer spindles plus some Readzillas? This system will be mainly used to export some volumes over iSCSI to a Windows 2003 file server, and to hold some NFS shares.

Thank you for all your time,
Bruno

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
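The iSCSI-plus-NFS use case Bruno describes maps to ZFS roughly as below. This is a sketch assuming a pool named `tank` and the older `shareiscsi` property that shipped with that era of Solaris/OpenSolaris (later builds moved iSCSI target configuration to COMSTAR); names and sizes are illustrative:

```shell
# Carve out a 100 GB zvol to export as an iSCSI LUN for the
# Windows 2003 file server ('tank' is a placeholder pool name).
zfs create -V 100g tank/win2003-lun
zfs set shareiscsi=on tank/win2003-lun

# Share a regular filesystem over NFS.
zfs create tank/shares
zfs set sharenfs=on tank/shares
```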