I'd go with an HBA and present ZFS with the raw disks. Save yourself a couple of bucks and a bunch of potential hassle. I've had good luck with the LSI 9200-8e (external) and 9210-8i (internal). Both are PCIe 2.0. The 9207-8i and 9207-8e are the PCIe 3.0 equivalents, but I have not tested them. 85 drives is not a problem on any of these.
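With an HBA passing the raw disks through, pool creation is straightforward. A minimal sketch (the device names c1t0d0 etc. and the vdev layout are hypothetical; substitute the devices `format` shows on your system):

```shell
# Hypothetical device names -- list your actual disks with: format
# Build a pool from two raidz2 vdevs plus a hot spare. Because ZFS
# sees each spindle directly, it can detect and repair per-disk
# errors that a single hardware-RAID volume would hide from it.
zpool create data \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
    spare c1t12d0

# Confirm every disk shows up as its own vdev member, then scrub
# periodically so latent errors are found while redundancy is intact.
zpool status data
zpool scrub data
```

With 85 disks you would repeat the raidz2 groups rather than make one enormous vdev; resilver time scales with vdev size, not pool size.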
> On July 23, 2013 at 12:36 PM CJ Keist <cj.ke...@colostate.edu> wrote:
>
> Been awhile, thank you all for the recommendations. It took six days to
> restore all the data from backups! The LSI MegaRAID 9260-8i doesn't
> support JBOD, so I had to restore using the one large disk volume again.
> I will be shopping for a new RAID controller card that supports JBOD and
> will rebuild this file server. Any recommendations for a good JBOD
> controller that works well with OI? Must be able to handle 85 disks.
>
> Thanks...
>
> On 7/9/13 10:17 AM, Nikola M. wrote:
> > On 07/ 9/13 01:22 PM, Jim Klimov wrote:
> >> On 2013-07-08 22:58, CJ Keist wrote:
> >>> Thank you all for the replies.
> >>> I tried OmniOS and Oracle Solaris 11.1, but both were unable to
> >>> import the data pool. So I reinstalled OI 151a7 and, after importing
> >>> the data and having it crash, I booted up in single-user mode. At this
> >>> point I was able to initiate "zpool scrub data" and it looks to be
> >>> running!! I will wait and see if the scrub can finish and then try to
> >>> remount everything. See attached pic.
> >>
> >> That screenshot seems disturbing: with such a large pool you only have
> >> one device. Is it on hardware RAID which masks away all the disks and
> >
> > The point of using ZFS is that you do not need to be tied to your
> > hardware. Treating all disks as JBOD and letting ZFS handle them is the
> > preferred way.
> >
> > The problem is obviously within that hardware controller.
> > If ZFS were handling the disks (and managing the pool), it would most
> > certainly boot like nothing happened.
> >
> > Some people tend to use both: ZFS handling volumes from a hardware RAID,
> > with the hardware RAID making those volumes out of groups of disks
> > (to use the benefits of hardware caching, etc.). The same could be done
> > with ZFS alone, without being tied to hardware issues. Either way, ZFS
> > should be presented with multiple disks/volumes and make the pool out of
> > them, so it can do something clever with its included volume management.
> >
> > _______________________________________________
> > OpenIndiana-discuss mailing list
> > OpenIndiana-discuss@openindiana.org
> > http://openindiana.org/mailman/listinfo/openindiana-discuss
>
> --
> C. J. Keist                     Email: cj.ke...@colostate.edu
> Systems Group Manager           Solaris 10 OS (SAI)
> Engineering Network Services    Phone: 970-491-0630
> College of Engineering, CSU     Fax: 970-491-5569
> Ft. Collins, CO 80523-1301
>
> All I want is a chance to prove 'Money can't buy happiness'