I will only comment on the chassis, since it is made by AIC (short for American Industrial Computer) and I have three of them in service at my work. These chassis are quite well made, but I have run into the following two problems:
1) The rails really are not up to the task of supporting such a heavy box when fully extended. If you rack this guy, you are at serious risk of a rail failure and of dropping the whole party on the floor. Ouch. If you do use this chassis in a rack, I highly recommend you either install a very strong rail-mounted shelf below it, or support it with a lift whenever the rails are fully extended.

2) The power distribution board in these is a little flaky. I haven't ever had one outright fail on me, but I have seen some interesting power-on scenarios. For example, after a planned power outage, the chassis would power on but then turn itself off again after about 4-5 seconds, and I couldn't get it to stay powered on. What was happening was that the power distribution card was confused: it thought it didn't have the necessary 3 (of 4) power supplies online, and safed itself off. To fix this, I had to pull all the power supplies out, wait a few minutes for the power distribution card to fully discharge, and then plug the supplies back in. After that it powered on and stayed on. A real odd pain in the posterior.

For all new systems, I've gone with this chassis instead (I just noticed Rackmount Pro sells 'em also):

http://rackmountpro.com/productpage.php?prodid=2043

Functional rails, and a better power system.

One other thing, which you may know already: Rackmount Pro will try to sell you 3ware cards, which work great in the Linux/Windows environment but aren't supported in OpenSolaris, even in JBOD mode. You will need alternate SATA host adapters for this application.

Good luck,

Jon
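P.S. In case it's useful, here's a rough sketch of the pool layout you describe below. The device names are placeholders (substitute whatever format(1M) reports on your box), and the iSCSI bit assumes the iSCSI target packages are installed:

  # OS: two-way mirror (the installer can lay this down as the boot pool)
  zpool create syspool mirror c0t0d0 c0t1d0

  # SAN: four 6-disk (4+2) raidz2 vdevs in one 24-disk pool
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

  # Carve out a volume and export it over iSCSI
  zfs create -V 500G tank/vol0
  zfs set shareiscsi=on tank/vol0

That accounts for all 26 bays (24 data disks plus the 2 OS disks) and gives you 16 disks' worth of usable capacity after parity.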
Kent Watsen wrote:
> Hi all,
>
> I'm putting together an OpenSolaris ZFS-based system and need help
> picking hardware.
>
> I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
> OS & 4*(4+2) RAIDZ2 for the SAN]
>
> http://rackmountpro.com/productpage.php?prodid=2418
>
> Regarding the mobo, CPUs, and memory - I searched Google and the ZFS
> site, and all I came up with so far is that, for a dedicated iSCSI-based
> SAN, I'll need about 1 GB of memory and a low-end processor - can anyone
> clarify exactly how much memory/CPU I'd need to be in the safe zone?
> Also, are there any mobo/chipsets that are particularly well suited for
> a dedicated iSCSI-based SAN?
>
> This is for my home network, which includes internet/intranet services
> (mail, web, ldap, samba, netatalk, code repository), build/test
> environments (for my cross-platform projects), and a video server
> (mythtv-backend).
>
> Right now, the aforementioned run on two separate machines, but I'm
> planning to consolidate them into a single Xen-based server. One idea I
> have is to host a Xen server on this same machine - that is, an
> OpenSolaris-based Dom0 serving ZFS-based volumes to the DomU guest
> machines. But if I go this way, then I'd be looking at a 4-socket Opteron
> mobo to use with AMD's just-released quad-core CPUs and tons of memory.
> My biggest concern with this approach is getting PSUs large enough to
> power it all - if anyone has experience on this front, I'd love to hear
> about it too.
>
> Thanks!
> Kent

--
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146  [EMAIL PROTECTED]
AST:7731^29u18e3