On Apr 3, 2013, at 2:58 PM, Nicholas Tang wrote:

> What I will say, though, is that if anyone is considering blades, I'd
> seriously consider SeaMicro: http://seamicro.com/SM15000
>
> This is an unsponsored, non-employee plug: I used one of their systems
> for several months at my last job, and I loved it. It was ridiculously
> dense, easy to manage, and worked well. It's basically the ultimate
> blade server: 64 discrete systems in 10U, plus separate blades for
> shared storage (although you can provision enough that it doesn't
> actually have to be shared) and shared networking. It's not for every
> application (no more than a single socket per blade, although the RAM
> goes surprisingly high), and I think it maxes out at something like
> 16 x 10G ports, but for what we were doing it was great. And our full
> chassis used (depending on the blades you get, obviously) something
> like 16A @ 208V, meaning even w/ redundant power you can run 64
> systems on only 2 x 20A circuits.
>
> Disclaimer: they were recently acquired by AMD; I can't vouch for how
> AMD will run the company now, but SeaMicro as a stand-alone company
> was awesome.
>
> Nicholas
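Those power numbers check out, for what it's worth. A minimal sketch of the
arithmetic -- the 80% continuous-load derating on the branch circuits is my
assumption, as is reading the quoted 16A figure as the full-chassis draw:

# Rough check of the power figures quoted above.
CHASSIS_AMPS = 16      # quoted full-chassis draw
VOLTS = 208            # quoted circuit voltage
NODES = 64             # quoted systems per chassis
CIRCUIT_AMPS = 20      # quoted branch circuit rating
DERATE = 0.80          # continuous-load derating -- assumption, not quoted

chassis_watts = CHASSIS_AMPS * VOLTS      # 3328 W per chassis
watts_per_node = chassis_watts / NODES    # ~52 W per system
usable_amps = CIRCUIT_AMPS * DERATE       # 16 A usable per 20 A circuit

print(chassis_watts, "W per chassis,", round(watts_per_node), "W per system")
# With A+B redundant feeds, either circuit must carry the whole chassis alone:
print("fits on a single 20A circuit:", CHASSIS_AMPS <= usable_amps)

At roughly 52 W per system, either feed alone can carry the whole chassis,
which is exactly the 2 x 20A claim.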
So far, the AMD acquisition hasn't made much difference; they still operate
mostly independently.

We just bought 3 SeaMicro units, after evaluating them for a year and a
half. We're currently running them through their paces before deploying
production services to them.

The compute density in these things is amazing relative to power and space
usage. This is especially true if you can carve up workloads to run on
dual-core Atoms with 4 GB RAM: 256 compute nodes in 10U! Even with the Atom
performance penalty compared to Xeon (or Opteron), having 4x the number of
compute nodes more than compensates (see the break-even sketch at the end of
this message). (We do have a Xeon unit for workloads that require more RAM
per instance.)

In-chassis storage, shared networking, and shared power make cabling a
dream.

--------------------------------------------------------------------
Leon Towns-von Stauber                     http://www.occam.com/leonvs/
"We have not come to save you, but you will not die in vain!"
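To put rough numbers on the Atom-vs-Xeon tradeoff above, a minimal sketch.
The per-node throughput ratio is purely illustrative (not a benchmark), and
it assumes an embarrassingly parallel workload and a 64-node Xeon chassis
like the one in Nicholas's post:

# Break-even arithmetic for the density argument above.
ATOM_NODES = 256   # dual-core Atom config, per this post
XEON_NODES = 64    # single-socket Xeon config, per the quoted post

# An Atom node only has to deliver more than 64/256 = 0.25 of a Xeon
# node's throughput for the denser chassis to win in aggregate.
break_even = XEON_NODES / ATOM_NODES

# Hypothetical per-node ratio, for illustration only -- not a benchmark.
atom_vs_xeon = 0.33

print("break-even per-node ratio:", break_even)                           # 0.25
print("aggregate advantage at 0.33:", ATOM_NODES * atom_vs_xeon / XEON_NODES)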