Re: pictures... Yes I do, but I'm not sure that everybody can see them.
!@$(&% Facebook isn't letting me select "public" from the dropdown. I have
pictures of a microcloud (8 servers in 3U) and a twin-squared (4 servers in
2U). There are no issues with cabling interrupting airflow, and they're
quite pretty to look at when done properly. We get integrators to do this
for us. You get what you pay for.
http://www.facebook.com/photo.php?fbid=10200537826080042&set=pcb.10200537826800060&type=1&theater

I can reply directly with the pictures to anyone who cannot see them, if
desired. I figured sending big pictures to the mailing list would be
controversial to some. (The full-size pictures are 1MB each; the ones above
are 60% reduced.)

On Wed, Apr 3, 2013 at 5:58 PM, Nicholas Tang <nicholast...@gmail.com> wrote:

> A few comments from my perspective... while I'm not a huge fan of the
> 4-systems-in-2U boxes, a lot of them are denser, power-wise, than 1U boxes
> (at least in my experience - ours used shared power supplies, which
> helped), and so that makes up for some of the frustration (but it also
> means that if one locks up hard, you can't just do a remote power cycle
> even if you have a 'smart' power strip).
>
> What I will say, though, is that if anyone is considering blades, I'd
> seriously consider Seamicro: http://seamicro.com/SM15000
>
> This is an unsponsored, non-employee plug: I used one of their systems for
> several months at my last job, and I loved it.  It was ridiculously dense,
> easy to manage, and worked well.  It's basically like the ultimate blade
> server: 64 discrete systems in 10U, plus separate blades for shared storage
> (although you can provision enough that it doesn't actually have to be
> shared) and shared networking.  It's not for every application (no more
> than a single socket per blade, although the RAM goes surprisingly high)
> and I think it maxes out at something like 16 x 10G ports, but for what we
> were doing it was great.  And our full chassis used (depending on the blades
> you get, obviously) something like 16A @ 208V, meaning even w/ redundant
> power you can get 64 systems using only 2 x 20A circuits.
>
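For anyone who wants the arithmetic behind that power claim spelled out,
here's a rough back-of-the-envelope check (my numbers; I'm assuming the
16A @ 208V figure is for a fully loaded chassis):

    amps, volts, systems = 16, 208, 64
    chassis_va = amps * volts              # 3328 VA for the whole chassis
    per_system = chassis_va / systems      # 52.0 VA per discrete system
    # 16A is right at the 80% continuous-load limit of a 20A circuit, so a
    # redundant pair of feeds really does fit on 2 x 20A circuits.
    print(per_system, 0.8 * 20)            # 52.0 16.0

Call it roughly 50 VA per system at the wall, which is what makes that
kind of density possible in the first place.
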
> Disclaimer: they were recently acquired by AMD; I can't vouch for how AMD
> will run the company now, but Seamicro as a stand-alone company was awesome.
>
> Nicholas
>
>
> On Wed, Apr 3, 2013 at 2:46 PM, Matt Lawrence <m...@technoronin.com> wrote:
>
>> This is a bit of a rant, so adjust your filters accordingly.
>>
>> I'm currently doing some work in a not-really-production datacenter (unless
>> you ask the developers) that has a variety of systems.  Some of the systems
>> I'm dealing with are the 4-servers-in-2U variety.  It's a neat idea, but
>> great care needs to be taken to avoid problems.  One of the big issues is
>> cable density: the systems I'm managing have a BMC connection, 2x 1Gb
>> connections, 2x 10Gb connections, and a KVM dongle.  That's 7 things plugged
>> into the back of each server (the KVM is VGA + USB).  Multiply by 4 and
>> that's 28 cables per 2U, plus 2x power cables.  That's a lot of cables, and
>> they aren't running in neat rows like they do on a 48-port switch.
>>
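Just to put a number on that cable density (the full-rack figure below
assumes a hypothetical 42U rack packed solid, which is my extrapolation,
not Matt's actual setup):

    per_server = 1 + 2 + 2 + 2          # BMC, 2x 1Gb, 2x 10Gb, KVM (VGA + USB)
    per_chassis = per_server * 4 + 2    # 28 data cables plus 2 power per 2U
    rack_units = 42                     # hypothetical fully packed rack
    print(per_chassis, (rack_units // 2) * per_chassis)  # 30 per 2U, 630 per rack

Six hundred-odd cables in one rack, none of them in neat rows, is the
scale of the problem.
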
>> Adding to the problem is the fact that the disks plug into the front of
>> the system while the server electronics plug in from the back of the
>> server, right through that rat's nest of cabling.  It's a challenge.  If
>> you consider yourself OK at cabling, you don't have anywhere near the
>> skills to do cabling at this density.  Typical cabling standards are not
>> adequate for these kinds of setups.  Mediocre cabling also really blocks
>> the airflow; the systems I'm dealing with are nice and toasty at the back
>> of the cabinet.
>>
>> Another option I'm dealing with is blade enclosures.  They manage to get
>> 16 servers into 10U of rack space, and (at least here) they have switches
>> built in.  This means I only have 7 network cables running to a top-of-rack
>> switch/patch panel.  So much easier to deal with.  The blades are
>> accessible via the front of the rack, which is also much easier.  The
>> enclosures have built-in management, which again makes things easier.  A
>> downside is that certain failures require taking down the entire enclosure
>> to fix, so you lose 16 servers instead of the 4 in the other kind of
>> high-density server.  I have never been a big fan of blade enclosures, but
>> I'm starting to come around.
>>
>> Of course, one issue that too few people think about until it is too late
>> is the issue of power density and cooling capacity.  Being able to put 4
>> servers in 2U sounds really nifty until you discover you can only power and
>> cool half a rack of them.
>>
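To make that last point concrete, a sketch with made-up but plausible
numbers (the per-chassis wattage and the per-rack power/cooling budget
below are my assumptions, not measurements from Matt's datacenter):

    watts_per_chassis = 1200           # assumed draw of one loaded 4-in-2U box
    rack_units, budget_kw = 42, 12     # assumed rack size and power/cooling budget
    full_rack_kw = (rack_units // 2) * watts_per_chassis / 1000  # 25.2 kW if packed
    usable = int(budget_kw * 1000 / watts_per_chassis)           # 10 chassis, i.e. 20U
    print(full_rack_kw, usable)        # 25.2 10

With numbers like those, a packed rack wants roughly twice the power and
cooling you actually have, so about half the rack sits empty.
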
>> This concludes my rant for today.  Maybe.
>>
>> -- Matt
>> It's not what I know that counts.
>> It's what I can remember in time to use.
_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
