In the message dated: Thu, 04 Apr 2013 12:58:49 -0700,
The pithy ruminations from Robert Hajime Lanning on 
<Re: [lopsa-tech] High density servers> were:
=> On 04/04/13 07:12, Leon Towns-von Stauber wrote:
=> 

Some observations:

=> The data center I am at, I have provisioned 4x 20A 120V circuits (billed 
=> as 2x HA pair) per rack for storage and 5x 20A 120V circuits for 

If possible, run the servers at 240V; power supplies are noticeably more
efficient at the higher input voltage (some rough numbers below, after
the quoted bit).

=> compute. The fifth circuit was really because we need more outlets.  The 
=> compute rack contains 16x 1U servers, 5x 1U PDUs, 2x 1U switches, 1x 1U 
=> IP KVM, 1x 2U patch panel.  There is a 1U gap between all servers.
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
That's a lot of air space...is it really necessary? Is that primarily for
cooling, or for some other purpose?
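
On the circuit provisioning above, here's a rough back-of-the-envelope --
just a sketch, where the 80% continuous-load derating and the PSU
efficiency figures are assumptions, not measurements from either rack:

  # Usable power per rack from N x 20A/120V branch circuits, assuming
  # the usual 80% continuous-load derating.
  volts, amps, derate = 120, 20, 0.80
  per_circuit_w = volts * amps * derate              # ~1920 W per circuit
  print("storage rack:", 4 * per_circuit_w, "W")     # ~7.7 kW
  print("compute rack:", 5 * per_circuit_w, "W")     # ~9.6 kW

  # Same server load at 240V input: assume the PSU picks up a couple of
  # percent of efficiency (check your PSU's published efficiency curve).
  load_w = 350                                       # hypothetical per-server draw at the PSU output
  eff_120, eff_240 = 0.88, 0.90                      # assumed efficiency at each input voltage
  print("wall draw:", round(load_w / eff_120), "W at 120V vs",
        round(load_w / eff_240), "W at 240V")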

As a point of comparison, we've got pretty heavily loaded racks (25+ 1U
compute nodes, plus ~5U of PDUs, a few switches, a few infrastructure
servers, some cable management) with no gaps between servers. The 1U
compute servers were pretty hefty when new--12 CPU-cores, 64GB RAM,
and not the most power-efficient. The machines run fully loaded (no
CPU power saving by downclocking) 24x7. Yes, there's a noticeable heat
gradient from the bottom to the top of the rack, and IPMI logs show that
the top servers are always hotter...but it's not excessive. We've never
had heat-related issues or component failures.
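
If it's useful, this is roughly how we eyeball that gradient -- a sketch,
not our production script; the BMC hostnames and credential path are made
up, and the sensor name ("Inlet Temp") varies by vendor:

  #!/usr/bin/env python3
  # Pull the inlet temperature from each node's BMC, printed in rack
  # order (bottom to top).  Check `ipmitool sensor` output on your
  # hardware for the actual sensor name.
  import subprocess

  nodes = ["node%02d-ipmi" % n for n in range(1, 26)]   # hypothetical BMC hostnames

  for host in nodes:
      reading = subprocess.run(
          ["ipmitool", "-I", "lanplus", "-H", host, "-U", "admin",
           "-f", "/etc/ipmi-passwd", "sensor", "reading", "Inlet Temp"],
          capture_output=True, text=True).stdout.strip()
      print(host, reading)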

Most servers don't have any vertical ventilation in the top of the
case, or ingress airflow from the bottom of the case, so I wonder if
the 1U gap between servers does much more than reduce conductive heat
transfer between machines. [We do have one machine that has a vented
case above an NVidia GPU for additional passive cooling, but NVidia &
Silicon Mechanics have both insisted that the machine doesn't require
an air gap for cooling--it's just an optional extra path to release heat.]

I'm assuming that your 1U gaps have blanking panels on the front of the
rack...otherwise, mixing cool & hot air within the interior of the rack
would make the overall airflow much worse (assuming a standard hot-aisle
rear/cold-air front-ingress layout) and significantly reduce the amount
of cool air being pulled in by the upper servers.

=> 
=> I use 1U remote switchable PDUs.  I don't like the 0U (vertical) PDUs. 

Remote switchable PDUs with monitoring (i.e., overall & per-circuit loads)
are a very good thing.
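
Most metered units will also hand you those loads over SNMP, which makes
them easy to graph. A sketch -- the PDU hostnames and the OID are
placeholders; the real load OID and its units come from your vendor's
MIB, and many report tenths of an amp:

  #!/usr/bin/env python3
  # Poll per-PDU load over SNMP using net-snmp's snmpget.
  import subprocess

  pdus = ["rack12-pdu1", "rack12-pdu2"]        # hypothetical PDU hostnames
  LOAD_OID = "1.3.6.1.4.1.99999.1.2.3.0"       # placeholder: vendor-specific load OID

  for pdu in pdus:
      value = subprocess.run(
          ["snmpget", "-v2c", "-c", "public", "-Oqv", pdu, LOAD_OID],
          capture_output=True, text=True).stdout.strip()
      print(pdu, int(value) / 10.0, "A")       # assuming the reading is in tenths of an amp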

[SNIP!]

Mark

=> 
=> -- 
=> Mr. Flibble
=> King of the Potato People

-- 
Mark Bergman    Biker, Rock Climber, Unix mechanic, IATSE #1 Stagehand
'94 Yamaha GTS1000A^2
berg...@panix.com

http://wwwkeys.pgp.net:11371/pks/lookup?op=get&search=bergman%40panix.com

I want a newsgroup with an infinite S/N ratio! Now taking CFV on:
rec.motorcycles.stagehands.pet-bird-owners.pinballers.unix-supporters
15+ So Far--Want to join? Check out: http://www.panix.com/~bergman 
_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
