In every server room, I always keep a spreadsheet showing how many VA and W
each UPS is rated for, how many VA and W each system draws, and which UPSes
each server is connected to.  It's a pain, because I don't blindly add up the
maximum power supply ratings (which are typically about 5x higher than the
*actual* peak power draw of the box).  To get a real measurement, I plug a
power strip into a Kill A Watt, plug both of the box's redundant power
supplies into the power strip, stress the system for a couple of minutes, and
record the readings.  This effectively solves two problems: (a) undersizing
UPSes, which I've seen bite other admins whose data centers crash under load,
and (b) oversizing UPSes, which is what most admins do, and which wastes money
and space and creates unnecessary waste.
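
(Incidentally, the bookkeeping part is easy to script.  Here's a minimal
sketch, assuming a hypothetical CSV named power.csv with a header row and
columns ups,server,watts,va holding the measured peaks; it just totals the
load per UPS so you can compare the sums against each UPS's rated W and VA:

    awk -F, 'NR>1 { w[$1]+=$3; va[$1]+=$4 }
             END  { for (u in w) printf "%s: %dW %dVA\n", u, w[u], va[u] }' power.csv

Nothing fancy, but it beats re-adding columns by hand every time a box moves.)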

Anyway, all that is just a tangent.  Here's what I really came to say:

Yesterday, I turned on a new server, took my power measurements, and was blown 
away to find that the system drew a maximum of 110W and 120VA, including 10% 
margin for error.  Dang!  They're getting good at power reduction.  Not long 
ago, I expected even less powerful servers to draw 300-500 W or so, *actual*
measured peak.

BTW, assuming a typical server with nothing but CPU, RAM, and disks in it, the
way I stress the system is to launch a zillion copies of "cat /dev/urandom >
/dev/null" and gzip until "top" shows me I'm CPU starved.  I have found
experimentally that stressing the disks and/or network and/or other I/O makes
no noticeable difference to power consumption.  If there were any other
power-hungry component, such as a GPU, I would need to find a way to stress
that component as well.
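
Here's one way to script that stress run.  A rough sketch only: this version
pipes urandom through gzip, uses nproc to pick a count (one pipeline per core,
times two, as my stand-in for "a zillion"), and assumes gzip is on the box:

    # one CPU-bound pipeline per core, times two, to make sure we're CPU starved
    for i in $(seq $(( $(nproc) * 2 ))); do
        cat /dev/urandom | gzip > /dev/null &
    done
    # watch "top" until every core is pegged, read the Kill A Watt, then clean up:
    kill $(jobs -p)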