On Mon, 27 Feb 2012, Luke S. Crawford wrote:
So, I'm an American, living in America, using American data centers.
All my equipment expects single-phase (or split-phase, which functions
similarly from the point of view of the equipment) power. All my
equipment (save for the PDUs) is rated for anything between 100v-240v.
My understanding is that my equipment runs more efficiently on 208v
than on 120v.
The problem is that my data center charges me more per watt if I buy
a 208v circuit. (Oddly, the 120v circuits are cheapest per watt, and
208v is the most expensive, at exactly twice the cost per amp of the
120v circuits when it should be about 1.73x, with the 3-phase circuits
priced between the two. I mean, with 208v they'd have to balance the
third phase, so I can see charging something, but with 3-phase the
balancing is my problem, so I don't see why they'd charge more than
3x what they charge for 120v.)
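The 1.73 figure above falls straight out of the arithmetic. A rough
sketch (the per-amp comparison and the standard 3-phase power formula
are my own illustration, not from the thread):

```python
import math

# Watts delivered per amp of circuit current, per circuit type
# (RMS voltages are the ones discussed in the thread).
w_per_amp_120 = 120.0                 # 120v single-phase: P = V * I
w_per_amp_208 = 208.0                 # 208v single-phase (two hot legs)
w_per_amp_3ph = math.sqrt(3) * 208.0  # 208v 3-phase: P = sqrt(3) * V_LL * I

# A 208v circuit carries ~1.73x the power per amp of a 120v circuit,
# so charging exactly 2x per amp overcharges relative to power delivered.
print(round(w_per_amp_208 / w_per_amp_120, 2))  # 1.73
print(round(w_per_amp_3ph / w_per_amp_120, 2))  # 3.0
```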
the math in sine-wave AC circuits is odd, but a 208v circuit is
effectively two 120v legs (if the circuit is wired correctly, you could
take either leg of the 208v circuit, use ground as your neutral, and
you would measure 120v on each half of it)
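The "odd math" is that the two legs are 120 degrees out of phase, which
is why two 120v legs give 208v line-to-line rather than 240v. A quick
sketch of that phasor arithmetic (my own illustration):

```python
import math

v_rms = 120.0                     # each leg to neutral, RMS
phase_shift = math.radians(120)   # legs of a 3-phase supply

# RMS voltage between two equal legs separated by `phase_shift`:
# |V1 - V2| = 2 * V * sin(shift / 2), i.e. 120 * sqrt(3) here.
v_line_to_line = 2 * v_rms * math.sin(phase_shift / 2)
print(round(v_line_to_line, 1))  # 207.8
```

(With split-phase house power the legs are 180 degrees apart, and the
same formula gives 240v.)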
In a typical house breaker panel, what you literally have is the two
legs of 208v going to interleaved teeth that your breakers clip on to.
If you clip on two breakers, one to each leg (with a neutral/ground
return), you have two 120v circuits. If you connect one double breaker
and run both wires to a device, you have a 208v circuit.
the added efficiency of 208v circuits has to do with the fact that losses
in wiring are related to the current going through the wire
V = I*R (voltage = current * resistance)
P = I*V (power = current * voltage)
so P = I^2 * R (power = current squared * resistance)
by doubling the voltage, and cutting the current in half, you cut the
power lost by wiring resistance to 1/4. This also cuts the heat generated
by the wiring.
you can also get away with smaller wire as your current is half of what it
was before.
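Plugging numbers into that P = I^2 * R relationship shows the effect.
A minimal sketch, where the wire resistance and the load are assumed
round numbers for illustration:

```python
def wiring_loss(power_w, volts, wire_ohms):
    """Resistive loss in the wiring for a given delivered power."""
    current = power_w / volts          # I = P / V
    return current ** 2 * wire_ohms    # P_loss = I^2 * R

R = 0.05    # ohms, assumed round-trip wire resistance (illustrative)
P = 1200.0  # watts drawn by the equipment (illustrative)

loss_120 = wiring_loss(P, 120, R)  # 10 A   -> 5.0 W lost in the wire
loss_208 = wiring_loss(P, 208, R)  # ~5.8 A -> ~1.66 W lost in the wire
print(loss_120, round(loss_208, 2))
```

Per server the difference is a watt or two, which is why it only shows
up at datacenter scale.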
At datacenter scale this matters; I'm not sure how large you need to
get before it matters for your own equipment.
but if your equipment will run at 208v, you are best off using that
voltage. If you put a meter on the equipment, you will see it draw
roughly 120/208 (about 58%) of the current in amps that the same
equipment would draw running at 120v (the efficiency gains from the
higher voltage are not going to be large enough to read on the meter).
It's not a big difference on an individual server basis.
David Lang
I mean, this is at data centers that can handle north of 10kW of load
per rack. It just seems weird to me that I should get five 20A 120v
circuits rather than a smaller number of 3-phase circuits.
Am I missing something? Or are data centers charging you the least
for the power that costs them the most (in terms of efficiency)?
I guess it could be that the weirdness of having 'different' power is
a bigger deal than any efficiency gains; I have not, in fact, quantified
these efficiency gains, and they could be quite small.
If it's just me misunderstanding things, please recommend a good
"Electricity for Systems Administrators" book.
_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/