Putting on my GPU architect hat for a minute...
It's generally *not* true that a GPU processing element is more power-efficient 
than a regular CPU core.  The GPU relies heavily on having many work elements to 
compute simultaneously that all follow the same control path.  Divergence in that 
control flow typically costs substantially more power to get the same results.  
That said, if you have a numerically heavy workload with many paths sharing the 
same control flow, you might see a power advantage.  But for most code, which is 
branchy and loopy, you'll almost certainly be pessimizing it.
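
A toy CUDA sketch of that divergence point, purely for illustration (the kernels, 
names, and the odd/even branch are invented; this is not a benchmark):

/*
 * Illustrative only: two kernels doing the same arithmetic, one with
 * uniform control flow and one where odd and even lanes branch apart.
 */
#include <cstdio>
#include <cuda_runtime.h>

/* uniform: every thread in a warp takes the same path, so all lanes stay busy */
__global__ void scale_uniform(float *data, int n, float k)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if (i < n)
		data[i] = data[i] * k;
}

/* divergent: odd and even lanes take different branches, so the warp runs
 * both paths serially with half the lanes masked off each time; the
 * issued-but-discarded work is where the extra power goes */
__global__ void scale_divergent(float *data, int n, float k)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if (i >= n)
		return;
	if (i % 2 == 0)
		data[i] = data[i] * k;
	else
		data[i] = data[i] / k;
}

int main(void)
{
	const int n = 1 << 20;
	float *d;

	cudaMalloc((void **)&d, n * sizeof(float));
	cudaMemset(d, 0, n * sizeof(float));

	scale_uniform<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
	scale_divergent<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
	cudaDeviceSynchronize();

	cudaFree(d);
	printf("done\n");
	return 0;
}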
Paul

Sent from my HTC Inspire™ 4G on AT&T

----- Reply message -----
From: "Josh Marshall" <joshua.r.marshall.1...@gmail.com>
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@9fans.net>
Subject: [9fans] Hey, new to this. Trying to get plan9 to work in a VM.
Date: Mon, Jun 6, 2011 7:57 pm
Well, two reasons come to mind immediately.  First, it'd be cool.  Second, the 
wattage you listed is the maximum wattage, not the idle or light-load wattage 
that would more likely apply.  Per processing element, GPUs use less power, and 
under certain loads you get more processing power per watt than from a CPU.  
Furthermore, this would greatly increase the processing power available to the 
system, and could spur a change in the GPU's role toward a processor bank that 
does distributed work for the whole system, including graphics; the actual video 
card could then become something extremely abstract that only takes in an image 
and converts it to a signal for the display(s).
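
Purely as a sketch of what "GPU as a processor bank" would mean for one offloaded 
job, using the stock CUDA runtime (the saxpy workload and all names here are 
stand-ins, not an actual design).  The host-to-device and device-to-host copies 
are the overhead any such scheduler would have to hide, which is why this only 
pays off for large, regular, numeric batches:

/*
 * Hypothetical offload sketch: farm one data-parallel batch out to the
 * GPU and pull the result back.
 */
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if (i < n)
		y[i] = a * x[i] + y[i];
}

int main(void)
{
	const int n = 1 << 22;			/* ~4M elements */
	size_t bytes = n * sizeof(float);
	float *hx = (float *)malloc(bytes);
	float *hy = (float *)malloc(bytes);
	float *dx, *dy;
	int i;

	for (i = 0; i < n; i++) {
		hx[i] = 1.0f;
		hy[i] = 2.0f;
	}

	cudaMalloc((void **)&dx, bytes);
	cudaMalloc((void **)&dy, bytes);

	/* shipping the working set across PCIe in both directions is the
	 * cost a "processor bank" scheduler would have to amortize */
	cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
	cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

	saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

	cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
	printf("y[0] = %g\n", hy[0]);		/* expect 5 */

	cudaFree(dx);
	cudaFree(dy);
	free(hx);
	free(hy);
	return 0;
}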


So, in short: more system power, plus a possible long-term benefit to hardware 
development, abstraction, and the overall model.

This concept could be taken as far as moving all processing off specialized 
units onto general-purpose ones, potentially allowing an internally distributed 
system with high regularity, fault tolerance, and so on.  That's the far end, 
but not to be totally discounted.


Also, I'd like to do something interesting with my free time.

On Mon, Jun 6, 2011 at 6:36 PM, erik quanstrom <quans...@quanstro.net> wrote:

> Finally for this, what would it take to have the GPU treated as a processor
> bank for idling and tasks not requiring a full CPU core?

leaving trifling software problems like running general-purpose
code on a special-purpose bit of hardware and running multiple
cpu arches in the same machine aside, why wouldn't you prefer
to idle the gpu, since it's usually less power-efficient than your cpu?
pci-sig is working on 300+w pcie power for gpus.

- erik
