Hello

AMD and IBM/Sony are thinking a bit differently with their Fusion and Cell
processors, which integrate the CPU and GPU, no?

Regards.

On Tue, Jun 7, 2011 at 4:38 PM, Josh Marshall <joshua.r.marshall.1...@gmail.com> wrote:

> A system running more hardware will, for all practical purposes, use more
> energy.  What this would do is increase the efficiency of that power use.
> Say you're running a single-threaded indexing program, and it's indexing
> a very slow medium.  Why use a CPU core when you can idle them all, idle
> most of the GPU's processing elements, and just use the one?  This is
> mainly about maximizing hardware utilization, though.
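>
> To make that concrete, here's a rough, untested sketch (assuming a CUDA
> toolchain; the kernel and names are purely illustrative, and this proves
> nothing about power draw).  A <<<1,1>>> launch is exactly the "just use
> the one" case: a single GPU thread does the indexing while all the other
> processing elements stay idle:
>
> /* index_one.cu: one GPU thread counts newline-delimited records. */
> #include <stdio.h>
> #include <cuda_runtime.h>
>
> __global__ void index_one(const char *buf, int n, int *nrec)
> {
>     int count = 0;
>     for (int i = 0; i < n; i++)     /* a single thread walks the data */
>         if (buf[i] == '\n')
>             count++;
>     *nrec = count;
> }
>
> int main(void)
> {
>     const char src[] = "alpha\nbeta\ngamma\n";
>     char *dbuf;
>     int *dnrec, nrec;
>
>     cudaMalloc((void **)&dbuf, sizeof src);
>     cudaMalloc((void **)&dnrec, sizeof *dnrec);
>     cudaMemcpy(dbuf, src, sizeof src, cudaMemcpyHostToDevice);
>
>     /* one block, one thread: the rest of the device idles */
>     index_one<<<1, 1>>>(dbuf, sizeof src - 1, dnrec);
>
>     cudaMemcpy(&nrec, dnrec, sizeof nrec, cudaMemcpyDeviceToHost);
>     printf("%d records indexed\n", nrec);
>
>     cudaFree(dbuf);
>     cudaFree(dnrec);
>     return 0;
> }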
>
> In the VERY long run, I'm seeing things trending towards very distributed
> models.  As system resources grow, I believe it will become practical to
> "network" within a system.  This can manifest itself in two ways.  First,
> as multi-core systems slowly change into many-core systems, a networking
> model is very scalable, and with so many things that can break, fault
> tolerance will become a must.  This could then allow computer systems to
> continue their march towards a more biological organization, like a
> multi-cellular organism.  This will likely be abstracted away from
> programmers and users, but on a hardware level it allows for variable
> redundancy, extreme fault tolerance, and internal and external networking
> models, and the failure of any few components will have minimal or no
> impact on the stability and usability of the system.  This is WAY WAY in
> the future, but that's where I imagine it going, and this could be a step
> in that direction.  Was that as coherent as it should be?  I'm still
> playing with this in the back of my head, so it's by no means well
> planned :P  I'd be more than happy to talk to someone about this, because
> no one at my university knows this area--our math and CS/CIS departments
> are feeble.
>
>
> On Mon, Jun 6, 2011 at 9:19 PM, erik quanstrom <quans...@quanstro.net> wrote:
>
>> > Well, two reasons come to my mind immediately.  First, it'd be cool.
>> > Second, the wattage you listed is the max wattage, not the idle or
>> > light-load wattage that would more likely apply.  Per processing
>> > element, GPUs use less power, and under certain loads you get more
>> > processing power per watt than from a CPU.
>>
>> i'd sure like a reference to a case where a system with a gpu draws less
>> power than the same system without.  it's not like you can turn the cpu
>> off.
>>
>> > This concept could be taken as far as bringing all processing off
>> > specialized areas for general-purpose use, potentially allowing for an
>> > internally distributed system with high regularity, fault tolerance,
>> > etc.  That's on the far end, but not to be totally discounted.
>>
>> please explain.  how is such a machine more of any of these things than
>> a regular multi-core machine?
>>
>> - erik
>>
>>
>
