On Fri, Apr 17, 2009 at 2:59 PM, Eris Discordia
<eris.discor...@gmail.com> wrote:
>> even today on an average computer one has this articulation: a CPU (with
>> an FPU perhaps); tightly or loosely connected storage (?ATA or SAN);
>> graphical capacities (terminal): a GPU.
>
> As it happens, a reversal of specialization has indeed taken place, as
> Brian Stuart suggests. These "terminals" you speak of, GPUs, contain such
> vast untapped general-purpose processing capability that new uses, and new
> frameworks for exploiting it, are being defined: GPGPU and OpenCL.
>
> <http://en.wikipedia.org/wiki/OpenCL>
> <http://en.wikipedia.org/wiki/GPGPU>
>
> Right now, the GPU on my low-end video card takes a huge burden off the
> CPU when leveraged by the right H.264 decoder. Two high-definition AVC
> streams would significantly slow down my computer before I began using a
> CUDA-enabled decoder. Now I can easily play four in parallel.
>
> Similarly, the GPUs in PS3 boxes are being integrated into one of the
> largest loosely-coupled clusters on the planet.
>
> <http://folding.stanford.edu/English/FAQ-highperformance>
>
> Today, even a mere cellphone may contain enough processing power to run a
> low-traffic web server or a 3D video game. This processing power comes so
> cheap that it is mostly wasted.
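
To make the GPGPU idea above concrete: frameworks like OpenCL apply one
small "kernel" function to every element of a large array in parallel. A
toy sketch in plain Python, with a process pool merely standing in for the
GPU's thousands of cores (the kernel itself is made up for illustration):

```python
# Toy illustration of the GPGPU programming model: apply one small
# "kernel" to every element of an array in parallel. On a real GPU
# (OpenCL/CUDA) one kernel instance runs per data item across many
# cores; here a multiprocessing pool merely stands in for that.
from multiprocessing import Pool

def kernel(x):
    # per-element work; a GPU would run one instance per data item
    return x * x + 1

if __name__ == "__main__":
    data = list(range(8))
    with Pool() as pool:
        result = pool.map(kernel, data)   # order is preserved
    print(result)  # [1, 2, 5, 10, 17, 26, 37, 50]
```

The point is that the programmer writes only the per-element function;
the framework decides how to spread it across the hardware.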

I can't find the link, but a recent article described someone's
efforts at CMU to develop what he calls FAWN, a Fast Array of Wimpy
Nodes. He basically took a bunch of eeePC boards and turned them into
a single computer.

The performance per watt of such an array was staggeringly higher than
that of a monster computer with Xeons and disks.

So hopefully in the future we will be able to have more fine-grained
control over such things, and fewer cycles will be wasted. It's time
people realized that CPU cycles are a bit like employment: sure,
UNemployment is a problem, but so is UNDERemployment, and the latter
is sometimes harder to gauge.
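
To put rough numbers on the performance-per-watt point (the figures below
are hypothetical, chosen only for illustration; they are not FAWN's
measured results):

```python
# Back-of-the-envelope comparison: an array of wimpy nodes vs. one
# big Xeon box serving queries. All numbers are hypothetical.
wimpy_qps, wimpy_watts, wimpy_count = 350, 4, 21   # per eeePC-class node (assumed)
beefy_qps, beefy_watts = 5000, 250                 # one Xeon server (assumed)

wimpy_total_qps = wimpy_qps * wimpy_count
wimpy_qpj = wimpy_total_qps / (wimpy_watts * wimpy_count)  # queries per joule
beefy_qpj = beefy_qps / beefy_watts                        # queries per joule

print(wimpy_qpj, beefy_qpj)  # the array wins per joule despite losing per node
```

Even with each wimpy node far slower than the Xeon box, the array comes
out several times ahead per joule under these assumed numbers.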

>
> I'd like to add to Brian Stuart's comments the point that the previous
> specialization of various "boxes" is mostly disappearing. At some point in
> the near future all boxes may contain identical or very similar powerful
> hardware, probably even all integrated into one "black box," so cheap that
> it doesn't matter if one hardware resource or another is wasted. To put
> such a computational environment to good use, system software should stop
> incorporating a role-based model of the various installations. All boxes,
> except the costliest, most special ones, shall be peers.
>
> --On Friday, April 17, 2009 7:11 PM +0200 tlaro...@polynum.com wrote:
>
>> On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstu...@bellsouth.net wrote:
>>>
>>> - First, the gap between the computational power at the
>>> terminal and the computational power in the machine room
>>> has shrunk to the point where it might no longer be significant.
>>> It may be worth rethinking the separation of CPU and terminal.
>> For example, I'm typing this in acme running in a 9vx terminal
>> booted using a combined fs/cpu/auth server for the
>>> file system.  But I rarely use the cpu server capability of
>>> that machine.
>>
>> I'm afraid I don't quite agree with you.
>>
>> The definition of a terminal has changed. In Unix, the graphical
>> interface (X11) was a graphical variant of the text terminal interface,
>> i.e. the articulation (link, network) was put in the wrong place,
>> the graphical terminal (X11 server) being a kind of dumb terminal (a
>> little above a frame buffer), leaving all the processing, including the
>> handling of the graphical interface (generating the image,
>> administering the UI, the menus), on the CPU (Xlib and toolkits run on
>> the CPU, not the X server).
>>
>> A terminal is not a device with no processing capabilities (a dumb
>> terminal): it can be a full terminal, one able to handle the interface,
>> the representation of data and commands (wandering through a menu should
>> be terminal-side work; other users should not be impacted by one user's
>> wandering through the UI).
>>
>> More and more, for administration, using light terminals without
>> software installations is the way to go (fewer resources, lower TCO):
>> "green" technology. Dataless terminals for security (one loses a
>> terminal, not the data), and dataless for safety (the data is
>> centralized and protected).
>>
>>
>> Secondly, one is accustomed to a physical user being several distinct
>> logical users (accounts), for managing different tasks or accessing
>> different kinds of data.
>>
>> But (to my surprise) the converse is also true: a collection of
>> individuals can be a single logical user, having to handle the very
>> same read-write data concurrently. Terminals are then just distinct
>> views of the same data (imagine, in a CAD program, having different
>> windows with different views of a file; this is the same, except that
>> the windows are on different terminals, with different "instances" of
>> the logical user in front of them).
>>
>> The processing is then better kept on a single CPU, which handles the
>> concurrency (rather than the fileserver trying to accommodate it). The
>> views are multiplexed, but not the handling of the data.
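
That scheme, one process owning the data and serializing all edits while
each terminal only sends commands and redraws its own view, can be
sketched in a few lines. This is a toy sketch; all names here are
hypothetical, not from any real Plan 9 interface:

```python
# One process owns the shared document and serializes edits; each
# "terminal" is only a view: it sends commands and redraws. The views
# are multiplexed, the handling of the data is not.
import threading, queue

class Document:
    def __init__(self):
        self.lines = []
        self.commands = queue.Queue()  # all edits funnel through here
        self.views = []                # callbacks that redraw each terminal

    def run(self):
        # the single CPU-side loop: all mutation happens here, in order
        while True:
            cmd = self.commands.get()
            if cmd is None:
                break
            self.lines.append(cmd)
            for redraw in self.views:
                redraw(list(self.lines))   # push a snapshot to every view

doc = Document()
seen = []
doc.views.append(lambda snapshot: seen.append(snapshot[-1]))  # one terminal's view
t = threading.Thread(target=doc.run)
t.start()
doc.commands.put("insert: hello")   # typed at one terminal
doc.commands.put("insert: world")   # typed at another terminal
doc.commands.put(None)              # shut down the loop
t.join()
print(seen)  # ['insert: hello', 'insert: world']
```

Because the queue serializes the commands, no fileserver ever has to
reconcile concurrent writes; the terminals stay dumb about the data.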
>>
>> Thirdly, you can have a slow/loose link between a CPU and a terminal,
>> since the commands are only a small fraction of the processing done.
>> You must have a fast or tight link between the CPU and the fileserver.
>>
>> In some sense, logically (but not efficiently: read the caveats in the
>> Plan 9 papers; a processor is nothing without tightly coupled memory, so
>> memory is not a sharable remote pool; remember Mach!), even today on an
>> average computer one has this articulation: a CPU (with an FPU
>> perhaps); tightly or loosely connected storage (?ATA or SAN);
>> graphical capacities (terminal): a GPU.
>>
>> --
>> Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
>>                 http://www.kergis.com/
>> Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C
>>
