> The definition of a terminal has changed. In Unix, the graphical

In the broader sense of terminal, I don't disagree.  I was
being somewhat clumsy: I was talking about terminals in
the Plan 9 sense, the processing power local to my
fingers.

> A terminal is not a device with no processing capabilities (a dumb
> terminal): it can be a full terminal, able to handle the interface
> and the representation of data and commands (wandering through a
> menu should be terminal-side work; other users should not be
> impacted by one user's wandering through the UI).

Absolutely, but part of what has changed over the past 20
years is that the rate at which this local processing power
has grown has been faster than the rate at which the
processing power of the rack-mount box in the machine room
has grown (large clusters notwithstanding, that is).  So the
gap between them has narrowed.

> The processing is then better kept on a single CPU, handling the
> concurrency (and not the fileserver trying to accommodate). The views are
> multiplexed, but not the handling of the data....

That is part of the conversation the question is meant
to raise.  If cycles/second isn't as strong a justification
for separate CPU servers, then are there other reasons
we should still have the separation?  If so, do we need
to think differently about the model?

> In some sense, logically (but not efficiently: read the caveats in the
> Plan9 papers; a processor is nothing without tightly coupled memory, so

The flip side is actually what intrigues me more, namely
machines where the connection to the file system is
even more loosely coupled than sharing an Ethernet.  I'd
like my usage on the laptop sitting in Starbucks
to be as much a part of the model as using one of
the BlueGene machines as an enormous CPU server
while sitting in the next room.

BLS

