I finally got around to reading that note. My principal response is that it
got so far down into details that I couldn't see the larger picture any more.

Going back to the original IBM 801 work, the RISC concept is very simple:
make the overall system as fast as possible, which it does by making the CPU
cycle time as short as possible. This results in a CPU that is not as easy to
work with, so the compiler has to be 'smarter'. In other words, engineering
complexity is moved from the hardware to the software.

This is an acceptable tradeoff: the complexity in the software is not a
recurring cost, whereas extra gates add cost to every machine produced.
Moreover, while the more complex compiler may be more time-consuming to run,
that cost is paid only once, whereas the efficiency of the binary is felt
every time the program is run.

Focusing on what features a CPU does or does not have in some ways misses the
whole point of RISC: it's not about what specific features the CPU has, in
isolation; it's about looking at the system as a whole, all the way up through
the compilers, to maximize performance.

I recall Tom Knight laying out the implication for CPU design very simply,
in a seminar I took back when the idea had just come out: look at the CPU
design and find the longest signal path; that path sets the lower limit on
the cycle time. Redesign to remove it; since the capability that needed it
will inevitably be used only part of the time, the execution-time increase
from losing it will be outweighed by the speedup of all the other
instructions.
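
To make that arithmetic concrete, here's a back-of-the-envelope sketch in C.
Every number in it is invented purely for illustration (the 100ns/70ns cycle
times, the 10% usage of the removed capability, the 3-instruction
replacement); it doesn't model any real machine.

    #include <stdio.h>

    /* Invented numbers: the longest path forces a 100ns cycle; removing
       it allows a 70ns cycle, but the 10% of instructions that used the
       removed capability must each become 3 simple instructions. */
    int main(void)
    {
        double before = 100.0;                    /* avg ns per unit of work */
        double after = 70.0 * (0.9 + 0.1 * 3.0);  /* 90% unchanged, 10% x3 */
        printf("before: %.0f ns per unit of work\n", before);
        printf("after:  %.0f ns per unit of work\n", after);  /* 84: a win */
        return 0;
    }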

The other thing one needs to remember, talking about RISC, is that it's now
been almost 40 years since the concept was devised (an eternity in the computing
field), and the technology environment has changed drastically since then. So
RISC has changed and adapted as that environment changed.

Nowadays, when people throw a billion transistors at each CPU, the picture
is somewhat different. Register windows were just the first instance of this
sort of thing: we have this unused area of the chip, what can we put there?


    >> On 6/15/19 3:40 PM, ben via cctalk wrote:

    >> CISC design is now needed to handle the 'extended features'. ... RISC
    >> came along only because Compilers could only generate SIMPLE
    >> instructions, that matched the RISC format.

No; compilers had been created that could use the more complex CISC
instructions of, say, a VAX. RISC post-dated a lot of those developments,
and had an entirely different point.


    > From: Chuck Guzis

    > For what it's worth, the number of instructions in the ISA does not define
    > RISC, but rather that the instructions execute quickly. Some RISC
    > implementations have large instruction sets.

Right; what's reduced is the complexity of the instructions, not their
number; it is that reduced complexity which produces the speedup that is
the goal.

In fact, a RISC CPU may actually have more instructions, e.g. separate ones
for different cases, with the compiler being given the responsibility of
picking the right one, instead of the CPU figuring it out as it goes.
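
A concrete instance (real, though the code below is a stylized sketch): MIPS
provides both ADD, which traps on signed overflow, and ADDU, which wraps; the
choice of opcode is made once, by the compiler, from the source-level types,
rather than by the CPU at run time.

    /* Stylized illustration; actual compiler output varies (in practice
       C compilers often emit ADDU for both cases, since C does not
       require trapping on signed overflow). */
    int add_signed(int a, int b)
    {
        return a + b;       /* candidate for ADD:  trap on overflow */
    }

    unsigned add_unsigned(unsigned a, unsigned b)
    {
        return a + b;       /* candidate for ADDU: wrap on overflow */
    }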

    > RISC does carry a penalty in that you're executing more instructions
    > to get something done, so your code space is larger; but, you hopefully
    > have them scheduled such that the whole task runs faster.

This is another aspect, which I've mentioned before, behind the rise of
RISC: the changing size and speed of main memory, relative to the CPU.
Simpler instructions are faster, but a given task will need more of them.
This is acceptable if the memory can supply them fast enough. If the memory
bandwidth is less, more complex instructions make sense, to get more out of
the limited bandwidth.
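
Here's a small C sketch of that tradeoff, again with invented numbers (the
instruction counts, sizes, and memory bandwidth are assumptions, chosen only
to show the shape of the effect):

    #include <stdio.h>

    /* Invented workload: 1.0M CISC instructions averaging 5 bytes each,
       versus 1.5M fixed-size 4-byte RISC instructions for the same task. */
    int main(void)
    {
        double cisc_bytes = 1.0e6 * 5.0;  /* 5MB of instruction fetch */
        double risc_bytes = 1.5e6 * 4.0;  /* 6MB of instruction fetch */
        double bw = 8.0e6;                /* memory supplies 8MB/sec */
        printf("CISC fetch time: %.3f sec\n", cisc_bytes / bw);
        printf("RISC fetch time: %.3f sec\n", risc_bytes / bw);
        return 0;  /* when memory is slow, fetch dominates: CISC wins */
    }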

Also, if memory is of limited capacity, or expensive, then more complex
instructions make sense, since more can be done with a fixed amount of
memory. (The PDP-11 still scores very high in code density.) This too,
however, has been overtaken by the march of technology.
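
A rough worked example of the density point (byte counts are approximate and
depend on the addressing modes used): for the statement x = x + y, the PDP-11
can use a single memory-to-memory ADD, where a stylized 32-bit RISC needs a
load/load/add/store sequence.

    int x, y;

    void add_xy(void)
    {
        x = x + y;
        /* PDP-11:  ADD Y, X        ; 1 instr word + 2 addr words = 6 bytes

           Stylized 32-bit RISC (assuming x and y are reachable from a
           base register):
                    LW   r1, X      ; 4 bytes
                    LW   r2, Y      ; 4 bytes
                    ADD  r1, r1, r2 ; 4 bytes
                    SW   r1, X      ; 4 bytes -> 16 bytes total */
    }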

Still, the basic idea of RISC applies: make the CPU clock rate as fast as
possible by making the instructions simple, and let software deal with the
resulting issues.

        Noel
