> On Apr 22, 2024, at 4:21 PM, Chuck Guzis via cctalk <cctalk@classiccmp.org>
> wrote:
>
> On 4/22/24 13:02, Wayne S wrote:
>> I read somewhere that the cable lengths were expressly engineered to provide
>> that signals arrived to chips at nearly the same time so as to reduce chip
>> “wait” times and provide more speed.
>
> That certainly was true for the 6600. My unit manager, fresh out of
> UofMinn, had his first job with CDC, measuring wire loops on the first
> 6600 to which Seymour had attached tags that said "tune".
Not so much "to arrive at the same time" but rather "to arrive at the correct
time". And not so much to reduce chip wait times, because for the most part
that machine doesn't wait for things. Instead, it relies on predictable
timing: an action set in motion is known to deliver its result at a specific
later time, and when that signal arrives, some element is ready to accept it
at exactly that moment.
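To make that concrete, here is a minimal sketch in plain Python of that
scheduling discipline (the class, names, and latencies are mine for
illustration, not anything from CDC): nothing handshakes or stalls; the
sender simply knows the cycle at which its result will appear, and the
consumer is clocked to latch it at exactly that cycle.

import heapq

class FixedLatencyMachine:
    def __init__(self):
        self.now = 0        # current clock cycle
        self.events = []    # min-heap of (arrival_cycle, description)

    def launch(self, what, latency):
        # The sender does not wait for an acknowledgment; it simply
        # knows when the result will appear at its destination.
        heapq.heappush(self.events, (self.now + latency, what))

    def step(self):
        self.now += 1
        while self.events and self.events[0][0] == self.now:
            cycle, what = heapq.heappop(self.events)
            print(f"cycle {cycle}: {what} arrives and is latched")

m = FixedLatencyMachine()
m.launch("memory read data", latency=5)  # guaranteed to arrive at cycle 5
m.launch("register value", latency=6)    # guaranteed one cycle later
for _ in range(8):
    m.step()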
A nice example is the exchange jump instruction processing, which fires off a
bunch of memory read/restore operations and sends off current register values
across the various memory buses. The memory read completes and sends off the
result, then 100 ns or so later the register value shows up and is inserted
into the write data path of the memory to complete the core memory full cycle.
(So it isn't actually a read/restore, but rather a "read/replace".)
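The distinction is easy to model. Below is an assumed sketch (my own toy, not
actual 6600 logic): core reads are destructive, so the write half of a full
cycle normally restores the word just read; in an exchange jump, the register
value that arrives ~100 ns later is substituted into that write path instead.

def core_full_cycle(mem, addr, replacement=None):
    # Reading core is destructive, so the write half of the full cycle
    # normally restores what was just read out.
    old = mem[addr]
    mem[addr] = 0                  # destructive read half
    # In an exchange jump, the register value arrives in time for the
    # write half and replaces the restore data: a "read/replace".
    mem[addr] = old if replacement is None else replacement
    return old                     # the old contents go out on the bus

mem = {0o1234: 42}
old = core_full_cycle(mem, 0o1234, replacement=99)
print(old, mem[0o1234])            # -> 42 99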
Another example is the PPU "barrel", which books like Thornton's show as a
passive thing except at the "slot" where the arithmetic lives. In reality,
about 6 positions before the slot, the current memory address (the PC or the
current operand address) is handed off to the memory, so that the read data is
available just as that PP rotates into the slot. The output of the slot then
becomes the restore (or update) data for the write part of the memory full
cycle.
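A toy model of that lookahead (the 10-PP barrel and the 6-position lead are
from the description above; the rest is my own illustrative scaffolding): at
every tick the barrel issues a memory read for the PP that is 6 positions
short of the slot, so the data is waiting when that PP arrives.

NPP = 10         # PP states circulating in the barrel
LOOKAHEAD = 6    # positions before the slot where the address is issued

pending = {}     # PP number -> read data that will be ready at the slot

def tick(t):
    at_slot = t % NPP                 # PP currently in the slot
    issuing = (t + LOOKAHEAD) % NPP   # PP six positions ahead of the slot
    # hand this PP's address to its memory now, so the read data is
    # available just as it reaches the slot
    pending[issuing] = f"data for PP{issuing} (read issued at t={t})"
    if at_slot in pending:
        data = pending.pop(at_slot)
        # the slot's output would then become the update data for the
        # write half of the memory full cycle
        print(f"t={t}: PP{at_slot} executes with {data}")

for t in range(20):
    tick(t)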
> But then, take a gander at a modern motherboard and the lengths (sic) to
> which the designers have routed the traces so that timing works.
Indeed, and with multi-Gb/s interfaces this stuff really matters, enough so
that high-end processors document the wire lengths inside the package, so that
"match interconnect lengths" doesn't mean "match etch lengths" but rather
"match etch plus in-package lengths".
The mind boggles at the high end.... FPGAs with dozens of interfaces running at
data rates up to 58 Gb/s.
paul