On Sat, Oct 26, 2024, 1:35 PM Rick Bensene via cctalk <cctalk@classiccmp.org>
wrote:

...
> If anyone out there had experience with 
> GE timeshared systems, or may know of existence of any distribution 
> media or source listings of the systems, or perhaps has memories of using 
> them, I'd
> love to read about it. 
...

Bill D. wrote:

> I believe they used straight-up Dartmouth BASIC, but maybe that's obvious and 
> does not need to be 
> stated.  

From all I've been able to determine by reading documentation for the systems 
on Bitsavers (thanks, Al!), the early versions may well have been Dartmouth 
BASIC as it was run there, but GE kept refining and adding features to BASIC, 
and by the time the Mark II systems came along, it was substantially different 
code.

> I have a paper tape exercise saved by someone who took intro  training in use 
> of the system, 
> with the intro brochure materials, etc.  When I printed the paper tape it 
> contained BASIC code and 
> the output.  The first time I printed the tape it was upside down, with 
> confusing results!

I can believe it looked like some kind of binary executable with the tape 
upside-down.   I'm sure that the introductory materials presented examples of 
code that were very vanilla Dartmouth BASIC...LET, FOR/NEXT, INPUT, PRINT, 
READ, DATA, etc.

> Sorry I don't have the actual BASIC but it very well may be a simH GE mini 
> tape file(s) out there GE 225 or 235.  I seem to remember seeing this but 
> did not find after a quick google search just now.

As far as I know, there's no simulation for any of the General Electric CPUs 
available under SimH or other commonly known vintage computer simulators.  
There may be tape image files floating around somewhere, but so far I haven't 
been able to find anything that appears to be relevant.

The early versions of the GE Timesharing environment provided three different 
language processors that could be run simultaneously on the system.  Of 
course, there was BASIC.  But there was also a FORTRAN compiler, as well as 
ALGOL.   Indications are that all of these were compiler-based, such that 
actual machine code was generated when a program was run.  I have found 
reference to the ability to save the compiled code as an executable that could 
run (mostly) standalone, although there was a runtime library that the 
executable would link to for common math processing and I/O routines.   This 
made the GE systems more flexible than the HP Timeshared systems, as the HP 
TSB systems only supported BASIC.    DEC's more advanced timeshared operating 
systems were capable of supporting multiple languages by design.

The more I learn about the GE timeshared systems, the more it seems it'd be a 
bit of an endeavor to simulate them, as both the main computer (executive) and 
the communications processor have shared connections to the disk controller, as 
well as a two-way channel for communication between the executive and front-end 
CPUs. 

The front-end machine does considerably more in the GE environment than it does 
in the HP Timeshared BASIC environment.  The front-end processor parses 
completed lines of user input, and if a line is a command it can handle itself, 
such as generating a disk catalog (directory listing), it uses its own access 
to the disk to pull the catalog and generate the output to the user without 
bothering the executive processor. 

The dual-port access to the disk controller did not allow simultaneous access 
from both the executive and communications processor.  Priority was placed on 
disk access requests from the communications processor, with the executive 
processor having to wait for access to the disk if the communications processor 
was busy accessing it.   
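
Just to make that priority rule concrete, here's a minimal sketch in C (the 
sort of thing a SimH-style simulator might contain) of how the arbitration 
could be modeled.  All of the names and the structure are my own invention for 
illustration; none of it comes from GE documentation.

  #include <stdbool.h>

  typedef enum { PORT_NONE, PORT_COMM, PORT_EXEC } disk_port;

  typedef struct {
      disk_port busy;          /* which port currently owns the controller */
      bool      comm_request;  /* request pending from communications CPU  */
      bool      exec_request;  /* request pending from executive CPU       */
  } disk_ctrl;

  /* Called once per simulated controller cycle to decide who gets the disk.
     The communications processor always wins; the executive simply waits. */
  disk_port disk_arbitrate(disk_ctrl *dc)
  {
      if (dc->busy != PORT_NONE)       /* a transfer is already in flight */
          return dc->busy;
      if (dc->comm_request) {
          dc->comm_request = false;
          dc->busy = PORT_COMM;
      } else if (dc->exec_request) {
          dc->exec_request = false;
          dc->busy = PORT_EXEC;
      }
      return dc->busy;
  }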

One of the "spare time" tasks the communications processor took care of was 
maintaining terminal status data on the disk that was read by the executive 
processor, so the exec would know if a user's session dropped unexpectedly and 
could perform the necessary cleanup operations for the user's process running 
on the executive.  I also believe that the communications processor performed 
user login and credential validation locally by accessing the user catalog area 
on disk.
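
As a purely hypothetical illustration, the per-terminal status record the comm 
processor keeps on disk, and the executive's housekeeping scan over it, might 
look something like this (the field names are my guesses, not anything from a 
GE listing):

  /* Hypothetical per-terminal status record, written to a fixed disk area
     by the communications processor and read back by the executive.
     Field names are invented for illustration. */
  struct term_status {
      unsigned short line;        /* terminal line number                */
      unsigned short user_id;     /* logged-in user, 0 = nobody          */
      unsigned char  carrier_up;  /* 1 while carrier is present          */
      unsigned char  dropped;     /* set by the comm CPU on carrier loss */
  };

  /* Executive-side housekeeping: clean up any process whose line dropped. */
  void exec_check_sessions(struct term_status *tbl, int nlines)
  {
      for (int i = 0; i < nlines; i++) {
          if (tbl[i].dropped && tbl[i].user_id != 0) {
              /* abort the user's job, release its files, free the slot... */
              tbl[i].user_id = 0;
              tbl[i].dropped = 0;
          }
      }
  }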
  
The GE communications front-end CPU ran a multi-tasking monitor with 
high-priority tasks that watched the communications lines for activity (ring, 
carrier detect, carrier loss, etc.) and for serial data bit transitions (yes, it 
used bit-banging for the terminal I/O), and accumulated input/output bytes.  
Lower-priority "spare time" tasks were used for things like sending a 
completed command line or program statement that couldn't be handled locally 
off to the executive CPU for processing, as well as sending output generated by 
either the executive processor or itself to the user's terminal.
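
To make the bit-banging a bit more concrete, here's a rough sketch of a 
per-line software-UART receiver of the sort the high-priority task would have 
implemented, sampled once per bit time (about 9.09 ms at 110 baud).  The 
framing (one start bit, eight data bits, one stop bit) and all of the names are 
assumptions of mine for illustration; it also glosses over the mid-bit phase 
alignment a real implementation has to worry about.

  /* Per-line software UART receiver, called once per bit time by the
     high-priority scan task.  Names and framing are illustrative only. */
  enum rx_state { RX_IDLE, RX_DATA, RX_STOP };

  struct rx_line {
      enum rx_state state;
      int bitnum;       /* which data bit is being collected        */
      int shift;        /* character being assembled, LSB first     */
      int have_char;    /* set when a complete character is ready   */
      int ch;           /* the completed character                  */
  };

  void uart_rx_sample(struct rx_line *l, int level)  /* 0 = space, 1 = mark */
  {
      switch (l->state) {
      case RX_IDLE:
          if (level == 0) {                 /* start bit (space) seen     */
              l->state = RX_DATA;
              l->bitnum = 0;
              l->shift = 0;
          }
          break;
      case RX_DATA:
          l->shift |= level << l->bitnum;   /* accumulate LSB first       */
          if (++l->bitnum == 8)             /* assuming 8 data bits       */
              l->state = RX_STOP;
          break;
      case RX_STOP:
          if (level == 1) {                 /* stop bit (mark): good frame */
              l->ch = l->shift;
              l->have_char = 1;
          }
          l->state = RX_IDLE;               /* resynchronize either way   */
          break;
      }
  }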

It made more sense back in the mid-1960s, when the architecture was designed, 
to use a software-based UART rather than hardware to handle the serial data 
streams, because hardware to do the job for each terminal would have been 
prohibitively expensive; MOS/LSI UARTs didn't yet exist.  Implementing a 
UART with small- or even medium-scale DTL/TTL requires quite a few devices, and 
given that the Mark I systems could support upwards of 40 simultaneous users, 
using hardware to accumulate the bits (including start and stop bits) for each 
communication line would have amounted to a lot of hardware.  I don't know 
whether the later Mark II systems kept using bit-banging, or whether hardware 
became available to handle the serial I/O.

The communications processor in the HP Timeshared BASIC systems just managed 
the communications, and had no access to the disk/drum store.  Nor did it have 
any understanding of the data coming in from the user...it just accumulated it 
in local buffers and passed completed lines of input to the main processor to 
handle, as well as sending output generated by the main processor to the 
user terminals.   I don't know whether the HP communications multiplexors 
interrupted the communications processor on each serial data signal transition, 
or whether they had hardware to accumulate bytes to/from terminals.  

All of DEC's timeshared operating systems ran on one processor, putting both 
the I/O and the language processing/compute on the same machine.  However, DEC 
used hardware UARTs for serial I/O, removing the burden of bit-banging, which 
is likely why they were able to put everyone on one CPU.  Even the early TSS/8, 
which ran on a PDP-8 mini with a high-speed fixed-head disk, ran entirely on 
one CPU.  RSTS/E and RSX ran on single-CPU PDP-11s and could handle a 
substantial number of simultaneous users.  

Simulating the shared disk drive along with the inter-processor link of the GE 
timeshared environment, and getting the timing right, could be a challenge.  
The communications processor had an interrupt that triggered every 9.09 ms, 
one bit-time at 110 baud, to scan all of the terminal lines, read the logic 
level of each, and accumulate bytes in a local buffer for each terminal.  When 
a line of input had been accumulated (e.g., terminated with a carriage return), 
the communications processor would parse it to determine whether it was a 
command that it could process locally (like a disk catalog command) or whether 
it needed to send it off to the executive processor for handling. 
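
Sketched as code, the decision made after the carriage return might look 
something like this (the command names and helper routines are hypothetical, 
invented just to show the flow; they're not the actual Datanet-30 command set):

  #include <string.h>

  /* Hypothetical dispatch of a completed input line.  Commands the front end
     can satisfy itself (e.g. a catalog listing) are handled locally using its
     own disk port; everything else is queued for the executive processor. */
  extern void local_catalog(int line);          /* reads the catalog from disk */
  extern void queue_for_executive(int line, const char *text);

  void dispatch_line(int line, const char *text)
  {
      if (strncmp(text, "CAT", 3) == 0)     /* hypothetical catalog command */
          local_catalog(line);
      else
          queue_for_executive(line, text);  /* schedule a "spare time" send */
  }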

If the input required attention from the executive processor, the 
communications processor would schedule a "spare time" task to send a message 
to the executive processor over the inter-processor communications interface, 
letting the executive know that there was a line of input ready for it to 
receive.
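
Again purely as an illustration (the message format below is my invention, not 
from any GE documentation), that notification could be as simple as a few words 
pushed over the channel:

  /* Invented "input line ready" message sent from the communications
     processor to the executive over the inter-processor channel. */
  #define MSG_INPUT_READY 1

  struct ipc_msg {
      unsigned short type;    /* e.g. MSG_INPUT_READY                 */
      unsigned short line;    /* which terminal the input came from   */
      unsigned short length;  /* number of characters in the buffer   */
  };

  extern void ipc_send(const struct ipc_msg *m, const char *body);

  /* Body of the "spare time" task that hands the line to the executive. */
  void notify_executive(int line, const char *text, int len)
  {
      struct ipc_msg m = { MSG_INPUT_READY, (unsigned short)line,
                           (unsigned short)len };
      ipc_send(&m, text);
  }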

I know that there were some tricky aspects to getting the timing and interlocks 
right on the HP 21xx simulation when the inter-processor communication 
interface was used to connect two CPUs together.  Given the added complexity of 
the GE environment, with both shared access to the disk and an inter-processor 
communications interface, getting it right could be significantly more 
difficult, but that's just an assumption on my part.  A lot would depend on 
just how much brains were in the dual-port disk controller, as well as in the 
inter-processor interface.   

Simulating all of this may not be possible at the CPU timing level.  It may 
require that the I/O processor be emulated functionally rather than simulated 
down to the CPU level, as it may be too intensive to simulate and get all of 
the timing right, given that the simulation would need to run on a large 
variety of processors and operating systems.  AFAIK, faithfully reproducing the 
bit-banging isn't really possible in any case, given the comparatively smart 
serial interfaces of modern computers.

All in all, I truly believe that the historical significance of the GE 
Timesharing Systems is such that if any of the code survives anywhere to this 
day, it needs to be captured and archived, so that perhaps someone with much 
more programming skill and time than I have could embark on a project to bring 
the systems back to life through simulation/emulation.   

The GE 200-series and 400-series computers were historically significant in 
their own right; the machines were involved in developing the ERMA system, 
which established the way that bank drafts (checks) are encoded with magnetic 
ink characters readable by machine, allowing full automation of check 
processing at bank clearing houses.   Of course, the development of BASIC at 
Dartmouth was a huge historical point, seeded by GE providing a 225 and a 
Datanet-30 communications processor to the institution.   

I found DTSS.org/DTSS, a website that has an emulation of an early version of 
Dartmouth's DTSS timesharing system.   It appears to be broken (I tried to 
register as a new user, and it threw an ASP error), but the introduction page 
says that the Datanet-30 is emulated, while the executive processor actually 
runs a simulation of the 225 down to the instruction level (in Java), and that 
the BASIC Language Processor, as well as the kernel that runs on the 225, were 
extracted from scanned/OCR'd listings that the authors of the emulator have or 
had access to.  It is stated that they had listings of the Dartmouth ALGOL 
language processor, but had not yet implemented it.

So, at least something is out there that could potentially serve as the basis 
for simulation of the executive processor, along with some original source 
listings for the DTSS versions of BASIC and ALGOL.

This code could potentially serve as the basis for running some of the later GE 
production timesharing systems; however, that would be fully dependent on 
whether any of the GE production timesharing system code is still out there 
anywhere, as well as whether the authors of the simulation/emulation were 
willing to share.   There did not appear to be any links on the pages I looked 
through for contacting the authors. 

Maybe someone on the list knows of some GE code that may still exist somewhere, 
so that it can at least be archived somewhere like Bitsavers.





  
