On Tue, Oct 9, 2018, at 11:22 PM, Digby R.S. Tarvin wrote:
>
> On Tue, 9 Oct 2018 at 23:00, Ethan Gardener <eeke...@fastmail.fm> wrote:
>>
>> Fascinating thread, but I think you're off by a decade with the 16-bit address bus comment, unless you're not actually talking about Plan 9. The 8086 and 8088 were introduced with 20-bit addressing in 1978 and 1979 respectively. The IBM PC, launched in 1981, had its ROM at the top of that 1MByte space, so it couldn't have been constrained in that way. By the end of the 80s, all my schoolmates had 68k-powered computers from Commodore and Atari, showing that hardware with a 24-bit address space was very much affordable and ubiquitous at the time Plan 9 development started. Almost all of them had 512KB at the time. A few flashy gits had 1MB machines. :)
>
> Not sure I would agree with that. The 20-bit addressing of the 8086 and 8088 did not change their 16-bit nature. They still had a 16-bit program counter, with segmentation to provide access to a larger memory - similar in principle to the PDP11 with MMU.

That's not at all the same as being constrained to 64KB of memory. Are we communicating at cross purposes here? If we're not -- if I haven't misunderstood you -- you might want to read up on creating .exe files for MS-DOS.
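To put a number on what that segmentation reaches: the 8086 forms every physical address as segment*16 + offset, so two 16-bit values give a 20-bit result. A throwaway sketch at a Forth prompt (Forth only because it comes up again below; "phys" is a name I've just made up, nothing standard):

    hex
    : phys ( seg off -- addr )  swap 4 lshift + ;  \ physical = segment*16 + offset
    F000 FFF0 phys u.  \ prints FFFF0, the 8086 reset vector near the top of 1MB
    decimal

With segment:offset pairs like that, code and data can live anywhere in the megabyte; the 64KB limit applies to a single segment, not to the machine.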
> The first 32 bit x86 processor was the 386, which I think came out in 1985, very close to when work on Plan9 was rumored to have started. So it seemed not impossible that work might have started on an older 16 bit machine, but at Bell Labs probably a long shot.

Mmh, rumors. I read they were starting to think about Plan 9 in 1985, but I haven't read anything about it being up and running until '89 or '90. There's not much to go on.

>> I still wish I'd kept the better of the Atari STs which made their way down to me -- a "1040 STE" -- 1MB with a better keyboard and ROM than the earlier "STFM" models. I remember wanting to try to run Plan 9 on it. Let's estimate how tight it would be...
>>
>> I think it would be terrible, because I got frustrated enough trying to run a 4e CPU server with graphics on a 2GB x86. I kept running out of image memory! The trouble was that the draw device in 4th edition stores images in the same "image memory" the kernel loads programs into, and the 386 CPU kernel 'only' allocates 64MB of that. :)
>>
>> 1 bit per pixel would obviously improve matters by a factor of 16 compared to my setup, and 640x400 (Atari ST high resolution) would be another 5 times smaller than my screen. Putting these numbers together with my experience, you'd have to be careful to use images sparingly on a machine with 800KB of free RAM after the kernel is loaded. That's better than I thought, probably achievable on that Atari I had, but it couldn't be used as intensively as I used Plan 9 back then.
>>
>> How could it be used? I think it would be a good idea to push the draw device back to user space and make very sure to have it check for failing malloc! I certainly wouldn't want a terminal with a filesystem and graphics all on a single 1MByte 68000-powered computer, because a filesystem on a terminal runs in user space, and thus requires some free memory to run the programs to shut it down. Actually, Plan 9's separation of terminal from filesystem seems quite the obvious choice when I look at it like this. :)

> I went Commodore Amiga at about that time - because it at least supported some form of multi-tasking out of the box, and I spent many happy hours getting OS9 running on it. An interesting architecture, capable of some impressive graphics, but subject to quite severe limitations which made general-purpose graphics difficult. (Commodore later released SVR4 Unix for the A3000, but limited X11 to monochrome when using the inbuilt graphics).

It does sound like fun. :) I'm not surprised by the monochrome graphics limitation after my calculations. Still, X11 or any other window system which lacks a backing store may do better in low-memory environments than Plan 9's present draw device. It's a shame, because a backing store is a great simplification for programmers.

> But being 32 bit didn't give it a huge advantage over the 16 bit x86 systems for tinkering with operating systems, because the 68000 had no MMU. It was easier to get a Unix-like system going with 16 bit segmentation than with a 32 bit linear space and no hardware support for run-time relocation. (OS9 used position independent code throughout to work without an MMU, but didn't try to implement fork() semantics.)

I'm sometimes tempted to think that fork() is freakishly high-level crazy stuff. :) Still, like backing store, it's very nice to have.

> It wasn't till the 68030-based Amiga 3000 came out in 1990 that it really did everything I wanted. The 68020 with an optional MMU was equivalent, but not so common in consumer machines.
>
> Hardware progress seems to have been rather uninteresting since then. Sure, hardware is *much* faster and *much* bigger, but fundamentally the same architecture. Intel had a brief flirtation with a novel architecture with the iAPX 432 in '81, but obviously found it more profitable to make the familiar architecture bigger and faster.

I rather agree. Multi-core and hyperthreading don't bring in much from an operating system designer's perspective, and I think all the interesting things about caches are means of working around their problems. I would very much like to get my hands on a ga144 to see what sort of operating system structure would work well on 144 processors with 64 words of RAM each. :) There are 64 words of ROM per processor too, holding stock routines. Both the RAM and ROM operate at the full speed of the processor, so there are no caches to worry about.

A little rant about MMUs, sort of saying "unix and C are not without complexifying nonsense": I'm sure the MMU itself is uninteresting or even harmful to many who prefer other languages and system designs. Just look at that other discussion about the penalties of copying versus the cache penalties of page flipping. If that doesn't devolve into "heavy negativity," it'll only be because those who know don't write much, or those who write much don't want to provide actual figures or references to argue about.

What about all those languages which don't even give the programmer access to pointers in the first place? Many ran directly on hardware in the past, and some still can. Do they need MMUs?

Then there's Forth, which relies on pointers even more than C does. I haven't read *anything* about MMUs in relation to Forth, and yet Forth is in practice as much an operating system as a language. It runs directly on hardware. I'm not sure of some details yet, but it looks like many operating system features either "fall out of" the language design (to use a phrase from Ken Thompson & co.) or are trivial to implement. There were multitasking Forth systems in the 70s. No MMU. The full power of pointers *at the prompt*. Potential for stack under- and over-runs too. And yet these were working systems, and the language hasn't been consigned to the graveyard of computing history. My big project includes exploring how this is possible. :)

A likely possibility is the power to redefine words (functions) without affecting previous definitions. Pointer store and fetch can trivially be redefined to check bounds. Check that your code doesn't go out of bounds, then "empty", and load it without the bounds-checking store and fetch.
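For instance, here's roughly how that redefinition might look in an ANS-ish Forth. A sketch only -- untested, and lower, upper, set-bounds and check are names I've just made up -- but it leans on standard behaviour: a word being defined stays hidden until its closing semicolon, so the ! and @ inside the new definitions compile the old ones.

    variable lower   variable upper   \ permitted region: lower <= addr < upper
    : set-bounds ( lo hi -- )  upper ! lower ! ;  \ defined before ! is shadowed
    : check ( addr -- addr )
        dup lower @ upper @ within 0= abort" address out of bounds" ;
    : !  ( x addr -- )  check ! ;   \ bounds check, then fall through to the old !
    : @  ( addr -- x )  check @ ;   \ likewise for fetch

Set the bounds with set-bounds, load and exercise the test code; once it stops aborting, "empty" throws these definitions away and the raw ! and @ come back for the real load.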
-- 
Progress might have been all right once, but it has gone on too long -- Ogden Nash