Re: [9fans] Initial experience with Plan9 on Raspberry Pi
Thanks Richard for doing this port... It is quite a while since I last played with Plan9, and what I remember most from that time was how hard it was to assemble a compatible platform, and that I never had enough bits to try a real multi-host network. The RPi port promises to solve both problems!

I don't recall having any real problems with the user level operations (editing, compiling etc). I have forgotten a lot of the details, but I know it is just a case of reviewing the documentation, which I recall as being quite clear and well written. I think the real challenge is configuring and administering a system. I would like to get to the point of having separate file, authentication, cpu and terminal servers, but there is a lot to read and it is not always easy to know where to start.

For example, I started by trying to create a user account for myself, to see if I could get to the point of being able to connect over a network with drawterm and log in as me. I found fossilcons(8) and used the 'uname' command to successfully add a user to the filesystem, then updated cmdline.txt to restart as the new user. Then I found instructions for using 'upas/nedmail -c' to create my mailbox, and copied the hierarchy under /usr/glenda to configure a sensible environment, before stumbling on newuser(8) and the script to automate the whole process...

Anyway, I made some notes on my initial attempts at setting things up, which I thought I would share, as most of the introductory information that I have found so far has concentrated on user activity, not admin. Any corrections/suggestions or pointers to useful alternative notes on this would be much appreciated.

9Pi Initial Configuration
==

1. Create a test user and update cmdline.txt in the boot partition

    term% con /srv/fscons
    prompt: uname digbyt digbyt
    main: ^\q

Does anyone know why the fscons prompt changes from 'prompt:' to 'main:' for the second and subsequent commands?

    term% dosmnt 1 /n/d
    term% cp /n/d/cmdline.txt /n/d/cmdline.txt.orig
    term% acme /n/d/cmdline.txt
    term% ls -l /n/d

I replaced 'readparts=1 nobootprompt=local user=glenda' with 'readparts=1 nobootprompt=local ipconfig=' to allow user selection at boot time and DHCP network initialization.

Next I tested the changes by rebooting:

    term% fshalt
    syncing.../srv/fscons...
    main: halting.../srv/fscons...fsys all sync
    main sync: wrote 0 blocks
    CTL-ALT-DEL

The boot screen produced the following information:

    Plan 9 from Bell Labs
    board rev: 0xa22082 firmware rev: 1488468813
    cpu0: 1200MHz ARM Cortex-A53 r0p4 fp: 32 registers, simd
    fp: arm arch VFPv3+ with null subarch; rev 4
    #l0: usb: 100Mbps port 0x0 irq -1: sdhost external clock 250 MHz
    #u/usb/ep1.0: dwcotg: port 0x0 irq 9
    992M memory: 200M kernel data, 792MB user, 376M swap
    cpu1: 1200MHz ARM Cortex-A53 r0p4
    cpu2: 1200MHz ARM Cortex-A53 r0p4
    cpu3: 1200MHz ARM Cortex-A53 r0p4
    usb/hub.. usb/ether...
    etherusb smsc: b827eb88b97e7
    user[none]: usb/kb... usb/kb...

Note that the 'usb/kb' messages made the process a bit confusing - I initially thought that the user prompt had been skipped after defaulting to user 'none', and that the boot process had got stuck during keyboard initialization... However I discovered that it is actually waiting at the prompt, but some unfortunately timed async messages disguised the fact... so I entered my new user name and pressed return...

    time...
    fossil(#S/sdM0/fossil)...version...
    init: starting /bin/rc
    ipconfig...
    lib/profile: rc: .: can't open: '/bin/lib' file does not exist
    init: rc exit status: rc 36: error
    init: starting /bin/rc

Use the system shell script for new account initialization:

    % /sys/lib/newuser

and the understated plan9 grey then appeared.

What I haven't yet managed is the step to give the new user a password. A simple minded

    term% auth/changeuser digbyt

yields

    Password:
    Confirm password:
    assign Inferno/POP secret? (y/n) n
    Expiration date (MMDD or never)[return = never]:
    changeuser: can't create user digbyt: '/mnt/keys/digbyt' permission denied
    term%

So I am wondering if there is something I need to start to make my Pi an authentication server? I am also not sure if I should be assigning an 'Inferno/POP secret', and if so what I should enter.

Any suggestions for other basic configuration requirements? I know the network needs to be set up, but I am not sure if that should logically come before or after sorting out the authentication setup, and at what point I should think of adding a second Pi to take over some of the specialised functions, and which to start with.

Regards,
DigbyT
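On the changeuser failure above, a minimal sketch of what appears to be the missing piece, based on keyfs(4) and auth(8) rather than on the Pi notes themselves, so treat the details as an assumption: changeuser writes through /mnt/keys, which only exists once keyfs is running as the host owner (the user named in cmdline.txt).

    term% auth/keyfs              # by default serves /adm/keys at /mnt/keys; run as host owner
    term% auth/changeuser digbyt  # should now be able to create /mnt/keys/digbyt

For network logins with drawterm the machine additionally has to answer as an auth server - an authid/authdom written to nvram with auth/wrkey and the authentication listeners started from cpurc - which is the territory of the standalone CPU/auth server documentation rather than these notes.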
Re: [9fans] Is fossil/venti file system a good choice for SSD?
My experience of running normal (read mostly) Linux filesystems on solid state media is that SSD is more robust but far less reliable than rotating media.

MTBF for rotating media for me has been around 10 years. MTBF for SSD has been about 2. And the SSD always seems to fail catastrophically - appearing to work fine one day, then after an inexplicable crash, half the media is unreadable. I assume this is something to do with the wear leveling, which successfully gets most of the blocks to wear out at the same time with no warning. If I reformat and reload the SSD to correct all the mysterious corruptions, it lasts a few weeks, then does the same thing again.

I have had servers running off solid state media continuously since about 2003. Using PATA to CF adapters initially, currently uSD in raspberry pi etc, and 2.5" SATA SSD drives. I used to use mostly SCSI rotating media, so my reliability may have been better than cheaper PC drives. I had quite a few (probably 90%) of the 1GB Wren 7 drives retired after 10-15 years of running 24/7 under very heavy load (in an on-air broadcast system) with no signs of trouble.

The 2.5" SATA form factor SSD's seem to last better - perhaps indicating that the advertised capacity is a smaller proportion of the overall capacity available to level the wear over. I don't have a large number of servers, so not really a big enough sample to draw definite conclusions from, but enough to make me wary of relying too much on SSD's as a panacea.

My current server configuration is a uSD system drive, with a much larger rotating disk that spins down when not in use (generally only gets used when a user is logged in), and an up to date backup of everything on the uSD is kept on the rotating media. I am not keen on having SSD as a swap device, unless you have multiple SSD's, in which case you just treat the swap/tmp media as disposable. If I am short of ram (like on a raspberry pi), I would prefer to have an external ram drive for swapping.

I have had the rotating media fail once in this configuration - quite recently. 1TB 5.25", so quite a few years old. It went through a couple of months of taking too long to spin up when accessed after a spin down, requiring a manual unmount to get the system to recognize it again. Then eventually it wouldn't spin up at all. The interesting thing (for me) was that the SMART data from the drive gave it an all clear right to the end. But unlike the SSDs, there was plenty of behavioural warning to remind me to have the backups up to date and a spare at the ready...

So bottom line, in my experience, SSDs are great for read access time, low power, low noise, and robustness. But they are not good for reliability, capacity, or usage which is not read-mostly. (And RAID usage is no substitute for backups - but that is another story.)

DigbyT.

On 3 February 2018 at 16:53, hiro <23h...@gmail.com> wrote:
> not so sure about mtbf. but it's too early to tell.
>
Re: [9fans] Is fossil/venti file system a good choice for SSD?
That's why I described my use case - to make the MTBF figures meaningful. As I said, I have my system configured so that most heavy write accesses go to rotating media. I typically try to have my system partitions mounted read only, except var and tmp. I am currently using 32GB uSD devices for raspberry pi based servers, with about 100GB per year in writes (as reported by iostat), plus perhaps an initial 100GB in writes that occur during installation and configuration.

The last failure I had was about 3 months ago, on a uSD card that had been in use for just under 2 years. I can only speak of the experience that I have had, but I don't think my usage is sufficiently atypical to not count as 'in practice' for use as computer storage (ie I am not talking about music players, cameras, phones etc).

The symptom tends to be thousands of widely dispersed bad sectors appearing almost simultaneously, in this case on a journaling filesystem, so previously valid information goes bad without any access being made to it. Perhaps this is wear leveling going wrong when it is trying to move less frequently used data to parts of the flash which are very worn. Whatever it is, it seems subjectively to be a much more rapid decline than rotating media when it starts going wrong, and whereas rotating media usually returns sporadic read errors when failing, I have found SSD's often silently return the wrong data when they go bad - which I find particularly worrying (you wouldn't want to be using such an SSD in a RAID, for example). Consequently I have started using filesystems which checksum data as well as metadata when using flash based storage.

My usage of 2.5" SATA SSD's has not really been over a long enough period to get a good feel for how it compares with removable flash media - I would hope it is more robust. But when used as the only storage in a laptop environment, I would expect much higher levels of write access than on a specially configured server. I don't pretend to know what 'typical things people do' would be, but it isn't hard to imagine scenarios that could result in several GB's a day for a non-technical user - downloading movies to a laptop to watch while commuting, for example... I certainly wouldn't feel comfortable regularly rebuilding the Linux kernel on an SSD based machine.

So from my experience, I would still tend to go along with Erik's advice (as relayed by Steve), or perhaps be even more fastidious about backups when using flash...

On 3 February 2018 at 20:10, Bakul Shah wrote:
> On Sat, 03 Feb 2018 18:49:50 + "Digby R.S. Tarvin" wrote:
> Digby R.S. Tarvin writes:
> >
> > My experience of running normal (read mostly) Linux filesystems on solid
> > state media is that SSD is more robust but far less reliable than rotating
> > media.
> >
> > MTBF for rotating media for me has been around 10 years. MTBF for SSD has
> > been about 2. And the SSD always seems to fail catastrophically - appearing
> > to work fine one day, then after an inexplicable crash, half the media is
> > unreadable. I assume this is something to do with the wear leveling, which
> > successfully gets most of the blocks to wear out at the same time with no
> > warning. If I reformat and reload the SSD to correct all the mysterious
> > corruptions, it lasts a few weeks, then does the same thing again.
>
> MTTF doesn't make much sense for SSDs. A 1TB SSD I bought a
> couple years ago has a rating of 300 TB written, an MTBF of 2M
> hours and a 10 year warranty. It can do over 500MB/s of
> sequential writes. If I average 9.5MB/s writes, it will last
> a year. If I continuously write 100MB/s, it will last under 35
> days. In contrast the life of an HDD depends on how long it
> has been spinning, seeks, temperature and load/unloads. A disk
> with a 5 year warranty will likely last 5 years even if you write
> 100MB/s continuously.
>
> And consumer SDHC cards such as the ones used in cameras and
> Raspi are much much worse.
>
> In practice an SSD will last much longer than a HDD since
> average write rates are not high for the typical things people
> do.
>
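For what it is worth, the quoted endurance arithmetic checks out; the 300 TB written rating is Bakul's figure, and the rest is just unit conversion (shown here as a hoc(1) session):

    term% hoc
    300e12 / 9.5e6 / 86400
    365.49708
    300e12 / 100e6 / 86400
    34.722222

i.e. roughly a year of continuous 9.5MB/s writes, or about 35 days at a sustained 100MB/s - which is why the useful lifetime of an SSD is dominated by write rate rather than by hours powered on.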
Re: [9fans] Is fossil/venti file system a good choice for SSD?
Static web pages, remote login (so that I can power/depower other hardware), and remote file distribution (via scp), mostly.

The main requirement is very low standby power consumption, so that it can survive on batteries which are recharged using solar panels. Power consumption was the main reason for switching from laptops (~12W) to Raspberry Pis (~1.2W).

The other advantage to the raspberry pi is that it is a cheap commodity item - if I have one misbehave, I can just swap it out and throw it away. However, other than failing SSD's, the raspberry Pis have proved very reliable.

On 4 February 2018 at 09:52, hiro <23h...@gmail.com> wrote:
> > raspberry pi based servers
>
> what are you serving?
>
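To put the laptop/Pi difference in battery terms (hypothetical figures - a nominal 100Wh battery is an assumption for illustration, not the actual installation):

    term% hoc
    100/12
    8.3333333
    100/1.2
    83.333333

so a charge that keeps a ~12W laptop up for about 8 hours keeps a ~1.2W Pi up for the best part of four days, before counting anything the solar panels put back.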
Re: [9fans] Is fossil/venti file system a good choice for SSD?
I am not familiar with the kirkwoods that you mentioned. Just to be clear, the USB drive I was describing is rotating media in an external enclosure, not a memory stick. Generally self powered, as powering a portable hard drive from USB with a RPi is asking for trouble.

I have stopped buying flash memory devices from eBay and other vendors that are not well known with a reputation to protect - far too many counterfeits with less storage than the packaging claims on the market, and if you are unfortunate enough to try using them with the FAT or exFAT filesystem they are supplied with, data will eventually wrap around and destroy itself, probably after sufficient time that the seller (who may well be ignorant of why his/her stock was so cheap) is no longer around to complain to. A minor annoyance for me, but there must be a lot of unhappy people losing irreplaceable photographs.

Fortunately for me, attempting to reformat with a Linux filesystem tends to fail on such devices, so I find out straight away and get to send them back and be refunded. I had to write a dedicated test program to demonstrate the subterfuge on the original filesystem - to prove that the reformatting was revealing an issue, not the source of it (as was often claimed).

I also don't buy cheap USB sticks. I like these https://www.kanguru.com/storage-accessories/kanguru-ss3.shtml because they have a real hardware write protect capability. Indispensable if you are going to be inserting them into other people's machines, but surprisingly uncommon. At over $200.00 for 256GB, they are a bit upmarket, but I haven't had any problem with them yet.

Anyway, so long as you are aware of the risks and limitations, flash memory devices are a useful technology, but not a complete replacement for rotating magnetic storage.

On 4 February 2018 at 21:46, hiro <23h...@gmail.com> wrote:
> Hey, thanks for explaining. this usage is surprisingly valid. I have
> some much much older kirkwoods for the same scenario. The benefit is:
> gigabit ethernet, higher stability, case included, power supply
> included (and no power problems as on rpi), lower price.
> I boot them all from USB HDDs, but I see how flash would save more
> power. Carry on! :)
>
> The main disagreement in this thread is calling all kinds of different
> flash storage "SSD". common usage reserves this name for the sata or
> more recent nvme disks that actually are much more stable, in my
> understanding due to better controllers and their better wear leveling
> algorithms.
>
> With sd cards and usb flash drives you are lucky if your 128GB stick
> is not really 1GB flash with %1GB "wear leveling algorithm" where
> after 1GB you rewrite your already saved data :D
>
> It's a low-end market with shitty margins, low quality controllers,
> and in general too many counterfeits, even from good shops and big
> retailers. so you can't even depend on the company/brand/product name.
>
> Privately I never had surprising problems with HDDs, I don't manage to
> fill enough to notice the small risk in practice.
>
> All my old HDDs still work. They are only unused cause they got too
> small to be worth spinning any more (waste of power).
>
> I project my SSDs will not fail before i get 10/40gbit connection to
> my NAS. Till then my write wear will be limited by my low bandwidth
> and high latency practical use cases.
>
> On 2/4/18, Digby R.S. Tarvin wrote:
> > static web pages, remote login (so that I can power/depower other hardware)
> > and remote file distribution (via scp) mostly.
> >
> > The main requirement is very low standby power consumption so that it can
> > survive on batteries which are recharged using solar panels.
> >
> > Power consumption was the main reason for switching from laptops (~12W) to
> > Raspberry Pis (1.2W)..
> >
> > The other advantage to the raspberry pi is that it is a cheap commodity
> > item - if I have one misbehave, I can just swap it out and throw it away.
> > However, other than failing SSD's, the raspberry Pis have proved very
> > reliable.
> >
> > On 4 February 2018 at 09:52, hiro <23h...@gmail.com> wrote:
> >
> >> > raspberry pi based servers
> >>
> >> what are you serving?
> >>
> >
Re: [9fans] (no subject)
I'd certainly be happy to give it a good home if nobody else has claimed it. digby...@gmail.com if you want to discuss logistics off the list...

Digby.

On 2 April 2018 at 16:27, Steve Simon wrote:
> Hi,
>
> I am in the Uk, and moving house.
>
> I have an HP T5325 Thin client which uses the
> same Marvel chipset as the guruplug.
> http://www.parkytowers.me.uk/thin/hp/t5325/index.shtml
>
> Someone got as far as getting one to net boot plan9 but
> didn't (If I remember correctly) get the graphics working.
> http://thread.gmane.org/gmane.os.plan9.general/72588
>
> There is also an SATA interface which can be accessed with
> a little soldering.
> https://habrahabr.ru/post/260631/
>
> Free to anyone who wants it - I may ask for a postage
> contribution if you are far from me. Be quick or it goes
> into the skip.
>
> -Steve
>
Re: [9fans] Plan 9's style(6) manual page
I disagree with your assertion that inserting a space after 'if', 'for', 'while' etc is universal outside of Plan 9. I have never adopted that convention, and it looks ugly to me. That is probably because I learned C by reading the Unix kernel source code (6th Edition), which displays a preference for brevity. If you go back to Dennis Ritchie's original C reference manual from 1974 (https://www.bell-labs.com/usr/dmr/www/cman74.pdf), you will find he was inconsistent - sometimes adding a space, sometimes not. Perhaps that is why there is no universally accepted style standard.

If I had to justify my preference, I would probably say that if the round brackets were optional (as they are, for example, in Pascal), then I would put in the space, because they would be part of the expression. But I suspect my preference is mainly just habit and now my brain is wired for it. Ritchie does omit the space before brackets in a function declaration/call, and to me that is more consistent with omitting the space in other language elements which require a parenthesis after a keyword or identifier.

I am not sure that there is always a (logical) reason. If the language does not mandate a style, then whatever you choose is valid. Most of the time it boils down to what you first encountered when learning the language. Once you get used to one style, source code becomes much harder to read if it follows a different style, so you need a good reason to change. It makes sense to be consistent within a project, so unfortunately it is sometimes necessary to put up with a different style. Generally the project manager or company will arbitrate on which wins. I am normally in a position where I am the only one working on particular source files, so I convert them to my preferred style, and then reformat them according to the dictates of the project when I am finished.

Sometimes there are arguments for a particular style change, like putting lvalues on the right of a binary comparison so that accidental use of an assignment operator would generate a compiler error, but to me the result is so ugly, and my days of making that sort of mistake so far in the past, that I definitely prefer the standard idioms.

I have changed from putting curly brackets on the same line as the if/while/for etc, because I find the structure of the program easier to read if the brackets line up, with the opening bracket over the corresponding closing bracket. It seems more consistent with curly brackets around function bodies, it is easier to find forgotten parentheses, and I think the rule about spaces before braces becomes irrelevant. The only down side seems to be taking up an extra line.

I am not so keen on the braces around single line blocks - I suppose two wasted lines must be exceeding my tolerance. I sometimes make an exception for nested if/while/for etc statements where, for example, it may not otherwise be clear which statement an 'else' is associated with.

The Plan 9 developers obviously decided it would be better to establish a standard early on, and I suppose unsurprisingly the majority followed the traditions of the Unix source. I am not sure about the rule on automatic variable initialization. That is the only one in your list which puzzles me.

DigbyT

On 7 April 2018 at 17:00, <8hal...@airmail.cc> wrote:
> Just an amateur C programmer looking for answers. My main inspirations for
> code style is K&R 2nd edition and I'm curious about the instructions in Plan 9's
> style(6) manual page (for reference, http://man.cat-v.org/plan_9/6/style). I've
> tried to think about the motivations, but not everything is as clear as it seems.
>
> Going through style(6):
>
>> no white space before opening braces.
>> no white space after the keywords `if', `for', `while', etc.
>
> This is unique to Plan 9, it seems. I can't come up with a reason -- both BSD
> and Linux style use whitespace, and K&R does too, while Plan 9 doesn't. Why?
>
>> no braces around single-line blocks (e.g., `if', `for', and `while' bodies).
>
> Apologies, but I'll have to Go and do it anyway :)
>
>> automatic variables (local variables inside a function) are never
>> initialized at declaration.
>
> Why not? In order to reduce visual clutter? It seems like this should be handled
> case-by-case: in some situations this just wastes lines:
>
> int foo;
> foo = 12;
> func("blah", &foo);
>
>> follow the standard idioms: use `x < 0' not `0 > x', etc.
>
> I'm guessing this is for consistency and more common coincidence with the flow
> of spoken language.
>
>> don't write `!strcmp' (nor `!memcmp', etc.) nor `if(memcmp(a, b, c))'; always
>> explicitly compare the result of string or memory comparison with zero using a
>> relational operator.
>
> Was that a common programmer error? cmp functions should return 0 if the
> arguments are identical. Smells like disaster in baking!
>
>> and this is not an exhaustive list
>
> Is there anything missing
Re: [9fans] what heavy negativity!
Ooh, there's an idea for a new project... I also have a soft spot for the old PDP11 architecture and aesthetics, and like the idea of an emulator sitting behind an 11/70 front panel, but I haven't been able to decide what software to run on it...

Unix ran quite nicely on an 11/70 back in the late 70s, but I doubt you would squeeze much more than the boot loader of a modern bloated system onto one. And a Unix image from that era would probably be a little limited. (I don't really have enough history with RT11/RSTS to want to use them.)

So the question is... is plan9 still lean and mean enough to fit onto a machine with a 64K address space? Doing a port would certainly provide plenty of opportunity to tinker with the lights and switches on the front panel, and if the port was initially limited to being a CPU server, there would be no need to worry about displays and mass storage - just the compiler back end and low level kernel support. Has anyone already looked at that?

I expect it would be a fun, educational and nostalgic exercise, but of course not of much practical use...

Regards,
DigbyT

On Sat, 6 Oct 2018 at 00:23, Brian L. Stuart wrote:
> Fri, Oct 5, 2018 at 12:11 AM Mayuresh Kathe wrote:
> > man, i experienced such heavy negativity towards my efforts to build ...
> >
> > the idea was to have a 64-bit linux kernel with the advantages of
> > plan9port (small and elegantly designed+developed tools).
>
> Mayuresh,
> To echo what others have said, don't let the negativity
> itself affect your work. Consider only the technical points
> that have been raised. To the extent that you evaluate
> them and consider them relevant to your objectives, factor
> them into your work.
>
> It really doesn't matter if anyone else ever cares about
> or uses your work. If you learn from it, get intellectual
> satisfaction from it, and it's useful to you, then it's worth
> doing. If others can benefit too, great, but lack of interest
> on the part of others is not a good reason for lack of
> initiative on your part. As far as I can tell, I'm the only
> one using a file system I developed. Sure, in some ways
> I would like if everyone thought it was as great as I do,
> but just because they don't doesn't stop me from benefitting
> from it.
>
> As for the specifics of your project, I personally don't think
> I'd be all that interested in the results. As much as I like
> the elegance and simplicity of the implementation of the
> Plan 9 user-land, much of the beauty of the system comes
> from the simplicity and elegance of the kernel. So if I
> were using the Plan 9 user-land on top of the Linux kernel,
> I wouldn't feel the same sense of beauty, intellectual satisfaction,
> and connection to the original developers as I do running
> the same user-land on the Plan 9 kernel. But just because
> I wouldn't be interested is no reason to stop your research.
> Just be sure to study the similar efforts that have come
> before and that have been mentioned here. What did
> they accomplish? Did they go wrong somewhere? Can
> you get to that goal avoiding those mistakes? If nothing
> else, the whole experience will almost certainly give you
> a greater appreciation for the Plan 9 kernel.
>
> Just a couple of thoughts from an old-timer who misses
> the days of working on PDP-11s.
>
> BLS
>
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
A native Inferno port would certainly be a lot easier, but I think you might be a bit pessimistic about what can fit into a 64K address space machine. The 11/70 certainly managed to run a very respectable V7 Unix supporting 20-30 simultaneous active users in its day, and I wouldn't have thought plan 9, arriving about a decade later, would have been hugely bigger than V7 Unix. I recall a demo of Plan9 (I think it also included the source) being given by Rob Pike at UNSW which he carried on a 1.44Mb floppy disc. By its open source release in 2002 the distribution was 65MB.

The smallest Linux system I have used recently had 256K RAM and 512K flash. A rather stripped down busybox based system, but it did include a full TCP/IP stack and a web server. That's comparable to a PDP11 except for the limitation on the largest individual process. Bear in mind that 16 bit executables are smaller, and whilst the 11/70 had a 64Kb address space, physical memory could be somewhat larger, and an individual process could have 128K of memory if using separate instruction and data space.

I am used to thinking of Plan9 as very compact, but I haven't really looked to see if it has grown much since the 80s, and perhaps it is only next to the astronomical expansion of other systems that it still looks small. It would be an interesting thing to try, if only to get a better feel for how compact Plan9 actually is...

DigbyT

On Mon, 8 Oct 2018 at 14:38, Lucio De Re wrote:
> On 10/8/18, Digby R.S. Tarvin wrote:
> >
> > So the question is... is plan9 still lean and mean enough to fit onto a
> > machine with a 64K address space? Doing a port would certainly provide
> > plenty of opportunity to tinker with the lights and switches on the front
> > panel, and if the port was initially limited to being a CPU server,
> > there would be no need to worry about displays and mass storage - just
> > the compiler back end and low level kernel support.
> >
> You really must be thinking of Inferno, native, running in a host with
> 1MiB of memory. 64KiB isn't enough for anything other than maybe CPM.
> Even MPM won't cut it, I don't think.
>
> Lucio.
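One cheap way to put a number on it before attempting any porting work is to look at the kernels already shipped with the current distributions (a sketch only - the file names below are the stock Pi and 386 kernel images, and whether both trees are present depends on the installation):

    term% ls -l /arm/9pi    # Richard Miller's Raspberry Pi kernel
    term% ls -l /386/9pcf   # a standard 386 kernel, if installed

Even allowing for a cut-down CPU-only configuration being much smaller than either of those, comparing whatever they report against a 64KB (or 128KB split I/D) limit gives a quick reality check on how far there would be to go.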
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
I quite agree - the PDP 11/70 was quite a high end 16 bit machine, but it was the machine that I was talking about and the one I would most like to revisit (although I wouldn't turn down an 11/40 if somebody offered me a working one). I don't think I would contemplate putting Plan9 on a machine with no MMU or a 64K physical memory limit.

My first reasonable multi-user, multi-tasking computer system (back in the early 80s) was a home made 6809 machine with a 6829 MMU and eventually 1MB of ram, running OS-9/6809. It initially ran with 64K for programs, and the rest of memory was a big ram disk - because what else could you do with such a ridiculous amount of memory. It did pretty well at providing a personal Unix like environment, although it couldn't reproduce the fork() semantics and there was no memory protection, and the memory constraints meant always running the C compiler one pass at a time.

But we eventually ported 'Level 2' OS-9, which could use a mapping ram/MMU, and with that I had a quite robust multi-user system, with up to 64K available per process, and 64K available for the kernel. I was able to get most Unix programs running on it (except for a few with big tables that compiled to larger than 64K) and no longer had to worry about exiting the editor before doing a compile. Most of the core system utilities were written in assembly language - so the equivalent of 'ls', for example, required no more than a 256 byte memory allocation. And all executables were loaded read-only and re-entrant (shared text), which helped. The only real Achilles heel was that the 6809 had no illegal instruction trapping, so executing data could occasionally result in an unrecoverable freeze.

I never liked the 68K version of OS-9 quite as much. Because of the larger address space it used the MMU for protection only, with no address translation - so the kernel was mapped into the same address space as the user programs but just not accessible in user mode. It just didn't seem as elegant.

Anyway, that's why I don't see 64K per process as necessarily being inadequate for a lean operating system, although it would be easy enough to write extravagant code that would not run in 64K, or a design that relied on a large virtual address space - especially if you were used to relying on virtual memory. I just don't know how small Plan9 can go, and unless someone has already explored those limits, I suppose rather than speculating I'll just have to plan on a little experimentation when I get a bit of spare time.

Regards,
Digby

On Mon, 8 Oct 2018 at 19:13, Nils M Holm wrote:
> On 2018-10-08T15:29:02+1100, Digby R.S. Tarvin wrote:
> > A native Inferno port would certainly be a lot easier, but I think you
> > might be a bit pessimistic about what can fit into a 64K address space
> > machine. The 11/70 certainly managed to run a very respectable V7 Unix
> > supporting 20-30 simultaneous active users in its day, [...]
>
> The 11/70 was a completely different beast than, say, an 11/03.
> The 70 had a backplane with 22 address lines, a MMU, and up to
> 4M bytes of memory. So while its processes were limited to
> 64K+64K bytes, I would not consider it to be a typical 16-bit
> machine.
>
> --
> Nils M Holm < n m h @ t 3 x . o r g > www.t3x.org
>
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
Does anyone know what platform Plan9 was initially implemented on? My guess is that there is no reason in principle that it could not fit comfortably into the constraints of a PDP11/70, but if the initial implementation was done targeting a machine with significantly more resources, it would be easy to make design decisions that would be entirely incompatible. Certainly Richard Miller's comment suggests that might be the case. If it is heavily dependent on VM, then the necessary rewrite is likely to be substantial.

I'm not sure how the kernel design has changed since the first release. The earliest version I have is the release I bought through Harcourt Brace back in 1995. But I won't be home till December, so it will be a while before I can look at it, and probably won't have time to experiment before then in any case.

For what it is worth, I don't think the embarrassment of riches presented to programmers by current hardware has tended to produce more elegant designs. If more resources resulted in elegance, Windows would be a thing of beauty. Perhaps Plan9 is an exception. It certainly excels in elegance and design simplicity, even if it does turn out to be more resource hungry than I imagined. I will admit that the evils of excessively constrained environments are generally worse in terms of coding elegance - especially when it leads to overlays and self modifying code.

PDP11's don't support virtual memory, so there doesn't seem to be any elegant way to overcome that fundamental limitation on the size of a single executable. So I don't think it would be worth a substantial rewrite to get it going. It is a shame that there don't seem to have been any more powerful machines with a comparably elegant architecture and attractive front panel :)

It is sounding like Inferno is going to be the more practical option. I believe gcc can still generate PDP-11 code, so it shouldn't be too hard to try.

DigbyT

On Tue, 9 Oct 2018 at 04:53, hiro <23h...@gmail.com> wrote:
> i should have said could, not can :)
>
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
On Tue, 9 Oct 2018 at 10:07, Dan Cross wrote:
>> My guess is that there is no reason in principle that it could not fit
>> comfortably into the constraints of a PDP11/70, but if the initial
>> implementation was done targeting a machine with significantly more
>> resources, it would be easy to make design decisions that would be entirely
>> incompatible.
>
> I find this unlikely.
>
> The PDP-11, while a respectable machine for its day, required too many
> tradeoffs to make it attractive as a development platform for a
> next-generation research operating system in the late 1980s: be it
> electrical power consumption vs computational oomph or dollar cost vs
> available memory, the -11 had fallen from the attractive position it held a
> decade prior. Perhaps slimming a plan9 kernel down sufficiently so that it
> COULD run on a PDP-11 was possible in the early days, but I can't see any
> reason one would have WANTED to do so: particularly as part of the impetus
> behind plan9 was to exploit advances in contemporary hardware: lower-cost,
> higher-performance, RISC-based multiprocessors; ubiquitous networking;
> common high-resolution bitmapped graphical displays; even magneto-optical
> storage (one bet that didn't pan out); etc.

If you mean that you find it unlikely that the development would have been done on a PDP11, then I agree, for the reasons you mentioned. Not sure that I can see why it wouldn't have been feasible, but I can see why it wouldn't have been desirable. I thought there might have been a chance of an early attempt to target the x86 because of its ubiquity and low cost - which could be useful for a networked operating system. And those were 16 bit address constrained in the early days. But it's probably not an architecture you would choose to work with if you had a choice. 68K is what I would have gone for.

>> Certainly Richard Miller's comment suggests that might be the case. If it
>> is heavily dependent on VM, then the necessary rewrite is likely to be
>> substantial.
>
> As a demonstration project, getting a slimmed-down plan9 kernel to boot on
> a PDP-11/70-class machine would be a nifty hack, but it would be quite a
> tour de force and most likely the result would not be generally useful. I
> think that, as has been suggested, the conceptual simplicity of plan9
> paradoxically means that resource utilization is higher than it might
> otherwise be on either a more elaborate OR more constrained system (such as
> one targeting e.g. the PDP-11). When you can afford not to care about a few
> bytes here or a couple of cycles there and you're not obsessed with
> scraping out the very last drop of performance, you can employ a simpler
> (some might say 'naive') algorithm or data structure.
>
>> I'm not sure how the kernel design has changed since the first release.
>> The earliest version I have is the release I bought through Harcourt Brace
>> back in 1995. But I won't be home till December so it will be a while
>> before I can look at it, and probably won't have time to experiment before
>> then in any case.
>
> The kernel evolved substantially over its life; something like doubling in
> size. I remember vaguely having a discussion with Sape where he said he
> felt it had grown bloated. That was probably close to 20 years ago now.

I guess kernel size wasn't a priority. I did a bit of searching back through the old papers, and whilst there is a lot of talk about lines of code and numbers of system calls, I didn't find any reference to kernel size or memory requirements.

>> For what it is worth, I don't think the embarrassment of riches presented
>> to programmers by current hardware has tended to produce more elegant
>> designs. If more resources resulted in elegance, Windows would be a thing
>> of beauty. Perhaps Plan9 is an exception. It certainly excels in elegance
>> and design simplicity, even if it does turn out to be more resource hungry
>> than I imagined. I will admit that the evils of excessively constrained
>> environments are generally worse in terms of coding elegance - especially
>> when it leads to overlays and self modifying code.
>
> plan9 is breathtakingly elegant, but this is in no small part because as a
> research system it had the luxury of simply ignoring many thorny problems
> that would have marred that beauty but that the developers chose not to
> tackle. Some of these problems have non-trivial domain complexity and,
> while "modern" systems are far too complex by far, that doesn't mean that
> all solutions can be recast as elegantly simple pearls in the plan9 style.
> Whether we like those problems or not, they exist and real-world solutions
> have to at least attempt to deal with them (I'm looking at you, web x.0 for
> x >= 2...but curse you you aren't alone).
>
>> PDP11's don't support virtual memory, so there doesn't seem any elegant
>> way to overcome that fundamental limitation on the size of a single executable.
>
> No, they do: there is
Re: [9fans] PDP11
Yes, that is exactly what prompted the thinking about Plan9 on a PDP11/70. I have already organized a PiDP11 kit to be shipped to me when I get home in December - so that I can experiment without running the risk of blowing up my old original 11/70 front panel.

But a (simulated) 11/70 with a nice front panel isn't so interesting unless I have some interesting PDP11 software to run on it. A small Plan9/Inferno implementation could be integrated into a larger network and allow the old hardware to integrate seamlessly with other things. Such as exporting a device that lets other hosts write to the lights and read from the switches, for example.

Regards,
DigbyT

On Tue, 9 Oct 2018 at 14:23, David Arnold wrote:
> On 9 Oct 2018, at 14:08, Digby R.S. Tarvin wrote:
>
> <…>
>
>>> So I don't think it would be worth a substantial rewrite to get it
>>> going. It is a shame that there don't seem to have been any more powerful
>>> machines with a comparably elegant architecture and attractive front panel
>>> :)
>>
>> An attractive front panel for nearly any machine is just a soldering
>> iron, LEDs and some logic chips away. As far as elegant architectures, some
>> are very nice: MIPS is kind of retro but elegant, RISC-V is nice, 680x0
>> machines can be had at reasonable prices, and POWER is kind of cool. I know
>> I shouldn't, but I have a soft spot for ARM.
>
> I have thought about it, but there are a couple of problems (in addition
> to my lack of artistic talent when it comes to building physically attractive
> enclosures). One is the sheer number of LEDs required to display all of
> the address and data lines in a modern architecture. Mainly an issue if I
> want to use the old PDP11/70 front panel that I had saved for the purpose,
> I suppose. The other problem is getting access to all of the machine
> state that was displayable on a mini computer console. Virtual addresses,
> User/Kernel mode, register contents etc are all hard to get at. I have
> toyed with using JTAG etc, but there always seems to be something that I
> can't get to. So it is hard to do more than resort to a software controlled
> front panel. I used to have a little box of LEDs and switches that I
> plugged into the parallel port on PCs, and had my BSDi kernel modified to
> update it as part of the clock interrupt. But now the parallel ports are
> becoming rare and you can't update LEDs connected via USB in a single
> instruction... :-/
>
> Probably not quite what you’re after, but the PiDP8 and PiDP11 kits will
> get you an (arguably) attractive front panel without requiring artistic
> talent.
>
> http://obsolescence.wixsite.com/obsolescence/pidp-11
>
> I’ve not looked into how the front-panel is driven (from SIMH, I guess?),
> but perhaps it could be suitably massaged?
>
> d
>
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
On Tue, 9 Oct 2018 at 23:00, Ethan Gardener wrote: > > Fascinating thread, but I think you're off by a decade with the 16-bit > address bus comment, unless you're not actually talking about Plan 9. The > 8086 and 8088 were introduced with 20-bit addressing in 1978 and 1979 > respectively. The IBM PC, launched in 1982, had its ROM at the top of that > 1MByte space, so it couldn't have been constrained in that way. By the end > of the 80s, all my schoolmates had 68k-powered computers from Commodore and > Atari, showing hardware with a 24-bit address space was very much > affordable and ubiquitous at the time Plan 9 development started. Almost > all of them had 512KB at the time. A few flashy gits had 1MB machines. :) > Not sure I would agree with that. The 20 bit addressing of the 8086 and 8088 did not change their 16 bit nature. They were still 16 bit program counter, with segmentation to provide access to a larger memory - similar in principle to the PDP11 with MMU. The first 32 bit x86 processor was the 386, which I think came out in 1985, very close to when work on Plan9 was rumored to have started. So it seemed not impossible that work might have started on an older 16 bit machine, but at Bell Labs probably a long shot. > I still wish I'd kept the better of the Atari STs which made their way > down to me -- a "1040 STE" -- 1MB with a better keyboard and ROM than the > earlier "STFM" models. I remember wanting to try to run Plan 9 on it. > Let's estimate how tight it would be... > > I think it would be terrible, because I got frustrated enough trying to > run a 4e CPU server with graphics on a 2GB x86. I kept running out of > image memory! The trouble was the draw device in 4th edition stores images > in the same "image memory" the kernel loads programs into, and the 386 CPU > kernel 'only' allocates 64MB of that. :) > > 1 bit per pixel would obviously improve matters by a factor of 16 compared > to my setup, and 640x400 (Atari ST high resolution) would be another 5 > times smaller than my screen. Putting these numbers together with my > experience, you'd have to be careful to use images sparingly on a machine > with 800KB free RAM after the kernel is loaded. That's better than I > thought, probably achievable on that Atari I had, but it couldn't be used > as intensively as I used Plan 9 back then. > > How could it be used? I think it would be a good idea to push the draw > device back to user space and make very sure to have it check for failing > malloc! I certainly wouldn't want a terminal with a filesystem and > graphics all on a single 1MByte 64000-powered computer, because a > filesystem on a terminal runs in user space, and thus requires some free > memory to run the programs to shut it down. Actually, Plan 9's separation > of terminal from filesystem seems quite the obvious choice when I look at > it like this. :) > I went Commodore Amiga at about that time - because it at least supported some form of multi-tasking out out the box, and I spent many happy hours getting OS9 running on it.. An interesting architecture, capable of some impressive graphics, but subject to quite severe limitations which made general purpose graphics difficult. (Commodore later released SVR4 Unix for the A3000, but limited X11 to monochrome when using the inbuilt graphics). But being 32 bit didn't give it a huge advantage over the 16 bit x86 systems for tinkering with operating system, because the 68000 had no MMU. 
It was easier to get a Unix like system going with 16 bit segmentation than a 32 bit linear space and no hardware support for run time relocation. (OS9 used position independent code throughout to work without an MMU, but didn't try to implement fork() semantics). It wasn't till the 68030 based Amiga 3000 came out in 1990 that it really did everything I wanted. The 68020 with an optional MMU was equivalent, but not so common in consumer machines. Hardware progress seems to have been rather uninteresting since then. Sure, hardware is *much* faster and *much* bigger, but fundamentally the same architecture. Intel had a brief flirtation with a novel architecture with the iAPX 432 in 81, but obviously found that was more profitable making the familiar architecture bigger and faster .
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
I don't know which other ARM board you tried, but I have always found the terrible I/O performance of the Pi to be a bigger problem than the ARM speed. The USB2 interface is really slow, and there aren't really many other (documented) alternative options. The Ethernet goes through the same slow USB interface, and there is only so much that you can do bit bashing data with GPIO's. The sdCard interface seems to be the only non-usb filesystem I/O available. And that in turn limits the viability of relieving the RAM constraints with virtual memory. So the ARM processor itself is not usually the problem for me.

In general I find the pi a nice little device for quite a few things - like low power, low bandwidth, low cost servers or displays with plenty of open source compatibility. Or hacking/prototyping where I don't want to have to worry too much about blowing things up. But it is not good for high throughput I/O, memory intensive applications, or anything requiring a lot of processing power.

The validity of your conclusion regarding low power ARM in general probably depends on what the other board you tried was..

DigbyT

On Wed, 10 Oct 2018 at 17:51, hiro <23h...@gmail.com> wrote:
> > Eliminating as much of the copy in/out WRT the kernel cannot but
> > help, especially when you're doing SDR decoding near the radios
> > using low-powered compute hardware (think Pies and the like).
>
> Does this include demodulation on the pi? cause even when i dumped the
> pi i was given for that purpose (with a <2Mbit I/Q stream) and
> replaced it with some similar ARM platform that at least had neon cpu
> instruction extensions for faster floating point operations, I was
> barely able to run a small FFT.
>
> My conclusion was that these low-powered ARM systems are just good
> enough for gathering low-bandwidth, non-critical USB traffic, like
> those raw I/Q samples from a dongle, but unfit for anything else.
>
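If anyone wants to put a number on the sdCard path from Plan 9 itself, a crude sequential read check is easy enough (a sketch only - it assumes the card shows up as /dev/sdM0, as in the boot messages quoted earlier in the thread, and reads 100MB from the raw device):

    term% time dd -if /dev/sdM0/data -of /dev/null -bs 64k -count 1600

100MB divided by the elapsed time that time(1) reports gives the sequential read rate; repeating the same thing against whatever name a USB disk appears under makes the USB2 bottleneck fairly obvious.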
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
Well, I think 'avoid at all costs' is a bit strong. The Raspberry Pi is a good little platform for the right applications, so long as you are aware of its limitations.

I use one as my 'always on' home server to give me access to files when travelling (the networking is slow by LAN standards, but ok for WAN), and another for my energy monitoring system. It is good for experimenting with OS's, especially networking OS's like Plan9 where price is important if you want to try a large number of hosts. It's good for teaching/learning. Or for running/trying different operating systems without having to spend time and resources setting up VMs (downloading and flashing an sd card image is quick and takes up no space on my main systems).

Just don't plan on deploying RPi's for mission critical applications that have demanding I/O or processing requirements. It was never intended to compete in that market.

On Wed, 10 Oct 2018 at 20:54, hiro <23h...@gmail.com> wrote:
> I agree, if you have a choice avoid rpi by all costs.
> Even if the software side of that other board was less pleasant, at least
> it worked with my mouse and keyboard!! :)
>
> As I said I was looking at 2Mbit/s stuff, which is nothing, even over USB.
> But my point is that even though this number is low, the rpi is too limited
> to do any meaningful processing anyway (ignoring the usb troubles and lack
> of ethernet). It's a mobile phone soc after all, where the modulation is
> done by dedicated chips, not on cpu! :)
>
> On Wednesday, October 10, 2018, Digby R.S. Tarvin wrote:
> > I don't know which other ARM board you tried, but I have always found
> > the terrible I/O performance of the Pi to be a bigger problem than the ARM
> > speed. The USB2 interface is really slow, and there aren't really many
> > other (documented) alternative options. The Ethernet goes through the same
> > slow USB interface, and there is only so much that you can do bit bashing
> > data with GPIO's. The sdCard interface seems to be the only non-usb
> > filesystem I/O available. And that in turn limits the viability of
> > relieving the RAM constraints with virtual memory. So the ARM processor
> > itself is not usually the problem for me.
> >
> > In general I find the pi a nice little device for quite a few things -
> > like low power, low bandwidth, low cost servers or displays with plenty of
> > open source compatibility. Or hacking/prototyping where I don't want to
> > have to worry too much about blowing things up. But it is not good for high
> > throughput I/O, memory intensive applications, or anything requiring a lot
> > of processing power.
> >
> > The validity of your conclusion regarding low power ARM in general
> > probably depends on what the other board you tried was..
> >
> > DigbyT
> >
> > On Wed, 10 Oct 2018 at 17:51, hiro <23h...@gmail.com> wrote:
> >>
> >> > Eliminating as much of the copy in/out WRT the kernel cannot but
> >> > help, especially when you're doing SDR decoding near the radios
> >> > using low-powered compute hardware (think Pies and the like).
> >>
> >> Does this include demodulation on the pi? cause even when i dumped the
> >> pi i was given for that purpose (with a <2Mbit I/Q stream) and
> >> replaced it with some similar ARM platform that at least had neon cpu
> >> instruction extensions for faster floating point operations, I was
> >> barely able to run a small FFT.
> >>
> >> My conclusion was that these low-powered ARM systems are just good
> >> enough for gathering low-bandwidth, non-critical USB traffic, like
> >> those raw I/Q samples from a dongle, but unfit for anything else.
> >>
> >
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
On Wed, 10 Oct 2018 at 21:40, Ethan Gardener wrote:
>> Not sure I would agree with that. The 20 bit addressing of the 8086 and
>> 8088 did not change their 16 bit nature. They were still 16 bit program
>> counter, with segmentation to provide access to a larger memory - similar
>> in principle to the PDP11 with MMU.
>
> That's not at all the same as being constrained to 64KB memory. Are we
> communicating at cross purposes here? If we're not, if I haven't
> misunderstood you, you might want to read up on creating .exe files for
> MS-DOS.

Agreed, but the PDP11/70 was not constrained to 64KB memory either.

I do recall the MS-DOS small/large/medium etc models that used the segmentation in various ways to mitigate the limitations of being a 16 bit computer. Similar techniques were possible on the PDP11; for example Modula-2/VRS under RT-11 used the MMU to transparently support 4MB programs back in 1984 (it used trap instructions to implement subroutine calls). It wasn't possible under Unix, of course, because there were no system calls for manipulating the mmu. Understandable, as it would have complicated the security model in a multi-tasking system. Something neither MS-DOS nor RT-11 had to deal with.

Address space manipulation was more convenient with Intel segmentation because the instruction set included procedure call/return instructions that manipulated the segmentation registers, but the situation was not fundamentally different. They were both 16 bit machines with hacks to give access to a larger than 64K physical memory. The OS9 operating system allowed some control of application memory maps in a unix like environment by supporting dynamic (but explicit) link and unlink of subroutine and data modules - which would be added and removed from your 64K address space as required. So more analogous to memory based overlays.

>> I went Commodore Amiga at about that time - because it at least
>> supported some form of multi-tasking out of the box, and I spent many
>> happy hours getting OS9 running on it. An interesting architecture,
>> capable of some impressive graphics, but subject to quite severe
>> limitations which made general purpose graphics difficult. (Commodore later
>> released SVR4 Unix for the A3000, but limited X11 to monochrome when using
>> the inbuilt graphics.)
>
> It does sound like fun. :) I'm not surprised by the monochrome graphics
> limitation after my calculations. Still, X11 or any other window system
> which lacks a backing store may do better in low-memory environments than
> Plan 9's present draw device. It's a shame, a backing store is a great
> simplification for programmers.

X11 does, of course, support the concept of a backing store. It just doesn't mandate it. It was an expensive thing to provide back when X11 was young, so pretty rare. I remember finding the need to be able to re-create windows on demand rather annoying when I first learned to program in Xlib, but once you get used to it I find it can lead to benefits when you have to retain a knowledge of how an image is created, not just the end result.

>> But being 32 bit didn't give it a huge advantage over the 16 bit x86
>> systems for tinkering with operating systems, because the 68000 had no MMU.
>> It was easier to get a Unix like system going with 16 bit segmentation than
>> with a 32 bit linear space and no hardware support for run time relocation.
>> (OS9 used position independent code throughout to work without an MMU,
>> but didn't try to implement fork() semantics.)
>
> I'm sometimes tempted to think that fork() is freakishly high-level crazy
> stuff. :) Still, like backing store, it's very nice to have.

I agree. Very elegant when you compare it to the hoops you have to jump through to initialize the child process environment in systems with the more common combined 'forkexec' semantics, but a real sticking point for low end hardware.

>> It wasn't till the 68030 based Amiga 3000 came out in 1990 that it
>> really did everything I wanted. The 68020 with an optional MMU was
>> equivalent, but not so common in consumer machines.
>>
>> Hardware progress seems to have been rather uninteresting since then.
>> Sure, hardware is *much* faster and *much* bigger, but fundamentally the
>> same architecture. Intel had a brief flirtation with a novel architecture
>> with the iAPX 432 in 81, but obviously found it more profitable to make
>> the familiar architecture bigger and faster.
>
> I rather agree. Multi-core and hyperthreading don't bring in much from an
> operating system designer's perspective, and I think all the interesting
> things about caches are means of working around their problems.

I don't think anyone would bother with multiple cores or caches if that same performance could be achieved without them. They just buy a bit more performance at the cost of additional software complexity.

I would very much like to get my hands on a ga144 to see what sort of operating system structure wo
Re: [9fans] PDP11 (Was: Re: what heavy negativity!)
Oh yes, I read Eldon Hall's book on that quite a few years ago. Meetings held to discuss competing potential uses for a word of memory that had become free. That one would be a challenging Plan9 port..

On Fri, 12 Oct 2018 at 05:13, Lyndon Nerenberg wrote:
> Digby R.S. Tarvin writes:
>
> > Agreed, but the PDP11/70 was not constrained to 64KB memory either.
>
> > I do recall the MS-DOS small/large/medium etc models that used the
> > segmentation in various ways to mitigate the limitations of being a 16 bit
> > computer. Similar techniques were possible on the PDP11, for example
>
> Coincidental to this conversation, I'm currently reading "The Apollo
> Guidance Computer: Architecture and Operation" by _Frank O'Brien_.
> (ISBN 978-1-4419-0876-6) Very interesting to see what you can do with
> a 15 bit architecture when sufficiently motivated.
>
> --lyndon
>
Re: [9fans] Screen rotation on the Raspberry Pi 4?
I bought one of these to test as a portable hdmi display: https://www.waveshare.com/7inch-fhd-monitor.htm

What I didn't realize till I read the fine print was that it only supports portrait mode (and rather non-standard video settings) - presumably the display is made with mobile phones in mind.

The relevance to this discussion is that the manufacturer seems to have had an incentive to look into the question of screen rotation in the different raspberry pi models, so their installation instructions at https://www.waveshare.com/wiki/7inch_FHD_Monitor provide some insight on how it is expected to be done. For the Pi4 that seems to be by using the screen layout editor in the preferences menu. For the Pi3 and earlier, it seems to be done with a line in config.txt (the legacy directive is sketched after the quoted message below).

I had assumed that for the Pi4 the GUI configuration was just a convenient alternative to editing a text file that is read at boot time. But it now seems that perhaps the menu option exists because the old boot time functionality is no longer supported.

DigbyT

On Sat, 10 Oct 2020 at 15:31, wrote:
> Thanks, but I don't think the issue is that a different way of
> getting that directive working is the issue; that family of
> directives seems to be no longer supported. There are a bunch
> of posts for various linux systems about this, as well; they
> now rely on xrandr to do the job. I'm more asking if anyone
> has done anything with the fancy new graphics system the 4
> includes, or done anything like generalizing the rotation bits
> that were included in the bitsy port (which on a first pass of
> reading, doesn't look too hard, but we know how that goes).
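For reference, the config.txt mechanism referred to for the Pi3 and earlier is the firmware's legacy rotation directive - sketched from memory of the pre-Pi4 firmware documentation rather than taken from the waveshare page, so treat the details as an assumption:

    # /boot/config.txt - legacy (pre-KMS) firmware only; the Pi4's newer
    # graphics driver ignores this family of directives, which seems to
    # be the problem being described here
    display_rotate=1    # 0 = normal, 1 = 90, 2 = 180, 3 = 270 degrees

Being a firmware setting, it was applied when the framebuffer was set up, before the OS took over, which is presumably why it used to work regardless of what the Pi was booting into.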