Beyond just the compiler, there are also optimization and other settings
(like the multitude of C-compliance levels, how strict to be about
warnings, or conditional builds to tailor it to specific situations).

Regardless, proper binary deliveries come with CRC checksums.  This isn't
just to verify that you downloaded the file correctly, but also to help
verify that you've used the exact same Software Bill of Materials (SBOM),
versions of dependencies, and other settings to produce that same binary.
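
As a minimal illustration of that verification step, here's a small C
sketch that computes the common CRC-32 of a file and compares it against a
published value.  The command-line arguments are placeholders; real
deliveries usually publish SHA-256 or similar, but the principle is the
same.

/* crc32check.c - sketch of "checksum the delivery" verification.
 * Computes CRC-32 (polynomial 0xEDB88320) of a file so the result can be
 * compared against the value published alongside the binary.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint32_t crc32_update(uint32_t crc, const unsigned char *buf, size_t len)
{
    /* bitwise CRC-32, no lookup table - slow but short */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return crc;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file [expected-crc-in-hex]\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror(argv[1]); return 1; }

    unsigned char buf[4096];
    size_t n;
    uint32_t crc = 0xFFFFFFFFu;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        crc = crc32_update(crc, buf, n);
    fclose(fp);
    crc ^= 0xFFFFFFFFu;

    printf("%s: CRC32 = %08lX\n", argv[1], (unsigned long)crc);

    if (argc > 2) {
        uint32_t expected = (uint32_t)strtoul(argv[2], NULL, 16);
        if (expected != crc) {
            fprintf(stderr, "MISMATCH: expected %08lX\n", (unsigned long)expected);
            return 2;   /* this is where the paper trail starts */
        }
        puts("match - same bytes the publisher checksummed");
    }
    return 0;
}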

The "more safe" isn't necessarily that "millions of anonymous eyes" have
scanned the code for mistakes or anything bad.  (yes, that can help, but no
guarantee it is actively taking place).  Rather, IF something does happen,
you now have a kind of paper-trail to investigate where things went wrong.
A form of accountability (provided you can obtain that source, that
compiler version and its runtime support for whatever OS is involved - and
produce a build consistent with that CRC).

Running a prebuilt binary from RandomPlace.com is always risky (even if
things go fine that day, "sleeper code" could park itself somewhere).  But
for me, it's like buying a used car.  Yep, there's a lot of risk in whether
the prior owner is telling the truth.  I just have to use good judgement
that the car isn't laced with something, doesn't have a tracker hidden on
it, and isn't just about to blow a gasket.  Part of that good judgement is
looking at other community involvement by said theoretical
"RandomPlace.com" and running a virus scan - akin to kicking the tires.
Early adopters can use virtual machines and look for anomalies there;
after that, it's off to posting product reviews.


It's odd to me that people cry about the environmental harm of crypto, but
say not a word about all the extra power wasted recompiling from source
over and over.  Maybe they could just combine the two, so that solving a
crypto block somehow involved recompiling something :)   Can I patent that
idea?   (In a nutshell, crypto mining is computing a massive, complex CRC
over a ton of transaction info.)
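
If that sounds flippant, here's a toy sketch of the idea in C: hash the
block's transaction info together with a nonce until the result lands
under a target.  Real mining uses SHA-256 and vastly harder targets; the
FNV-1a hash and the made-up transactions below are stand-ins, just to show
the shape of the loop.

/* pow_toy.c - toy "proof of work" loop.  Not usable for anything real. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const char *data, size_t len, uint64_t nonce)
{
    uint64_t h = 1469598103934665603ULL;          /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)data[i];
        h *= 1099511628211ULL;                    /* FNV prime */
    }
    /* mix the nonce bytes in the same way */
    for (int i = 0; i < 8; i++) {
        h ^= (nonce >> (8 * i)) & 0xFF;
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void)
{
    const char *block = "alice->bob:5;bob->carol:2";  /* pretend transactions */
    const uint64_t target = 1ULL << 44;   /* hash must fall below this,
                                             i.e. top ~20 bits are zero */
    for (uint64_t nonce = 0; ; nonce++) {
        uint64_t h = fnv1a(block, strlen(block), nonce);
        if (h < target) {
            printf("nonce %llu gives hash %016llx\n",
                   (unsigned long long)nonce, (unsigned long long)h);
            break;
        }
    }
    return 0;
}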


We're still in the infancy of all this computer-stuff.  The hoopla about
Rust and the term "memory safe languages" bugged me.  Syntax sugar
altogether bugs me, as I don't think shoveling ASCII text around in a
text editor is the ultimate way software gets created.  UML failed us for
various reasons.  I think the step past high-level languages is moving
into VR, and using virtual constructs to "build" instead of "write"
software.  In doing so, you can actively monitor the dependencies between
modules (lines between declarations, and a real-time memory model of their
arrangement), and decorate AST nodes (floating in 3-space) with optional
tags (like exception policy, who's doing range checks on inputs, or how
any logging is handled) instead of cluttering up all the core business
logic.  This system would also allow actively monitoring resource
requirements, to answer a question like: this code can run on an RPi Nano
with 128MB.  Then you add one aspect (or the equivalent of a line of
code), and suddenly whatever your code is requires an RTX9090 in the
system somewhere - so maybe you want to re-think your approach.  But maybe
I'm wrong and "writing software" will always be necessary.  Time will
tell.    [ IMO, UML "failed" because huge 2D plotter-size maps of class
relationships weren't really that helpful - it was just a different format
of what the interface code already told you; and a bunch of class
relationships isn't really the meaningful part of a design ]
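
If that sounds abstract, here's a rough sketch in plain C of what I mean
by hanging policy tags off a node instead of writing them into the code.
Every name and number below is invented purely for illustration; a real
system would be a graph editor in 3-space, not a struct in a text file.

/* ast_tags.c - sketch: policies live on the node as data, not as
 * statements mixed into the business logic.
 */
#include <stdio.h>
#include <stddef.h>

enum exception_policy { EXC_PROPAGATE, EXC_LOG_AND_CONTINUE, EXC_ABORT };
enum range_check      { RANGE_CALLER_CHECKS, RANGE_CALLEE_CHECKS, RANGE_NONE };
enum log_policy       { LOG_NONE, LOG_ERRORS, LOG_EVERYTHING };

/* optional decorations attached to a node, not written into its body */
struct node_tags {
    enum exception_policy on_error;
    enum range_check      range;
    enum log_policy       logging;
    size_t                est_memory_bytes;   /* feeds the resource monitor */
};

struct ast_node {
    const char       *name;        /* e.g. "compute_invoice_total" */
    struct ast_node **deps;        /* the "lines between declarations" */
    size_t            dep_count;
    struct node_tags  tags;
};

/* total the estimated footprint - the "does this still fit on the little
 * board?" question.  (Naively double-counts shared deps; fine for a sketch.) */
static size_t total_memory(const struct ast_node *n)
{
    size_t sum = n->tags.est_memory_bytes;
    for (size_t i = 0; i < n->dep_count; i++)
        sum += total_memory(n->deps[i]);
    return sum;
}

int main(void)
{
    struct ast_node leaf = { "validate_input", NULL, 0,
        { EXC_PROPAGATE, RANGE_CALLEE_CHECKS, LOG_ERRORS, 4 * 1024 } };
    struct ast_node *deps[] = { &leaf };
    struct ast_node root = { "compute_invoice_total", deps, 1,
        { EXC_LOG_AND_CONTINUE, RANGE_CALLER_CHECKS, LOG_NONE, 64 * 1024 } };

    printf("%s needs roughly %zu bytes\n", root.name, total_memory(&root));
    return 0;
}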


-Steve

On Mon, Feb 3, 2025 at 8:58 PM Warner Losh via cctalk <cctalk@classiccmp.org>
wrote:

> On Mon, Feb 3, 2025, 5:37 PM Sean Conner via cctalk
> <cctalk@classiccmp.org> wrote:
>
> > It was thus said that the Great ben via cctalk once stated:
> > > On 2025-02-03 3:32 p.m., Sean Conner via cctalk wrote:
> > > >
> > > >   So it could be that C99 will work for you, it might not.  A C89-only
> > > >compiler will work---it'll still accept K&R style code, and it will be old
> > > >enough to avoid the "exploit undefined behavior for better benchmarks"
> > > >compilers we have today.  Or find an old K&R C compiler.  They still exist,
> > > >but generate very bad code.
> > > >
> > > >   -spc
> > >
> > > So where does one find an older version, on a current Linux build.
> > > we don't support it!!! We only port the latest intel CPU.
> > >
> > > You have 32 bit compilers, or bigger. I might want a 16 bit compiler
> > > under Linux.
> >
> >   Do you mean you want a compiler to generate 16-bit code?  Or be compiled
> > as a 16-bit program to run under Linux?  If the latter, it's not supported,
> > or at least, not supported by default [1].
> >
>
> I have a Vexix86 emulator that does much the same thing.
>
> > > I was hoping to use Embeddable Linux Kernel Subset (ELKS) BCC compiler
> > > and the 6809 code generator, but NO they had to rewrite it just for the
> > > 386.
> >
> >   It took me only a few minutes to find it.  There's the GIT repository at
> >
> >         https://github.com/lkundrak/dev86
> >
> > Yes, it requires git to initially download it, but it's available.  And it
> > *still* has 6809 code generation.  The code seems to be originally written
> > for Unix anyway, or at least it appears so from the initial importation into
> > git [2]:
>
> Bcc was written by Bruce Evans to have a compiler for 386BSD for the boot
> loader (among other reasons). It dates back to the late 80s or so. It's one
> of the reasons we in FreeBSD held onto K&R constructs in certain areas of
> the tree for so long. Well after gcc / binutils could generate the small
> bits of 16bit code the loader eventually required.
>
> Warner
>
> >         /* bcc.c - driver for Bruce's C compiler (bcc) and for CvW's C compiler */
> >
> >         /* Copyright (C) 1992 Bruce Evans */
> >
> >         #define _POSIX_SOURCE 1
> >
> >         #include <sys/types.h>
> >         #include <sys/stat.h>
> >         #include <sys/wait.h>
> >         #include <unistd.h>
> >         #include <signal.h>
> >         #include <stdlib.h>
> >         #include <string.h>
> >
> > The latest version was ported to MS-DOS at some point.  I was able to
> > compile the latest version (on a 32-bit Linux system---I no longer have a
> > MS-DOS C compiler so I couldn't test on that), but the code is C89, so in
> > theory, you could use any MS-DOS C compiler post 1989 to compile the code
> > if you so wish.
> >
> >   When I did the compile, the compiler did throw up some warnings even
> > though none were specified because the code is that old, but I did get an
> > executable:
> >
> > [spc]matrix:~/repo/dev86/bcc>./bcc
> > Usage: ./bcc [-ansi] [-options] [-o output] file [files].
> >
> > > Open source is popular because it was free.
> > >
> > > No compiler generates bad code, just some hardware was never meant to
> > > have stack based addressing, like the 6502 or the 8088/8086.
> > > Look at the mess that small C has for 8088/8086 code gen.
> > > Self hosting never seems to be important for a C compiler on a small
> > > machine.
> >
> >   The 8086/8088 was never meant to have stack based addressing?  Um, the
> > 8086/8088 has an entire register dedicated to that (BP by the way).  The
> > limitation with BP is that it's bound to the SS segment by default, and in
> > some memory models that becomes an issue, but to say it doesn't have stack
> > based addressing?  Methinks you are misremembering here.
> >
> >   And self-hosting a C compiler on a small system isn't easy with 64K of RAM
> > total.  The PDP-11 had at least 64K code space and 64K data space to work
> > with.
> >
> >   -spc
> >
> > [1]     I have run a 16-bit MS-DOS executable under Linux, but it was on a
> >         32b x86-based Linux system with a custom program I wrote to run it.
> >         I even had to emulate several MS-DOS system calls to get it to work
> >         (since I needed input from a Unix program to be piped in, I couldn't
> >         use DOSBox for this).
> >
> > [2]     Dated July 27, 2002, which is before git existed, but this
> >         repository was converted to git at some point.
> >
>
