google websearch ai mode
https://share.google/aimode/PWcuVWzJwK4VwQZnn

there are open hardware 32 bit chips, but when i websearch for linux
on libre silicon i don't get many hits. where are people collaborating
around it?

24 minutes ago

Most 32-bit open hardware development focuses on the RISC-V
architecture for embedded applications, rather than general-purpose
computing. The 64-bit version of RISC-V is the higher priority for
running full Linux distributions, as systems built around the 32-bit
variants typically have too little memory and address space for
general desktop use.
However, collaborative projects do exist in the following open
hardware communities, with some efforts specifically targeting Linux
on 32-bit chips.
RISC-V and embedded Linux projects

lowRISC/Ibex: This production-quality open-source 32-bit RISC-V core
is optimized for embedded applications. While not for desktop Linux,
it is under active development and used by those creating specialized
embedded systems that may run a minimal Linux version.
RISC-V 32-bit Linux From Scratch: The GitHub project
ultraembedded/riscv32_linux_from_scratch provides a build process for
a minimal GCC, Linux kernel, and BusyBox setup on 32-bit RISC-V. It is
a specific resource for developers targeting the architecture for
embedded systems.
Linux-libre Freesh Project: The Freesh sub-project by the Free
Software Foundation Latin America (FSFLA) offers pre-compiled
Linux-libre kernels, including builds for 32-bit ARMv7 with hardware
FPU and for 32-bit RISC-V, explicitly for use in completely free and
open-source systems.

Wider open hardware ecosystems

CHIPS Alliance: This is a major collaborative organization for
open-source hardware and software development, relevant to silicon
devices and FPGAs. It is a hub for developing open CPUs, peripherals,
and IP blocks. Collaborators include technology developers, companies
like Antmicro, and the Linux Foundation.
OpenHW Group: This is a member-driven, non-profit organization for
hardware and software designers who collaborate on open-source cores,
IP, and tools. They host high-quality, open-source hardware
developments, including many RISC-V projects.
FOSSi Foundation: The Free and Open Source Silicon Foundation supports
and promotes open silicon, featuring blogs and newsletters like El
Correo Libre that cover projects like LibreBMC, an open-source
hardware-based Baseboard Management Controller.
Embedded Recipes: Events like the Embedded Recipes conference host
talks and workshops that explore open-source hardware relevant to
Linux, including boards and the future of "Libre Silicon" designs.

The challenge for 32-bit Linux
As you discovered with your search, mainstream collaboration is
focused on 64-bit platforms for general computing. The low level of
activity for 32-bit Linux is a result of a few factors:

Memory constraints: Modern Linux kernels and applications have a high
memory footprint, making them difficult to run on the low-RAM systems
that typically use 32-bit chips.
Ecosystem shift: The open-source and commercial hardware industries
have largely moved toward 64-bit architectures, and the tooling and
developer interest have followed.
Minimalist use case: The use cases for Linux on 32-bit open hardware
are highly specialized, so the development happens in smaller, more
focused communities rather than large-scale, mainstream projects.

are there single board computers with fully open silicon that run Linux?

23 minutes ago

No, there are currently no widely available single-board computers (SBCs) with a
fully open silicon main processor that can run a general-purpose Linux
distribution. While many boards feature open-source hardware designs
for the board itself, the core System-on-Chip (SoC)—which includes the
CPU, GPU, and other components—is typically a proprietary black box
from a manufacturer.
The challenge of "fully open"
The term "fully open silicon" refers to a processor where the entire
design, from the low-level circuits to the top-level architecture, is
publicly verifiable. Achieving this is extremely challenging for
several reasons:

Massive investment: Designing and fabricating a modern SoC requires a
huge amount of capital, time, and specialized expertise. It is often a
multinational effort involving thousands of engineers.
Proprietary IP blocks: Even in open ecosystems like RISC-V, SoCs are
constructed from a library of Intellectual Property (IP) blocks. While
the CPU core might be open, blocks for memory controllers, display
outputs, and I/O can be proprietary and confidential.
Vendor software: Many features, especially hardware acceleration for
graphics and AI, require closed-source "firmware blobs" and drivers to
function. Without these, the hardware is often unusable at full
capacity.

The closest options
While a truly "fully open" SBC with Linux remains elusive, several
projects represent significant steps toward that goal, each with its
own trade-offs.
SBCs with open board designs

Olimex OLinuXino boards: This series of SBCs has a very strong
commitment to open-source hardware. Olimex provides the schematics,
PCB layout files, and software under permissive licenses. However, the
SoCs themselves (from vendors like Allwinner and NXP) contain
proprietary components.
Hackaday.io projects: The community on Hackaday features numerous open
hardware projects, including some with SBC designs. A key example is
the Von VisionAI, which released its integrated hardware design but
likely still relies on proprietary SoC components.

Boards with open instruction sets

RISC-V-based SBCs: These boards use the open RISC-V Instruction Set
Architecture, but the surrounding SoC is not necessarily open.

BeagleV: One of the first affordable RISC-V SBCs designed to run
Linux, the BeagleV board from BeagleBoard.org, Seeed Studio, and
StarFive moved the industry toward open ISA hardware. However, the
full SoC is not fully open source.
Milk-V Duo: These small RISC-V boards, like the Milk-V Duo and Milk-V
Duo S, are designed for Linux and edge AI applications. While they use
the open RISC-V ISA, the SoC contains proprietary elements.

Boards with significant upstreaming efforts

Libre Computer boards: Libre Computer is a company dedicated to
producing boards with open-source software support for the Linux
kernel and other projects. They provide access to all the hardware
design information available to them and are focused on working with
upstream communities. While the SoCs themselves are proprietary, the
company's commitment to upstreaming greatly improves the board's
long-term viability with open-source software.

What is most realistic right now?
The closest you can get to a "fully open silicon" Linux SBC is a board
with an open hardware design (like an Olimex board) that uses a RISC-V
SoC with the highest possible level of transparent, mainline Linux
support (like some of the newer Milk-V boards). This combination
maximizes openness at both the board and CPU architecture levels, but
does not achieve full "libre silicon" due to the proprietary nature of
many SoC components.

now, there are a lot of open silicon chips, I understand — people are
running these on degas I guess? or how does a common person use them?

22 minutes ago

That's an excellent clarification. "Open silicon chips" are not like
readily available components you can buy off the shelf at your local
electronics store. Instead, hobbyists and professionals use them
through one of three main avenues: prototyping on FPGAs, getting
designs fabricated on shared "shuttle" runs, or integrating
intellectual property (IP) cores into custom designs.

You were on the right track about FPGAs, which are central to how many
people interact with open silicon. "Degas," however, is a term used in
semiconductor manufacturing and is not related to running a chip
design.
Here is a breakdown of how a common person or hobbyist can use
open-source silicon:
1. Prototyping and running designs on FPGAs
This is the most common way for enthusiasts and engineers to
experiment with open silicon.

What is an FPGA? A Field-Programmable Gate Array (FPGA) is a chip that
can be reconfigured to behave like almost any other digital circuit.
Think of it as a blank canvas of logic gates that you can reprogram as
many times as you want.
How it works: Instead of manufacturing a custom chip, you write your
open silicon design (such as an open-source RISC-V core like Ibex) in
a hardware description language like Verilog. You then use specialized
software to "synthesize" your design and generate a configuration
file, or "bitstream," which is uploaded to the FPGA.
For the common person: This is the most accessible and affordable
path. You can buy an FPGA development board, like the ones from
TinyFPGA or Digilent, for under a hundred dollars. This allows you to
explore open hardware, learn digital logic, and run your design in a
physical device without paying for expensive chip fabrication.
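
As a concrete illustration of the "How it works" step above, here is a
minimal sketch of describing a design in Python and emitting Verilog
from it. It assumes the open-source Amaranth toolchain (a
Python-embedded hardware description language, not mentioned above) is
installed; the module and counter width are purely illustrative.

    # Minimal sketch: describe a design in Python with the open-source
    # Amaranth HDL and emit Verilog from it. Amaranth, the module, and the
    # counter width are illustrative assumptions; the same flow applies to
    # hand-written Verilog for a core like Ibex.
    from amaranth import Elaboratable, Module, Signal
    from amaranth.back import verilog

    class Blinky(Elaboratable):
        """Divide the system clock down and drive a single LED."""

        def __init__(self):
            self.led = Signal()

        def elaborate(self, platform):
            m = Module()
            counter = Signal(24)                  # free-running counter
            m.d.sync += counter.eq(counter + 1)   # increments every clock cycle
            m.d.comb += self.led.eq(counter[-1])  # LED follows the top bit
            return m

    if __name__ == "__main__":
        top = Blinky()
        # Print plain Verilog; an open flow such as Yosys + nextpnr (or a
        # vendor tool) then synthesizes it into a bitstream for the board.
        print(verilog.convert(top, ports=[top.led]))

Flashing the resulting bitstream onto a TinyFPGA or Digilent board is
then handled by that board's usual programming utility.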

2. Multi-project wafer (MPW) shuttle programs
For those who want to move beyond prototyping and have an actual
custom chip manufactured, MPW programs offer a much cheaper way to do
it.

How it works: An MPW shuttle is a way for multiple developers to share
the cost of a single fabrication run. Chip designs from dozens or even
hundreds of projects are combined onto one large silicon wafer.
OpenMPW: In a game-changing move for open silicon, Google, SkyWater
Technology, and others have launched the OpenMPW shuttle program. This
allows any developer with an open-source design to submit their chip
for manufacturing at no cost, using an open-source process design kit
(PDK).
For the common person: Projects like Tiny Tapeout, which lets users
submit very small designs, grew directly out of the OpenMPW effort.
This has enabled hundreds of hobbyists, students, and engineers to
have their own open-source silicon designs manufactured for free or at
very low cost.

3. Integrating open IP cores
Large-scale projects and companies often integrate open-source IP
cores into their custom chip designs.

How it works: An "IP core" is a reusable block of logic, like a
pre-designed RISC-V processor or an I/O controller, that can be
dropped into a larger chip design. Organizations like the CHIPS
Alliance and OpenHW Group maintain libraries of these tested,
open-source IP blocks.
For the common person: Unless you are an experienced chip designer
working on an advanced project, you are more likely to benefit from
the results of this process than to participate directly. For example,
a commercially available system-on-chip might include an open-source
RISC-V core, allowing you to benefit from the open architecture while
using a fully validated product.

In summary, a common person does not buy open silicon chips off the
shelf. Instead, they use them by reconfiguring FPGAs with open-source
designs, participating in shared manufacturing runs to get custom
chips made, or utilizing boards and products that have integrated
open-source components.

I read the z80 was built by hand … is this true? I found an open
source z80 implementation. are there larger cpus that can be built by
hand?

15 minutes ago

Your assumption about the Z80 is largely true in spirit: early
microprocessors were designed in a hands-on, human-intensive process.
The Zilog Z80 was not literally soldered by hand from individual
transistors, but its layout was hand-drawn on large sheets of paper or
Mylar by engineers. This painstaking process, with the complexity
managed by humans, was the standard for all early chip designs.
Hand-built vs. modern CPUs
Design and layout
  Hand-built Z80 era (circa 1976): drawn by hand, one transistor and
  wire at a time, on large drafting materials; a process of months for
  a small team.
  Modern multi-core CPU (circa 2025): hand-drawn elements still exist
  for highly optimized blocks (like the CPU cores or cache), but the
  majority is auto-generated by specialized software.

Design tool
  Z80 era: pencil, paper, rulers, and a lot of manual verification.
  Modern CPU: sophisticated electronic design automation (EDA) software
  that manages complexity on a massive scale.

Complexity
  Z80 era: ~8,500 transistors; the entire chip layout could be
  understood by a few people.
  Modern CPU: billions of transistors; no single human can comprehend
  the full layout.

Manufacturing
  Z80 era: laid out and double-checked by a human before being
  converted into a photomask for fabrication.
  Modern CPU: automated fabrication in multi-billion-dollar facilities,
  with designs verified by software.
The closest you can get to "building a CPU by hand"
You can, and many people do, create larger CPUs by hand today, but not
by using individual transistors like you might build a simple radio.
Instead, you can build them from discrete logic chips.

Discrete Logic Chips: These are off-the-shelf integrated circuits
(ICs) that contain a small number of logic gates (AND, OR, NOT) or
more complex building blocks like adders and registers. This approach
abstracts away the individual transistor level, allowing hobbyists to
focus on the overall CPU architecture.
The 6502 on a Breadboard: A famous project by Ben Eater involves
building a fully functional 8-bit computer using a vintage 6502
microprocessor and a handful of supporting discrete logic chips on a
breadboard. The project has been hugely successful in teaching people
how early computers worked and how to assemble the components by hand.
Larger CPUs on FPGAs: For more complex CPUs, like the open-source
RISC-V cores mentioned previously, the "hand-building" happens in the
code. You can use a Hardware Description Language (HDL), like Verilog,
to create a text-based design of the CPU's logic. The design is still
"built by hand" in the sense that an engineer writes every line, but
the final product is a programmed FPGA, not a discrete-logic computer.

The limits of complexity
It's physically possible to build a larger CPU than the Z80 using
discrete logic, but you'd be limited by several factors:

Space: The physical size would become enormous, requiring huge circuit
boards to contain the thousands of individual ICs.
Speed: With wires running all over the place, the delays would be
significant, and your CPU would run at a very low clock speed, if at
all.
Complexity: As the transistor count grows, so does the complexity of
managing and debugging all the connections, making it essentially
impossible to build a modern, high-performance CPU by hand with
discrete parts.

The trade-off is between complexity and practicality. Building a
simple 8-bit CPU like the 6502 by hand is a feasible and educational
project. Building a multi-core 64-bit Linux-capable CPU by hand is a
logistical impossibility.

I’m trying to make simple text-mode file editing more verifiable.
could I run a cpu in a logged manner, such that I can verify the
physical reality matches the assembly code?

12 minutes ago

Achieving a fully logged CPU execution that can be verifiably mapped
to assembly code is a very difficult and ongoing research problem, not
a feature available for a standard desktop computer. Standard CPUs are
designed for speed, and logging every micro-operation would create an
immense performance bottleneck. However, the field of
verifiable computing and related technologies offers several approaches
and tools that come close to what you're describing, and which are
used for applications where high assurance is required.
What is possible today: A multi-layered approach
You cannot simply "log a physical CPU" and compare it to assembly. The
verification process needs to be handled at different layers, from the
hardware design to the software execution.
1. Hardware-level verification using formal methods
This is the most robust form of verification, used in safety-critical
applications like aerospace. It is not something an end-user does, but
it guarantees the chip itself behaves as specified.

How it works: Instead of just testing a chip's functions, engineers
use mathematical analysis to prove the correctness of the hardware
design.
What it proves: Formal verification can prove that for every possible
instruction and input, the CPU design will behave exactly as defined
by its specification, eliminating whole classes of design bugs
relative to that specification.
Relevance to you: While you can't perform this yourself, if you use an
open-source CPU design that has been formally verified, you can have a
high degree of confidence that the hardware will not introduce errors.

Sources: "Formal verification of hardware: misconception and reality"
(IEEE Xplore); "Understanding Formal Verification" (Siemens
Verification Horizons blog).
2. Instruction-level verification in a simulated environment
This is the closest you can get to logging every single CPU
instruction, and it is most accessible to developers and enthusiasts.

How it works: Instead of running the CPU physically, you run the
assembly code on a CPU simulator. The simulator can be configured to
produce a detailed trace of every instruction, register change, and
memory access.
Verification: You can write a separate program or script to analyze
the generated log and verify that the CPU's behavior matches the
expected output for every assembly instruction (a toy sketch follows
after this list).
Relevant tools:

FPGA debug tools: These allow you to instrument a "soft CPU" running
on an FPGA with your own probes and verification logic.
Simulators: A tool like the Virtual 6502 mentioned in the search
results can show you the CPU's internal state for every cycle. For
RISC-V, there are specific verification suites with similar
functionality.
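
To make the verification step above concrete, here is a toy sketch in
Python: a made-up three-instruction accumulator machine (not a real
RISC-V or 6502 simulator) that logs every executed instruction,
followed by a second pass that checks the log for consistency. Real
ISA simulators and FPGA trace probes emit richer records, but the
principle is the same.

    # Toy illustration of instruction-level trace logging: a made-up
    # three-instruction accumulator machine, not a real RISC-V or 6502
    # simulator. Log every retired instruction, then check the log.

    def run(program):
        """Execute a list of (opcode, operand) pairs and log every step."""
        acc, trace = 0, []
        for pc, (op, arg) in enumerate(program):
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "STORE":
                pass                      # a real machine would write memory here
            else:
                raise ValueError(f"unknown opcode {op!r}")
            trace.append({"pc": pc, "op": op, "arg": arg, "acc": acc})
        return acc, trace

    program = [("LOAD", 5), ("ADD", 7), ("STORE", 0)]
    result, trace = run(program)

    # Verification pass: replay the log and confirm each accumulator value
    # follows from the previous state and the logged instruction.
    acc = 0
    for entry in trace:
        if entry["op"] == "LOAD":
            expected = entry["arg"]
        elif entry["op"] == "ADD":
            expected = acc + entry["arg"]
        else:
            expected = acc
        assert entry["acc"] == expected, f"trace mismatch at pc={entry['pc']}"
        acc = expected
    print("trace verified, final accumulator =", result)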

3. Execution integrity with trusted execution environments (TEEs)
TEEs are the current industry standard for verifying that a program
has run on genuine, untampered hardware.

How it works: A TEE, such as Intel's Software Guard Extensions (SGX)
or AMD's Secure Encrypted Virtualization (SEV), creates a secure
enclave for your code. The hardware provides a cryptographic
"attestation" that the code and data inside the enclave were not
tampered with.
What it proves: You can prove to an external party that the code you
specified executed correctly on the hardware you expected.
Limitations: TEEs do not log every instruction. They provide a binary
"yes/no" answer: either the execution was valid, or it failed
attestation. They are also vulnerable to side-channel attacks, which
can leak information through shared resources like caches. Mitigations
for these attacks, like those found in the QuanShield paper, are
constantly being developed.
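
To make the binary "yes/no" nature of attestation concrete, here is a
toy model in Python. It is not the real SGX or SEV protocol: the
"measurement" is just a hash of the program, and the shared DEVICE_KEY
stands in for the hardware's attestation key (real TEEs use asymmetric
keys and certificate chains). It shows why a verifier learns whether
the expected code ran, not what it did at each instruction.

    # Toy model of TEE attestation, not the real SGX/SEV protocol.
    import hashlib
    import hmac

    DEVICE_KEY = b"stand-in-for-the-hardware-attestation-key"

    def quote(program_bytes):
        """What the 'hardware' reports: a measurement plus a keyed signature."""
        measurement = hashlib.sha256(program_bytes).digest()
        signature = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
        return measurement, signature

    def verify(measurement, signature, expected_program):
        """What the remote verifier does: accept or reject, nothing in between."""
        expected = hashlib.sha256(expected_program).digest()
        genuine = hmac.compare_digest(
            hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest(), signature)
        return genuine and measurement == expected

    editor = b"hypothetical text editor binary"
    m, s = quote(editor)
    print(verify(m, s, editor))               # True: quote is genuine, code matches
    print(verify(m, s, b"tampered binary"))   # False: measurement mismatch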

4. Hardware-backed verifiable computation (zkVMs)
For even higher assurance, a new technology called Zero-Knowledge
Virtual Machines (zkVMs) uses cryptographic proofs to certify a
computation.

How it works: A zkVM generates a cryptographic proof of a
computation's correctness. A verifier can check this proof with much
less computational effort than re-running the original program.
What it proves: The zkVM guarantees that the program was executed
correctly, even if the CPU executing it is untrusted.
Where it's used: This is a cutting-edge field with applications in
blockchains and confidential cloud computing.

How to apply this for verifiable text editing
For your specific use case of verifiable text editing, the problem can
be broken down into ensuring:

The text editor program is what you think it is (integrity).
The program executes without interference (confidentiality and integrity).
The program behaves deterministically, and its output is correct.

Combining the approaches above, here is how you could get a high level
of assurance:

Use open hardware: Start with a platform like an open-source RISC-V
core on an FPGA. You can formally verify the CPU design yourself, or
use one that has already been verified by a third party.
Verify the build process: Implement a Reproducible Build process for
your text editor. This ensures that the compiled assembly code is
identical every time it is built from the same source, which can be
verified by multiple parties (a hash-comparison sketch follows after
this list).
Use a TEE or zkVM: To guarantee that the verified software executes on
the verified hardware without tampering, run your text editor inside a
TEE or a zkVM. This provides a cryptographically secure log of the
execution.
Design a minimalist system: To reduce complexity, minimize the
software and hardware stack. Running a minimal text editor on a
simple, formally verified core, using a verifiable computation
framework, would give you the highest confidence that the "physical
reality matches the assembly code."
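
For the reproducible-build step above, the check itself is simple once
two parties have built the editor independently: the artifacts should
be bit-for-bit identical. A minimal sketch, with hypothetical file
names:

    # Minimal sketch of checking a reproducible build: two parties build the
    # same editor source independently, then compare cryptographic hashes of
    # the resulting binaries. The file names are hypothetical placeholders.
    import hashlib
    from pathlib import Path

    def sha256_of(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    builds = ["editor-build-local.bin", "editor-build-independent.bin"]
    digests = {p: sha256_of(p) for p in builds}

    for path, digest in digests.items():
        print(f"{digest}  {path}")

    if len(set(digests.values())) == 1:
        print("builds are bit-for-bit identical")
    else:
        print("builds differ: the toolchain or environment is not reproducible")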

for such a simple task, couldn’t I run a slow cpu that could be logged
reasonably?

11 minutes ago

Yes, that is exactly the right approach for a task where verifiability
is more important than speed. A slow CPU, especially a custom-built
one for this purpose, is much more manageable to log and verify. The
key is reducing the complexity of the hardware and software so that
you can control and observe every single step.

A standard high-speed CPU has complex features like instruction
pipelines, out-of-order execution, and deep cache hierarchies to speed
up computation. These features make logging every cycle of execution
impractical and untrustworthy.
How a slow, verifiable CPU would work
Instead of relying on a fast, complex, and opaque CPU, your system
would be built to prioritize simplicity and transparency. It could use
an older or purpose-built architecture with these features:

Simple architecture: A slow, simple CPU (like an 8-bit or 16-bit
processor, or a basic RISC-V core) has a short instruction pipeline,
or no pipeline at all. This means one instruction is fully executed
before the next one is fetched.
Accessible logging: In such a system, you could use a separate piece
of hardware (like a logic analyzer or another microcontroller) to
observe the CPU's pins and bus. Since the CPU is slow, this external
logger could easily keep up, recording:

The instruction fetched.
The address of the instruction.
Any memory reads or writes performed.
The contents of key registers.

Known hardware: The simplicity of the CPU means its internal
microarchitecture and physical layout are well-understood. This allows
for very high confidence in its behavior, especially if it is an
open-source core with a publicly available design.
Simple software: For a task like simple text editing, you don't need a
complex OS, which would introduce countless other variables. A small,
simple, and formally verified text editor program could be written
directly in assembly language.

How you would implement this

Select a simple processor: Use a proven and simple open-source core
like the 32-bit Ibex RISC-V core. You can run this on an FPGA board.
Instrument your system: The FPGA allows you to design your
verification logic right alongside the CPU. You can add extra "probes"
to log the instruction and state of the CPU for every single cycle.
Run a minimal text editor: Write a text editor in assembly or a simple
language for the RISC-V core. Keep the code minimal and deterministic.
Log the execution: Use on-chip trace logic (for example, an integrated
logic analyzer or custom probe logic added to the design) to record
the full execution trace to an external file.
Verify the output: Compare the final state of the text file in memory
with the recorded log. You can also re-run the log through a software
simulator to confirm the trace is valid.
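
For the final "Verify the output" step, here is a sketch of replaying
a recorded log against a software reference model and flagging the
first divergence. The log format (one record per retired instruction)
and the tiny two-opcode golden model are illustrative only; a real
setup would pair the FPGA trace with a full ISA simulator for the core
being checked.

    # Sketch: replay a recorded execution log against a software reference
    # model and report the first divergence.

    def reference_step(state, instr):
        """Golden model: register file after one instruction."""
        regs = dict(state)
        if instr["op"] == "addi":
            regs[instr["rd"]] = regs[instr["rs1"]] + instr["imm"]
        elif instr["op"] == "add":
            regs[instr["rd"]] = regs[instr["rs1"]] + regs[instr["rs2"]]
        return regs

    def check_trace(initial_regs, hardware_log):
        state = dict(initial_regs)
        for i, entry in enumerate(hardware_log):
            state = reference_step(state, entry["instr"])
            if state != entry["regs_after"]:
                return f"divergence at instruction {i}: model {state}, logged {entry['regs_after']}"
        return "hardware trace matches the reference model"

    # Hypothetical two-instruction log captured from the slow CPU.
    log = [
        {"instr": {"op": "addi", "rd": "x1", "rs1": "x0", "imm": 42},
         "regs_after": {"x0": 0, "x1": 42, "x2": 0}},
        {"instr": {"op": "add", "rd": "x2", "rs1": "x1", "rs2": "x1"},
         "regs_after": {"x0": 0, "x1": 42, "x2": 84}},
    ]
    print(check_trace({"x0": 0, "x1": 0, "x2": 0}, log))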

By limiting the scope and complexity, you make the verification
problem tractable. A general-purpose computer is built for performance
and would be practically impossible to fully log, but a slow,
custom-designed system for a specific, simple task makes this kind of
verifiable computing possible for hobbyists and researchers.

these are my priorities: 1. fully open chip design 2. easy to set up
with Linux and existing hardware 3. gpl/libre/or near communities that
value users/or actively developed by diverse community 4. can buy or
cheaply produce a physical chip

6 minutes ago

This set of priorities represents the ideal for a fully libre hardware
and software ecosystem. While no single existing project perfectly
meets all four goals right now, the
RISC-V ecosystem comes closest, with various sub-projects addressing
your priorities in different ways.
Here is a breakdown of how the RISC-V ecosystem aligns with your
priorities, along with the trade-offs involved.
1. Fully open chip design

RISC-V Cores: The Instruction Set Architecture (ISA) of RISC-V is an
open standard, and many high-quality CPU cores are available as
open-source hardware designs. The Ibex core, developed by lowRISC, is
one example that is production-quality and open. The VexRiscv core
from Charles Papon can also be used for open-source ASIC development
and can run Linux.
The "black box" problem: The biggest challenge here is that an open
core is only one part of a full System-on-Chip (SoC). The SoC includes
other components like memory controllers, bus interconnects, and I/O
controllers. While many of these are also available as open IP,
putting them all together in a single, fully open design is a complex
engineering task.

2. Easy to set up with Linux

System-on-Chip (SoC) builders: Projects like LiteX, which commonly
pairs with the VexRiscv core, are designed to generate complete SoC
designs from open-source IP. This makes it easier to create a full
system that can run Linux.
Linux support: The RISC-V community has extensive Linux support.
Projects like LiteX have dedicated efforts to run mainline Linux, with
drivers and toolchains available. However, because RISC-V chips can be
customized with different IP blocks, you may need to compile a custom
kernel for your specific hardware configuration.
Trade-off: Fully open vs. Ease-of-use: Boards from commercial vendors
like BeagleBoard or Milk-V that run Linux are easier to use out of the
box, but their SoC often contains proprietary IP. If you build a fully
open SoC using LiteX on an FPGA, you gain full transparency but must
handle more of the integration yourself.

3. GPL/libre communities

Open Hardware communities: The RISC-V ecosystem is rich with
collaborative projects and organizations that align with free and open
principles, such as the CHIPS Alliance, the OpenHW Group, and the
FOSSi Foundation. These groups develop and share open IP and tools.
Diversity and inclusivity: Organizations like the Open Hardware
Diversity Alliance are active within the RISC-V community to promote
participation from underrepresented groups, fostering a more inclusive
environment.
Licensing: While many open hardware designs use permissive licenses,
there is ongoing discussion about hardware-specific copyleft licenses
(such as the CERN Open Hardware Licence) that are more robust for
hardware than the software-oriented GPL. However, many projects still
use GPL for their source designs,
and organizations like the Gathering for Open Science Hardware (GOSH)
strongly support user freedom.

4. Can buy or cheaply produce a physical chip

OpenMPW program: The Google-sponsored Open Multi-Project Wafer
(OpenMPW) shuttle program offers the best path for hobbyists and
students to get their open-source chip designs fabricated for free.
"Cheaply produce" scale: While the OpenMPW program allows for free
fabrication, it requires your design to be integrated into a larger,
standardized chip template (like the Caravel harness) and is not a
guaranteed process for every submission. Getting a single, custom chip
made outside of this program remains very expensive.
"Buy a chip" scale: For mass-market devices, commercial RISC-V chips
are becoming widely available and can be bought cheaply. However,
these are almost never fully open down to the silicon layout,
containing the proprietary IP blocks mentioned earlier.

Summary of how to approach your priorities
To best meet all your priorities, you should focus your efforts on the
LiteX + RISC-V + OpenMPW path.

Use LiteX to build an SoC with a VexRiscv or Ibex core, as sketched below.
Work within the FOSSi Foundation and CHIPS Alliance communities, which
are highly active in this space.
Participate in the OpenMPW shuttle program to get your chip fabricated for free.
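
As a rough sketch of the first item above (building a LiteX SoC around
VexRiscv), here is what driving LiteX from Python can look like. The
usual route is the command-line target scripts shipped with
litex-boards; the module path, class name, and keyword arguments below
are assumptions based on recent releases and may need adjusting, and
synthesis additionally requires an FPGA toolchain (Yosys/nextpnr or
the vendor tools) installed.

    # Rough sketch, assuming recent LiteX and litex-boards releases; names
    # and keyword arguments follow the usual board targets but may differ
    # between versions.
    from litex_boards.targets import digilent_arty
    from litex.soc.integration.builder import Builder

    soc = digilent_arty.BaseSoC(
        cpu_type="vexriscv",               # open 32-bit RISC-V soft core
        integrated_main_ram_size=0x10000,  # small on-chip RAM; Linux needs the board's DRAM
    )
    builder = Builder(soc, output_dir="build/arty")
    builder.build()                        # generates the gateware for the board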

This approach offers the highest level of openness at every step of
the process while still being able to produce a physical,
Linux-capable chip. However, it requires a significant time investment
and technical expertise compared to buying a proprietary, commercially
available board.
