Thanks, Daniel, done.
On Thu, Jan 9, 2025 at 5:59 PM Daniel Maslowski via 9fans
<9fans@9fans.net> wrote:
Fantastic!
Ron, to make it easier, you can set the regen branch as the new default
branch in the repo settings on GitHub, so people don't accidentally file
against master.
On Thu, 9 Jan 2025, 23:22 Ron Minnich wrote:
WOW! Paul got it to build.
git/clone g...@github.com:rminnich/nix-os
git/branch -b origin/regen -n regen
cd sys/src/nix
# HEY ANYONE! WANT TO FIX THIS!
rc -x nix # set the x bits?
# make it so it does not have to be in $home/nix-os?
cd boot
mk
cd ../k10
mk
# it may seem like it hangs, it's actual
NIX is moving forward, thank you Paul!
The branch is called regen, we have our first commit in many years.
Please take a look. If you submit a PR, please add a signed-off-by:
line.
thanks
On Tue, Jan 7, 2025 at 10:01 PM Ron Minnich wrote:
so for work like this, my motto is commit early, commit often, to a
branch we can always drop later. no harm. It's easier (for me anyway)
than shuffling patches around in email.
I'm happy to accept a pull request against rminnich/nix-os; let's
call the branch regen.
thanks
As you say, Ron.
First, here's my nix script, such as it is, cribbed from the old nix one.
It has holes, guaranteed. Also, I went and pulled in a "user" directory,
just for old habits dying hard. Yes, I still use glenda on this old
terminal. Call me names for it.
#!/bin/rc
unmount /sys/includ
if you can document your steps, then others can stand on your
shoulders, possibly, and we can all move forward?
On Tue, Jan 7, 2025 at 9:08 PM Paul Lalonde wrote:
Ok, not a bad first day poking at it. I have a growing (but not ready) new
nix script to pull the right pieces over top of my build environment.
I have a near-complete build, but with hazards: 9front has evolved in a
number of places with many ulong parameters becoming usize. I have a list
of tho
if you look at the first_commit branch, you'll see a sys/src/nix/nix
script, which sets up some binds.
What we did, before building nix, on plan 9, in 2011, was a set of
binds to get the right things such as /sys/include and so on.
This won't be just a 'mk and it builds'. There's 13 years of bit rot.
And a bit more digging. Yes, I'm clearly doing this wrong. In building
nix-os/sys/src/k10/trap.c it should absolutely be using the Tos structure
from nix, not the one in the host system.
How do I re-root this correctly for this build?
Paul
On Tue, Jan 7, 2025 at 4:47 PM Paul Lalonde wrote:
Ok, I thought, what could I do.
So I went to my rPi 400, set up SSH for github, got Ron's nix-os repo and
hit "mk".
When that errored out a bunch I realized that I needed /amd64 built, so I
did that. Just as painless as I remembered.
And now, I get a ways further into the build, but hit an incompatibility.
I found the original 2011 paper, which was a Sandia report from May
2011. It's a modification of the original proposal, which I no longer
have; but it is a good summary of where we were at the end of my visit
in May.
This is interesting: "We have changed a surprisingly small amount of
code at thi
On Tue, Jan 07, 2025 at 09:20:06AM -0800, Ron Minnich wrote:
>
> Why NIX?
>
> If you think about it, timesharing is designed for a world where cores
> are scarce. But on a machine with hundreds of cores, running Plan 9,
> there are < 100 processes. We can assign a core to each process, and
> let
That name collision question re Nix-OS came up in 2011. I talked to
some folks, they more or less said "don't worry about the name", so I
decided not to.
The name NxM was intended to mean "N kernel cores, M application
cores" -- i.e. to be evocative of the NIX model. We probably could
have called
On this note, 9front does have an arm64 qemu kernel which is designed
specifically to make use of the Hypervisor.framework available on modern
Macs, and of Linux KVM.
The relevant pieces of the FQA are: https://fqa.9front.org/fqa3.html#3.3.1.1.1
This section also includes what is required
I had to use:
$ hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount 9legacy.rpi
Otherwise the image wasn't recognized.
I tried a quick startup but just got a blank screen. I don't have much time to
take a look, but I'll do so later this evening.
Thanks for this.
--
Mat Kovach
m.
But what about the name collision with the other Nix OS?
On Tue, 7 Jan 2025 at 04:24, Ben Huntsman wrote:
>
> So basically there's no getting 9vx to work on modern Mac OS (on Intel, I
> don't mean ARM)?
>
> That's truly unfortunate.
BoxedWINE exists and works, so I don't see that there can't be a "9pcemu".
And yes, "unfortunate" is right - drawterm e
Regarding the status of 9vx on macOS and m1, I would suggest an
alternative: use 9front or 9legacy with qemu. I use the 9legacy amd64
image with qemu on an M1 regularly and even that runs reasonably fast
for my taste.
In the last few hours I tried to bring up the 9pi4 kernel using
'qemu-system-aar
So basically there's no getting 9vx to work on modern Mac OS (on Intel, I don't
mean ARM)?
That's truly unfortunate.
yep, I set up ssh keys, git pull is fine, next step is try to figure
out how to build it again.
ron
On Mon, Jan 6, 2025 at 3:25 PM wrote:
I forgot to say -- cloning over ssh is reliable for me, so
if you set up ssh keys on github, that should be sufficient
for now.
Quoth o...@eigenstate.org:
As far as I can tell, it's a github bug. I've opened
a discussion:
https://github.com/orgs/community/discussions/148609
It's not just 32-bit: it depends on the quirks of i386
segmentation registers. It's never going to run on any
processor other than a 32-bit 386.
Quoth Ben Huntsman :
Hi guys-
I don't mean to take too much of a tangent, but since the nix-os repo
includes a pre-built copy of 9vx for OSX, has anyone looked at updating 9vx to
be able to compile on newer Mac OS versions? One of the big problems is that
it's very 32-bit, which Apple no longer supports compiling for.
FYI, as of today, I can not git clone github.com/rminnich/nix-os with
the 9front git. I've asked Ori to take a look. Once I can do that,
I'll start a NOTES file -- I think Charles called his NOTES, or was it
Notes, Charles, either way, I'll follow your model :-), as a way to
record our progress. I'
On Sun, Jan 05, 2025 at 10:46:16PM -0800, Ron Minnich wrote:
Do people have a preferred place to start from?
I'm inclined to something like this:
g...@github.com:rminnich/nix-os.git
grab that, cd nix/sys/src/nix/k10
mk
and see how it goes. We need a shared place to record our experiences
-- suggestions?
I think our goal this week should be that we figure
I think it's ok to start with NIX, not NxM.
I'm interested.
Not 100% sure how much work I'll be able to do, but like you said, pace
yourself and be consistent. :-)
Cheers,
Chris
On Sun, 5 Jan 2025 at 20:15, wrote:
> not sure I have any devices so performance critical that I'd want
> to dedicate a core to them. Benchmarks would be interseting.
The curried syscall stuff came before the execac stuff.
You were the one wanting userspace devdraw.
I have a hookup at a company that builds modular compute nodes for DoD.
They make x86 pluggable compute devices the size of a credit card and lots
of types of clustering hardware for them. Each card is equivalent to a
typical desktop PC. I've used Plan 9 on these devices successfully in the
past. I
not sure I have any devices so performance critical that I'd want
to dedicate a core to them. Benchmarks would be interesting.
On Sun, 5 Jan 2025 at 16:39, Ron Minnich wrote:
> take that 2011 code, bring it to your plan 9 system, and see if
But https://github.com/rminnich/nxm has 410 commits after 2011? My
understanding was NIX and "NxM" aren't really different things, that
they can be understood as just a name change si
On Sun, Jan 05, 2025 at 08:36:17AM -0800, Ron Minnich wrote:
No need for money yet!
Let's get this party started. I have queries in to ampere as to how we
can set up a simulator. However, if someone wants to take a first
step, take that 2011 code, bring it to your plan 9 system, and see if
it builds.
Again, the key here is a sustained effort. You don't hav
There have been other ideas in similar directions over the years.
E.g.
https://www.researchgate.net/publication/342759611_SCE-Comm_A_Real-Time_Inter-Core_Communication_Framework_for_Strictly_Partitioned_Multi-core_Processors
about the concepts of ACs and CCs (communication cores).
I think Brazil experimented with networking outside the kernel, but it was
pushed back in.
let's make a deal.
I will talk to Ampere about what is the right way to simulate their
system. I have friends there. If we get enough people to get NIX
running on a simulator, then I'll try to figure out how to get us some
real hardware, which we could position at a friendly university --
anybody
> On Jan 3, 2025, at 11:56 AM, Ron Minnich wrote:
>
> This has been a very interesting discussion, thanks all. My offer
> remains: if anyone wants to revive NIX, I am happy to help.
>
> ron
I have been interested in NIX since I read about it and would be
willing to assist, if possible, in reviving it.
On Jan 4, 2025, at 9:35 AM, Stuart Morrow wrote:
> This has been a very interesting discussion, thanks all. My offer
> remains: if anyone wants to revive NIX, I am happy to help.
Am I the only one who sees that the Fastcall stuff would be good for
bringing some devices out of the kernel (that are devs only for
performance reasons)?
And then, cl
Since there is a plan9 founda
Just saw a review of System76 Thelio Astra (Ampere Altra). An arm64 system
with 128 cores and 512GB of memory for under $7500. NIX's model seems more
and more applicable to commodity hardware.
On Fri, Jan 3, 2025, 11:11 AM Ron Minnich wrote:
On Wed, Jan 1, 2025 at 4:35 AM wrote:
> Is the number of TC fixed, or is it at least one TC and the number
> can increase if needed (or, put it differently, can an AC, if needed,
> switch to a TC and vice versa)?
Fixed, I believe, at boot time? I no longer recall. Nemo and lsub did
experiment with
Is the number of TC fixed, or is it at least one TC and the number
can increase if needed (or, put it differently, can an AC, if ne
Timesharing core (TC) processes schedule onto AC (app core) several
ways, one of them being execac.
execac is exec with benefits. The process is started via the normal
exec path, then you can think of it as suspending on the kernel core,
and resuming on an AC.
The way that's done: AC is running a
On Monday, December 30th, 2024 at 5:25 PM, Ron Minnich wrote:
> Thanks for the good questions.
This has been a very interesting thread and I'm very glad you've given us your
rare insights here, thank you!
There are still some points I'd like to have fleshed out, but I'm not sure how
to put it
Yes, AMD's EPYC line and derivatives, with their reasonably nice memory
partitioning is *excellent* for running independent VMs. It does a good
job of letting you scale your core counts appropriately to the size of the
VM.
Nvidia's GeForce NOW game streaming platform runs (ran? I'm not there
any
On Mon, Dec 30, 2024 at 10:15:13AM -0800, Paul Lalonde wrote:
The hard part is memory consistency.
x86 has a strong coherence model, which means that any write is immediately
visible to any other core that loads the same address. This wreaks havoc
on your cache architecture. You need to either have a global
synchronization point (effectively a global share
On Mon, Dec 30, 2024 at 9:39 AM Bakul Shah via 9fans <9fans@9fans.net> wrote:
>
> I wonder how these many-core systems share memory effectively.
Typically there is an on-chip network, and at least on some systems,
memory blocks scattered among the cores.
See the Esperanto SOC-1 for one example.
I wonder how these many-core systems share memory effectively.
> On Dec 30, 2024, at 8:25 AM, Ron Minnich wrote:
>
> BTW, there are 512- and 1024-core risc-v systems in the works, and NIX
> looks pretty good for that kind of CPU.
--
9fans: 9fans
Permalink
Thanks for the good questions.
On Sun, Dec 29, 2024 at 4:10 PM andreas.elding via 9fans
<9fans@9fans.net> wrote:
> How was it presented to the users? Could they query to see the current
> utilization of the system?
It looked very normal. To see what was running, you did ps. In the
status, you co
Thank you for the response, it was quite an interesting read. Unfortunately,
I'm not a great coder, so I can't take you up on that offer. You mentioned that
GPUs took over, but not all problems can run on GPUs. There may still be a
general interest in this (I know I am interested).
How was it p
On Sat, Dec 28, 2024 at 04:26:49PM -0800, Paul Lalonde wrote:
MIG is interesting. It's something we partly devised to deal with sharing
interactive workloads on the same GPU.
Time-slicing is very difficult on a GPU because the state is so large. You
wind up having to drain the workload and that means you're holding on to
resources until the longest job ends.
I think there is a conflict between two different types of usage that
GPU architectures need to support. One is full-speed performance,
where the resource is fully owned and utilized for a single purpose
for a large chunk of time, and the other is where the GPU is a rare
resource that needs to be shared.
This data-shuttling is one of the things that GPU vendors have been working
on.
Most of the data the GPU needs is never touched by the CPU, except to move
it to GPU memory. This is wasteful.
But the GPU already sits on the PCIe bus, as does the storage device. Why
not move the data directly from storage to GPU memory?
Apple Silicon chips may be an interesting counter-example to your view
of the architecture. They work directly from system memory; data is not
copied between different sets of memory or different areas in memory to
make it available to the GPU. Consequently the CPU and GPU work
together much
On Fri, Dec 27, 2024 at 01:31:24PM -0800, Paul Lalonde wrote:
>
> That said, now that NVDA has moved a bunch of their "resource manager"
> (read, OS) to the GPU itself and simplified the linux DRM module, the
> driver layer has simplified significantly. I'm not sure I have anywhere
> near the ban
Personally I think that there is a significant market for Nix, not in HPC,
but as a better, more distributed hypervisor. With Broadcom mismanaging
VMWare, there is a need for something better than ESXI, Hyper-V or Proxmox
for all the enterprises that aren't 100% on the cloud (which is pretty much
a
I'll take 6912 simple cores at 1GHz over 192 cores at 5GHz any day. So long
as I can spend the 3 months rebuilding the implementation to be cache and
memory friendly.
I love the EPYC line, and have spec'd them into DCs. But for raw compute
it's just not competitive for HPC-like workloads.
Paul
all that said, my offer stands: I'd love to help anyone interested to
bring nix back to life. I'd most prefer to do so on 9front, but I'm
not picky.
ron
Whilst there are many HPC workloads that are well supported by GPGPUs, we
also have multi-core systems such as Ampere One and AMD EPYC with 192 cores
(and soon even more).
I would think that some of the Blue Gene and NIX work might be relevant on
such hardware.
I've thought for a while now that NIX might still have interesting things to
say in the middle of the space, even if the HPC origins didn't work out.
Probably most of us are walking around with systems with asymmetrical cores
("performance" vs. "efficiency") in our pockets right now; it seems li
when the Plan 9 on Blue Gene work started, and for some time later, there was a
specialised kernel (the "compute kernel"?) for BG,
which was a bit like DOS, so we felt we had a contribution to make, with an
OS designed for an age of
distribution. later it turned out that SC/HPC like everything else just
On Fri, Dec 27, 2024 at 1:12 PM Kurt H Maier via 9fans <9fans@9fans.net> wrote:
> Which architecture and OS did they wind up with? I was part of the team
> that went on to administer the Coral systems, which were linux on POWER
> 9+. Even the early-stage bringup loaders were linux systems. I
>
Not directly. The AVX512 instructions include some significant
permute/shuffle/mask hardware, available on pretty much all instructions.
These in turn lead to very long capacitance chains (i.e., transistors in
series that have to stabilize each clock) and so constrain how fast the
clock can run. Fo
On Fri, Dec 27, 2024 at 01:24:40PM -0800, Paul Lalonde wrote:
> The remnants of that work are now living on in the AVX512 instruction set.
> The principal problem with Larrabee was that the ring bus connecting some
> 60+ ring stops was *so wide* (512 bits bidirectional = 1024 bits!) that it
> consu
On Fri, Dec 27, 2024 at 1:25 PM sirjofri wrote:
> I've been a game developer for >5 years and I'm always surprised by how
> much GPUs can do if used correctly. It's just incredible.
>
Yes, it never ceases to amaze me. The compute density is just
astounding.
> Can't wait to see what will
Xeon Phi was the last remnant of the first GPU architecture I worked on.
It was evolved from Larrabee, meant to run DX11 graphics workloads.
The first Phi was effectively the Larrabee chip but with the texture
sampling hardware fused off.
The remnants of that work are now living on in the AVX512 in
27.12.2024 17:33:13 Paul Lalonde :
> GPUs have been my bread and butter for 20+ years.
Nice to have another GPU enthusiast in this community. I'm pretty sure you know
like 100x more than me though :)
I've been a game developer for >5 years and I'm always surprised by how much
GPUs can do if use
On Thu, Dec 26, 2024 at 10:24:23PM -0800, Ron Minnich wrote:
> We had stopped the k10 work in 2006, when Fred
> Johnson, DOE program manager of FAST-OS, asked the FAST-OS researchers
> to start focusing on the upcoming petaflop HPC systems, which were not
> going to be x86 clusters, and (so long ag
On Fri, Dec 27, 2024 at 08:56:32AM -0800, Bakul Shah via 9fans wrote:
> This may be of some use to non-experts:
>
> https://enccs.github.io/gpu-programming/
>
I will hence start with this before diving into the references Paul
Lalonde has given. Thanks!
Thanks for the references! (They are hard, at least for me, to find
when one wants to understand at least a little of the why...)
This may be of some use to non-experts:
https://enccs.github.io/gpu-programming/
GPUs have been my bread and butter for 20+ years.
The best introductory source continues to be Kayvon Fatahalian and Mike
Houston's 2008 CACM paper: https://dl.acm.org/doi/10.1145/1400181.1400197
It says little about the software interface to the GPU, but does a very
good job of motivating and de
On Thu, Dec 26, 2024 at 10:24:23PM -0800, Ron Minnich wrote:
[very interesting stuff]
>
> Finally, why did something like this not ever happen? Because GPUs
> came along a few years later and that's where all the parallelism in
> HPC is nowadays. NIX was a nice idea, but it did not survive in the
> What would you like to know? I also have an initial broken port to
> 9front if you'd like to try to bring it to life.
Reading old emails is always interesting. Turns out I was in
discussions with a company building a CPU that was a very good fit to
NIX. I was trying to get that company to ship a research system to
lsub.
They were initially very agreeable but eventually stopped talking about
Plan 9 on their system.
OK, I got curious about when NIX started to happen. Basically, in 2011
or so, we had wrapped up the Blue Gene work, the last Blue Gene
systems having been shipped, and jmk and I were thinking about what to
do; there was still DOE money left. We decided to revive the k10 work
from 2005 or so. We ha
Hello, I more or less started that project with a white paper early in
2011 so may be able to help. NIX was inspired by what we learned from
the Blue Gene work and other Plan 9 work sponsored by DOE FAST-OS,
which ran from 2005-2011. During those years, DOE FAST-OS sponsored
the amd64 compiler, k10