When the Plan 9 on Blue Gene work started, and for some time after, there was a
specialised kernel (the "compute kernel"?) for BG,
which was a bit like DOS, so we felt we had a contribution to make, with an
OS designed for an age of
distribution. Later it turned out that SC/HPC, like everything else, just
27.12.2024 17:33:13, Paul Lalonde:
> GPUs have been my bread and butter for 20+ years.
Nice to have another GPU enthusiast in this community. I'm pretty sure you know
like 100x more than me though :)
I've been a game developer for >5 years and I'm always surprised by how much
GPUs can do if used correctly. It's just incredible.
Xeon Phi was the last remnant of the first GPU architecture I worked on.
It evolved from Larrabee, which was meant to run DX11 graphics workloads.
The first Phi was effectively the Larrabee chip but with the texture
sampling hardware fused off.
The remnants of that work are now living on in the AVX512 instruction set.
On Fri, Dec 27, 2024 at 1:25 PM sirjofri wrote:
> I've been a game developer for >5 years and I'm always surprised by how
> much GPUs can do if used correctly. It's just incredible.
>
Yes, it never ceases to amaze me. The compute density is just
astounding.
> Can't wait to see what will
On Fri, Dec 27, 2024 at 01:24:40PM -0800, Paul Lalonde wrote:
> The remnants of that work are now living on in the AVX512 instruction set.
> The principal problem with Larrabee was that the ring bus connecting some
> 60+ ring stops was *so wide* (512 bits bidirectional = 1024 bits!) that it
> consu
Not directly. The AVX512 instructions include some significant
permute/shuffle/mask hardware, available on pretty much all instructions.
These in turn lead to very long capacitance chains (ie, transistors in
series that have to stabilize each clock) and so constrain how fast the
clock can run. Fo
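Paul's point about per-instruction permute/shuffle/mask support can be seen in miniature below. This is a plain-Python sketch of the *semantics* only, modeling one 512-bit register as 16 lanes of 32-bit ints; the intrinsic names in the comments are real AVX512 ones, but the Python functions themselves are made up for illustration.

```python
LANES = 16  # one 512-bit register holds 16 x 32-bit lanes

def permute(idx, src):
    """Model of vpermd (_mm512_permutexvar_epi32): each output lane
    selects an arbitrary source lane, so the hardware must be able to
    route any of 16 inputs to any of 16 outputs in a single operation."""
    return [src[i % LANES] for i in idx]

def mask_add(src, mask, a, b):
    """Model of _mm512_mask_add_epi32: lanes whose mask bit is clear
    pass 'src' through untouched; lanes whose bit is set take a+b.
    Nearly every AVX512 instruction accepts such a mask operand."""
    return [a[i] + b[i] if (mask >> i) & 1 else src[i]
            for i in range(LANES)]

a = list(range(16))
b = [10] * 16
rev = permute(list(reversed(range(16))), a)   # full lane reversal
even = mask_add([0] * 16, 0x5555, a, b)       # add only in even lanes
print(rev)   # lanes reversed
print(even)  # odd lanes left at 0
```

The any-lane-to-any-lane routing in `permute` is the part that costs wiring and gate depth in silicon, which is what the capacitance-chain remark above is about.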
Thanks for the references! (They are hard to find, at least for me,
when one wants to understand at least a little of the why...)
On Fri, Dec 27, 2024 at 08:32:31AM -0800, Paul Lalonde wrote:
> GPUs have been my bread and butter for 20+ years.
>
> The best introductory source continues to be Kayvon Fatahalian and Mike
> Houston's 2008 CACM paper: https://dl.acm.org/doi/10.1145/1400181.1400197
On Fri, Dec 27, 2024 at 08:56:32AM -0800, Bakul Shah via 9fans wrote:
> This may be of some use to non-experts:
>
> https://enccs.github.io/gpu-programming/
>
I will hence start with this before diving into the references Paul
Lalonde has given. Thanks!
> > On Dec 27, 2024, at 8:32 AM, Paul Lalonde
This may be of some use to non-experts:
https://enccs.github.io/gpu-programming/
> On Dec 27, 2024, at 8:32 AM, Paul Lalonde wrote:
>
> GPUs have been my bread and butter for 20+ years.
>
> The best introductory source continues to be Kayvon Fatahalian and Mike
> Houston's 2008 CACM paper: https://dl.acm.org/doi/10.1145/1400181.1400197
GPUs have been my bread and butter for 20+ years.
The best introductory source continues to be Kayvon Fatahalian and Mike
Houston's 2008 CACM paper: https://dl.acm.org/doi/10.1145/1400181.1400197
It says little about the software interface to the GPU, but does a very
good job of motivating and de
On Thu, Dec 26, 2024 at 10:24:23PM -0800, Ron Minnich wrote:
> We had stopped the k10 work in 2006, when Fred
> Johnson, DOE program manager of FAST-OS, asked the FAST-OS researchers
> to start focusing on the upcoming petaflop HPC systems, which were not
> going to be x86 clusters, and (so long ag
On Thu, Dec 26, 2024 at 10:24:23PM -0800, Ron Minnich wrote:
[very interesting stuff]
>
> Finally, why did something like this not ever happen? Because GPUs
> came along a few years later and that's where all the parallelism in
> HPC is nowadays. NIX was a nice idea, but it did not survive in the
I've thought for a while now that NIX might still have interesting things to
say in the middle of the space, even if the HPC origins didn't work out.
Probably most of us are walking around with systems with asymmetrical cores
("performance" vs. "efficiency") in our pockets right now; it seems li
all that said, my offer stands: I'd love to help anyone interested to
bring nix back to life. I'd most prefer to do so on 9front, but I'm
not picky.
ron
On Fri, Dec 27, 2024 at 8:18 PM Anthony Sorace wrote:
>
> I've thought for a while now that NIX might still have interesting things to
> say i
Whilst there are many HPC workloads that are well supported by GPGPUs, we
also have multi-core systems such as Ampere One and AMD EPYC with 192 cores
(and soon even more).
I would think that some of the Blue Gene and NIX work might be relevant on
such hardware.
On Sat, 28 Dec 2024 at 15:18, Anthony Sorace wrote:
I'll take 6912 simple cores at 1 GHz over 192 cores at 5 GHz any day. So long
as I can spend the 3 months rebuilding the implementation to be cache and
memory friendly.
I love the EPYC line, and have spec'd them into DCs. But for raw compute
it's just not competitive for HPC-like workloads.
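The preference above is easy to sanity-check with back-of-envelope arithmetic. This counts raw core-GHz only, ignoring IPC, vector width, and memory bandwidth, all of which shift the picture further; the numbers are the ones Paul quotes.

```python
# Aggregate clock throughput: many simple cores vs. few fast cores.
gpu_like = 6912 * 1.0    # simple cores at 1 GHz -> 6912 core-GHz
epyc_like = 192 * 5.0    # big cores at 5 GHz   ->   960 core-GHz

ratio = gpu_like / epyc_like
print(f"{gpu_like:.0f} vs {epyc_like:.0f} core-GHz, ratio {ratio:.1f}x")
# -> 6912 vs 960 core-GHz, ratio 7.2x
```

The ~7x gap in raw parallel throughput is what the "3 months rebuilding the implementation" buys, provided the workload can actually be restructured to feed that many cores.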
Paul
Personally I think that there is a significant market for Nix, not in HPC,
but as a better, more distributed hypervisor. With Broadcom mismanaging
VMware, there is a need for something better than ESXi, Hyper-V, or Proxmox
for all the enterprises that aren't 100% on the cloud (which is pretty much
a
On Fri, Dec 27, 2024 at 1:12 PM Kurt H Maier via 9fans <9fans@9fans.net> wrote:
> Which architecture and OS did they wind up with? I was part of the team
> that went on to administer the Coral systems, which were linux on POWER
> 9+. Even the early-stage bringup loaders were linux systems. I
>