Re: [slurm-users] GPUs not available after making use of all threads?

2023-02-14 Thread Diego Zuccato
Not a debate; we're saying nearly the same thing, just at different "granularity". If you consider the core as a whole, that's the effect you see. But a core is composed of different units (fetcher, decode/execute units, registers, ALU, FPU, MMU, etc.). The concept behind hyperthreading is having some
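For reference, this is roughly how the two views map onto a Slurm node definition: with ThreadsPerCore=2 Slurm counts every hardware thread as a CPU, while CR_Core keeps allocations on whole-core boundaries. A minimal sketch; the node name, counts, and memory below are assumptions for illustration:

  # slurm.conf (sketch): 2 sockets x 16 cores x 2 threads = 64 CPUs visible to Slurm
  NodeName=gpu01 Sockets=2 CoresPerSocket=16 ThreadsPerCore=2 Gres=gpu:4 RealMemory=192000
  SelectType=select/cons_tres
  SelectTypeParameters=CR_Core_Memory   # schedule in units of whole cores, not threads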

[slurm-users] Slurm release candidate version 23.02rc1 available for testing

2023-02-14 Thread Tim Wickberg
We are pleased to announce the availability of Slurm release candidate version 23.02rc1. To highlight some new features coming in 23.02:
- Added a new (optional) RPC rate limiting system in slurmctld.
- Added usage gathering for gpu/nvml (Nvidia) and gpu/rsmi (AMD) plugins.
- Added a new jobc
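The usage gathering mentioned above lives in the same gpu/nvml and gpu/rsmi plugins that handle GPU autodetection. A minimal sketch of that wiring; the node name and GPU count are assumptions:

  # gres.conf: let the NVML (NVIDIA) or RSMI (AMD) plugin enumerate the GPUs
  AutoDetect=nvml
  # slurm.conf
  GresTypes=gpu
  NodeName=gpu01 Gres=gpu:4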

[slurm-users] Shard accounting in sreport

2023-02-14 Thread Reed Dier
Hoping someone can tell me if I'm just thinking about this wrong, or if this is an area with room for improvement. I recently upgraded my cluster to 22.05.8 and am testing out GPU sharding on a subset of GPUs, specifically my T4s.
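For context, a shard setup like the one being tested looks roughly like the sketch below; the node name, GPU count, and per-GPU shard count are assumptions, and gres/shard only appears in reporting if it is listed in AccountingStorageTRES:

  # slurm.conf (sketch)
  GresTypes=gpu,shard
  AccountingStorageTRES=gres/gpu,gres/shard
  NodeName=t4-node01 Gres=gpu:t4:4,shard:16
  # gres.conf
  Name=shard Count=16

  # job submission, and a reporting query against the shard TRES
  sbatch --gres=shard:1 job.sh
  sreport -t hours cluster utilization --tres=gres/shard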

Re: [slurm-users] GPUs not available after making use of all threads?

2023-02-14 Thread Brian Andrus
Diego, not to start a debate, but I guess it comes down to how you look at it. From Intel's description: How does Hyper-Threading work? When Intel® Hyper-Threading Technology is active, the CPU exposes two execution contexts per physical core. This means that one physical core now works like two “logica
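On the job side, the distinction shows up in how CPUs are requested. A sketch (the application name and option values are assumptions) of asking for whole cores so that hyperthread siblings are not handed out as separate CPUs:

  # request one GPU and 4 CPUs, counting only one thread per core
  srun --gres=gpu:1 --cpus-per-task=4 --hint=nomultithread ./my_app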