Laura, yes, as long as there's around 10 GB of RAM available, and
ideally at least 5 harts too, but I expect 50 most of the time, not 5.
On Thu, Aug 1, 2024 at 4:28 PM Laura Hild wrote:
>
> So you're wanting that, instead of waiting for the task to finish and then
> running on the whole node, that the job should run immediately on n-1 CPUs?
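(For concreteness, the baseline described above could be expressed as an sbatch header roughly like this; the numbers come from the message and the script name is a placeholder.)

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    # 5 harts / 10 GB are the stated minimums; Slurm grants only what is
    # requested, so --cpus-per-task would need to be raised to actually get ~50.
    #SBATCH --cpus-per-task=5
    #SBATCH --mem=10G
    srun ./threaded_app   # placeholder for the real multi-threaded program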
Bill, would this allow allocating all the remaining harts when the
node is initially half full? How are the parameters set up for that?
The cluster has 14 machines with 56 harts and 128 GB RAM and 12
machines with 104 harts and 256 GB RAM.
Some of the algorithms used have hot loops that scale c
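(For context, hardware like that is normally described to Slurm in slurm.conf with node definitions along these lines; the node names, partition name and exact RealMemory values below are hypothetical.)

    # RealMemory is in megabytes; real values are usually a bit below the nominal size
    NodeName=node[01-14] CPUs=56  RealMemory=128000
    NodeName=node[15-26] CPUs=104 RealMemory=256000
    PartitionName=main Nodes=node[01-26] Default=YES State=UP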
So you're wanting that, instead of waiting for the task to finish and then
running on the whole node, that the job should run immediately on n-1 CPUs? If
there were only one CPU available in the entire cluster, would you want the job
to start running immediately on one CPU instead of waiting for a whole node?
Either allocate the whole node's cores or the whole node's memory? Both
will allocate the node exclusively for you.
So you'll need to know what a node looks like. For a homogeneous
cluster, this is straightforward. For a heterogeneous cluster, you may
also need to specify a nodelist for, say, the type of node you want.
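(A minimal sketch of those two options, with a hypothetical node name and job script; --mem=0 is the special value that requests all of a node's memory.)

    # option 1: request every core of one of the 56-core machines
    sbatch --nodes=1 --ntasks=1 --cpus-per-task=56 --nodelist=node01 job.sh

    # option 2: request all of the node's memory instead
    sbatch --nodes=1 --ntasks=1 --mem=0 job.sh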
Hello, sharing would be unavoidable when all nodes are either fully
or partially allocated. There will be cases of very simple background
tasks occupying, for example, 1 hart in a machine.
On Thu, Aug 1, 2024 at 3:08 PM Laura Hild wrote:
>
> Hi Henrique. Can you give an example of sharing being unavoidable?
Hello, maybe rephrase the question as: how to fill a whole node?
On Thu, Aug 1, 2024 at 3:08 PM Jason Simms wrote:
>
> On the one hand, you say you want "to allocate a whole node for a single
> multi-threaded process," but on the other you say you want to allow it to
> "share nodes with other running
Hi Henrique. Can you give an example of sharing being unavoidable?
On the one hand, you say you want "to *allocate a whole node* for a single
multi-threaded process," but on the other you say you want to allow it
to "*share
nodes* with other running jobs." Those seem like mutually exclusive
requirements.
Jason
On Thu, Aug 1, 2024 at 1:32 PM Henrique Almeida via slurm-users <slurm-users@lists.schedmd.com> wrote:
Hello, I'm testing it right now and it's working pretty well in a
normal situation, but that's not exactly what I want. The --exclusive
documentation says that the job allocation cannot share nodes with
other running jobs, but I want to allow it to do so if that's
unavoidable. Are there other ways to do this?
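(One possible workaround, not a built-in Slurm feature: query how many CPUs are currently idle on each node and size the request to the best fit. This is only a sketch; job.sh is a placeholder and the node state can change between the query and the submission.)

    # sinfo's %C field prints CPUs per node as allocated/idle/other/total
    read node idle <<< "$(sinfo -h -N -o '%n %C' |
      awk '{ split($2, c, "/"); if (c[2]+0 > best) { best = c[2]+0; n = $1 } }
           END { print n, best }')"
    sbatch --nodes=1 --ntasks=1 --nodelist="$node" --cpus-per-task="$idle" job.sh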
In part, it depends on how it's been configured, but have you tried
--exclusive?
On Thu, Aug 1, 2024 at 7:39 AM Henrique Almeida via slurm-users <
slurm-users@lists.schedmd.com> wrote:
> Hello, everyone, with Slurm, how do I allocate a whole node for a
> single multi-threaded process?
>
>
> https://stackoverflow.com/questions/78818547/with-slurm-how-to-allocate-a-whole-node-for-a-single-multi-threaded-process
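(For reference, a minimal --exclusive job script could look like this; the program name is a placeholder and OpenMP is assumed only as an example of a threaded runtime.)

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --exclusive                            # whole node, no other jobs alongside
    export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE     # use every CPU the node has
    srun ./my_threaded_app                         # placeholder for the real program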
Hello, everyone, with Slurm, how do I allocate a whole node for a
single multi-threaded process?
https://stackoverflow.com/questions/78818547/with-slurm-how-to-allocate-a-whole-node-for-a-single-multi-threaded-process
--
Henrique Dante de Almeida
hda...@gmail.com
I can confirm that after updating to the recently released 24.05.2, the API endpoint
GET /slurm/v0.0.41/jobs
now works well.
cheers
josef
From: Daniel Letai via slurm-users
Sent: Wednesday, 24 July 2024 19:29
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] R
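(For anyone wanting to reproduce the check, a query against slurmrestd could look roughly like this, assuming JWT authentication is enabled and slurmrestd is listening on localhost:6820; adjust to your own setup.)

    export $(scontrol token)                       # sets SLURM_JWT for the current user
    curl -s -H "X-SLURM-USER-NAME: $USER" \
            -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
            http://localhost:6820/slurm/v0.0.41/jobs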