Hello, everyone. I'll answer all of you in a single reply because I've
reached a conclusion: I'm giving up on the idea of using shared nodes
and will instead require exclusive allocation of a whole node. The
final command line will be:
sbatch -N 1 --exclusive --ntasks-per-node=1 --mem=0 pz-train.ba
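For reference, here's a minimal sketch of what the batch script might
contain (the script's contents aren't shown in this thread, so the body
below is an assumption; the SLURM_CPUS_ON_NODE / OMP_NUM_THREADS pairing
is just one common way to size a single multi-threaded process to the
whole node):

    #!/bin/bash
    #SBATCH -N 1                 # one node
    #SBATCH --exclusive          # no other jobs share the node
    #SBATCH --ntasks-per-node=1  # a single process
    #SBATCH --mem=0              # with --exclusive, claim all memory on the node

    # Size the thread pool to every CPU Slurm sees on the allocated node.
    # (Assumes an OpenMP-style program; substitute your program's own
    # thread-count option if it has one.)
    export OMP_NUM_THREADS="$SLURM_CPUS_ON_NODE"
    srun ./my-training-program   # hypothetical placeholder for the real binary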
Laura, yes, as long as there's around 10 GB of RAM available, and
ideally at least 5 harts too, though I expect around 50 to be
available most of the time, not just 5.
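For comparison, the shared-node request I'm giving up on would have
looked something along these lines (the flags are standard sbatch
options, but the exact values are only the figures above, so treat this
as a sketch):

    # hypothetical shared-node variant: request only what the job needs
    sbatch -N 1 --ntasks-per-node=1 --cpus-per-task=5 --mem=10G pz-train.ba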
On Thu, Aug 1, 2024 at 4:28 PM Laura Hild wrote:
>
> So you're wanting that, instead of waiting for the task to finish and then
> running on the whole node, t[...]
>
> [...]hat). And you DO know the memory footprint by past jobs with
> similar inputs I hope.
>
> Bill
>
> On 8/1/24 3:17 PM, Henrique Almeida via slurm-users wrote:
> > Hello, maybe rephrase the question to fill a whole node?
> >
> > On Thu, Aug 1, 2024 at 3:08 PM Jason Si[...]
Hello, sharing would be unavoidable when all nodes are either fully
or partially allocated. There will be cases of very simple background
tasks occupying, for example, 1 hart in a machine.
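To make that concrete, such a background task might be submitted with a
request as small as this (hypothetical values and script name; any
one-CPU submission would partially occupy a node the same way):

    # hypothetical one-hart background task that partially occupies a node
    sbatch -N 1 -n 1 --cpus-per-task=1 --mem=1G background-task.sh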
On Thu, Aug 1, 2024 at 3:08 PM Laura Hild wrote:
>
> Hi Henrique. Can you give an example of sharing being[...]
>
> [...]"share nodes with other running jobs." Those seem like mutually exclusive
> requirements.
>
> Jason
>
> On Thu, Aug 1, 2024 at 1:32 PM Henrique Almeida via slurm-users
> wrote:
>>
>> Hello, I'm testing it right now and it's working pretty well i[...]

[...]exclusive?
>
> On Thu, Aug 1, 2024 at 7:39 AM Henrique Almeida via slurm-users
> wrote:
>>
>> Hello, everyone, with Slurm, how do I allocate a whole node for a
>> single multi-threaded process?
>>
>> https://stackoverflow.com/questions/78818547/with-slurm-how-to-allocate-a-whole-node-for-a-single-multi-threaded-process
Hello, everyone, with Slurm, how do I allocate a whole node for a
single multi-threaded process?
https://stackoverflow.com/questions/78818547/with-slurm-how-to-allocate-a-whole-node-for-a-single-multi-threaded-process
--
Henrique Dante de Almeida
hda...@gmail.com