Hi,

You could try something like:

    srun -w mybigmemorynode,mybigmemorynode2,mynodewithlessmemory



On 19/10/2015 21:44, Ghislain LE MEUR wrote:
> Problem running a job with more memory only on the node where the job
> starts
> Hello,
>
> I have a pool of nodes with the same memory size (128G) and 2 others
> with more memory (512G) for other kinds of software.
>
> Is it possible to start a job on one of these 2 nodes with 512G and
> also compute on other nodes with 128G of memory? The job only needs
> more memory on the first node, where the job starts.
>
> If I run "srun/sbatch --mem 512", the job will fail because there is
> not enough memory on the other nodes.
>
> I have played with the Prolog/PrologSlurmctld variables to try to
> define RSS limits, without success, because the prolog script is not
> the parent of slurmstepd.
>
> Does anyone have an idea?
>
> Regards,

-- 
---
Mehdi Denou
Bull/Atos international HPC support
