On 23/3/20 8:32 am, CB wrote:
> I've looked at the heterogeneous job support but it creates two separate
> jobs.

Yes, but the web page does say:

# By default, the applications launched by a single execution of
# the srun command (even for different components of the
# heterogeneous job) are combined into one MPI_COMM_WORLD with
# non-overlapping task IDs.
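A minimal sketch of that single-srun launch, assuming heterogeneous job support is enabled on the cluster (the partition names, task counts, and the ./mpi_app binary are placeholders, not from this thread):

```shell
#!/bin/bash
# Hypothetical example: launch one MPI job across two partitions as a
# heterogeneous job. The ":" separates the job components, and by default
# a single srun combines the components into one MPI_COMM_WORLD.
srun --partition=compute1 --ntasks=4 : \
     --partition=compute2 --ntasks=4 ./mpi_app
```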
> Andy
>
> From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On
> Behalf Of CB
> Sent: Monday, March 23, 2020 11:32 AM
> To: Slurm User Community List
> Subject: [slurm-users] Running an MPI job across two partitions
>
> Hi,
>
> I'm running Slurm version 19.05.
>
> Is there any way to launch an MPI job on a group of distributed nodes from
> two or more partitions, where each partition has distinct compute nodes?
>
> I've looked at the heterogeneous job support but it creates two separate
> jobs.
>
> If there is no such capability