Hi Michael,
Sorry for the late response. Do you mean supplying --exclusive to the
srun command? Or do I have to do something else for the partitions?
Currently the users run

srun -n 1 -c 6 --x11 -A monthly -p CAT --mem=32GB ./fluent.sh

where fluent.sh is

#!/bin/bash
# Fluent is known to have trouble starting under srun when SLURM_GTIDS
# is set, so clear it before launching
unset SLURM_GTIDS
/state/partition1/ansys_inc/v140/fluent/bin/fluent

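Just to make sure I understand the two options: for the per-job case, would it
simply be a matter of adding --exclusive to that srun line, i.e.

srun -n 1 -c 6 --x11 --exclusive -A monthly -p CAT --mem=32GB ./fluent.sh

and for the per-partition case, would it be a slurm.conf entry along these
lines? (The node list below is only my guess based on our compute-0-0 and
compute-0-1 names.)

# guessed node list; OverSubscribe=EXCLUSIVE hands whole nodes to each job
PartitionName=CAT Nodes=compute-0-[0-1] OverSubscribe=EXCLUSIVE State=UP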

Regards,
Mahmood




On Sat, Sep 1, 2018 at 7:45 PM Renfro, Michael <ren...@tntech.edu> wrote:
>
> Depending on the scale (what percentage of your users run Fluent, how many
> nodes you have), you could use exclusive mode on either a per-partition or
> per-job basis.
>
> Here, my (currently few) Fluent users do all their GUI work off the cluster, 
> and just submit batch jobs using the generated case and data files.
>
> --
> Mike Renfro  / HPC Systems Administrator, Information Technology Services
> 931 372-3601 / Tennessee Tech University
>
> > On Sep 1, 2018, at 9:53 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:
> >
> > Hi,
> > I have found that when user A is running a Fluent job (some processes at
> > 100% in top) and user B starts a Fluent job of his own, the Fluent console
> > shows messages that another Fluent process is already running and it cannot
> > set affinity. This is not an error, but the speed is noticeably lower.
> >
> > Consider that a user runs "srun --x11 .... script", where the script launches
> > some Fluent processes and Slurm puts that job on compute-0-0. There should
> > be a way to make another "script" from another user go to compute-0-1, even
> > if compute-0-0 still has free cores.
> >
> > Is there any way in the Slurm configuration to set such a constraint? That
> > is, before Slurm dispatches a job to a node, it would first check whether
> > process X is already running there.
> >
> >
> > Regards,
> > Mahmood
> >
> >
>
>
