The available features/constraints aren't strictly necessary; their purpose is
to offer a slightly more flexible way to request resources (esp. GPUs).
Quite often people don't specifically need a P100 or V100, they just
can't run on a Kepler card; with the '--gres=gpu:p100:X' syntax
they can (I b
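For illustration, a hedged sketch of the two request styles; the feature
tags are an assumption about how an admin might label nodes, not anything
confirmed in this thread:

    # Pin the job to a specific GPU type (type must be defined in gres.conf):
    sbatch --gres=gpu:p100:2 job.sh

    # Or take any GPU but rule out older cards via node features,
    # assuming nodes carry tags like Features=p100, v100, or kepler:
    sbatch --gres=gpu:2 --constraint='p100|v100' job.sh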
On Thu, Dec 6, 2018 at 2:08 AM Loris Bennett wrote:
>
> Eli V writes:
>
> > We run our cluster with SelectTypeParameters=CR_Core_Memory and always
> > require users to set the memory needed when submitting a job, to avoid
> > swapping our nodes into uselessness. However, since slurmd is pretty
> > vigilant
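For context, a minimal sketch of the setup being described; the values
are illustrative, not from the thread:

    # slurm.conf: make both cores and memory consumable resources
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory

    # jobs then have to state their memory need, e.g.
    sbatch --mem=4G job.sh           # memory per node
    sbatch --mem-per-cpu=2G job.sh   # memory per allocated core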
On Wed, Dec 5, 2018 at 5:04 PM Bjørn-Helge Mevik wrote:
>
> I don't think Slurm has any facility for soft memory limits.
>
> But you could emulate it by simply configuring the nodes in slurm.conf
> with, e.g., a 15% higher RealMemory value than what is actually available
> on the node. Then a node wi
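A hedged example of that workaround; the node name, core count, and
memory sizes are made up:

    # Node really has 128000 MB; advertise ~15% more so the scheduler
    # packs jobs a little tighter than physical RAM would allow:
    NodeName=node[01-16] CPUs=28 RealMemory=147000 State=UNKNOWN

Depending on the Slurm version, you may also need FastSchedule=2 (or the
equivalent) so that slurmd reporting less memory than configured doesn't
drain the node; check your version's behaviour before relying on this.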
I took a look through the archives and did not see a clear answer to the
issue I was seeing, so I thought I would go ahead and ask.
I am having a cluster issue with SLURM and I hoped you might be able to help me
out. I built a small test cluster to determine if i
Wes,
You didn't list the Slurm command that you used to get your interactive
session. In particular did you ask Slurm for access to all 14 cores?
Also note that since Matlab uses threads to distribute work among cores,
you don't want to ask for multiple tasks (-n or --ntasks) as that will gi
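For example, a hedged sketch of such an interactive request; apart from
the 14-core figure from this thread, the details are illustrative:

    # one task, 14 CPUs for Matlab's thread pool, interactive shell
    srun --ntasks=1 --cpus-per-task=14 --pty bash -l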
On Thu, Dec 6, 2018 at 10:01 PM Eli V wrote:
> On Thu, Dec 6, 2018 at 2:08 AM Loris Bennett
> wrote:
> > > Anyone have some thoughts/ideas about this? Seems like it should be
> > > relatively straightforward to implement, though of course using it
> > > effectively will require some tuning.
> >