Thanks. It seems EnforcePartLimits=ANY is what I need:
"If set to 'ANY', a job must satisfy any of the requested partitions to
be submitted."
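(For anyone hitting the same error: the change is one slurm.conf line
plus a reconfigure; a minimal sketch, assuming the cluster was
previously running with the ALL behavior described in this thread.)

# slurm.conf: ANY rejects a multi-partition job only if it satisfies
# none of the listed partitions; ALL rejects it if any partition fails
EnforcePartLimits=ANY
# then apply it:
#   scontrol reconfigure   (or restart slurmctld)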
Probably it got changed by whoever reinstalled the cluster and I didn't
notice :(
And Slurm was doing what it was told to do. As usual :)
Thanks again
Diego
On 21/09/2023 16:25, Bernstein, Noam CIV USN NRL (6393) Washington DC
(USA) wrote:
What if you list multiple partitions, and increase the number of nodes
so that there aren't enough in one of the partitions, but don't realize
this problem?
That's exactly the case that led me to write that.
Reading the pasted slurm.conf info again, it includes "AllowAccounts,
AllowGroups", so it seems Slurm actually takes this into account. So I
think it should work...
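(A quick sketch of how to check those access controls on a live
cluster, using the b4 partition from this thread; the output field
names assume scontrol's usual key=value format.)

# Show which accounts and groups the restricted partition admits
scontrol show partition b4 | grep -oE 'Allow(Accounts|Groups)=[^ ]*'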
Best,
Feng
On Thu, Sep 21, 2023 at 2:33 PM Feng Zhang wrote:
As I said I am not sure, but it depends on the algorithm and the code
structure of Slurm (no chance to dig into it...). My guess, for the way
Slurm works, is: check limits on b1: OK; b2: OK; b3: OK; then b4: not
OK... (or in whatever order Slurm checks them).
If it works with EnforcePartLimits=ANY or NO, ...
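(That per-partition check is easy to probe from the shell with the
partition names and test.sh script from this thread; which of these is
accepted depends on the EnforcePartLimits setting.)

# Under ALL the restricted b4 sinks the whole request; under ANY or NO
# both submissions should be accepted at submit time
sbatch --partition=b1,b2,b3,b5 test.sh
sbatch --partition=b1,b2,b3,b4,b5 test.sh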
Hmm, interesting, but it looks like this is just a check at submission
time. The slurm.conf web page doesn't indicate that it affects the
actual scheduling.

On Sep 21, 2023, at 11:37 AM, Feng Zhang <prod.f...@gmail.com> wrote:

Set slurm.conf parameter: EnforcePartLimits=ANY or NO may help this, not sure.
Best,
Feng
On Thu, Sep 21, 2023 at 11:27 AM Jason Simms wrote:
I personally don't think that we should assume users will always know
which partitions are available to them. Ideally, of course, they would,
but I think it's fine to assume users should be able to submit a list
of partitions that they would be fine running their jobs on, and if one
is forbidden for them the job should still be able to run on the
others.
That's not at all how I interpreted this man page description. By "If
the job can use more than..." I thought it was completely obvious
(although perhaps wrong, if your interpretation is correct, but it
never crossed my mind) that it referred to whether the _submitting
user_ is OK with it using more than one partition.
On Sep 21, 2023, at 9:46 AM, David <dr...@umich.edu> wrote:
Slurm is working as it should. From your own examples you proved that;
by not submitting to b4 the job works. However, looking at man sbatch:

-p, --partition=<partition_names>
    Request a specific partition for the resource allocation. If not
    specified, the default behavior is to allow the slurm controller
    to select the default partition as designated by the system
    administrator. If the job can use more than one partition, specify
    their names in a comma separated list and the one offering
    earliest initiation will be used.
Uh? It's not a problem if other users see there are jobs in the
partition (IIUC that's what 'hidden' is for), even if they can't use it.
The problem is that if it's included in --partition it prevents jobs
from being queued!
Nothing in the documentation about --partition made me think that a
forbidden partition in the list would make the whole submission fail.
I would think that Slurm would only filter it out, potentially, if the
partition in question (b4) was marked as "hidden" and only accessible
by the correct account.
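(For reference, "hidden" is a per-partition slurm.conf flag; a
hypothetical sketch, with the node range and account name invented for
illustration.)

# Hide b4 and restrict it to one account; hidden partitions are
# omitted from sinfo/squeue output for users without access
PartitionName=b4 Nodes=node[11-14] AllowAccounts=special Hidden=YES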
On Thu, Sep 21, 2023 at 3:11 AM Diego Zuccato
wrote:
Hello all.
We have one partition (b4) that's reserved for an account while the
others are "free for all".
The problem is that
sbatch --partition=b1,b2,b3,b4,b5 test.sh
fails with
sbatch: error: Batch job submission failed: Invalid account or
account/partition combination specified
while
sbatch --partition=b1,b2,b3,b5 test.sh
works.
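(As far as the thread shows, the setup reproducing this boils down to
slurm.conf lines like the following; the node ranges and the account
name are hypothetical.)

# Open partitions plus one account-restricted partition
PartitionName=b1 Nodes=node[01-10]
PartitionName=b4 Nodes=node[11-14] AllowAccounts=special
# With EnforcePartLimits=ALL, a job listing b4 from any other account
# is rejected at submission; with ANY it is accepted as long as at
# least one listed partition passes the checks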