Hi Guys,
Thanks for your answers.
I would prefer not to patch the Slurm source code, as Jacek does, in order to
keep things simple. But I think that is the way to go: when I try the solutions
Florian and Angelos suggested, Slurm still thinks that the nodes are
"powered down", even though they are not.
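In case it is useful, a hedged way to check and clear that state by hand (the
node name is a placeholder, and this assumes the nodes are under Slurm's
power-saving control):

  # What slurmctld currently believes about the node:
  scontrol show node node01 | grep -i state
  # A node Slurm considers powered down carries a power flag on its state
  # (e.g. "IDLE+POWER", or a POWERED_DOWN flag on newer releases).

  # Manually mark the node as powered up so it is schedulable again:
  scontrol update NodeName=node01 State=POWER_UP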
On Monday, 31 August 2020 7:41:13 AM PDT Manuel BERTRAND wrote:
> Everything works great so far, but now I would like to bind a specific
> core to each GPU on each node. By "bind" I mean to make a particular
> core not assignable to a CPU-only job, so that the GPU is available
> whatever the CP
We're still nailing down a few details with the streaming platform (and
will add them to the website when resolved), but do expect to have the
video available for one or two weeks afterwards.
- Tim
On 8/31/20 7:07 AM, Ole Holm Nielsen wrote:
On 8/28/20 10:45 PM, Tim Wickberg wrote:
The Slurm User Group Meeting (SLUG'20) this fall will be moving online.
Thank you for your reply.
I think I found the issue. We have only a few "skylake" nodes and this job is
requesting them. Thus, this user is limited to the (relatively few) Skylake
generation CPU nodes.
d'oh!
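For reference, a hedged pair of checks (the job id is a placeholder) to confirm
which feature a job requested and which nodes carry it:

  # Features the pending job asked for, and its current reason/nodelist:
  squeue -j 1234567 -o "%.18i %.10u %.20f %.30R"

  # Nodes advertising the "skylake" feature:
  sinfo -o "%N %f" | grep -i skylake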
--
Dr. Manuel Holtgrewe, Dipl.-Inform.
Bioinformatician
Core Unit Bioinformatics – CUBI
One pending job in this partition should have a reason of “Resources”. That job
has the highest priority, and if your job below would delay the
highest-priority job’s start, it’ll get pushed back like you see here.
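For example, a hedged one-liner (the partition name is a placeholder) listing
the pending jobs in that partition, highest priority first, with their Reason
and expected StartTime:

  squeue -p somepartition --state=PD --sort=-p,i \
         -o "%.18i %.9P %.8Q %.10u %.12r %.20S"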
On Aug 31, 2020, at 12:13 PM, Holtgrewe, Manuel wrote:
Dear all,
I'm seeing some user's job getting a StartTime 3 days in the future although
there are plenty of resources available in the partition (and the user is
well below MaxTRESPU of the partition).
Dear all,
I'm seeing some user's job getting a StartTime 3 days in the future although
there are plenty of resources available in the partition (and the user is
well below MaxTRESPU of the partition).
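A hedged first check (the job id is a placeholder) for where that StartTime
comes from:

  # Priority components as computed by the priority plugin:
  sprio -l -j 1234567

  # Reason, priority and projected start time as the scheduler reports them:
  scontrol show job 1234567 | grep -E "Priority|Reason|StartTime"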
Attached are our slurm.conf and a dump of "sacctmgr list qos -P". I'd be
grateful for an
Hi,
I'm also very interested in how this could be done properly. At the moment
what we are doing is setting up partitions with MaxCPUsPerNode set to
(CPUs minus GPUs); a minimal sketch of that is included below. Maybe this can
help you in the meantime, but it is a suboptimal solution (in fact we have
nodes with different numbers of CPUs, so we had
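A minimal slurm.conf sketch of that MaxCPUsPerNode workaround; the node names,
core counts and GPU counts below are made up:

  # 32-core nodes with 4 GPUs each: the CPU-only partition may use at most
  # 28 cores per node, leaving one core per GPU for the gpu partition.
  NodeName=gpunode[01-04] CPUs=32 Gres=gpu:4 State=UNKNOWN
  PartitionName=cpu Nodes=gpunode[01-04] MaxCPUsPerNode=28 Default=YES
  PartitionName=gpu Nodes=gpunode[01-04] Default=NO

Since MaxCPUsPerNode is a single per-partition value, nodes with different core
counts need their own partitions (or the reservation is only exact for one node
size), which is presumably the drawback hinted at above.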
Hi list,
I am totally new to Slurm and have just deployed a heterogeneous GPU/CPU
cluster by following the latest OpenHPC recipe on CentOS 8.2 (thanks,
OpenHPC team, for making those!)
Everything works great so far, but now I would like to bind a specific
core to each GPU on each node. By "bind" I mean to make a particular
core not assignable to a CPU-only job, so that the GPU is available
whatever the CP
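On the "bind a specific core to each GPU" part of the question: gres.conf can
declare which cores sit closest to each GPU, which gives GPU jobs core
affinity, though not an exclusive reservation (for that, see the
MaxCPUsPerNode approach above). A hedged sketch with made-up node names,
device paths and core IDs:

  # gres.conf
  NodeName=gpunode01 Name=gpu File=/dev/nvidia0 Cores=0
  NodeName=gpunode01 Name=gpu File=/dev/nvidia1 Cores=1
  # Also requires GresTypes=gpu and Gres=gpu:2 on the node's line in slurm.conf.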
On 8/28/20 10:45 PM, Tim Wickberg wrote:
The Slurm User Group Meeting (SLUG'20) this fall will be moving online. In
lieu of an in-person meeting, SchedMD will broadcast a select set of
presentations on Tuesday, September 15th, 2020, from 9am to noon (MDT).
The agenda is now posted online at:
h
Just wondering, will we get our t-shirts by email? :D
--
Cheers,
Bjørn-Helge Mevik, dr. scient,
Department for Research Computing, University of Oslo