Can I use the slurm_submit_batch_job API to submit a batch job by only
providing a complete job script in the job_desc_msg_t data structure?
For example (pseudo code):
job_desc_msg_t my_job;
submit_response_msg_t *resp = NULL;
char sbatch_script[4096] =
    "#!/bin/bash\n#SBATCH <options>\nmpirun hostname\n";
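For reference, here is a minimal, untested sketch of how that could look
with the C API, using slurm_submit_batch_job() and
slurm_init_job_desc_msg() from <slurm/slurm.h>. The job name, node count,
working directory and environment values below are made up, and some Slurm
versions reject batch jobs with an empty environment, hence the dummy
entry:

    #include <stdio.h>
    #include <unistd.h>
    #include <slurm/slurm.h>
    #include <slurm/slurm_errno.h>

    int main(void)
    {
        job_desc_msg_t job_desc;
        submit_response_msg_t *resp = NULL;

        slurm_init_job_desc_msg(&job_desc);   /* fill in sane defaults */
        job_desc.name      = "api_test";      /* assumed job name */
        job_desc.user_id   = getuid();
        job_desc.group_id  = getgid();
        job_desc.min_nodes = 1;
        job_desc.work_dir  = "/tmp";          /* placeholder working dir */

        /* The whole batch script, #SBATCH directives included, goes
         * into the script field as a single string. */
        job_desc.script = "#!/bin/bash\n"
                          "#SBATCH --ntasks=4\n"
                          "mpirun hostname\n";

        /* Some versions reject batch jobs with no environment at all. */
        char *env[] = { "PATH=/bin:/usr/bin", NULL };
        job_desc.environment = env;
        job_desc.env_size = 1;

        if (slurm_submit_batch_job(&job_desc, &resp) != SLURM_SUCCESS) {
            slurm_perror("slurm_submit_batch_job");
            return 1;
        }
        printf("submitted job %u\n", resp->job_id);
        slurm_free_submit_response_response_msg(resp);
        return 0;
    }

Compile with something like: gcc submit.c -lslurm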
I'm writing a plugin (based on the select/cons_res plugin). I need to add
this library for the linker when my plugin is built:
/usr/lib/x86_64-linux-gnu/libcurl.a
Apparently I need to add this library to the Makefile.in. Where do I add
it? Do I need to add it to the Makefile.am too?
Thanks
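For what it's worth, Makefile.in is generated from Makefile.am by automake,
so the edit belongs in Makefile.am (then regenerate, e.g. with autoreconf).
A hypothetical fragment, borrowing the cons_res libtool target name, which
should be replaced with your own plugin's target:

    # Append to the plugin's existing LIBADD if it already has one.
    select_cons_res_la_LIBADD = /usr/lib/x86_64-linux-gnu/libcurl.a
    # or, usually simpler for a shared plugin:
    # select_cons_res_la_LIBADD = -lcurl

Note that linking a static libcurl.a into a shared plugin only works if the
archive was built as position-independent code; -lcurl avoids that issue.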
I'm trying to set up a small partition where oversubscription is allowed. I
want to be able to have several jobs assigned to the same core
simultaneously. The idea is to facilitate some low-consumption interactive
workloads in instructional settings (e.g. students running Matlab during a
class). I've
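For reference, a sketch of the slurm.conf lines that allow this, assuming
the cons_res select plugin; the partition and node names below are
placeholders:

    # OverSubscribe=FORCE:4 lets the scheduler place up to four jobs
    # on each core of this partition, regardless of what jobs request.
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core
    PartitionName=teaching Nodes=node[01-02] OverSubscribe=FORCE:4 MaxTime=04:00:00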
On Thu, 30 Jan 2020 19:03:38 +0100,
"Dr. Thomas Orgis" wrote:
> batch 1548429637 1548429637 - - 0 1 4294536312 48 node[09-15,22] (null)
>
> So, matching by job ID, user name (via numerical uid lookup),
> timestamps and the nodes should be possible; it's all there.
>
> Can someone conf
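The numerical uid lookup mentioned above can be done against the passwd
database; a minimal C sketch:

    #include <pwd.h>
    #include <sys/types.h>

    /* Map the numeric uid stored in the accounting record back to a
     * user name; returns a placeholder if the uid is unknown. */
    const char *uid_to_name(uid_t uid)
    {
        struct passwd *pw = getpwuid(uid);
        return pw ? pw->pw_name : "(unknown)";
    }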
On Thu, 30 Jan 2020 19:07:59 +0300,
mercan wrote:
> Note: The filetxt plugin records only a limited subset of accounting
> information and will prevent some sacct options from proper operation.
Thank you for looking this up. But since the filetxt log does contain the
start/end timestamps and th
hi;
From the slurm.conf documentation web page:
Note: The filetxt plugin records only a limited subset of accounting
information and will prevent some sacct options from proper operation.
regards;
Ahmet M.
On 29.01.2020 21:47, Dr. Thomas Orgis wrote:
Hi,
I happen to run a small cl
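For context, the filetxt backend in question is selected in slurm.conf
roughly like this (the log path is an example); accounting_storage/slurmdbd
is the full-featured alternative the documentation points towards:

    AccountingStorageType=accounting_storage/filetxt
    AccountingStorageLoc=/var/log/slurm/accounting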
Hi
I want to run an EpilogSlurmctld after all parts of an array job have
completed, in order to clean up an on-demand filesystem created in the
PrologSlurmctld. First I thought I could simply run the epilog after the
completion of the final job step, until I realised that they might not take
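Slurm has no built-in "last array task" hook, so one workaround is to have
EpilogSlurmctld itself check whether any sibling tasks remain before
cleaning up. A rough, untested sketch; the scratch path is a placeholder,
and the sibling check is inherently racy:

    #!/bin/bash
    # Only act for array tasks; SLURM_ARRAY_JOB_ID is set in the
    # EpilogSlurmctld environment for jobs that belong to an array.
    [ -n "$SLURM_ARRAY_JOB_ID" ] || exit 0

    # Count sibling tasks still queued or running. The task this
    # epilog belongs to may still be listed while completing, so
    # treat the result as a heuristic rather than a guarantee.
    remaining=$(squeue -h -t PENDING,RUNNING,SUSPENDED -j "$SLURM_ARRAY_JOB_ID" | wc -l)

    if [ "$remaining" -eq 0 ]; then
        rm -rf "/scratch/ondemand/${SLURM_ARRAY_JOB_ID}"   # placeholder path
    fi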