Hi Lech,
IMHO, the Slurm user community would benefit most from your interesting
work on MySQL/MariaDB performance if your patch could be made against the
current 18.08 and the upcoming 19.05 releases. This would ensure that your
work is carried forward.
Would you be able to make patches against these releases?
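If it helps, rebasing the patch onto SchedMD's release branches and regenerating it could look roughly like this (the patch path below is a placeholder, and the branch names follow SchedMD's usual slurm-<version> convention):

# Sketch only: apply the existing MySQL/MariaDB patch to each maintained
# release branch and regenerate it against that branch.
git clone https://github.com/SchedMD/slurm.git
cd slurm
for branch in slurm-18.08 slurm-19.05; do
    git checkout -b mysql-perf-"$branch" origin/"$branch"
    git am /path/to/mysql-performance.patch          # placeholder path
    git format-patch origin/"$branch" -o ../patches-"$branch"
done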
That’s probably it.
Sub-queries are known for potential performance issues, so one wonders why the
devs didn't extract them accordingly and make the code more robust, or at least
compatible with RHEL/CentOS 6, rather than just including that remark in the
release notes.
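For illustration only (the table and column names below are made up, not the actual slurmdbd schema), the kind of extraction meant here is simply turning the IN sub-query into an explicit JOIN, which the MySQL 5.1 shipped with RHEL/CentOS 6 optimizes far better:

# Illustration with hypothetical table/column names; slurm_acct_db is the
# default slurmdbd database name.
mysql slurm_acct_db <<'SQL'
-- sub-query form, poorly optimized by MySQL 5.1:
--   SELECT id_job FROM job_records
--   WHERE id_assoc IN (SELECT id_assoc FROM assoc_records WHERE deleted = 0);
-- equivalent JOIN form (same rows, assuming id_assoc is unique in assoc_records):
SELECT j.id_job
FROM job_records AS j
JOIN assoc_records AS a
  ON a.id_assoc = j.id_assoc
WHERE a.deleted = 0;
SQL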
> On 02.04.2019 at 07:20, ... wrote:
Hi Marcus,
The following jobs are running or pending after I killed job 100816, which was
running on computelab-134's T4:
100815  RUNNING  computelab-134  gpu:gv100:1  None       1
100817  PENDING                  gpu:gv100:1  Resources  1
100818  PENDING                  gpu:tu104:1  Resources  1
$ scontrol -d show node computelab-134
NodeName=compute
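For reference, these are roughly the commands I use to compare the node's GRES with the queued requests (the squeue format string is just one possible choice):

# Configured vs. allocated GRES/TRES on the node:
scontrol -d show node computelab-134 | grep -Ei 'gres|tres'
# Per-job view of the jobs above: id, state, requested GRES, scheduler reason:
squeue -j 100815,100817,100818 -o "%.8i %.10T %.14b %.12r"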
Hi,
We have Molpro 2015.1.2 running with Slurm and MPI, but were
wondering whether anyone has a more elegant solution than the following:
module load Molpro/mpp-2015.1.2.linux_x86_64_openmp iimpi/2018b
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
NLIST=$(scontrol show hostname $SLURM_
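In outline, the rest amounts to expanding the Slurm host list and handing the counts to molpro, something like the sketch below (the variable name past the truncation and the molpro options are assumptions and vary by install, so treat it as an outline rather than a verbatim script):

# Expand Slurm's compact host expression (e.g. node[01-04]) into one host per
# line, then join into a comma-separated list.
NLIST=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | paste -sd, -)
# Hand the process count to molpro; check "molpro --help" for the option your
# install accepts for a host list (passing $NLIST where appropriate).
molpro -n "$SLURM_NTASKS" input.com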
Hi;
The multi-partition bug was fixed, and the data collection and output parts of
the code are now separated. This release is tagged as v0.1.
If you have tried the spart command, I would be grateful if you could send me
the output of the "./spart -m" command on your cluster, so I can see how
it works in different clusters.
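If you have not built it yet, it is a small C source compiled against libslurm, roughly as sketched below (see the README for the exact clone URL and compile flags):

# Build in the spart source directory, then run the summary report.
gcc -o spart spart.c -lslurm
./spart -m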