Hello,
Another workaround could be to use the InitScript=/path/to/script.sh option of
the plugin.
For example, if the user's home directory is under autofs:
script.sh:
#!/bin/bash
# look up the job owner and cd into their home so autofs mounts it
user=$(squeue -h -O username -j "$SLURM_JOB_ID" | awk '{print $1}')
cd "/home/${user}"
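A minimal sketch of the corresponding job_container.conf, assuming a local
scratch area under /local/scratch and the script installed as
/etc/slurm/script.sh (both paths are placeholders):
job_container.conf:
AutoBasePath=true
BasePath=/local/scratch
# script run when the plugin sets up a job's container
InitScript=/etc/slurm/script.sh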
Best regards
Gizo
> Hi there,
> we excitedly found the job_container
Hi Magnus,
We had the same challenge some time ago. A long description of solutions
is on my Wiki page at
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#temporary-job-directories
The issue may have been solved in
https://bugs.schedmd.com/show_bug.cgi?id=12567, which will be included in an
upcoming Slurm release.
We had the same issue when we switched to the job_container plugin. We ended
up running cvmfs_config probe as part of the health check tool so that the
CVMFS repos stay mounted. However, after we switched on power saving we ran
into some race conditions (a job landed on a node before CVMFS was
mounted).
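A minimal sketch of that kind of check, assuming a plain shell health-check
script and that draining the node is an acceptable reaction (the drain reason
is a placeholder):
healthcheck.sh:
#!/bin/bash
# probing the repos also triggers their autofs mounts
if ! cvmfs_config probe > /dev/null 2>&1; then
    # keep jobs off the node until CVMFS is available again
    scontrol update nodename="$(hostname -s)" state=drain reason="CVMFS not mounted"
fi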
In my opinion, the problem is with autofs, not with tmpfs. Autofs
simply doesn't work well when you are using detached filesystem namespaces
and bind mounting. We ran into this problem years ago (with an in-house
SPANK plugin doing more or less what tmpfs does), and ended up simply
not using autofs.
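For what it's worth, a minimal sketch of what "not using autofs" can look like,
assuming the home directories come from an NFS server (server name and export
path are placeholders), is a static mount in /etc/fstab instead of an autofs
map:
/etc/fstab:
nfsserver.example.com:/export/home  /home  nfs  defaults  0 0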
Hi there,
we excitedly found the job_container/tmpfs plugin which neatly allows
us to provide local scratch space and a way of ensuring that /dev/shm
gets cleaned up after a job finishes. Unfortunately, we found that it
does not play nicely with autofs, which we use to provide networked
project and s