Hello, another workaround could be to use the InitScript=/path/to/script.sh option of the plugin.
For example, if the user's home directory is under autofs:

script.sh:

    uid=$(squeue -h -O username -j $SLURM_JOB_ID)
    cd /home/$uid

Best regards
Gizo

> Hi there,
> we excitedly found the job_container/tmpfs plugin, which neatly allows
> us to provide local scratch space and a way of ensuring that /dev/shm
> gets cleaned up after a job finishes. Unfortunately, we found that it
> does not play nicely with autofs, which we use to provide networked
> project and scratch directories. We found that this is a known issue
> [1]. I was wondering if that has been solved? I think it would be
> really useful to have a warning about this issue in the documentation
> for the job_container/tmpfs plugin.
> Regards
> magnus
>
> [1]
> https://cernvm-forum.cern.ch/t/intermittent-client-failures-too-many-levels-of-symbolic-links/156/4
> --
> Magnus Hagdorn
> Charité – Universitätsmedizin Berlin
> Geschäftsbereich IT | Scientific Computing
>
> Campus Charité Virchow Klinikum
> Forum 4 | Ebene 02 | Raum 2.020
> Augustenburger Platz 1
> 13353 Berlin
>
> magnus.hagd...@charite.de
> https://www.charite.de
> HPC Helpdesk: sc-hpc-helpd...@charite.de

--
_______________________
Dr. Gizo Nanava
Group Leader, Scientific Computing
Leibniz Universität IT Services
Leibniz Universität Hannover
Schlosswender Str. 5
D-30159 Hannover
Tel +49 511 762 7919085
http://www.luis.uni-hannover.de
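
P.S. For anyone copying the two-liner above verbatim: here is a slightly fuller sketch of such an InitScript. The quoting, whitespace trimming, and error handling are my additions, not part of the original suggestion, and the /home/<username> layout is an assumption about your site, not something the plugin mandates.

```shell
#!/bin/bash
# Sketch of an InitScript for the job_container/tmpfs plugin
# (referenced from job_container.conf as InitScript=/path/to/script.sh).
# The script runs before the job starts, so changing into the job
# owner's home directory is enough to trigger the autofs automount.
# Assumption: home directories live under /home/<username>.

# squeue -O pads fields to a fixed width, so strip all whitespace.
uid=$(squeue -h -O username -j "$SLURM_JOB_ID" 2>/dev/null | tr -d '[:space:]')

if [ -n "$uid" ]; then
    # cd-ing into the directory forces autofs to mount it.
    cd "/home/$uid" || exit 1
fi
```

The trimming matters because squeue's -O output is padded to a fixed field width, so "/home/$uid" would otherwise contain trailing spaces and the cd would fail.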