Hello Alain,
maybe I'm missing the point, but to my understanding the
job_container/tmpfs plugin uses the directory under BasePath to store
its own data, which it uses to create the bind mounts for the users.
The folder itself is not meant to be used by anyone else.
The folders in the hidden directory with user privileges under your
/scratch are the bind mounts. Those folders are specified in the Dirs
parameter <https://slurm.schedmd.com/job_container.conf.html#OPT_Dirs>
of job_container.conf. You may have more luck trying to use this
parameter for your needs, perhaps? There is also a parameter to specify
an "InitScript" which may be used to create folders dynamically.
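For illustration only (please verify the option names against the
job_container.conf documentation for your release; the paths below are
made up), such a configuration might look like:

```
# /etc/slurm/job_container.conf -- hypothetical sketch
AutoBasePath=true
BasePath=/scratch/slurm_containers   # plugin-private state, not for direct user access
Dirs=/tmp,/dev/shm                   # paths bind-mounted privately for each job
InitScript=/etc/slurm/container_init.sh   # run when the job's namespace is set up
```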
One last thing: these configuration parameters were added in one of the
latest Slurm releases, so they may not work with your version.
Best regards,
Lorenzo Bosio
On 21/11/23 14:07, Arsene Marian Alain wrote:
Thanks Sean. I've tried using Slurm prolog/epilog scripts but without
any success. That's why I decided to look for other solutions, and the
job_container/tmpfs plugin seemed like a good alternative.
*From:* slurm-users <slurm-users-boun...@lists.schedmd.com> *On behalf
of* Sean Mc Grath
*Sent:* Tuesday, 21 November 2023 12:57
*To:* Slurm User Community List <slurm-users@lists.schedmd.com>
*Subject:* Re: [slurm-users] slurm job_container/tmpfs
*ATTENTION*: This email was sent from outside the UAH. Do not click
links or open attachments unless you recognize the sender and know the
content is safe.
Would a prolog script, https://slurm.schedmd.com/prolog_epilog.html,
do what you need? Sorry if you have already considered that and I
missed it.
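In case it helps, here is a rough sketch of what such a prolog/epilog
pair could look like. This is only an assumption on my part, not a
tested setup: slurmd sets SLURM_JOB_ID and SLURM_JOB_USER in the
prolog/epilog environment, but the /scratch base path and the helper
function names below are invented for the example.

```shell
#!/bin/bash
# Hypothetical per-job scratch handling via Slurm Prolog/Epilog.
# Both scripts run as root on the compute node; slurmd exports
# SLURM_JOB_ID and SLURM_JOB_USER in their environment.
SCRATCH_BASE="${SCRATCH_BASE:-/scratch}"

# Path of the per-job directory, named "JOB_ID.USER".
job_dir() {
    printf '%s/%s.%s\n' "$SCRATCH_BASE" "$SLURM_JOB_ID" "$SLURM_JOB_USER"
}

# Prolog: create the directory, owned by the submitting user.
prolog() {
    dir="$(job_dir)"
    mkdir -p "$dir"
    chown "$SLURM_JOB_USER" "$dir"
    chmod 700 "$dir"
}

# Epilog: remove the directory when the job finishes or is cancelled.
epilog() {
    rm -rf -- "$(job_dir)"
}
```

The two functions would live in separate scripts pointed to by the
Prolog and Epilog parameters in slurm.conf; they are shown together
here only to keep the sketch compact.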
---
Sean McGrath
Senior Systems Administrator, IT Services
------------------------------------------------------------------------
*From:*slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf
of Arsene Marian Alain <alain.ars...@uah.es>
*Sent:* Tuesday 21 November 2023 09:58
*To:* Slurm User Community List <slurm-users@lists.schedmd.com>
*Subject:* Re: [slurm-users] slurm job_container/tmpfs
Hello Brian,
Thanks for your answer. With the job_container/tmpfs plugin I don't
really create the directory manually.
I just set BasePath=/scratch (a local directory on each node that is
already mounted with 1777 permissions) in job_container.conf. For each
job, the plugin automatically generates a directory named after the
"JOB_ID", for example: /scratch/1805
The only problem is that directory 1805 is generated with root owner
and permissions 700. So the user who submitted the job cannot
write/read inside directory 1805.
Is there a way for the owner of directory 1805 to be the user who
submitted the job and not root?
*From:* slurm-users <slurm-users-boun...@lists.schedmd.com> *On behalf
of* Brian Andrus
*Sent:* Monday, 20 November 2023 23:29
*To:* slurm-users@lists.schedmd.com
*Subject:* Re: [slurm-users] slurm job_container/tmpfs
How do you 'manually create a directory'? That would be where the root
ownership comes from. After creating it, you can chown/chmod it as well.
Brian Andrus
On 11/18/2023 7:35 AM, Arsene Marian Alain wrote:
Dear slurm community,
I run slurm 21.08.1 under Rocky Linux 8.5 on my small HPC cluster
and am trying to configure job_container/tmpfs to manage the
temporary directories.
I have a shared nfs drive "/home" and a local "/scratch" (with
permissions 1777) on each node.
For each submitted job I manually create a directory named
"JOB_ID.$USER" in the local "/scratch", which is where all the temp
files for the job will be generated. Now, I would like to do this
automatically (especially removing the directory when the job
finishes or is canceled):
I added the following parameters in my /etc/slurm.conf:
JobContainerType=job_container/tmpfs
PrologFlags=contain
So, I have created the "job_container.conf" in the directory
"/etc/slurm"
with the following configuration:
AutoBasePath=false
BasePath=/scratch
Then, I replicated the changes to all nodes and restarted the
slurm daemons.
Finally, when I launch a job, a directory named after the "JOB_ID" is
created in the local "/scratch" of the compute node. The only
problem is that the owner of the directory is "root", and the user
who submitted the job doesn't have read and write permissions to
that directory (nor do other users).
I would like the following:
1) The automatically created directory should be named "JOB_ID.$USER".
2) The owner of the directory should be the user who submitted the
job, not "root".
Please, could someone help me?
Thanks a lot.
Best regards,
Alain
--
*/Dott. Mag. Lorenzo Bosio/*
Research Technician
Department of Computer Science
Università degli Studi di Torino
Corso Svizzera, 185 - 10149 Torino
tel. +39 011 670 6836