Thanks again Patryk for your insights. We have implemented many of the same things, but the socket errors are still occurring regularly.
If we find a solution that works I will be sure to add it to this thread.
Many thanks
Jason
It appears the errors became more consistent after upgrading our instance and replica to RHEL9.
May I ask what optimizations you put in place for SSSD?
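In case it helps frame the question, the kind of tuning I have in mind is along these lines in sssd.conf (a rough sketch only: the domain name is just a placeholder and the values are illustrative, not necessarily what we run or what I am recommending):

[nss]
memcache_timeout = 600

[domain/example.org]
enumerate = false
cache_credentials = true
entry_cache_timeout = 5400
ignore_group_members = true

If what you tuned was something different (nscd, enumeration, group lookups, etc.) I would be keen to hear.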
Many thanks
Jason
… and disk activity during this 1 second, always at approximately 1 hour after restarting the controller.
Many thanks in advance
Jason
Jason Ellul
Head - Research Computing Facility
Office of Cancer Research
Peter MacCallum Cancer Centre
Hi Michael,
Thanks so much for the info; we will try 23.02.
Cheers,
Jason
From: slurm-users on behalf of Michael Jennings
Date: Thursday, 2 March 2023 at 9:17 am
To: slurm-users
… compatibility and reduce complexity.
We will try 23.02 and, if that does not resolve our issue, consider moving back to slurm-spank-private-tmpdir or auto_tmpdir.
Thanks again,
Jason
… expected, or should the folder /slurm/ also be removed?
Do I need to create an epilog script to remove the directory that is left behind?
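If we do end up needing one, this is the rough sort of epilog I had in mind (untested sketch only; it assumes the job_container/tmpfs BasePath is /slurm as in the path above, that the leftover directory is named after the job ID, and that everything is already unmounted by the time the epilog runs):

#!/usr/bin/env python3
# Epilog sketch: remove the per-job directory left under the
# job_container/tmpfs BasePath (assumed here to be /slurm).
# Paths and naming are assumptions, not a recommendation.
import os
import shutil
import sys

BASE_PATH = "/slurm"  # assumed BasePath from job_container.conf

job_id = os.environ.get("SLURM_JOB_ID", "")
if not job_id.isdigit():
    sys.exit(0)  # never remove anything without a valid job ID

job_dir = os.path.join(BASE_PATH, job_id)
if os.path.isdir(job_dir):
    try:
        shutil.rmtree(job_dir)
    except OSError as err:
        # If the namespace is still mounted this can fail; log and exit 0
        # rather than failing the epilog, which would drain the node.
        print(f"epilog: could not remove {job_dir}: {err}", file=sys.stderr)

I have deliberately made it exit 0 even when removal fails, so a stray directory cannot drain the node.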
Many thanks for the assistance,
Jason