Thanks. It turns out that not all programs use the $http_proxy variable set in bashrc. For
pip, which was the program I had trouble with, I had to use "pip --proxy http://somewhere install
pkg".
So there is no problem with srun. Hope it helps others too.
Regards,
Mahmood
Hi,
I'd like to prevent my Slurm users from tying up resources with dummy
shell-process jobs left behind, whether unknowingly or intentionally.
To that end, I simply want to impose a stricter maximum time limit for srun
only.
One possible way might be to wrap the srun binary.
But could someone tell me if there is a proper way to do this?
Is there a way for Slurm to detect when a user's disk quota has been exceeded? We
use XFS, and when users are over quota they get a "Disk quota
exceeded" message, e.g. when trying to scp or create a new file. However,
if they are not aware of this and submit a job with an sbatch file, they don't
receive an error message.
Hi,
I'm experiencing a connectivity problem and I'm out of ideas as to why this
is happening. I'm running a slurmctld on a multihomed host.
(10.9.8.0/8) - master - (10.11.12.0/8)
There is no routing between these two subnets.
So far, all slurmds have resided in the first subnet and worked fine. I added
some nodes in the second subnet.
Untested, but you should be able to use a job_submit.lua file to detect whether the
job was started with srun or sbatch (see the sketch after this list):
* Check with (job_desc.script == nil or job_desc.script == '')
* Adjust job_desc.time_limit accordingly
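For reference, here is a rough, untested sketch of what such a job_submit.lua
could look like; the 60-minute cap and the name INTERACTIVE_MAX_MINUTES are
just placeholders you would adapt locally:

  -- job_submit.lua: cap the time limit of interactive (srun/salloc) jobs.
  -- sbatch jobs carry a batch script, interactive jobs do not.
  local INTERACTIVE_MAX_MINUTES = 60   -- placeholder cap, in minutes

  function slurm_job_submit(job_desc, part_list, submit_uid)
      if job_desc.script == nil or job_desc.script == '' then
          -- No batch script, so this is an interactive job.
          if job_desc.time_limit == slurm.NO_VAL or
             job_desc.time_limit > INTERACTIVE_MAX_MINUTES then
              job_desc.time_limit = INTERACTIVE_MAX_MINUTES
              slurm.log_user("Time limit of interactive jobs is capped at %d minutes",
                             INTERACTIVE_MAX_MINUTES)
          end
      end
      return slurm.SUCCESS
  end

  function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
      return slurm.SUCCESS
  end

You would enable it with JobSubmitPlugins=lua in slurm.conf and put the file
in the same directory as slurm.conf; note that job_desc.time_limit is in minutes.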
Here, I just gave people a shell function "hpcshell", which automatically wraps
srun with suitable defaults for interactive shells.