Hi Loris,
Am 29.03.2019 um 14:01 schrieb Loris Bennett:
Hi Marcus,
Marcus Wagner writes:
Hi Loris,
On 3/25/19 1:42 PM, Loris Bennett wrote:
3. salloc works fine too without --x11; a subsequent srun with an X11 app works
great
Doing 'salloc' followed by 'ssh -X' works for us too, which is surprising to me.
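The two workflows being compared can be sketched as shell commands; the one-task allocation and the xterm test application are illustrative, not from the thread:

```shell
#!/bin/sh
# Sketch of the two X11 workflows discussed above; adjust the
# allocation options to your site.
if command -v salloc >/dev/null 2>&1; then
    # Workflow 1: plain salloc (no --x11), then srun the X11 app.
    salloc --ntasks=1 srun xterm
    # Workflow 2: salloc, then ssh -X to the allocated node.
    # SLURM_NODELIST is set by salloc; for a single-node allocation
    # it is just that node's hostname.
    salloc --ntasks=1 sh -c 'ssh -X "$SLURM_NODELIST" xterm'
else
    echo "Slurm client tools not found on this host"
fi
```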
Hi,
Is there any way to view the current memory allocation of a running job? With
'sstat' I can only get MAX values, such as MaxVMSize and MaxRSS.
Any idea?
Regards,
Mahmood
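For context, the sstat query in question looks like this; the job ID is a placeholder, not one from the thread:

```shell
#!/bin/sh
# Placeholder job ID; take a real one from `squeue -u $USER`.
JOBID=12345
if command -v sstat >/dev/null 2>&1; then
    # These fields are peak values, which is exactly the limitation
    # described: the standard Max* columns report high-water marks,
    # not the job's memory usage at this moment.
    sstat -j "$JOBID" --format=JobID,MaxRSS,MaxVMSize
else
    echo "sstat not available (Slurm accounting tools not installed)"
fi
```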
I found out that a standard script specifying the number of tasks and the
memory per CPU does the same thing I was expecting from packjob
(a heterogeneous job).
#SBATCH --job-name=myQE
#SBATCH --output=big-mem
#SBATCH --ntasks=14
#SBATCH --mem-per-cpu=17G
#SBATCH --nodes=6
#SBATCH --partit
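As a sanity check on the request above, the total memory follows from --ntasks times --mem-per-cpu (assuming --cpus-per-task is left at its default of 1, so one CPU per task):

```shell
#!/bin/sh
# 14 tasks x 17 GB per CPU = 238 GB in total across the allocation,
# spread over at most the 6 requested nodes.
NTASKS=14
MEM_PER_CPU_GB=17
TOTAL_GB=$((NTASKS * MEM_PER_CPU_GB))
echo "total memory requested: ${TOTAL_GB} GB"
```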
Just to follow up, I filed a medium-severity bug report with SchedMD on this:
https://bugs.schedmd.com/show_bug.cgi?id=6763
Best,
Peter
On 3/25/19 10:30 AM, Peter Steinbach wrote:
Dear all,
Using these config files,
https://github.com/psteinb/docker-centos7-slurm/blob/7bdb89161febacfd2dbbcb3c5684336fb
Hi Noam,
if you use the RealMemory parameter for the hosts, Slurm will close a
host that has less than the configured memory. Thus:
1. you would have seen much earlier that something was wrong with the node
2. no job would have been submitted to that node, since it would have
been closed
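A minimal slurm.conf node definition illustrating the RealMemory parameter; the hostnames, CPU count, and memory size are hypothetical:

```
# slurm.conf fragment: if a node reports less memory than
# RealMemory (in MiB), Slurm takes it out of service, so a
# misconfigured or faulty node is caught before jobs land on it.
NodeName=node[01-04] CPUs=32 RealMemory=191000 State=UNKNOWN
```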
This last option currently seems to me to be the best option for users, be