Excuse me, but it doesn't work. I set --mem to 2GB and I put the free
command in the script. I don't know why it failed.
[mahmood@rocks7 ~]$ sbatch sl.sh
Submitted batch job 19
[mahmood@rocks7 ~]$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
[mahmood@rock
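A quick way to see why job 19 vanished from the queue (a minimal sketch, assuming the accounting records are available; these are standard Slurm commands, not something from the message above):
sacct -j 19 --format=JobID,State,ExitCode,ReqMem,MaxRSS   # state and exit code of the failed job
scontrol show job 19                                      # full details while Slurm still remembers the job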
On 14 March 2018 at 14:53, Christopher Samuel wrote:
> On 14/03/18 14:50, Lachlan Musicman wrote:
>
> As per subject, recently I've been shuffling nodes around into new
> partitions. In that time somehow the default partition switched from prod
> to dev. Not the end of the world - desirable in
On 14/03/18 14:50, Lachlan Musicman wrote:
As per subject, recently I've been shuffling nodes around into new
partitions. In that time somehow the default partition switched from
prod to dev. Not the end of the world - desirable in fact. But I'd like
to know what happened to cause it?
Did yo
As per subject, recently I've been shuffling nodes around into new
partitions. In that time somehow the default partition switched from prod
to dev. Not the end of the world - desirable in fact. But I'd like to know
what happened to cause it?
cheers
L.
--
"The antidote to apocalypticism is *
On 14/03/18 07:11, Mahmood Naderan wrote:
Any idea about that?
You've not requested any memory in your batch job and I guess your
default limit is too low.
To get the 1GB (and a little head room) try:
#SBATCH --mem=1100M
That's a per node limit, so for MPI jobs (which Gaussian is not)
you'l
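For reference (a sketch, not taken from the reply above): Slurm also has a per-CPU form of the limit, and the two behave differently:
# per-node limit: each node of the allocation gets 1100 MB in total
#SBATCH --mem=1100M
# per-CPU limit: each allocated CPU gets 550 MB (2 CPUs -> 1100 MB on the node)
#SBATCH --mem-per-cpu=550M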
On 14/03/18 06:30, Mahmood Naderan wrote:
I expected to see one compute-0-0.local and one compute-0-1.local
messages. Any idea about that?
You've asked for 2 MPI ranks each using 1 CPU, and as you've got 2 cores
on one and 4 cores on the other, Slurm can fit both onto one of your
nodes so that'
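To confirm the per-node core and memory counts Slurm thinks it has (a minimal sketch; the format string is just one convenient choice):
sinfo -N -o "%n %c %m"   # node hostname, CPUs per node, memory per node in MB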
Hi,
By specifying the following parameters in a Gaussian input file
%nprocshared=2
%mem=1GB
and a Slurm script as below
#!/bin/bash
#SBATCH --output=test.out
#SBATCH --job-name=gaus-test
#SBATCH --nodelist=compute-0-1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
g09 test.gjf
the run terminates with
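Following the memory advice elsewhere in this thread, a minimal sketch of the same script with a memory request added (1100M is just %mem=1GB plus a little head room):
#!/bin/bash
#SBATCH --output=test.out
#SBATCH --job-name=gaus-test
#SBATCH --nodelist=compute-0-1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=1100M        # per-node limit, covers Gaussian's %mem=1GB
g09 test.gjf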
Hi,
For a simple MPI hello program, I have written this script in order to
receive one message from each of the compute nodes.
#!/bin/bash
#SBATCH --output=hello.out
#SBATCH --job-name=hello
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
mpirun mpihello
The node information show
[mahmood@rocks7 ~]
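One common way to force the two ranks onto different nodes (a sketch, not taken from the thread; --nodes and --ntasks-per-node are standard sbatch options):
#!/bin/bash
#SBATCH --output=hello.out
#SBATCH --job-name=hello
#SBATCH --nodes=2              # ask for two distinct nodes
#SBATCH --ntasks-per-node=1    # one MPI rank on each of them
#SBATCH --cpus-per-task=1
mpirun mpihello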
On Tuesday, 13 March 2018 5:46:09 AM AEDT Keith Ball wrote:
> 1.) For a “night” partition, jobs will only be allocated resources once the
> “night-time” window is reached (e.g. 6pm – 7am). Ideally, the jobs in the
> “night” partition would also have higher priority during this window (so
> that the
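Partitions themselves don't take a time-of-day window, so one common workaround is to toggle the partition state from cron on the controller (a sketch; the partition name and times come from the question, and State=DOWN still accepts submissions but does not schedule them):
# root crontab on the Slurm controller host (hypothetical paths and times)
0 18 * * * /usr/bin/scontrol update PartitionName=night State=UP
0 7  * * * /usr/bin/scontrol update PartitionName=night State=DOWN
For the priority part, giving the night partition a higher PriorityTier in slurm.conf should make its jobs be considered first while the window is open.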
Hi,
We have historical information for every job in slurmdb and the
job_completions logs. We want to rebuild the assoc_usage file (and other
related files) from the beginning. Is it possible? Current Slurm
version is 17.11.2.
Regards,
Sefa ARSLAN
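I don't know of a tool that regenerates assoc_usage directly, but the historical usage slurmdbd has recorded can at least be inspected and compared (a sketch; the start date is a placeholder):
sreport cluster AccountUtilizationByUser start=2017-01-01 -t hours   # per-account/user usage from slurmdbd
sacct --allusers --starttime=2017-01-01 --format=JobID,User,Account,Elapsed,CPUTime   # raw job records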