Thanks all for your comments, I will look into that.

On Wed, 7 Feb 2018 at 16:37, Loris Bennett <loris.benn...@fu-berlin.de> wrote:
> I was making the unwarranted assumption that you have multiple
> processes.  So if you have a single process which needs more than 2GB,
> Ralph is of course right and there is nothing you can do.
>
> However, you are using R, so, depending on your problem, you may be able
> to make use of a package like Rmpi to allow your job to run on multiple
> nodes.
>
> Cheers,
>
> Loris
>
> "r...@open-mpi.org" <r...@open-mpi.org> writes:
>
> > Afraid not - since you don’t have any nodes that meet the 3G
> > requirement, you’ll just hang.
> >
> >> On Feb 7, 2018, at 7:01 AM, david vilanova <vila...@gmail.com> wrote:
> >>
> >> Thanks for the quick response.
> >>
> >> Should the following script do the trick? Meaning, use all required
> >> nodes to have at least 3GB total memory, even though my nodes were
> >> set up with 2GB each?
> >>
> >> #SBATCH --array=1-10:1%10
> >>
> >> #SBATCH --mem-per-cpu=3000M
> >>
> >> srun R CMD BATCH myscript.R
> >>
> >> thanks
> >>
> >> On 07/02/2018 15:50, Loris Bennett wrote:
> >>> Hi David,
> >>>
> >>> david martin <vila...@gmail.com> writes:
> >>>
> >>>> Hi,
> >>>>
> >>>> I would like to submit a job that requires 3GB. The problem is that
> >>>> I have 70 nodes available, each node with 2GB memory.
> >>>>
> >>>> So the command sbatch --mem=3G will wait for resources to become
> >>>> available.
> >>>>
> >>>> Can I run sbatch and tell the cluster to use the 3GB out of the 70GB
> >>>> available, or is that a particular setup? Meaning, is the memory
> >>>> restricted to each node? Or should I allocate two nodes so that I
> >>>> have 2 x 2GB = 4GB available?
> >>>
> >>> Check
> >>>
> >>>   man sbatch
> >>>
> >>> You'll find that --mem means memory per node.  Thus, if you specify
> >>> 3GB but all the nodes have 2GB, your job will wait forever (or until
> >>> you buy more RAM and reconfigure Slurm).
> >>>
> >>> You probably want --mem-per-cpu, which is actually more like memory
> >>> per task.  This is obviously only going to work if your job can
> >>> actually run on more than one node, e.g. is MPI enabled.
> >>>
> >>> Cheers,
> >>>
> >>> Loris
>
> --
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin   Email loris.benn...@fu-berlin.de
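
As a rough follow-up to Loris's suggestion, below is a minimal sketch of what such a job script might look like, assuming myscript.R has been adapted to be MPI-aware (e.g. with Rmpi) and that MPI is available on the cluster. The task count, memory value and walltime are only illustrative, not values taken from this thread:

  #!/bin/bash
  #SBATCH --job-name=r-mpi-job       # illustrative job name
  #SBATCH --ntasks=2                 # two MPI tasks, so the job can span two nodes
  #SBATCH --mem-per-cpu=1500M        # per-task memory: 2 x 1500M ~ 3GB total, yet each task fits a 2GB node
  #SBATCH --time=01:00:00            # illustrative walltime

  # myscript.R must itself use MPI (e.g. via Rmpi); otherwise each task
  # just runs an independent copy of the script.
  srun R CMD BATCH myscript.R

The point is that --mem-per-cpu is requested per task, so splitting the work across several MPI tasks lets the total memory exceed what any single 2GB node can provide.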