Yes, when working with the human genome you can easily go up to 16 GB.

On Wed, 7 Feb 2018 at 16:20, Krieger, Donald N. <krieg...@upmc.edu> wrote:
> Sorry for jumping in without full knowledge of the thread.
> But it sounds like the key issue is that each job requires 3 GB.
> Even if that's true, won't jobs start on cores with less memory and then
> just page?
> Of course, as the previous post states, you must tailor your Slurm request
> to the physical limits of your cluster.
>
> But the real question is whether the jobs really require 3 GB of resident
> memory.
> Most code requests far more memory than it needs and ends up using much
> less.
> You can tell by running a job and viewing the memory statistics with top
> or something similar.
>
> Anyway - best - Don
>
> -----Original Message-----
> From: slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] On
> Behalf Of r...@open-mpi.org
> Sent: Wednesday, February 7, 2018 10:03 AM
> To: Slurm User Community List <slurm-users@lists.schedmd.com>
> Subject: Re: [slurm-users] Allocate more memory
>
> Afraid not - since you don't have any nodes that meet the 3 GB
> requirement, you'll just hang.
>
> > On Feb 7, 2018, at 7:01 AM, david vilanova <vila...@gmail.com> wrote:
> >
> > Thanks for the quick response.
> >
> > Should the following script do the trick, i.e. use as many nodes as
> > required to have at least 3 GB of total memory, even though my nodes
> > were set up with 2 GB each?
> >
> > #SBATCH --array=1-10:1%10
> >
> > #SBATCH --mem-per-cpu=3000M
> >
> > srun R CMD BATCH myscript.R
> >
> > Thanks
> >
> > On 07/02/2018 15:50, Loris Bennett wrote:
> >> Hi David,
> >>
> >> david martin <vila...@gmail.com> writes:
> >>
> >>> Hi,
> >>>
> >>> I would like to submit a job that requires 3 GB. The problem is that
> >>> I have 70 nodes available, each with 2 GB of memory.
> >>>
> >>> So the command sbatch --mem=3G will wait for resources to become
> >>> available.
> >>>
> >>> Can I run sbatch and tell the cluster to use 3 GB out of the total
> >>> memory available across the 70 nodes, or is that a particular setup?
> >>> I.e. is the memory restricted to each node? Or should I allocate two
> >>> nodes so that I have 2x2 GB available?
> >>
> >> Check
> >>
> >>   man sbatch
> >>
> >> You'll find that --mem means memory per node. Thus, if you specify
> >> 3GB but all the nodes have 2GB, your job will wait forever (or until
> >> you buy more RAM and reconfigure Slurm).
> >>
> >> You probably want --mem-per-cpu, which is actually more like memory
> >> per task. This is obviously only going to work if your job can
> >> actually run on more than one node, e.g. is MPI enabled.
> >>
> >> Cheers,
> >>
> >> Loris
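As a footnote to Loris's --mem-per-cpu suggestion, a minimal sketch of what a multi-task submission spreading the memory across nodes might look like; the task count, per-task memory, and the program name my_mpi_program are illustrative assumptions, not from the thread, and this only helps if the workload can genuinely run as parallel tasks:

    #!/bin/bash
    #SBATCH --ntasks=2            # two tasks; Slurm may place them on different nodes
    #SBATCH --mem-per-cpu=1500M   # 1.5 GB per task: 3 GB in total, but at most 1.5 GB per node
    srun ./my_mpi_program         # hypothetical MPI binary; a serial R script gains nothing here

With 2 GB nodes, each task's 1.5 GB request fits on a single node, so the job can start even though the aggregate request exceeds any one node's memory.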
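Likewise, following Don's advice about measuring what a job actually uses: besides top, Slurm's own accounting can report peak resident memory after the job completes, assuming job accounting is enabled on the cluster (the job ID 12345 below is a placeholder):

    # compare requested memory (ReqMem) with peak resident set size (MaxRSS)
    sacct -j 12345 --format=JobID,JobName,ReqMem,MaxRSS,State

If MaxRSS comes in well under the request, the --mem or --mem-per-cpu value can be lowered accordingly.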