On Tue, Mar 21, 2017 at 03:15:11PM +0000, juanesteban.jime...@mdc-berlin.de 
wrote:
   The "size" of job metadata (scripts, ENV, etc) doesn't really affect
   the RAM usage appreciably that I've seen.  We routinely have jobs
   ENVs of almost 4k or more, and it's never been a problem.  The
   "data" processed by jobs isn't a factor in qmaster RAM usage, so far as
   I know.

I’ve been reading otherwise, with specific recommendations to use -v rather
than -V, but many tools out there are lazy and just use -V, passing the entire
environment.
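
For example (a rough sketch, with a hypothetical submit script myjob.sh):

    # -V copies the submitter's entire environment into the job record
    qsub -V myjob.sh

    # -v forwards only the named variables (optionally with values);
    # MYAPP_CONFIG here is just an illustrative name
    qsub -v PATH,MYAPP_CONFIG=/etc/myapp.conf myjob.sh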

Oh?  Where are you reading that?  Perhaps we've just not had much of a
problem...
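
If it helps, a rough way to gauge how much environment -V would pull in from
the submitting shell is something like:

    env | wc -c

which just counts the bytes of the current environment as env prints it.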

   BDB spooling can be "faster" on large clusters; it doesn't make much
   difference on small ones.

I don’t know what is classified as large or small. We’re at 115 hosts and 3800
cores, and that’s doubling in a few days.

Small-Medium. :)
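
For what it's worth, the configured spooling method is recorded in the cell's
bootstrap file, so something like this should show whether you're on classic
or BDB spooling (path assumes the default cell name):

    grep spooling_method $SGE_ROOT/default/common/bootstrap
    # e.g.  spooling_method   berkeleydb   (or "classic")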

   You might want to look at running SoGE, which has a more recent...

That is what we are using, v8.1.8.

8.1.9 is out:  http://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/


--
Jesse Becker (Contractor)
_______________________________________________
SGE-discuss mailing list
SGE-discuss@liv.ac.uk
https://arc.liv.ac.uk/mailman/listinfo/sge-discuss
