>    The "size" of job metadata (scripts, ENV, etc) doesn't really affect
>    the RAM usage appreciably that I've seen.  We routinely have jobs
>    ENVs of almost 4k or more, and it's never been a problem.  The
>    "data" processed by jobs isn't a factor in qmaster RAM usage, so far as
>    I know.

I’ve been reading otherwise, with specific recommendations to use -v rather
than -V, but many tools out there are lazy and just use -V, passing the entire
environment.
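For anyone following along, here is roughly what the difference looks like on the qsub command line (the variable names are just placeholders, and a real job script would go where myjob.sh is):

```shell
# -V copies the submitter's ENTIRE environment into the job's metadata,
# which is what bloats per-job metadata when the environment is large:
qsub -V myjob.sh

# -v passes only the named variables (optionally with values),
# keeping the job's stored environment small:
qsub -v PATH,OMP_NUM_THREADS=8 myjob.sh
```

The same -v list can also be set as an embedded `#$ -v ...` directive inside the job script, so users don't have to remember it on every submission.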

>    One thing I'm not sure about is submitting large, static binaries as...

I have to ask about that.

>  I don't know that you can convert spooling methods on a "live" system...

I’m crazy, not stupid. There is a difference. LOL! :)

>    BDB spooling can be "faster" on large clusters; it doesn't make much
>    difference on small ones.

I don’t know what counts as large or small here. We’re at 115 hosts / 3,800
cores, and that’s doubling in a few days.

>    You might want to look at running SoGE, which has a more recent...

That is what we are using, v8.1.8.

>    Also, if you are just starting out, take a look at these slides from
>    BioTeam.  They are a wonderful resource:
>        https://bioteam.net/2009/09/sge-training-slides/
>        https://bioteam.net/2011/03/grid-engine-for-users/ (this too)
>
>    This is also an excellent presentation, once you've gotten past the
>    "learning" stage:
>        http://beowulf.rutgers.edu/info-user/pdf/ge_presentation.pdf

Thanks for the training info, that helps! :)

Juan


_______________________________________________
SGE-discuss mailing list
SGE-discuss@liv.ac.uk
https://arc.liv.ac.uk/mailman/listinfo/sge-discuss