So, if I reinstall using the Berkeley DB spooler, will this mitigate this kind 
of problem, or will the qmaster still want to commit hara-kiri by trying to 
load everything into memory from the DB?

Best regards,
Juan Jimenez
System Administrator, BIH HPC Cluster
MDC Berlin / IT-Dept.
Tel.: +49 30 9406 2800


 

On 27.06.17, 10:41, "William Hay" <w....@ucl.ac.uk> wrote:

    On Tue, Jun 27, 2017 at 08:30:55AM +0000, juanesteban.jime...@mdc-berlin.de wrote:
    > Never mind. One of my users submitted a job with 139k subjobs.
    > 
    > A few other questions:
    > 
    > 1) Is it possible to stop a job submission if I know it's going to make the qmaster croak?
    Not sure exactly what you mean, but setting max_u_jobs and max_aj_tasks in the grid engine configuration should allow you to control job submission to a degree. User education could also help: array jobs put less stress on the qmaster than lots of individual jobs.
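
    For reference, both limits live in the global SGE configuration and can be inspected and edited with qconf. A minimal sketch (the numeric values below are illustrative only, and the commands assume a reachable qmaster):

    ```shell
    # Show the current global limits; 0 means unlimited
    qconf -sconf | grep -E 'max_u_jobs|max_aj_tasks'

    # Open the global configuration in $EDITOR and set, for example:
    #   max_u_jobs    5000    # max active jobs per user (example value)
    #   max_aj_tasks  75000   # max tasks per array job (example value)
    qconf -mconf
    ```

    With max_aj_tasks set, a submission like the 139k-task array job would be rejected at qsub time instead of reaching the qmaster.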
    
    
    > 2) Will switching to the berkeley DB setup in the qmaster alleviate this?
    > 3) Can that be done and still retain the existing data in /opt/sge?
    I believe the spool type can only be chosen when initially setting up SGE, so no.
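
    As a quick check, the spooling method chosen at install time is recorded in the cell's bootstrap file. A sketch, assuming the default cell name ("default") and that $SGE_ROOT is set:

    ```shell
    # Prints "spooling_method berkeleydb" or "spooling_method classic"
    grep spooling_method "$SGE_ROOT/default/common/bootstrap"
    ```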
    
    William
    

_______________________________________________
SGE-discuss mailing list
SGE-discuss@liv.ac.uk
https://arc.liv.ac.uk/mailman/listinfo/sge-discuss
