Some of the jobs I run essentially just copy stuff to/from a remote
filesystem, and as such more or less max out the available network
bandwidth.  This in itself is not a problem, but it can be if more than
one such job runs on the same system, or if too many are running
in total.

The solution seemed straightforward: create a consumable complex value
that would limit per-machine and per-queue usage of high-I/O jobs, and
have those jobs request the resource.

First, the complex value itself:
qconf -sc:
#name   shortcut type relop requestable consumable default urgency 
high_io io       INT  <=    YES         JOB        0       100

Per queue limit:
qconf -sq all.q:
complex_values high_io=10

Per machine limit:
qconf -se pc65-gsc:
complex_values high_io=1

Submitting jobs that request the resource:
qsub -l high_io=1 -q all.q do_thing
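To make the intent explicit, here is the accounting I expect the scheduler
to do with these settings — a minimal Python sketch, not actual Grid Engine
code; all names are made up, and the limits mirror the qconf output above:

```python
# Per-host and per-queue consumable limits, as configured above.
HOST_LIMIT = {"pc65-gsc": 1}   # qconf -se pc65-gsc: complex_values high_io=1
QUEUE_LIMIT = {"all.q": 10}    # qconf -sq all.q:    complex_values high_io=10

host_used = {h: 0 for h in HOST_LIMIT}
queue_used = {q: 0 for q in QUEUE_LIMIT}

def can_schedule(host, queue, request):
    """A job fits only if neither counter would exceed its limit."""
    return (host_used[host] + request <= HOST_LIMIT[host]
            and queue_used[queue] + request <= QUEUE_LIMIT[queue])

def schedule(host, queue, request):
    """Consume the resource if it fits; otherwise refuse the job."""
    if not can_schedule(host, queue, request):
        return False
    host_used[host] += request
    queue_used[queue] += request
    return True

# First high_io=1 job on pc65-gsc fits; a second should be refused,
# because the host-level limit of 1 is already consumed.
print(schedule("pc65-gsc", "all.q", 1))  # True
print(schedule("pc65-gsc", "all.q", 1))  # False
```

That is: the scheduler should hold back the second job rather than let
the per-host count go over the limit.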

Grid will track the resource:
qstat -F high_io:
al...@pc65-gsc.haib.org        BIP   0/16/16        5.25     linux-x64     
        hc:high_io=-4
 317388 0.50887 Pe9f1e5cf2 flowers      r     04/13/2017 12:59:31     2        
 317389 0.50887 P2133afabd flowers      r     04/13/2017 12:59:31     2        
 317390 0.50887 Pae6a146a5 flowers      r     04/13/2017 12:59:31     2        
 317391 0.50887 P05685178e flowers      r     04/13/2017 12:59:31     2        
 317392 0.50887 P16fd5e5ae flowers      r     04/13/2017 12:59:31     2        

(I don't know how to show queue-level resource consumption.)

As you can see, grid does not actually limit high_io usage: neither per
machine (shown above), nor per queue (I've had 65 high_io=1 jobs running
at once in all.q).  I'm assuming I missed some part of setting up
consumables?  I thought the point was that the value couldn't go below
zero (or, rather, that grid would not schedule a job in such a way that
the value would go negative).

Is there any way I can properly set up this complex value, or is this
the wrong approach?
_______________________________________________
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users
