> Am 02.05.2017 um 16:38 schrieb pa...@re-gister.com:
> 
> Thanks, Reuti! So I added the project to the share tree and assign all newly 
> submitted jobs "-P gpuproj" via JSV:
> 
> # qconf -sstree
> id=0
> name=root
> type=0
> shares=1
> childnodes=1,2
> id=1
> name=default
> type=0
> shares=1
> childnodes=NONE
> id=2
> name=gpuproj
> type=1
> shares=1
> childnodes=NONE

I would increase the shares to a value somewhere around 10,000.
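
For example, a sketch of the adjusted node (the 10,000 is just a suggested
value; the tree is edited with qconf -mstree, which opens it in an editor):

# qconf -mstree
...
id=2
name=gpuproj
type=1
shares=10000
childnodes=NONE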

It's also necessary to have in SGE's configuration:

auto_user_delete_time  0

so that past usage is honored (the usage is recorded in the user account).
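
This setting lives in the global cluster configuration, which matters when
users are created automatically (enforce_user auto). A sketch of checking
and changing it:

# qconf -sconf | grep auto_user_delete_time
auto_user_delete_time        0
# qconf -mconf

(qconf -mconf opens the global configuration in an editor; set
auto_user_delete_time to 0 there so auto-created user accounts, and the
usage recorded in them, are never deleted.)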


> What do I need to change in my scheduler configuration to make use of the new 
> sub-tree?
> 
> 
> # qconf -ssconf
> algorithm                         default
> schedule_interval                 0:0:10
> maxujobs                          1900
> queue_sort_method                 seqno
> job_load_adjustments              np_load_avg=0.50
> load_adjustment_decay_time        0:7:30
> load_formula                      np_load_avg
> schedd_job_info                   true
> flush_submit_sec                  0
> flush_finish_sec                  0
> params                            none
> reprioritize_interval             0:0:0
> halftime                          48
> usage_weight_list                 cpu=0.333000,mem=0.333000,io=0.334000
> compensation_factor               5.000000
> weight_user                       1.000000
> weight_project                    0.250000
> weight_department                 0.250000
> weight_job                        0.000000

weight_user                       0.500000
weight_project                    0.500000
weight_department                 0.000000


> weight_tickets_functional         9800
> weight_tickets_share              200

Both should be set to 100,000 if you use both policies.
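
A sketch of the corresponding change (edit the scheduler configuration
with qconf -msconf):

weight_tickets_functional         100000
weight_tickets_share              100000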


> share_override_tickets            TRUE
> share_functional_shares           TRUE
> max_functional_jobs_to_schedule   2000
> report_pjob_tickets               TRUE
> max_pending_tasks_per_job         50
> halflife_decay_list               cpu=1440:mem=1440:io=1440
> policy_hierarchy                  FOS

policy_hierarchy                  SF


> weight_ticket                     0.500000
> weight_waiting_time               1.000000
> weight_deadline                   0.000000
> weight_urgency                    0.000000
> weight_priority                   0.500000

It's a matter of taste whether you want to include weight_waiting_time or
not. You could observe the effects of the changes and adjust it later in
case it doesn't work out as expected.


> max_reservation                   0
> default_duration                  0:10:0

max_reservation is 0, so you don't use backfilling?
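
If you want to use it, a sketch (the 32 and 8:0:0 are arbitrary example
values): allow the scheduler a number of concurrent reservations in
qconf -msconf,

max_reservation                   32
default_duration                  8:0:0

and submit the jobs that should reserve resources with:

qsub -R y ...

Only jobs submitted with -R y get a reservation, and backfilling works
best when jobs request a realistic runtime (h_rt), since default_duration
is assumed for jobs without one.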

-- Reuti


_______________________________________________
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users
