On 22.12.2016 at 22:29, Michael Stauffer wrote:

> > >
> > > I'm trying to setup a fair share policy.
> <snip> 
> > > Do I need to setup a basic share tree like listed here:
> > >
> > > id=0
> > > name=Root
> > > type=0
> > > shares=1
> > > childnodes=1
> > > id=1
> > > name=default
> > > type=0
> > > shares=1000
> 
> Correct. All users which are collected under this "default" leaf will 
> automatically show up there (after you saved the tree and open it again and 
> click the "+").
> 
> Thanks Reuti! It looks like it's working now. A couple questions though:
> 
> 1) newly submitted qsub jobs are showing up with 'stckt' values in qstat 
> -ext. However they're sometimes in the thousands, when the sharetree I set up 
> like above has shares=1000. What might that mean?

The shares in the share tree determine a percentage, normalized so that the 
overall sum is 1. This value is then multiplied by the weight_tickets_share 
from the scheduler configuration and combined with the past usage per job and 
user.
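A rough sketch of that scaling (a simplification under my assumptions -- real SGE also decays past usage via halftime, and splits tickets across a user's pending jobs; the function name is made up for illustration):

```python
# Simplified model: a node's shares are normalized to a fraction of the
# tree's total, then scaled by weight_tickets_share from qconf -ssconf.
def share_tickets(node_shares, total_shares, weight_tickets_share):
    return weight_tickets_share * node_shares / total_shares

# With the posted config (weight_tickets_share = 100000) and a single
# "default" leaf holding all 1000 shares:
print(share_tickets(1000, 1000, 100000))  # 100000.0

# Even a leaf with only half the shares still yields tens of thousands
# of tickets, which is why stckt values in the thousands are plausible:
print(share_tickets(500, 1000, 100000))   # 50000.0
```

So the shares=1000 in the tree is only a relative weight; the absolute ticket counts come from weight_tickets_share.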


> 2) qlogin jobs that were already running got share tickets assigned, as shown 
> by qstat -ext. And they have LOTS of tickets, some up to 24k. Does that seem 
> normal?

Looks like it's the same as for parallel jobs: they will keep their tickets 
even when weight_tickets_share is changed (and maybe even when other settings 
are). One can only wait for them to finish and leave the cluster.

The ticket system is normally set up once and seldom changed once it's working, 
so in practice this flaw shouldn't matter.


> ===
> 
> BTW: I notice that you have no backfilling enabled. Is this by intention?
> 
> Yes, but more by ignorance than thoughtfulness. I only know the basic idea of 
> backfilling, and figured it needs to have each job's expected execution 
> duration in order to work. And I haven't set up timed queues and told users to 
> submit with expected execution duration (although it's on the list), so I 
> figured backfilling wouldn't make sense yet. Am I right?

Correct.
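For reference, a minimal way to enable backfilling later would look roughly 
like the following (hypothetical values -- adjust max_reservation and the 
default_duration fallback to your site; users supply an expected run time 
with -l h_rt):

```
# qconf -msconf   (scheduler configuration, hypothetical values)
max_reservation                   32
default_duration                  1:00:00

# users then submit with an expected run time, e.g.:
qsub -l h_rt=2:00:00 job.sh
```

With max_reservation at 0 and default_duration INFINITY, as in your posted 
config, backfilling indeed can't do anything useful.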

-- Reuti


> Thanks again.
> 
> -M
> 
>  
> 
> -- Reuti
> 
> 
> > in the global config was enough. Thanks for any thoughts.
> > >
> > > # qconf -sconf
> > > <snip>
> > > enforce_user                 auto
> > > auto_user_fshare             100
> > > <snip>
> > >
> > > # qconf -ssconf
> > > algorithm                         default
> > > schedule_interval                 0:0:5
> > > maxujobs                          200
> > > queue_sort_method                 load
> > > job_load_adjustments              np_load_avg=0.50
> > > load_adjustment_decay_time        0:7:30
> > > load_formula                      np_load_avg
> > > schedd_job_info                   true
> > > flush_submit_sec                  0
> > > flush_finish_sec                  0
> > > params                            none
> > > reprioritize_interval             0:0:0
> > > halftime                          168
> > > usage_weight_list                 cpu=1.000000,mem=0.000000,io=0.000000
> > > compensation_factor               5.000000
> > > weight_user                       0.250000
> > > weight_project                    0.250000
> > > weight_department                 0.250000
> > > weight_job                        0.250000
> > > weight_tickets_functional         1000
> > > weight_tickets_share              100000
> > > share_override_tickets            TRUE
> > > share_functional_shares           TRUE
> > > max_functional_jobs_to_schedule   2000
> > > report_pjob_tickets               TRUE
> > > max_pending_tasks_per_job         100
> > > halflife_decay_list               none
> > > policy_hierarchy                  OSF
> > > weight_ticket                     1.000000
> > > weight_waiting_time               0.100000
> > > weight_deadline                   3600000.000000
> > > weight_urgency                    0.100000
> > > weight_priority                   1.000000
> > > max_reservation                   0
> > > default_duration                  INFINITY
> > > _______________________________________________
> > > users mailing list
> > > users@gridengine.org
> > > https://gridengine.org/mailman/listinfo/users
> >
> >
> 
> 


