Hi Patrick,

Thank you for the patch! There is still something to improve:

[auto build test ERROR on tip/sched/core]
[also build test ERROR on v4.17 next-20180604]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Patrick-Bellasi/sched-fair-pelt-use-u32-for-util_avg/20180605-082640
config: i386-randconfig-s0-201822 (attached as .config)
compiler: gcc-6 (Debian 6.4.0-9) 6.4.0 20171026
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   kernel/sched/fair.o: In function `post_init_entity_util_avg':
>> kernel/sched/fair.c:761: undefined reference to `__udivdi3'

vim +761 kernel/sched/fair.c

   724  
   725  /*
   726   * With new tasks being created, their initial util_avgs are extrapolated
   727   * based on the cfs_rq's current util_avg:
   728   *
   729   *   util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight
   730   *
   731   * However, in many cases, the above util_avg does not give a desired
   732   * value. Moreover, the sum of the util_avgs may be divergent, such
   733   * as when the series is a harmonic series.
   734   *
   735   * To solve this problem, we also cap the util_avg of successive tasks to
   736   * only 1/2 of the left utilization budget:
   737   *
   738   *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
   739   *
   740   * where n denotes the nth task.
   741   *
   742   * For example, a simplest series from the beginning would be like:
   743   *
   744   *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
   745   * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
   746   *
   747   * Finally, that extrapolated util_avg is clamped to the cap (util_avg_cap)
   748   * if util_avg > util_avg_cap.
   749   */
   750  void post_init_entity_util_avg(struct sched_entity *se)
   751  {
   752          struct cfs_rq *cfs_rq = cfs_rq_of(se);
   753          long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) 
/ 2;
   754  
   755          if (cap > 0) {
   756                  struct sched_avg *sa = &se->avg;
   757                  u64 util_avg = READ_ONCE(sa->util_avg);
   758  
   759                  if (cfs_rq->avg.util_avg != 0) {
   760                          util_avg  =  cfs_rq->avg.util_avg * se->load.weight;
 > 761                          util_avg /= (cfs_rq->avg.load_avg + 1);
   762                          if (util_avg > cap)
   763                                  util_avg = cap;
   764                  } else {
   765                          util_avg = cap;
   766                  }
   767  
   768                  WRITE_ONCE(sa->util_avg, util_avg);
   769          }
   770  
   771          if (entity_is_task(se)) {
   772                  struct task_struct *p = task_of(se);
   773                  if (p->sched_class != &fair_sched_class) {
   774                          /*
   775                           * For !fair tasks do:
   776                           *
   777                          update_cfs_rq_load_avg(now, cfs_rq);
   778                          attach_entity_load_avg(cfs_rq, se, 0);
   779                          switched_from_fair(rq, p);
   780                           *
   781                           * such that the next switched_to_fair() has the
   782                           * expected state.
   783                           */
   784                          se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
   785                          return;
   786                  }
   787          }
   788  
   789          attach_entity_cfs_rq(se);
   790  }
   791  

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

Attachment: .config.gz
Description: application/gzip
