On 19.09.19 17:27, Jan Beulich wrote:
On 14.09.2019 10:52, Juergen Gross wrote:
This prepares support of larger scheduling granularities, e.g. core
scheduling.
While at it move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
into sched.h, removing the need for including sched-if.h in cpuidle.h.
For that purpose remove urgent_count from the scheduler private data
and make it a plain percpu variable.
Signed-off-by: Juergen Gross <jgr...@suse.com>
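(For context, the percpu form described above amounts to roughly the
following in sched.h; this is a sketch of the idea rather than the exact
hunk from the patch, so the variable name is only illustrative:)

  /* Per-cpu count of urgent vcpus, replacing schedule_data.urgent_count. */
  DECLARE_PER_CPU(atomic_t, sched_urgent_count);

  static inline bool sched_has_urgent_vcpu(void)
  {
      return atomic_read(&this_cpu(sched_urgent_count));
  }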
Fundamentally
Reviewed-by: Jan Beulich <jbeul...@suse.com>
but a couple of remarks:
@@ -643,7 +643,7 @@ static spinlock_t *
a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
void *pdata, void *vdata)
{
- struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+ struct sched_resource *sd = get_sched_res(cpu);
I can understand why you keep "sd" as a name here and in similar
cases. But ...
@@ -3881,6 +3881,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
{
struct csched2_private *prv = csched2_priv(new_ops);
struct csched2_unit *svc = vdata;
+ struct sched_resource *sd = get_sched_res(cpu);
... here (and in at least one more place) you introduce a new
variable. Wouldn't this be better named e.g. "sr"? Furthermore
you use it just once ...
@@ -3906,7 +3907,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
* this scheduler, and so it's safe to have taken it /before/ our
* private global lock.
*/
- ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->rqd[rqi].lock);
+ ASSERT(sd->schedule_lock != &prv->rqd[rqi].lock);
... here; it doesn't seem worthwhile here, but I guess
subsequent changes make more use of it?
In fact they don't. I'll remove it here.
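I.e. the ASSERT() above would then just use get_sched_res(cpu) directly,
something like:

  ASSERT(get_sched_res(cpu)->schedule_lock != &prv->rqd[rqi].lock);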
Regarding renaming "sd" to "sr": I agree this would be a sensible
change. OTOH I'd like to stay consistent, so I'd prefer to defer that to
the planned scheduling cleanup series.
@@ -393,7 +395,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
/* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
if ( is_idle_domain(d) )
{
- per_cpu(schedule_data, v->processor).curr = unit;
+ get_sched_res(v->processor)->curr = unit;
As long as it's a macro (see below), why not use curr_on_cpu() here?
Another sched_resource initialization will be added here later; using
get_sched_res() directly makes that more obvious.
@@ -1916,7 +1917,7 @@ void __init scheduler_init(void)
idle_domain->max_vcpus = nr_cpu_ids;
if ( vcpu_create(idle_domain, 0, 0) == NULL )
BUG();
- this_cpu(schedule_data).curr = idle_vcpu[0]->sched_unit;
+ get_sched_res(0)->curr = idle_vcpu[0]->sched_unit;
Hmm, yet another plain 0. But yes, there are quite a few of them
here already, so one more doesn't really matter.
Yes, we should add a boot_cpu variable. But not in this series.
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -33,22 +33,19 @@ extern int sched_ratelimit_us;
* For cache betterness, keep the actual lock in the same cache area
* as the rest of the struct. Just have the scheduler point to the
* one it wants (This may be the one right in front of it).*/
-struct schedule_data {
+struct sched_resource {
spinlock_t *schedule_lock,
            _lock;
struct sched_unit *curr;
void *sched_priv;
struct timer s_timer; /* scheduling timer */
- atomic_t urgent_count; /* how many urgent vcpus */
-};
-
-#define curr_on_cpu(c) (per_cpu(schedule_data, c).curr)
-struct sched_resource {
- unsigned int master_cpu; /* Cpu with lowest id in scheduling resource. */
+ /* Cpu with lowest id in scheduling resource. */
+ unsigned int master_cpu;
};
-DECLARE_PER_CPU(struct schedule_data, schedule_data);
+#define curr_on_cpu(c) (get_sched_res(c)->curr)
By moving this a few lines down it could become an inline function,
as it seems (see above) its use as an lvalue is not intended.
I like that idea. Will change to inline function.
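Probably something along these lines (just a sketch; the final placement
needs to be after get_sched_res() is available):

  static inline struct sched_unit *curr_on_cpu(unsigned int cpu)
  {
      return get_sched_res(cpu)->curr;
  }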
Juergen