On 09/14/20 11:31, Valentin Schneider wrote:
>
> On 12/09/20 00:04, Li, Aubrey wrote:
> >>> +++ b/include/linux/sched/topology.h
> >>> @@ -65,8 +65,21 @@ struct sched_domain_shared {
> >>>  	atomic_t	ref;
> >>>  	atomic_t	nr_busy_cpus;
> >>>  	int		has_idle_cores;
> >>> +	/*
> >>> +	 * Span of all idle CPUs in this domain.
> >>> +	 *
> >>> +	 * NOTE: this field is variable length. (Allocated dynamically
> >>> +	 * by attaching extra space to the end of the structure,
> >>> +	 * depending on how many CPUs the kernel has booted up with)
> >>> +	 */
> >>> +	unsigned long	idle_cpus_span[];
> >>
> >> Can't you use cpumask_var_t and zalloc_cpumask_var() instead?
> >
> > I can use the existing free code. Do we have a problem of this?
> >
>
> Nah, flexible array members are the preferred approach here; this also
Is this your opinion or a rule written somewhere I missed?

> means we don't let CONFIG_CPUMASK_OFFSTACK dictate where this gets
> allocated.
>
> See struct numa_group, struct sched_group, struct sched_domain, struct
> em_perf_domain...

struct root_domain, struct cpupri_vec, struct generic_pm_domain, struct
irq_common_data.. use cpumask_var_t.

Both approaches look correct to me, so no objection in principle.
cpumask_var_t looks neater IMO and will be necessary once more than one
cpumask is required in a struct.

>
> >>
> >> The patch looks useful. Did it help you with any particular workload? It'd
> >> be good to expand on that in the commit message.
> >>
> > Odd, that included in patch v1 0/1, did you receive it?

Aubrey,

Sorry, I didn't see that, no. It's important justification that should be
part of the commit message; I think it's worth adding.

Thanks

--
Qais Yousef