On 10/19/20 9:43 PM, Joel Fernandes (Google) wrote:
In order to prevent interference and cleanly support both the per-task and CGroup
APIs, split the cookie in two and allow it to be set from either the per-task or
the CGroup API. The final cookie is the combined value of both and is computed
when stop-machine executes during a cookie change.

Also, for the per-task cookie, things would get weird if we used pointers to
ephemeral objects. For this reason, introduce a refcounted object whose sole
purpose is to assign a unique cookie value by way of the object's pointer.
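(A minimal sketch of such a refcounted cookie object, with hypothetical names;
refcount_t, kmalloc() and refcount_set() are standard kernel API, but the exact
structure in the patch may differ. The object's address doubles as the unique
cookie value.)

struct sched_core_task_cookie {
        refcount_t refcnt;
};

static unsigned long sched_core_alloc_task_cookie(void)
{
        struct sched_core_task_cookie *ck = kmalloc(sizeof(*ck), GFP_KERNEL);

        if (!ck)
                return 0;
        refcount_set(&ck->refcnt, 1);
        /* The object's address is the unique cookie value. */
        return (unsigned long)ck;
}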

While at it, refactor the CGroup code a bit. Future patches will introduce more
APIs and support.

Tested-by: Julien Desfossez <jdesfos...@digitalocean.com>
Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org>
---
  include/linux/sched.h |   2 +
  kernel/sched/core.c   | 241 ++++++++++++++++++++++++++++++++++++++++--
  kernel/sched/debug.c  |   4 +
  3 files changed, 236 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index fe6f225bfbf9..c6034c00846a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -688,6 +688,8 @@ struct task_struct {
 #ifdef CONFIG_SCHED_CORE
 	struct rb_node			core_node;
 	unsigned long			core_cookie;
+	unsigned long			core_task_cookie;
+	unsigned long			core_group_cookie;
 	unsigned int			core_occupation;
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bab4ea2f5cd8..30a9e4cb5ce1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -346,11 +346,14 @@ void sched_core_put(void)
 	mutex_unlock(&sched_core_mutex);
 }
 
+static int sched_core_share_tasks(struct task_struct *t1, struct task_struct *t2);
+
 #else /* !CONFIG_SCHED_CORE */
 
 static inline void sched_core_enqueue(struct rq *rq, struct task_struct *p) { }
 static inline void sched_core_dequeue(struct rq *rq, struct task_struct *p) { }
 static bool sched_core_enqueued(struct task_struct *task) { return false; }
+static int sched_core_share_tasks(struct task_struct *t1, struct task_struct *t2) { return 0; }
 
 #endif /* CONFIG_SCHED_CORE */
@@ -3583,6 +3586,20 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 #endif
 #ifdef CONFIG_SCHED_CORE
 	RB_CLEAR_NODE(&p->core_node);
+
+	/*
+	 * Tag child via per-task cookie only if parent is tagged via per-task
+	 * cookie. This is independent of, but can be additive to the CGroup tagging.
+	 */
+	if (current->core_task_cookie) {
+
+		/* If it is not CLONE_THREAD fork, assign a unique per-task tag. */
+		if (!(clone_flags & CLONE_THREAD)) {
+			return sched_core_share_tasks(p, p);
+		}
+		/* Otherwise share the parent's per-task tag. */
+		return sched_core_share_tasks(p, current);
+	}
 #endif
 	return 0;
 }

sched_core_share_tasks() looks at the value of the new task's core_task_cookie,
which is non-zero on a process or thread clone; this underflows both the
core-scheduling enable count and the cookie refcounts.
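(To make the underflow concrete, a hedged sketch with hypothetical helper names,
reusing the cookie object sketched above: the child inherits a non-zero
core_task_cookie without a matching get, so its eventual put drops a reference
that was never taken.)

static void sched_core_put_task_cookie(unsigned long cookie)
{
        struct sched_core_task_cookie *ck = (struct sched_core_task_cookie *)cookie;

        /* Underflows if the caller inherited the cookie without a get. */
        if (refcount_dec_and_test(&ck->refcnt))
                kfree(ck);
}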

So just zero it in __sched_fork().

-chris


[PATCH] sched: zero out the core scheduling cookie on clone

From: chris hyser <chris.hy...@oracle.com>

As the cookie is reference counted, zero it on clone rather than inheriting it,
and allow sharing to be set up explicitly.

Signed-off-by: Chris Hyser <chris.hy...@oracle.com>
---
 kernel/sched/core.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd3cc03..2af0ea6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3378,6 +3378,9 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
        p->capture_control = NULL;
 #endif
        init_numa_balancing(clone_flags, p);
+#ifdef CONFIG_SCHED_CORE
+       p->core_task_cookie = 0;
+#endif
 #ifdef CONFIG_SMP
        p->wake_entry.u_flags = CSD_TYPE_TTWU;
 #endif
--
1.8.3.1
