On Wed, May 20, 2020 at 05:25:09PM +0530, Pavankumar Kondeti wrote:
> When kernel threads are created for later use, they will be in
> TASK_UNINTERRUPTIBLE state until they are woken up. This results
> in increased loadavg and false hung task reports. To fix this,
> use TASK_IDLE state instead of TASK_UNINTERRUPTIBLE when
> a kernel thread schedules out for the first time.
> 
> Signed-off-by: Pavankumar Kondeti <pkond...@codeaurora.org>
> ---
>  kernel/kthread.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index bfbfa48..b74ed8e 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -250,7 +250,7 @@ static int kthread(void *_create)
>       current->vfork_done = &self->exited;
>  
>       /* OK, tell user we're spawned, wait for stop or wakeup */
> -     __set_current_state(TASK_UNINTERRUPTIBLE);
> +     __set_current_state(TASK_IDLE);
>       create->result = current;
>       /*
>        * Thread is going to call schedule(), do not preempt it,
> @@ -428,7 +428,7 @@ static void __kthread_bind(struct task_struct *p, unsigned int cpu, long state)
>  
>  void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
>  {
> -     __kthread_bind_mask(p, mask, TASK_UNINTERRUPTIBLE);
> +     __kthread_bind_mask(p, mask, TASK_IDLE);
>  }
>  
>  /**
> @@ -442,7 +442,7 @@ void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
>   */
>  void kthread_bind(struct task_struct *p, unsigned int cpu)
>  {
> -     __kthread_bind(p, cpu, TASK_UNINTERRUPTIBLE);
> +     __kthread_bind(p, cpu, TASK_IDLE);
>  }
>  EXPORT_SYMBOL(kthread_bind);

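For reference, the reason TASK_IDLE avoids both symptoms is visible in its upstream definition. A minimal sketch, abridged from include/linux/sched.h as it stood around this time; the explanatory comment is added here for illustration and is not part of the patch:

        /*
         * Illustrative sketch only: TASK_IDLE is TASK_UNINTERRUPTIBLE with
         * TASK_NOLOAD set, so a kthread sleeping in TASK_IDLE is not counted
         * in nr_uninterruptible (and therefore loadavg) and is skipped by the
         * hung task watchdog, which only matches plain TASK_UNINTERRUPTIBLE
         * sleepers.
         */
        #define TASK_UNINTERRUPTIBLE    0x0002
        #define TASK_NOLOAD             0x0400
        #define TASK_IDLE               (TASK_UNINTERRUPTIBLE | TASK_NOLOAD)

The kthread_bind()/kthread_bind_mask() sites have to pass the same state because __kthread_bind_mask() calls wait_task_inactive(p, state), which only succeeds when the target is actually sleeping in that state, hence all three hunks above.
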
It's as if people never read mailing lists:
        
https://lore.kernel.org/r/dm6pr11mb3531d3b164357b2dc476102ddf...@dm6pr11mb3531.namprd11.prod.outlook.com

Given that this is an identical resend of the previous patch, why are
you doing so, and what has changed since that original rejection?

greg k-h
