Oleg Nesterov wrote:
> Yes, yes, and I already tried to comment on this part. We probably need a
> dedicated kernel thread, but I still think (although I am not sure) that
> the initial change can use a workqueue. In the likely case the
> system_unbound_wq pool should have an idle thread; if not, OK, this change
> won't help in that case. This is minor.
> 
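If we start with the workqueue approach, I guess the glue could look roughly
like the sketch below. This is only a sketch under assumptions: queue_zap()
and zap_target_mm are made-up names for illustration, the caller is assumed
to be the OOM killer right after it picks a victim, and mm->mmap_zapped is
the flag I describe below.

----------
#include <linux/workqueue.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/atomic.h>

static struct mm_struct *zap_target_mm; /* victim mm, mm_users pinned by the queuer */
static void zap_work_fn(struct work_struct *work);
static DECLARE_WORK(zap_work, zap_work_fn);

static void zap_work_fn(struct work_struct *work)
{
        struct mm_struct *mm = xchg(&zap_target_mm, NULL);

        if (!mm)
                return;
        if (down_read_trylock(&mm->mmap_sem)) {
                if (mm->mmap && !mm->mmap_zapped)
                        zap_page_range(mm->mmap, 0, TASK_SIZE, NULL);
                mm->mmap_zapped = 1;
                up_read(&mm->mmap_sem);
        }
        mmput(mm);
}

/* Hypothetical hook; would be called by the OOM killer after choosing a victim. */
static void queue_zap(struct mm_struct *mm)
{
        atomic_inc(&mm->mm_users);
        if (cmpxchg(&zap_target_mm, NULL, mm) != NULL) {
                mmput(mm);      /* a zap is already pending for another victim */
                return;
        }
        queue_work(system_unbound_wq, &zap_work);
}
----------
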
I imagined a dedicated kernel thread doing something like the code shown below.
(I don't know the details of mm->mmap management.)
mm->mmap_zapped corresponds to MMF_MEMDIE.
I think this kernel thread could also be used for normal kill(pid, SIGKILL) cases.

----------
bool has_sigkill_task;
DECLARE_WAIT_QUEUE_HEAD(kick_mm_zapper);

static int mm_zapper(void *unused)
{
        struct task_struct *g, *p;
        struct mm_struct *mm;

sleep:
        /* Wait until someone reports a SIGKILL'ed task. */
        wait_event(kick_mm_zapper, has_sigkill_task);
        has_sigkill_task = false;
restart:
        rcu_read_lock();
        for_each_process_thread(g, p) {
                if (likely(!fatal_signal_pending(p)))
                        continue;
                task_lock(p);
                mm = p->mm;
                if (mm && mm->mmap && !mm->mmap_zapped &&
                    down_read_trylock(&mm->mmap_sem)) {
                        /* Pin the mm so it stays valid after task_unlock(). */
                        atomic_inc(&mm->mm_users);
                        task_unlock(p);
                        rcu_read_unlock();
                        /* Recheck now that we hold mmap_sem for read. */
                        if (mm->mmap && !mm->mmap_zapped)
                                zap_page_range(mm->mmap, 0, TASK_SIZE, NULL);
                        mm->mmap_zapped = 1;
                        up_read(&mm->mmap_sem);
                        mmput(mm);
                        cond_resched();
                        goto restart;
                }
                task_unlock(p);
        }
        rcu_read_unlock();
        goto sleep;
}

/* Started once, e.g. from an initcall: */
kthread_run(mm_zapper, NULL, "mm_zapper");
----------
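
The wakeup side is not shown above; I imagine something like the lines below
from whichever path queues SIGKILL (complete_signal() or oom_kill_process()
look like plausible places, but that is only a guess):

----------
        /* e.g. right after SIGKILL has been queued to the victim */
        has_sigkill_task = true;
        wake_up(&kick_mm_zapper);
----------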