On Thu 20-10-16 20:44:35, Andrew Morton wrote:
> On Tue, 4 Oct 2016 16:14:17 +0300 Vladimir Davydov <vdavydov....@gmail.com>
> wrote:
>
> > Creating a lot of cgroups at the same time might stall all worker
> > threads with kmem cache creation works, because kmem cache creation is
> > done with the slab_mutex held. The problem was amplified by commits
> > 801faf0db894 ("mm/slab: lockless decision to grow cache") in case of
> > SLAB and 81ae6d03952c ("mm/slub.c: replace kick_all_cpus_sync() with
> > synchronize_sched() in kmem_cache_shrink()") in case of SLUB, which
> > increased the maximal time the slab_mutex can be held.
> >
> > To prevent that from happening, let's use a special ordered single
> > threaded workqueue for kmem cache creation. This shouldn't introduce any
> > functional changes regarding how kmem caches are created, as the work
> > function holds the global slab_mutex during its whole runtime anyway,
> > making it impossible to run more than one work at a time. By using a
> > single threaded workqueue, we just avoid creating a thread per each
> > work. Ordering is required to avoid a situation when a cgroup's work is
> > put off indefinitely because there are other cgroups to serve, in other
> > words to guarantee fairness.
>
> I'm having trouble working out the urgency of this patch?
Seeing thousands of kernel threads is certainly annoying, so I think we
want to merge it sooner rather than later and have it backported to
stable as well.
-- 
Michal Hocko
SUSE Labs
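[Editorial note: for readers unfamiliar with the mechanism the quoted
changelog describes, below is a minimal sketch of using an ordered
workqueue for this kind of serialized work. All identifiers in it
(kmem_cache_create_wq and friends) are illustrative placeholders, not
the names used in the actual patch; the only point is that
alloc_ordered_workqueue() yields a queue that executes at most one work
item at a time, in queueing order, instead of spawning a worker thread
per queued work.]

```c
/*
 * Illustrative sketch only, NOT the actual patch: queue kmem cache
 * creation works on an ordered workqueue so that at most one of them
 * runs at a time, in FIFO order, rather than stalling many worker
 * threads that would all block on slab_mutex anyway.
 */
#include <linux/workqueue.h>
#include <linux/init.h>

static struct workqueue_struct *kmem_cache_create_wq;

static int __init kmem_cache_create_wq_init(void)
{
	/*
	 * An ordered workqueue guarantees max_active == 1 and in-order
	 * execution, which is all the serialization the cache-creation
	 * work needs, since the work function holds the global
	 * slab_mutex for its whole runtime anyway.
	 */
	kmem_cache_create_wq = alloc_ordered_workqueue("kmem_cache_create", 0);
	if (!kmem_cache_create_wq)
		return -ENOMEM;
	return 0;
}

/* Callers would then queue their creation works like this: */
static inline void schedule_kmem_cache_create(struct work_struct *work)
{
	queue_work(kmem_cache_create_wq, work);
}
```

[The ordering property is what provides the fairness mentioned in the
changelog: a cgroup's request cannot be starved behind later arrivals,
while the number of kthreads dedicated to this class of works stays at
one.]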