On Thu, May 21, 2015 at 05:57:59AM -0700, Paul E. McKenney wrote:
> On Thu, May 21, 2015 at 05:42:46PM +0530, Afzal Mohammed wrote:
> > Hi,
> >
> > On Wed, May 20, 2015 at 02:00:26PM -0700, Paul E. McKenney wrote:
> >
> > > > > Given that kernel initiated association to isolcpus, a user turning
> > > > > NO_HZ_FULL_ALL on had better not have much generic load to manage. If
> > > >
> > > > On a quad-core desktop system with NO_HZ_FULL_ALL, hackbench took 3x
> > > > time as compared to w/o this patch, except boot cpu every one else
> > > > jobless. Though NO_HZ_FULL_ALL (afaik) is not meant for generic load,
> > > > it was working fine, but not after this - it is now like a single core
> > > > system.
> > >
> > > I have to ask... What is your use case? What are you wanting NO_HZ_FULL
> > > to do for you?
> >
> > I was just playing NO_HZ_FULL with tip-[sched,timers]-* changes.
> >
> > Thought that shutting down ticks as much as possible would be
> > beneficial to normal loads too, though it has been mentioned to be used
> > for specialized loads. Seems like drawbacks due to it weigh against
> > normal loads, but haven't so far observed any (on a laptop with normal
> > activities) before this change.
>
> Indeed, NO_HZ_FULL is special purpose. You normally would select
> NO_HZ_FULL_ALL only on a system intended for heavy compute without
> normal-workload distractions or for some real-time systems. For mixed
> workloads, you would build with NO_HZ_FULL (but not NO_HZ_FULL_ALL) and
> use the boot parameters to select which CPUs are to be running the
> specialized portion of the workload.
>
> And you would of course need to leave enough CPUs running normally to
> handle the non-specialized portion of the workload.
>
> This sort of thing has traditionally required specialized kernels,
> so the cool thing here is that we can make Linux do it. Though, as
> you noticed, careful configuration is still required.
>
> Seem reasonable?
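For the mixed-workload case described above, a minimal sketch of such a
configuration could look like the following; the CPU numbers are only
illustrative, assuming a quad-core box that keeps CPUs 0-1 for the normal
workload:

    Kernel build configuration:

        CONFIG_NO_HZ_FULL=y
        # CONFIG_NO_HZ_FULL_ALL is not set

    Kernel boot parameters (adapt the cpu lists to the machine):

        nohz_full=2,3 rcu_nocbs=2,3 isolcpus=2,3

The specialized tasks would then be pinned to CPUs 2-3 (taskset,
sched_setaffinity(), cpusets), leaving CPUs 0-1 to handle everything else,
including hackbench-style loads. The rcu_nocbs= entry may already be implied
by nohz_full=, and with this patchset the isolcpus= association would
presumably come from nohz_full= itself; they are spelled out here only to
make the intent explicit.
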
That said, if he saw a big performance regression after applying these
patches, then there is likely a problem in the patchset. It could be due to
the mode which loops on full dynticks before resuming to userspace. Indeed,
when that is enabled, I expect real throughput issues on workloads doing lots
of kernel <-> userspace roundtrips. We just need to make sure this thing only
works when requested.

Anyway, I need to look at the patchset.

