From: Christoph Lameter
KFENCE manages its own pools and redirects regular memory allocations
to those pools in a sporadic way. The usual memory allocator features
like NUMA, memory policies and pfmemalloc are not supported.
This means that one gets surprising object placement with KFENCE that

On Wed, 12 Oct 2016, Piotr Kwapulinski wrote:
> That's right. This could be "local allocation" or any other memory policy.
Correct.
--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, 12 Oct 2016, Michael Kerrisk (man-pages) wrote:
> > +arguments must specify the empty set. If the "local node" is low
> > +on free memory the kernel will try to allocate memory from other
> > +nodes. The kernel will allocate memory from the "local node"
> > +whenever memory for this node i
"whenever memory for this node is available"?
Otherwise
Reviewed-by: Christoph Lameter
Well, the difference between MPOL_DEFAULT and MPOL_LOCAL may be confusing.
Mention somewhere in the MPOL_LOCAL description that MPOL_DEFAULT reverts to
the policy of the process, while MPOL_LOCAL tries to allocate locally? Note
that MPOL_LOCAL also will not be local if it just happens th
alls without triggering a tick again but the
processing time in those calls is negligible. Any wait or sleep
occurrence during syscalls would activate the tick again.
Any inaccuracy is corrected once the tick is switched on again
since the actual value where cputime aggregates is not changed.
On Thu, 11 Aug 2016, Paul E. McKenney wrote:
> Heh! The only really good idea is for clocks to be reliably in sync.
>
> But if they go out of sync, what do you want to do instead?
For a NOHZ task? Write a message to the syslog and reenable tick.
On Thu, 11 Aug 2016, Paul E. McKenney wrote:
> > With modern Intel we could run it on one CPU per package I think, but at
> > the same time, too much in NOHZ_FULL assumes the TSC is indeed sane so
> > it doesn't make sense to me to keep the watchdog running, when it
> > triggers it would also have
On Thu, 11 Aug 2016, Frederic Weisbecker wrote:
> Do we need to quiesce vmstat every time before entering userspace?
> I thought that vmstat only need to be offlined once and for all?
Once is sufficient after disabling the tick.
On Wed, 27 Jul 2016, Chris Metcalf wrote:
> Looks good. Did you omit the equivalent fix in clocksource_start_watchdog()
> on purpose? For now I just took your change, but tweaked it to add the
> equivalent diff with cpumask_first_and() there.
Can the watchdog be started on an isolated cpu at all?
Watchdog checks can only run on housekeeping capable cpus. Otherwise
we will be generating noise that we would like to avoid on the isolated
processors.
Signed-off-by: Christoph Lameter
Index: linux/kernel/time/clocksource.c
===
--- linux.orig/kernel/time/clocksource.c	2016-07-27 08:41:17.109862517 -0500
+++ linux/kernel/time/clocksource.c	2016-07-27 10:28:31.172447732
On Wed, 27 Jul 2016, Chris Metcalf wrote:
> > Should we just cycle through the cpus that are not isolated? Otherwise we
> > need to have some means to check the clocksources for accuracy remotely
> > (probably impossible for TSC etc).
>
> That sounds like the right idea - use the housekeeping cpu
We tested this with 4.7-rc7 and aside from the issue with
clocksource_watchdog() this is working fine.
Tested-by: Christoph Lameter
On Mon, 25 Jul 2016, Christoph Lameter wrote:
> Guess so. I will have a look at this when I get some time again.
Ok so the problem is the clocksource_watchdog() function in
kernel/time/clocksource.c. This function is active if
CONFIG_CLOCKSOURCE_WATCHDOG is defined. It will check the timesour
On Fri, 22 Jul 2016, Chris Metcalf wrote:
> > It already as a stable clocksource. Sorry but that was one of the criteria
> > for the server when we ordered them. Could this be clock adjustments?
>
> We probably need to get clock folks to jump in on this thread!
Guess so. I will have a look at this when I get some time again.
On Thu, 21 Jul 2016, Chris Metcalf wrote:
> On 7/20/2016 10:04 PM, Christoph Lameter wrote:
> unstable, and then scheduling work to safely remove that timer.
> I haven't looked at this code before (in kernel/time/clocksource.c
> under CONFIG_CLOCKSOURCE_WATCHDOG) since the time
We are trying to test the patchset on x86 and are getting strange
backtraces and aborts. It seems that the cpu before the cpu we are running
on creates an irq_work event that causes a latency event on the next cpu.
This is weird. Is there a new round robin IPI feature in the kernel that I
am not a
On Thu, 14 Jul 2016, Andy Lutomirski wrote:
> As an example, enough vmalloc/vfree activity will eventually cause
> flush_tlb_kernel_range to be called and *boom*, there goes your shiny
> production dataplane application. Once virtually mapped kernel stacks
> happen, the frequency with which this
On Tue, 5 Jul 2016, Frederic Weisbecker wrote:
> > >> That's true, but I'd argue the behavior in that case should be that you can
> > >> raise that kind of exception validly (so you can debug), and then you should
> > >> quiesce on return to userspace so the application doesn't see addi