The disable callback can be used to compute the timeout for other states
whenever a state is enabled or disabled. We store the computed timeout
in the "timeout" field defined in the cpuidle state structure. This way,
we compute the timeout only when some state is enabled or disabled, not
on every pass through the fast idle path.
We
To force-wake a CPU, we need to compute the timeout in the fast idle
path, as a state may be enabled or disabled but there was no feedback
to the driver when a state was enabled or disabled.
This patch adds a callback that runs whenever a state_usage records a
store to the disable attribute.
Signed-off-by:
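The idea above can be sketched in user space. This is a hedged, illustrative model only (not the actual kernel patch): the struct fields, the recompute policy, and the function names `state_store_disable()` / `state_timeout()` are all hypothetical; the point is that recomputation happens only on the enable/disable store, while the fast path does a plain field read.

```c
/* Hypothetical sketch: cache a per-state timeout whenever a state's
 * "disable" attribute changes, so the fast idle path only reads a
 * precomputed field. Policy and names are illustrative, not kernel API. */
#include <assert.h>
#include <stdbool.h>

#define NR_STATES 4

struct idle_state {
    bool disabled;
    unsigned int exit_latency_us;  /* assumed input to the computation */
    unsigned int timeout_us;       /* cached result, read in the fast path */
};

static struct idle_state states[NR_STATES] = {
    { false, 10, 0 }, { false, 100, 0 }, { false, 500, 0 }, { false, 2000, 0 },
};

/* Recompute timeouts for all states; runs only on enable/disable. */
static void recompute_timeouts(void)
{
    /* Example policy: a state's timeout is the exit latency of the next
     * enabled deeper state (0 if none). Purely illustrative. */
    for (int i = 0; i < NR_STATES; i++) {
        states[i].timeout_us = 0;
        for (int j = i + 1; j < NR_STATES; j++) {
            if (!states[j].disabled) {
                states[i].timeout_us = states[j].exit_latency_us;
                break;
            }
        }
    }
}

/* Models the store callback on the "disable" attribute. */
void state_store_disable(int idx, bool disable)
{
    states[idx].disabled = disable;
    recompute_timeouts();
}

/* Fast idle path: no computation, just a field read. */
unsigned int state_timeout(int idx)
{
    return states[idx].timeout_us;
}
```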
Currently, the cpuidle governors determine what idle state an idling CPU
should enter based on heuristics that depend on the idle history on
that CPU. Given that no predictive heuristic is perfect, there are cases
where the governor predicts a shallow idle state, hoping that the CPU will
be bus
On 10/3/19 10:26 AM, Jeremy Kerr wrote:
Hi Vasant,
Correct. We will have `ibm,prd-label` property. That's not the issue.
It sure sounds like the issue - someone has represented a range that
should be mapped by HBRT, but isn't appropriate for mapping by HBRT.
Here the issue is that HBRT is loaded into
Hi Vasant,
> Correct. We will have `ibm,prd-label` property. That's not the issue.
It sure sounds like the issue - someone has represented a range that
should be mapped by HBRT, but isn't appropriate for mapping by HBRT.
> Here the issue is that HBRT is loaded into NVDIMM memory.
OK. How about we just d
On 10/3/19 7:17 AM, Jeremy Kerr wrote:
Hi Vasant,
Jeremy,
Add a check to validate whether the requested page is part of system RAM
before mmap(), and error out if it's not part of system RAM.
opal_prd_range_is_valid() will return false if the reserved memory range
does not have an ibm,prd-
> On Oct 2, 2019, at 9:36 PM, Leonardo Bras wrote:
>
> Adds config option LOCKLESS_PAGE_TABLE_WALK_TRACKING to make possible
> enabling tracking lockless pagetable walks directly from kernel config.
Can’t this name and all those new *lockless* function names be shorter? There
are many functi
Hi Vasant,
> Add check to validate whether requested page is part of system RAM
> or not before mmap() and error out if its not part of system RAM.
opal_prd_range_is_valid() will return false if the reserved memory range
does not have an ibm,prd-label property. If this is the case, you're getting invalid
memo
Adds config option LOCKLESS_PAGE_TABLE_WALK_TRACKING to make it possible
to enable tracking of lockless pagetable walks directly from the kernel config.
Signed-off-by: Leonardo Bras
---
mm/Kconfig | 11 +++
1 file changed, 11 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index a5dae9a7eb51..0
Applies the counting-based method for monitoring lockless pgtable walks on
read_user_stack_slow.
local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_walk,
so there is no need to repeat it here.
Variable that saves the irq mask was renamed from flags to irq_mask so it
doesn't los
Skips the slow part of serialize_against_pte_lookup if there is no running
lockless pagetable walk.
Signed-off-by: Leonardo Bras
---
arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/pgtable.c
b/arch/powerpc/mm/book3s64
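The optimization this patch describes can be modeled in a few lines of user-space C. This is a simplified sketch, not the kernel implementation: the atomic counter stands in for the per-cpu tracking, and `ipi_broadcasts` stands in for the cost of the IPI-based slow path; real-world memory-ordering and race considerations are deliberately out of scope.

```c
/* Illustrative model of "skip the slow part when no lockless walk is
 * running": the expensive serialize step (an IPI broadcast in the
 * kernel) is skipped entirely when the walker counter reads zero.
 * Names and semantics are simplified assumptions, not kernel API. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int lockless_walkers = 0;
static int ipi_broadcasts = 0;   /* stands in for the IPI cost */

void begin_lockless_pgtbl_walk(void) { atomic_fetch_add(&lockless_walkers, 1); }
void end_lockless_pgtbl_walk(void)   { atomic_fetch_sub(&lockless_walkers, 1); }

/* Returns true if the slow path actually ran. */
bool serialize_against_pte_lookup(void)
{
    if (atomic_load(&lockless_walkers) == 0)
        return false;          /* nobody is walking: skip the IPI */
    ipi_broadcasts++;          /* slow path: wait out the walkers */
    return true;
}
```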
Applies the counting-based method for monitoring all book3s_64-related
functions that do lockless pagetable walks.
Adds comments explaining that some lockless pagetable walks don't need
protection due to guest pgd not being a target of THP collapse/split, or
due to being called from Realmode + MSR
Applies the counting-based method for monitoring all book3s_hv related
functions that do lockless pagetable walks.
Adds comments explaining that some lockless pagetable walks don't need
protection due to guest pgd not being a target of THP collapse/split, or
due to being called from Realmode + MSR
Applies the counting-based method for monitoring lockless pgtable walks on
kvmppc_e500_shadow_map().
Fixes the place where local_irq_restore() is called: previously, if ptep
was NULL, local_irq_restore() would never be called.
local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_
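The `local_irq_restore()` fix mentioned above is a common bug class: an early return on a NULL `ptep` skips the restore that pairs with an earlier save. A minimal model of the before/after shapes, with a plain flag standing in for real interrupt state (all names here are hypothetical, not the kvmppc code):

```c
/* Hypothetical sketch of the bug class fixed here: an early return on
 * NULL ptep skipped the local_irq_restore() pairing. Interrupt state is
 * modeled as a simple flag. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static bool irqs_enabled = true;

static void model_irq_save(void)    { irqs_enabled = false; }
static void model_irq_restore(void) { irqs_enabled = true; }

/* Buggy shape: returns with interrupts still disabled when ptep == NULL. */
int shadow_map_buggy(const int *ptep)
{
    model_irq_save();
    if (!ptep)
        return -1;              /* leak: restore never runs */
    model_irq_restore();
    return 0;
}

/* Fixed shape: single exit path, restore always runs. */
int shadow_map_fixed(const int *ptep)
{
    int ret = 0;
    model_irq_save();
    if (!ptep)
        ret = -1;
    model_irq_restore();
    return ret;
}
```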
Applies the counting-based method for monitoring all hash-related functions
that do lockless pagetable walks.
hash_page_mm: Adds a comment explaining that there is no need for
local_irq_disable/save, given that it is only called from the DataAccess
interrupt, so interrupts are already disabled.
local_i
Applies the counting-based method for monitoring lockless pgtable walks on
addr_to_pfn().
local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_walk,
so there is no need to repeat it here.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kernel/mce_power.c | 6 +++---
1 file change
It's necessary to monitor lockless pagetable walks, in order to avoid doing
THP splitting/collapsing during them.
Some methods rely on irq enable/disable, but that can be slow in
cases where a lot of CPUs are used by the process, given that all these
CPUs have to run an IPI.
In order to speedup some ca
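The `{begin,end}_lockless_pgtbl_walk` pairing that several patches in this series reference can be sketched as follows. This is a user-space model under stated assumptions: irq save/restore lives inside the pair (as the later patches say), the irq mask is modeled as a plain value, and the counter is a simple global atomic rather than the kernel's per-mm tracking.

```c
/* Sketch of the {begin,end}_lockless_pgtbl_walk pairing: irq save and
 * restore live inside the pair, so callers must not repeat them, and a
 * counter makes the in-flight walk visible to writers. The irq mask is
 * a fake value here; this is a model, not kernel code. */
#include <assert.h>
#include <stdatomic.h>

static atomic_int walk_count = 0;
static unsigned long cur_irq_mask = 0x1UL;  /* fake "interrupts on" state */

unsigned long begin_lockless_pgtbl_walk(void)
{
    unsigned long irq_mask = cur_irq_mask;  /* local_irq_save() */
    cur_irq_mask = 0;                       /* interrupts now "off" */
    atomic_fetch_add(&walk_count, 1);       /* make the walk visible */
    return irq_mask;
}

void end_lockless_pgtbl_walk(unsigned long irq_mask)
{
    atomic_fetch_sub(&walk_count, 1);
    cur_irq_mask = irq_mask;                /* local_irq_restore() */
}
```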
If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
chunk of memory (496GB) mapped with THP, it takes an average of 275
seconds, which can cause a lot of problems to the load (in qemu case,
the guest will lock for this time).
Trying to find the source of this bug, I found out most
As described, gup_pgd_range is a lockless pagetable walk. So, in order to
monitor against THP split/collapse with the counting method, it's necessary
to bound it with {begin,end}_lockless_pgtbl_walk.
local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_walk,
so there is no need to
It's necessary to monitor lockless pagetable walks, in order to avoid doing
THP splitting/collapsing during them.
On powerpc, we need to do some lockless pagetable walks from functions
that already have interrupts disabled, especially from real mode with
MSR[EE=0].
In these contexts, disabling/ena
[Cc'ing Prakhar]
On Fri, 2019-09-27 at 10:25 -0400, Nayna Jain wrote:
> To add the support for checking against blacklist, it would be needed
> to add an additional measurement record that identifies the record
> as blacklisted.
>
> This patch modifies the process_buffer_measurement() and makes i
On Tue, 2019-10-01 at 12:07 -0400, Nayna wrote:
>
> On 09/30/2019 09:04 PM, Thiago Jung Bauermann wrote:
> > Hello,
>
> Hi,
>
> >
> >> diff --git a/arch/powerpc/kernel/ima_arch.c
> >> b/arch/powerpc/kernel/ima_arch.c
> >> new file mode 100644
> >> index ..39401b67f19e
> >> --- /dev/
Now that instances of input_dev support polling mode natively,
we no longer need to create input_polled_dev instance.
Signed-off-by: Dmitry Torokhov
---
drivers/macintosh/Kconfig | 1 -
drivers/macintosh/ams/ams-input.c | 37 +++
drivers/macintosh/ams/ams.h
Hi Nayna,
On Fri, 2019-09-27 at 10:25 -0400, Nayna Jain wrote:
> This patch deprecates the existing permit_directio flag, instead adds
> it as possible value to appraise_flag parameter.
> For eg.
> appraise_flag=permit_directio
Defining a generic "appraise_flag=", which supports different options
On Fri, 2019-09-27 at 10:25 -0400, Nayna Jain wrote:
> Asymmetric private keys are used to sign multiple files. The kernel
> currently support checking against the blacklisted keys. However, if the
> public key is blacklisted, any file signed by the blacklisted key will
> automatically fail signatu
3.16.75-rc1 review patch. If anyone has any objections, please let me know.
--
From: Ravi Bangoria
commit 913a90bc5a3a06b1f04c337320e9aeee2328dd77 upstream.
perf_event_open() limits the sample_period to 63 bits. See:
0819b2e30ccb ("perf: Limit perf_event_attr::sample_period
Hi Srikar,
Srikar Dronamraju writes:
> Abdul reported a warning on a shared lpar.
> "WARNING: workqueue cpumask: online intersect > possible intersect".
> This is because per node workqueue possible mask is set very early in the
> boot process even before the system was querying the home node
>
Srikar Dronamraju writes:
> With commit ("powerpc/numa: Early request for home node associativity"),
> commit 2ea626306810 ("powerpc/topology: Get topology for shared
> processors at boot") which was requesting home node associativity
> becomes redundant.
>
> Hence remove the late request for hom
Srikar Dronamraju writes:
> Currently the kernel detects if it's running on a shared lpar platform
> and requests home node associativity before the scheduler sched_domains
> are setup. However between the time NUMA setup is initialized and the
> request for home node associativity, workqueue init
Srikar Dronamraju writes:
> All the sibling threads of a core have to be part of the same node.
> To ensure that all the sibling threads map to the same node, always
> lookup/update the cpu-to-node map of the first thread in the core.
Reviewed-by: Nathan Lynch
Srikar Dronamraju writes:
> Currently code handles H_FUNCTION, H_SUCCESS, H_HARDWARE return codes.
> However hcall_vphn can return other return codes. Now it also handles
> H_PARAMETER return code. Also the remaining return codes are handled under the
> default case.
Reviewed-by: Nathan Lynch
Howev
Srikar Dronamraju writes:
> There is no value in unpacking associativity, if
> H_HOME_NODE_ASSOCIATIVITY hcall has returned an error.
Reviewed-by: Nathan Lynch
On Wed, Oct 02, 2019 at 11:23:06AM +1000, Daniel Axtens wrote:
> Hi,
>
> >>/*
> >> * Find a place in the tree where VA potentially will be
> >> * inserted, unless it is merged with its sibling/siblings.
> >> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
> >>
On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport wrote:
>
> Hi Adam,
>
> On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> > On Sun, Sep 29, 2019 at 8:33 AM Adam Ford wrote:
> > >
> > > I am attaching two logs. I know the mailing lists will be unhappy, but
> > > don't want to try and spam
Andy Shevchenko wrote:
> > but I confess to being a little ambivalent. It's
> > arguably a little easier to read,
>
> I have another opinion here. Instead of parsing body of for-loop, the name of
> the function tells you exactly what it's done. Besides the fact that reading
> and parsing two li
* Vasant Hegde [2019-10-02 13:18:56]:
> Add check to validate whether requested page is part of system RAM
> or not before mmap() and error out if its not part of system RAM.
>
> cat /proc/iomem:
> -
> -27 : System RAM
> 28-2f : namespace0.0
> 2000
On Sun, Sep 15, 2019 at 11:28:09AM +1000, Nicholas Piggin wrote:
> System call entry and particularly exit code is beyond the limit of what
> is reasonable to implement in asm.
>
> This conversion moves all conditional branches out of the asm code,
> except for the case that all GPRs should be res
On Sun, Sep 15, 2019 at 11:28:13AM +1000, Nicholas Piggin wrote:
> Add support for the scv instruction on POWER9 and later CPUs.
>
> For now this implements the zeroth scv vector 'scv 0', as identical
> to 'sc' system calls, with the exception that lr is not preserved, and
> it is 64-bit only. The
On Sun, Sep 15, 2019 at 11:28:10AM +1000, Nicholas Piggin wrote:
> Implement the bulk of interrupt return logic in C. The asm return code
> must handle a few cases: restoring full GPRs, and emulating stack store.
>
> The asm return code is moved into 64e for now. The new logic has made
> allowance
Hello,
can you mark the individual patches with RFC rather than the whole
series?
Thanks
Michal
On Sun, Sep 15, 2019 at 11:27:46AM +1000, Nicholas Piggin wrote:
> My interrupt entry patches have finally collided with syscall and
> interrupt exit patches, so I'll merge the series. Most patches ha
Add a check to validate whether the requested page is part of system RAM
before mmap(), and error out if it's not part of system RAM.
cat /proc/iomem:
-
-27 : System RAM
28-2f : namespace0.0
2000-2027 : System RAM
2028-202f
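The "is this range System RAM?" check discussed above can be illustrated with a user-space sketch that parses /proc/iomem-style lines from a string. This is a hedged model only: the real patch validates reserved ranges inside the kernel (via the resource tree), not by text parsing; the function name and signature here are invented for illustration.

```c
/* Hypothetical sketch: check whether [start, end] lies inside a
 * "System RAM" range, given /proc/iomem-style text. The kernel does
 * this through the resource API, not by parsing text. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

bool range_is_system_ram(const char *iomem, unsigned long start,
                         unsigned long end)
{
    char line[128];
    char label[64];
    const char *p = iomem;

    while (*p) {
        size_t n = strcspn(p, "\n");        /* take one line at a time */
        if (n >= sizeof(line))
            n = sizeof(line) - 1;
        memcpy(line, p, n);
        line[n] = '\0';

        unsigned long s, e;
        /* Parse "start-end : label" and require the label to be
         * exactly "System RAM" before accepting the range. */
        if (sscanf(line, "%lx-%lx : %63[^\n]", &s, &e, label) == 3 &&
            strcmp(label, "System RAM") == 0 &&
            start >= s && end <= e)
            return true;

        p += n;
        if (*p == '\n')
            p++;
    }
    return false;
}
```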
Hi Adam,
On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> On Sun, Sep 29, 2019 at 8:33 AM Adam Ford wrote:
> >
> > I am attaching two logs. I know the mailing lists will be unhappy, but
> > don't want to try and spam a bunch of log through the mailing list.
> > The two logs show the
Daniel Axtens a écrit :
Hi,
/*
* Find a place in the tree where VA potentially will be
* inserted, unless it is merged with its sibling/siblings.
@@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
if (sibling->va_end == va->va_start) {
On 02.10.19 02:06, kbuild test robot wrote:
> Hi David,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on mmotm/master]
>
> url:
> https://github.com/0day-ci/linux/commits/David-Hildenbrand/mm-memory_hotplug-Shrink-zones-before-