Update the parisc arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/parisc/kernel/sys_parisc.c | 46 ++
1 files changed, 17 insertions(+), 29 deletions(-)
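[A note on the API these eight conversions use: instead of each architecture
walking vmas by hand, the caller fills in a struct vm_unmapped_area_info and
vm_unmapped_area() finds a suitable gap using the augmented vma rbtree. A
minimal sketch of the generic bottom-up pattern follows - not the actual
parisc patch; parisc's cache-coloring constraints would go in the align
fields, and hint-address handling is omitted for brevity.]

unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	struct vm_unmapped_area_info info;

	info.flags = 0;			/* 0 = bottom-up search */
	info.length = len;
	info.low_limit = TASK_UNMAPPED_BASE;
	info.high_limit = TASK_SIZE;
	info.align_mask = 0;		/* arch alignment constraints go here */
	info.align_offset = 0;

	/* returns an address, or -ENOMEM encoded as an unsigned long */
	return vm_unmapped_area(&info);
}

[A top-down variant sets VM_UNMAPPED_AREA_TOPDOWN in info.flags and searches
downward from mm->mmap_base instead.]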
I did some basic testing on x86 and powerpc; however the first 5 (simpler)
patches for parisc, alpha, frv and ia64 architectures are untested.
Michel Lespinasse (8):
mm: use vm_unmapped_area() on parisc architecture
mm: use vm_unmapped_area() on alpha architecture
mm: use vm_unmapped_area() on frv architecture
mm: use vm_unmapped_area() on ia64 architecture
mm: use vm_unmapped_area() in hugetlbfs on ia64 architecture
mm: remove free_area_cache use in powerpc architecture
Update the alpha arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/alpha/kernel/osf_sys.c | 20 +---
1 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
Update the frv arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/frv/mm/elf-fdpic.c | 49 --
1 files changed, 17 insertions(+), 32 deletions(-)
Update the ia64 arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/ia64/kernel/sys_ia64.c | 37 -
1 files changed, 12 insertions(+), 25 deletions(-)
Update the ia64 hugetlb_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/ia64/mm/hugetlbpage.c | 20 +---
1 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
vm_unmapped_area() infrastructure and regain the performance.
Signed-off-by: Michel Lespinasse
---
 arch/powerpc/include/asm/page_64.h |    3 +-
 arch/powerpc/mm/hugetlbpage.c      |    2 +-
 arch/powerpc/mm/slice.c            |  108 +
arch/powerpc
Update the powerpc slice_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/powerpc/mm/slice.c | 128 +-
1 files changed, 81 insertions(+), 47 deletions(-)
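[The slice allocator is the trickiest conversion of the series because the
allowed address space is a bitmask of 256MB/1TB slices rather than one
contiguous range. A sketch of the idea under that constraint - with
for_each_available_slice_range() invented here for illustration, not the
helper the real patch uses: call vm_unmapped_area() once per contiguous run
of available slices.]

static unsigned long slice_area_bottomup(struct mm_struct *mm,
					 unsigned long len, int pshift)
{
	struct vm_unmapped_area_info info;
	unsigned long start, end, addr;

	info.flags = 0;
	info.length = len;
	/* the mapping must start on a 1 << pshift boundary */
	info.align_mask = PAGE_MASK & ((1UL << pshift) - 1);
	info.align_offset = 0;

	/* hypothetical iterator over maximal runs of available slices */
	for_each_available_slice_range(mm, &start, &end) {
		info.low_limit = start;
		info.high_limit = end;
		addr = vm_unmapped_area(&info);
		if (!IS_ERR_VALUE(addr))
			return addr;
	}
	return -ENOMEM;
}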
Since all architectures have been converted to use vm_unmapped_area(),
there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse
---
 arch/arm/mm/mmap.c   |    2 --
 arch/arm64/mm/mmap.c |    2 --
 arch/mips/mm/mmap.c  |    2 --
Whoops, I was supposed to find a more appropriate subject line before
sending this :]
On Tue, Jan 8, 2013 at 5:28 PM, Michel Lespinasse wrote:
> These patches, which apply on top of v3.8-rc kernels, are to complete the
> VMA gap finding code I introduced (following Rik's initial p
On Tue, Jan 8, 2013 at 6:15 PM, Benjamin Herrenschmidt
wrote:
> On Tue, 2013-01-08 at 17:28 -0800, Michel Lespinasse wrote:
>> Update the powerpc slice_get_unmapped_area function to make use of
>> vm_unmapped_area() instead of implementing a brute force search.
>>
>
> ...readable and it will avoid a fuckup in the future if
> somebody changes the algorithm and forgets to update one of the
> copies :-)
All right, does the following look more palatable, then?
(didn't re-test it, though)
Signed-off-by: Michel Lespinasse
---
arch/powerpc/mm/slice.c | 1
...infrastructure.
I did some basic testing on x86 and powerpc; however the first 5 (simpler)
patches for parisc, alpha, frv and ia64 architectures are untested.
Michel Lespinasse (8):
mm: use vm_unmapped_area() on parisc architecture
mm: use vm_unmapped_area() on alpha architecture
mm: use vm_unmapped_area() on frv architecture
Update the parisc arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
arch/parisc/kernel/sys_parisc.c | 46 ++
1 files changed, 17 insertions(+), 29 deletions(-)
Update the alpha arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
arch/alpha/kernel/osf_sys.c | 20 +---
1 files changed, 9 insertions(+), 11 deletions(-)
Update the frv arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
arch/frv/mm/elf-fdpic.c | 49 --
1 files changed, 17 insertions(+), 32 deletions(-)
Update the ia64 arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
arch/ia64/kernel/sys_ia64.c | 37 -
1 files changed, 12 insertions(+), 25 deletions(-)
Update the ia64 hugetlb_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
arch/ia64/mm/hugetlbpage.c | 20 +---
1 files changed, 9 insertions(+), 11 deletions(-)
vm_unmapped_area() infrastructure and regain the performance.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
 arch/powerpc/include/asm/page_64.h |    3 +-
 arch/powerpc/mm/hugetlbpage.c      |    2 +-
 arch/powerpc/mm/slice.c            |  108
Update the powerpc slice_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
---
arch/powerpc/mm/slice.c | 123 ++-
1 files changed, 78 insertions(+), 45 deletions(-)
Since all architectures have been converted to use vm_unmapped_area(),
there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
---
 arch/arm/mm/mmap.c   |    2 --
 arch/arm64/mm/mmap.c |    2 --
 arch/mips/mm/mmap.c  |    2 --
On Tue, Jan 22, 2013 at 11:32 AM, Steven Rostedt wrote:
> On Tue, 2013-01-22 at 13:03 +0530, Srivatsa S. Bhat wrote:
>> A straight-forward (and obvious) algorithm to implement Per-CPU Reader-Writer
>> locks can also lead to too many deadlock possibilities which can make it very
>> hard/impossible
Hi Srivatsa,
I admit not having followed the threads about the previous iteration in
detail, so some of my comments may already have been discussed -
apologies if that is the case.
On Mon, Feb 18, 2013 at 8:38 PM, Srivatsa S. Bhat
wrote:
> Reader-writer locks and per-cpu counters are recursive...
On Mon, Feb 18, 2013 at 8:39 PM, Srivatsa S. Bhat
wrote:
> @@ -200,6 +217,16 @@ void percpu_write_lock_irqsave(struct percpu_rwlock *pcpu_rwlock,
>
> smp_mb(); /* Complete the wait-for-readers, before taking the lock */
> write_lock_irqsave(&pcpu_rwlock->global_rwlock, *flags);
On Mon, Feb 18, 2013 at 8:39 PM, Srivatsa S. Bhat
wrote:
> Some important design requirements and considerations:
> ------------------------------------------------------
>
> 1. Scalable synchronization at the reader-side, especially in the fast-path
>
> Any synchronization at the atomic hotplug
On Tue, Feb 19, 2013 at 12:43 AM, Srivatsa S. Bhat
wrote:
> On 02/18/2013 09:53 PM, Michel Lespinasse wrote:
>> I am wondering though, if you could take care of recursive uses in
>> get/put_online_cpus_atomic() instead of doing it as a property of your
>> rwlock:
>>
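[A minimal sketch of what the quoted suggestion could look like; every name
below (hotplug_read_depth, hotplug_pcpu_rwlock, percpu_read_lock/unlock) is
invented for illustration. The point is that a per-cpu nesting counter in
the wrappers absorbs recursion, so the underlying lock itself never has to
be reader-recursive.]

static DEFINE_PER_CPU(unsigned int, hotplug_read_depth);
static struct percpu_rwlock hotplug_pcpu_rwlock;	/* hypothetical */

void get_online_cpus_atomic(void)
{
	preempt_disable();
	/* only the outermost reader on this cpu takes the real lock */
	if (this_cpu_inc_return(hotplug_read_depth) == 1)
		percpu_read_lock(&hotplug_pcpu_rwlock);
}

void put_online_cpus_atomic(void)
{
	/* only the outermost reader releases the real lock */
	if (this_cpu_dec_return(hotplug_read_depth) == 0)
		percpu_read_unlock(&hotplug_pcpu_rwlock);
	preempt_enable();
}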
On Tue, Feb 19, 2013 at 1:56 AM, Srivatsa S. Bhat
wrote:
> On 02/18/2013 09:51 PM, Srivatsa S. Bhat wrote:
>> On 02/18/2013 09:15 PM, Michel Lespinasse wrote:
>>> I don't see anything preventing a race with the corresponding code in
>>> percpu_write_unlock() that
On Tue, Feb 19, 2013 at 2:50 AM, Srivatsa S. Bhat
wrote:
> But, the whole intention behind removing the parts depending on the
> recursive property of rwlocks would be to make it easier to make rwlocks
> fair (going forward) right? Then, that won't work for CPU hotplug, because,
> just like we have...
Hi Srivatsa,
I think there is some elegance in Lai's proposal of using a local
trylock for the reader uncontended case and global rwlock to deal with
the contended case without deadlocks. He apparently didn't realize
initially that nested read locks are common, and he seems to have
confused you be
On Thu, Feb 28, 2013 at 3:25 AM, Oleg Nesterov wrote:
> On 02/27, Michel Lespinasse wrote:
>>
>> +void lg_rwlock_local_read_lock(struct lgrwlock *lgrw)
>> +{
>> + preempt_disable();
>> +
>> + if (__this_cpu_read(*lgrw->local_refcnt) ||
>
On Sat, Mar 2, 2013 at 2:28 AM, Oleg Nesterov wrote:
> Lai, I didn't read this discussion except the code posted by Michel.
> I'll try to read this patch carefully later, but I'd like to ask
> a couple of questions.
>
> This version looks more complex than Michel's, why? Just curious, I
> am trying...
On Sun, Mar 3, 2013 at 9:40 AM, Oleg Nesterov wrote:
>> However, I still think that FALLBACK_BASE only adds the unnecessary
>> complications. But even if I am right this is subjective of course, please
>> feel free to ignore.
Would it help if I sent out that version (without FALLBACK_BASE) as a
f
...refcount is zero and the trylock fails on lglock.
- writers take both the lglock write side and the fallback_rwlock, thus
making sure to exclude both local and global readers.
Thanks to Srivatsa S. Bhat for proposing a lock with these requirements
and Lai Jiangshan for proposing this algorithm as a
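[Pieced together from the fragments quoted in this thread, the lock
described above would look roughly as below. This is a reconstruction for
readability, not guaranteed to match the posted patch line for line.]

struct lgrwlock {
	unsigned long __percpu *local_refcnt;
	struct lglock lglock;
	rwlock_t fallback_rwlock;
};

void lg_rwlock_local_read_lock(struct lgrwlock *lgrw)
{
	preempt_disable();
	/* fast path: already a reader on this cpu, or our per-cpu
	 * spinlock is uncontended (no writer present) */
	if (__this_cpu_read(*lgrw->local_refcnt) ||
	    arch_spin_trylock(this_cpu_ptr(lgrw->lglock.lock))) {
		__this_cpu_inc(*lgrw->local_refcnt);
		return;
	}
	/* slow path: a writer holds our per-cpu lock; fall back to the
	 * global rwlock, whose read side nests safely */
	read_lock(&lgrw->fallback_rwlock);
}

void lg_rwlock_local_read_unlock(struct lgrwlock *lgrw)
{
	if (__this_cpu_read(*lgrw->local_refcnt)) {
		if (__this_cpu_dec_return(*lgrw->local_refcnt) == 0)
			arch_spin_unlock(this_cpu_ptr(lgrw->lglock.lock));
	} else {
		read_unlock(&lgrw->fallback_rwlock);
	}
	preempt_enable();
}

[Writers take every per-cpu spinlock in lglock plus the write side of
fallback_rwlock, excluding both classes of readers, as described above.]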
On Tue, Mar 5, 2013 at 7:54 AM, Lai Jiangshan wrote:
> On 03/03/13 01:06, Oleg Nesterov wrote:
>> On 03/02, Michel Lespinasse wrote:
>>>
>>> My version would be slower if it needs to take the
>>> slow path in a reentrant way, but I'm not sure it matters...
On Thu, Feb 21, 2013 at 3:05 PM, wrote:
> From: Michel Lespinasse
> Subject: mm: remove free_area_cache use in powerpc architecture
>
> As all other architectures have been converted to use vm_unmapped_area(),
> we are about to retire the free_area_cache.
Ping? I haven't seen...
On Mon, Mar 18, 2013 at 4:12 AM, Aneesh Kumar K.V
wrote:
> how about ?
>
> static bool slice_scan_available(unsigned long addr,
>                                  struct slice_mask available,
>                                  int end,
>                                  unsigned long *boundary_addr)
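[Judging from the signature alone, the helper classifies the slice
containing addr and reports where that classification changes. One
plausible shape, hedged - GET_LOW_SLICE_INDEX, SLICE_LOW_TOP and friends
are existing powerpc slice macros, but the body below is an illustration,
not Aneesh's actual code:]

static bool slice_scan_available(unsigned long addr,
				 struct slice_mask available,
				 int end,	/* 1: boundary above addr, 0: below */
				 unsigned long *boundary_addr)
{
	unsigned long slice;

	if (addr < SLICE_LOW_TOP) {
		slice = GET_LOW_SLICE_INDEX(addr);
		*boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
		return !!(available.low_slices & (1u << slice));
	}
	slice = GET_HIGH_SLICE_INDEX(addr);
	*boundary_addr = (slice + end) ?
		((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP;
	return !!(available.high_slices & (1u << slice));
}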
...that would be to measure a single-threaded workload - kernbench would
be fine - without the special-case optimization in patch 22 where
handle_speculative_fault() immediately aborts in the single-threaded case.
Reviewed-by: Michel Lespinasse
This is for the series as a whole; I expect to do another review
> ...operations on mm's more flexible and convenient. In particular, it
> allows some code paths to avoid the need to hold task_lock.
Reviewed-by: Michel Lespinasse
May I suggest ia32_compat instead of just compat for the flag name?
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
On Mon, Apr 4, 2011 at 11:24 PM, Michael Ellerman
wrote:
> In access_process_vm() we need to check that we have found the right
> vma, not the following vma, before we try to access it. Otherwise
> we might call the vma's access routine with an address which does
> not fall inside the vma.
>
> Sig
On Wed, Jun 08, 2022 at 04:27:27PM +0200, Peter Zijlstra wrote:
> Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> Xeons") wrecked intel_idle in two ways:
>
> - must not have tracing in idle functions
> - must return with IRQs disabled
>
> Additionally, it added a branch for n
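[Those two constraints translate into a wrapper shaped roughly like the
sketch below, where __intel_idle() stands in for the actual mwait-based
idle entry point; this illustrates the constraints rather than reproducing
the merged fix.]

static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
				    struct cpuidle_driver *drv, int index)
{
	int ret;

	/* raw_ variants so that no tracing runs inside the idle path */
	raw_local_irq_enable();
	ret = __intel_idle(dev, drv, index);
	/* the cpuidle core expects to get IRQs back disabled */
	raw_local_irq_disable();

	return ret;
}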
On Thu, Jul 28, 2022 at 10:20:53AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 25, 2022 at 12:43:06PM -0700, Michel Lespinasse wrote:
> > On Wed, Jun 08, 2022 at 04:27:27PM +0200, Peter Zijlstra wrote:
> > > Commit c227233ad64c ("intel_idle: enable interrupts before C1 o
On Fri, Jul 29, 2022 at 08:26:22AM -0700, Paul E. McKenney wrote:
> Would you be willing to try another shot in the dark, but untested
> this time? I freely admit that this is getting strange.
>
> Thanx, Paul
Yes, adding this second change go
On Fri, Jul 29, 2022 at 04:59:50PM +0200, Rafael J. Wysocki wrote:
> On Fri, Jul 29, 2022 at 12:25 PM Michel Lespinasse
> wrote:
> >
> > On Thu, Jul 28, 2022 at 10:20:53AM -0700, Paul E. McKenney wrote:
> > > On Mon, Jul 25, 2022 at 12:43:06PM -0700, Michel Lespinasse
On Sat, Jul 30, 2022 at 09:52:34PM +0200, Rafael J. Wysocki wrote:
> On Sat, Jul 30, 2022 at 11:48 AM Michel Lespinasse
> wrote:
> > I'm not sure if that was the patch you meant to send though, as it
> > seems it's only adding a tracepoint so shouldn't make any difference...