> On 18 Apr 2024, at 13:20, Mike Rapoport wrote:
>
> On Tue, Apr 16, 2024 at 12:36:08PM +0300, Nadav Amit wrote:
>>
>>
>>
>> I might be missing something, but it seems a bit racy.
>>
>> IIUC, module_finalize() calls alternatives_smp_module_add(
> On 11 Apr 2024, at 19:05, Mike Rapoport wrote:
>
> @@ -2440,7 +2479,24 @@ static int post_relocation(struct module *mod, const struct load_info *info)
> add_kallsyms(mod, info);
>
> /* Arch-specific module finalizing. */
> - return module_finalize(info->hdr, info->sechdrs
>
> On Jun 20, 2023, at 7:46 AM, Yair Podemsky wrote:
>
> @@ -1525,7 +1525,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
> addr + HPAGE_PMD_SIZE);
> mmu_notifier_invalidate_range_start(&range);
> pmd = pmdp_coll
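For context, the quoted hunk is cut off at pmdp_coll...; the remainder of that sequence in collapse_and_free_pmd() at the time went roughly as follows (reconstructed from mm/khugepaged.c of that era, so treat it as a sketch rather than the exact code):

	pmd = pmdp_collapse_flush(vma, addr, pmdp);	/* clear the PMD and flush the TLB */
	mmu_notifier_invalidate_range_end(&range);
	mm_dec_nr_ptes(mm);
	pte_free(mm, pmd_pgtable(pmd));			/* free the now-unreferenced PTE page */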
> On Jun 19, 2023, at 10:09 AM, Andy Lutomirski wrote:
>
> But jit_text_alloc() can't do this, because the order of operations doesn't
> match. With jit_text_alloc(), the executable mapping shows up before the
> text is populated, so there is no atomic change from not-there to
> populated-
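A rough sketch of the two orderings being contrasted (jit_text_alloc() is the API proposed in this thread; the rest follows the existing module_alloc()/set_memory_rox() flow, heavily simplified):

	/* Existing flow: populate first, then flip the mapping to RX, so the
	 * transition from not-there to executable is a single atomic step. */
	buf = module_alloc(size);			/* writable, not yet executable */
	memcpy(buf, code, size);
	set_memory_rox((unsigned long)buf, size >> PAGE_SHIFT);

	/* Proposed flow: the allocation is already mapped executable, so the
	 * text must be written through a separate writable alias, e.g. with
	 * text_poke_copy(), while other CPUs could in principle already fetch
	 * from the region. */
	buf = jit_text_alloc(size);			/* mapped RX up front (proposed API) */
	text_poke_copy(buf, code, size);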
> On Jun 5, 2023, at 9:10 AM, Edgecombe, Rick P wrote:
>
> On Mon, 2023-06-05 at 11:11 +0300, Mike Rapoport wrote:
>> On Sun, Jun 04, 2023 at 10:52:44PM -0400, Steven Rostedt wrote:
>>> On Thu, 1 Jun 2023 16:54:36 -0700
>>> Nadav Amit wrote:
>>>
> On Jun 1, 2023, at 1:50 PM, Edgecombe, Rick P wrote:
>
> On Thu, 2023-06-01 at 14:38 -0400, Kent Overstreet wrote:
>> On Thu, Jun 01, 2023 at 06:13:44PM +, Edgecombe, Rick P wrote:
text_poke() _does_ create a separate RW mapping.
>>>
>>> Sorry, I meant a separate RW allocation.
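For readers of the archive: the distinction is that text_poke() leaves the RX mapping of the kernel text alone and writes through a short-lived writable alias of the same physical page, rather than making a second allocation. A very rough model follows; the two helpers marked as hypothetical stand in for the temporary-mm machinery on x86:

	/* Model only, not the real implementation. */
	void *rw_alias = map_page_rw_at_other_address(rx_addr);	/* hypothetical helper */
	memcpy(rw_alias + offset_in_page(rx_addr), opcode, len);
	unmap_rw_alias_and_flush_tlb(rw_alias);				/* hypothetical helper */
	text_poke_sync();	/* serialize instruction fetch on all CPUs */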
On 1/19/23 6:22 AM, Nicholas Piggin wrote:
On Thu Jan 19, 2023 at 8:22 AM AEST, Nadav Amit wrote:
On Jan 18, 2023, at 12:00 AM, Nicholas Piggin wrote:
+static void do_shoot_lazy_tlb(void *arg)
+{
+ struct mm_struct *mm = arg;
+
+ if (current->active_mm ==
On 1/23/23 9:35 AM, Nadav Amit wrote:
+ if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
+ mmdrop(mm);
+ } else {
+ /*
+ * mmdrop_lazy_tlb must provide a full memory barrier, see the
+ * membarrier comment in finish_task_switch which relies on this
On 1/18/23 10:00 AM, Nicholas Piggin wrote:
Add CONFIG_MMU_TLB_REFCOUNT which enables refcounting of the lazy tlb mm
when it is context switched. This can be disabled by architectures that
don't require this refcounting if they clean up lazy tlb mms when the
last refcount is dropped. Currently
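For context, the helpers gated by this option end up looking roughly like the following (reconstructed from the series, so treat it as a sketch rather than the exact patch):

static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
{
	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
		mmgrab(mm);
}

static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
{
	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
		mmdrop(mm);
	} else {
		/*
		 * mmdrop_lazy_tlb must provide a full memory barrier, see the
		 * membarrier comment in finish_task_switch which relies on this.
		 */
		smp_mb();
	}
}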
> On Jan 18, 2023, at 12:00 AM, Nicholas Piggin wrote:
>
> +static void do_shoot_lazy_tlb(void *arg)
> +{
> + struct mm_struct *mm = arg;
> +
> + if (current->active_mm == mm) {
> + WARN_ON_ONCE(current->mm);
> + current->active_mm = &init_mm;
> + sw
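The hunk is truncated by the archive; the complete helper in the posted series is, as far as it can be reconstructed, roughly:

static void do_shoot_lazy_tlb(void *arg)
{
	struct mm_struct *mm = arg;

	if (current->active_mm == mm) {
		WARN_ON_ONCE(current->mm);
		current->active_mm = &init_mm;
		switch_mm(mm, &init_mm, current);	/* drop the lazy reference to mm */
	}
}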
On Nov 15, 2022, at 5:50 PM, Yicong Yang wrote:
>
> On 2022/11/16 7:38, Nadav Amit wrote:
>> On Nov 14, 2022, at 7:14 PM, Yicong Yang wrote:
>>
>>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
On Nov 14, 2022, at 7:14 PM, Yicong Yang wrote:
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 8a497d902c16..5bd78ae55cd4 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -264,7 +264,8 @@ static inline u64 inc_mm
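For reference, the x86 helper named in the hunk header is (before the change under discussion) essentially:

static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
{
	/*
	 * Bump the mm-wide TLB generation; CPUs compare it with their per-CPU
	 * copy to decide whether they still need to flush.
	 */
	return atomic64_inc_return(&mm->context.tlb_gen);
}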
On Nov 2, 2022, at 12:12 PM, David Hildenbrand wrote:
>
> commit b191f9b106ea ("mm: numa: preserve PTE write permissions across a
> NUMA hinting fault") added remembering write permissions using ordinary
> pte_write() for PROT_NONE mapped pages to avoid write faults when
> re
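The PTE restore this refers to, in the NUMA hinting fault handler, goes roughly like this (simplified from do_numa_page(); exact details vary by kernel version):

	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
	pte = pte_modify(old_pte, vma->vm_page_prot);	/* drop the PROT_NONE protection */
	pte = pte_mkyoung(pte);
	if (was_writable)
		pte = pte_mkwrite(pte);			/* re-apply the remembered write permission */
	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);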
On Sep 20, 2022, at 11:53 PM, Anshuman Khandual wrote:
>
> On 8/22/22 13:51, Yicong Yang wrote:
>> +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>> + struct mm_struct *mm,
>> +
> On Sep 14, 2022, at 11:42 PM, Barry Song <21cn...@gmail.com> wrote:
>
>>
>> The very idea behind TLB deferral is the opportunity it (might) provide
>> to accumulate address ranges and cpu masks so that individual TLB flush
>> can be replaced with a more cost effective range based TLB flush.
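On x86 that accumulation is essentially a generation bump plus a cpumask OR; before the change being discussed the batch-add helper was roughly:

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
	inc_mm_tlb_gen(mm);
	/*
	 * Remember every CPU that may cache entries for this mm; the deferred
	 * flush later targets exactly this accumulated mask.
	 */
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}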
> Remove it from files which do not actually use it.
> Drop externs from function declarations.
>
> Signed-off-by: Alexander Atanasov
Makes so much sense.
Acked-by: Nadav Amit
On Aug 17, 2022, at 12:17 AM, Huang, Ying wrote:
> Alistair Popple writes:
>
>> Peter Xu writes:
>>
>>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
Peter Xu writes:
> On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
>>> @@ -193,11 +194,10 @@
> On Feb 2, 2021, at 1:31 AM, Peter Zijlstra wrote:
>
> On Tue, Feb 02, 2021 at 07:20:55AM +0000, Nadav Amit wrote:
>> Arm does not define tlb_end_vma, and consequently it flushes the TLB after
>> each VMA. I suspect it is not intentional.
>
> ARM is one of those that
> On Feb 1, 2021, at 10:41 PM, Nicholas Piggin wrote:
>
> Excerpts from Peter Zijlstra's message of February 1, 2021 10:09 pm:
>> I also don't think AGRESSIVE_FLUSH_BATCHING quite captures what it does.
>> How about:
>>
>> CONFIG_MMU_GATHER_NO_PER_VMA_FLUSH
>
> Yes please, have to have des
> On Jan 30, 2021, at 11:57 PM, Nadav Amit wrote:
>
>> On Jan 30, 2021, at 7:30 PM, Nicholas Piggin wrote:
>>
>> Excerpts from Nadav Amit's message of January 31, 2021 10:11 am:
>>> From: Nadav Amit
>>>
>>> There are currently (at le
> On Jan 30, 2021, at 7:30 PM, Nicholas Piggin wrote:
>
> Excerpts from Nadav Amit's message of January 31, 2021 10:11 am:
>> From: Nadav Amit
>>
>> There are currently (at least?) 5 different TLB batching schemes in the
>> kernel:
>>
>> 1. U
> On Jan 30, 2021, at 4:39 PM, Andy Lutomirski wrote:
>
> On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit wrote:
>> From: Nadav Amit
>>
>> There are currently (at least?) 5 different TLB batching schemes in the
>> kernel:
>>
>> 1. Using mmu_gather (
From: Nadav Amit
Architecture-specific tlb_start_vma() and tlb_end_vma() seem
unnecessary. They are currently used for:
1. Avoid per-VMA TLB flushes. This can be determined by introducing
a new config option.
2. Avoid saving information on the vma that is being flushed. Saving
this
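Point 1 amounts to gating the per-VMA flush behind a config option in the generic tlb_end_vma(); a sketch of that shape (the option name follows the later discussion in this thread and is not necessarily what the patch used):

static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;
	if (IS_ENABLED(CONFIG_MMU_GATHER_NO_PER_VMA_FLUSH))
		return;			/* defer everything to tlb_finish_mmu() */
	tlb_flush_mmu_tlbonly(tlb);	/* flush just the range touched in this VMA */
}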
From: Nadav Amit
There are currently (at least?) 5 different TLB batching schemes in the
kernel:
1. Using mmu_gather (e.g., zap_page_range()).
2. Using {inc|dec}_tlb_flush_pending() to inform other threads on the
ongoing deferred TLB flush and flushing the entire range eventually
(e.g
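Scheme 2 is the pattern where the unmapping side first publishes that a flush is pending, so concurrent page-table walkers can tell that cleared PTEs may still have live TLB entries; in outline:

	inc_tlb_flush_pending(mm);	/* others can now see a deferred flush is in flight */

	/* ... clear the PTEs for the range under the page table lock ... */

	flush_tlb_mm(mm);		/* eventually flush the whole mm (or range) once */
	dec_tlb_flush_pending(mm);	/* flush completed and visible */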
I am not very familiar with membarrier, but here are my 2 cents while trying
to answer your questions.
> On Dec 3, 2020, at 9:26 PM, Andy Lutomirski wrote:
> @@ -496,6 +497,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct
> mm_struct *next,
>* from one thread in a proc
> On Jun 11, 2019, at 5:52 PM, Nicholas Piggin wrote:
>
> Christoph Hellwig's on June 12, 2019 12:41 am:
>> Instead of passing a set of always repeated arguments down the
>> get_user_pages_fast iterators, create a struct gup_args to hold them and
>> pass that by reference. This leads to an over
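The idea is simply to bundle the parameters that every gup_fast helper currently threads through into one structure passed by reference; a sketch of what such a struct might contain (field set guessed from the description, not taken from the actual patch):

struct gup_args {
	unsigned long	start;		/* user virtual address range being walked */
	unsigned long	end;
	unsigned int	flags;		/* FOLL_* flags */
	struct page	**pages;	/* output array */
	int		nr;		/* number of pages collected so far */
};

Each level of the page-table walk would then take a struct gup_args * instead of repeating these arguments.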
> - * than adding a complex API, ensure that no stale
> - * TLB entries exist when this call returns.
> - */
> - flush_tlb_range(vma, start, end);
> - }
> -
> mmu_notifier_invalidate_range_end(mm, start, end);
> tlb_finish_mmu(&tlb, start, end);
> }
Yes, this was in my “to check when I have time” todo list, especially since
the flush was from start to end, not even vma->vm_start to vma->vm_end.
The revert seems correct.
Reviewed-by: Nadav Amit