Hi, Waiman,
What's the status of this patchset? And its merging plan?
Best Regards,
Huang, Ying
On Thu, Apr 11, 2019 at 12:08 AM Waiman Long wrote:
>
> On 04/10/2019 04:15 AM, huang ying wrote:
> > Hi, Waiman,
> >
> > What's the status of this patchset? And its merging plan?
> >
> > Best Regards,
> > Huang, Ying
>
> I have broken the
My 2 cents. I think you should include at least part of the discussion
in the patch description to make it more readable by itself.
Best Regards,
Huang, Ying
"Aneesh Kumar K.V" writes:
> Add a new kconfig option that can be selected if we want to allow
> pageblock alignment by reserving pages in the vmemmap altmap area.
> This implies we will be reserving some pages for every memory block.
> This also allows the memmap on memory feature to be more widely used
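As a sketch, the kind of option described above might look like the following Kconfig fragment. The option name and help text here are illustrative guesses, not identifiers from the actual patch:

```kconfig
# Illustrative only: the option name and wording below are guesses,
# not the identifiers from the actual patch.
config MEMMAP_ON_MEMORY_PAGEBLOCK_ALIGN
	bool "Pageblock-aligned memmap-on-memory via reserved altmap pages"
	depends on MEMORY_HOTPLUG
	help
	  Reserve extra pages in the vmemmap altmap area so the usable part
	  of a hot-plugged memory block stays pageblock aligned. This costs
	  a few reserved pages per memory block, but lets the memmap-on-memory
	  feature be used in more configurations.
```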
Nadav Amit writes:
> On Aug 17, 2022, at 12:17 AM, Huang, Ying wrote:
>
>> Alistair Popple writes:
>>
>>> Peter Xu writes:
>>>
>>>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>>>>> Peter Xu writes:
>
> arch_enter_lazy_mmu_mode();
> // If any pending tlb, do it now
> if (mm_tlb_flush_pending())
> flush_tlb_range(vma, start, end);
> else
> flush_tlb_batched_pending();
I don't think we need the above 4 lines, because we will flush the TLB
before we
Peter Xu writes:
> On Thu, Aug 18, 2022 at 02:34:45PM +0800, Huang, Ying wrote:
>> > In this specific case, the only way to do safe tlb batching in my mind is:
>> >
>> > pte_offset_map_lock();
>> > arch_enter_lazy_mmu_mode();
>> >
> /* Only flush the TLB if we actually modified any entries */
> if (unmapped)
> flush_tlb_range(walk->vma, start, end);
It appears that we can increment "unmapped" only if ptep_get_and_clear()
is used?
Best Regards,
Huang, Ying
> + arch_leave_lazy_mmu_mode();
> + pte_unmap_unlock(ptep - 1, ptl);
> +
> return 0;
> }
>
>
> base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136
Alistair Popple writes:
> "Huang, Ying" writes:
>
>> Alistair Popple writes:
>>
>>> When clearing a PTE the TLB should be flushed whilst still holding the
>>> PTL to avoid a potential race with madvise/munmap/etc. For example
>>> >> the migration will fail due to unexpected references but the
>>> >> dirty pte bit will be lost. If the page is subsequently reclaimed data
>>> >> won't be written back to swap storage as it is considered uptodate,
>>> >> resulting in data loss if the page is subsequently accessed.
> If the page is subsequently reclaimed data
> won't be written back to swap storage as it is considered uptodate,
> resulting in data loss if the page is subsequently accessed.
>
> Prevent this by copying the dirty bit to the page when removing the pte
> to match what try_to_migrate_one() does.
>
> Signed-
Alistair Popple writes:
> Peter Xu writes:
>
>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>>>
>>> Peter Xu writes:
>>>
>>> > On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
>>> >> >
Alistair Popple writes:
> On Sun, Jun 29, 2025 at 07:28:50PM +0800, Huang, Ying wrote:
>> David Hildenbrand writes:
>>
>> > On 18.06.25 20:48, Zi Yan wrote:
>> >> On 18 Jun 2025, at 14:39, Matthew Wilcox wrote:
>> >>
>> >>> On Wed
; balloon list while isolated), we don't have to worry about this case in
> the putback and migration callback. Add a WARN_ON_ONCE for now.
>
> Signed-off-by: David Hildenbrand
[snip]
---
Best Regards,
Huang, Ying
> Are we sure these things will be folios even before they are assigned to a
> filesystem? I recall the answer was "yes".
>
> So we don't (and will not) support movable_ops for folios.
Is it possible to use some device-specific callbacks (DMA?) to copy
between the device private folios (or pages) and the normal file/anon
folios in the future?
---
Best Regards,
Huang, Ying