> > Do you want to update ->addr here?
> >
>
> I don't get that question. We wanted to track the last adjusted addr in
> tlb->addr because when we do a tlb_flush_mmu_tlbonly() we do a
> __tlb_reset_range(), which clears tlb->start and tlb->end. Now we need
> to update the range again with the last adjusted addr.
Hillf Danton writes:
>> >> @@ -1202,7 +1205,12 @@ again:
>> >> 	if (force_flush) {
>> >> 		force_flush = 0;
>> >> 		tlb_flush_mmu_free(tlb);
>> >> -
>> >> +		if (pending_page) {
>> >> +			/* remove the page with new size */
>> >> +			__tlb_adjust_range(tlb, tlb->addr);
Hillf Danton writes:
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 15322b73636b..a01db5bc756b 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -292,23 +292,24 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
>>   * handling the additional races in SMP caused by other CPUs caching valid mappings in their TLBs.
This updates the generic and arch-specific implementations to return true
if we need to do a tlb flush. That means if a __tlb_remove_page indicates
a flush is needed, the page we are trying to remove needs to be tracked and
added again after the flush. We need to track it because we have already
updated the pte.