On 08/29/2018 09:32 AM, Andy Lutomirski wrote:
> It's plausible that there are workloads where the current code is
> faster, such as where we're munmapping a single page via syscall and
> we'd prefer to only flush that one TLB entry even if the flush
> operation is slower as a result.
Yeah, I don't
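
The trade-off Andy describes above (keeping unrelated TLB entries alive by flushing page-by-page versus doing one cheap-but-total flush) can be sketched in ordinary C. Everything below is illustrative only: flush_one_entry(), flush_everything() and the threshold are made-up stand-ins, not kernel interfaces, although the value 33 is borrowed from what is believed to be the kernel's default single-page-flush ceiling.

#include <stdio.h>

#define PAGE_SIZE		4096UL
#define FULL_FLUSH_THRESHOLD	33	/* assumed cut-off, purely for illustration */

static void flush_one_entry(unsigned long addr)
{
	/* Stand-in for an INVLPG-style single-entry flush. */
	printf("flush entry at %#lx\n", addr);
}

static void flush_everything(void)
{
	/* Stand-in for a full TLB flush (e.g. a CR3 write). */
	printf("flush the whole TLB\n");
}

static void flush_range(unsigned long start, unsigned long nr_pages)
{
	/*
	 * Small ranges (e.g. munmapping a single page via syscall): flush
	 * only the affected entries so unrelated translations survive, even
	 * though each individual flush operation is slower.
	 */
	if (nr_pages <= FULL_FLUSH_THRESHOLD) {
		while (nr_pages--) {
			flush_one_entry(start);
			start += PAGE_SIZE;
		}
		return;
	}

	/* Large ranges: one wholesale flush is cheaper overall. */
	flush_everything();
}

int main(void)
{
	flush_range(0x7f0000000000UL, 1);	/* the single-page munmap case above */
	flush_range(0x7f0000000000UL, 512);	/* large unmap: full flush wins */
	return 0;
}
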
On Wed, Aug 29, 2018 at 9:14 AM, Peter Zijlstra wrote:
> On Wed, Aug 29, 2018 at 08:46:04AM -0700, Andy Lutomirski wrote:
>> On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra wrote:
>> > On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
>>
>> >> + pte_clear(poking_mm, poking_addr, ptep);
On Wed, Aug 29, 2018 at 08:46:04AM -0700, Andy Lutomirski wrote:
> On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra wrote:
> > On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
>
> >> + pte_clear(poking_mm, poking_addr, ptep);
> >> +
> >> + /*
> >> + * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra wrote:
> On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
>> + pte_clear(poking_mm, poking_addr, ptep);
>> +
>> + /*
>> + * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>> + * as it also flushes
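
Pulling the quoted fragments together, the sequence under discussion is roughly the one below: map the target page at poking_addr in the dedicated poking_mm, write, clear the PTE, then flush only that one address on the local CPU. This is a sketch against that assumed poking_mm/poking_addr setup, not the patch itself; in the real code the temporary mm has to be switched in around the write, and locking and error handling differ.

/* Sketch only: assumes the patch's poking_mm and poking_addr exist. */
static void poke_one_page(struct page *page, unsigned long offset,
			  const void *opcode, size_t len)
{
	spinlock_t *ptl;
	pte_t *ptep;

	/* Map the page being patched at the reserved poking address. */
	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	set_pte_at(poking_mm, poking_addr, ptep, mk_pte(page, PAGE_KERNEL));

	/* The real patch switches to poking_mm around this write (omitted here). */
	memcpy((void *)(poking_addr + offset), opcode, len);

	/* Tear the temporary mapping down again... */
	pte_clear(poking_mm, poking_addr, ptep);

	/*
	 * ...and invalidate just that one address on the local CPU.  With
	 * PTI enabled, __flush_tlb_one_user() also flushes the user-ASID
	 * copy, which is the redundancy the quoted comment refers to.
	 */
	__flush_tlb_one_user(poking_addr);

	pte_unmap_unlock(ptep, ptl);
}
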
On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
> +static void text_poke_fixmap(void *addr, const void *opcode, size_t len,
> + struct page *pages[2])
> +{
> + u8 *vaddr;
> +
> + set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
> + if (pages[1])
> +	set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
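
For context, text_poke_fixmap() above factors out the long-standing fixmap-based poking path. That path looks roughly like the sketch below, based on the existing text_poke() code rather than on the rest of this patch, which is cut off here; the real caller also disables interrupts around it and verifies the bytes afterwards.

static void text_poke_fixmap_sketch(void *addr, const void *opcode,
				    size_t len, struct page *pages[2])
{
	u8 *vaddr;

	/* Map the one or two physical pages that back the target address. */
	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
	if (pages[1])
		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));

	/* Patch through the temporary virtual mapping. */
	vaddr = (u8 *)fix_to_virt(FIX_TEXT_POKE0);
	memcpy(vaddr + offset_in_page(addr), opcode, len);

	/* Drop the temporary mappings again. */
	clear_fixmap(FIX_TEXT_POKE0);
	if (pages[1])
		clear_fixmap(FIX_TEXT_POKE1);
}
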