On Fri, Apr 12, 2019 at 09:52:47AM +0200, Vitaly Kuznetsov wrote:
> Maya Nakamura <m.maya.nakam...@gmail.com> writes:
> 
> > On Fri, Apr 05, 2019 at 01:31:02PM +0200, Vitaly Kuznetsov wrote:
> >> Maya Nakamura <m.maya.nakam...@gmail.com> writes:
> >> 
> >> > @@ -98,18 +99,20 @@ EXPORT_SYMBOL_GPL(hyperv_pcpu_input_arg);
> >> >  u32 hv_max_vp_index;
> >> >  EXPORT_SYMBOL_GPL(hv_max_vp_index);
> >> >  
> >> > +struct kmem_cache *cachep;
> >> > +EXPORT_SYMBOL_GPL(cachep);
> >> > +
> >> >  static int hv_cpu_init(unsigned int cpu)
> >> >  {
> >> >          u64 msr_vp_index;
> >> >          struct hv_vp_assist_page **hvp = 
> >> > &hv_vp_assist_page[smp_processor_id()];
> >> >          void **input_arg;
> >> > -        struct page *pg;
> >> >  
> >> >          input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
> >> > -        pg = alloc_page(GFP_KERNEL);
> >> > -        if (unlikely(!pg))
> >> > +        *input_arg = kmem_cache_alloc(cachep, GFP_KERNEL);
> >> 
> >> I'm not sure use of kmem_cache is justified here: pages we allocate are
> >> not cache-line and all these allocations are supposed to persist for the
> >> lifetime of the guest. In case you think that even on x86 it will be
> >> possible to see PAGE_SIZE != HV_HYP_PAGE_SIZE you can use alloc_pages()
> >> instead.
> >> 
> > Thank you for your feedback, Vitaly!
> >
> > Will you please tell me how cache-line relates to kmem_cache?
> >
> > I understand that alloc_pages() would work when PAGE_SIZE <=
> > HV_HYP_PAGE_SIZE, but I think that it would not work if PAGE_SIZE >
> > HV_HYP_PAGE_SIZE.
> 
> Sorry, my bad: I meant to say "not cache-like" (these allocations are
> not 'cache') but the typo made it completely incomprehensible. 
 
No worries! Thank you for sharing your thoughts with me, Vitaly.

Do you know of any alternatives to kmem_cache that can allocate memory
of a specified size (different from the guest page size) with
guaranteed alignment? Memory allocated by alloc_page() is aligned but
limited to the guest page size, and kmalloc() can allocate a particular
size but does not seem to guarantee alignment. I am asking while
considering the changes to the architecture-independent code.
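
For reference, this is roughly how the current patch gets aligned
allocations, relying on the align argument of kmem_cache_create() (a
minimal sketch, untested; the cache name string here is made up):

/* needs <linux/slab.h> */
cachep = kmem_cache_create("hv_hypercall",
			   HV_HYP_PAGE_SIZE,	/* object size */
			   HV_HYP_PAGE_SIZE,	/* alignment */
			   0, NULL);
if (!cachep)
	return -ENOMEM;

/* each object is HV_HYP_PAGE_SIZE bytes, HV_HYP_PAGE_SIZE aligned */
*input_arg = kmem_cache_alloc(cachep, GFP_KERNEL);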

> >> Also, in case the idea is to generalize stuff, what will happen if
> >> PAGE_SIZE > HV_HYP_PAGE_SIZE? Who will guarantee proper alignment?
> >> 
> >> I think we can leave hypercall arguments, vp_assist and similar pages
> >> alone for now: the code is not going to be shared among architectures
> >> anyways.
> >> 
> > About the alignment, kmem_cache_create() aligns memory with its third
> > parameter, offset.
> 
> Yes, I know, I was trying to think about a (hypothetical) situation when
> page sizes differ: what would be the memory alignment requirements from
> the hypervisor for e.g. hypercall arguments? In case it's always
> HV_HYP_PAGE_SIZE we're good but could it be PAGE_SIZE (for e.g. TLB
> flush hypercall)? I don't know. For x86 this discussion probably makes
> no sense. I'm, however, struggling to understand what benefit we will
> get from the change. Maybe just leave it as-is for now and fix
> arch-independent code only? And later, if we decide to generalize this
> code, make another approach? (Not insisting, just a suggestion)

Thank you for the suggestion, Vitaly!

The introduction of HV_HYP_PAGE_SIZE rests on an assumption about
future page sizes: following the general trend, the guest page size may
grow larger than the Hyper-V page size, but not smaller, which I think
is a reasonable assumption.
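
If it helps to make that explicit, a compile-time guard along these
lines could document the assumption (hypothetical, not part of the
patch):

	/* guest pages are assumed to be at least as large as Hyper-V pages */
	BUILD_BUG_ON(PAGE_SIZE < HV_HYP_PAGE_SIZE);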

> >> > @@ -338,7 +349,10 @@ void __init hyperv_init(void)
> >> >          guest_id = generate_guest_id(0, LINUX_VERSION_CODE, 0);
> >> >          wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id);
> >> >  
> >> > -        hv_hypercall_pg  = __vmalloc(PAGE_SIZE, GFP_KERNEL, 
> >> > PAGE_KERNEL_RX);
> >> > +        hv_hypercall_pg = kmem_cache_alloc(cachep, GFP_KERNEL);
> >> > +        if (hv_hypercall_pg)
> >> > +                set_memory_x((unsigned long)hv_hypercall_pg, 1);
> >> 
> >> _RX is not writeable, right?
> >> 
> > Yes, you are correct. I should use set_memory_ro() in addition to
> > set_memory_x().
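
Concretely, I am thinking of something along these lines to keep the
page read-only and executable, as PAGE_KERNEL_RX did (a rough sketch
based on the current patch, untested; needs <asm/set_memory.h>):

hv_hypercall_pg = kmem_cache_alloc(cachep, GFP_KERNEL);
if (hv_hypercall_pg) {
	/* match the old PAGE_KERNEL_RX protection: read-only, executable */
	set_memory_ro((unsigned long)hv_hypercall_pg, 1);
	set_memory_x((unsigned long)hv_hypercall_pg, 1);
}
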
> >
> >> > @@ -416,6 +431,7 @@ void hyperv_cleanup(void)
> >> >           * let hypercall operations fail safely rather than
> >> >           * panic the kernel for using invalid hypercall page
> >> >           */
> >> > +        kmem_cache_free(cachep, hv_hypercall_pg);
> >> 
> >> Please don't do that: hyperv_cleanup() is called on kexec/kdump and
> >> we're trying to do the bare minimum to allow next kernel to boot. Doing
> >> excessive work here will likely lead to consequent problems (we're
> >> already crashing in case it's kdump!).
> >> 
> > Thank you for the explanation! I will remove that.
> >
> 
> -- 
> Vitaly
