On Fri, Mar 05, 2021 at 04:20:58PM -0800, 'Ira Weiny' wrote:
> From: Ira Weiny <ira.we...@intel.com>
> 
> kmap() is inefficient and we are trying to reduce its usage in the kernel.
> There is no readily apparent reason why initp_page needs to be allocated
> and kmap()'ed, but sigstruct needs to be page aligned and the token
> 512-byte aligned.

Friendly ping, maybe I missed a response?
Ira

> 
> kmalloc() can give us this alignment, but we need to allocate PAGE_SIZE
> bytes to do so.  Rather than change this kmap() to kmap_local_page(),
> use kmalloc() instead.
> 
> Remove the alloc_page()/kmap() pair and replace it with kmalloc(PAGE_SIZE, ...)
> to get a page-aligned kernel address to use.
> 
> In addition add a comment to document the alignment requirements so that
> others like myself don't attempt to 'fix' this again.
> 
> Cc: Sean Christopherson <sea...@google.com>
> Cc: Jethro Beekman <jet...@fortanix.com>
> Reviewed-by: Jarkko Sakkinen <jar...@kernel.org>
> Acked-by: Dave Hansen <dave.han...@intel.com>
> Signed-off-by: Ira Weiny <ira.we...@intel.com>
> 
> ---
> Changes from v4[4]:
>       Add Ack and Reviews
>       Send to the correct maintainers
> 
> Changes from v3[3]:
>       Remove BUILD_BUG_ONs
> 
> Changes from v2[2]:
>         When allocating a power of 2 size kmalloc() now guarantees the
>         alignment of the respective size.  So go back to using kmalloc() but
>         with a PAGE_SIZE allocation to get the alignment.  This also follows
>         the pattern in sgx_ioc_enclave_create()
> 
> Changes from v1[1]:
> 	Use page_address() instead of kmalloc() to ensure sigstruct is
> 	page aligned
>       Use BUILD_BUG_ON to ensure token and sigstruct don't collide.
> 
> [1] https://lore.kernel.org/lkml/20210129001459.1538805-1-ira.we...@intel.com/
> [2] https://lore.kernel.org/lkml/20210202013725.3514671-1-ira.we...@intel.com/
> [3] https://lore.kernel.org/lkml/20210205050850.gc5...@iweiny-desk2.sc.intel.com/#t
> [4] https://lore.kernel.org/lkml/ycby02ieklvyj...@kernel.org/
> 
> ---
>  arch/x86/kernel/cpu/sgx/ioctl.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index 90a5caf76939..38e540de5e2a 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -604,7 +604,6 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
>  {
>       struct sgx_sigstruct *sigstruct;
>       struct sgx_enclave_init init_arg;
> -     struct page *initp_page;
>       void *token;
>       int ret;
>  
> @@ -615,11 +614,14 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
>       if (copy_from_user(&init_arg, arg, sizeof(init_arg)))
>               return -EFAULT;
>  
> -     initp_page = alloc_page(GFP_KERNEL);
> -     if (!initp_page)
> +     /*
> +      * sigstruct must be on a page boundary and token on a 512 byte boundary.
> +      * kmalloc() gives us this alignment when allocating PAGE_SIZE bytes.
> +      */
> +     sigstruct = kmalloc(PAGE_SIZE, GFP_KERNEL);
> +     if (!sigstruct)
>               return -ENOMEM;
>  
> -     sigstruct = kmap(initp_page);
>       token = (void *)((unsigned long)sigstruct + PAGE_SIZE / 2);
>       memset(token, 0, SGX_LAUNCH_TOKEN_SIZE);
>  
> @@ -645,8 +647,7 @@ static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
>       ret = sgx_encl_init(encl, sigstruct, token);
>  
>  out:
> -     kunmap(initp_page);
> -     __free_page(initp_page);
> +     kfree(sigstruct);
>       return ret;
>  }
>  
> -- 
> 2.28.0.rc0.12.gb6a658bd00c9
> 
