> -----Original Message-----
> From: Thomas Monjalon <tho...@monjalon.net>
> Sent: Friday, June 4, 2021 20:44
> To: Wang, Haiyue <haiyue.w...@intel.com>
> Cc: dev@dpdk.org; Elena Agostini <eagost...@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH] gpudev: introduce memory API
>
> 04/06/2021 13:07, Wang, Haiyue:
> > > From: Elena Agostini <eagost...@nvidia.com>
> > > +typedef int (*gpu_malloc_t)(struct rte_gpu_dev *dev, size_t size, void **ptr);
> > > +typedef int (*gpu_free_t)(struct rte_gpu_dev *dev, void *ptr);
> > > +
> [...]
> > > + /* FUNCTION: Allocate memory on the GPU. */
> > > + gpu_malloc_t gpu_malloc;
> > > + /* FUNCTION: Allocate memory on the CPU visible from the GPU. */
> > > + gpu_malloc_t gpu_malloc_visible;
> > > + /* FUNCTION: Free allocated memory on the GPU. */
> > > + gpu_free_t gpu_free;
> >
> > I'm wondering that we can define the malloc type as:
> >
> > typedef int (*gpu_malloc_t)(struct rte_gpu_dev *dev, size_t size, void **ptr,
> >                             unsigned int flags)
> >
> > #define RTE_GPU_MALLOC_F_CPU_VISIBLE 0x01u --> gpu_malloc_visible
> >
> > Then only one malloc function member is needed, paired with 'gpu_free'.
> [...]
> > > +int rte_gpu_malloc(uint16_t gpu_id, size_t size, void **ptr);
> [...]
> > > +int rte_gpu_malloc_visible(uint16_t gpu_id, size_t size, void **ptr);
> >
> > Then 'rte_gpu_malloc_visible' is no needed, and the new call is:
> >
> > rte_gpu_malloc(uint16_t gpu_id, size_t size, void **ptr,
> >                RTE_GPU_MALLOC_F_CPU_VISIBLE).
> >
> > Also, we can define more flags for feature extension. ;-)
>
> Yes it is a good idea.
>
> Another question is about the function rte_gpu_free().
> How do we recognize that a memory chunk is from the CPU and GPU visible,
> or just from GPU?
>
I didn't find an rte_gpu_free_visible definition, and rte_gpu_free's comment just says:
"deallocate a chunk of memory allocated with rte_gpu_malloc*".

So it looks like rte_gpu_free is meant to handle both cases. And from the definition
"rte_gpu_free(uint16_t gpu_id, void *ptr)", free already has to check whether the memory
belongs to that GPU or not, so it should also be able to recognize the memory type, I think.