04/06/2021 13:07, Wang, Haiyue:
> > From: Elena Agostini <eagost...@nvidia.com>
> > +typedef int (*gpu_malloc_t)(struct rte_gpu_dev *dev, size_t size, void **ptr);
> > +typedef int (*gpu_free_t)(struct rte_gpu_dev *dev, void *ptr);
> > +
[...]
> > +	/* FUNCTION: Allocate memory on the GPU. */
> > +	gpu_malloc_t gpu_malloc;
> > +	/* FUNCTION: Allocate memory on the CPU visible from the GPU. */
> > +	gpu_malloc_t gpu_malloc_visible;
> > +	/* FUNCTION: Free allocated memory on the GPU. */
> > +	gpu_free_t gpu_free;
>
> I'm wondering whether we could define the malloc type as:
>
> typedef int (*gpu_malloc_t)(struct rte_gpu_dev *dev, size_t size, void **ptr,
>                             unsigned int flags)
>
> #define RTE_GPU_MALLOC_F_CPU_VISIBLE 0x01u --> gpu_malloc_visible
>
> Then only one malloc function member is needed, paired with 'gpu_free'.
[...]
> > +int rte_gpu_malloc(uint16_t gpu_id, size_t size, void **ptr);
[...]
> > +int rte_gpu_malloc_visible(uint16_t gpu_id, size_t size, void **ptr);
>
> Then 'rte_gpu_malloc_visible' is not needed, and the new call is:
>
> rte_gpu_malloc(uint16_t gpu_id, size_t size, void **ptr,
>                RTE_GPU_MALLOC_F_CPU_VISIBLE).
>
> Also, we can define more flags for feature extension. ;-)
Yes, it is a good idea. Another question is about the function rte_gpu_free(): given only the pointer, how do we recognize whether a memory chunk was allocated as CPU-visible or as plain GPU memory, so that the right release path is used?
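
To make the question concrete, below is a rough sketch of one way a driver could answer it with the flags approach: keep a small per-device registry that records the flags each chunk was allocated with, and let the free callback look the pointer up to pick the matching release path. Apart from gpu_malloc_t, gpu_free_t and RTE_GPU_MALLOC_F_CPU_VISIBLE, which come from the proposal quoted above, every name here (my_gpu_malloc, my_gpu_free, struct gpu_alloc, alloc_list) is hypothetical, just to illustrate the idea:

#include <errno.h>
#include <stdlib.h>

#define RTE_GPU_MALLOC_F_CPU_VISIBLE 0x01u  /* from the proposal above */

struct rte_gpu_dev;  /* opaque here */

/* Hypothetical bookkeeping node: one per outstanding allocation. */
struct gpu_alloc {
	void *ptr;
	unsigned int flags;  /* flags given to the malloc callback */
	struct gpu_alloc *next;
};

/* In a real driver this list would live in the device private data. */
static struct gpu_alloc *alloc_list;

static int
my_gpu_malloc(struct rte_gpu_dev *dev, size_t size, void **ptr,
		unsigned int flags)
{
	struct gpu_alloc *a;

	(void)dev;
	/* ... vendor API call to allocate 'size' bytes into *ptr,
	 * device memory or host-visible memory depending on 'flags' ... */

	/* Record how this chunk was allocated, keyed by its pointer. */
	a = malloc(sizeof(*a));
	if (a == NULL)
		return -ENOMEM;
	a->ptr = *ptr;
	a->flags = flags;
	a->next = alloc_list;
	alloc_list = a;
	return 0;
}

static int
my_gpu_free(struct rte_gpu_dev *dev, void *ptr)
{
	struct gpu_alloc **prev, *a;

	(void)dev;
	/* Look the pointer up to learn how the chunk was allocated. */
	for (prev = &alloc_list; (a = *prev) != NULL; prev = &a->next)
		if (a->ptr == ptr)
			break;
	if (a == NULL)
		return -EINVAL;  /* pointer we never handed out */

	if (a->flags & RTE_GPU_MALLOC_F_CPU_VISIBLE) {
		/* ... vendor call to free host-visible memory ... */
	} else {
		/* ... vendor call to free device memory ... */
	}

	*prev = a->next;
	free(a);
	return 0;
}

The alternative of asking the caller to pass the same flags again to rte_gpu_free() would avoid the lookup, but it is error-prone; keying on the pointer keeps the free(3)-like contract that the pointer alone is enough.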