On Fri, 2018-05-18 at 22:52 -0400, Jan Vesely wrote:
> On Fri, 2018-05-18 at 21:31 -0500, Aaron Watry wrote:
> > On Fri, May 18, 2018 at 1:15 PM, Jan Vesely <jan.ves...@rutgers.edu> wrote:
> > > On Thu, 2018-05-17 at 19:20 -0500, Aaron Watry wrote:
> > > > They're not used anywhere else, so keep them private
> > > >
> > > > Signed-off-by: Aaron Watry <awa...@gmail.com>
> > >
> > > I'm not sure what the original purpose of the removed functions was.
> > > If you recall please add it to the commit message.
> >
> > I didn't really bother looking into it when I made this change (I
> > didn't write this code in the first place). This is something that
> > I've had sitting in my local repo for a while, and just want to stop
> > rebasing it.
> >
> > It was originally found a while ago when I started looking into the
> > thread-safety (or lack thereof) of the compute pool for r600. Figured
> > there was no point trying to fix code that was unused.
>
> OK, nvm then. I was just curious how we ended up with so much dead
> code.
Hiya,

I can comment on that (mainly because I was the one who almost
re-wrote the memory pool). The reason is that I did most of this as a
GSoC project, which started with trying to fix a bug in the code that
grew the pool when memory was needed, and continued with improving the
pool. These functions are mostly remains from the original
implementation.

If I remember correctly, I sent a patch to remove them (and also a big
patch which was a comment explaining how the pool worked and why it
was like that), but it never got reviewed. In fact, the implementation
IS NOT complete, as it lacks a path to unmap memory. I also had some
patches for this, but as before, they never got reviewed... I might
still have the patches on my old laptop, but I don't have access to
it, sorry.

Either way, I more or less remember how everything worked, although I
don't have a way of testing it because I no longer have a computer
with an R600 card. If nothing breaks (compiling) you can have my

Reviewed-by: Bruno Jiménez <brunoji...@gmail.com>

If you need anything regarding this code, just ask and I'll see if I
can dig up from my memory how it worked (and why).

Best Regards,
Bruno

> > > Either way:
> > >
> > > Reviewed-by: Jan Vesely <jan.ves...@rutgers.edu>
> >
> > For this one, or both?
>
> for the series.
>
> thanks,
> Jan
>
> > --Aaron
> >
> > > Jan
> > >
> > > > ---
> > > >  .../drivers/r600/compute_memory_pool.c | 35 +++++++++++++++----
> > > >  .../drivers/r600/compute_memory_pool.h | 24 -------------
> > > >  2 files changed, 29 insertions(+), 30 deletions(-)
> > > >
> > > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.c b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > index d1ef25ae1e..981d944b8d 100644
> > > > --- a/src/gallium/drivers/r600/compute_memory_pool.c
> > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.c
> > > > @@ -43,6 +43,29 @@
> > > >  #include <inttypes.h>
> > > >
> > > >  #define ITEM_ALIGNMENT 1024
> > > > +
> > > > +/* A few forward declarations of static functions */
> > > > +static void compute_memory_shadow(struct compute_memory_pool* pool,
> > > > +	struct pipe_context *pipe, int device_to_host);
> > > > +
> > > > +static void compute_memory_defrag(struct compute_memory_pool *pool,
> > > > +	struct pipe_resource *src, struct pipe_resource *dst,
> > > > +	struct pipe_context *pipe);
> > > > +
> > > > +static int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > > +	struct compute_memory_item *item, struct pipe_context *pipe,
> > > > +	int64_t allocated);
> > > > +
> > > > +static void compute_memory_move_item(struct compute_memory_pool *pool,
> > > > +	struct pipe_resource *src, struct pipe_resource *dst,
> > > > +	struct compute_memory_item *item, uint64_t new_start_in_dw,
> > > > +	struct pipe_context *pipe);
> > > > +
> > > > +static void compute_memory_transfer(struct compute_memory_pool* pool,
> > > > +	struct pipe_context * pipe, int device_to_host,
> > > > +	struct compute_memory_item* chunk, void* data,
> > > > +	int offset_in_chunk, int size);
> > > > +
> > > >  /**
> > > >   * Creates a new pool.
> > > >   */
> > > > @@ -106,7 +129,7 @@ void compute_memory_pool_delete(struct compute_memory_pool* pool)
> > > >   * \returns -1 if it fails, 0 otherwise
> > > >   * \see compute_memory_finalize_pending
> > > >   */
> > > > -int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
> > > > +static int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
> > > >  	struct pipe_context *pipe, int new_size_in_dw)
> > > >  {
> > > >  	new_size_in_dw = align(new_size_in_dw, ITEM_ALIGNMENT);
> > > > @@ -168,7 +191,7 @@ int compute_memory_grow_defrag_pool(struct compute_memory_pool *pool,
> > > >   * \param device_to_host 1 for device->host, 0 for host->device
> > > >   * \see compute_memory_grow_defrag_pool
> > > >   */
> > > > -void compute_memory_shadow(struct compute_memory_pool* pool,
> > > > +static void compute_memory_shadow(struct compute_memory_pool* pool,
> > > >  	struct pipe_context * pipe, int device_to_host)
> > > >  {
> > > >  	struct compute_memory_item chunk;
> > > > @@ -262,7 +285,7 @@ int compute_memory_finalize_pending(struct compute_memory_pool* pool,
> > > >   * \param dst The destination resource
> > > >   * \see compute_memory_grow_defrag_pool and compute_memory_finalize_pending
> > > >   */
> > > > -void compute_memory_defrag(struct compute_memory_pool *pool,
> > > > +static void compute_memory_defrag(struct compute_memory_pool *pool,
> > > >  	struct pipe_resource *src, struct pipe_resource *dst,
> > > >  	struct pipe_context *pipe)
> > > >  {
> > > > @@ -292,7 +315,7 @@ void compute_memory_defrag(struct compute_memory_pool *pool,
> > > >   * \return -1 if it fails, 0 otherwise
> > > >   * \see compute_memory_finalize_pending
> > > >   */
> > > > -int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > > +static int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > >  	struct compute_memory_item *item, struct pipe_context *pipe,
> > > >  	int64_t start_in_dw)
> > > >  {
> > > > @@ -397,7 +420,7 @@ void compute_memory_demote_item(struct compute_memory_pool *pool,
> > > >   * \param new_start_in_dw The new position of the item in \a item_list
> > > >   * \see compute_memory_defrag
> > > >   */
> > > > -void compute_memory_move_item(struct compute_memory_pool *pool,
> > > > +static void compute_memory_move_item(struct compute_memory_pool *pool,
> > > >  	struct pipe_resource *src, struct pipe_resource *dst,
> > > >  	struct compute_memory_item *item, uint64_t new_start_in_dw,
> > > >  	struct pipe_context *pipe)
> > > > @@ -569,7 +592,7 @@ struct compute_memory_item* compute_memory_alloc(
> > > >   * \param device_to_host 1 for device->host, 0 for host->device.
> > > >   * \see compute_memory_shadow
> > > >   */
> > > > -void compute_memory_transfer(
> > > > +static void compute_memory_transfer(
> > > >  	struct compute_memory_pool* pool,
> > > >  	struct pipe_context * pipe,
> > > >  	int device_to_host,
> > > > diff --git a/src/gallium/drivers/r600/compute_memory_pool.h b/src/gallium/drivers/r600/compute_memory_pool.h
> > > > index 3a17c5176b..2064e56352 100644
> > > > --- a/src/gallium/drivers/r600/compute_memory_pool.h
> > > > +++ b/src/gallium/drivers/r600/compute_memory_pool.h
> > > > @@ -86,39 +86,15 @@ struct compute_memory_pool* compute_memory_pool_new(struct r600_screen *rscreen)
> > > >
> > > >  void compute_memory_pool_delete(struct compute_memory_pool* pool);
> > > >
> > > > -int compute_memory_grow_defrag_pool(struct compute_memory_pool* pool,
> > > > -	struct pipe_context *pipe, int new_size_in_dw);
> > > > -
> > > > -void compute_memory_shadow(struct compute_memory_pool* pool,
> > > > -	struct pipe_context *pipe, int device_to_host);
> > > > -
> > > >  int compute_memory_finalize_pending(struct compute_memory_pool* pool,
> > > >  	struct pipe_context * pipe);
> > > >
> > > > -void compute_memory_defrag(struct compute_memory_pool *pool,
> > > > -	struct pipe_resource *src, struct pipe_resource *dst,
> > > > -	struct pipe_context *pipe);
> > > > -
> > > > -int compute_memory_promote_item(struct compute_memory_pool *pool,
> > > > -	struct compute_memory_item *item, struct pipe_context *pipe,
> > > > -	int64_t allocated);
> > > > -
> > > >  void compute_memory_demote_item(struct compute_memory_pool *pool,
> > > >  	struct compute_memory_item *item, struct pipe_context *pipe);
> > > >
> > > > -void compute_memory_move_item(struct compute_memory_pool *pool,
> > > > -	struct pipe_resource *src, struct pipe_resource *dst,
> > > > -	struct compute_memory_item *item, uint64_t new_start_in_dw,
> > > > -	struct pipe_context *pipe);
> > > > -
> > > >  void compute_memory_free(struct compute_memory_pool* pool, int64_t id);
> > > >
> > > >  struct compute_memory_item* compute_memory_alloc(struct compute_memory_pool* pool,
> > > >  	int64_t size_in_dw);
> > > >
> > > > -void compute_memory_transfer(struct compute_memory_pool* pool,
> > > > -	struct pipe_context * pipe, int device_to_host,
> > > > -	struct compute_memory_item* chunk, void* data,
> > > > -	int offset_in_chunk, int size);
> > > > -
> > > >  #endif
>
> _______________________________________________
> mesa-dev mailing list
> mesa-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
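[Editor's note] The change above boils down to a standard C idiom: helpers used by only one translation unit lose their header prototypes and become `static`, with forward declarations at the top of the .c file so definition order doesn't matter. A minimal sketch of that shape, with made-up `pool_*` names standing in for the real r600 functions:

```c
/* Illustrative stand-ins for the real r600 types -- not the actual API. */
struct pool { int size_in_dw; };

#define ITEM_ALIGNMENT 1024

/* Forward declarations of file-private helpers.  Marking them static
 * and declaring them here, instead of in the header, is the move the
 * patch performs for compute_memory_shadow() and friends. */
static int pool_align(int x, int a);
static int pool_grow(struct pool *p, int new_size_in_dw);

/* Public entry point: the only prototype a header would still carry. */
int pool_alloc(struct pool *p, int size_in_dw)
{
	if (size_in_dw > p->size_in_dw)
		return pool_grow(p, size_in_dw);
	return 0;
}

/* Round x up to a multiple of a (a must be a power of two). */
static int pool_align(int x, int a)
{
	return (x + a - 1) & ~(a - 1);
}

static int pool_grow(struct pool *p, int new_size_in_dw)
{
	p->size_in_dw = pool_align(new_size_in_dw, ITEM_ALIGNMENT);
	return 0;
}
```

With this shape the helpers get internal linkage, so callers outside the file can only reach `pool_alloc()`, and the compiler can warn when a static helper becomes unused, which is how dead code like the removed functions tends to get caught.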