On Tue, 2017-03-07 at 09:49 -0800, Jason Ekstrand wrote:
> On Tue, Mar 7, 2017 at 12:22 AM, Juan A. Suarez Romero <jasua...@igalia.com> 
> wrote:
> > On Mon, 2017-02-27 at 13:48 +0100, Juan A. Suarez Romero wrote:
> > > Current Anv allocator assign memory in terms of a fixed block size.
> > > 
> > > But there can be cases where this block is not enough for a memory
> > > request, and thus several blocks must be assigned in a row.
> > > 
> > > This commit adds support for specifying how many blocks of memory must
> > > be assigned.
> > 
> > Jason, any thoughts about this?
> > 
> 
> The issue you guys ran into as well as bug 100092 have gotten me thinking 
> about allocation a bit...
> 
> I don't think the solution proposed here is really workable.  Kristian and I 
> talked about something similar in the early days of the driver.  However, the 
> way the free lists work, all blocks on the free list have to be the same 
> size.  Because it's a lock-free data structure, you can only ever pull the 
> first thing off the list.  This means that you can never really hope to get
> N consecutive blocks from the free list, and every time you want to allocate 
> something bigger than the block size, you have to pull "fresh" blocks.  If 
> someone actually is using a framebuffer with 2k attachments (which is crazy) 
> in a real rendering loop, then the block pool will grow unbounded.
> 

Not sure if I follow. It is true that if you request more blocks than are
available in the free list, it will pull new fresh blocks. But the free list
can contain more than N blocks: that happens when you later return the N fresh
blocks you pulled back to the free list. The list stores how many blocks it
holds, so the next alloc() checks whether the free list has enough blocks to
satisfy the request. If so, it assigns them (the drawback at the moment is
that if you only need 1 block and the list holds more, all of them are
returned). If there are not enough, it discards them and gets all of the
blocks as fresh ones; in that case we are not reusing the blocks in the free
list.
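
To make that concrete, here is a rough sketch of the alloc path I am
describing. The names and the free-list layout are invented for illustration,
the lock-free/CAS handling is left out, and I am glossing over exactly what
happens to a too-small free-list entry; it is not the actual anv_allocator.c
code:

#include <stdint.h>

struct sketch_free_list {
   int32_t  offset;    /* offset of the first free block, -1 if empty */
   uint32_t n_blocks;  /* how many consecutive blocks it holds */
};

struct sketch_block_pool {
   struct sketch_free_list free_list;
   uint32_t block_size;
   int32_t  next;      /* where fresh blocks are carved from */
};

/* Pull n_blocks fresh, consecutive blocks from the end of the pool. */
static int32_t
sketch_pool_grow(struct sketch_block_pool *pool, uint32_t n_blocks)
{
   int32_t offset = pool->next;
   pool->next += n_blocks * pool->block_size;
   return offset;
}

static int32_t
sketch_pool_alloc_n(struct sketch_block_pool *pool, uint32_t n_blocks)
{
   struct sketch_free_list fl = pool->free_list;

   if (fl.offset >= 0 && fl.n_blocks >= n_blocks) {
      /* Enough blocks on the free list: hand back the whole entry.
       * This is the drawback mentioned above: even a 1-block request
       * takes every block the entry holds.
       */
      pool->free_list.offset = -1;
      pool->free_list.n_blocks = 0;
      return fl.offset;
   }

   /* Not enough: the listed blocks are not reused here; everything is
    * pulled as fresh blocks instead.
    */
   return sketch_pool_grow(pool, n_blocks);
}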

I agree that this solution is not the best one; it is just a small improvement
over the current implementation to fix a case that I am not sure is very
common (I never hit it until running those tests).


Related to this: the free list, rather than behaving like a list, only ever
contains a single block (or N, with the changes done here). I am just
wondering why this list is required at all. Couldn't we just adjust the
pointers in the pool when returning the blocks?
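
For reference, this is roughly how I picture the lock-free pop working. It is
a simplified sketch with invented names rather than the code in
anv_allocator.c, and it also shows why only the head of the list is ever
reachable, which is the constraint mentioned above:

#include <stdint.h>

union sketch_lockfree_list {
   struct {
      int32_t  offset;  /* offset of the first free block, -1 if empty */
      uint32_t count;   /* bumped on every update, guards against ABA */
   };
   uint64_t u64;
};

/* Each free block stores the offset of the next free block in its
 * first bytes.
 */
static int32_t *
sketch_next_ptr(void *map, int32_t offset)
{
   return (int32_t *)((char *)map + offset);
}

static int32_t
sketch_lockfree_list_pop(union sketch_lockfree_list *list, void *map)
{
   union sketch_lockfree_list current, new_list;

   do {
      current.u64 = list->u64;
      if (current.offset < 0)
         return -1;   /* empty: the caller must pull fresh blocks */

      /* Only the head is reachable, so a run of N consecutive blocks
       * cannot be carved out of the middle of the list.
       */
      new_list.offset = *sketch_next_ptr(map, current.offset);
      new_list.count  = current.count + 1;
   } while (!__sync_bool_compare_and_swap(&list->u64, current.u64,
                                          new_list.u64));

   return current.offset;
}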

> There's a couple potential solutions to this.  One would be to allocate a 
> blob of surface states per subpass rather than one for the entire render 
> pass.  Since the number of surface states per subpass is limited by the size 
> of a binding table (~250), this would guarantee an upper bound on the number 
> of surface states that get allocated per-block.  This would probably mean 
> more surface states being allocated but that's not a big deal.
> 
> Another option would be to make the block pool so that it can have different 
> block sizes on the different ends of the pool.  We could continue to allocate 
> whatever size blocks we want for binding tables but would be able to bump the 
> surface state block size up to something absurd such as 1MiB.  It would still 
> have a hard limit but if your Vulkan application is using a render pass with 
> 8k attachments, something is probably wrong.
> 
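
If it helps the discussion, this is how I read the second option: one pool
growing from both ends, with an independent block size per end. A purely
illustrative sketch with invented names, not a proposed implementation:

#include <stdint.h>

struct sketch_two_ended_pool {
   /* Binding tables keep a small block size on one end of the pool... */
   uint32_t bt_block_size;   /* e.g. the current block size */
   int32_t  bt_next;         /* grows in one direction */

   /* ...while surface states get a much bigger one on the other end. */
   uint32_t ss_block_size;   /* e.g. 1 MiB, as suggested above */
   int32_t  ss_next;         /* grows in the opposite direction */
};

static int32_t
sketch_alloc_surface_block(struct sketch_two_ended_pool *pool)
{
   /* Surface-state blocks come from the "big block" end. */
   int32_t offset = pool->ss_next;
   pool->ss_next += pool->ss_block_size;
   return offset;
}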
