On 5/22/21 3:13 AM, Christian König wrote:
Hi Zack,
> IIRC that was for the VMW_PL_GMR type, wasn't it?
> As far as I have seen that backend was just giving out unique numbers, and it
> looked questionable that we allocated pages for that.
> E.g. when you set that flag, then for each allocation we also allocate a TTM
> tt structure and a corresponding page.
Got ya. Yea, it's a little messy, but I think it's correct. Those unique numbers
are just identifiers for the bo's; the actual memory backing them is regular
system memory (i.e. we just tell our virtual hardware: here are some guest system
pages, and here's a unique id that we'll be using to refer to them).
Tangentially, this also relates to a small issue with the rework of the memory
accounting and the removal of the old page allocator. With the old page allocator
we could specify the limit of system memory that the allocator was allowed to use
(via ttm_check_under_lowerlimit), so the memory accounting that we've moved back
to vmwgfx does nothing right now (well, it "accounts", it just doesn't act on the
limit ;) ).
We could probably add a call to ttm_check_under_lowerlimit in our ttm_populate
callback (vmw_ttm_populate), but that's a little wacky, because in some
situations we do want to ignore the limit on system memory allocations, a
purpose for which we used to use ttm_operation_ctx::force_alloc. I don't love
designs that are so driver specific, so I'd prefer to avoid a force_alloc that
is only used internally by vmwgfx, but I don't see a clean way of putting a
limit on the system memory our driver uses.
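Roughly, the populate-side check I have in mind would look like the sketch below (again, hypothetical names, not the real vmw_ttm_populate or ttm_operation_ctx definitions): the limit is enforced unless the caller explicitly asked to bypass it, which is the force_alloc-style escape hatch that makes the design feel driver specific:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for ttm_operation_ctx::force_alloc: when set,
 * the system-memory limit is deliberately ignored. */
struct alloc_ctx {
	bool force_alloc;
};

struct sys_mem_limit {
	size_t used_pages;
	size_t limit_pages;
};

/* Sketch of the check a vmw_ttm_populate-style callback could make:
 * refuse the allocation when it would exceed the limit, unless the
 * caller asked to bypass the check. */
static bool populate_charge(const struct alloc_ctx *ctx,
			    struct sys_mem_limit *lim,
			    size_t npages)
{
	if (!ctx->force_alloc &&
	    lim->used_pages + npages > lim->limit_pages)
		return false;	/* would act on the limit here */
	lim->used_pages += npages;
	return true;
}
```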
Just to explain: our virtual hardware is basically an integrated GPU nowadays,
so all the memory it allocates comes from system memory (with those unique
numbers to identify it), and, especially on VMs that have lower amounts of RAM,
we would like to limit how much of it gets used for graphics.
z