On 2019/04/26 17:11, Koenig, Christian wrote:
> On 26.04.19 at 11:07, zhoucm1 wrote:
> [SNIP]
>>> +        spin_lock(&glob->lru_lock);
>>> +        for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
>>> +                if (list_empty(&man->lru[i]))
>>> +                        continue;
>>> +                bo = list_first_entry(&man->lru[i],
>>> +
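For reference, the quoted loop appears to scan the per-priority LRU lists under the global LRU lock and pick the first BO it finds. A rough sketch of that pattern, not the actual patch: the function name here is made up, the types and fields follow the TTM code of that era, and the snipped list_first_entry() arguments are assumed to be the usual struct ttm_buffer_object/lru pair:

/* Illustrative only: walk the per-priority LRU lists and return the
 * least recently used BO, if any. */
static struct ttm_buffer_object *
first_bo_on_lru(struct ttm_bo_global *glob, struct ttm_mem_type_manager *man)
{
        struct ttm_buffer_object *bo = NULL;
        unsigned i;

        spin_lock(&glob->lru_lock);
        for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
                if (list_empty(&man->lru[i]))
                        continue;       /* nothing queued at this priority */

                /* The first entry is the least recently used BO. */
                bo = list_first_entry(&man->lru[i],
                                      struct ttm_buffer_object, lru);
                break;
        }
        spin_unlock(&glob->lru_lock);

        return bo;
}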
On 2019/04/26 16:31, Christian König wrote:
> On 25.04.19 at 09:39, Chunming Zhou wrote:
>> A heavy GPU job can occupy memory for a long time, which leads other
>> users to fail to get memory.
>>
>> This basically picks up Christian's idea:
>>
>> 1. Reserve the BO in DC using a ww_mutex ticket (trivial).
>
> Any reason you don't want to split this into a separate patch?
>
>> 2. If we then run into this EBUSY condition in TTM, check if the BO we
>> need memory for (or rather the ww_mutex of its reservation object) has
>> a ticket assigned.
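To make the two steps above concrete, here is a minimal sketch, not the actual patch: reserve_bo_with_ticket() and bo_busy_with_our_ticket() are hypothetical helpers, the bo->resv->lock and lock.ctx fields follow the reservation_object/ww_mutex layout of that kernel generation, and real code would need proper synchronization around the ctx read:

#include <linux/ww_mutex.h>

/* Step 1 (sketch): the caller, e.g. DC, reserves its BO with a ticket. */
static int reserve_bo_with_ticket(struct ttm_buffer_object *bo,
                                  struct ww_acquire_ctx *ticket)
{
        return ww_mutex_lock(&bo->resv->lock, ticket);
}

/* Step 2 (sketch): when eviction hits -EBUSY, check whether the
 * blocking BO was reserved with the same ticket we are allocating
 * under; if so, we are only contending with ourselves and can keep
 * waiting instead of failing the allocation. */
static bool bo_busy_with_our_ticket(struct ttm_buffer_object *busy_bo,
                                    struct ww_acquire_ctx *ticket)
{
        return READ_ONCE(busy_bo->resv->lock.ctx) == ticket;
}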