On Fri, Oct 23, 2020 at 11:20 AM Lucas Stach wrote:
>
> On Fr, 2020-10-23 at 09:51 -0700, Rob Clark wrote:
> > From: Rob Clark
> >
> > If there is only a single ring (no preemption), everything is FIFO order
> > and there is no need for implicit sync.
> >
> > Mesa should probably just always use M
On Fr, 2020-10-23 at 09:51 -0700, Rob Clark wrote:
> From: Rob Clark
>
> If there is only a single ring (no preemption), everything is FIFO order
> and there is no need for implicit sync.
>
> Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as behavior
> is undefined when fences are n
From: Rob Clark
Now that the inactive_list is protected by mm_lock, and everything
else on a per-obj basis is protected by obj->lock, we no longer depend
on struct_mutex.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 1 -
drivers/gpu/
From: Rob Clark
Small cleanup: update_fences() is used in the hangcheck path, but also
in the normal retire path.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gpu.c | 28 ++--
1 file changed, 14 in
From: Rob Clark
Unfortunately, due to a dev_pm_opp locking interaction with
mm->mmap_sem, we need to do the pm get before acquiring obj locks,
otherwise we can anger lockdep with the chain:
opp_table_lock --> &mm->mmap_sem --> reservation_ww_class_mutex
For an explicit fencing userspace, the
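A rough sketch of the ordering this establishes (pm_runtime_get_sync() and
reservation_ww_class_mutex are real; the surrounding function shape and
submit_lock_objects() placement are illustrative, not the literal diff):

  /* Take the PM reference first: the pm get path can take
   * opp_table_lock and fault on mm->mmap_sem.  Only afterwards
   * acquire obj locks (reservation_ww_class_mutex), so the
   * opp_table_lock -> mmap_sem -> resv chain never nests inside
   * a held obj lock. */
  ret = pm_runtime_get_sync(&gpu->pdev->dev);
  if (ret < 0)
          return ret;

  ret = submit_lock_objects(submit);    /* obj locks only now */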
From: Rob Clark
If there is only a single ring (no preemption), everything is FIFO order
and there is no need for implicit sync.
Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as behavior
is undefined when fences are not used to synchronize buffer usage across
contexts (which is the
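The core check is simple; a minimal sketch, assuming the existing
gpu->nr_rings count and the MSM_SUBMIT_NO_IMPLICIT uapi flag (the helper
name itself is made up for illustration):

  static bool submit_needs_implicit_sync(struct msm_gpu *gpu, uint32_t flags)
  {
          if (flags & MSM_SUBMIT_NO_IMPLICIT)
                  return false;
          /* a single ring means FIFO execution: a later submit can
           * never overtake an earlier one, so implicit-sync fences
           * add nothing */
          if (gpu->nr_rings == 1)
                  return false;
          return true;
  }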
From: Rob Clark
One less place to rely on dev->struct_mutex.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem_submit.c | 2 ++
drivers/gpu/drm/msm/msm_gpu.c | 37 ++--
drivers/gpu/drm/msm/m
From: Rob Clark
Now that we don't need struct_mutex in the free path, we can get rid of
the asynchronous free altogether.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_drv.c | 3 ---
drivers/gpu/drm/msm/msm_drv.h | 5 -
drivers/gpu/drm/msm/msm_
From: Rob Clark
We cannot switch to using obj->resv for locking without first moving all
the copy_from_user() calls ahead of submit_lock_objects(). Otherwise in the
mm fault path we acquire mm->mmap_sem before the obj lock, but in the submit
path the order is reversed.
Signed-off-by: Rob Clark
Reviewed-b
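In other words, the submit ioctl has to be staged so that everything that
can fault comes first; a sketch under that assumption (helper names are
illustrative):

  /* stage 1: all copy_from_user() — may fault and take mm->mmap_sem */
  ret = submit_copy_cmds(submit, args);
  if (ret)
          goto out;

  /* stage 2: only now take obj->resv locks; the fault path takes
   * mmap_sem -> obj lock, so holding an obj lock across a fault
   * would invert the order */
  ret = submit_lock_objects(submit);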
From: Rob Clark
The obj->lock is sufficient for what we need.
This *does* have the implication that userspace can try to shoot
themselves in the foot by racing madvise(DONTNEED) with submit. But
the result will be about the same if they did madvise(DONTNEED) before
the submit ioctl, ie. they mi
From: Rob Clark
It is somewhat redundant with the gpu tracepoints, and anyway not
useful enough to justify spamming the log when debug traces are enabled.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gpu.c | 1 -
1 file cha
From: Rob Clark
The microcode bo's should never be madvise(WONTNEED), so these should
not be using msm_gem_get_vaddr_active().
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
2 files c
From: Rob Clark
Move grabbing the bo lock into the shrinker, with a msm_gem_trylock() to
skip over bo's that are already locked. This gets rid of the nested
lock classes.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 24 +---
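The scan loop then becomes roughly the following (a sketch assuming the
existing inactive_list and is_purgeable() helper; purge() is shorthand for
the actual purge path):

  list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
          /* don't wait on bos that are locked elsewhere — just
           * skip them, so no nested lock class is ever needed */
          if (!msm_gem_trylock(&msm_obj->base))
                  continue;
          if (is_purgeable(msm_obj)) {
                  purge(msm_obj);
                  freed += msm_obj->base.size >> PAGE_SHIFT;
          }
          msm_gem_unlock(&msm_obj->base);
  }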
From: Rob Clark
Before we remove dev->struct_mutex from the retire path, we have to deal
with the situation of a submit retiring before the submit ioctl returns.
To deal with this, ring->submits will hold a reference to the submit,
which is dropped when the submit is retired. And the submit ioc
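Conceptually (a sketch; the get/put helper names are assumptions layered on
a kref inside the submit):

  /* queue: one reference for the ioctl caller, one for ring->submits */
  msm_gem_submit_get(submit);
  list_add_tail(&submit->node, &ring->submits);

  /* retire: drop the ring's reference; the submit is only freed
   * once the ioctl path has also dropped its own */
  list_del(&submit->node);
  msm_gem_submit_put(submit);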
From: Rob Clark
Now that active_list/inactive_list is protected by mm_lock, we no longer
need dev->struct_mutex in the free_object() path.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 8
1 file changed, 8 deletions(-)
diff --git a/
From: Rob Clark
When we cut-over to using dma_resv_lock/etc instead of msm_obj->lock,
we'll need these for the submit path (where resv->lock is already held).
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 89 +++---
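The shape of the split is the usual locked/unlocked pairing; a sketch,
assuming an internal get_vaddr() helper and msm_gem_lock()/unlock()
wrappers:

  void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj)
  {
          /* submit path: the resv lock is already held by the caller */
          return get_vaddr(obj, MSM_MADV_WILLNEED);
  }

  void *msm_gem_get_vaddr(struct drm_gem_object *obj)
  {
          void *ret;

          msm_gem_lock(obj);
          ret = msm_gem_get_vaddr_locked(obj);
          msm_gem_unlock(obj);

          return ret;
  }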
From: Rob Clark
Now that we are not relying on dev->struct_mutex to protect the
ring->submits lists, drop the struct_mutex lock.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gpu.c | 8 +---
1 file changed, 1 insertion
From: Rob Clark
Before adding another lock, give ring->lock a more descriptive name.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 ++--
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 ++--
d
From: Rob Clark
We only want to use the _unlocked() variant in the unlocked case.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu
From: Rob Clark
It cannot be atomically updated with obj->active_count, and the only
purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once
retire_submits() is not serialized with incoming submits via
struct_mutex).
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
From: Rob Clark
Rather than relying on the big dev->struct_mutex hammer, introduce a
more specific lock for protecting the bo lists.
Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_debugfs.c | 7 +++
drivers/gpu/dr
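The idea is a narrow lock whose only job is list membership; a minimal
sketch (field placement in msm_drm_private is illustrative):

  struct msm_drm_private {
          ...
          struct mutex mm_lock;   /* protects active_list/inactive_list */
  };

  /* moving a bo between lists only needs mm_lock, not struct_mutex */
  mutex_lock(&priv->mm_lock);
  list_del(&msm_obj->mm_list);
  list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
  mutex_unlock(&priv->mm_lock);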
From: Rob Clark
This also converts the special msm_gem_get_vaddr_active() to expect the
lock to already be held. There are two call-sites for this; one already
has the lock held, so it is more straightforward to just open-code the
locking for the other caller.
Signed-off-by: Rob Clark
Reviewed
From: Rob Clark
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 1 +
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 1 +
drivers/gpu/drm/msm/dsi/dsi_host.c | 1 +
drivers/gpu/drm/msm/msm_drv.h | 54
From: Rob Clark
We'll need to introduce a _locked() version of msm_gem_get_iova(), so we
need to make that name available.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/d
From: Rob Clark
This doesn't remove *all* the struct_mutex, but it covers the worst
of it, ie. shrinker/madvise/free/retire. The submit path still uses
struct_mutex, but it still needs *something* to serialize a portion of
the submit path, and lock_stat mostly just shows the lock contention
there b
From: Rob Clark
This will make it easier to transition over to obj->resv locking for
all of the per-bo locking.
Signed-off-by: Rob Clark
Reviewed-by: Kristian H. Kristensen
---
drivers/gpu/drm/msm/msm_gem.c | 99 ---
drivers/gpu/drm/msm/msm_gem.h | 28 +
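Funnelling every lock/unlock through small helpers means the later
cut-over to dma_resv is a one-spot change; a sketch of where the helpers
end up pointing (dma_resv_lock/trylock/unlock are the real dma-buf API):

  static inline void msm_gem_lock(struct drm_gem_object *obj)
  {
          dma_resv_lock(obj->resv, NULL);
  }

  static inline bool msm_gem_trylock(struct drm_gem_object *obj)
  {
          return dma_resv_trylock(obj->resv);
  }

  static inline void msm_gem_unlock(struct drm_gem_object *obj)
  {
          dma_resv_unlock(obj->resv);
  }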
On Fri, Oct 23, 2020 at 1:55 AM Kristian Høgsberg wrote:
>
> On Mon, Oct 19, 2020 at 10:45 PM Rob Clark wrote:
> >
> > From: Rob Clark
> >
> > Move grabbing the bo lock into the shrinker, with a msm_gem_trylock() to
> > skip over bo's that are already locked. This gets rid of the nested
> > lock cl
On Mon, Oct 19, 2020 at 10:45 PM Rob Clark wrote:
>
> From: Rob Clark
>
> This doesn't remove *all* the struct_mutex, but it covers the worst
> of it, ie. shrinker/madvise/free/retire. The submit path still uses
> struct_mutex, but it still needs *something* to serialize a portion of
> the submit p
On Mon, Oct 19, 2020 at 10:45 PM Rob Clark wrote:
>
> From: Rob Clark
>
> We cannot switch to using obj->resv for locking without first moving all
> the copy_from_user() calls ahead of submit_lock_objects(). Otherwise in the
> mm fault path we acquire mm->mmap_sem before the obj lock, but in the submit
> p
On Mon, Oct 19, 2020 at 10:45 PM Rob Clark wrote:
>
> From: Rob Clark
>
> Move grabbing the bo lock into the shrinker, with a msm_gem_trylock() to
> skip over bo's that are already locked. This gets rid of the nested
> lock classes.
>
> Signed-off-by: Rob Clark
> ---
> drivers/gpu/drm/msm/msm_gem.