On Tue, Nov 17, 2020 at 01:38:14PM -0500, Andrey Grodzovsky wrote:
> 
> On 6/22/20 5:53 AM, Daniel Vetter wrote:
> > On Sun, Jun 21, 2020 at 02:03:08AM -0400, Andrey Grodzovsky wrote:
> > > No point in trying recovery if the device is gone, it just messes things up.
> > > 
> > > Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
> > > ---
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 16 ++++++++++++++++
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  8 ++++++++
> > >   2 files changed, 24 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > > index 6932d75..5d6d3d9 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > > @@ -1129,12 +1129,28 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
> > >           return ret;
> > >   }
> > > +static void amdgpu_cancel_all_tdr(struct amdgpu_device *adev)
> > > +{
> > > + int i;
> > > +
> > > + for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> > > +         struct amdgpu_ring *ring = adev->rings[i];
> > > +
> > > +         if (!ring || !ring->sched.thread)
> > > +                 continue;
> > > +
> > > +         cancel_delayed_work_sync(&ring->sched.work_tdr);
> > > + }
> > > +}
> > I think this is a function that's supposed to be in drm/scheduler, not
> > here. Might also just be your cleanup code being ordered wrongly, or your
> > split in one of the earlier patches not done quite right.
> > -Daniel
> 
> 
> This function iterates over all the schedulers per amdgpu device and
> accesses amdgpu-specific structures; drm/scheduler deals with a single
> scheduler at most, so this looks to me like the right place for this
> function.
I guess we could keep track of all schedulers somewhere in a list in
struct drm_device and wrap this up. That was kinda the idea.
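
Rough sketch of what I mean below. Entirely hypothetical: neither the
sched_list/sched_list_lock members of struct drm_device, the
sched_list_node member of struct drm_gpu_scheduler, nor a
drm_sched_cancel_all() helper exist today; drivers would have to
register each scheduler on that list at init time.

#include <drm/drm_device.h>
#include <drm/gpu_scheduler.h>

/*
 * Hypothetical: drivers add each drm_gpu_scheduler they create to
 * dev->sched_list (a new struct list_head in struct drm_device,
 * protected by a new dev->sched_list_lock mutex) via a new
 * sched->sched_list_node during init.
 */
void drm_sched_cancel_all(struct drm_device *dev)
{
	struct drm_gpu_scheduler *sched;

	mutex_lock(&dev->sched_list_lock);
	list_for_each_entry(sched, &dev->sched_list, sched_list_node)
		/* Flush any pending timeout handling for this scheduler. */
		cancel_delayed_work_sync(&sched->work_tdr);
	mutex_unlock(&dev->sched_list_lock);
}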

At a minimum I think we need a tiny wrapper, with docs, around the
cancel_delayed_work_sync(&sched->work_tdr); the docs should explain what
you must observe to make sure there's no race. I'm not sure there's any
guarantee here that we won't get a new tdr work launched right
afterwards, so this looks a bit like a hack.
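
Roughly what I have in mind, as a sketch only (the name is made up, and
this would live in drm/scheduler):

/**
 * drm_sched_cancel_timeout - stop timeout handling (hypothetical name)
 * @sched: scheduler to stop timeout handling on
 *
 * Cancels a pending tdr work item and waits for a running one to
 * finish. Callers must guarantee that nothing can re-arm the timeout
 * afterwards, i.e. no new job may be pushed to the hardware and no
 * pending job may complete once this returns, since both of those
 * paths queue work_tdr again through drm_sched_start_timeout().
 */
static void drm_sched_cancel_timeout(struct drm_gpu_scheduler *sched)
{
	cancel_delayed_work_sync(&sched->work_tdr);
}
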
-Daniel

> 
> Andrey
> 
> 
> > 
> > > +
> > >   static void
> > >   amdgpu_pci_remove(struct pci_dev *pdev)
> > >   {
> > >           struct drm_device *dev = pci_get_drvdata(pdev);
> > > + struct amdgpu_device *adev = dev->dev_private;
> > >           drm_dev_unplug(dev);
> > > + amdgpu_cancel_all_tdr(adev);
> > >           ttm_bo_unmap_virtual_address_space(&adev->mman.bdev);
> > >           amdgpu_driver_unload_kms(dev);
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > > index 4720718..87ff0c0 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > > @@ -28,6 +28,8 @@
> > >   #include "amdgpu.h"
> > >   #include "amdgpu_trace.h"
> > > +#include <drm/drm_drv.h>
> > > +
> > >   static void amdgpu_job_timedout(struct drm_sched_job *s_job)
> > >   {
> > >           struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> > > @@ -37,6 +39,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
> > >           memset(&ti, 0, sizeof(struct amdgpu_task_info));
> > > + if (drm_dev_is_unplugged(adev->ddev)) {
> > > +         DRM_INFO("ring %s timeout, but device unplugged, skipping.\n",
> > > +                                   s_job->sched->name);
> > > +         return;
> > > + }
> > > +
> > >           if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
> > >                   DRM_ERROR("ring %s timeout, but soft recovered\n",
> > >                             s_job->sched->name);
> > > -- 
> > > 2.7.4
> > > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch