On Tue, May 12, 2020 at 7:31 PM Christian König wrote:
>
> Ah!
>
> So we can't allocate memory while scheduling anything because it could
> be that memory reclaim is waiting for the scheduler to finish pushing
> things to the hardware, right?

Yup, that's my understanding. But like with all things [...]
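
To make the cycle concrete, here is a minimal sketch of the two sides of
the deadlock. All names below are hypothetical stand-ins for the scheduler
and reclaim paths discussed in this thread, not actual amdgpu code:

#include <linux/slab.h>
#include <linux/dma-fence.h>

/* Fence of a job the scheduler has not yet pushed to the hardware. */
extern struct dma_fence *job_fence;

/* Side 1: the scheduler pushing a job (cf. drm_sched -> amdgpu_job_run). */
static void push_job_to_hw(void)
{
        /*
         * GFP_KERNEL may enter direct reclaim, and direct reclaim may end
         * up waiting on job_fence below -- but job_fence can only signal
         * once this function has finished pushing the job. Deadlock.
         */
        void *buf = kmalloc(64, GFP_KERNEL);

        /* ... emit the fence, write the ring, ring the doorbell ... */
        kfree(buf);
}

/* Side 2: direct reclaim tearing down a mapping via an mmu notifier. */
static void invalidate_range_example(void)
{
        /* Must wait until the GPU is done with the pages being reclaimed. */
        dma_fence_wait(job_fence, false);
}
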
Ah!

So we can't allocate memory while scheduling anything because it could
be that memory reclaim is waiting for the scheduler to finish pushing
things to the hardware, right?

Indeed a nice problem, hadn't noticed that one.
Christian.
On 12.05.20 at 18:27, Daniel Vetter wrote:
On Tue, May 12, 2020 at 6:20 PM Daniel Vetter wrote:
>
> On Tue, May 12, 2020 at 5:56 PM Christian König wrote:
> >
> > Hui what? Offhand that doesn't look correct to me.
>
> It's not GFP_ATOMIC, it's just that GFP_ATOMIC is the only shotgun we
> have to avoid direct reclaim. And direct reclaim [...]

On Tue, May 12, 2020 at 5:56 PM Christian König wrote:
>
> Hui what? Offhand that doesn't look correct to me.
It's not GFP_ATOMIC, it's just that GFP_ATOMIC is the only shotgun we
have to avoid direct reclaim. And direct reclaim might need to call
into your mmu notifier, which might need to wait [...]
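
To spell that out: GFP_ATOMIC forbids sleeping and therefore direct
reclaim entirely, so the allocation can no longer recurse into an mmu
notifier that waits on a fence. A minimal sketch, assuming a simplified
fence-emit path (an illustration, not the actual amdgpu_fence_emit
change):

#include <linux/slab.h>
#include <linux/dma-fence.h>

/* Simplified sketch of a fence-emit path, not the actual patch. */
static struct dma_fence *emit_fence_example(void)
{
        struct dma_fence *fence;

        /*
         * GFP_ATOMIC never enters direct reclaim: it either succeeds from
         * the atomic reserves or fails immediately, so it cannot end up
         * waiting on a dma_fence. The trade-off is that it can fail, and
         * it dips into reserves meant for interrupt context.
         */
        fence = kmalloc(sizeof(*fence), GFP_ATOMIC);
        if (!fence)
                return NULL;

        /* ... dma_fence_init() and hardware emission would follow ... */
        return fence;
}

One way around that trade-off would be to preallocate the fence at
job-submission time, outside the signalling-critical path, so nothing
needs to be allocated here at all.
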
Hui what? Offhand that doesn't look correct to me.

Why the heck should this be an atomic context? If that's correct,
allocating memory is the least of the problems we have.

Regards,
Christian.
On 12.05.20 at 10:59, Daniel Vetter wrote:
My dma-fence lockdep annotations caught an inversion because we
allocate memory where we really shouldn't:
kmem_cache_alloc+0x2b/0x6d0
amdgpu_fence_emit+0x30/0x330 [amdgpu]
amdgpu_ib_schedule+0x306/0x550 [amdgpu]
amdgpu_job_run+0x10f/0x260 [amdgpu]
drm_sched [...]
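
For context, the annotations referenced above mark the scheduler's
job-run path as a fence-signalling critical section, so lockdep flags
anything inside it that can recurse into reclaim. A rough sketch, using
the dma_fence_begin_signalling()/dma_fence_end_signalling() helpers from
the series under discussion (still an RFC at this point):

#include <linux/dma-fence.h>
#include <linux/slab.h>

/* Sketch only, not the real drm_sched code. */
static void job_run_example(void)
{
        bool cookie;
        void *p;

        /* Everything up to fence signalling is a critical section. */
        cookie = dma_fence_begin_signalling();

        /*
         * A GFP_KERNEL allocation in here can enter direct reclaim and
         * thus, indirectly, wait on dma_fences; with the annotations
         * enabled, lockdep reports the inversion shown in the backtrace
         * above.
         */
        p = kmalloc(64, GFP_KERNEL);
        kfree(p);

        dma_fence_end_signalling(cookie);
}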