On 01/02/2021 08:21, Boris Brezillon wrote:
Doing a hw-irq -> threaded-irq round-trip is counter-productive; stay
in the threaded irq handler as long as we can.

Signed-off-by: Boris Brezillon <boris.brezil...@collabora.com>

Looks fine to me, but I'm interested to know if you actually saw a performance improvement. Back-to-back MMU faults should (hopefully) be fairly uncommon.

Regardless:

Reviewed-by: Steven Price <steven.pr...@arm.com>

---
  drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 21e552d1ac71..65bc20628c4e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -580,6 +580,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
        u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
        int i, ret;
+again:
+
        for (i = 0; status; i++) {
                u32 mask = BIT(i) | BIT(i + 16);
                u64 addr;
@@ -628,6 +630,11 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
                status &= ~mask;
        }
+       /* If we received new MMU interrupts, process them before returning. */
+       status = mmu_read(pfdev, MMU_INT_RAWSTAT);
+       if (status)
+               goto again;
+
        mmu_write(pfdev, MMU_INT_MASK, ~0);
        return IRQ_HANDLED;
  }


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
