Theoretically, on some machines faults might be generated faster than the CPU can clear them. Limit the fault-handling loop to the number of hardware fault-recording registers.
Cc: Alex Williamson <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Lu Baolu <[email protected]>
Cc: [email protected]
Signed-off-by: Dmitry Safonov <[email protected]>
---
 drivers/iommu/dmar.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 6c4ea32ee6a9..cf1105111209 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1615,7 +1615,7 @@ static int dmar_fault_do_one(struct intel_iommu *iommu, int type,
 irqreturn_t dmar_fault(int irq, void *dev_id)
 {
 	struct intel_iommu *iommu = dev_id;
-	int reg, fault_index;
+	int reg, fault_index, i;
 	u32 fault_status;
 	unsigned long flag;
 	static DEFINE_RATELIMIT_STATE(rs,
@@ -1633,7 +1633,7 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
 	fault_index = dma_fsts_fault_record_index(fault_status);
 	reg = cap_fault_reg_offset(iommu->cap);
 
-	while (1) {
+	for (i = 0; i < cap_num_fault_regs(iommu->cap); i++) {
 		/* Disable printing, simply clear the fault when ratelimited */
 		bool ratelimited = !__ratelimit(&rs);
 		u8 fault_reason;
-- 
2.13.6
