As mentioned in [0], the CPU may consume many cycles processing arm_smmu_cmdq_issue_cmdlist(). One issue we found is that the cmpxchg() loop used to get space on the queue takes approximately 25% of the cycles for this function.
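For illustration, below is a minimal userspace sketch of the kind of compare-and-swap reservation loop being discussed. It is not the arm-smmu-v3 driver code: the names (fake_cmdq, cmdq_claim_space, Q_SIZE) and the queue layout are made up for the example, but the retry pattern is the same, and the retries under contention are where the cycles go.

/*
 * Minimal sketch of a cmpxchg()-style space reservation loop, assuming a
 * power-of-two ring with free-running 32-bit indices. Names and sizes are
 * hypothetical; this is not the driver code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_SIZE 256u                    /* hypothetical queue depth */

struct fake_cmdq {
	_Atomic uint32_t prod;         /* producer index, claimed by CPUs */
	_Atomic uint32_t cons;         /* consumer index, advanced by HW */
};

/* Try to claim 'n' slots; returns false if the queue is currently full. */
bool cmdq_claim_space(struct fake_cmdq *q, uint32_t n, uint32_t *prod_out)
{
	uint32_t old = atomic_load(&q->prod);

	do {
		if (old - atomic_load(&q->cons) + n > Q_SIZE)
			return false;  /* full: caller must back off and retry */
		/*
		 * Every CPU that loses the race has 'old' refreshed by the
		 * failed compare-exchange and goes around again; with many
		 * CPUs issuing commands, this retry loop is the hot spot.
		 */
	} while (!atomic_compare_exchange_weak(&q->prod, &old, old + n));

	*prod_out = old;               /* caller owns slots [old, old + n) */
	return true;
}

The last patch in the series removes this compare-and-swap path from arm_smmu_cmdq_issue_cmdlist(); how the replacement reserves space is detailed in the patches themselves.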
This series removes that cmpxchg(). For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput increase:

Before: 1310 IOPs
After:  1630 IOPs

I also have a test harness to check the rate of DMA map+unmaps we can achieve:

CPU count:     32     64    128
Before:     63187  19418  10169
After:      93287  44789  15862

(unit is map+unmaps per CPU per second)

I'm not aware of any specific problem with this series, as previous issues should now be fixed, but I'm a bit nervous about how we deal with the queue being full and wrapping, and I want to test more.

Thanks

[0] https://lore.kernel.org/linux-iommu/b926444035e5e2439431908e3842afd24b8...@dggemi525-mbs.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3

John Garry (4):
  iommu/arm-smmu-v3: Fix trivial typo
  iommu/arm-smmu-v3: Calculate bits for prod and owner
  iommu/arm-smmu-v3: Always issue a CMD_SYNC per batch
  iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()

 drivers/iommu/arm-smmu-v3.c | 210 ++++++++++++++++++++++--------------
 1 file changed, 131 insertions(+), 79 deletions(-)

--
2.26.2