As mentioned in [0], the CPU may consume many cycles processing
arm_smmu_cmdq_issue_cmdlist(). One issue we found is that the cmpxchg()
loop used to get space on the queue takes a lot of time once many CPUs
start contending - from experiment, with 64 CPUs contending for the
cmdq, the success rate is ~1 in 12, which is poor, but not totally
awful.
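For illustration, here is a minimal userspace sketch of the two
reservation schemes (C11 atomics; names like q_prod and Q_DEPTH are
made up for the example, and the real driver also has to track free
space and wrap - the series itself uses a 64b atomic add with prod
kept in a separate 32b field):

#include <stdatomic.h>
#include <stdint.h>

#define Q_DEPTH 1024	/* hypothetical queue depth, power of two */

static _Atomic uint32_t q_prod;

/* Old scheme: cmpxchg() loop; retries grow with CPU contention */
static uint32_t reserve_cmpxchg(uint32_t n)
{
	uint32_t old = atomic_load(&q_prod);
	uint32_t next;

	do {
		/* the real code also checks for queue space here */
		next = old + n;
	} while (!atomic_compare_exchange_weak(&q_prod, &old, next));

	return old & (Q_DEPTH - 1);
}

/* New scheme: a single atomic add always succeeds, no retry loop */
static uint32_t reserve_add(uint32_t n)
{
	return atomic_fetch_add(&q_prod, n) & (Q_DEPTH - 1);
}

Under contention, each failed cmpxchg() forces a reload and retry
(hence the ~1 in 12 success rate above), whereas fetch_add is
serialised by the hardware and never retries.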
This series removes that cmpxchg() and replaces it with an atomic_add,
the same as how the actual cmdq deals with maintaining the prod
pointer.

For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput
increase:
Before: 1250K IOPs
After:  1550K IOPs

I also have a test harness to check the rate of DMA map+unmaps we can
achieve:

CPU count	8	16	32	64
Before:		282K	115K	36K	11K
After:		302K	193K	80K	30K

(unit is map+unmaps per CPU per second)

[0] https://lore.kernel.org/linux-iommu/b926444035e5e2439431908e3842afd24b8...@dggemi525-mbs.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3

Differences to v1:
- Simplify by dropping patch to always issue a CMD_SYNC
- Use 64b atomic add, keeping prod in a separate 32b field

John Garry (2):
  iommu/arm-smmu-v3: Calculate max commands per batch
  iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 166 ++++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

-- 
2.26.2