On 22/06/2020 18:28, John Garry wrote:

Hi, can you let me know whether this is on the radar at all?

I have been raising this performance issue since January and have not had much of a response.

thanks

As mentioned in [0], the CPU may consume many cycles processing
arm_smmu_cmdq_issue_cmdlist(). One issue we found is that the cmpxchg() loop
used to get space on the command queue takes approximately 25% of the cycles
spent in this function.

This series removes that cmpxchg().
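To illustrate the pattern in question, here is a minimal sketch of a contended
cmpxchg() reservation loop on a shared producer index. This is not the actual
arm-smmu-v3 code; the structure, field names, and simplified space check are
assumptions made purely for illustration:

/*
 * Minimal sketch (not the driver code) of reserving space on a shared
 * command queue with a cmpxchg() retry loop. The layout and the space
 * check are simplified assumptions.
 */
#include <linux/atomic.h>
#include <linux/types.h>

struct cmdq_sketch {
	atomic_t prod;	/* shared producer index */
	u32 cons;	/* consumer index (simplified; real code tracks this atomically) */
	u32 depth;	/* queue depth */
};

/* Reserve n slots; returns the producer index at which they start. */
static u32 cmdq_reserve(struct cmdq_sketch *q, u32 n)
{
	u32 old, new;

	do {
		old = atomic_read(&q->prod);

		/* Spin while the queue is too full to take n new entries. */
		while (old - q->cons + n > q->depth)
			old = atomic_read(&q->prod);

		new = old + n;
		/*
		 * Every CPU that loses this race re-reads prod and retries;
		 * under contention this retry loop is where roughly a
		 * quarter of the function's cycles were measured to go.
		 */
	} while (atomic_cmpxchg(&q->prod, old, new) != old);

	return old;
}

The key point is that every CPU which loses the race must re-read the producer
index and retry, so the loop scales poorly as the CPU count grows.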

For my NVMe test with 3x NVMe SSDs, I'm getting a ~24% throughput
increase:
Before: 1310 IOPs
After: 1630 IOPs

I also have a test harness to check the rate of DMA map+unmaps we can
achieve:

CPU count       32      64      128
Before:         63187   19418   10169
After:          93287   44789   15862

(unit is map+unmaps per CPU per second)
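For reference, below is a rough sketch of the kind of per-CPU measurement loop
such a harness might use. This is not the actual harness; the device handle,
buffer size, and one-second timing window are assumptions for illustration:

/*
 * Rough sketch of measuring DMA map+unmap pairs per second on one CPU
 * (not the actual test harness; device, buffer size, and timing window
 * are assumptions).
 */
#include <linux/dma-mapping.h>
#include <linux/ktime.h>
#include <linux/slab.h>

static unsigned long measure_map_unmap_rate(struct device *dev)
{
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	unsigned long iters = 0;
	ktime_t start;
	dma_addr_t dma;

	if (!buf)
		return 0;

	start = ktime_get();
	/* Map and immediately unmap for ~1s, counting completed pairs. */
	while (ktime_ms_delta(ktime_get(), start) < 1000) {
		dma = dma_map_single(dev, buf, PAGE_SIZE, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma))
			break;
		dma_unmap_single(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);
		iters++;
	}

	kfree(buf);
	return iters;	/* map+unmaps per second on this CPU */
}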

[0] https://lore.kernel.org/linux-iommu/b926444035e5e2439431908e3842afd24b8...@dggemi525-mbs.china.huawei.com/T/#ma02e301c38c3e94b7725e685757c27e39c7cbde3

John Garry (4):
   iommu/arm-smmu-v3: Fix trivial typo
   iommu/arm-smmu-v3: Calculate bits for prod and owner
   iommu/arm-smmu-v3: Always issue a CMD_SYNC per batch
   iommu/arm-smmu-v3: Remove cmpxchg() in arm_smmu_cmdq_issue_cmdlist()

  drivers/iommu/arm-smmu-v3.c | 233 +++++++++++++++++++++++-------------
  1 file changed, 151 insertions(+), 82 deletions(-)

