This series contains a patch to solve the long-term IOVA issue which
leizhen originally tried to address at [0].
A sieved kernel log, showing periodic dumps of IOVA sizes per CPU and
per depot bin, per IOVA size granule, is available here:

https://raw.githubusercontent.com/hisilicon/kernel-dev/topic-iommu-5.10-iova-debug-v3/aging_test

Notice, for example, the following logs:

[13175.355584] print_iova1 cpu_total=40135 depot_total=3866 total=44001
[83483.457858] print_iova1 cpu_total=62532 depot_total=24476 total=87008

Here the total IOVA rcache size has grown from 44K to 87K over a long
period.

Along with this patch, I have included the following:
- A small helper to clear all IOVAs for a domain
- A change to the polarity of the IOVA magazine helpers
- A small optimisation from Cong Wang which was never applied [1].
  There was some debate about the other patches in that series, but
  this one is quite straightforward.

Differences to v2:
- Update commit message for patch 3/4

Differences to v1:
- Add IOVA clearing helper
- Add patch to change polarity of mag helpers
- Avoid logically-redundant extra variable in __iova_rcache_insert()

[0] https://lore.kernel.org/linux-iommu/20190815121104.29140-3-thunder.leiz...@huawei.com/
[1] https://lore.kernel.org/linux-iommu/4b74d40a-22d1-af53-fcb6-5d7018370...@huawei.com/

Cong Wang (1):
  iommu: avoid taking iova_rbtree_lock twice

John Garry (3):
  iommu/iova: Add free_all_cpu_cached_iovas()
  iommu/iova: Avoid double-negatives in magazine helpers
  iommu/iova: Flush CPU rcache for when a depot fills

 drivers/iommu/iova.c | 66 +++++++++++++++++++++++++-------------------
 1 file changed, 38 insertions(+), 28 deletions(-)

-- 
2.26.2

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu