On 9/28/2020 6:11 PM, Vijayanand Jitta wrote:
>
>
> On 9/18/2020 8:11 PM, Robin Murphy wrote:
>> On 2020-08-20 13:49, vji...@codeaurora.org wrote:
>>> From: Vijayanand Jitta <vji...@codeaurora.org>
>>>
>>> Whenever an iova alloc request fails, we free the iova
>>> ranges present in the percpu iova rcaches and then retry,
>>> but the global iova rcache is not freed. As a result we could
>>> still see iova alloc failures even after the retry, since the
>>> global rcache is holding the iovas, which can cause
>>> fragmentation. So, free the global iova rcache as well and
>>> then go for the retry.
>>>
>>> Signed-off-by: Vijayanand Jitta <vji...@codeaurora.org>
>>> ---
>>>  drivers/iommu/iova.c | 23 +++++++++++++++++++++++
>>>  include/linux/iova.h |  6 ++++++
>>>  2 files changed, 29 insertions(+)
>>>
>>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>>> index 4e77116..5836c87 100644
>>> --- a/drivers/iommu/iova.c
>>> +++ b/drivers/iommu/iova.c
>>> @@ -442,6 +442,7 @@ struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
>>>  		flush_rcache = false;
>>>  		for_each_online_cpu(cpu)
>>>  			free_cpu_cached_iovas(cpu, iovad);
>>> +		free_global_cached_iovas(iovad);
>>>  		goto retry;
>>>  	}
>>>
>>> @@ -1055,5 +1056,27 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
>>>  	}
>>>  }
>>>
>>> +/*
>>> + * free all the IOVA ranges of global cache
>>> + */
>>> +void free_global_cached_iovas(struct iova_domain *iovad)
>>
>> As John pointed out last time, this should be static and the header
>> changes dropped.
>>
>> (TBH we should probably register our own hotplug notifier instance
>> for a flush queue, so that external code has no need to poke at the
>> per-CPU caches either)
>>
>> Robin.
>>
>
> Right, I have made it static and dropped the header changes in v3.
> Can you please review that?
>
> Thanks,
> Vijay
Please review v4 instead of v3; I have updated the other patch as well in v4.

Thanks,
Vijay

>>> +{
>>> +	struct iova_rcache *rcache;
>>> +	unsigned long flags;
>>> +	int i, j;
>>> +
>>> +	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>>> +		rcache = &iovad->rcaches[i];
>>> +		spin_lock_irqsave(&rcache->lock, flags);
>>> +		for (j = 0; j < rcache->depot_size; ++j) {
>>> +			iova_magazine_free_pfns(rcache->depot[j], iovad);
>>> +			iova_magazine_free(rcache->depot[j]);
>>> +			rcache->depot[j] = NULL;
>>> +		}
>>> +		rcache->depot_size = 0;
>>> +		spin_unlock_irqrestore(&rcache->lock, flags);
>>> +	}
>>> +}
>>> +
>>>  MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamur...@intel.com>");
>>>  MODULE_LICENSE("GPL");
>>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>>> index a0637ab..a905726 100644
>>> --- a/include/linux/iova.h
>>> +++ b/include/linux/iova.h
>>> @@ -163,6 +163,7 @@ int init_iova_flush_queue(struct iova_domain *iovad,
>>>  struct iova *split_and_remove_iova(struct iova_domain *iovad,
>>>  	struct iova *iova, unsigned long pfn_lo, unsigned long pfn_hi);
>>>  void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
>>> +void free_global_cached_iovas(struct iova_domain *iovad);
>>>  #else
>>>  static inline int iova_cache_get(void)
>>>  {
>>> @@ -270,6 +271,11 @@ static inline void free_cpu_cached_iovas(unsigned int cpu,
>>>  					  struct iova_domain *iovad)
>>>  {
>>>  }
>>> +
>>> +static inline void free_global_cached_iovas(struct iova_domain *iovad)
>>> +{
>>> +}
>>> +
>>>  #endif
>>>  #endif
>>>
>

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu