On 07.09.2020 09:40, Paul Durrant wrote:
> From: Paul Durrant <pdurr...@amazon.com>
> 
> This patch adds a full I/O TLB flush to the error paths of iommu_map() and
> iommu_unmap().
> 
> Without this change callers need constructs such as:
> 
>   rc = iommu_map/unmap(...)
>   err = iommu_flush(...)
>   if ( !rc )
>     rc = err;
> 
> With this change, it can be simplified to:
> 
>   rc = iommu_map/unmap(...)
>   if ( !rc )
>     rc = iommu_flush(...)
> 
> because, if the map or unmap fails, the flush will be unnecessary. This saves
> a stack variable and generally makes the call sites tidier.
> 
> Signed-off-by: Paul Durrant <pdurr...@amazon.com>
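For illustration only, the simplified call-site pattern described above can be sketched with stub functions. These are hypothetical stand-ins (flush_done, legacy_map, the bool fail parameter), not the actual Xen implementation; the point is only that once the map's error path flushes internally, the caller can gate its flush on success alone:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the interfaces discussed in the patch. */
static bool flush_done;

/* With the patch, a failing map performs a full IOTLB flush itself on
 * its error path, so the caller need not flush after a failure. */
static int iommu_map_stub(bool fail)
{
    if ( fail )
    {
        flush_done = true;   /* error path flushes the whole IOTLB */
        return -1;
    }
    return 0;
}

static int iommu_iotlb_flush_stub(void)
{
    flush_done = true;
    return 0;
}

/* Simplified call site enabled by the patch: no 'err' stack variable,
 * flush is only issued when the map itself succeeded. */
static int legacy_map_stub(bool fail)
{
    int rc = iommu_map_stub(fail);

    if ( !rc )
        rc = iommu_iotlb_flush_stub();

    return rc;
}
```

Either way the IOTLB ends up flushed; the simplification is purely about who issues the flush on the error path.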
Reviewed-by: Jan Beulich <jbeul...@suse.com>
with one cosmetic issue taken care of (perhaps while committing):

> @@ -338,14 +346,8 @@ int iommu_legacy_unmap(struct domain *d, dfn_t dfn,
>                        unsigned int page_order)
>      unsigned int flush_flags = 0;
>      int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
> 
> -    if ( !this_cpu(iommu_dont_flush_iotlb) )
> -    {
> -        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
> -                                    flush_flags);
> -
> -        if ( !rc )
> -            rc = err;
> -    }
> +    if ( !this_cpu(iommu_dont_flush_iotlb) && ! rc )

There's a stray blank after the latter ! here.

Jan