>>> On 26.04.16 at 14:23, <quan...@intel.com> wrote:
> On April 25, 2016 6:19 PM, Jan Beulich <jbeul...@suse.com> wrote:
>> >>> On 18.04.16 at 16:00, <quan...@intel.com> wrote:
>> > --- a/xen/common/memory.c
>> > +++ b/xen/common/memory.c
>> > @@ -678,8 +678,9 @@ static int xenmem_add_to_physmap(struct domain *d,
>> >      if ( need_iommu(d) )
>> >      {
>> >          this_cpu(iommu_dont_flush_iotlb) = 0;
>> > -        iommu_iotlb_flush(d, xatp->idx - done, done);
>> > -        iommu_iotlb_flush(d, xatp->gpfn - done, done);
>> > +        rc = iommu_iotlb_flush(d, xatp->idx - done, done);
>> > +        if ( !rc )
>> > +            rc = iommu_iotlb_flush(d, xatp->gpfn - done, done);
>>
>> And again - best effort flushing in all cases. I.e. you shouldn't bypass the
>> second one if the first one failed, but instead properly accumulate the
>> return value, and also again not clobber any negative value rc may already
>> hold.
>
> Thanks for correcting me. I did like accumulating the return value. :(
> I will follow your suggestion in next v3.
>
>> What's worse (and makes me wonder whether this got tested) - you also
>> clobber the positive return value this function may produce, even in the
>> case the flushes succeed (and hence the function as a whole was
>> successful).
>>
> I have tested it with previous patches (i.e. 1st patch), launching a VM
> with PT ATS device to invoke this call tree.
> btw, I fix the positive issue in 1st patch.
> I know this is not strict, as I assume this patch set will be committed at
> the same time.
Hmm, the "positive" here has nothing to do with the "positive" in patch 1. Please just have a look at xenmem_add_to_physmap() as a whole. Jan _______________________________________________ Xen-devel mailing list Xen-devel@lists.xen.org http://lists.xen.org/xen-devel