>>> On 04.05.15 at 11:14, <andrew.coop...@citrix.com> wrote:
> On 04/05/2015 09:52, Jan Beulich wrote:
>>>>> On 04.05.15 at 04:16, <tiejun.c...@intel.com> wrote:
>>> --- a/xen/drivers/passthrough/vtd/x86/vtd.c
>>> +++ b/xen/drivers/passthrough/vtd/x86/vtd.c
>>> @@ -56,7 +56,9 @@ unsigned int get_cache_line_size(void)
>>>
>>>  void cacheline_flush(char * addr)
>>>  {
>>> +    mb();
>>>      clflush(addr);
>>> +    mb();
>>>  }
>> I think the purpose of the flush is to force write back, not to evict
>> the cache line, and if so wmb() would appear to be sufficient. As
>> the SDM says that's not the case, a comment explaining why wmb()
>> is not sufficient would seem necessary. Plus in the description I
>> think "serializing" needs to be changed to "fencing", as serialization
>> is not what we really care about here. If you and the maintainers
>> agree, I could certainly fix up both aspects while committing.
>
> On the subject of writebacks, we should get around to alternating-up the
> use of clflushopt and clwb, either of which would be better than a
> clflush in this case (avoiding the need for the leading mfence).
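
As a purely illustrative sketch (not the patch under discussion), a
CLFLUSHOPT-based helper could look roughly like the code below. It assumes
the CPU advertises CLFLUSHOPT and the assembler accepts the mnemonic; real
code would also want a feature check or instruction patching:

    static inline void cacheline_flush_opt(void *addr)
    {
        /*
         * CLFLUSHOPT is ordered after older stores to the line being
         * flushed, so no leading mfence() is needed.  It is not ordered
         * against flushes of other lines, though, so a trailing fence is
         * still required after a run of flushes.
         */
        asm volatile ( "clflushopt %0" : "+m" (*(char *)addr) );
    }

CLWB has the same ordering behaviour but leaves the line in the cache,
which can be preferable when the CPU keeps touching the same page-table
pages.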
Plus the barriers would perhaps rather sit around the loop invoking
cacheline_flush() in __iommu_flush_cache(). I also wonder whether the
VT-d code shouldn't use the flushing code available elsewhere in the
system, and whether that code then wouldn't need barriers added (or be
switched to clflushopt/clwb, as you suggest) instead.

Jan
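
As a rough illustration of the loop-level fencing Jan describes (a sketch
only, assuming the then-current shape of __iommu_flush_cache() in
xen/drivers/passthrough/vtd/iommu.c, not the committed change), the fences
would pair once around the whole range rather than once per cache line:

    static void __iommu_flush_cache(void *addr, unsigned int size)
    {
        unsigned int i;
        static unsigned int clflush_size;

        if ( !iommus_incoherent )
            return;

        if ( clflush_size == 0 )
            clflush_size = get_cache_line_size();

        mb();            /* order earlier table updates before the flushes */
        for ( i = 0; i < size; i += clflush_size )
            /* cacheline_flush() would then need no fences of its own. */
            clflush((char *)addr + i);
        mb();            /* make the flushes visible before later updates */
    }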