On Fri, Sep 14, 2018 at 01:26:58PM -0500, Brijesh Singh wrote:
> Currently, the amdvi_validate_dte() assumes that a valid DTE will
> always have V=1. This is not true. The V=1 means that bit[127:1] are
> valid. A valid DTE can have IV=1 and V=0 (i.e pt=off, intremap=on).
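(For reference, the V/IV split described above maps onto the 256-bit DTE
roughly as in the minimal sketch below. This is only an illustration, not
code from the series: AMDVI_DEV_VALID exists in hw/i386/amd_iommu.h, while
AMDVI_DEV_INTR_VALID is an illustrative name for the IV bit, not a macro
from the current header.)

#include <stdbool.h>
#include <stdint.h>

/* existing macro: V, bit 0 of the DTE */
#define AMDVI_DEV_VALID         (1ULL << 0)
/* illustrative name: IV, bit 128 of the DTE, i.e. bit 0 of dte[2] */
#define AMDVI_DEV_INTR_VALID    (1ULL << 0)

/* V=1: bits 127:1 are valid, so DMA translation may use the entry */
static bool dte_dma_valid(const uint64_t dte[4])
{
    return dte[0] & AMDVI_DEV_VALID;
}

/* IV=1: bits 255:129 are valid, so interrupt remapping may use the
 * entry even when V=0 (the pt=off, intremap=on case above) */
static bool dte_intr_valid(const uint64_t dte[4])
{
    return dte[2] & AMDVI_DEV_INTR_VALID;
}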
"pt" might be a bit confusing here. Now "intel-iommu" device has the "pt" parameter to specify IOMMU DMAR passthrough support. Also the corresponding guest kernel parameter "iommu_pt". So I would suggest to use "page translation" (is this really the term that AMD spec is used after all?) or directly DMAR (DMA remapping). > > Remove the V=1 check from amdvi_validate_dte(), make the caller > responsible to check for V or IV bits. > > Signed-off-by: Brijesh Singh <brijesh.si...@amd.com> > Cc: "Michael S. Tsirkin" <m...@redhat.com> > Cc: Paolo Bonzini <pbonz...@redhat.com> > Cc: Richard Henderson <r...@twiddle.net> > Cc: Eduardo Habkost <ehabk...@redhat.com> > Cc: Marcel Apfelbaum <marcel.apfelb...@gmail.com> > Cc: Tom Lendacky <thomas.lenda...@amd.com> > Cc: Suravee Suthikulpanit <suravee.suthikulpa...@amd.com> > --- > hw/i386/amd_iommu.c | 7 ++++--- > 1 file changed, 4 insertions(+), 3 deletions(-) > > diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c > index 1fd669f..225825e 100644 > --- a/hw/i386/amd_iommu.c > +++ b/hw/i386/amd_iommu.c > @@ -807,7 +807,7 @@ static inline uint64_t amdvi_get_perms(uint64_t entry) > AMDVI_DEV_PERM_SHIFT; > } > > -/* a valid entry should have V = 1 and reserved bits honoured */ > +/* validate that reserved bits are honoured */ > static bool amdvi_validate_dte(AMDVIState *s, uint16_t devid, > uint64_t *dte) > { > @@ -820,7 +820,7 @@ static bool amdvi_validate_dte(AMDVIState *s, uint16_t > devid, > return false; > } > > - return dte[0] & AMDVI_DEV_VALID; > + return true; > } > > /* get a device table entry given the devid */ > @@ -967,7 +967,8 @@ static void amdvi_do_translate(AMDVIAddressSpace *as, > hwaddr addr, > } > > /* devices with V = 0 are not translated */ > - if (!amdvi_get_dte(s, devid, entry)) { > + if (!amdvi_get_dte(s, devid, entry) && > + !(entry[0] & AMDVI_DEV_VALID)) { Here I'm not sure whether you're considering endianess. I think amdvi_get_dte() tried to fix the endianess somehow but I'm not sure it's complete (so entry[0] is special here...): static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry) { uint32_t offset = devid * AMDVI_DEVTAB_ENTRY_SIZE; if (dma_memory_read(&address_space_memory, s->devtab + offset, entry, AMDVI_DEVTAB_ENTRY_SIZE)) { trace_amdvi_dte_get_fail(s->devtab, offset); /* log error accessing dte */ amdvi_log_devtab_error(s, devid, s->devtab + offset, 0); return false; } *entry = le64_to_cpu(*entry); <----------------------- [1] if (!amdvi_validate_dte(s, devid, entry)) { trace_amdvi_invalid_dte(entry[0]); return false; } return true; } At [1] only one 64bits entry is swapped correctly to cpu endianess, IMHO the rest of the three uint64_t is still using LE. I'm not really sure whether there would be anyone that wants to run the AMD IOMMU on big endian hosts, but I just want to know the goal of this series - do you want to support this scenario? If so, you might need to fixup the places too AFAIU. > goto out; > } > > -- > 2.7.4 > > Regards, -- Peter Xu