On 06.05.2022 13:16, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:40:55AM +0200, Jan Beulich wrote:
>> ---
>> An alternative to the ASSERT()s added to set_iommu_ptes_present() would
>> be to make the function less general-purpose; it's used in a single
>> place only after all (i.e. it might as well be folded into its only
>> caller).
> 
> I would think adding a comment that the function requires the PDE to
> be empty would be good.

But that's not the case: what the function expects to be clear is
exactly what is being ASSERT()ed.

>  Also given the current usage we could drop
> the nr_ptes parameter and just name the function fill_pde() or
> similar.

Right, but that would want to be a separate change.

>> --- a/xen/drivers/passthrough/amd/iommu_map.c
>> +++ b/xen/drivers/passthrough/amd/iommu_map.c
>> @@ -115,7 +115,19 @@ static void set_iommu_ptes_present(unsig
>>  
>>      while ( nr_ptes-- )
>>      {
>> -        set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
>> +        ASSERT(!pde->next_level);
>> +        ASSERT(!pde->u);
>> +
>> +        if ( pde > table )
>> +            ASSERT(pde->ign0 == find_first_set_bit(pde - table));
>> +        else
>> +            ASSERT(pde->ign0 == PAGE_SHIFT - 3);
> 
> I think PAGETABLE_ORDER would be clearer here.

I disagree - PAGETABLE_ORDER is a CPU-side concept. It's not used anywhere
in IOMMU code afaics.

> While here, could you also assert that next_mfn matches the contiguous
> order currently set in the PTE?

I can, but such a check wouldn't belong here; it would go outside
(ahead) of the loop.

>> @@ -717,7 +729,7 @@ static int fill_qpt(union amd_iommu_pte
>>                   * page table pages, and the resulting allocations are always
>>                   * zeroed.
>>                   */
>> -                pgs[level] = iommu_alloc_pgtable(hd);
>> +                pgs[level] = iommu_alloc_pgtable(hd, 0);
> 
> Is it worth not setting up the contiguous data for quarantine page
> tables?

Well, it's (slightly) less code, and (hopefully) faster due to the use
of clear_page().

> I think it's fine now given the current code, but you having added
> ASSERTs that the contig data is correct in set_iommu_ptes_present()
> makes me wonder whether we could trigger those in the future.

I'd like to deal with that if and when needed.

> I understand that the contig data is not helpful for quarantine page
> tables, but still doesn't seem bad to have it just for coherency.

You realize that all markers being zero in a table is a valid state,
functionality-wise? It would merely mean no re-coalescing until the
respective entries have been touched (updated) at least once.

>> @@ -276,7 +280,7 @@ struct dma_pte {
>>  #define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
>>  #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
>>  #define dma_set_pte_addr(p, addr) do {\
>> -            (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
>> +            (p).val |= ((addr) & PADDR_MASK & PAGE_MASK_4K); } while (0)
> 
> While I'm not opposed to this, I would assume that addr is not
> expected to contain bit cleared by PADDR_MASK? (or PAGE_MASK_4K FWIW)

Indeed. But I'd prefer to be on the safe side, now that some of the
bits have gained a different meaning.

>> @@ -538,7 +539,29 @@ struct page_info *iommu_alloc_pgtable(st
>>          return NULL;
>>  
>>      p = __map_domain_page(pg);
>> -    clear_page(p);
>> +
>> +    if ( contig_mask )
>> +    {
>> +        /* See pt-contig-markers.h for a description of the marker scheme. */
>> +        unsigned int i, shift = find_first_set_bit(contig_mask);
>> +
>> +        ASSERT(((PAGE_SHIFT - 3) & (contig_mask >> shift)) == PAGE_SHIFT - 3);
> 
> I think it might be clearer to use PAGETABLE_ORDER rather than
> PAGE_SHIFT - 3.

See above.

Jan

