Hi Eric,

On Wed, Nov 13, 2024 at 06:12:15PM +0100, Eric Auger wrote:
> On 11/8/24 13:52, Shameer Kolothum wrote:
> > @@ -181,6 +181,7 @@ static const MemMapEntry base_memmap[] = {
> >      [VIRT_PVTIME] =             { 0x090a0000, 0x00010000 },
> >      [VIRT_SECURE_GPIO] =        { 0x090b0000, 0x00001000 },
> >      [VIRT_MMIO] =               { 0x0a000000, 0x00000200 },
> > +    [VIRT_SMMU_NESTED] =        { 0x0b000000, 0x01000000 },

> I agree with Mostafa that the _NESTED terminology may not be the best
> choice.
> The motivation behind that multi-instance attempt, as introduced in
> https://lore.kernel.org/all/ZEcT%2F7erkhHDaNvD@Asurada-Nvidia/
> was:
> - SMMUs with different feature bits
> - support of VCMDQ HW extension for SMMU CMDQ
> - need for separate S1 invalidation paths
> 
> If I understand correctly, this is mostly wanted for VCMDQ handling? If
> this is correct, we may indicate that somehow in the terminology.
> 
> If I understand correctly, VCMDQ terminology is NVidia-specific while
> ECMDQ is the baseline (?).

VCMDQ makes a multi-vSMMU-instance design a hard requirement, while
point (3), the need for separate S1 invalidation paths, matters as
well. Jason suggested that the VMM create multiple vSMMU instances
even in the base case, as the kernel doc mentions here:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/Documentation/userspace-api/iommufd.rst#n84

W.r.t. naming, maybe something related to "hardware-accelerated"?
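
To make that concrete (the VIRT_SMMU_ACCEL name below is only an
illustrative strawman, not a proposal; the base address and size are
the ones from the patch hunk above), the memmap entry could then read:

    static const MemMapEntry base_memmap[] = {
        ...
        [VIRT_MMIO] =               { 0x0a000000, 0x00000200 },
        /* strawman rename of VIRT_SMMU_NESTED */
        [VIRT_SMMU_ACCEL] =         { 0x0b000000, 0x01000000 },
        ...
    };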

Thanks
Nicolin
