On 05/06/2025 8:41 am, Tomeu Vizoso wrote:
[...]
>> In fact this is precisely the usage model I would suggest for this sort
>> of thing, and IIRC I had a similar conversation with the Ethos driver
>> folks a few years back. Running your own IOMMU domain is no biggie, see
>> several other DRM drivers (including rockchip). As long as you have a
>> separate struct device per NPU core then indeed it should be perfectly
>> straightforward to maintain distinct IOMMU domains per job, and
>> attach/detach them as part of scheduling the jobs on and off the cores.
>> Looks like rockchip-iommu supports cross-instance attach, so if
>> necessary you should already be OK to have multiple cores working on the
>> same job at once, without needing extra work at the IOMMU end.
> Ok, so if I understood it correctly, the plan would be for each DRM
> client to have one IOMMU domain per core (each core has its own
> IOMMU), and to map all its buffers in all of these domains.
>
> Then, when a job is about to be scheduled on a given core, make sure
> that the IOMMU for that core uses the domain for the client that
> submitted the job.
>
> Did I get that right?
It should only need a single IOMMU domain per DRM client, so no faffing
about replicating mappings. iommu_paging_domain_alloc() does need *an*
appropriate target device so it can identify the right IOMMU driver, but
that in itself doesn't preclude attaching other devices to the resulting
domain as well as (or even instead of) the nominal one. In general, not
all IOMMU drivers support cross-instance attach, so it may fail with
-EINVAL, and *that*'s when you might need to fall back to allocating
multiple per-instance domains. But as I say, since this is a
Rockchip-specific driver where the IOMMU *is* intended to support this
already, you don't need to bother.
Thanks,
Robin.