Hi Dave and Daniel,
This prevents pointer leakage via printk() in the samsung-dsim.c module.
Please let me know if there are any problems.
Thanks,
Inki Dae
The following changes since commit fd03f82a026cc03cb8051a8c6487c99f96c9029f:
drm/bridge: analogix_dp: Fix clk-disable removal (202
These runners no longer exist, so remove the jobs.
Signed-off-by: Rob Clark
---
drivers/gpu/drm/ci/build.sh | 17 -
drivers/gpu/drm/ci/test.yml | 14 -
.../gpu/drm/ci/xfails/msm-sdm845-fails.txt | 29 --
.../gpu/drm/ci/xfails/msm-sdm845-flakes.txt | 139 -
From: Rob Clark
This fits better with drm_gpuvm/drm_gpuva.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.c | 16 +++-
drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++
2 files changed, 5
From: Rob Clark
Most of the driver code doesn't need to reach in to msm specific fields,
so just use the drm_gpuvm/drm_gpuva types directly. This should
hopefully improve commonality with other drivers and make the code
easier to understand.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
T
From: Rob Clark
Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion.
This is just rename churn, no functional change.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonin
From: Rob Clark
We'll re-use this in the vm_bind path.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.c | 12 ++--
drivers/gpu/drm/msm/msm_gem.h | 1 +
2 files changed, 11 insertions(
From: Rob Clark
Add PRR (Partial Resident Region) support. PRR is a bypass address
which makes GPU writes go to /dev/null and reads return zero. This is
used to implement Vulkan sparse residency.
To support PRR/NULL mappings, we allocate a page to reserve a physical
address which we know will not be used as
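A minimal sketch of the reserved-page idea (names and structure are
illustrative, not the actual msm implementation):

static struct page *prr_page;	/* backs every PRR/NULL mapping */

static int prr_init(void)
{
	/* One zeroed page whose physical address will never be handed
	 * out for real allocations, so it is safe to alias it behind
	 * all NULL/PRR mappings.
	 */
	prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	return prr_page ? 0 : -ENOMEM;
}

static phys_addr_t prr_phys(void)
{
	return page_to_phys(prr_page);
}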
From: Rob Clark
Now that we've realigned deletion and allocation, switch over to using
drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per
VM, to allow mapping different parts of a single BO at different virtual
addresses, which is a key requirement for sparse/VM_BIND.
This
We were already keeping a refcount of # of prepares (pins), to clear the
iova array. Use that to avoid unpinning the iova until the last cleanup
(unpin). This way, when msm_gem_unpin_iova() actually tears down the
mapping, we won't have problems if the fb is being scanned out on
another display (
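Roughly, the refcounted unpin looks like this (struct and helper names
are hypothetical; only refcount_dec_and_test() is the real kernel API):

struct fb_pin_state {
	refcount_t pins;	/* one ref per prepare/pin */
};

static void fb_unpin(struct fb_pin_state *st, struct drm_gem_object *obj,
		     struct drm_gpuvm *vm)
{
	/* Only the last unpin tears down the mapping, so a concurrent
	 * scanout of the same fb on another display stays mapped.
	 */
	if (refcount_dec_and_test(&st->pins))
		msm_gem_unpin_iova(obj, vm);	/* actual signature may differ */
}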
From: Rob Clark
Previously we'd also tear down the VMA, making the address space
available again. But with drm_gpuvm conversion, this would require
holding the locks of all VMs the GEM object is mapped in. Which is
problematic for the shrinker.
Instead just let the VMA hang around until the GE
The fb only deals with kms->vm, so make that explicit. This will start
letting us refcount the # of times the fb is pinned, so we can only
unpin the vma after last user of the fb is done. Having a single
reference count really only works if there is only a single vm.
Signed-off-by: Rob Clark
Te
From: Rob Clark
It is standing in the way of drm_gpuvm / VM_BIND support. Not to
mention frequently broken and rarely tested. And I think only needed
for a 10yr old not quite upstream SoC (msm8974).
Maybe we can add support back in later, but I'm doubtful.
Signed-off-by: Rob Clark
Signed-off
From: Rob Clark
Now that we've dropped vram carveout support, we can collapse vma
allocation and initialization. This better matches how things work
with drm_gpuvm.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers
From: Rob Clark
Just some tidying up.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gpu.h | 44 +++
1 file changed, 29 insertions(+)
From: Rob Clark
If the callback is going to have to attempt to grab more locks, it is
useful to have an ww_acquire_ctx to avoid locking order problems.
Why not use the drm_exec helper instead? Mainly because (a) where
ww_acquire_init() is called is awkward, and (b) we don't really
need to retry
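For context, the generic ww_acquire_ctx/dma_resv pattern the callback
gets to reuse looks like this (a sketch of the core API, not the msm
callback itself; -EDEADLK backoff is elided since, per the above, retry
isn't needed):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

static int with_both_locked(struct drm_gem_object *a, struct drm_gem_object *b)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &reservation_ww_class);

	/* Every lock taken with the same ctx participates in deadlock
	 * detection, so a callback handed 'ctx' can grab further resv
	 * locks without worrying about lock ordering.
	 */
	ret = dma_resv_lock(a->resv, &ctx);
	if (ret)
		goto fini;
	ret = dma_resv_lock(b->resv, &ctx);
	if (ret)
		goto unlock_a;

	ww_acquire_done(&ctx);
	/* ... work on both objects ... */

	dma_resv_unlock(b->resv);
unlock_a:
	dma_resv_unlock(a->resv);
fini:
	ww_acquire_fini(&ctx);
	return ret;
}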
From: Rob Clark
This is a more descriptive name.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 6
Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
Memory[2] in the form of:
1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
MAP_NULL/UNMAP commands
2. A new VM_BIND ioctl to allow submitting batches of one or more
MAP/MAP_NULL/UNMAP com
Correctly summarize drm_gpuvm_sm_map/unmap, and fix the parameter order
and names. Just something I noticed in passing.
v2: Don't rename the arg names in prototypes to match function
declarations [Danilo]
Signed-off-by: Rob Clark
Acked-by: Danilo Krummrich
Tested-by: Antonino Maniscalco
R
For UNMAP/REMAP steps we could be needing to lock objects that are not
explicitly listed in the VM_BIND ioctl in order to tear-down unmapped
VAs. These helpers handle locking/preparing the needed objects.
Note that these functions do not strictly require the VM changes to be
applied before the ne
From: Rob Clark
When userspace opts in to VM_BIND, the submit no longer holds references
keeping the VMA alive. This makes it difficult to distinguish between
UMD/KMD/app bugs. So add a debug option for logging the most recent VM
updates and capturing these in GPU devcoredumps.
The submitqueue
From: Rob Clark
Only needs to be supported for iopgtables mmu, the other cases are
either only used for kernel managed mappings (where offset is always
zero) or devices which do not support sparse bindings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Revie
From: Rob Clark
This submitqueue type isn't tied to a hw ringbuffer, but instead
executes on the CPU for performing async VM_BIND ops.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.h
From: Rob Clark
Introduce a mechanism to count the worst case # of pages required in a
VM_BIND op.
Note that previously we would have had to somehow account for
allocations in unmap, when splitting a block. This behavior was removed
in commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on
From: Rob Clark
Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.
In order to transition to a userspace managed VM, this param must be set
before any mappings are created.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: A
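From the userspace side, the opt-in would look roughly like this (a
hypothetical sketch using the existing SET_PARAM uapi; MSM_PARAM_EN_VM_BIND
is the new param from this series and exact details may differ):

#include <xf86drm.h>
#include <drm/msm_drm.h>

static int enable_vm_bind(int fd)
{
	struct drm_msm_param req = {
		.pipe  = MSM_PIPE_3D0,
		.param = MSM_PARAM_EN_VM_BIND,
		.value = 1,
	};

	/* Must happen before any mappings are created, or the kernel
	 * will refuse the transition to a userspace-managed VM.
	 */
	return drmCommandWrite(fd, DRM_MSM_SET_PARAM, &req, sizeof(req));
}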
With the conversion to drm_gpuvm, we lost the lazy VMA cleanup, which
means that fb cleanup/unpin when pageflipping to new scanout buffers
immediately unmaps the scanout buffer. This is costly (with tlbinv,
it can be 4-6ms for a 1080p scanout buffer, and more for higher
resolutions)!
To avoid thi
From: Rob Clark
Add a VM_BIND ioctl for binding/unbinding buffers into a VM. This is
only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/m
From: Rob Clark
Bump version to signal to userspace that VM_BIND is supported.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --
From: Rob Clark
In this case, we need to iterate the VMAs looking for ones with
MSM_VMA_DUMP flag.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gpu.c | 96 ++-
1
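For reference, drm_gpuvm already provides a VA iterator; filtering on
the dump flag is roughly this (the MSM_VMA_DUMP check is from this
series, the rest is a sketch):

struct drm_gpuva *va;

drm_gpuvm_for_each_va(va, vm) {
	if (!(va->flags & MSM_VMA_DUMP))
		continue;
	/* record va->va.addr / va->va.range in the devcoredump */
}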
From: Rob Clark
In this case, userspace could request dumping partial GEM obj mappings.
Also drop use of should_dump() helper, which really only makes sense in
the old submit->bos[] table world.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Anto
From: Rob Clark
Similar to the previous commit, add support for dumping partial
mappings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.h | 10 -
drivers/gpu/drm/msm/msm_rd.c | 38 ++
From: Rob Clark
Any place we wait for a BO to become idle, we should use BOOKKEEP usage,
to ensure that it waits for _any_ activity.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.c |
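For reference, waiting with BOOKKEEP usage through the dma-resv API
looks roughly like this (illustrative helper, not the exact msm call
sites):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

static int wait_bo_idle(struct drm_gem_object *obj, bool intr)
{
	long ret;

	/* DMA_RESV_USAGE_BOOKKEEP is the widest usage class, so this
	 * waits for *any* fence attached to the object, not just the
	 * implicit-sync read/write fences.
	 */
	ret = dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP,
				    intr, MAX_SCHEDULE_TIMEOUT);
	return ret < 0 ? ret : 0;
}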
From: Rob Clark
With async VM_BIND, the actual pgtable updates are deferred.
Synchronously, a list of map/unmap ops is generated, while the
actual pgtable updates happen later. To support that, split out
op handlers and change the existing non-VM_BIND paths to use them.
Note in particular, t
From: Rob Clark
With user managed VMs and multiple queues, it is in theory possible to
trigger map/unmap errors. These will (in a later patch) mark the VM as
unusable. But we want to tell the io-pgtable helpers not to spam the
log. In addition, in the unmap path, we don't want to bail early fr
From: Rob Clark
So we can monitor how many pages are getting preallocated vs how many
get used.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++
drivers/gpu/drm/msm/
From: Rob Clark
Make the VM log a bit more useful by providing a reason for the unmap
(ie. closing VM vs evict/purge, etc)
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.c | 20 +++
From: Rob Clark
If userspace has opted-in to VM_BIND, then GPU hangs and VM_BIND errors
will mark the VM as unusable.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.h| 17 +
From: Rob Clark
We'll be re-using these for the VM_BIND ioctl.
Also, rename a few things in the uapi header to reflect that syncobj use
is not specific to the submit ioctl.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
A large number of (unsorted or separate) small (<2MB) mappings can cause
a lot of probably-unnecessary prealloc pages. I.e. a single 4k
page-size mapping will pre-allocate 3 pages (for levels 2-4) for the
pagetable, which can chew up a large amount of unneeded memory. So add
a mechanism to put
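As a back-of-the-envelope illustration of the worst case (assuming a
4-level pagetable whose top level already exists):

/* Each isolated 4 KiB mapping can land in an otherwise-empty region
 * and then needs one fresh table page for each of levels 2-4.
 */
static unsigned int worst_case_prealloc_pages(unsigned int nr_small_mappings)
{
	return nr_small_mappings * 3;	/* L2 + L3 + L4 per mapping */
}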
From: Rob Clark
As with devcoredump, we need to iterate the VMAs to figure out what to
dump.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_rd.c | 48 +---
1 file c
From: Rob Clark
Convert to using the gpuvm's r_obj for serializing access to the VM.
This way we can use the drm_exec helper for dealing with deadlock
detection and backoff.
This will let us deal with upcoming locking order conflicts with the
VM_BIND implementation (i.e. in some scenarios we need
From: Rob Clark
Buffers that are not shared between contexts can share a single resv
object. This way drm_gpuvm will not track them as external objects, and
submit-time validating overhead will be O(1) for all N non-shared BOs,
instead of O(n).
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
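A rough sketch of the idea (the hook point and the MSM_BO_NO_SHARE flag
check follow the _NO_SHARE naming used elsewhere in this series, but are
illustrative here): at BO creation, a never-shared buffer simply points
its reservation object at the VM's resv:

#include <drm/drm_gem.h>
#include <drm/drm_gpuvm.h>

static void bo_init_resv(struct drm_gem_object *obj, struct drm_gpuvm *vm,
			 uint32_t flags)
{
	/* drm_gpuvm treats an object whose resv matches the VM's resv
	 * as a local object, so it never lands on the external-objects
	 * list and needs no per-submit validation of its own.
	 */
	if (flags & MSM_BO_NO_SHARE)
		obj->resv = drm_gpuvm_resv(vm);
}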
From: Rob Clark
If we haven't written the submit into the ringbuffer yet, then drop it.
The submit still retires through the normal path, to preserve fence
signalling order, but we can skip the IB's to userspace cmdstream.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino M
From: Rob Clark
This is a more descriptive name.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
drivers/gpu/drm/msm/msm_gem.c | 6 +++---
drivers/gpu/drm/msm/msm_gem.h | 2 +-
drivers/gpu/drm/msm/msm_gem_vma.c |
From: Rob Clark
This resolves a potential deadlock vs msm_gem_vm_close(). Otherwise for
_NO_SHARE buffers msm_gem_describe() could be trying to acquire the
shared vm resv, while already holding priv->obj_lock. But _vm_close()
might drop the last reference to a GEM obj while already holding the
From: Rob Clark
In the next commit, a way for userspace to opt-in to userspace managed
VM is added. For this to work, we need to defer creation of the VM
until it is needed.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
Tested-by: Antonino Maniscalco
Reviewed-by: Antonino Maniscalco
---
Hi Louis,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bb8aa27eff6f3376242da37c2d02b9dcc66934b1]
url:
https://github.com/intel-lab-lkp/linux/commits/Louis-Chauvet/drm-vkms-Create-helpers-macro-to-avoid-code-duplication-in-format-callbacks/20250628-06514
On 6/17/25 6:26 PM, Diogo Ivo wrote:
On 6/17/25 5:40 AM, Mikko Perttunen wrote:
On 6/16/25 7:21 PM, Diogo Ivo wrote:
On 6/11/25 4:06 PM, Thierry Reding wrote:
On Wed, Jun 11, 2025 at 01:05:40PM +0100, Diogo Ivo wrote:
On 6/10/25 10:52 AM, Mikko Perttunen wrote:
On 6/10/25 6:05 PM, Th
Hi
On 27.06.25 at 17:37, Mario Limonciello wrote:
On 6/27/2025 2:07 AM, Thomas Zimmermann wrote:
Hi
On 27.06.25 at 06:31, Mario Limonciello wrote:
From: Mario Limonciello
On systems with multiple GPUs there can be uncertainty which GPU is the
primary one used to drive the display at bootu
Hi
On 28.06.25 at 13:49, Krzysztof Kozlowski wrote:
On 27/06/2025 11:48, Luca Weiss wrote:
Hi Krzysztof,
On Fri Jun 27, 2025 at 10:08 AM CEST, Krzysztof Kozlowski wrote:
On Mon, Jun 23, 2025 at 08:44:45AM +0200, Luca Weiss wrote:
Document the interconnects property which is a list of interc
Hi
On 28.06.25 at 13:50, Krzysztof Kozlowski wrote:
On 27/06/2025 13:34, Thomas Zimmermann wrote:
Hi
On 27.06.25 at 10:08, Krzysztof Kozlowski wrote:
On Mon, Jun 23, 2025 at 08:44:45AM +0200, Luca Weiss wrote:
Document the interconnects property which is a list of interconnect
paths that i
On 6/11/25 9:18 PM, Diogo Ivo wrote:
...
+static int nvjpg_load_falcon_firmware(struct nvjpg *nvjpg)
+{
+	struct host1x_client *client = &nvjpg->client.base;
+	struct tegra_drm *tegra = nvjpg->client.drm;
+	dma_addr_t iova;
+	size_t size;
+	void *virt;
+	int er
On Sun, Jun 29, 2025 at 08:35:09PM -0300, Marcelo Moreira wrote:
> Update the receive_timing_debugfs_show() function to utilize
> sysfs_emit_at() for formatting output to the debugfs buffer.
> This change adheres to the recommendation outlined
> in Documentation/filesystems/sysfs.rst.
>
> This mod
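For reference, sysfs_emit_at() is the bounds-checked formatter for
sysfs show() buffers; generic usage looks like this (not the vkms code
under discussion, values are placeholders):

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t timings_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	int len = 0;

	/* sysfs_emit_at() never writes past the PAGE_SIZE buffer that
	 * sysfs hands to show() callbacks.
	 */
	len += sysfs_emit_at(buf, len, "h_active: %u\n", 1920);
	len += sysfs_emit_at(buf, len, "v_active: %u\n", 1080);
	return len;
}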
> -Original Message-
> From: Kandpal, Suraj
> Sent: Wednesday, June 25, 2025 4:50 PM
> To: Jani Nikula ; intel...@lists.freedesktop.org;
> intel-...@lists.freedesktop.org; dri-devel@lists.freedesktop.org;
> nouv...@lists.freedesktop.org; Lyude Paul
> Cc: Murthy, Arun R
> Subject: RE: [
From: Dave Airlie
amdgpu wants to use the objcg API without having to wrap calls in
ifdefs, so just add a dummy function for the config-off path.
Signed-off-by: Dave Airlie
---
include/linux/memcontrol.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/memcontrol.h b/incl
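The usual shape of such a config-off stub in a kernel header (the exact
helper amdgpu needs may differ):

/* Real declaration when CONFIG_MEMCG is enabled, a no-op static
 * inline otherwise, so callers need no #ifdef at the call site.
 */
#ifdef CONFIG_MEMCG
struct obj_cgroup *get_obj_cgroup_from_current(void);
#else
static inline struct obj_cgroup *get_obj_cgroup_from_current(void)
{
	return NULL;
}
#endif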
Hi all,
tl;dr: start using list_lru/numa/memcg in GPU driver core and amdgpu driver for
now.
This is a complete series of patches, some of which have been sent before and
reviewed,
but I want to get the complete picture for others, and try to figure out how
best to land this.
There are 3 piec
From: Dave Airlie
This gets the memory sizes from the nodes and stores the limit
as 50% of those. I think eventually we should drop the limits
once we have memcg aware shrinking, but this should be more NUMA
friendly, and I think it is what people would prefer to happen on
NUMA-aware systems
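A minimal sketch of deriving the per-node limit (for_each_online_node()
and node_present_pages() are the standard helpers; how the limit is
stored is illustrative):

#include <linux/nodemask.h>
#include <linux/mmzone.h>

static void init_node_limits(u64 *limit_pages)	/* one slot per node id */
{
	int nid;

	for_each_online_node(nid) {
		/* cap at 50% of the memory present on this node */
		limit_pages[nid] = node_present_pages(nid) / 2;
	}
}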