The TEE subsystem provides session-based access to trusted services:
a session must be established before a service can be received. This
is not suitable for an environment that represents services as objects.
An object supports various operations that a client can invoke,
potentially generating a result.
For drivers that can transfer data to the TEE without using shared
memory from the client, it is necessary to receive the user address
directly, bypassing any processing by the TEE subsystem. Introduce
TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT/OUTPUT/INOUT to represent
userspace buffers.
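As a rough illustration of how such a parameter slot might be filled, here is a minimal userspace-side sketch. The struct mirrors the layout of `struct tee_ioctl_param` from the TEE uapi header, but the UBUF attribute value used below is a hypothetical placeholder, not the value chosen upstream.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical attribute value -- placeholder only, not the upstream one. */
#define TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT 8

/* Sketch mirroring the shape of struct tee_ioctl_param. */
struct tee_param_sketch {
	uint64_t attr;
	uint64_t a;	/* userspace buffer address */
	uint64_t b;	/* buffer length in bytes */
	uint64_t c;	/* unused for UBUF params */
};

/* Fill a param slot so the driver receives the user address directly,
 * bypassing the TEE subsystem's shared-memory handling. */
static void set_ubuf_input(struct tee_param_sketch *p, void *ubuf,
			   uint64_t len)
{
	p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT;
	p->a = (uint64_t)(uintptr_t)ubuf;
	p->b = len;
	p->c = 0;
}
```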
Reviewed-by: Sumit Garg
Qualcomm TEE (QTEE) hosts Trusted Applications (TAs) and services in
the secure world, accessed via objects. A QTEE client can invoke these
objects to request services. Similarly, QTEE can request services from
the nonsecure world using objects exported to the secure world.
Add low-level primitive
Increase TEE_MAX_ARG_SIZE to accommodate worst-case scenarios where
additional buffer space is required to pass all arguments to TEE.
This change is necessary for upcoming support for Qualcomm TEE, which
requires a larger buffer for argument marshaling.
Reviewed-by: Sumit Garg
Tested-by: Harshal Dev
Enable userspace to allocate shared memory with QTEE. Since
QTEE handles shared memory as an object, a wrapper is implemented
to represent tee_shm as an object. The shared memory identifier,
obtained through TEE_IOC_SHM_ALLOC, is transferred to the driver using
TEE_IOCTL_PARAM_ATTR_TYPE_OBJREF_INPUT/O
This patch series introduces a Trusted Execution Environment (TEE)
driver for Qualcomm TEE (QTEE). QTEE enables Trusted Applications (TAs)
and services to run securely. It uses an object-based interface, where
each service is an object with sets of operations. Clients can invoke
these operations on
A TEE driver doesn't always need to provide a pool if it doesn't
support memory sharing ioctls and can allocate memory for TEE
messages in another way. Although this is mentioned in the
documentation for tee_device_alloc(), it is not handled correctly.
Reviewed-by: Sumit Garg
Signed-off-by: Amirreza Zarrabi
The tee_context can be used to manage TEE user resources, including
those allocated by the driver for the TEE on behalf of the user.
The release() callback is invoked only when all resources, such as
tee_shm, are released and there are no references to the tee_context.
When a user closes the devic
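The refcounting rule described above can be modelled with a tiny sketch in plain C (this is illustrative only, not kernel code): the context's release() runs only when the last reference, whether held for the open device file or for a live resource such as a tee_shm, is dropped.

```c
#include <assert.h>

/* Illustrative model of the tee_context lifetime rule. */
struct ctx_sketch {
	int refs;
	int released;
};

static void ctx_get(struct ctx_sketch *c)
{
	c->refs++;
}

static void ctx_put(struct ctx_sketch *c)
{
	/* The release() callback runs only on the final put. */
	if (--c->refs == 0)
		c->released = 1;
}
```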
The shm_bridge create/delete functions always use the scm device,
so there is no need to pass it as an argument.
Tested-by: Neil Armstrong
Tested-by: Harshal Dev
Signed-off-by: Amirreza Zarrabi
---
drivers/firmware/qcom/qcom_scm.c | 4 ++--
drivers/firmware/qcom/qcom_tzmem.c | 8
i
Anyone with access to contiguous physical memory should be able to
share memory with QTEE using shm_bridge.
Tested-by: Neil Armstrong
Tested-by: Harshal Dev
Signed-off-by: Amirreza Zarrabi
---
drivers/firmware/qcom/qcom_tzmem.c | 62 ++--
include/linux/firmwar
After booting, the kernel provides a static object known as the
primordial object. This object is utilized by QTEE for native
kernel services such as yield or privileged operations.
Acked-by: Sumit Garg
Tested-by: Neil Armstrong
Tested-by: Harshal Dev
Signed-off-by: Amirreza Zarrabi
---
drive
Introduce qcomtee_object, which represents an object in both QTEE and
the kernel. QTEE clients can invoke an instance of qcomtee_object to
access QTEE services. If this invocation produces a new object in QTEE,
an instance of qcomtee_object will be returned.
Similarly, QTEE can request services fr
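The object model described above can be sketched as a dispatch-table pattern. This is illustrative only; the names below are made up and are not the qcomtee API.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative object: a dispatch callback that may hand back
 * a new object as the result of an invocation. */
struct obj_sketch {
	int (*dispatch)(struct obj_sketch *self, unsigned int op,
			struct obj_sketch **out);
};

/* Hypothetical operation: invoking it produces a new object. */
static int open_session_op(struct obj_sketch *self, unsigned int op,
			   struct obj_sketch **out)
{
	static struct obj_sketch session = { .dispatch = open_session_op };

	(void)self;
	(void)op;
	*out = &session;	/* invocation produced a new object */
	return 0;
}
```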
Add documentation for the Qualcomm TEE driver.
Signed-off-by: Amirreza Zarrabi
---
Documentation/tee/index.rst | 1 +
Documentation/tee/qtee.rst | 96 +
MAINTAINERS | 1 +
3 files changed, 98 insertions(+)
diff --git a/Documentation
On Sun Jul 13, 2025 at 11:51 AM JST, Rhys Lloyd wrote:
> data is sliced from 2..6, but the bounds check data.len() < 5
> does not satisfy those bounds.
>
> Fixes: 47c4846e4319 ("gpu: nova-core: vbios: Add support for FWSEC ucode
> extraction")
>
> Signed-off-by: Rhys Lloyd
> ---
> Changes in v2:
On Sun Jul 13, 2025 at 11:51 AM JST, Rhys Lloyd wrote:
> Introduce an associated constant `MIN_LEN` for each struct that checks
> the length of the input data in its constructor against a magic number.
>
> Signed-off-by: Rhys Lloyd
As I mentioned in [1], I think this would be better addressed by
When Application A submits jobs (a1, a2, a3) and application B submits
job b1 with a dependency on a2's scheduler fence, killing application A
before run_job(a1) causes drm_sched_entity_kill_jobs_work() to force
signal all jobs sequentially. However, due to missing work_run_job or
work_free_job in
Looks good to me:
Reviewed-by: Iago Toral Quiroga
Iago
On Fri, 2025-07-11 at 12:18 -0300, Maíra Canal wrote:
> The GL extension KHR_robustness requires a mechanism for a GL
> application
> to learn about graphics resets that affect a GL context. With the
> goal
> to provide support for suc
From: "Dr. David Alan Gilbert"
xe_bo_create_from_data() last use was removed in 2023 by
commit 0e1a47fcabc8 ("drm/xe: Add a helper for DRM device-lifetime BO
create")
xe_rtp_match_first_gslice_fused_off() last use was removed in 2023 by
commit 4e124151fcfc ("drm/xe/dg2: Drop pre-production worka
Hi Christian,
On 11/07/25 12:20, Christian König wrote:
On 11.07.25 15:37, Philipp Stanner wrote:
On Fri, 2025-07-11 at 15:22 +0200, Christian König wrote:
On 08.07.25 15:25, Maíra Canal wrote:
When the DRM scheduler times out, it's possible that the GPU isn't hung;
instead, a job just took
Hi, Pavel,
On 2025-07-10 at 10:24 +02, Pavel Machek wrote:
> [[PGP Signed Part:Undecided]]
> Hi!
>
> It seems that DMA-BUFs are always uncached on arm64... which is a
> problem.
>
> I'm trying to get useful camera support on Librem 5, and that includes
> recording videos (and taking photos).
E
On 7/13/25 6:16 AM, Rob Clark wrote:
> On Sat, Jul 12, 2025 at 11:49 PM Randy Dunlap wrote:
>>
>>
>>
>> On 7/11/25 2:10 AM, Stephen Rothwell wrote:
>>> Hi all,
>>>
>>> Changes since 20250710:
>>
>> on i386, when:
>>
>> CONFIG_DRM_MSM=y
>> CONFIG_DRM_MSM_GPU_STATE=y
>> CONFIG_DRM_MSM_GPU_SUDO=y
On Sat, Jul 12, 2025 at 9:02 PM Mario Limonciello wrote:
>
>
>
> On 7/12/25 3:11 AM, Rafael J. Wysocki wrote:
> > On Fri, Jul 11, 2025 at 11:25 PM Randy Dunlap wrote:
> >>
> >>
> >>
> >> On 7/11/25 2:10 AM, Stephen Rothwell wrote:
> >>> Hi all,
> >>>
> >>> Changes since 20250710:
> >>>
> >>
> >>
On Sat, Jul 12, 2025 at 11:49 PM Randy Dunlap wrote:
>
>
>
> On 7/11/25 2:10 AM, Stephen Rothwell wrote:
> > Hi all,
> >
> > Changes since 20250710:
>
> on i386, when:
>
> CONFIG_DRM_MSM=y
> CONFIG_DRM_MSM_GPU_STATE=y
> CONFIG_DRM_MSM_GPU_SUDO=y
> # CONFIG_DRM_MSM_VALIDATE_XML is not set
> # CONFI
On 09/06/2025 at 15:35, Raphael Gallais-Pou wrote:
Documentation/devicetree/bindings/graph.txt content has moved directly to
the dt-schema repo.
Point to the YAML in the official repo instead of the old file.
Signed-off-by: Raphael Gallais-Pou
Hi,
Gentle ping !
Best regards,
Raphaël
--
Are normal panics (i.e. not drm panics) still supposed to work with bochs?
If yes, then I want to point out that they have not, in fact, worked since 2019.
I.e. panics are not shown in QEMU if:
1) bochs is used
2) we use "normal" panics (not drm panics)
I already reported this here:
https://lore.ke
Jamie Heilman wrote:
> Ben Skeggs wrote:
> > On 7/9/25 09:16, Jamie Heilman wrote:
> > > Rui Salvaterra wrote:
> > > > Unfortunately, bisecting is not feasible for me.
> > > That looks pretty similar to the problem I posted
> > > (https://lore.kernel.org/lkml/aeljio9_se6ta...@audible.transient.net/
Hi all,
This is a repost with some fixes and cleanups.
Differences since last posting:
1. Added patch 18: add a module option to allow pooled pages to not be stored
in the lru per-memcg
(Requested by Christian Konig)
2. Converged the naming and stats between vmstat and memcg (Suggested by
Sh
On 6/27/25 03:15, M Henning wrote:
On Tue, Jun 24, 2025 at 3:13 PM Timur Tabi wrote:
You have a good point, but I think your change, in effect, necessitates my
request. Previously, the
default was no GSP-RM unless needed. Now it's yes GSP-RM, and the concept of
"need" has been
removed. So
From: Dave Airlie
While discussing memcg integration with gpu memory allocations,
it was pointed out that there were no numa/system counters for
GPU memory allocations.
With more integrated memory GPU server systems turning up, and
more requirements for memory tracking it seems we should start
c
From: Dave Airlie
This enables all the backend code to use the list lru in memcg mode,
and set the shrinker to be memcg aware.
It adds the loop case for when pooled pages end up being reparented
to a higher memcg group, so that the newer memcg can search for them
there and take them back.
Signed-off-b
From: Dave Airlie
This adds support for adding a obj cgroup to a buffer object,
and passing in the placement flags to make sure it's accounted
properly.
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 2 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 13 +-
From: Dave Airlie
There is an existing workload that cgroup support might regress:
the systems are set up to allocate 1GB of uncached pages at system
startup to prime the pool, and any further users then take them
from the pool. The current cgroup code might handle that, but
it also may regress,
From: Dave Airlie
This just adds the obj cgroup pointer to the bo and tt structs,
and sets it between them.
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/ttm/ttm_tt.c | 1 +
include/drm/ttm/ttm_bo.h | 6 ++
include/drm/ttm/ttm_tt.h | 2 ++
3 files changed, 9 insertions(+)
diff --
From: Dave Airlie
amdgpu wants to use the objcg API without having to add ifdefs
around it, so just add a dummy function for the config-off path.
Signed-off-by: Dave Airlie
---
include/linux/memcontrol.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/memcontrol.h b/incl
From: Dave Airlie
This is needed to use get_obj_cgroup_from_current from a module.
Signed-off-by: Dave Airlie
---
mm/memcontrol.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4c8ded9501c6..4c041c5b3a15 100644
--- a/mm/memcontrol.c
+++ b/mm/memcont
From: Dave Airlie
This adds a placement flag requesting that any bo with the flag
set is accounted to memcg if it is a system memory
allocation.
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/ttm/ttm_bo.c | 2 +-
drivers/gpu/drm/ttm/ttm_bo_util.c | 6 +++---
drivers/gpu/dr
From: Dave Airlie
Later memcg enablement needs the shrinker initialised before the list lru,
so just move it for now.
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/ttm/ttm_pool.c | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_
From: Dave Airlie
This is an initial port of the TTM pools for
write combined and uncached pages to use the list_lru.
This makes the pools more NUMA-aware and avoids
needing separate NUMA pools (later commit enables this).
Cc: Christian Koenig
Cc: Johannes Weiner
Cc: Dave Chinner
Signed-off
From: Dave Airlie
The list_lru will now handle NUMA for us, so there is no need to keep
separate pool types for it. Just consolidate them into the global ones.
This also adds a debugfs change to avoid dumping non-existent orders
after this consolidation.
Cc: Christian Koenig
Cc: Johannes Weiner
Signed-off-by: Dave Airli
From: Dave Airlie
This flag does nothing yet; this just changes the APIs across all
users to accept it for future use.
The flag will eventually control when a tt populate is accounted
to a memcg.
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3
From: Dave Airlie
This is needed to use list_lru with memcg from a module. drm/ttm
wants to use this interface.
Signed-off-by: Dave Airlie
---
mm/list_lru.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 315362e3df3d..2892c1d945dd 100644
--- a/mm/list_lru
From: Dave Airlie
This introduces 2 new statistics and 3 new memcontrol APIs for dealing
with GPU system memory allocations.
The stats correspond to the same stats in the global vmstat,
for the number of active GPU pages and the number of pages in pools
that can be reclaimed.
The first API charges a
From: Dave Airlie
This gets the memory sizes from the nodes and stores the limit
as 50% of those. I think eventually we should drop the limits
once we have memcg-aware shrinking, but this should be more NUMA
friendly, and it seems like what people would prefer to
happen on NUMA-aware systems.
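The limit policy described above is simple arithmetic: cap each NUMA node's pool at half of that node's memory. A minimal sketch, with made-up example sizes:

```c
#include <assert.h>
#include <stdint.h>

/* Per-node pool limit: 50% of the node's memory size. */
static uint64_t pool_limit_for_node(uint64_t node_bytes)
{
	return node_bytes / 2;
}
```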
From: Dave Airlie
This uses the newly introduced per-node gpu tracking stats
to track GPU memory allocated via TTM and reclaimable memory in
the TTM page pools.
These stats will be useful later for system information and
later when mem cgroups are integrated.
Cc: Christian Koenig
Cc: Matthew
From: Dave Airlie
DRM/TTM wants to use this for its page pool
LRU tracking.
This is effectively a revert of
78c0ed09131b772f062b986a2fcca6600daa6285
Author: Kairui Song
Date: Tue Nov 5 01:52:53 2024 +0800
mm/list_lru: don't export list_lru_add
Cc: Kairui Song
Cc: Johannes Weiner
Cc: Sh
From: Dave Airlie
This enables NUMA awareness for the shrinker on the
ttm pools.
Cc: Christian Koenig
Cc: Dave Chinner
Signed-off-by: Dave Airlie
---
drivers/gpu/drm/ttm/ttm_pool.c | 38 +++---
1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/drivers
On 7/11/2025 2:29 PM, Simona Vetter wrote:
On Thu, Jul 10, 2025 at 11:37:14AM +0200, Christian König wrote:
On 10.07.25 11:01, Simona Vetter wrote:
On Wed, Jul 09, 2025 at 12:52:05PM -0400, Rodrigo Vivi wrote:
On Wed, Jul 09, 2025 at 05:18:54PM +0300, Raag Jadav wrote:
On Wed, Jul 09, 2025
On 14/07/2025 04:59, LiangCheng Wang wrote:
> ---
> Changes in v2:
> - Reordered patches so that DT bindings come before the driver (suggested by
> Rob Herring)
> - Fixed sparse warning: removed duplicate `.reset` initializer in
> `pixpaper_plane_funcs`
> - Fixed checkpatch issues reported by Med
On 14/07/2025 04:59, LiangCheng Wang wrote:
> From: Wig Cheng
>
> Mayqueen is a Taiwan-based company primarily focused on the development
> of arm64 development boards and e-paper displays.
>
> Signed-off-by: Wig Cheng
> ---
This is a friendly reminder during the review process.
It looks like
On 14/07/2025 04:59, LiangCheng Wang wrote:
> The binding is for the Mayqueen Pixpaper e-ink display panel,
> controlled via an SPI interface.
>
> Signed-off-by: LiangCheng Wang
This is a friendly reminder during the review process.
It looks like you received a tag and forgot to add it.
If yo
Add GDSP0 and GDSP1 fastrpc compute-cb nodes for sa8775p SoC.
Reviewed-by: Dmitry Baryshkov
Reviewed-by: Konrad Dybcio
Signed-off-by: Ling Xu
---
arch/arm64/boot/dts/qcom/sa8775p.dtsi | 57 +++
1 file changed, 57 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sa87
Currently, domain IDs are added for each instance of a domain, which
is not a scalable approach. Clean this up and create domain IDs for
only the domains, not their instances.
Co-developed-by: Srinivas Kandagatla
Signed-off-by: Srinivas Kandagatla
Signed-off-by: Ling Xu
---
drivers/misc/fast
The fastrpc driver has support for 5 types of remoteprocs. There are
some products which support GDSP remoteprocs. GDSP is General Purpose
DSP where tasks can be offloaded. Add fastrpc nodes and task offload
support for GDSP. Also use strict domain IDs for domains.
Patch [v6]:
https://lore.kernel.org/l
There are some products which support GDSP remoteprocs. GDSP is General
Purpose DSP where tasks can be offloaded. There are 2 GDSPs named gdsp0
and gdsp1. Add "gdsp0" and "gdsp1" as the new supported labels for GDSP
fastrpc domains.
Acked-by: Krzysztof Kozlowski
Signed-off-by: Ling Xu
---
Docum
Some platforms (like sa8775p) feature one or more GPDSPs (General
Purpose DSPs). Similar to other kinds of Hexagon DSPs, they provide
a FastRPC implementation, allowing code execution in both signed and
unsigned protection domains. Extend the checks to allow domain names
starting with "gdsp" (possi
The NPU cores have their own access to the memory bus, and this isn't
cache coherent with the CPUs.
Add IOCTLs so userspace can mark when the caches need to be flushed, and
also when a writer job needs to be waited for before the buffer can be
accessed from the CPU.
Initially based on the same IO
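The synchronization these IOCTLs give userspace can be modelled as a small state machine (illustrative only; this models the protocol, not the driver's actual uAPI): wait for any writer job, then invalidate caches before the CPU touches the buffer.

```c
#include <assert.h>

/* Illustrative model of a non-coherent NPU buffer. */
struct nbuf_sketch {
	int writer_job_done;	/* has the NPU writer job signalled? */
	int cpu_cache_valid;	/* are CPU caches safe to read from? */
};

/* Prepare the buffer for CPU access: the real IOCTL would block on
 * the writer job and perform the cache maintenance. */
static int cpu_begin_access(struct nbuf_sketch *b)
{
	if (!b->writer_job_done)
		return -1;	/* would wait for the NPU job here */
	b->cpu_cache_valid = 1;	/* caches invalidated for CPU reads */
	return 0;
}
```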
Using the DRM GPU scheduler infrastructure, with a scheduler for each
core.
Userspace can request that a series of tasks be executed sequentially
on the same core, so SRAM locality can be taken advantage of.
The job submission code was initially based on Panfrost.
v2:
- Remove hardcoded number
Add the bindings for the Neural Processing Unit IP from Rockchip.
v2:
- Adapt to new node structure (one node per core, each with its own
IOMMU)
- Several misc. fixes from Sebastian Reichel
v3:
- Split register block in its constituent subblocks, and only require
the ones that the kernel woul
This initial version supports the NPU as shipped in the RK3588 SoC and
described in the first part of its TRM, in Chapter 36.
This NPU contains 3 independent cores that the driver can submit jobs
to.
This commit adds just hardware initialization and power management.
v2:
- Split cores and IOMMUs
This uses the SHMEM DRM helpers and we map right away to the CPU and NPU
sides, as all buffers are expected to be accessed from both.
v2:
- Sync the IOMMUs for the other cores when mapping and unmapping.
v3:
- Make use of GPL-2.0-only for the copyright notice (Jeff Hugo)
v6:
- Use mutexes guard
This series adds a new driver for the NPU that Rockchip includes in its
newer SoCs, developed by them on the NVDLA base.
In its current form, it supports the specific NPU in the RK3588 SoC.
The userspace driver is part of Mesa and an initial draft can be found at:
https://gitlab.freedesktop.org/
See Chapter 36 "RKNN" from the RK3588 TRM (Part 1).
The IP is divided into three cores, programmed independently. The first
core, though, is special, being able to delegate work to the other cores.
The IOMMU of the first core is also special in that it has two subunits
(read/write?) that need to be p
Enable the nodes added in a previous commit to the rk3588s device tree.
v2:
- Split nodes (Sebastian Reichel)
- Sort nodes (Sebastian Reichel)
- Add board regulators (Sebastian Reichel)
v8:
- Remove notion of top core (Robin Murphy)
Tested-by: Heiko Stuebner
Signed-off-by: Tomeu Vizoso
---
..
From: Nicolas Frattaroli
The NPU of the RK3588 has an external supply. This supply also affects
the power domain of the NPU, not just the NPU device nodes themselves.
Since correctly modelled boards will want the power domain to be aware
of the regulator so that it doesn't always have to be on, a
From: Nicolas Frattaroli
The NPU on the ROCK5B uses the same regulator for both the sram-supply
and the npu's supply. Add this regulator, and enable all the NPU bits.
Also add the regulator as a domain-supply to the pd_npu power domain.
v8:
- Remove notion of top core (Robin Murphy)
Signed-off-