On Fri, Aug 30, 2024 at 07:36:32PM +0200, György Kurucz wrote:
> For context, I have a Lenovo Yoga Slim 7x laptop, and was having issues
> with the display staying black after sleep. As a workaround, I could
> switch to a different VT and back.
>
> > [ 1185.831970] [dpu error]connector not conn
When requesting a DDR bandwidth level along with a GPU frequency
level via the GMU, we can also specify the bus bandwidth usage as a 16-bit
quantized value.
For now, simply request the maximum bus usage.
Signed-off-by: Neil Armstrong
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 11 +++
driver
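For illustration, a minimal sketch of the idea, assuming a hypothetical vote
structure where the bus usage field carries the 16-bit quantized value
described above (GMU_BUS_USAGE_MAX, struct gmu_bw_vote and gmu_make_bw_vote()
are made-up names, not the driver's actual HFI interface):

#include <linux/types.h>

/* Illustrative only: 0xffff is the maximum of the 16-bit quantized bus
 * usage, i.e. "request the maximum bus bandwidth" for this GPU level.
 */
#define GMU_BUS_USAGE_MAX	0xffffU

struct gmu_bw_vote {
	u32 freq_index;		/* index into the GMU frequency table */
	u32 bus_usage;		/* 16-bit quantized bus usage */
};

static struct gmu_bw_vote gmu_make_bw_vote(u32 freq_index)
{
	return (struct gmu_bw_vote) {
		.freq_index = freq_index,
		/* For now, always request the maximum bus usage. */
		.bus_usage = GMU_BUS_USAGE_MAX,
	};
}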
Make hda_get_mode_idx() accept a const struct drm_display_mode pointer
instead of taking a plain struct drm_display_mode. This is a preparation for
converting the mode_valid() callback of drm_connector to accept a const
struct drm_display_mode argument.
Signed-off-by: Dmitry Baryshkov
---
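As an illustration of why the const conversion is safe (a sketch with a
generic lookup table, not the actual sti driver code), a mode-lookup helper
only reads the mode it is given, so a const pointer is sufficient:

#include <drm/drm_modes.h>

/* Hypothetical lookup helper: compares the requested mode against a
 * table of supported modes and returns its index, or -1 if not found.
 */
static int example_get_mode_idx(const struct drm_display_mode *mode,
				const struct drm_display_mode *table,
				int count)
{
	int i;

	for (i = 0; i < count; i++)
		if (drm_mode_equal(&table[i], mode))
			return i;

	return -1;
}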
Hi Dmitry,
On 15/11/2024 at 22:09, Dmitry Baryshkov wrote:
The mode_valid() callbacks of drm_encoder, drm_crtc and drm_bridge
accept a const struct drm_display_mode argument. Change the mode_valid()
callback of drm_connector to also accept a const argument.
Signed-off-by: Dmitry Baryshkov
---
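After this conversion a connector driver's mode_valid() implementation takes
the mode as const; a minimal sketch of such a callback (the example driver
code and the 300 MHz pixel clock limit are invented for illustration):

#include <drm/drm_connector.h>
#include <drm/drm_modes.h>
#include <drm/drm_modeset_helper_vtables.h>

static enum drm_mode_status
example_connector_mode_valid(struct drm_connector *connector,
			     const struct drm_display_mode *mode)
{
	/* The callback only inspects the mode, e.g. to reject pixel
	 * clocks the hardware cannot drive (mode->clock is in kHz).
	 */
	if (mode->clock > 300000)
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}

static const struct drm_connector_helper_funcs example_connector_helper_funcs = {
	.mode_valid = example_connector_mode_valid,
};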
Hi Dmitry,
On Tue, Nov 12, 2024 at 3:15 PM Akhil P Oommen wrote:
>
> On 11/11/2024 8:38 PM, Rob Clark wrote:
> > On Sun, Nov 10, 2024 at 9:31 AM Bjorn Andersson
> > wrote:
> >>
> >> Support for per-process page tables requires the SMMU aperture to be
> >> set up such that the GPU can make updates with the SM
On Tue, Nov 19, 2024 at 09:33:26AM -0500, Leonard Lausen wrote:
> > I'm seeing the same issue as György on the x1e80100 CRD and Lenovo
> > ThinkPad T14s. Without this patch, the internal display fails to resume
> > properly (switching VT brings it back) and the following errors are
> > logged:
> >
The Adreno GPU Management Unit (GMU) can also scale the DDR bandwidth
along with the frequency and power domain level, but by default we let the
OPP core scale the interconnect DDR path.
In order to calculate the vote values used by the GPU Management
Unit (GMU), we need to parse all the possible OPP bandwidths
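A rough sketch of that parsing step, assuming a bandwidth getter shaped like
the dev_pm_opp_get_bw() helper introduced later in this series (its exact
prototype and the fixed-size destination array are assumptions of this
sketch):

#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/types.h>

/* Walk every OPP of the GPU device and record its peak bandwidth so a
 * GMU vote value can later be derived per level (illustrative only).
 */
static int example_collect_opp_bw(struct device *dev, u32 *peak_kbps,
				  int max_levels)
{
	unsigned long freq = 0;
	struct dev_pm_opp *opp;
	int avg_kbps, count = 0;

	while (count < max_levels &&
	       !IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
		/* assumed helper: 'true' selects the peak bandwidth */
		peak_kbps[count++] = dev_pm_opp_get_bw(opp, true, &avg_kbps);
		dev_pm_opp_put(opp);
		freq++;
	}

	return count;
}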
The Adreno GPU Management Unit (GMU) can also scale the DDR bandwidth
along with the frequency and power domain level; until now we left it to
the OPP core to scale the bandwidth via the interconnect path.
In order to enable bandwidth voting via the GPU Management
Unit (GMU), when an OPP is set by devfreq w
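A self-contained sketch of the lookup such a path might use when devfreq sets
an OPP: map the OPP's peak bandwidth to the closest level of a pre-built GMU
bandwidth table, so it can be voted together with the frequency (all names
here are illustrative):

#include <linux/types.h>

struct example_gmu_bw_table {
	u32 nr_levels;
	u32 peak_kbps[16];	/* ascending order, level 0 = disable */
};

/* Return the highest bandwidth level whose peak does not exceed the
 * bandwidth requested by the OPP being set (illustrative only).
 */
static u32 example_gmu_bw_index(const struct example_gmu_bw_table *tbl,
				u32 opp_peak_kbps)
{
	u32 i, idx = 0;

	for (i = 0; i < tbl->nr_levels; i++)
		if (tbl->peak_kbps[i] <= opp_peak_kbps)
			idx = i;

	return idx;
}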
Each GPU OPP requires a specific peak DDR bandwidth; let's add
those to each OPP, along with the related interconnect path.
Signed-off-by: Neil Armstrong
---
arch/arm64/boot/dts/qcom/sm8550.dtsi | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
The Adreno GPU Management Unit (GMU) can also vote for DDR bandwidth
along with the frequency and power domain level, but by default we let the
OPP core scale the interconnect DDR path.
While scaling the interconnect path was sufficient so far, newer GPUs
like the A750 require specific vote parameters and
Now that the feature defines have the right names, introduce a features
bitfield and move the feature defines into it, fixing all the code that
checks for them.
No functional changes intended.
Signed-off-by: Neil Armstrong
---
drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 34 +++---
d
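A minimal sketch of the pattern (the feature names and the struct are
placeholders, not the actual adreno catalog definitions):

#include <linux/bits.h>
#include <linux/types.h>

/* Hypothetical feature flags collected into a single bitfield. */
#define EXAMPLE_FEAT_PREEMPTION		BIT(0)
#define EXAMPLE_FEAT_GMU_BW_VOTE	BIT(1)

struct example_gpu_features {
	u64 features;
};

static bool example_has_feature(const struct example_gpu_features *info,
				u64 feature)
{
	return info->features & feature;
}

/* usage: if (example_has_feature(info, EXAMPLE_FEAT_GMU_BW_VOTE)) ... */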
Add and implement dev_pm_opp_get_bw() to retrieve
the OPP's bandwidth, in the same way as the dev_pm_opp_get_voltage()
helper.
Retrieving the bandwidth is required in the case of the Adreno GPU,
where the GPU Management Unit can handle the bandwidth scaling.
The helper can get the peak or average bandwidth.
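A hedged usage sketch, assuming the new helper ends up with a prototype along
the lines of dev_pm_opp_get_bw(opp, peak, &avg_kbps); the exact signature is
defined by the OPP patch itself:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

/* Read the voltage and bandwidth of a single OPP (illustrative only). */
static void example_print_opp(struct device *dev, unsigned long freq)
{
	struct dev_pm_opp *opp;
	unsigned long volt, peak_kbps;
	int avg_kbps = 0;

	opp = dev_pm_opp_find_freq_exact(dev, freq, true);
	if (IS_ERR(opp))
		return;

	volt = dev_pm_opp_get_voltage(opp);
	/* assumed: 'true' selects peak, the average comes back by pointer */
	peak_kbps = dev_pm_opp_get_bw(opp, true, &avg_kbps);

	dev_info(dev, "%lu Hz: %lu uV, peak %lu kBps, avg %d kBps\n",
		 freq, volt, peak_kbps, avg_kbps);

	dev_pm_opp_put(opp);
}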
The Adreno GPU Management Unit (GMU) can also scale the DDR
bandwidth along with the frequency and power domain level, but for
now we statically fill the bw_table with values from the
downstream driver.
Only the first entry is used, which is a disable vote, so we
currently rely on scaling via the Linux
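To illustrate the shape of such a table (field names and values are
placeholders standing in for the statically filled bw_table):

#include <linux/types.h>

/* Illustrative static bandwidth table: only entry 0, the disable vote,
 * is used for now; further levels are placeholders until the GMU
 * actually scales the DDR bandwidth.
 */
struct example_bw_entry {
	u32 ddr_vote;	/* opaque vote value taken from downstream */
};

static const struct example_bw_entry example_bw_table[] = {
	{ .ddr_vote = 0x0 },	/* level 0: disable vote */
	{ .ddr_vote = 0x1234 },	/* level 1+: currently unused */
};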
Each GPU OPP requires a specific peak DDR bandwidth; let's add
those to each OPP, along with the related interconnect path.
Signed-off-by: Neil Armstrong
---
arch/arm64/boot/dts/qcom/sm8650.dtsi | 14 ++
1 file changed, 14 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi
Hi Johan,
> I'm seeing the same issue as György on the x1e80100 CRD and Lenovo
> ThinkPad T14s. Without this patch, the internal display fails to resume
> properly (switching VT brings it back) and the following errors are
> logged:
>
> [dpu error]connector not connected 3
> [drm:drm_
Now that all the DDR bandwidth voting via the GPU Management Unit (GMU)
is in place, declare the Bus Control Modules (BCMs) and the
corresponding parameters in the GPU info struct, and add the
GMU_BW_VOTE feature bit to enable it.
Signed-off-by: Neil Armstrong
---
drivers/gpu/drm/msm/adreno/a6xx_catal
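A sketch of what such a per-GPU declaration could look like (the struct
layout, BCM names and feature macro are illustrative placeholders, not the
actual a6xx catalog entries):

#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_FEAT_GMU_BW_VOTE	BIT(1)	/* illustrative */

struct example_gpu_bcm {
	const char *name;	/* Bus Control Module name, e.g. "SH0" */
	u32 buswidth;		/* used to scale the bandwidth into a vote */
};

struct example_gpu_info {
	u64 features;
	const struct example_gpu_bcm *bcms;	/* sentinel-terminated */
};

static const struct example_gpu_info example_gpu = {
	.features = EXAMPLE_FEAT_GMU_BW_VOTE,
	.bcms = (const struct example_gpu_bcm[]) {
		{ .name = "SH0", .buswidth = 16 },
		{ .name = "MC0", .buswidth = 4 },
		{ /* sentinel */ },
	},
};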