From: Thomas Zimmermann Sent: Thursday, March 9, 2023 8:01 AM
>
> Assume that the driver does not own the option string or its substrings
> and hence duplicate the option string for the video mode. As the driver
> implements a very simple mode parser in a fairly unstructured way, just
> duplicate the whole option string.
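A minimal sketch of the duplication, assuming a hypothetical sdev->mode_option field that the driver later hands to its mode parser (the real patch's field names may differ):

	/* Take an owned copy; the caller may free or reuse its string. */
	sdev->mode_option = kstrdup(option, GFP_KERNEL);
	if (!sdev->mode_option)
		return -ENOMEM;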
smatch reports
drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c:51:1:
warning: symbol 'ga100_mc_device' was not declared. Should it be static?
ga100_mc_device is only used in ga100.c, so it should be static
Signed-off-by: Tom Rix
---
drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
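The fix is a one-line change; a sketch in diff form (the struct type name is illustrative, the point is the added storage-class specifier):

	-const struct nvkm_mc_device ga100_mc_device = {
	+static const struct nvkm_mc_device ga100_mc_device = {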
Hi all,
Currently, we are working to add VirtIO GPU and passthrough GPU support on
Xen. We expect to use HVM on domU and PVH on dom0. The x86 PVH dom0
support needs a few modifications on our APU platform. These functions
require support from multiple software components, including the kernel, Xen, and QEMU.
The Xen grant table will be initialized before the PCI resources are parsed,
so xen_alloc_unpopulated_pages() ends up using a range from the PCI
window because Linux hasn't parsed the PCI information yet.
So modify the initialization order to make sure the real PCI resources
are parsed beforehand.
Signed-off-by:
Xen PVH is the paravirtualized mode that takes advantage of hardware
virtualization support when possible. It uses the hardware IOMMU
support instead of xen-swiotlb, so disable swiotlb if the current
domain is Xen PVH.
Signed-off-by: Huang Rui
---
arch/x86/kernel/pci-dma.c | 8 +++-
1 file
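A minimal sketch of the idea, assuming it hooks into the existing swiotlb detection code in arch/x86/kernel/pci-dma.c (the exact hook and flag may differ in the real patch):

	/* A PVH domain uses the hardware IOMMU, so xen-swiotlb is not needed. */
	if (xen_pvh_domain()) {
		x86_swiotlb_enable = false;
		return;
	}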
There is a second-stage translation between the guest machine address
and the host machine address in Xen PVH/HVM. The PCI BAR addresses in
the Xen guest kernel are not translated at the second stage on Xen
PVH/HVM, so they are not the real physical addresses the hardware
needs; we need to se
From: Chen Jiqian
Add acpi_register_gsi_xen_pvh() to register the gsi for PVH mode.
In addition to calling acpi_register_gsi_ioapic(), it also sets up
a mapping between the gsi and vector on the hypervisor side, so that
when the dGPU creates an interrupt, the hypervisor can correctly find
which guest domain should process the interrupt.
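A sketch of the helper's shape; xen_pvh_setup_gsi() is a hypothetical wrapper for the hypervisor call that records the gsi-to-vector mapping, and acpi_register_gsi_ioapic() is assumed to be callable from here:

	int acpi_register_gsi_xen_pvh(struct device *dev, u32 gsi,
				      int trigger, int polarity)
	{
		int irq = acpi_register_gsi_ioapic(dev, gsi, trigger, polarity);

		/* hypothetical: tell the hypervisor about this gsi */
		if (irq >= 0)
			xen_pvh_setup_gsi(gsi, trigger, polarity);

		return irq;
	}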
From: Chen Jiqian
When the hypervisor gets an interrupt, it needs the interrupt's
gsi number instead of the irq number. The gsi number is unique
in Xen, but the irq number is only unique within one domain.
So we need to record the relationship between irq and
gsi when dom0 initializes PCI devices, and provide syscall
IO
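A sketch of the bookkeeping with hypothetical names; dom0 would append one entry per device interrupt during PCI init, and the new interface would look the gsi up by irq:

	struct irq_gsi_entry {		/* hypothetical */
		struct list_head list;
		int irq;
		u32 gsi;
	};

	static LIST_HEAD(irq_gsi_list);

	static void record_irq_gsi(int irq, u32 gsi)
	{
		struct irq_gsi_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			return;
		e->irq = irq;
		e->gsi = gsi;
		list_add_tail(&e->list, &irq_gsi_list);
	}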
On 10.03.23 11:20, Karol Herbst wrote:
> On Fri, Mar 10, 2023 at 10:26 AM Chris Clayton
> wrote:
>>
>> Is it likely that this fix will be submitted to mainline during the ongoing
>> 6.3 development cycle?
>>
>
> yes, it's already pushed to drm-misc-fixes, which then will go into
> the current release
drm_fb_helper_initial_config() returns an int and may fail, so add
error handling for it to reclaim memory resources and return when an
error occurs.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: d38ceaf99ed0 ("drm/amdgpu: add core driver (v4)")
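A sketch of the added error path, assuming the recent single-argument form of drm_fb_helper_initial_config() (older trees also pass a bpp selector):

	ret = drm_fb_helper_initial_config(fb_helper);
	if (ret) {
		drm_fb_helper_fini(fb_helper);	/* reclaim helper resources */
		return ret;
	}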
Hi, Justin:
Justin Green wrote on Friday, March 10, 2023 at 5:05 AM:
>
> This patch series adds support for 10-bit overlays to the Mediatek DRM driver.
> Specifically, we add support for AR30 and BA30 overlays on MT8195 devices and
> lay the groundwork for supporting more 10-bit formats on more devices.
I've applied
Hi, Chen-Yu:
Chen-Yu Tsai wrote on Thursday, February 2, 2023 at 12:57 PM:
>
> The MediaTek DisplayPort interface bridge driver starts its interrupts
> as soon as its probed. However when the interrupts trigger the bridge
> might not have been attached to a DRM device. As drm_helper_hpd_irq_event()
> does not check whether
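The fix presumably guards the HPD report until attach; a minimal sketch, assuming the bridge keeps a pointer to the DRM device once attached (the field name is illustrative):

	/* Only report HPD once the bridge is attached to a DRM device. */
	if (mtk_dp->drm_dev)
		drm_helper_hpd_irq_event(mtk_dp->drm_dev);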
The VCN firmware loading path enables the indirect SRAM mode if it's
advertised as supported. We might have some cases of FW issues that
prevent this mode from working properly though, ending up in a failed
probe. An example below, observed in the Steam Deck:
[...]
[drm] failed to load ucode VCN0_R
Hi,
On 3/8/23 22:58, Hans de Goede wrote:
> The parent for the backlight device should be the drm-connector object,
> not the PCI device.
>
> Userspace relies on this to be able to detect which backlight class device
> to use on hybrid gfx devices where there may be multiple native (raw)
> backlight devices registered.
Hi All,
Here is version 3 of my patch series to pass the proper parent device
to backlight_device_register().
Changes in v3:
- Make amdgpu_dm_register_backlight_device() check bl_idx != -1 before
registering the backlight since amdgpu_dm_connector_late_register()
now calls it for _all_ connectors
backlight_device_register() returns an ERR_PTR on error, but other code
such as amdgpu_dm_connector_destroy() assumes dm->backlight_dev[i] is NULL
if no backlight is registered.
Clear dm->backlight_dev[i] on registration failure, to avoid other code
trying to deref an ERR_PTR pointer.
Signed-off-by:
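A sketch of the fix, with the registration arguments abbreviated from the amdgpu code (bl_name, ops and props as usually set up in that function):

	dm->backlight_dev[dm->num_of_edps] =
		backlight_device_register(bl_name, aconnector->base.kdev, dm,
					  &amdgpu_dm_backlight_ops, &props);
	if (IS_ERR(dm->backlight_dev[dm->num_of_edps])) {
		DRM_ERROR("DM: Backlight registration failed!\n");
		dm->backlight_dev[dm->num_of_edps] = NULL;	/* not ERR_PTR */
	}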
Refactor register_backlight_device():
1) Turn the connector-type + signal check into an early exit
condition to avoid the indentation level of the rest of the code
2) Add an array bounds check for the arrays indexed by dm->num_of_edps
3) register_backlight_device() always increases dm->num_of_edps
Currently functions like update_connector_ext_caps() and
amdgpu_dm_connector_destroy() are iterating over dm->backlight_link[i]
to find the index of the (optional) backlight_dev associated with
the connector.
Instead make register_backlight_device() store the dm->backlight_dev[]
index used for the connector.
Make amdgpu_dm_register_backlight_device() take an amdgpu_dm_connector
pointer to the connector for which it should register the backlight
as its only argument.
This is a preparation patch for moving the actual backlight class device
registering to drm_connector_funcs.late_register.
Signed-off-by
The parent for the backlight device should be the drm-connector object,
not the PCI device.
Userspace relies on this to be able to detect which backlight class device
to use on hybrid gfx devices where there may be multiple native (raw)
backlight devices registered.
Specifically gnome-settings-daemon
Rename register_backlight_device() to setup_backlight_device()
and move all backlight setup related calls from
amdgpu_dm_register_backlight_device() and from
amdgpu_dm_initialize_drm_device() there.
This leaves amdgpu_dm_register_backlight_device() dealing purely
with registering the actual backlight class device.
I am trying to work through a series that was submitted for enabling
the DSI on the i.MX8M Mini and Nano. I have extended this series to
route the DSI to an HDMI bridge, and I am able to get several
resolutions to properly sync on my monitor. However, there are also a
bunch that appear on the list
From: Rob Clark
Inspired by
https://lore.kernel.org/dri-devel/20200604081224.863494-10-daniel.vet...@ffwll.ch/
it seemed like a good idea to get rid of memory allocation in job_run()
and use lockdep annotations to yell at us about anything that could
deadlock against shrinker/reclaim. Anything
From: Rob Clark
Add a way to initialize a fence without touching the refcount. This is
useful, for example, if the fence is embedded in a drm_sched_job. In
this case the refcount will be initialized before the job is queued.
But the seqno of the hw_fence is not known until job_run().
Signed-of
From: Rob Clark
Avoid allocating memory in job_run() by embedding the fence in the
submit object. Since msm gpu fences are always 1:1 with msm_gem_submit
we can just use the fence's refcnt to track the submit. And since we
can get the fence ctx from the submit we can just drop the msm_fence
struct
From: Rob Clark
It is already a no-op, since we've already loaded the fw from
adreno_load_gpu(), so drop the redundant call.
Signed-off-by: Rob Clark
---
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 9 +
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/adre
From: Rob Clark
These allocations are only done the first (successful) time through
hw_init() so they won't actually happen in the job_run() path. But
lockdep doesn't know this. So disentangle them from the hw_init()
path.
Signed-off-by: Rob Clark
---
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
From: Rob Clark
In the process of adding lockdep annotation for GPU job_run() path to
catch potential deadlocks against the shrinker/reclaim path, I turned
up this lockdep splat:
==
WARNING: possible circular locking dependency detected
From: Rob Clark
Move the one-time RPMh setup to a6xx_gmu_init(). To get rid of the hack
for one-time init vs start, add in an extra a6xx_rpmh_stop() at the end
of the init sequence.
Signed-off-by: Rob Clark
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 18 --
1 file changed, 8 insertions(+), 10 deletions(-)
From: Rob Clark
This will make it easier to catch places doing allocations that can
trigger reclaim under devfreq->lock.
Because devfreq->lock is held over various devfreq_dev_profile
callbacks, there might be some fallout if those callbacks do allocations
that can trigger reclaim, but I've looked
From: Rob Clark
Similar to the previous patch, move the allocation out from under
dev_pm_qos_mtx, by speculatively doing the allocation and handle
any race after acquiring dev_pm_qos_mtx by freeing the redundant
allocation.
Signed-off-by: Rob Clark
---
drivers/base/power/qos.c | 12 +++
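A sketch of the pattern, simplified from what the real patch presumably does around dev->power.qos:

	/* Allocate before taking the lock, where reclaim is still safe. */
	struct dev_pm_qos *qos = kzalloc(sizeof(*qos), GFP_KERNEL);

	mutex_lock(&dev_pm_qos_mtx);
	if (!dev->power.qos)
		dev->power.qos = qos;
	else
		kfree(qos);	/* lost the race; free the redundant copy */
	mutex_unlock(&dev_pm_qos_mtx);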
From: Rob Clark
Annotate dev_pm_qos_mtx to teach lockdep to scream about allocations
that could trigger reclaim under dev_pm_qos_mtx.
Signed-off-by: Rob Clark
---
drivers/base/power/qos.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/drivers/base/power/qos.c b/drivers/base/p
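The annotation itself is the usual fs_reclaim priming trick; a sketch, assuming the real patch wires it into a one-time init path:

	/* Teach lockdep the ordering wrt. reclaim: any allocation that can
	 * trigger reclaim while holding dev_pm_qos_mtx now splats. */
	fs_reclaim_acquire(GFP_KERNEL);
	might_lock(&dev_pm_qos_mtx);
	fs_reclaim_release(GFP_KERNEL);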
From: Rob Clark
Teach lockdep that icc_bw_lock is needed in code paths that could
deadlock if they trigger reclaim.
Signed-off-by: Rob Clark
---
drivers/interconnect/core.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/interconnect/core.c b/drivers/inter
From: Rob Clark
For cases where icc_set_bw() can be called in code paths that could
deadlock against shrinker/reclaim, such as runpm resume, we need to
decouple the icc locking. Introduce a new icc_bw_lock for cases where
we need to serialize bw aggregation and update to decouple that from
paths
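What introducing the lock boils down to, as a sketch (initialization and exact scope per the real patch):

	/* Serializes bw aggregation/update only; unlike icc_lock, nothing
	 * that allocates (and may recurse into reclaim) runs under it. */
	static DEFINE_MUTEX(icc_bw_lock);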
From: Rob Clark
In the process of adding lockdep annotation for drm GPU scheduler's
job_run() to detect potential deadlock against shrinker/reclaim, I hit
this lockdep splat:
==
WARNING: possible circular locking dependency detected
6.
From: Rob Clark
Preparing for better lockdep annotations for things that happen in runpm
suspend/resume path vs shrinker/reclaim in the following patches, we
need to avoid allocations that can trigger reclaim in the icc_set_bw()
path. In the RPMh case, rpmh_write_batch() already uses GFP_ATOMIC,
Expose lima gp and pp usage stats through fdinfo, following
Documentation/gpu/drm-usage-stats.rst.
Borrowed from these previous implementations:
"df622729ddbf drm/scheduler: track GPU active time per entity" added
usage time accounting to drm scheduler, which is where the data used
here comes from
This exposes an accumulated active time per client via the fdinfo
infrastructure per execution engine, following
Documentation/gpu/drm-usage-stats.rst.
In lima, the exposed execution engines are gp and pp.
Signed-off-by: Erico Nunes
---
drivers/gpu/drm/lima/lima_drv.c | 31 ++
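Per Documentation/gpu/drm-usage-stats.rst the keys take the form "drm-engine-<name>: <ns> ns"; a sketch of the fdinfo hook, where the usage[] accounting field is illustrative:

	static void lima_show_fdinfo(struct seq_file *m, struct file *f)
	{
		struct drm_file *file = f->private_data;
		struct lima_drm_priv *priv = file->driver_priv;

		/* usage[] is an illustrative per-pipe active-time counter */
		seq_printf(m, "drm-engine-gp:\t%llu ns\n", priv->usage[lima_pipe_gp]);
		seq_printf(m, "drm-engine-pp:\t%llu ns\n", priv->usage[lima_pipe_pp]);
	}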
To track if fds are pointing to the same execution context and export
the expected information to fdinfo, similar to what is done in other
drivers.
Signed-off-by: Erico Nunes
---
drivers/gpu/drm/lima/lima_device.h | 3 +++
drivers/gpu/drm/lima/lima_drv.c | 12
drivers/gpu/drm/li
lima maintains a context manager per drm_file, similar to amdgpu.
In order to account for the complete usage per drm_file, all of the
associated contexts need to be considered.
Previously released contexts also need to be accounted for but their
drm_sched_entity info is gone once they get released,
This bug was first reported here:
https://lore.kernel.org/lkml/1a620e7c-5b71-3d16-001a-0d79b292a...@amd.com/
I modified the patch according to the mailing list's discussion, and I ran
reboot tests tens of thousands of times on about 10 arm64 machines with
no bug reported.
On 2023/3/10 16:18, Che
On Mon, Mar 13, 2023 at 7:31 AM Erico Nunes wrote:
>
> This exposes an accumulated active time per client via the fdinfo
> infrastructure per execution engine, following
> Documentation/gpu/drm-usage-stats.rst.
> In lima, the exposed execution engines are gp and pp.
>
> Signed-off-by: Erico Nunes
Patch set is:
Reviewed-by: Qiang Yu
Looks like drm-misc-next does not contain "df622729ddbf drm/scheduler:
track GPU active time per entity" yet.
Will apply later.
Regards,
Qiang
On Mon, Mar 13, 2023 at 7:31 AM Erico Nunes wrote:
>
> Expose lima gp and pp usage stats through fdinfo, following
Patch is:
Reviewed-by: Qiang Yu
On Sat, Feb 25, 2023 at 5:41 AM Maíra Canal wrote:
>
> As lima_gem_add_deps() performs the same steps as
> drm_sched_job_add_syncobj_dependency(), replace the open-coded
> implementation in Lima in order to simply use the DRM function.
>
> Signed-off-by: Maíra Canal
On Fri, 2023-03-10 at 08:47 -0600, Rob Herring wrote:
> It is preferred to use typed property access functions (i.e.
> of_property_read_ functions) rather than low-level
> of_get_property/of_find_property functions for reading properties. As
> part of this, convert of_get_property/of_find_property
Appropriate maintainers should be suggested for changes to the
include/drm/drm_bridge.h header file, so add the header file to the
'DRM DRIVERS FOR BRIDGE CHIPS' section.
Signed-off-by: Liu Ying
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 3
Add a warning in order to catch issues in other drivers and ensure the
proper call sequence of the polling function.
v2: drop Fixes tag in commit message (Bert & Jani)
v3: use drm_WARN_ON instead of WARN_ON (Jani)
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2411
Reported-by: Bert Karwatzki
Suggested-by: Dmitry
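A sketch of such a check, assuming it lands in the poll-disable path and that the tracked state is mode_config.poll_enabled (the real patch may track a different flag):

	/* Warn if polling is torn down without ever having been enabled. */
	drm_WARN_ON(dev, !dev->mode_config.poll_enabled);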