Re: [Intel-gfx] [PATCH v2] drm: Check actual format for legacy pageflip.

2021-01-11 Thread Alex Deucher
On Sat, Jan 9, 2021 at 9:11 PM Bas Nieuwenhuizen
 wrote:
>
> With modifiers one can actually have different format_info structs
> for the same format, which now matters for AMDGPU since we convert
> implicit modifiers to explicit modifiers with multiple planes.
>
> I checked other drivers and it doesn't look like they end up triggering
> this case so I think this is safe to relax.
>
> Signed-off-by: Bas Nieuwenhuizen 
> Reviewed-by: Daniel Vetter 
> Reviewed-by: Zhan Liu 
> Acked-by: Christian König 
> Acked-by: Alex Deucher 
> Fixes: 816853f9dc40 ("drm/amd/display: Set new format info for converted metadata.")

Do you have commit rights to drm-misc or do you need someone to commit
this for you?

Thanks!

Alex

> ---
>  drivers/gpu/drm/drm_plane.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
> index e6231947f987..a0cb746bcb0a 100644
> --- a/drivers/gpu/drm/drm_plane.c
> +++ b/drivers/gpu/drm/drm_plane.c
> @@ -1163,7 +1163,14 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
> if (ret)
> goto out;
>
> -   if (old_fb->format != fb->format) {
> +   /*
> +* Only check the FOURCC format code, excluding modifiers. This is
> +* enough for all legacy drivers. Atomic drivers have their own
> +* checks in their ->atomic_check implementation, which will
> +* return -EINVAL if any hw or driver constraint is violated due
> +* to modifier changes.
> +*/
> +   if (old_fb->format->format != fb->format->format) {
> DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
> ret = -EINVAL;
> goto out;
> --
> 2.29.2
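
For illustration, a minimal sketch of what the relaxed check amounts to, assuming
only the drm_framebuffer and drm_format_info fields used in the patch (with
modifiers, two framebuffers can share a fourcc yet carry distinct format_info
pointers, e.g. when a driver supplies an info with extra metadata planes):

#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>

/* Pointer equality on drm_format_info is stricter than the legacy page-flip
 * path needs; only the FOURCC code has to match. */
static bool legacy_flip_format_compatible(const struct drm_framebuffer *old_fb,
					  const struct drm_framebuffer *fb)
{
	return old_fb->format->format == fb->format->format;
}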
>


Re: [Intel-gfx] [PATCH v2] drm: Check actual format for legacy pageflip.

2021-01-11 Thread Alex Deucher
On Mon, Jan 11, 2021 at 11:39 AM Bas Nieuwenhuizen
 wrote:
>
> On Mon, Jan 11, 2021 at 4:02 PM Alex Deucher  wrote:
> >
> > On Sat, Jan 9, 2021 at 9:11 PM Bas Nieuwenhuizen
> >  wrote:
> > >
> > > With modifiers one can actually have different format_info structs
> > > for the same format, which now matters for AMDGPU since we convert
> > > implicit modifiers to explicit modifiers with multiple planes.
> > >
> > > I checked other drivers and it doesn't look like they end up triggering
> > > this case so I think this is safe to relax.
> > >
> > > Signed-off-by: Bas Nieuwenhuizen 
> > > Reviewed-by: Daniel Vetter 
> > > Reviewed-by: Zhan Liu 
> > > Acked-by: Christian König 
> > > Acked-by: Alex Deucher 
> > > Fixes: 816853f9dc40 ("drm/amd/display: Set new format info for converted metadata.")
> >
> > Do you have commit rights to drm-misc or do you need someone to commit
> > this for you?
>
> I don't have commit rights so if the patch could be committed for me
> that would be appreciated!

Pushed to drm-misc-fixes.  Thanks!

If you want access to drm-misc, I don't see any reason you shouldn't have it.

Alex


> >
> > Thanks!
> >
> > Alex
> >
> > > ---
> > >  drivers/gpu/drm/drm_plane.c | 9 -
> > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
> > > index e6231947f987..a0cb746bcb0a 100644
> > > --- a/drivers/gpu/drm/drm_plane.c
> > > +++ b/drivers/gpu/drm/drm_plane.c
> > > @@ -1163,7 +1163,14 @@ int drm_mode_page_flip_ioctl(struct drm_device 
> > > *dev,
> > > if (ret)
> > > goto out;
> > >
> > > -   if (old_fb->format != fb->format) {
> > > +   /*
> > > +* Only check the FOURCC format code, excluding modifiers. This is
> > > +* enough for all legacy drivers. Atomic drivers have their own
> > > +* checks in their ->atomic_check implementation, which will
> > > +* return -EINVAL if any hw or driver constraint is violated due
> > > +* to modifier changes.
> > > +*/
> > > +   if (old_fb->format->format != fb->format->format) {
> > > DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
> > > ret = -EINVAL;
> > > goto out;
> > > --
> > > 2.29.2
> > >


Re: [Intel-gfx] [PATCH 3/4] drm/amd/display: Add DP 2.0 MST DC Support

2021-10-20 Thread Alex Deucher
On Wed, Oct 20, 2021 at 3:50 PM Bhawanpreet Lakha
 wrote:
>
> From: Fangzhi Zuo 

Please include a patch description.

Alex

>
> Signed-off-by: Fangzhi Zuo 
> ---
>  drivers/gpu/drm/amd/display/dc/core/dc.c  |  14 +
>  drivers/gpu/drm/amd/display/dc/core/dc_link.c | 280 ++
>  .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |  19 ++
>  drivers/gpu/drm/amd/display/dc/dc_link.h  |   7 +
>  drivers/gpu/drm/amd/display/dc/dc_stream.h|  13 +
>  5 files changed, 333 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc.c
> index 8be04be19124..935a50d6e933 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
> @@ -2354,6 +2354,11 @@ static enum surface_update_type 
> check_update_surfaces_for_stream(
> if (stream_update->dsc_config)
> su_flags->bits.dsc_changed = 1;
>
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   if (stream_update->mst_bw_update)
> +   su_flags->bits.mst_bw = 1;
> +#endif
> +
> if (su_flags->raw != 0)
> overall_type = UPDATE_TYPE_FULL;
>
> @@ -2731,6 +2736,15 @@ static void commit_planes_do_stream_update(struct dc 
> *dc,
> if (stream_update->dsc_config)
> dp_update_dsc_config(pipe_ctx);
>
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   if (stream_update->mst_bw_update) {
> +   if (stream_update->mst_bw_update->is_increase)
> +   
> dc_link_increase_mst_payload(pipe_ctx, 
> stream_update->mst_bw_update->mst_stream_bw);
> +   else
> +   dc_link_reduce_mst_payload(pipe_ctx, 
> stream_update->mst_bw_update->mst_stream_bw);
> +   }
> +#endif
> +
> if (stream_update->pending_test_pattern) {
> dc_link_dp_set_test_pattern(stream->link,
> stream->test_pattern.type,
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> index e5d6cbd7ea78..b23972b6a27c 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
> @@ -3232,6 +3232,9 @@ static struct fixed31_32 get_pbn_from_timing(struct 
> pipe_ctx *pipe_ctx)
>  static void update_mst_stream_alloc_table(
> struct dc_link *link,
> struct stream_encoder *stream_enc,
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   struct hpo_dp_stream_encoder *hpo_dp_stream_enc, // TODO: Rename 
> stream_enc to dio_stream_enc?
> +#endif
> const struct dp_mst_stream_allocation_table *proposed_table)
>  {
> struct link_mst_stream_allocation work_table[MAX_CONTROLLER_NUM] = { 
> 0 };
> @@ -3267,6 +3270,9 @@ static void update_mst_stream_alloc_table(
> work_table[i].slot_count =
> 
> proposed_table->stream_allocations[i].slot_count;
> work_table[i].stream_enc = stream_enc;
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   work_table[i].hpo_dp_stream_enc = hpo_dp_stream_enc;
> +#endif
> }
> }
>
> @@ -3389,6 +3395,10 @@ enum dc_status dc_link_allocate_mst_payload(struct 
> pipe_ctx *pipe_ctx)
> struct dc_link *link = stream->link;
> struct link_encoder *link_encoder = NULL;
> struct stream_encoder *stream_encoder = 
> pipe_ctx->stream_res.stream_enc;
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   struct hpo_dp_link_encoder *hpo_dp_link_encoder = 
> link->hpo_dp_link_enc;
> +   struct hpo_dp_stream_encoder *hpo_dp_stream_encoder = 
> pipe_ctx->stream_res.hpo_dp_stream_enc;
> +#endif
> struct dp_mst_stream_allocation_table proposed_table = {0};
> struct fixed31_32 avg_time_slots_per_mtp;
> struct fixed31_32 pbn;
> @@ -3416,7 +3426,14 @@ enum dc_status dc_link_allocate_mst_payload(struct 
> pipe_ctx *pipe_ctx)
> &proposed_table,
> true)) {
> update_mst_stream_alloc_table(
> +#if defined(CONFIG_DRM_AMD_DC_DCN)
> +   link,
> +   pipe_ctx->stream_res.stream_enc,
> +   
> pipe_ctx->stream_res.hpo_dp_stream_enc,
> +   &proposed_table);
> +#else
> link, 
> pipe_ctx->stream_res.stream_enc, &proposed_table);
> +#endif
> }
> else
> DC_LOG_WARNING("Failed to update"
> @@ -3430,6 +3447,20 @@ enum dc_status dc_link_allocate_mst_payload(struct 
> pipe_ctx *pipe_ctx)
> link->mst_stream_alloc_table.stream_count);
>
> for (i = 0; i < MAX_CONTROL

Re: [Intel-gfx] [PATCH 00/14] drm/hdcp: Pull HDCP auth/exchange/check into

2021-09-13 Thread Alex Deucher
On Mon, Sep 13, 2021 at 1:57 PM Sean Paul  wrote:
>
> From: Sean Paul 
>
> Hello,
> This patchset pulls the HDCP protocol auth/exchange/check logic out from
> i915 into a HDCP helper library which drivers can use to implement the
> proper protocol and UAPI interactions for achieving HDCP.
>
> Originally this was all stuffed into i915 since it was the only driver
> supporting HDCP. Over the last while I've been working on HDCP support
> in the msm driver and have identified the parts which can/should be
> shared between drivers and the parts which are hw-specific.
>
> We can generalize all of the sink interactions in the helper as well as
> state handling and link checks. This tends to be the trickiest part of
> adding HDCP support, since the property state and locking is a bit of a
> nightmare. The driver need only implement the more mechanical display
> controller register accesses.
>
> The first third of the patchset is establishing the helpers, the next
> third is converting the i915 driver to use the helpers, and the last
> third is the msm driver implementation.
>
> I've left out HDCP 2.x support, since we still only have i915 as the
> reference implementation and I'm not super comfortable speculating on
> which parts are platform independent.

FWIW, amdgpu has support for both HDCP 1.x and 2.x

Alex
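
To make that split concrete, a rough sketch of how a driver's connector
.atomic_check could call into the proposed helper from patch 1; the
drm_hdcp_atomic_check() signature is an assumption here (only the name appears
in the series list below), while the state lookups are the standard atomic
helpers:

#include <drm/drm_atomic.h>
#include <drm/drm_connector.h>
#include <drm/drm_hdcp.h>

static int foo_connector_atomic_check(struct drm_connector *connector,
				      struct drm_atomic_state *state)
{
	struct drm_connector_state *old_state =
		drm_atomic_get_old_connector_state(state, connector);
	struct drm_connector_state *new_state =
		drm_atomic_get_new_connector_state(state, connector);

	/* Assumed signature: let the helper react to content protection
	 * property transitions; the driver keeps only the hardware bits. */
	drm_hdcp_atomic_check(connector, old_state, new_state);

	return 0;
}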

>
> Please take a look,
>
> Sean
>
> Sean Paul (14):
>   drm/hdcp: Add drm_hdcp_atomic_check()
>   drm/hdcp: Avoid changing crtc state in hdcp atomic check
>   drm/hdcp: Update property value on content type and user changes
>   drm/hdcp: Expand HDCP helper library for enable/disable/check
>   drm/i915/hdcp: Consolidate HDCP setup/state cache
>   drm/i915/hdcp: Retain hdcp_capable return codes
>   drm/i915/hdcp: Use HDCP helpers for i915
>   drm/msm/dpu_kms: Re-order dpu includes
>   drm/msm/dpu: Remove useless checks in dpu_encoder
>   drm/msm/dpu: Remove encoder->enable() hack
>   drm/msm/dp: Re-order dp_audio_put in deinit_sub_modules
>   dt-bindings: msm/dp: Add bindings for HDCP registers
>   drm/msm: Add hdcp register ranges to sc7180 device tree
>   drm/msm: Implement HDCP 1.x using the new drm HDCP helpers
>
>  .../bindings/display/msm/dp-controller.yaml   |   11 +-
>  drivers/gpu/drm/drm_hdcp.c| 1198 -
>  drivers/gpu/drm/i915/display/intel_atomic.c   |7 +-
>  drivers/gpu/drm/i915/display/intel_ddi.c  |   29 +-
>  .../drm/i915/display/intel_display_debugfs.c  |   11 +-
>  .../drm/i915/display/intel_display_types.h|   58 +-
>  drivers/gpu/drm/i915/display/intel_dp_hdcp.c  |  341 ++---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c   |   17 +-
>  drivers/gpu/drm/i915/display/intel_hdcp.c | 1011 +++---
>  drivers/gpu/drm/i915/display/intel_hdcp.h |   35 +-
>  drivers/gpu/drm/i915/display/intel_hdmi.c |  256 ++--
>  drivers/gpu/drm/msm/Makefile  |1 +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c   |   17 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |   30 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |2 -
>  drivers/gpu/drm/msm/disp/dpu1/dpu_trace.h |4 -
>  drivers/gpu/drm/msm/dp/dp_debug.c |   49 +-
>  drivers/gpu/drm/msm/dp/dp_debug.h |6 +-
>  drivers/gpu/drm/msm/dp/dp_display.c   |   47 +-
>  drivers/gpu/drm/msm/dp/dp_display.h   |5 +
>  drivers/gpu/drm/msm/dp/dp_drm.c   |   68 +-
>  drivers/gpu/drm/msm/dp/dp_drm.h   |5 +
>  drivers/gpu/drm/msm/dp/dp_hdcp.c  |  433 ++
>  drivers/gpu/drm/msm/dp/dp_hdcp.h  |   27 +
>  drivers/gpu/drm/msm/dp/dp_parser.c|   30 +-
>  drivers/gpu/drm/msm/dp/dp_parser.h|4 +
>  drivers/gpu/drm/msm/dp/dp_reg.h   |   44 +-
>  drivers/gpu/drm/msm/msm_atomic.c  |   15 +
>  include/drm/drm_hdcp.h|  194 +++
>  29 files changed, 2570 insertions(+), 1385 deletions(-)
>  create mode 100644 drivers/gpu/drm/msm/dp/dp_hdcp.c
>  create mode 100644 drivers/gpu/drm/msm/dp/dp_hdcp.h
>
> --
> Sean Paul, Software Engineer, Google / Chromium OS
>


Re: [Intel-gfx] [PATCH 1/2] Enable buddy memory manager support

2021-09-20 Thread Alex Deucher
On Mon, Sep 20, 2021 at 3:21 PM Arunpravin
 wrote:

Please prefix the patch subject with drm.  E.g.,
drm: Enable buddy memory manager support

Same for the second patch, but make it drm/amdgpu instead.

Alex

>
> Port Intel buddy system manager to drm root folder
> Add CPU mappable/non-mappable region support to the drm buddy manager
>
> Signed-off-by: Arunpravin 
> ---
>  drivers/gpu/drm/Makefile|   2 +-
>  drivers/gpu/drm/drm_buddy.c | 465 
>  include/drm/drm_buddy.h | 154 
>  3 files changed, 620 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/gpu/drm/drm_buddy.c
>  create mode 100644 include/drm/drm_buddy.h
>
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index a118692a6df7..fe1a2fc09675 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -18,7 +18,7 @@ drm-y   :=drm_aperture.o drm_auth.o drm_cache.o 
> \
> drm_dumb_buffers.o drm_mode_config.o drm_vblank.o \
> drm_syncobj.o drm_lease.o drm_writeback.o drm_client.o \
> drm_client_modeset.o drm_atomic_uapi.o drm_hdcp.o \
> -   drm_managed.o drm_vblank_work.o
> +   drm_managed.o drm_vblank_work.o drm_buddy.o
>
>  drm-$(CONFIG_DRM_LEGACY) += drm_agpsupport.o drm_bufs.o drm_context.o 
> drm_dma.o \
> drm_legacy_misc.o drm_lock.o drm_memory.o 
> drm_scatter.o \
> diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
> new file mode 100644
> index ..f07919a004b6
> --- /dev/null
> +++ b/drivers/gpu/drm/drm_buddy.c
> @@ -0,0 +1,465 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2021 Intel Corporation
> + */
> +
> +#include 
> +#include 
> +
> +static struct drm_buddy_block *drm_block_alloc(struct drm_buddy_mm *mm,
> +   struct drm_buddy_block *parent, unsigned int order,
> +   u64 offset)
> +{
> +   struct drm_buddy_block *block;
> +
> +   BUG_ON(order > DRM_BUDDY_MAX_ORDER);
> +
> +   block = kmem_cache_zalloc(mm->slab_blocks, GFP_KERNEL);
> +   if (!block)
> +   return NULL;
> +
> +   block->header = offset;
> +   block->header |= order;
> +   block->parent = parent;
> +   block->start = offset >> PAGE_SHIFT;
> +   block->size = (mm->chunk_size << order) >> PAGE_SHIFT;
> +
> +   BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
> +   return block;
> +}
> +
> +static void drm_block_free(struct drm_buddy_mm *mm, struct drm_buddy_block 
> *block)
> +{
> +   kmem_cache_free(mm->slab_blocks, block);
> +}
> +
> +static void add_ordered(struct drm_buddy_mm *mm, struct drm_buddy_block 
> *block)
> +{
> +   struct drm_buddy_block *node;
> +
> +   if (list_empty(&mm->free_list[drm_buddy_block_order(block)])) {
> +   list_add(&block->link,
> +   &mm->free_list[drm_buddy_block_order(block)]);
> +   return;
> +   }
> +
> +   list_for_each_entry(node, 
> &mm->free_list[drm_buddy_block_order(block)], link)
> +   if (block->start > node->start)
> +   break;
> +
> +   __list_add(&block->link, node->link.prev, &node->link);
> +}
> +
> +static void mark_allocated(struct drm_buddy_block *block)
> +{
> +   block->header &= ~DRM_BUDDY_HEADER_STATE;
> +   block->header |= DRM_BUDDY_ALLOCATED;
> +
> +   list_del(&block->link);
> +}
> +
> +static void mark_free(struct drm_buddy_mm *mm,
> + struct drm_buddy_block *block)
> +{
> +   block->header &= ~DRM_BUDDY_HEADER_STATE;
> +   block->header |= DRM_BUDDY_FREE;
> +
> +   add_ordered(mm, block);
> +}
> +
> +static void mark_split(struct drm_buddy_block *block)
> +{
> +   block->header &= ~DRM_BUDDY_HEADER_STATE;
> +   block->header |= DRM_BUDDY_SPLIT;
> +
> +   list_del(&block->link);
> +}
> +
> +int drm_buddy_init(struct drm_buddy_mm *mm, u64 size, u64 chunk_size)
> +{
> +   unsigned int i;
> +   u64 offset;
> +
> +   if (size < chunk_size)
> +   return -EINVAL;
> +
> +   if (chunk_size < PAGE_SIZE)
> +   return -EINVAL;
> +
> +   if (!is_power_of_2(chunk_size))
> +   return -EINVAL;
> +
> +   size = round_down(size, chunk_size);
> +
> +   mm->size = size;
> +   mm->chunk_size = chunk_size;
> +   mm->max_order = ilog2(size) - ilog2(chunk_size);
> +
> +   BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
> +
> +   mm->slab_blocks = KMEM_CACHE(drm_buddy_block, SLAB_HWCACHE_ALIGN);
> +
> +   if (!mm->slab_blocks)
> +   return -ENOMEM;
> +
> +   mm->free_list = kmalloc_array(mm->max_order + 1,
> + sizeof(struct list_head),
> + GFP_KERNEL);
> +   if (!mm->free_list)
> +   goto out_destroy_slab;
> +
> +   for (i = 0; i <= mm->max_order; 
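
As a quick usage sketch of the interface above (assuming 4 KiB pages and only
the drm_buddy_init() prototype and max_order formula shown in the patch):

/* Manage 4 GiB of VRAM in 4 KiB minimum chunks. Per the formula above,
 * max_order = ilog2(4 GiB) - ilog2(4 KiB) = 32 - 12 = 20, so the largest
 * free block covers the whole 4 GiB (4 KiB << 20). */
static int foo_vram_mgr_init(struct drm_buddy_mm *mm)
{
	return drm_buddy_init(mm, 4ULL << 30, 4096);
}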

Re: [Intel-gfx] [PATCH 2/2] Add drm buddy manager support to amdgpu driver

2021-09-20 Thread Alex Deucher
On Mon, Sep 20, 2021 at 3:21 PM Arunpravin
 wrote:
>
> Replace drm_mm with drm buddy manager for
> VRAM memory management

Would be good to document why we are doing this and what advantages it
brings over the old drm_mm code.

Alex


>
> Signed-off-by: Arunpravin 
> ---
>  .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h|  78 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h   |   3 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  | 216 ++
>  3 files changed, 189 insertions(+), 108 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> index acfa207cf970..ba24052e9062 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> @@ -30,12 +30,25 @@
>  #include 
>  #include 
>
> +struct amdgpu_vram_mgr_node {
> +   struct ttm_range_mgr_node tnode;
> +   struct list_head blocks;
> +};
> +
> +static inline struct amdgpu_vram_mgr_node *
> +to_amdgpu_vram_mgr_node(struct ttm_resource *res)
> +{
> +   return container_of(container_of(res, struct ttm_range_mgr_node, 
> base),
> +   struct amdgpu_vram_mgr_node, tnode);
> +}
> +
>  /* state back for walking over vram_mgr and gtt_mgr allocations */
>  struct amdgpu_res_cursor {
> uint64_tstart;
> uint64_tsize;
> uint64_tremaining;
> -   struct drm_mm_node  *node;
> +   void*node;
> +   uint32_tmem_type;
>  };
>
>  /**
> @@ -52,8 +65,6 @@ static inline void amdgpu_res_first(struct ttm_resource 
> *res,
> uint64_t start, uint64_t size,
> struct amdgpu_res_cursor *cur)
>  {
> -   struct drm_mm_node *node;
> -
> if (!res || res->mem_type == TTM_PL_SYSTEM) {
> cur->start = start;
> cur->size = size;
> @@ -65,14 +76,39 @@ static inline void amdgpu_res_first(struct ttm_resource 
> *res,
>
> BUG_ON(start + size > res->num_pages << PAGE_SHIFT);
>
> -   node = to_ttm_range_mgr_node(res)->mm_nodes;
> -   while (start >= node->size << PAGE_SHIFT)
> -   start -= node++->size << PAGE_SHIFT;
> +   cur->mem_type = res->mem_type;
> +
> +   if (cur->mem_type == TTM_PL_VRAM) {
> +   struct drm_buddy_block *block;
> +   struct list_head *head, *next;
> +
> +   head = &to_amdgpu_vram_mgr_node(res)->blocks;
> +
> +   block = list_first_entry_or_null(head, struct 
> drm_buddy_block, link);
> +   while (start >= block->size << PAGE_SHIFT) {
> +   start -= block->size << PAGE_SHIFT;
> +
> +   next = block->link.next;
> +   if (next != head)
> +   block = list_entry(next, struct 
> drm_buddy_block, link);
> +   }
>
> -   cur->start = (node->start << PAGE_SHIFT) + start;
> -   cur->size = min((node->size << PAGE_SHIFT) - start, size);
> -   cur->remaining = size;
> -   cur->node = node;
> +   cur->start = (block->start << PAGE_SHIFT) + start;
> +   cur->size = min((block->size << PAGE_SHIFT) - start, size);
> +   cur->remaining = size;
> +   cur->node = block;
> +   } else if (cur->mem_type == TTM_PL_TT) {
> +   struct drm_mm_node *node;
> +
> +   node = to_ttm_range_mgr_node(res)->mm_nodes;
> +   while (start >= node->size << PAGE_SHIFT)
> +   start -= node++->size << PAGE_SHIFT;
> +
> +   cur->start = (node->start << PAGE_SHIFT) + start;
> +   cur->size = min((node->size << PAGE_SHIFT) - start, size);
> +   cur->remaining = size;
> +   cur->node = node;
> +   }
>  }
>
>  /**
> @@ -85,8 +121,6 @@ static inline void amdgpu_res_first(struct ttm_resource 
> *res,
>   */
>  static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t 
> size)
>  {
> -   struct drm_mm_node *node = cur->node;
> -
> BUG_ON(size > cur->remaining);
>
> cur->remaining -= size;
> @@ -99,9 +133,23 @@ static inline void amdgpu_res_next(struct 
> amdgpu_res_cursor *cur, uint64_t size)
> return;
> }
>
> -   cur->node = ++node;
> -   cur->start = node->start << PAGE_SHIFT;
> -   cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
> +   if (cur->mem_type == TTM_PL_VRAM) {
> +   struct drm_buddy_block *block = cur->node;
> +   struct list_head *next;
> +
> +   next = block->link.next;
> +   block = list_entry(next, struct drm_buddy_block, link);
> +
> +   cur->node = block;
> +   cur->start = block->start << PAGE_SHIFT;
> +   cur->size = min(block->size << PAGE_SHIFT, cur->remain
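
For reference, a minimal sketch of how the cursor is meant to be consumed,
based only on the helpers and fields shown above (the walk is the same whether
the spans come from buddy blocks or drm_mm nodes):

static void foo_walk_resource(struct ttm_resource *res, u64 offset, u64 length)
{
	struct amdgpu_res_cursor cursor;

	amdgpu_res_first(res, offset, length, &cursor);
	while (cursor.remaining) {
		/* cursor.start is the physical offset of the current
		 * contiguous span, cursor.size its length in bytes. */
		amdgpu_res_next(&cursor, cursor.size);
	}
}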

Re: [Intel-gfx] [PATCH v3 03/13] drm/dp: add LTTPR DP 2.0 DPCD addresses

2021-09-22 Thread Alex Deucher
+ Harry, Leo

Can you guys get someone to clean this up?

Alex

On Wed, Sep 22, 2021 at 7:10 AM Jani Nikula  wrote:
>
> On Tue, 21 Sep 2021, Nathan Chancellor  wrote:
> > On Thu, Sep 09, 2021 at 03:51:55PM +0300, Jani Nikula wrote:
> >> DP 2.0 brings some new DPCD addresses for PHY repeaters.
> >>
> >> Cc: dri-de...@lists.freedesktop.org
> >> Reviewed-by: Manasi Navare 
> >> Signed-off-by: Jani Nikula 
> >> ---
> >>  include/drm/drm_dp_helper.h | 4 
> >>  1 file changed, 4 insertions(+)
> >>
> >> diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
> >> index 1d5b3dbb6e56..f3a61341011d 100644
> >> --- a/include/drm/drm_dp_helper.h
> >> +++ b/include/drm/drm_dp_helper.h
> >> @@ -1319,6 +1319,10 @@ struct drm_panel;
> >>  #define DP_MAX_LANE_COUNT_PHY_REPEATER  0xf0004 
> >> /* 1.4a */
> >>  #define DP_Repeater_FEC_CAPABILITY  0xf0004 /* 1.4 */
> >>  #define DP_PHY_REPEATER_EXTENDED_WAIT_TIMEOUT   0xf0005 
> >> /* 1.4a */
> >> +#define DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER0xf0006 /* 2.0 */
> >> +# define DP_PHY_REPEATER_128B132B_SUPPORTED (1 << 0)
> >> +/* See DP_128B132B_SUPPORTED_LINK_RATES for values */
> >> +#define DP_PHY_REPEATER_128B132B_RATES  0xf0007 
> >> /* 2.0 */
> >>
> >>  enum drm_dp_phy {
> >>  DP_PHY_DPRX,
> >> --
> >> 2.30.2
> >>
> >>
> >
> > This patch causes a build failure in -next when combined with the AMD
> > tree:
> >
> > In file included from drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c:33:
> > In file included from ./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu.h:70:
> > In file included from 
> > ./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu_mode.h:36:
> > ./include/drm/drm_dp_helper.h:1322:9: error: 
> > 'DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER' macro redefined 
> > [-Werror,-Wmacro-redefined]
> > #define DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER0xf0006 /* 2.0 
> > */
> > ^
> > ./drivers/gpu/drm/amd/amdgpu/../display/dc/dc_dp_types.h:881:9: note: 
> > previous definition is here
> > #define DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER0xF0006
> > ^
> > 1 error generated.
> >
> > Perhaps something like this should be applied during the merge of the
> > second tree or maybe this patch should be in a branch that could be
> > shared between the Intel and AMD trees so that this diff could be
> > applied to the AMD tree directly? Not sure what the standard procedure
> > for this is.
>
> What's in the drm-intel-next branch is changing DRM DP helpers in
> include/drm/drm_dp_helper.h with acks from a drm-misc maintainer. That's
> where this stuff is supposed to land, not in a driver specific file, and
> especially not if added with just a DP_ prefix.
>
>
> BR,
> Jani.
>
> >
> > Cheers,
> > Nathan
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
> > b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > index 234dfbea926a..279863b5c650 100644
> > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > @@ -4590,7 +4590,7 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
> >   
> > DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
> >
> >   link->dpcd_caps.lttpr_caps.supported_128b_132b_rates.raw =
> > - 
> > lttpr_dpcd_data[DP_PHY_REPEATER_128b_132b_RATES -
> > + 
> > lttpr_dpcd_data[DP_PHY_REPEATER_128B132B_RATES -
> >   
> > DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
> >  #endif
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h 
> > b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
> > index a5e798b5da79..8caf9af5ffa2 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
> > +++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
> > @@ -878,8 +878,6 @@ struct psr_caps {
> >  # define DP_DSC_DECODER_COUNT_MASK   (0b111 << 5)
> >  # define DP_DSC_DECODER_COUNT_SHIFT  5
> >  #define DP_MAIN_LINK_CHANNEL_CODING_SET  0x108
> > -#define DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER 0xF0006
> > -#define DP_PHY_REPEATER_128b_132b_RATES  0xF0007
> >  #define DP_128b_132b_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1  0xF0022
> >  #define DP_INTRA_HOP_AUX_REPLY_INDICATION(1 << 3)
> >  /* TODO - Use DRM header to replace above once available */
>
> --
> Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [RFC 0/8] Per client GPU stats

2021-07-23 Thread Alex Deucher
On Fri, Jul 23, 2021 at 9:51 AM Tvrtko Ursulin
 wrote:
>
>
> On 23/07/2021 12:23, Christian König wrote:
> > Am 23.07.21 um 13:21 schrieb Tvrtko Ursulin:
> >>
> >> On 15/07/2021 10:18, Tvrtko Ursulin wrote:
> >>> From: Tvrtko Ursulin 
> >>>
> >>> Same old work but now rebased and series ending with some DRM docs
> >>> proposing
> >>> the common specification which should enable nice common userspace
> >>> tools to be
> >>> written.
> >>>
> >>> For the moment I only have intel_gpu_top converted to use this and
> >>> that seems to
> >>> work okay.
> >>>
> >>> v2:
> >>>   * Added prototype of possible amdgpu changes and spec updates to
> >>> align with the
> >>> common spec.
> >>
> >> Not much interest for the common specification?
> >
> > Well I would rather say not much opposition :)
>
> Hah, thanks, that's good to hear!
>
> > Offhand everything you do in this patch set sounds absolutely sane to
> > me, just don't have any time to review it in detail.
>
> That's fine - could you maybe suggest who on the AMD side could have a
> look at the relevant patches?

Adding David and Roy who did the implementation for the AMD side.  Can
you take a look at these patches when you get a chance?

Thanks,

Alex


>
> Regards,
>
> Tvrtko
>
> >> For reference I've just posted the intel-gpu-top adaptation required
> >> to parse it here:
> >> https://patchwork.freedesktop.org/patch/446041/?series=90464&rev=2
> >>
> >>
> >> Note that this is not attempting to be a vendor agnostic tool but is
> >> adding per client data to existing i915 tool which uses PMU counters
> >> for global stats.
> >>
> >> intel-gpu-top: Intel Skylake (Gen9) @ /dev/dri/card0 -  335/ 339 MHz;
> >> 10% RC6;  1.24/ 4.18 W;  527 irqs/s
> >>
> >>   IMC reads: 3297 MiB/s
> >>  IMC writes: 2767 MiB/s
> >>
> >>  ENGINES BUSY MI_SEMA MI_WAIT
> >>Render/3D   78.74%
> >> |██▏
> >> |  0%  0%
> >>  Blitter0.00% | |  0%  0%
> >>Video0.00% | |  0%  0%
> >> VideoEnhance0.00% | |  0%  0%
> >>
> >>PID  NAME  Render/3D
> >> Blitter  VideoVideoEnhance
> >>  10202 neverball |███▎ || ||
> >> ||  |
> >>   5665  Xorg |███▍ ||  ||
> >> ||  |
> >>   5679 xfce4-session | ||  ||
> >> ||  |
> >>   5772  ibus-ui-gtk3 | ||  ||
> >> ||  |
> >>   5775   ibus-extension- | ||  ||
> >> ||  |
> >>   5777  ibus-x11 | ||  ||
> >> ||  |
> >>   5823 xfwm4 | ||  ||
> >> ||  |
> >>
> >>
> >> Regards,
> >>
> >> Tvrtko
> >>
> >>> Tvrtko Ursulin (8):
> >>>drm/i915: Explicitly track DRM clients
> >>>drm/i915: Make GEM contexts track DRM clients
> >>>drm/i915: Track runtime spent in closed and unreachable GEM contexts
> >>>drm/i915: Track all user contexts per client
> >>>drm/i915: Track context current active time
> >>>drm: Document fdinfo format specification
> >>>drm/i915: Expose client engine utilisation via fdinfo
> >>>drm/amdgpu: Convert to common fdinfo format
> >>>
> >>>   Documentation/gpu/amdgpu.rst  |  26 
> >>>   Documentation/gpu/drm-usage-stats.rst | 108 +
> >>>   Documentation/gpu/i915.rst|  27 
> >>>   Documentation/gpu/index.rst   |   1 +
> >>>   drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c|  18 ++-
> >>>   drivers/gpu/drm/i915/Makefile |   5 +-
> >>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  42 -
> >>>   .../gpu/drm/i915/gem/i915_gem_context_types.h |   6 +
> >>>   drivers/gpu/drm/i915/gt/intel_context.c   |  27 +++-
> >>>   drivers/gpu/drm/i915/gt/intel_context.h   |  15 +-
> >>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  24 ++-
> >>>   .../drm/i915/gt/intel_execlists_submission.c  |  23 ++-
> >>>   .../gpu/drm/i915/gt/intel_gt_clock_utils.c|   4 +
> >>>   drivers/gpu/drm/i915/gt/intel_lrc.c   |  27 ++--
> >>>   drivers/gpu/drm/i915/gt/intel_lrc.h   |  24 +++
> >>>   drivers/gpu/drm/i915/gt/selftest_lrc.c|  10 +-
> >>>   drivers/gpu/drm/i915/i915_drm_client.c| 143 ++
> >>>   drivers/gpu/drm/i915/

Re: [Intel-gfx] [RFC 8/8] drm/amdgpu: Convert to common fdinfo format

2021-07-23 Thread Alex Deucher
+ David, Roy

On Thu, Jul 15, 2021 at 5:18 AM Tvrtko Ursulin
 wrote:
>
> From: Tvrtko Ursulin 
>
> Convert fdinfo format to one documented in drm-usage-stats.rst.
>
> Opens:
>  * Does it work for AMD?
>  * What are the semantics of AMD engine utilisation reported in percents?
>Can it align with what i915 does or needs to document the alternative
>in the specification document?
>
> Signed-off-by: Tvrtko Ursulin 
> Cc: David M Nieto 
> Cc: Christian König 
> Cc: Daniel Vetter 
> ---
>  Documentation/gpu/amdgpu.rst   | 26 ++
>  Documentation/gpu/drm-usage-stats.rst  |  7 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c | 18 ++-
>  3 files changed, 45 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/gpu/amdgpu.rst b/Documentation/gpu/amdgpu.rst
> index 364680cdad2e..b9b79c810f28 100644
> --- a/Documentation/gpu/amdgpu.rst
> +++ b/Documentation/gpu/amdgpu.rst
> @@ -322,3 +322,29 @@ smartshift_bias
>
>  .. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
> :doc: smartshift_bias
> +
> +.. _amdgpu-usage-stats:
> +
> +amdgpu DRM client usage stats implementation
> +
> +
> +The amdgpu driver implements the DRM client usage stats specification as
> +documented in :ref:`drm-client-usage-stats`.
> +
> +Example of the output showing the implemented key value pairs and entirety of
> +the currently possible format options:
> +
> +::
> +
> +  pos:0
> +  flags:  012
> +  mnt_id: 21
> +  drm-driver: amdgpu
> +  drm-pdev:   :00:02.0
> +  drm-client-id:  7
> +  drm-engine-... TODO
> +  drm-memory-... TODO
> +
> +Possible `drm-engine-` key names are: ``,... TODO.
> +
> +Possible `drm-memory-` key names are: ``,... TODO.
> diff --git a/Documentation/gpu/drm-usage-stats.rst 
> b/Documentation/gpu/drm-usage-stats.rst
> index b87505438aaa..eaaa361805c0 100644
> --- a/Documentation/gpu/drm-usage-stats.rst
> +++ b/Documentation/gpu/drm-usage-stats.rst
> @@ -69,7 +69,7 @@ scope of each device, in which case `drm-pdev` shall be 
> present as well.
>  Userspace should make sure to not double account any usage statistics by 
> using
>  the above described criteria in order to associate data to individual 
> clients.
>
> -- drm-engine-<str>: <uint> ns
> +- drm-engine-<str>: <uint> [ns|%]
>
>  GPUs usually contain multiple execution engines. Each shall be given a stable
>  and unique name (str), with possible values documented in the driver specific
> @@ -84,6 +84,9 @@ larger value within a reasonable period. Upon observing a 
> value lower than what
>  was previously read, userspace is expected to stay with that larger previous
>  value until a monotonic update is seen.
>
> +Where time unit is given as a percentage...[AMD folks to fill the semantics
> +and interpretation of that]...
> +
> -  drm-memory-<str>: <uint> [KiB|MiB]
>
>  Each possible memory type which can be used to store buffer objects by the
> @@ -101,3 +104,5 @@ Driver specific implementations
>  ===
>
>  :ref:`i915-usage-stats`
> +
> +:ref:`amdgpu-usage-stats`
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
> index d94c5419ec25..d6b011008fe9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
> @@ -76,11 +76,19 @@ void amdgpu_show_fdinfo(struct seq_file *m, struct file 
> *f)
> }
> amdgpu_vm_get_memory(&fpriv->vm, &vram_mem, >t_mem, &cpu_mem);
> amdgpu_bo_unreserve(fpriv->vm.root.bo);
> -   seq_printf(m, "pdev:\t%04x:%02x:%02x.%d\npasid:\t%u\n", domain, bus,
> +
> +   /*
> +* **
> +* For text output format description please see drm-usage-stats.rst!
> +* **
> +*/
> +
> +   seq_puts(m, "drm-driver: amdgpu\n");
> +   seq_printf(m, "drm-pdev:\t%04x:%02x:%02x.%d\npasid:\t%u\n", domain, 
> bus,
> dev, fn, fpriv->vm.pasid);
> -   seq_printf(m, "vram mem:\t%llu kB\n", vram_mem/1024UL);
> -   seq_printf(m, "gtt mem:\t%llu kB\n", gtt_mem/1024UL);
> -   seq_printf(m, "cpu mem:\t%llu kB\n", cpu_mem/1024UL);
> +   seq_printf(m, "drm-memory-vram:\t%llu KiB\n", vram_mem/1024UL);
> +   seq_printf(m, "drm-memory-gtt:\t%llu KiB\n", gtt_mem/1024UL);
> +   seq_printf(m, "drm-memory-cpu:\t%llu KiB\n", cpu_mem/1024UL);
> for (i = 0; i < AMDGPU_HW_IP_NUM; i++) {
> uint32_t count = amdgpu_ctx_num_entities[i];
> int idx = 0;
> @@ -96,7 +104,7 @@ void amdgpu_show_fdinfo(struct seq_file *m, struct file *f)
> perc = div64_u64(1 * total, min);
> frac = perc % 100;
>
> -   seq_printf(m, "%s%d:\t%d.%d%%\n",
> +   seq_printf(m, "drm-engine-%s%d:\t%d.%d %%\n",
>
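
As an aside, a small userspace sketch of how a tool would consume a
drm-engine-<name> value under the monotonicity rule quoted above (two fdinfo
samples of the same key, both in nanoseconds; parsing and error handling
omitted for brevity):

#include <stdint.h>

static double engine_busy_percent(uint64_t prev_ns, uint64_t cur_ns,
				  uint64_t wall_delta_ns)
{
	if (cur_ns < prev_ns)	/* non-monotonic read: keep the larger value */
		cur_ns = prev_ns;
	if (!wall_delta_ns)
		return 0.0;
	return 100.0 * (double)(cur_ns - prev_ns) / (double)wall_delta_ns;
}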

Re: [Intel-gfx] [PATCH v2] fbdev/efifb: Release PCI device's runtime PM ref during FB destroy

2021-08-10 Thread Alex Deucher
On Tue, Aug 10, 2021 at 4:36 AM Imre Deak  wrote:
>
> Hi Kai-Heng, Alex,
>
> could you add your ack if the fix looks ok and you're ok if I push it to
> the i915 tree?
>

Acked-by: Alex Deucher 

> Thanks,
> Imre
>
> On Mon, Aug 09, 2021 at 04:31:46PM +0300, Imre Deak wrote:
> > Atm the EFI FB platform driver gets a runtime PM reference for the
> > associated GFX PCI device during probing the EFI FB platform device and
> > releases it only when the platform device gets unbound.
> >
> > When fbcon switches to the FB provided by the PCI device's driver (for
> > instance i915/drmfb), the EFI FB will get only unregistered without the
> > EFI FB platform device getting unbound, keeping the runtime PM reference
> > acquired during the platform device probing. This reference will prevent
> > the PCI driver from runtime suspending the device.
> >
> > Fix this by releasing the RPM reference from the EFI FB's destroy hook,
> > called when the FB gets unregistered.
> >
> > While at it assert that pm_runtime_get_sync() didn't fail.
> >
> > v2:
> > - Move pm_runtime_get_sync() before register_framebuffer() to avoid its
> >   race wrt. efifb_destroy()->pm_runtime_put(). (Daniel)
> > - Assert that pm_runtime_get_sync() didn't fail.
> > - Clarify commit message wrt. platform/PCI device/driver and driver
> >   removal vs. device unbinding.
> >
> > Fixes: a6c0fd3d5a8b ("efifb: Ensure graphics device for efifb stays at PCI D0")
> > Cc: Kai-Heng Feng 
> > Cc: Daniel Vetter 
> > Reviewed-by: Daniel Vetter  (v1)
> > Signed-off-by: Imre Deak 
> > ---
> >  drivers/video/fbdev/efifb.c | 21 ++---
> >  1 file changed, 14 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
> > index 8ea8f079cde26..edca3703b9640 100644
> > --- a/drivers/video/fbdev/efifb.c
> > +++ b/drivers/video/fbdev/efifb.c
> > @@ -47,6 +47,8 @@ static bool use_bgrt = true;
> >  static bool request_mem_succeeded = false;
> >  static u64 mem_flags = EFI_MEMORY_WC | EFI_MEMORY_UC;
> >
> > +static struct pci_dev *efifb_pci_dev;/* dev with BAR covering the 
> > efifb */
> > +
> >  static struct fb_var_screeninfo efifb_defined = {
> >   .activate   = FB_ACTIVATE_NOW,
> >   .height = -1,
> > @@ -243,6 +245,9 @@ static inline void efifb_show_boot_graphics(struct 
> > fb_info *info) {}
> >
> >  static void efifb_destroy(struct fb_info *info)
> >  {
> > + if (efifb_pci_dev)
> > + pm_runtime_put(&efifb_pci_dev->dev);
> > +
> >   if (info->screen_base) {
> >   if (mem_flags & (EFI_MEMORY_UC | EFI_MEMORY_WC))
> >   iounmap(info->screen_base);
> > @@ -333,7 +338,6 @@ ATTRIBUTE_GROUPS(efifb);
> >
> >  static bool pci_dev_disabled;/* FB base matches BAR of a disabled 
> > device */
> >
> > -static struct pci_dev *efifb_pci_dev;/* dev with BAR covering the 
> > efifb */
> >  static struct resource *bar_resource;
> >  static u64 bar_offset;
> >
> > @@ -569,17 +573,22 @@ static int efifb_probe(struct platform_device *dev)
> >   pr_err("efifb: cannot allocate colormap\n");
> >   goto err_groups;
> >   }
> > +
> > + if (efifb_pci_dev)
> > + WARN_ON(pm_runtime_get_sync(&efifb_pci_dev->dev) < 0);
> > +
> >   err = register_framebuffer(info);
> >   if (err < 0) {
> >   pr_err("efifb: cannot register framebuffer\n");
> > - goto err_fb_dealoc;
> > + goto err_put_rpm_ref;
> >   }
> >   fb_info(info, "%s frame buffer device\n", info->fix.id);
> > - if (efifb_pci_dev)
> > - pm_runtime_get_sync(&efifb_pci_dev->dev);
> >   return 0;
> >
> > -err_fb_dealoc:
> > +err_put_rpm_ref:
> > + if (efifb_pci_dev)
> > + pm_runtime_put(&efifb_pci_dev->dev);
> > +
> >   fb_dealloc_cmap(&info->cmap);
> >  err_groups:
> >   sysfs_remove_groups(&dev->dev.kobj, efifb_groups);
> > @@ -603,8 +612,6 @@ static int efifb_remove(struct platform_device *pdev)
> >   unregister_framebuffer(info);
> >   sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
> >   framebuffer_release(info);
> > - if (efifb_pci_dev)
> > - pm_runtime_put(&efifb_pci_dev->dev);
> >
> >   return 0;
> >  }
> > --
> > 2.27.0
> >


Re: [Intel-gfx] New uAPI for color management proposal and feedback request

2021-05-12 Thread Alex Deucher
On Wed, May 12, 2021 at 9:04 AM Ville Syrjälä
 wrote:
>
> On Wed, May 12, 2021 at 02:06:56PM +0200, Werner Sembach wrote:
> > Hello,
> >
> > In addition to the existing "max bpc", and "Broadcast RGB/output_csc" drm 
> > properties I propose 4 new properties:
> > "preferred pixel encoding", "active color depth", "active color range", and 
> > "active pixel encoding"
> >
> >
> > Motivation:
> >
> > Current monitors have a variety of pixel encodings available: RGB, YCbCr
> > 4:4:4, YCbCr 4:2:2, YCbCr 4:2:0.
> >
> > In addition they might be full or limited RGB range and the monitors accept 
> > different bit depths.
> >
> > Currently the kernel drivers for AMD and Intel GPUs configure the color
> > settings automatically with little to no influence from the user. However
> > there are several real world scenarios
> > where the user might disagree with the
> > default chosen by the drivers and wants to set his or her own preference.
> >
> > Some examples:
> >
> > 1. While RGB and YCbCr 4:4:4 in theory carry the same amount of color 
> > information, some screens might look better on one
> > than the other because of bad internal conversion. The driver currently 
> > however has a fixed default that is chosen if
> > available (RGB for Intel and YCbCr 4:4:4 for AMD). The only way to change 
> > this currently is by editing and overloading
> > the edid reported by the monitor to the kernel.
> >
> > 2. RGB and YCbCr 4:4:4 need a higher port clock than YCbCr 4:2:0. Some
> > hardware might report that it supports the higher
> > port clock, but because of bad shielding on the PC, the cable, or the 
> > monitor the screen cuts out every few seconds when
> > RGB or YCbCr 4:4:4 encoding is used, while YCbCr 4:2:0 might just work fine 
> > without changing hardware. The drivers
> > currently however always default to the "best available" option even if it 
> > might be broken.
> >
> > 3. Some screens that natively only support 8-bit color simulate 10-bit color
> > by rapidly switching between 2 adjacent
> > colors. They advertise themselves to the kernel as 10-bit monitors but the 
> > user might not like the "fake" 10-bit effect
> > and prefer running at the native 8-bit per color.
> >
> > 4. Some screens are falsely classified as full RGB range while they actually
> > use limited RGB range. This results in
> > washed out colors in dark and bright scenes. A user override can be helpful 
> > to manually fix this issue when it occurs.
> >
> > There already exist several requests, discussions, and patches regarding
> > this topic:
> >
> > - https://gitlab.freedesktop.org/drm/amd/-/issues/476
> >
> > - https://gitlab.freedesktop.org/drm/amd/-/issues/1548
> >
> > - https://lkml.org/lkml/2021/5/7/695
> >
> > - https://lkml.org/lkml/2021/5/11/416
> >
> >
> > Current State:
> >
> > I only know bits about the Intel i915 and AMD amdgpu driver. I don't know 
> > how other drivers handle color management
> >
> > - "max bpc", global setting applied by both i915 (only on dp i think?) and 
> > amdgpu. Default value is "8". For every
> > resolution + frequency combination the highest possible even number between 
> > 6 and max_bpc is chosen. If the range
> > doesn't contain a valid mode the resolution + frequency combination is 
> > discarded (but I guess that would be a very
> > special edge case, if existent at all, when 6 doesn't work but 10 would 
> > work). Intel HDMI code always checks 8, 12, and
> > 10 and does not check the max_bpc setting.
>
> i915 does limit things below max_bpc for both HDMI and DP.
>
> >
> > - "Broadcast RGB" for i915 and "output_csc" for the old radeon driver (not 
> > amdgpu), overwrites the kernel chosen color
> > range setting (full or limited). If I recall correctly Intel HDMI code 
> > defaults to full unless this property is set,
> > Intel dp code tries to probe the monitor to find out what to use. amdgpu 
> > has no corresponding setting (I don't know how
> > it's decided there).
>
> i915 has the same behaviour for HDMI and DP, as per the CTA-861/DP
> specs. Unfortunately as you already mentioned there are quite a few
> > monitors (DP monitors in particular) that don't implement the spec
> correctly. IIRC later DP specs even relaxed the wording to say
> that you can basically ignore the spec and do whatever you want.
> > Which I suppose is just admitting defeat and concluding that there
> is no way to get this right 100% of the time.
>
> >
> > - RGB pixel encoding can be forced by overloading a Monitors edid with one 
> > that tells the kernel that only RGB is
> > possible. That doesn't work for YCbCr 4:4:4 however because of the edid 
> > specification. Forcing YCbCr 4:2:0 would
> > theoretically also be possible this way. amdgpu has a debugfs switch 
> > "force_ycbcr_420" which makes the driver default to
> > YCbCr 4:2:0 on all monitors if possible.
> >
> >
> > Proposed Solution:
> >
> > 1. Add a new uAPI property "preferred pixel encoding", as a per port 
> > setting.
> >
> > - An amdg
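
For what a per-connector property along these lines could look like in code, a
sketch using the standard DRM property helpers; the property name and enum
values simply mirror the proposal and are not existing uAPI:

#include <drm/drm_connector.h>
#include <drm/drm_property.h>

static const struct drm_prop_enum_list pixel_encoding_names[] = {
	{ 0, "auto" },
	{ 1, "rgb" },
	{ 2, "ycbcr444" },
	{ 3, "ycbcr422" },
	{ 4, "ycbcr420" },
};

static void foo_attach_pixel_encoding_property(struct drm_connector *connector)
{
	struct drm_property *prop;

	prop = drm_property_create_enum(connector->dev, 0,
					"preferred pixel encoding",
					pixel_encoding_names,
					ARRAY_SIZE(pixel_encoding_names));
	if (prop)
		drm_object_attach_property(&connector->base, prop, 0 /* auto */);
}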

Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness

2021-05-13 Thread Alex Deucher
On Thu, May 13, 2021 at 7:00 AM Tvrtko Ursulin
 wrote:
>
> From: Tvrtko Ursulin 
>
> Resurrection of the previously merged per client engine busyness patches. In a
> nutshell it enables intel_gpu_top to be more top(1)-like and useful, showing not
> only physical GPU engine usage but a per process view as well.
>
> Example screen capture:
> 
> intel-gpu-top -  906/ 955 MHz;0% RC6;  5.30 Watts;  933 irqs/s
>
>   IMC reads: 4414 MiB/s
>  IMC writes: 3805 MiB/s
>
>   ENGINE  BUSY  MI_SEMA 
> MI_WAIT
>  Render/3D/0   93.46% |▋  |  0%  
> 0%
>Blitter/00.00% |   |  0%  
> 0%
>  Video/00.00% |   |  0%  
> 0%
>   VideoEnhance/00.00% |   |  0%  
> 0%
>
>   PIDNAME  Render/3D  BlitterVideo  VideoEnhance
>  2733   neverball |██▌ |||||||
>  2047Xorg |███▊|||||||
>  2737glxgears |█▍  |||||||
>  2128   xfwm4 ||||||||
>  2047Xorg ||||||||
> 
>
> Internally we track time spent on engines for each struct intel_context, both
> for current and past contexts belonging to each open DRM file.
>
> This can serve as a building block for several features from the wanted list:
> smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
> wanted by some customers, setrlimit(2) like controls, cgroups controller,
> dynamic SSEU tuning, ...
>
> To enable userspace access to the tracked data, we expose time spent on GPU 
> per
> client and per engine class in sysfs with a hierarchy like the below:
>
> # cd /sys/class/drm/card0/clients/
> # tree
> .
> ├── 7
> │   ├── busy
> │   │   ├── 0
> │   │   ├── 1
> │   │   ├── 2
> │   │   └── 3
> │   ├── name
> │   └── pid
> ├── 8
> │   ├── busy
> │   │   ├── 0
> │   │   ├── 1
> │   │   ├── 2
> │   │   └── 3
> │   ├── name
> │   └── pid
> └── 9
> ├── busy
> │   ├── 0
> │   ├── 1
> │   ├── 2
> │   └── 3
> ├── name
> └── pid
>
> Files in 'busy' directories are numbered using the engine class ABI values and
> they contain accumulated nanoseconds each client spent on engines of a
> respective class.

We did something similar in amdgpu using the gpu scheduler.  We then
expose the data via fdinfo.  See
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=1774baa64f9395fa884ea9ed494bcb043f3b83f5
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=874442541133f78c78b6880b8cc495bab5c61704

Alex


>
> Tvrtko Ursulin (7):
>   drm/i915: Expose list of clients in sysfs
>   drm/i915: Update client name on context create
>   drm/i915: Make GEM contexts track DRM clients
>   drm/i915: Track runtime spent in closed and unreachable GEM contexts
>   drm/i915: Track all user contexts per client
>   drm/i915: Track context current active time
>   drm/i915: Expose per-engine client busyness
>
>  drivers/gpu/drm/i915/Makefile |   5 +-
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   |  61 ++-
>  .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
>  drivers/gpu/drm/i915/gt/intel_context.c   |  27 +-
>  drivers/gpu/drm/i915/gt/intel_context.h   |  15 +-
>  drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
>  .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
>  .../gpu/drm/i915/gt/intel_gt_clock_utils.c|   4 +
>  drivers/gpu/drm/i915/gt/intel_lrc.c   |  27 +-
>  drivers/gpu/drm/i915/gt/intel_lrc.h   |  24 ++
>  drivers/gpu/drm/i915/gt/selftest_lrc.c|  10 +-
>  drivers/gpu/drm/i915/i915_drm_client.c| 365 ++
>  drivers/gpu/drm/i915/i915_drm_client.h| 123 ++
>  drivers/gpu/drm/i915/i915_drv.c   |   6 +
>  drivers/gpu/drm/i915/i915_drv.h   |   5 +
>  drivers/gpu/drm/i915/i915_gem.c   |  21 +-
>  drivers/gpu/drm/i915/i915_gpu_error.c |  31 +-
>  drivers/gpu/drm/i915/i915_gpu_error.h |   2 +-
>  drivers/gpu/drm/i915/i915_sysfs.c |   8 +
>  19 files changed, 716 insertions(+), 81 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
>  create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
>
> --
> 2.30.2
>
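
As a rough sketch of how userspace would sample the proposed sysfs layout
(paths per the tree quoted above; each file is assumed to hold a single
decimal nanosecond counter, so sampling it twice gives busyness over the
interval):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t read_client_busy_ns(const char *client_id, int engine_class)
{
	char path[128];
	uint64_t ns = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/drm/card0/clients/%s/busy/%d",
		 client_id, engine_class);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%" SCNu64, &ns) != 1)
		ns = 0;
	fclose(f);
	return ns;
}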

Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness

2021-05-13 Thread Alex Deucher
+ David, Christian

On Thu, May 13, 2021 at 12:41 PM Tvrtko Ursulin
 wrote:
>
>
> Hi,
>
> On 13/05/2021 16:48, Alex Deucher wrote:
> > On Thu, May 13, 2021 at 7:00 AM Tvrtko Ursulin
> >  wrote:
> >>
> >> From: Tvrtko Ursulin 
> >>
> >> Resurrection of the previously merged per client engine busyness patches. In a
> >> nutshell it enables intel_gpu_top to be more top(1)-like and useful, showing
> >> not only physical GPU engine usage but a per process view as well.
> >>
> >> Example screen capture:
> >> 
> >> intel-gpu-top -  906/ 955 MHz;0% RC6;  5.30 Watts;  933 irqs/s
> >>
> >>IMC reads: 4414 MiB/s
> >>   IMC writes: 3805 MiB/s
> >>
> >>ENGINE  BUSY  MI_SEMA 
> >> MI_WAIT
> >>   Render/3D/0   93.46% |▋  |  0%   
> >>0%
> >> Blitter/00.00% |   |  0%   
> >>0%
> >>   Video/00.00% |   |  0%   
> >>0%
> >>VideoEnhance/00.00% |   |  0%   
> >>0%
> >>
> >>PIDNAME  Render/3D  BlitterVideo  
> >> VideoEnhance
> >>   2733   neverball |██▌ |||||| 
> >>|
> >>   2047Xorg |███▊|||||| 
> >>|
> >>   2737glxgears |█▍  |||||| 
> >>|
> >>   2128   xfwm4 ||||||| 
> >>|
> >>   2047Xorg ||||||| 
> >>|
> >> 
> >>
> >> Internally we track time spent on engines for each struct intel_context, 
> >> both
> >> for current and past contexts belonging to each open DRM file.
> >>
> >> This can serve as a building block for several features from the wanted 
> >> list:
> >> smarter scheduler decisions, getrusage(2)-like per-GEM-context 
> >> functionality
> >> wanted by some customers, setrlimit(2) like controls, cgroups controller,
> >> dynamic SSEU tuning, ...
> >>
> >> To enable userspace access to the tracked data, we expose time spent on 
> >> GPU per
> >> client and per engine class in sysfs with a hierarchy like the below:
> >>
> >>  # cd /sys/class/drm/card0/clients/
> >>  # tree
> >>  .
> >>  ├── 7
> >>  │   ├── busy
> >>  │   │   ├── 0
> >>  │   │   ├── 1
> >>  │   │   ├── 2
> >>  │   │   └── 3
> >>  │   ├── name
> >>  │   └── pid
> >>  ├── 8
> >>  │   ├── busy
> >>  │   │   ├── 0
> >>  │   │   ├── 1
> >>  │   │   ├── 2
> >>  │   │   └── 3
> >>  │   ├── name
> >>  │   └── pid
> >>  └── 9
> >>  ├── busy
> >>  │   ├── 0
> >>  │   ├── 1
> >>  │   ├── 2
> >>  │   └── 3
> >>  ├── name
> >>  └── pid
> >>
> >> Files in 'busy' directories are numbered using the engine class ABI values 
> >> and
> >> they contain accumulated nanoseconds each client spent on engines of a
> >> respective class.
> >
> > We did something similar in amdgpu using the gpu scheduler.  We then
> > expose the data via fdinfo.  See
> > https://cgit.freedesktop.org/drm/drm-misc/commit/?id=1774baa64f9395fa884ea9ed494bcb043f3b83f5
> > https://cgit.freedesktop.org/drm/drm-misc/commit/?id=874442541133f78c78b6880b8cc495bab5c61704
>
> Interesting!
>
> Is yours wall time or actual GPU time taking preemption and such into
> account? Do you have some userspace tools parsing this data and how to
> do you client discovery? Presumably there has to be a better way that
> going through all open file descriptors?

Wall time.  It uses the fences in the scheduler to calculate engine
time.  We have some python scripts to make it look pretty, but mainly
just r

Re: [Intel-gfx] [PATCH 1/3] gpu: drm: replace occurrences of invalid character

2021-05-19 Thread Alex Deucher
Pushed out to drm-misc-next.  Also fixed up Michel's name.

Alex

On Wed, May 19, 2021 at 11:56 AM Randy Dunlap  wrote:
>
> On 5/19/21 1:15 AM, Mauro Carvalho Chehab wrote:
> > There are some places at drm that ended up receiving a
> > REPLACEMENT CHARACTER U+fffd ('�'), probably because of
> > some bad charset conversion.
> >
> > Fix them by using what seems to be the proper
> > character.
> >
> > Signed-off-by: Mauro Carvalho Chehab 
>
> Acked-by: Randy Dunlap 
>
> Thanks.
>
> > ---
> >  drivers/gpu/drm/amd/include/atombios.h   | 10 +-
> >  drivers/gpu/drm/i915/gt/intel_gpu_commands.h |  2 +-
> >  drivers/gpu/drm/i915/i915_gpu_error.h|  2 +-
> >  drivers/gpu/drm/r128/r128_drv.h  |  2 +-
> >  4 files changed, 8 insertions(+), 8 deletions(-)
> >
>
> --
> ~Randy
>


Re: [Intel-gfx] [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-25 Thread Alex Deucher
On Fri, May 14, 2021 at 12:31 PM Jason Ekstrand  wrote:
>
> Pulling a few threads together...
>
> On Mon, May 10, 2021 at 1:39 PM Francisco Jerez  wrote:
> >
> > I agree with Martin on this.  Given that using GuC currently involves
> > making your open-source graphics stack rely on a closed-source
> > cryptographically-protected blob in order to submit commands to the GPU,
> > and given that it is /still/ possible to use the GPU without it, I'd
> > expect some strong material justification for making the switch (like,
> > it improves performance of test-case X and Y by Z%, or, we're truly
> > sorry but we cannot program your GPU anymore with a purely open-source
> > software stack).  Any argument based on the apparent direction of the
> > wind doesn't sound like a material engineering reason to me, and runs
> > the risk of being self-fulfilling if it leads us to do the worse thing
> > for our users just because we have the vague feeling that it is the
> > general trend, even though we may have had the means to obtain a better
> > compromise for them.
>
> I think it's important to distinguish between landing code to support
> GuC submission and requiring it in order to use the GPU.  We've got
> the execlist back-end and it's not going anywhere, at least not for
> older hardware, and it will likely keep working as long as execlists
> remain in the hardware.  What's being proposed here is a new back-end
> which, yes, depends on firmware and can be used for more features.
>
> I'm well aware of the slippery slope argument that's implicitly being
> used here even if no one is actually saying it:  If we land GuC
> support in i915 in any form then Intel HW engineers will say "See,
> Linux supports GuC now; we can rip out execlists" and we'll end up in
> the dystopia of closed-source firmware.  If upstream continues to push
> back on GuC in any form then they'll be forced to keep execlists.
> I'll freely admit that there is probably some truth to this.  However,
> I really doubt that it's going to work long-term.  If the HW
> architects are determined enough to rip it out, they will.

You want to stay on the same interfaces as Windows does, like it or
not.  The market is bigger and there is a lot more validation effort.
Even if support for the old way doesn't go away, it won't be as well
tested.  For AMD, we tried to stay on some of the older interfaces on
a number of products in the past and ran into lots of subtle issues,
especially around power management related things like clock and power
gating.  There are just too many handshakes and stuff required to make
all of that work smoothly.  It can be especially challenging when the
issues show up well after launch and the firmware and hardware teams
have already moved on to the next projects and have to page the older
projects back into their minds.

Alex


>
> If GuC is really inevitable, then it's in our best interests to land
> at least beta support earlier.  There are a lot of questions that
> people have brought up around back-ports, dealing with stable kernels,
> stability concerns, etc.  The best way to sort those out is to land
> the code and start dealing with the issues.  We can't front-load
> solving every possible issue or the code will never land.  But maybe
> that's people's actual objective?
>
>
> On Wed, May 12, 2021 at 1:26 AM Martin Peres  wrote:
> >
> > On 11/05/2021 19:39, Matthew Brost wrote:
> > > On Tue, May 11, 2021 at 08:26:59AM -0700, Bloomfield, Jon wrote:
> > >>> On 10/05/2021 19:33, Daniel Vetter wrote:
> >  On Mon, May 10, 2021 at 3:55 PM Martin Peres 
> > >>> wrote:
> > >>>
> > >>> However, if the GuC is actually helping i915, then why not open source
> > >>> it and drop all the issues related to its stability? Wouldn't it be the
> > >>> perfect solution, as it would allow dropping execlist support for newer
> > >>> HW, and it would eliminate the concerns about maintenance of stable
> > >>> releases of Linux?
>
> I would like to see that happen.  I know there was some chatter about
> it for a while and then the discussions got killed.  I'm not sure what
> happened, to be honest.  However, I don't think we can make any
> guarantees or assumptions there, I'm afraid. :-(
>
> > >> That the major version of the FW is high is not due to bugs - Bugs don't 
> > >> trigger major version bumps anyway.
> >
> > Of course, where did I say they would?
>
> I think the concern here is that old kernels will require old major
> GuC versions because interfaces won't be backwards-compatible and then
> those kernels won't get bug fixes.  That's a legitimate concern.
> Given the Linux usage model, I think it's fair to require either
> backwards-compatibility with GuC interfaces and validation of that
> backwards-compatibility or stable releases with bug fixes for a good
> long while.  I honestly can't say whether or not we've really scoped
> that.  Jon?
>
> > >> We have been using GuC as the sole mechanism for submission on Windows 
> > >> since Gen8, and

Re: [Intel-gfx] [PATCH] drm/amdgpu: switch from 'pci_' to 'dma_' API

2021-08-23 Thread Alex Deucher
Applied.  Thanks!

Alex

On Mon, Aug 23, 2021 at 2:16 AM Christian König
 wrote:
>
> Am 22.08.21 um 23:21 schrieb Christophe JAILLET:
> > The wrappers in include/linux/pci-dma-compat.h should go away.
> >
> > The patch has been generated with the coccinelle script below.
> >
> > It has been compile tested.
> >
> > @@
> > @@
> > -PCI_DMA_BIDIRECTIONAL
> > +DMA_BIDIRECTIONAL
> >
> > @@
> > @@
> > -PCI_DMA_TODEVICE
> > +DMA_TO_DEVICE
> >
> > @@
> > @@
> > -PCI_DMA_FROMDEVICE
> > +DMA_FROM_DEVICE
> >
> > @@
> > @@
> > -PCI_DMA_NONE
> > +DMA_NONE
> >
> > @@
> > expression e1, e2, e3;
> > @@
> > -pci_alloc_consistent(e1, e2, e3)
> > +dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
> >
> > @@
> > expression e1, e2, e3;
> > @@
> > -pci_zalloc_consistent(e1, e2, e3)
> > +dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_free_consistent(e1, e2, e3, e4)
> > +dma_free_coherent(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_map_single(e1, e2, e3, e4)
> > +dma_map_single(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_unmap_single(e1, e2, e3, e4)
> > +dma_unmap_single(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4, e5;
> > @@
> > -pci_map_page(e1, e2, e3, e4, e5)
> > +dma_map_page(&e1->dev, e2, e3, e4, e5)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_unmap_page(e1, e2, e3, e4)
> > +dma_unmap_page(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_map_sg(e1, e2, e3, e4)
> > +dma_map_sg(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_unmap_sg(e1, e2, e3, e4)
> > +dma_unmap_sg(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
> > +dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_dma_sync_single_for_device(e1, e2, e3, e4)
> > +dma_sync_single_for_device(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
> > +dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2, e3, e4;
> > @@
> > -pci_dma_sync_sg_for_device(e1, e2, e3, e4)
> > +dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
> >
> > @@
> > expression e1, e2;
> > @@
> > -pci_dma_mapping_error(e1, e2)
> > +dma_mapping_error(&e1->dev, e2)
> >
> > @@
> > expression e1, e2;
> > @@
> > -pci_set_dma_mask(e1, e2)
> > +dma_set_mask(&e1->dev, e2)
> >
> > @@
> > expression e1, e2;
> > @@
> > -pci_set_consistent_dma_mask(e1, e2)
> > +dma_set_coherent_mask(&e1->dev, e2)
> >
> > Signed-off-by: Christophe JAILLET 
>
> Reviewed-by: Christian König 
>
> > ---
> > If needed, see post from Christoph Hellwig on the kernel-janitors ML:
> > https://marc.info/?l=kernel-janitors&m=158745678307186&w=4
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 6 +++---
> >   1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> > index b36405170ff3..76efd5f8950f 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
> > @@ -76,7 +76,7 @@ static int amdgpu_gart_dummy_page_init(struct 
> > amdgpu_device *adev)
> >   if (adev->dummy_page_addr)
> >   return 0;
> >   adev->dummy_page_addr = dma_map_page(&adev->pdev->dev, dummy_page, 0,
> > -  PAGE_SIZE, 
> > PCI_DMA_BIDIRECTIONAL);
> > +  PAGE_SIZE, DMA_BIDIRECTIONAL);
> >   if (dma_mapping_error(&adev->pdev->dev, adev->dummy_page_addr)) {
> >   dev_err(&adev->pdev->dev, "Failed to DMA MAP the dummy 
> > page\n");
> >   adev->dummy_page_addr = 0;
> > @@ -96,8 +96,8 @@ void amdgpu_gart_dummy_page_fini(struct amdgpu_device 
> > *adev)
> >   {
> >   if (!adev->dummy_page_addr)
> >   return;
> > - pci_unmap_page(adev->pdev, adev->dummy_page_addr,
> > -PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
> > + dma_unmap_page(&adev->pdev->dev, adev->dummy_page_addr, PAGE_SIZE,
> > +DMA_BIDIRECTIONAL);
> >   adev->dummy_page_addr = 0;
> >   }
> >
>
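
As a concrete illustration of what the conversion does at a call site
(hypothetical driver code, not taken from this patch), a mapping written
against the old compat wrappers:

	dma_addr = pci_map_single(pdev, buf, size, PCI_DMA_TODEVICE);
	if (pci_dma_mapping_error(pdev, dma_addr))
		return -ENOMEM;

becomes, after the semantic patch is applied:

	dma_addr = dma_map_single(&pdev->dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(&pdev->dev, dma_addr))
		return -ENOMEM;

i.e. the struct pci_dev argument is replaced by its embedded struct
device, and the PCI_DMA_* direction flags by the generic DMA_* ones.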


Re: [Intel-gfx] [PATCH 2/2] drm/amdgpu: Disable PCIE_DPM on Intel RKL Platform

2021-08-25 Thread Alex Deucher
On Wed, Aug 25, 2021 at 10:22 AM Lazar, Lijo  wrote:
>
>
>
> On 8/25/2021 4:46 PM, Koba Ko wrote:
> > On Wed, Aug 25, 2021 at 6:24 PM Jani Nikula  
> > wrote:
> >>
> >> On Wed, 25 Aug 2021, Koba Ko  wrote:
> >>> On Wed, Aug 25, 2021 at 5:22 PM Jani Nikula  
> >>> wrote:
> 
>  On Wed, 25 Aug 2021, Koba Ko  wrote:
> > AMD polaris GPUs have an issue about audio noise on RKL platform,
> > they provide a commit to fix but for SMU7-based GPU still
> > need another module parameter,
> >
> > For avoiding the module parameter, switch PCI_DPM by determining
> > intel platform in amd drm driver.
> 
>  I'll just note that you could have a Tiger Lake PCH combined with a
>  number of platforms other than Rocket Lake, including not just the
>  obvious Tiger Lake but also Sky Lake, Kaby Lake, Coffee Lake, and Comet
>  Lake.
> 
>  Again, I don't know what the root cause or fix should be, the workaround
>  presented here impacts a much larger number of platforms than where
>  you're claiming the issue is.
> >>>
> >>> Hi Jani, thanks for your feedback.
> >>> Is there any way to identify the RKL PCH?
> >>> I trace the intel_pch.c and can't find the only pch id for RKL.
> >>>
> >>> INTEL_PCH_TGP_DEVICE_ID_TYPE is used by both TGL and RKL.
> >>>
> >>> so it seems that using IS_ROCKETLAKE() is the only way.
> >>
> >> I don't think there is a Rocket Lake PCH. But is the problem related to
> >> the PCH or not?
> >
> > I thought it's not, because the issue wouldn't be observed on the TGL 
> > platform.
> > I only tried RKL platform and it use
> > INTEL_PCH_TGP_DEVICE_ID_TYPE/INTEL_PCH_TGP2_DEVICE_ID_TYPE,
> > As per AMD guys, they said the issue is only triggered in RKL platform.
> >
> >>
> >> The GPU PCI IDs are in i915_pciids.h. See INTEL_RKL_IDS() for
> >> RKL. There's a lot of indirection, but that's what IS_ROCKETLAKE() boils
> >> down to. But again, I'm not sure if that's what you want or not.
> > Thanks for suggestions,
> >
> > I just want a way to check whether it's an RKL platform.
> > After tracing the kernel, I can check the CPU vendor (but that lacks the
> > model), check for the iGPU (but there are CPUs without an iGPU), or check
> > the PCH type (but one PCH seems to be paired with multiple CPUs).
> > For the iGPU check, as far as I currently understand, I have only found
> > RKL CPUs with an iGPU.
> > Is there an RKL CPU without an integrated GPU?
> >
>
> Just for RKL - you could do fetch the x86 info and check
>
> #ifdef CONFIG_X86_64
>  struct cpuinfo_x86 *c = &cpu_data(0);
> // Family/Model check, find the model
> (c->x86 == 6 && c->x86_model == INTEL_FAM6_ROCKETLAKE)
> #endif
>
> I think we don't use anything like this so far. So Alex should give a
> nod as well.

I think that makes sense.  For some background, the issue that was
observed with RKL is that PCIE gen switching has very high latency,
which can lead to audio problems during playback if PCIE DPM is
enabled.
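
For reference, a minimal sketch of the family/model check Lijo suggests
above might look like the following (the helper name here is made up and
this is untested; it only illustrates the idea, not the final patch):

#ifdef CONFIG_X86_64
#include <asm/processor.h>
#include <asm/intel-family.h>

/* True when the host CPU is Rocket Lake (family 6, RKL model). */
static bool smu7_host_is_rocket_lake(void)
{
	struct cpuinfo_x86 *c = &cpu_data(0);

	return c->x86_vendor == X86_VENDOR_INTEL &&
	       c->x86 == 6 &&
	       c->x86_model == INTEL_FAM6_ROCKETLAKE;
}
#else
static bool smu7_host_is_rocket_lake(void)
{
	return false;
}
#endif

smu7 could then skip enabling PCIE DPM when this returns true, instead
of walking the ISA bridge devices as in the patch below.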

Alex

>
> Thanks,
> Lijo
>
> >>
> >> BR,
> >> Jani.
> >>
> >>
> >>>
> >>> Thanks
> 
>  BR,
>  Jani.
> 
> 
> >
> > Fixes: 1a31474cdb48 ("drm/amd/pm: workaround for audio noise issue")
> > Ref: 
> > https://lists.freedesktop.org/archives/amd-gfx/2021-August/067413.html
> > Signed-off-by: Koba Ko 
> > ---
> >   .../drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c   | 21 ++-
> >   1 file changed, 20 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c 
> > b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > index 0541bfc81c1b..346110dd0f51 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > @@ -1733,6 +1733,25 @@ static int smu7_disable_dpm_tasks(struct 
> > pp_hwmgr *hwmgr)
> >return result;
> >   }
> >
> > +#include 
> > +
> > +static bool intel_tgp_chk(void)
> > +{
> > + struct pci_dev *pch = NULL;
> > + unsigned short id;
> > +
> > + while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) {
> > + if (pch->vendor != PCI_VENDOR_ID_INTEL)
> > + continue;
> > +
> > + id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
> > + if (id == INTEL_PCH_TGP_DEVICE_ID_TYPE || 
> > INTEL_PCH_TGP2_DEVICE_ID_TYPE)
> 
>  PS. This is always true. ;)
> >>>
> >>> got, thanks
> >>>
> 
> > + return true;
> > + }
> > +
> > + return false;
> > +}
> > +
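
To spell out the fix for the comparison Jani flagged above (just the
corrected expression, not a full patch): as written, the right-hand side
of the || is a non-zero constant, so the condition is always true.  It
needs to compare id on both sides:

	id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
	if (id == INTEL_PCH_TGP_DEVICE_ID_TYPE ||
	    id == INTEL_PCH_TGP2_DEVICE_ID_TYPE)
		return true;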

Re: [Intel-gfx] [PATCH 01/15] dma-resv: Fix kerneldoc

2021-06-22 Thread Alex Deucher
On Tue, Jun 22, 2021 at 12:55 PM Daniel Vetter  wrote:
>
> Oversight from
>
> commit 6edbd6abb783d54f6ac4c3ed5cd9e50cff6c15e9
> Author: Christian König 
> Date:   Mon May 10 16:14:09 2021 +0200
>
> dma-buf: rename and cleanup dma_resv_get_excl v3
>
> Signed-off-by: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: "Christian König" 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org

Reviewed-by: Alex Deucher 

> ---
>  include/linux/dma-resv.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 562b885cf9c3..e1ca2080a1ff 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -212,7 +212,7 @@ static inline void dma_resv_unlock(struct dma_resv *obj)
>  }
>
>  /**
> - * dma_resv_exclusive - return the object's exclusive fence
> + * dma_resv_excl_fence - return the object's exclusive fence
>   * @obj: the reservation object
>   *
>   * Returns the exclusive fence (if any). Caller must either hold the objects
> --
> 2.32.0.rc2
>
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 02/15] dma-buf: Switch to inline kerneldoc

2021-06-22 Thread Alex Deucher
On Tue, Jun 22, 2021 at 12:55 PM Daniel Vetter  wrote:
>
> Also review & update everything while we're at it.
>
> This is prep work to smash a ton of stuff into the kerneldoc for
> @resv.
>
> Signed-off-by: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: "Christian König" 
> Cc: Alex Deucher 
> Cc: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: Nirmoy Das 
> Cc: Deepak R Varma 
> Cc: Chen Li 
> Cc: Kevin Wang 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> ---
>  include/linux/dma-buf.h | 107 +++-
>  1 file changed, 83 insertions(+), 24 deletions(-)
>
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 92eec38a03aa..6d18b9e448b9 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -289,28 +289,6 @@ struct dma_buf_ops {
>
>  /**
>   * struct dma_buf - shared buffer object
> - * @size: size of the buffer; invariant over the lifetime of the buffer.
> - * @file: file pointer used for sharing buffers across, and for refcounting.
> - * @attachments: list of dma_buf_attachment that denotes all devices 
> attached,
> - *   protected by dma_resv lock.
> - * @ops: dma_buf_ops associated with this buffer object.
> - * @lock: used internally to serialize list manipulation, attach/detach and
> - *vmap/unmap
> - * @vmapping_counter: used internally to refcnt the vmaps
> - * @vmap_ptr: the current vmap ptr if vmapping_counter > 0
> - * @exp_name: name of the exporter; useful for debugging.
> - * @name: userspace-provided name; useful for accounting and debugging,
> - *protected by @resv.
> - * @name_lock: spinlock to protect name access
> - * @owner: pointer to exporter module; used for refcounting when exporter is 
> a
> - * kernel module.
> - * @list_node: node for dma_buf accounting and debugging.
> - * @priv: exporter specific private data for this buffer object.
> - * @resv: reservation object linked to this dma-buf
> - * @poll: for userspace poll support
> - * @cb_excl: for userspace poll support
> - * @cb_shared: for userspace poll support
> - * @sysfs_entry: for exposing information about this buffer in sysfs.
>   * The attachment_uid member of @sysfs_entry is protected by dma_resv lock
>   * and is incremented on each attach.
>   *
> @@ -324,24 +302,100 @@ struct dma_buf_ops {
>   * Device DMA access is handled by the separate &struct dma_buf_attachment.
>   */
>  struct dma_buf {
> +   /**
> +* @size:
> +*
> +* Size of the buffer; invariant over the lifetime of the buffer.
> +*/
> size_t size;
> +
> +   /**
> +* @file:
> +*
> +* File pointer used for sharing buffers across, and for refcounting.
> +* See dma_buf_get() and dma_buf_put().
> +*/
> struct file *file;
> +
> +   /**
> +* @attachments:
> +*
> +* List of dma_buf_attachment that denotes all devices attached,
> +* protected by &dma_resv lock @resv.
> +*/
> struct list_head attachments;
> +
> +   /** @ops: dma_buf_ops associated with this buffer object. */

For consistency you may want to format this like:
/**
  * @ops:
  *
  * dma_buf_ops associated with this buffer object.
  */

> const struct dma_buf_ops *ops;
> +
> +   /**
> +* @lock:
> +*
> +* Used internally to serialize list manipulation, attach/detach and
> +* vmap/unmap. Note that in many cases this is superseeded by
> +* dma_resv_lock() on @resv.
> +*/
> struct mutex lock;
> +
> +   /**
> +* @vmapping_counter:
> +*
> +* Used internally to refcnt the vmaps returned by dma_buf_vmap().
> +* Protected by @lock.
> +*/
> unsigned vmapping_counter;
> +
> +   /**
> +* @vmap_ptr:
> +* The current vmap ptr if @vmapping_counter > 0. Protected by @lock.
> +*/

Same comment as above.

> struct dma_buf_map vmap_ptr;
> +
> +   /**
> +* @exp_name:
> +*
> +* Name of the exporter; useful for debugging. See the
> +* DMA_BUF_SET_NAME IOCTL.
> +*/
> const char *exp_name;
> +
> +   /**
> +* @name:
> +*
> +* Userspace-provided name; useful for accounting and debugging,
> +* protected by dma_resv_lock() on @resv and @name_lock for read 
> access.
> +*/
> const char *name;
> +
> +   /** @name_lock: Spinlock to protect name acces for read access. */
> spinlock_t na

Re: [Intel-gfx] [PULL] drm-misc-next-fixes

2021-04-22 Thread Alex Deucher
On Thu, Apr 22, 2021 at 12:33 PM Maxime Ripard  wrote:
>
> Hi Dave, Daniel,
>
> Here's this week drm-misc-next-fixes PR, for the next merge window
>

Can we also cherry-pick this patch:
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=d510c88cfbb294d2b1e2d0b71576e9b79d0e2e83
It should have really gone into drm-misc-next-fixes rather than
drm-misc-next, but I misjudged the timing.

Thanks,

Alex

> Thanks!
> Maxime
>
> drm-misc-next-fixes-2021-04-22:
> A few fixes for the next merge window, with some build fixes for anx7625
> and lt8912b bridges, incorrect error handling for lt8912b and TTM, and
> one fix for TTM page limit accounting.
> The following changes since commit 9c0fed84d5750e1eea6c664e073ffa2534a17743:
>
>   Merge tag 'drm-intel-next-2021-04-01' of 
> git://anongit.freedesktop.org/drm/drm-intel into drm-next (2021-04-08 
> 14:02:21 +1000)
>
> are available in the Git repository at:
>
>   git://anongit.freedesktop.org/drm/drm-misc 
> tags/drm-misc-next-fixes-2021-04-22
>
> for you to fetch changes up to a4394b6d0a273941a75ebe86a86d6416d536ed0f:
>
>   drm/ttm: Don't count pages in SG BOs against pages_limit (2021-04-21 
> 15:35:20 +0200)
>
> 
> A few fixes for the next merge window, with some build fixes for anx7625
> and lt8912b bridges, incorrect error handling for lt8912b and TTM, and
> one fix for TTM page limit accounting.
>
> 
> Adrien Grassein (1):
>   drm/bridge: lt8912b: fix incorrect handling of of_* return values
>
> Christian König (1):
>   drm/ttm: fix return value check
>
> Felix Kuehling (1):
>   drm/ttm: Don't count pages in SG BOs against pages_limit
>
> Randy Dunlap (2):
>   drm: bridge: fix ANX7625 use of mipi_dsi_() functions
>   drm: bridge: fix LONTIUM use of mipi_dsi_() functions
>
>  drivers/gpu/drm/bridge/Kconfig   |  3 +++
>  drivers/gpu/drm/bridge/analogix/Kconfig  |  1 +
>  drivers/gpu/drm/bridge/lontium-lt8912b.c | 32 
> +---
>  drivers/gpu/drm/ttm/ttm_tt.c | 29 +++--
>  4 files changed, 40 insertions(+), 25 deletions(-)
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v6 1/1] drm/drm_mst: Use Extended Base Receiver Capability DPCD space

2021-04-28 Thread Alex Deucher
+ dri-devel as well.

On Wed, Apr 28, 2021 at 4:44 PM Nikola Cornij  wrote:
>
> [why]
> DP 1.4a spec mandates that if DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT is
> set, Extended Base Receiver Capability DPCD space must be used. Without
> doing that, the three DPCD values that differ will be wrong, leading to
> incorrect or limited functionality. MST link rate, for example, could
> have a lower value. Also, Synaptics quirk wouldn't work out well when
> Extended DPCD was not read, resulting in no DSC for such hubs.
>
> [how]
> Modify MST topology manager to use the values from Extended DPCD where
> applicable.
>
> To prevent regression on the sources that have a lower maximum link rate
> capability than MAX_LINK_RATE from Extended DPCD, have the drivers
> supply maximum lane count and rate at initialization time.
>
> This also reverts 'commit 2dcab875e763 ("Revert "drm/dp_mst: Retrieve
> extended DPCD caps for topology manager"")', bringing the change back to
> the original 'commit ad44c03208e4 ("drm/dp_mst: Retrieve extended DPCD
> caps for topology manager")'.
>
> Signed-off-by: Nikola Cornij 
> ---
>  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  5 +++
>  .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 18 +++
>  drivers/gpu/drm/amd/display/dc/dc_link.h  |  2 ++
>  drivers/gpu/drm/drm_dp_mst_topology.c | 32 ---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c   |  6 +++-
>  drivers/gpu/drm/nouveau/dispnv50/disp.c   |  3 +-
>  drivers/gpu/drm/radeon/radeon_dp_mst.c|  8 +
>  include/drm/drm_dp_mst_helper.h   | 12 ++-
>  8 files changed, 71 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> index 997567f6f0ba..b7e01b6fb328 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> @@ -429,6 +429,8 @@ void amdgpu_dm_initialize_dp_connector(struct 
> amdgpu_display_manager *dm,
>struct amdgpu_dm_connector *aconnector,
>int link_index)
>  {
> +   struct dc_link_settings max_link_enc_cap = {0};
> +
> aconnector->dm_dp_aux.aux.name =
> kasprintf(GFP_KERNEL, "AMDGPU DM aux hw bus %d",
>   link_index);
> @@ -443,6 +445,7 @@ void amdgpu_dm_initialize_dp_connector(struct 
> amdgpu_display_manager *dm,
> if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
> return;
>
> +   dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, 
> &max_link_enc_cap);
> aconnector->mst_mgr.cbs = &dm_mst_cbs;
> drm_dp_mst_topology_mgr_init(
> &aconnector->mst_mgr,
> @@ -450,6 +453,8 @@ void amdgpu_dm_initialize_dp_connector(struct 
> amdgpu_display_manager *dm,
> &aconnector->dm_dp_aux.aux,
> 16,
> 4,
> +   max_link_enc_cap.lane_count,
> +   max_link_enc_cap.link_rate,
> aconnector->connector_id);
>
> drm_connector_attach_dp_subconnector_property(&aconnector->base);
> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
> b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> index 7d2e433c2275..6fe66b7ee53e 100644
> --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> @@ -1894,6 +1894,24 @@ bool dc_link_dp_sync_lt_end(struct dc_link *link, bool 
> link_down)
> return true;
>  }
>
> +bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct 
> dc_link_settings *max_link_enc_cap)
> +{
> +   if (!max_link_enc_cap) {
> +   DC_LOG_ERROR("%s: Could not return max link encoder caps", 
> __func__);
> +   return false;
> +   }
> +
> +   if (link->link_enc->funcs->get_max_link_cap) {
> +   link->link_enc->funcs->get_max_link_cap(link->link_enc, 
> max_link_enc_cap);
> +   return true;
> +   }
> +
> +   DC_LOG_ERROR("%s: Max link encoder caps unknown", __func__);
> +   max_link_enc_cap->lane_count = 1;
> +   max_link_enc_cap->link_rate = 6;
> +   return false;
> +}
> +
>  static struct dc_link_settings get_max_link_cap(struct dc_link *link)
>  {
> struct dc_link_settings max_link_cap = {0};
> diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h 
> b/drivers/gpu/drm/amd/display/dc/dc_link.h
> index b0013e674864..cb6d0543d839 100644
> --- a/drivers/gpu/drm/amd/display/dc/dc_link.h
> +++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
> @@ -346,6 +346,8 @@ bool dc_link_dp_set_test_pattern(
> const unsigned char *p_custom_pattern,
> unsigned int cust_pattern_size);
>
> +bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct 
> dc_link_settings *max_link_enc_cap);
> +
>  void dc_link_enab

Re: [Intel-gfx] [PULL] drm-misc-next-fixes

2021-04-28 Thread Alex Deucher
On Mon, Apr 26, 2021 at 3:35 AM Maxime Ripard  wrote:
>
> Hi Alex,
>
> On Thu, Apr 22, 2021 at 12:40:10PM -0400, Alex Deucher wrote:
> > On Thu, Apr 22, 2021 at 12:33 PM Maxime Ripard  wrote:
> > >
> > > Hi Dave, Daniel,
> > >
> > > Here's this week drm-misc-next-fixes PR, for the next merge window
> > >
> >
> > Can we also cherry-pick this patch:
> > https://cgit.freedesktop.org/drm/drm-misc/commit/?id=d510c88cfbb294d2b1e2d0b71576e9b79d0e2e83
> > It should have really gone into drm-misc-next-fixes rather than
> > drm-misc-next, but I misjudged the timing.
>
> Yeah, just cherry-pick it, I'll keep sending PR during the merge window :)

Thanks, I cherry-picked it yesterday.

Alex
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm: Split out drm_probe_helper.h

2019-01-15 Thread Alex Deucher
On Tue, Jan 15, 2019 at 5:41 AM Daniel Vetter  wrote:
>
> Having the probe helper stuff (which pretty much everyone needs) in
> the drm_crtc_helper.h file (which atomic drivers should never need) is
> confusing. Split them out.
>
> To make sure I actually achieved the goal here I went through all
> drivers. And indeed, all atomic drivers are now free of
> drm_crtc_helper.h includes.
>
> v2: Make it compile. There was so much compile fail on arm drivers
> that I figured I'll better not include any of the acks on v1.
>
> v3: Massive rebase because i915 has lost a lot of drmP.h includes, but
> not all: Through drm_crtc_helper.h > drm_modeset_helper.h -> drmP.h
> there was still one, which this patch largely removes. Which means
> rolling out lots more includes all over.
>
> This will also conflict with ongoing drmP.h cleanup by others I
> expect.
>
> v3: Rebase on top of atomic bochs.
>
> Cc: Sam Ravnborg 
> Cc: Jani Nikula 
> Cc: Laurent Pinchart 
> Acked-by: Rodrigo Vivi  (v2)
> Acked-by: Benjamin Gaignard  (v2)
> Signed-off-by: Daniel Vetter 
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: etna...@lists.freedesktop.org
> Cc: linux-samsung-...@vger.kernel.org
> Cc: intel-gfx@lists.freedesktop.org
> Cc: linux-media...@lists.infradead.org
> Cc: linux-amlo...@lists.infradead.org
> Cc: linux-arm-...@vger.kernel.org
> Cc: freedr...@lists.freedesktop.org
> Cc: nouv...@lists.freedesktop.org
> Cc: spice-de...@lists.freedesktop.org
> Cc: amd-...@lists.freedesktop.org
> Cc: linux-renesas-...@vger.kernel.org
> Cc: linux-rockc...@lists.infradead.org
> Cc: linux-st...@st-md-mailman.stormreply.com
> Cc: linux-te...@vger.kernel.org
> Cc: xen-de...@lists.xen.org
> ---
> Merging this is going to be a bit a mess due to all the ongoing drmP.h
> cleanups. I think the following should work:
> - Apply Sam's prep patches for removing drmP.h from
>   drm_modeset_helper.h
> - Get the i915 drmP.h cleanup backmerged into drm-misc-next
> - Apply this patch.
> - Apply Sam's patch to remove drmP.h from drm_modeset_helper.h
> - All through drm-misc-next, which has some potential for trivial
>   conflicts around #includes with other drivers unfortunately.
>
> I hope there's no other driver who'll blow up accidentally because
> someone else is doing a drmP.h cleanup. Laurent maybe?
>
> Jani, ack on this?
> -Daniel

amdgpu and radeon:
Acked-by: Alex Deucher 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm: remove redundant 'default n' from Kconfig

2019-04-12 Thread Alex Deucher
On Fri, Apr 12, 2019 at 5:56 AM Bartlomiej Zolnierkiewicz
 wrote:
>
> 'default n' is the default value for any bool or tristate Kconfig
> setting so there is no need to write it explicitly.
>
> Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO
> is not set' for visible symbols") the Kconfig behavior is the same
> regardless of 'default n' being present or not:
>
> ...
> One side effect of (and the main motivation for) this change is making
> the following two definitions behave exactly the same:
>
> config FOO
> bool
>
> config FOO
> bool
> default n
>
> With this change, neither of these will generate a
> '# CONFIG_FOO is not set' line (assuming FOO isn't selected/implied).
> That might make it clearer to people that a bare 'default n' is
> redundant.
> ...
>
> Signed-off-by: Bartlomiej Zolnierkiewicz 

Acked-by: Alex Deucher 
for amdgpu and drm.

> ---
>  drivers/gpu/drm/Kconfig |5 -
>  drivers/gpu/drm/amd/amdgpu/Kconfig  |1 -
>  drivers/gpu/drm/arm/Kconfig |1 -
>  drivers/gpu/drm/exynos/Kconfig  |2 --
>  drivers/gpu/drm/i915/Kconfig|3 ---
>  drivers/gpu/drm/i915/Kconfig.debug  |   13 -
>  drivers/gpu/drm/msm/Kconfig |2 --
>  drivers/gpu/drm/nouveau/Kconfig |2 --
>  drivers/gpu/drm/omapdrm/Kconfig |1 -
>  drivers/gpu/drm/omapdrm/dss/Kconfig |6 --
>  10 files changed, 36 deletions(-)
>
> Index: b/drivers/gpu/drm/Kconfig
> ===
> --- a/drivers/gpu/drm/Kconfig   2019-04-12 11:42:30.070095359 +0200
> +++ b/drivers/gpu/drm/Kconfig   2019-04-12 11:42:30.066095359 +0200
> @@ -37,7 +37,6 @@ config DRM_DP_AUX_CHARDEV
>
>  config DRM_DEBUG_MM
> bool "Insert extra checks and debug info into the DRM range managers"
> -   default n
> depends on DRM=y
> depends on STACKTRACE_SUPPORT
> select STACKDEPOT
> @@ -56,7 +55,6 @@ config DRM_DEBUG_SELFTEST
> select PRIME_NUMBERS
> select DRM_LIB_RANDOM
> select DRM_KMS_HELPER
> -   default n
> help
>   This option provides kernel modules that can be used to run
>   various selftests on parts of the DRM api. This option is not
> @@ -113,7 +111,6 @@ config DRM_FBDEV_OVERALLOC
>  config DRM_FBDEV_LEAK_PHYS_SMEM
> bool "Shamelessly allow leaking of fbdev physical address (DANGEROUS)"
> depends on DRM_FBDEV_EMULATION && EXPERT
> -   default n
> help
>   In order to keep user-space compatibility, we want in certain
>   use-cases to keep leaking the fbdev physical address to the
> @@ -247,7 +244,6 @@ config DRM_VKMS
> tristate "Virtual KMS (EXPERIMENTAL)"
> depends on DRM
> select DRM_KMS_HELPER
> -   default n
> help
>   Virtual Kernel Mode-Setting (VKMS) is used for testing or for
>   running GPU in a headless machines. Choose this option to get
> @@ -424,4 +420,3 @@ config DRM_PANEL_ORIENTATION_QUIRKS
>
>  config DRM_LIB_RANDOM
> bool
> -   default n
> Index: b/drivers/gpu/drm/amd/amdgpu/Kconfig
> ===
> --- a/drivers/gpu/drm/amd/amdgpu/Kconfig2019-04-12 11:42:30.070095359 
> +0200
> +++ b/drivers/gpu/drm/amd/amdgpu/Kconfig2019-04-12 11:42:30.066095359 
> +0200
> @@ -35,7 +35,6 @@ config DRM_AMDGPU_GART_DEBUGFS
> bool "Allow GART access through debugfs"
> depends on DRM_AMDGPU
> depends on DEBUG_FS
> -   default n
> help
>   Selecting this option creates a debugfs file to inspect the mapped
>   pages. Uses more memory for housekeeping, enable only for debugging.
> Index: b/drivers/gpu/drm/arm/Kconfig
> ===
> --- a/drivers/gpu/drm/arm/Kconfig   2019-04-12 11:42:30.070095359 +0200
> +++ b/drivers/gpu/drm/arm/Kconfig   2019-04-12 11:42:30.066095359 +0200
> @@ -16,7 +16,6 @@ config DRM_HDLCD
>  config DRM_HDLCD_SHOW_UNDERRUN
> bool "Show underrun conditions"
> depends on DRM_HDLCD
> -   default n
> help
>   Enable this option to show in red colour the pixels that the
>   HDLCD device did not fetch from framebuffer due to underrun
> Index: b/drivers/gpu/drm/exynos/Kconfig
> ===

Re: [Intel-gfx] [PATCH] drm: Nuke drm_calc_{h,v}scale_relaxed()

2019-02-06 Thread Alex Deucher
On Wed, Feb 6, 2019 at 1:32 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> The fuzzy drm_calc_{h,v}scale_relaxed() helpers are no longer used.
> Throw them in the bin.
>
> Signed-off-by: Ville Syrjälä 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_rect.c | 108 -
>  include/drm/drm_rect.h |   6 ---
>  2 files changed, 114 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_rect.c b/drivers/gpu/drm/drm_rect.c
> index 8c057829b804..66c41b12719c 100644
> --- a/drivers/gpu/drm/drm_rect.c
> +++ b/drivers/gpu/drm/drm_rect.c
> @@ -207,114 +207,6 @@ int drm_rect_calc_vscale(const struct drm_rect *src,
>  }
>  EXPORT_SYMBOL(drm_rect_calc_vscale);
>
> -/**
> - * drm_calc_hscale_relaxed - calculate the horizontal scaling factor
> - * @src: source window rectangle
> - * @dst: destination window rectangle
> - * @min_hscale: minimum allowed horizontal scaling factor
> - * @max_hscale: maximum allowed horizontal scaling factor
> - *
> - * Calculate the horizontal scaling factor as
> - * (@src width) / (@dst width).
> - *
> - * If the calculated scaling factor is below @min_vscale,
> - * decrease the height of rectangle @dst to compensate.
> - *
> - * If the calculated scaling factor is above @max_vscale,
> - * decrease the height of rectangle @src to compensate.
> - *
> - * If the scale is below 1 << 16, round down. If the scale is above
> - * 1 << 16, round up. This will calculate the scale with the most
> - * pessimistic limit calculation.
> - *
> - * RETURNS:
> - * The horizontal scaling factor.
> - */
> -int drm_rect_calc_hscale_relaxed(struct drm_rect *src,
> -struct drm_rect *dst,
> -int min_hscale, int max_hscale)
> -{
> -   int src_w = drm_rect_width(src);
> -   int dst_w = drm_rect_width(dst);
> -   int hscale = drm_calc_scale(src_w, dst_w);
> -
> -   if (hscale < 0 || dst_w == 0)
> -   return hscale;
> -
> -   if (hscale < min_hscale) {
> -   int max_dst_w = src_w / min_hscale;
> -
> -   drm_rect_adjust_size(dst, max_dst_w - dst_w, 0);
> -
> -   return min_hscale;
> -   }
> -
> -   if (hscale > max_hscale) {
> -   int max_src_w = dst_w * max_hscale;
> -
> -   drm_rect_adjust_size(src, max_src_w - src_w, 0);
> -
> -   return max_hscale;
> -   }
> -
> -   return hscale;
> -}
> -EXPORT_SYMBOL(drm_rect_calc_hscale_relaxed);
> -
> -/**
> - * drm_rect_calc_vscale_relaxed - calculate the vertical scaling factor
> - * @src: source window rectangle
> - * @dst: destination window rectangle
> - * @min_vscale: minimum allowed vertical scaling factor
> - * @max_vscale: maximum allowed vertical scaling factor
> - *
> - * Calculate the vertical scaling factor as
> - * (@src height) / (@dst height).
> - *
> - * If the calculated scaling factor is below @min_vscale,
> - * decrease the height of rectangle @dst to compensate.
> - *
> - * If the calculated scaling factor is above @max_vscale,
> - * decrease the height of rectangle @src to compensate.
> - *
> - * If the scale is below 1 << 16, round down. If the scale is above
> - * 1 << 16, round up. This will calculate the scale with the most
> - * pessimistic limit calculation.
> - *
> - * RETURNS:
> - * The vertical scaling factor.
> - */
> -int drm_rect_calc_vscale_relaxed(struct drm_rect *src,
> -struct drm_rect *dst,
> -int min_vscale, int max_vscale)
> -{
> -   int src_h = drm_rect_height(src);
> -   int dst_h = drm_rect_height(dst);
> -   int vscale = drm_calc_scale(src_h, dst_h);
> -
> -   if (vscale < 0 || dst_h == 0)
> -   return vscale;
> -
> -   if (vscale < min_vscale) {
> -   int max_dst_h = src_h / min_vscale;
> -
> -   drm_rect_adjust_size(dst, 0, max_dst_h - dst_h);
> -
> -   return min_vscale;
> -   }
> -
> -   if (vscale > max_vscale) {
> -   int max_src_h = dst_h * max_vscale;
> -
> -   drm_rect_adjust_size(src, 0, max_src_h - src_h);
> -
> -   return max_vscale;
> -   }
> -
> -   return vscale;
> -}
> -EXPORT_SYMBOL(drm_rect_calc_vscale_relaxed);
> -
>  /**
>   * drm_rect_debug_print - print the rectangle information
>   * @prefix: prefix string
> diff --git a/include/drm/drm_rect.h b/include/drm/drm_rect.h
> index 6c54544a4be7..6195820aa5c5 100644
> --- a/include/drm/drm_rect.h
> +++ b/include/drm/

Re: [Intel-gfx] [RFC PATCH 00/42] Introduce memory region concept (including device local memory)

2019-02-25 Thread Alex Deucher
On Mon, Feb 25, 2019 at 9:35 PM Joonas Lahtinen
 wrote:
>
> Quoting Dave Airlie (2019-02-25 12:24:48)
> > On Tue, 19 Feb 2019 at 23:32, Joonas Lahtinen
> >  wrote:
> > >
> > > + dri-devel mailing list, especially for the buddy allocator part
> > >
> > > Quoting Dave Airlie (2019-02-15 02:47:07)
> > > > On Fri, 15 Feb 2019 at 00:57, Matthew Auld  
> > > > wrote:
> > > > >
> > > > > In preparation for upcoming devices with device local memory, 
> > > > > introduce the
> > > > > concept of different memory regions, and a simple buddy allocator to 
> > > > > manage
> > > > > them.
> > > >
> > > > This is missing the information on why it's not TTM.
> > > >
> > > > Nothing against extending i915 gem off into doing stuff we already
> > > > have examples off in tree, but before you do that it would be good to
> > > > have a why we can't use TTM discussion in public.
> > >
> > > Glad that you asked. It's my fault that it was not included in
> > > the cover letter. I anticipated the question, but was travelling
> > > for a couple of days at the time this was sent. I didn't want
> > > to write a hasty explanation and then disappear, leaving others to
> > > take the heat.
> > >
> > > So here goes the less-hasty version:
> > >
> > > We did an analysis on the effort needed vs benefit gained of using
> > > TTM when this was started initially. The conclusion was that we
> > > already share the interesting bits of code through core DRM, really.
> > >
> > > Re-writing the memory handling to TTM would buy us more fine-grained
> > > locking. But it's more a trait of rewriting the memory handling with
> > > the information we have learned, than rewriting it to use TTM :)
> > >
> > > And further, we've been getting rid of struct_mutex at a steady phase
> > > in the past years, so we have a clear path to the fine-grained locking
> > > already in the not-so-distant future. With all this we did not see
> > > much gained from converting over, as the code sharing is already
> > > substantial.
> > >
> > > We also wanted to have the buddy allocator instead of a for loop making
> > > drm_mm allocations to make sure we can keep the memory fragmentation
> > > at bay. The intent is to move the buddy allocator to core DRM, to the
> > > benefit of all the drivers, if there is interest from community. It has
> > > been written as a strictly separate component with that in mind.
> > >
> > > And if you take the buddy allocator out of the patch set, the rest is
> > > mostly just vfuncing things up to be able to have different backing
> > > storages for objects. We took the opportunity to move over to the more
> > > valgrind friendly mmap while touching things, but it's something we
> > > have been contemplating anyway. And yeah, loads of selftests.
> > >
> > > That's really all that needed adding, and most of it is internal to
> > > i915 and not to do with uAPI. This means porting over an userspace
> > > driver doesn't require a substantial rewrite, but adding new a few
> > > new IOCTLs to set the preferred backing storage placements.
> > >
> > > All the previous GEM abstractions keep applying, so we did not see
> > > a justification to rewrite the kernel driver and userspace drivers.
> > > It would have just to made things look like TTM, when we already
> > > have the important parts of the code shared with TTM drivers
> > > behind the GEM interfaces which all our drivers sit on top of.
> >
> > a) you guys should be the community as well, if the buddy allocator is
> > useful in the core DRM get out there and try and see if anyone else
> > has a use case for it, like the GPU scheduler we have now (can i915
> > use that yet? :-)
>
> Well, the buddy allocator should be useful for anybody wishing to have
> as continuous physical allocations as possible. I have naively assumed
> that would be almost everyone. So it would be only a question if others
> see the amount of work required to convert over is justified for them.
>
> For the common DRM scheduler, I think a solid move from the beginning
> would have been to factor out the i915 scheduler as it was most advanced
> in features :) Now there is a way more trivial common scheduler core with
> no easy path to transition without a feature regression.

Can you elaborate?  What features are missing from the drm gpu scheduler?

>
> We'd have to rewrite many of the more advanced features for that codebase
> before we could transition over. It's hard to justify such work, for
> that it would buy us very little compared to amount of work.
>
> Situation would be different if there was something gained from
> switching over. This would be the situation if the more advanced
> scheduler was picked as the shared codebase.
>
> > b) however this last two paragraphs fill me with no confidence that
> > you've looked at TTM at all. It sounds like you took comments about
> > TTM made 10 years ago, and didn't update them. There should be no
> > major reason for a uapi change just because you adopt TTM. TTM hasn't
> 

Re: [Intel-gfx] [RFC PATCH 00/42] Introduce memory region concept (including device local memory)

2019-02-26 Thread Alex Deucher
On Tue, Feb 26, 2019 at 7:17 AM Joonas Lahtinen
 wrote:
>
> Quoting Alex Deucher (2019-02-25 21:31:43)
> > On Mon, Feb 25, 2019 at 9:35 PM Joonas Lahtinen
> >  wrote:
> > >
> > > Quoting Dave Airlie (2019-02-25 12:24:48)
> > > > On Tue, 19 Feb 2019 at 23:32, Joonas Lahtinen
> > > >  wrote:
> > > > >
> > > > > + dri-devel mailing list, especially for the buddy allocator part
> > > > >
> > > > > Quoting Dave Airlie (2019-02-15 02:47:07)
> > > > > > On Fri, 15 Feb 2019 at 00:57, Matthew Auld  
> > > > > > wrote:
> > > > > > >
> > > > > > > In preparation for upcoming devices with device local memory, 
> > > > > > > introduce the
> > > > > > > concept of different memory regions, and a simple buddy allocator 
> > > > > > > to manage
> > > > > > > them.
> > > > > >
> > > > > > This is missing the information on why it's not TTM.
> > > > > >
> > > > > > Nothing against extending i915 gem off into doing stuff we already
> > > > > > have examples off in tree, but before you do that it would be good 
> > > > > > to
> > > > > > have a why we can't use TTM discussion in public.
> > > > >
> > > > > Glad that you asked. It's my fault that it was not included in
> > > > > the cover letter. I anticipated the question, but was travelling
> > > > > for a couple of days at the time this was sent. I didn't want
> > > > > to write a hasty explanation and then disappear, leaving others to
> > > > > take the heat.
> > > > >
> > > > > So here goes the less-hasty version:
> > > > >
> > > > > We did an analysis on the effort needed vs benefit gained of using
> > > > > TTM when this was started initially. The conclusion was that we
> > > > > already share the interesting bits of code through core DRM, really.
> > > > >
> > > > > Re-writing the memory handling to TTM would buy us more fine-grained
> > > > > locking. But it's more a trait of rewriting the memory handling with
> > > > > the information we have learned, than rewriting it to use TTM :)
> > > > >
> > > > > And further, we've been getting rid of struct_mutex at a steady phase
> > > > > in the past years, so we have a clear path to the fine-grained locking
> > > > > already in the not-so-distant future. With all this we did not see
> > > > > much gained from converting over, as the code sharing is already
> > > > > substantial.
> > > > >
> > > > > We also wanted to have the buddy allocator instead of a for loop 
> > > > > making
> > > > > drm_mm allocations to make sure we can keep the memory fragmentation
> > > > > at bay. The intent is to move the buddy allocator to core DRM, to the
> > > > > benefit of all the drivers, if there is interest from community. It 
> > > > > has
> > > > > been written as a strictly separate component with that in mind.
> > > > >
> > > > > And if you take the buddy allocator out of the patch set, the rest is
> > > > > mostly just vfuncing things up to be able to have different backing
> > > > > storages for objects. We took the opportunity to move over to the more
> > > > > valgrind friendly mmap while touching things, but it's something we
> > > > > have been contemplating anyway. And yeah, loads of selftests.
> > > > >
> > > > > That's really all that needed adding, and most of it is internal to
> > > > > i915 and not to do with uAPI. This means porting over an userspace
> > > > > driver doesn't require a substantial rewrite, but adding new a few
> > > > > new IOCTLs to set the preferred backing storage placements.
> > > > >
> > > > > All the previous GEM abstractions keep applying, so we did not see
> > > > > a justification to rewrite the kernel driver and userspace drivers.
> > > > > It would have just to made things look like TTM, when we already
> > > > > have the important parts of the code shared with TTM drivers
> > > > > behind the GEM interfaces which all our drivers sit on top of.
> >

Re: [Intel-gfx] [RFC PATCH 00/42] Introduce memory region concept (including device local memory)

2019-02-26 Thread Alex Deucher
On Tue, Feb 26, 2019 at 12:20 PM Alex Deucher  wrote:
>
> On Tue, Feb 26, 2019 at 7:17 AM Joonas Lahtinen
>  wrote:
> >
> > Quoting Alex Deucher (2019-02-25 21:31:43)
> > > On Mon, Feb 25, 2019 at 9:35 PM Joonas Lahtinen
> > >  wrote:
> > > >
> > > > Quoting Dave Airlie (2019-02-25 12:24:48)
> > > > > On Tue, 19 Feb 2019 at 23:32, Joonas Lahtinen
> > > > >  wrote:
> > > > > >
> > > > > > + dri-devel mailing list, especially for the buddy allocator part
> > > > > >
> > > > > > Quoting Dave Airlie (2019-02-15 02:47:07)
> > > > > > > On Fri, 15 Feb 2019 at 00:57, Matthew Auld 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > In preparation for upcoming devices with device local memory, 
> > > > > > > > introduce the
> > > > > > > > concept of different memory regions, and a simple buddy 
> > > > > > > > allocator to manage
> > > > > > > > them.
> > > > > > >
> > > > > > > This is missing the information on why it's not TTM.
> > > > > > >
> > > > > > > Nothing against extending i915 gem off into doing stuff we already
> > > > > > > have examples off in tree, but before you do that it would be 
> > > > > > > good to
> > > > > > > have a why we can't use TTM discussion in public.
> > > > > >
> > > > > > Glad that you asked. It's my fault that it was not included in
> > > > > > the cover letter. I anticipated the question, but was travelling
> > > > > > for a couple of days at the time this was sent. I didn't want
> > > > > > to write a hasty explanation and then disappear, leaving others to
> > > > > > take the heat.
> > > > > >
> > > > > > So here goes the less-hasty version:
> > > > > >
> > > > > > We did an analysis on the effort needed vs benefit gained of using
> > > > > > TTM when this was started initially. The conclusion was that we
> > > > > > already share the interesting bits of code through core DRM, really.
> > > > > >
> > > > > > Re-writing the memory handling to TTM would buy us more fine-grained
> > > > > > locking. But it's more a trait of rewriting the memory handling with
> > > > > > the information we have learned, than rewriting it to use TTM :)
> > > > > >
> > > > > > And further, we've been getting rid of struct_mutex at a steady 
> > > > > > phase
> > > > > > in the past years, so we have a clear path to the fine-grained 
> > > > > > locking
> > > > > > already in the not-so-distant future. With all this we did not see
> > > > > > much gained from converting over, as the code sharing is already
> > > > > > substantial.
> > > > > >
> > > > > > We also wanted to have the buddy allocator instead of a for loop 
> > > > > > making
> > > > > > drm_mm allocations to make sure we can keep the memory fragmentation
> > > > > > at bay. The intent is to move the buddy allocator to core DRM, to 
> > > > > > the
> > > > > > benefit of all the drivers, if there is interest from community. It 
> > > > > > has
> > > > > > been written as a strictly separate component with that in mind.
> > > > > >
> > > > > > And if you take the buddy allocator out of the patch set, the rest 
> > > > > > is
> > > > > > mostly just vfuncing things up to be able to have different backing
> > > > > > storages for objects. We took the opportunity to move over to the 
> > > > > > more
> > > > > > valgrind friendly mmap while touching things, but it's something we
> > > > > > have been contemplating anyway. And yeah, loads of selftests.
> > > > > >
> > > > > > That's really all that needed adding, and most of it is internal to
> > > > > > i915 and not to do with uAPI. This means porting over an userspace
> > > > > > driver doesn't require a substantial rewrite, but adding new a few
> &g

Re: [Intel-gfx] [PATCH 4/4] drm/edid: Add display_info.rgb_quant_range_selectable

2018-11-28 Thread Alex Deucher
On Wed, Nov 28, 2018 at 12:19 PM Eric Anholt  wrote:
>
> Ville Syrjala  writes:
>
> > From: Ville Syrjälä 
> >
> > Move the CEA-861 QS bit handling entirely into the edid code. No
> > need to bother the drivers with this.
> >
> > Cc: Alex Deucher 
> > Cc: "Christian König" 
> > Cc: "David (ChunMing) Zhou" 
> > Cc: amd-...@lists.freedesktop.org
> > Cc: Eric Anholt  (supporter:DRM DRIVERS FOR VC4)
> > Signed-off-by: Ville Syrjälä 
>
> For vc4,
> Acked-by: Eric Anholt 
>
> Looks like a nice cleanup!

for radeon:
Acked-by: Alex Deucher 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/dp-mst-helper: Remove hotplug callback

2018-11-29 Thread Alex Deucher
On Wed, Nov 28, 2018 at 5:12 PM Daniel Vetter  wrote:
>
> When everyone implements it exactly the same way, among all 4
> implementations, there's not really a need to overwrite this at all.
>
> Aside: drm_kms_helper_hotplug_event is pretty much core functionality
> at this point. Probably should move it there.
>
> Signed-off-by: Daniel Vetter 

Acked-by: Alex Deucher 

> ---
>  .../drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  9 -
>  drivers/gpu/drm/drm_dp_mst_topology.c  |  7 ---
>  drivers/gpu/drm/i915/intel_dp_mst.c| 10 --
>  drivers/gpu/drm/nouveau/dispnv50/disp.c|  8 
>  drivers/gpu/drm/radeon/radeon_dp_mst.c |  9 -
>  include/drm/drm_dp_mst_helper.h|  2 --
>  6 files changed, 4 insertions(+), 41 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> index d02c32a1039c..9fdeca096407 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> @@ -396,14 +396,6 @@ static void dm_dp_destroy_mst_connector(struct 
> drm_dp_mst_topology_mgr *mgr,
> drm_connector_put(connector);
>  }
>
> -static void dm_dp_mst_hotplug(struct drm_dp_mst_topology_mgr *mgr)
> -{
> -   struct amdgpu_dm_connector *master = container_of(mgr, struct 
> amdgpu_dm_connector, mst_mgr);
> -   struct drm_device *dev = master->base.dev;
> -
> -   drm_kms_helper_hotplug_event(dev);
> -}
> -
>  static void dm_dp_mst_register_connector(struct drm_connector *connector)
>  {
> struct drm_device *dev = connector->dev;
> @@ -420,7 +412,6 @@ static void dm_dp_mst_register_connector(struct 
> drm_connector *connector)
>  static const struct drm_dp_mst_topology_cbs dm_mst_cbs = {
> .add_connector = dm_dp_add_mst_connector,
> .destroy_connector = dm_dp_destroy_mst_connector,
> -   .hotplug = dm_dp_mst_hotplug,
> .register_connector = dm_dp_mst_register_connector
>  };
>
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 08978ad72f33..639552918b44 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -33,6 +33,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>
>  /**
>   * DOC: dp mst helper
> @@ -1650,7 +1651,7 @@ static void drm_dp_send_link_address(struct 
> drm_dp_mst_topology_mgr *mgr,
> for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) 
> {
> drm_dp_add_port(mstb, mgr->dev, 
> &txmsg->reply.u.link_addr.ports[i]);
> }
> -   (*mgr->cbs->hotplug)(mgr);
> +   drm_kms_helper_hotplug_event(mgr->dev);
> }
> } else {
> mstb->link_address_sent = false;
> @@ -2423,7 +2424,7 @@ static int drm_dp_mst_handle_up_req(struct 
> drm_dp_mst_topology_mgr *mgr)
> drm_dp_update_port(mstb, &msg.u.conn_stat);
>
> DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: 
> %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, 
> msg.u.conn_stat.legacy_device_plug_status, 
> msg.u.conn_stat.displayport_device_plug_status, 
> msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, 
> msg.u.conn_stat.peer_device_type);
> -   (*mgr->cbs->hotplug)(mgr);
> +   drm_kms_helper_hotplug_event(mgr->dev);
>
> } else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
> drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, 
> msg.req_type, seqno, false);
> @@ -3120,7 +3121,7 @@ static void drm_dp_destroy_connector_work(struct 
> work_struct *work)
> send_hotplug = true;
> }
> if (send_hotplug)
> -   (*mgr->cbs->hotplug)(mgr);
> +   drm_kms_helper_hotplug_event(mgr->dev);
>  }
>
>  static struct drm_private_state *
> diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c 
> b/drivers/gpu/drm/i915/intel_dp_mst.c
> index 4de247ddf05f..f05427b74e34 100644
> --- a/drivers/gpu/drm/i915/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
> @@ -517,20 +517,10 @@ static void intel_dp_destroy_mst_connector(struct 
> drm_dp_mst_topology_mgr *mgr,
> drm_connector_put(connector);
>  }
>
> -static void intel_dp_mst_hotplug(struct drm_dp_mst_topology_mgr *mgr)
>

Re: [Intel-gfx] [PATCH 4/7] drm: Move the legacy kms disable_all helper to crtc helpers

2018-12-10 Thread Alex Deucher
On Mon, Dec 10, 2018 at 5:04 AM Daniel Vetter  wrote:
>
> It's not a core function, and the matching atomic functions are also
> not in the core. Plus the suspend/resume helper is also already there.
>
> Needs a tiny bit of open-coding, but less midlayer beats that I think.
>
> Cc: Sam Bobroff 
> Signed-off-by: Daniel Vetter 
> Cc: Maarten Lankhorst 
> Cc: Maxime Ripard 
> Cc: Sean Paul 
> Cc: David Airlie 
> Cc: Ben Skeggs 
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: "David (ChunMing) Zhou" 
> Cc: Rex Zhu 
> Cc: Andrey Grodzovsky 
> Cc: Huang Rui 
> Cc: Shaoyun Liu 
> Cc: Monk Liu 
> Cc: nouv...@lists.freedesktop.org
> Cc: amd-...@lists.freedesktop.org
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  2 +-
>  drivers/gpu/drm/drm_crtc.c | 31 ---
>  drivers/gpu/drm/drm_crtc_helper.c  | 35 ++
>  drivers/gpu/drm/nouveau/nouveau_display.c  |  2 +-
>  drivers/gpu/drm/radeon/radeon_display.c|  2 +-
>  include/drm/drm_crtc.h |  2 --
>  include/drm/drm_crtc_helper.h  |  1 +
>  7 files changed, 39 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c75badfa5c4c..e669297ffefb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2689,7 +2689,7 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
> amdgpu_irq_disable_all(adev);
> if (adev->mode_info.mode_config_initialized){
> if (!amdgpu_device_has_dc_support(adev))
> -   drm_crtc_force_disable_all(adev->ddev);
> +   drm_helper_force_disable_all(adev->ddev);
> else
> drm_atomic_helper_shutdown(adev->ddev);
> }
> diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> index f660819d406e..7dabbaf033a1 100644
> --- a/drivers/gpu/drm/drm_crtc.c
> +++ b/drivers/gpu/drm/drm_crtc.c
> @@ -104,37 +104,6 @@ int drm_crtc_force_disable(struct drm_crtc *crtc)
> return drm_mode_set_config_internal(&set);
>  }
>
> -/**
> - * drm_crtc_force_disable_all - Forcibly turn off all enabled CRTCs
> - * @dev: DRM device whose CRTCs to turn off
> - *
> - * Drivers may want to call this on unload to ensure that all displays are
> - * unlit and the GPU is in a consistent, low power state. Takes modeset 
> locks.
> - *
> - * Note: This should only be used by non-atomic legacy drivers. For an atomic
> - * version look at drm_atomic_helper_shutdown().
> - *
> - * Returns:
> - * Zero on success, error code on failure.
> - */
> -int drm_crtc_force_disable_all(struct drm_device *dev)
> -{
> -   struct drm_crtc *crtc;
> -   int ret = 0;
> -
> -   drm_modeset_lock_all(dev);
> -   drm_for_each_crtc(crtc, dev)
> -   if (crtc->enabled) {
> -   ret = drm_crtc_force_disable(crtc);
> -   if (ret)
> -   goto out;
> -   }
> -out:
> -   drm_modeset_unlock_all(dev);
> -   return ret;
> -}
> -EXPORT_SYMBOL(drm_crtc_force_disable_all);
> -
>  static unsigned int drm_num_crtcs(struct drm_device *dev)
>  {
> unsigned int num = 0;
> diff --git a/drivers/gpu/drm/drm_crtc_helper.c 
> b/drivers/gpu/drm/drm_crtc_helper.c
> index a3c81850e755..23159eb494f1 100644
> --- a/drivers/gpu/drm/drm_crtc_helper.c
> +++ b/drivers/gpu/drm/drm_crtc_helper.c
> @@ -984,3 +984,38 @@ void drm_helper_resume_force_mode(struct drm_device *dev)
> drm_modeset_unlock_all(dev);
>  }
>  EXPORT_SYMBOL(drm_helper_resume_force_mode);
> +
> +/**
> + * drm_helper_force_disable_all - Forcibly turn off all enabled CRTCs
> + * @dev: DRM device whose CRTCs to turn off
> + *
> + * Drivers may want to call this on unload to ensure that all displays are
> + * unlit and the GPU is in a consistent, low power state. Takes modeset 
> locks.
> + *
> + * Note: This should only be used by non-atomic legacy drivers. For an atomic
> + * version look at drm_atomic_helper_shutdown().
> + *
> + * Returns:
> + * Zero on success, error code on failure.
> + */
> +int drm_helper_force_disable_all(struct drm_device *dev)

Maybe put crtc somewhere in the function name so it's clear what we
are disabling.  With that fixed:
Reviewed-by: Alex Deucher 

> +{
> +   struct drm_crtc *crtc;
> +   int ret = 0;
> +
> +   drm_modeset_lock_all(dev);
> +   drm_for_each_crtc(crtc, dev)
> +   if (crtc->enabled) {
>

Re: [Intel-gfx] [PATCH 0/7] legacy helper cleanup

2018-12-10 Thread Alex Deucher
On Mon, Dec 10, 2018 at 5:04 AM Daniel Vetter  wrote:
>
> Hi all,
>
> Just a small cleanup motivated by the last patch. After this series atomic
> drivers do no longer need the drm_crtc_helper.h header, and none of them
> use it. Except for the 2 that support both atomic and legacy kms in the
> same driver module (nouveau and amdgpu).
>
> Last patch is a bit huge, but splitting it up will make the churn only
> worse.
>
> Comments and review very much appreciated.

Some comments on patch 4, 1-3,4-6 are:
Reviewed-by: Alex Deucher 
Assuming the build issues reported with patch 7 are fixed:
Reviewed-by: Alex Deucher 

>
> Cheers, Daniel
>
> Daniel Vetter (7):
>   drm/ch7006: Stop using drm_crtc_force_disable
>   drm/nouveau: Stop using drm_crtc_force_disable
>   drm: Unexport drm_crtc_force_disable
>   drm: Move the legacy kms disable_all helper to crtc helpers
>   drm/qxl: Don't set the dpms hook
>   drm/xen: Don't set the dpms hook
>   drm: Split out drm_probe_helper.h
>
>  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c|  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c|  4 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  1 +
>  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c |  2 +-
>  .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  2 +-
>  .../display/amdgpu_dm/amdgpu_dm_services.c|  2 +-
>  drivers/gpu/drm/arc/arcpgu_crtc.c |  2 +-
>  drivers/gpu/drm/arc/arcpgu_drv.c  |  2 +-
>  drivers/gpu/drm/arc/arcpgu_sim.c  |  2 +-
>  drivers/gpu/drm/arm/hdlcd_crtc.c  |  2 +-
>  drivers/gpu/drm/arm/hdlcd_drv.c   |  2 +-
>  drivers/gpu/drm/arm/malidp_crtc.c |  2 +-
>  drivers/gpu/drm/arm/malidp_drv.c  |  2 +-
>  drivers/gpu/drm/arm/malidp_mw.c   |  2 +-
>  drivers/gpu/drm/armada/armada_510.c   |  2 +-
>  drivers/gpu/drm/armada/armada_crtc.c  |  2 +-
>  drivers/gpu/drm/armada/armada_drv.c   |  2 +-
>  drivers/gpu/drm/armada/armada_fb.c|  2 +-
>  drivers/gpu/drm/ast/ast_drv.c |  1 +
>  drivers/gpu/drm/ast/ast_mode.c|  1 +
>  .../gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c|  2 +-
>  drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h  |  2 +-
>  drivers/gpu/drm/bochs/bochs_drv.c |  1 +
>  drivers/gpu/drm/bochs/bochs_kms.c |  1 +
>  drivers/gpu/drm/bridge/adv7511/adv7511.h  |  2 +-
>  drivers/gpu/drm/bridge/analogix-anx78xx.c |  3 +-
>  .../drm/bridge/analogix/analogix_dp_core.c|  2 +-
>  drivers/gpu/drm/bridge/cdns-dsi.c |  2 +-
>  drivers/gpu/drm/bridge/dumb-vga-dac.c |  2 +-
>  .../bridge/megachips-stdp-ge-b850v3-fw.c  |  2 +-
>  drivers/gpu/drm/bridge/nxp-ptn3460.c  |  2 +-
>  drivers/gpu/drm/bridge/panel.c|  2 +-
>  drivers/gpu/drm/bridge/parade-ps8622.c|  2 +-
>  drivers/gpu/drm/bridge/sii902x.c  |  2 +-
>  drivers/gpu/drm/bridge/synopsys/dw-hdmi.c |  2 +-
>  drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c |  2 +-
>  drivers/gpu/drm/bridge/tc358764.c |  2 +-
>  drivers/gpu/drm/bridge/tc358767.c |  2 +-
>  drivers/gpu/drm/bridge/ti-sn65dsi86.c |  2 +-
>  drivers/gpu/drm/bridge/ti-tfp410.c|  2 +-
>  drivers/gpu/drm/cirrus/cirrus_drv.c   |  1 +
>  drivers/gpu/drm/cirrus/cirrus_mode.c  |  1 +
>  drivers/gpu/drm/drm_atomic_helper.c   |  1 -
>  drivers/gpu/drm/drm_crtc.c| 41 ---
>  drivers/gpu/drm/drm_crtc_helper.c | 35 +
>  drivers/gpu/drm/drm_crtc_internal.h   |  1 +
>  drivers/gpu/drm/drm_dp_mst_topology.c |  2 +-
>  drivers/gpu/drm/drm_modeset_helper.c  |  2 +-
>  drivers/gpu/drm/drm_probe_helper.c|  2 +-
>  drivers/gpu/drm/drm_simple_kms_helper.c   |  2 +-
>  drivers/gpu/drm/etnaviv/etnaviv_drv.h |  1 -
>  drivers/gpu/drm/exynos/exynos_dp.c|  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_crtc.c  |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_dpi.c   |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_drv.c   |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_dsi.c   |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_fb.c|  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_fbdev.c |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_vidi.c  |  2 +-
>  drivers/gpu/drm/exynos/exynos_hdmi.c  |  2 +-
>  drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_crtc.c|  2 +-
>  drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c |  2 +-
>  drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_kms.c |  2 +-
>  drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_plane.c   |  2 +-
>  drivers/gpu

Re: [Intel-gfx] [PATCH 4/7] drm: Move the legacy kms disable_all helper to crtc helpers

2018-12-11 Thread Alex Deucher
On Tue, Dec 11, 2018 at 10:53 AM Sean Paul  wrote:
>
> On Mon, Dec 10, 2018 at 10:58:20AM -0500, Alex Deucher wrote:
> > On Mon, Dec 10, 2018 at 5:04 AM Daniel Vetter  
> > wrote:
> > >
> > > It's not a core function, and the matching atomic functions are also
> > > not in the core. Plus the suspend/resume helper is also already there.
> > >
> > > Needs a tiny bit of open-coding, but less midlayer beats that I think.
> > >
> > > Cc: Sam Bobroff 
> > > Signed-off-by: Daniel Vetter 
> > > Cc: Maarten Lankhorst 
> > > Cc: Maxime Ripard 
> > > Cc: Sean Paul 
> > > Cc: David Airlie 
> > > Cc: Ben Skeggs 
> > > Cc: Alex Deucher 
> > > Cc: "Christian König" 
> > > Cc: "David (ChunMing) Zhou" 
> > > Cc: Rex Zhu 
> > > Cc: Andrey Grodzovsky 
> > > Cc: Huang Rui 
> > > Cc: Shaoyun Liu 
> > > Cc: Monk Liu 
> > > Cc: nouv...@lists.freedesktop.org
> > > Cc: amd-...@lists.freedesktop.org
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  2 +-
> > >  drivers/gpu/drm/drm_crtc.c | 31 ---
> > >  drivers/gpu/drm/drm_crtc_helper.c  | 35 ++
> > >  drivers/gpu/drm/nouveau/nouveau_display.c  |  2 +-
> > >  drivers/gpu/drm/radeon/radeon_display.c|  2 +-
> > >  include/drm/drm_crtc.h |  2 --
> > >  include/drm/drm_crtc_helper.h  |  1 +
> > >  7 files changed, 39 insertions(+), 36 deletions(-)
> > >
>
> /snip
>
> > > diff --git a/drivers/gpu/drm/drm_crtc_helper.c 
> > > b/drivers/gpu/drm/drm_crtc_helper.c
> > > index a3c81850e755..23159eb494f1 100644
> > > --- a/drivers/gpu/drm/drm_crtc_helper.c
> > > +++ b/drivers/gpu/drm/drm_crtc_helper.c
> > > @@ -984,3 +984,38 @@ void drm_helper_resume_force_mode(struct drm_device 
> > > *dev)
> > > drm_modeset_unlock_all(dev);
> > >  }
> > >  EXPORT_SYMBOL(drm_helper_resume_force_mode);
> > > +
> > > +/**
> > > + * drm_helper_force_disable_all - Forcibly turn off all enabled CRTCs
> > > + * @dev: DRM device whose CRTCs to turn off
> > > + *
> > > + * Drivers may want to call this on unload to ensure that all displays 
> > > are
> > > + * unlit and the GPU is in a consistent, low power state. Takes modeset 
> > > locks.
> > > + *
> > > + * Note: This should only be used by non-atomic legacy drivers. For an 
> > > atomic
> > > + * version look at drm_atomic_helper_shutdown().
> > > + *
> > > + * Returns:
> > > + * Zero on success, error code on failure.
> > > + */
> > > +int drm_helper_force_disable_all(struct drm_device *dev)
> >
> > Maybe put crtc somewhere in the function name so it's clear what we
> > are disabling.
>
> FWIW, I think it's more clear this way. set_config_internal will turn off
> everything attached to the crtc, so _everything_ will be disabled in this 
> case.

I'm not pressed.  RB either way for me as well.

Alex

>
> Either way,
>
> Reviewed-by: Sean Paul 
>
> Sean
>
> > With that fixed:
> > Reviewed-by: Alex Deucher 
> >
> > > +{
> > > +   struct drm_crtc *crtc;
> > > +   int ret = 0;
> > > +
> > > +   drm_modeset_lock_all(dev);
> > > +   drm_for_each_crtc(crtc, dev)
> > > +   if (crtc->enabled) {
> > > +   struct drm_mode_set set = {
> > > +   .crtc = crtc,
> > > +   };
> > > +
> > > +   ret = drm_mode_set_config_internal(&set);
> > > +   if (ret)
> > > +   goto out;
> > > +   }
> > > +out:
> > > +   drm_modeset_unlock_all(dev);
> > > +   return ret;
> > > +}
> > > +EXPORT_SYMBOL(drm_helper_force_disable_all);
> > > diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c 
> > > b/drivers/gpu/drm/nouveau/nouveau_display.c
> > > index f326ffd86766..5d273a655479 100644
> > > --- a/drivers/gpu/drm/nouveau/nouveau_display.c
> > > +++ b/drivers/gpu/drm/nouveau/nouveau_display.c
> > > @@ -453,7 +453,7 @@ nouveau_display_fini(struct drm_device *dev, bool 
> > > suspend, bool runtime)
> > > if (drm_drv_uses_atomic_modeset(dev))
> > >   
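
A side note on what the open-coded loop above actually does: a struct drm_mode_set that names only the CRTC — no fb, no mode, no connectors — is the legacy request for "turn this CRTC off". A minimal sketch of disabling a single CRTC that way (hypothetical helper; the modeset locks are assumed to be held already, as in the loop above):

#include <drm/drm_crtc.h>

/* Sketch: switch one CRTC off via the legacy SetCrtc path.
 * Caller must already hold the modeset locks.
 */
static int foo_disable_one_crtc(struct drm_crtc *crtc)
{
    struct drm_mode_set set = {
        .crtc = crtc,
        /* .fb, .mode and .connectors left zeroed: disable */
    };

    return drm_mode_set_config_internal(&set);
}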

Re: [Intel-gfx] [PATCH i-g-t] igt/amdgpu_amd_prime: Bail if we fail to create more contexts

2018-12-13 Thread Alex Deucher
On Thu, Dec 13, 2018 at 6:57 AM Chris Wilson  wrote:
>
> amdgpu has started to report out of space after creating a few contexts.
> This is not the scope of this test as here we just verifying that fences
> created in amd can be imported and used for synchronisation by i915 and
> for that we just need at least one context created!
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=109049
> Signed-off-by: Chris Wilson 

Acked-by: Alex Deucher 

> ---
>  tests/amdgpu/amd_prime.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/tests/amdgpu/amd_prime.c b/tests/amdgpu/amd_prime.c
> index bda0ce83d..518c88963 100644
> --- a/tests/amdgpu/amd_prime.c
> +++ b/tests/amdgpu/amd_prime.c
> @@ -354,8 +354,8 @@ static void amd_to_i915(int i915, int amd, 
> amdgpu_device_handle device)
> contexts = realloc(contexts, size * 
> sizeof(*contexts));
> }
>
> -   r = amdgpu_cs_ctx_create(device, &contexts[count]);
> -   igt_assert_eq(r, 0);
> +   if (amdgpu_cs_ctx_create(device, &contexts[count]))
> +   break;
>
> r = amdgpu_cs_submit(contexts[count], 0, &ibs_request, 1);
> igt_assert_eq(r, 0);
> @@ -364,6 +364,7 @@ static void amd_to_i915(int i915, int amd, 
> amdgpu_device_handle device)
> }
>
> igt_info("Reservation width = %ld\n", count);
> +   igt_require(count);
>
> amdgpu_bo_export(ib_result_handle,
>  amdgpu_bo_handle_type_dma_buf_fd,
> --
> 2.20.0
>
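
The interesting part of the change above is the idiom it switches to: keep allocating until the driver refuses, then skip the test (rather than fail it) if not even one allocation succeeded. A rough sketch of that idiom, simplified from the test and with error handling trimmed:

#include <stdlib.h>
#include <amdgpu.h>
#include "igt.h"

static void create_contexts_until_full(amdgpu_device_handle device)
{
    amdgpu_context_handle *contexts = NULL;
    unsigned long count = 0, size = 0;

    while (1) {
        if (count == size) {
            size = size ? 2 * size : 64;
            contexts = realloc(contexts, size * sizeof(*contexts));
            igt_assert(contexts);
        }

        /* Running out of space is expected eventually; stop gracefully. */
        if (amdgpu_cs_ctx_create(device, &contexts[count]))
            break;

        count++;
    }

    igt_info("Created %lu contexts\n", count);
    igt_require(count); /* skip, don't fail, if we got none at all */

    while (count--)
        amdgpu_cs_ctx_free(contexts[count]);
    free(contexts);
}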


Re: [Intel-gfx] [PATCH v2 8/9] drm/amdgpu: use drm_debug_enabled() to check for debug categories

2019-09-24 Thread Alex Deucher
On Tue, Sep 24, 2019 at 9:00 AM Jani Nikula  wrote:
>
> Allow better abstraction of the drm_debug global variable in the
> future. No functional changes.
>
> Cc: Alex Deucher 
> Cc: Christian König 
> Cc: David (ChunMing) Zhou 
> Cc: amd-...@lists.freedesktop.org
> Signed-off-by: Jani Nikula 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.c 
> b/drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.c
> index 4a5951036927..5f17bd4899e2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.c
> +++ b/drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.c
> @@ -234,7 +234,7 @@ static uint32_t smu_v11_0_i2c_transmit(struct i2c_adapter 
> *control,
> DRM_DEBUG_DRIVER("I2C_Transmit(), address = %x, bytes = %d , data: ",
>  (uint16_t)address, numbytes);
>
> -   if (drm_debug & DRM_UT_DRIVER) {
> +   if (drm_debug_enabled(DRM_UT_DRIVER)) {
> print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
>16, 1, data, numbytes, false);
> }
> @@ -388,7 +388,7 @@ static uint32_t smu_v11_0_i2c_receive(struct i2c_adapter 
> *control,
> DRM_DEBUG_DRIVER("I2C_Receive(), address = %x, bytes = %d, data :",
>   (uint16_t)address, bytes_received);
>
> -   if (drm_debug & DRM_UT_DRIVER) {
> +   if (drm_debug_enabled(DRM_UT_DRIVER)) {
> print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
>16, 1, data, bytes_received, false);
> }
> --
> 2.20.1
>

Re: [Intel-gfx] [PATCH v2 1/9] drm/print: move drm_debug variable to drm_print.[ch]

2019-09-24 Thread Alex Deucher
On Tue, Sep 24, 2019 at 8:59 AM Jani Nikula  wrote:
>
> Move drm_debug variable declaration and definition to where they are
> relevant and needed. No functional changes.
>
> Signed-off-by: Jani Nikula 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_drv.c   | 17 -
>  drivers/gpu/drm/drm_print.c | 19 +++
>  include/drm/drm_drv.h   |  2 --
>  include/drm/drm_print.h |  2 ++
>  4 files changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index 769feeff..1b9b40a1c7c9 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -46,26 +46,9 @@
>  #include "drm_internal.h"
>  #include "drm_legacy.h"
>
> -/*
> - * drm_debug: Enable debug output.
> - * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
> - */
> -unsigned int drm_debug = 0;
> -EXPORT_SYMBOL(drm_debug);
> -
>  MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
>  MODULE_DESCRIPTION("DRM shared core routines");
>  MODULE_LICENSE("GPL and additional rights");
> -MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug 
> category.\n"
> -"\t\tBit 0 (0x01)  will enable CORE messages (drm core code)\n"
> -"\t\tBit 1 (0x02)  will enable DRIVER messages (drm controller code)\n"
> -"\t\tBit 2 (0x04)  will enable KMS messages (modesetting code)\n"
> -"\t\tBit 3 (0x08)  will enable PRIME messages (prime code)\n"
> -"\t\tBit 4 (0x10)  will enable ATOMIC messages (atomic code)\n"
> -"\t\tBit 5 (0x20)  will enable VBL messages (vblank code)\n"
> -"\t\tBit 7 (0x80)  will enable LEASE messages (leasing code)\n"
> -"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
> -module_param_named(debug, drm_debug, int, 0600);
>
>  static DEFINE_SPINLOCK(drm_minor_lock);
>  static struct idr drm_minors_idr;
> diff --git a/drivers/gpu/drm/drm_print.c b/drivers/gpu/drm/drm_print.c
> index dfa27367ebb8..c9b57012d412 100644
> --- a/drivers/gpu/drm/drm_print.c
> +++ b/drivers/gpu/drm/drm_print.c
> @@ -28,6 +28,7 @@
>  #include 
>
>  #include 
> +#include 
>  #include 
>  #include 
>
> @@ -35,6 +36,24 @@
>  #include 
>  #include 
>
> +/*
> + * drm_debug: Enable debug output.
> + * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
> + */
> +unsigned int drm_debug;
> +EXPORT_SYMBOL(drm_debug);
> +
> +MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug 
> category.\n"
> +"\t\tBit 0 (0x01)  will enable CORE messages (drm core code)\n"
> +"\t\tBit 1 (0x02)  will enable DRIVER messages (drm controller code)\n"
> +"\t\tBit 2 (0x04)  will enable KMS messages (modesetting code)\n"
> +"\t\tBit 3 (0x08)  will enable PRIME messages (prime code)\n"
> +"\t\tBit 4 (0x10)  will enable ATOMIC messages (atomic code)\n"
> +"\t\tBit 5 (0x20)  will enable VBL messages (vblank code)\n"
> +"\t\tBit 7 (0x80)  will enable LEASE messages (leasing code)\n"
> +"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
> +module_param_named(debug, drm_debug, int, 0600);
> +
>  void __drm_puts_coredump(struct drm_printer *p, const char *str)
>  {
> struct drm_print_iterator *iterator = p->arg;
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 8976afe48c1c..cf13470810a5 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -778,8 +778,6 @@ struct drm_driver {
> int dev_priv_size;
>  };
>
> -extern unsigned int drm_debug;
> -
>  int drm_dev_init(struct drm_device *dev,
>  struct drm_driver *driver,
>  struct device *parent);
> diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
> index 12d4916254b4..e5c421abce48 100644
> --- a/include/drm/drm_print.h
> +++ b/include/drm/drm_print.h
> @@ -34,6 +34,8 @@
>
>  #include 
>
> +extern unsigned int drm_debug;
> +
>  /**
>   * DOC: print
>   *
> --
> 2.20.1
>

Re: [Intel-gfx] [PATCH v2 2/9] drm/print: add drm_debug_enabled()

2019-09-24 Thread Alex Deucher
On Tue, Sep 24, 2019 at 8:59 AM Jani Nikula  wrote:
>
> Add helper to check if a drm debug category is enabled. Convert drm core
> to use it. No functional changes.
>
> v2: Move unlikely() to drm_debug_enabled() (Eric)
>
> Signed-off-by: Jani Nikula 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_atomic_uapi.c | 2 +-
>  drivers/gpu/drm/drm_dp_mst_topology.c | 6 +++---
>  drivers/gpu/drm/drm_edid.c| 2 +-
>  drivers/gpu/drm/drm_edid_load.c   | 2 +-
>  drivers/gpu/drm/drm_mipi_dbi.c| 4 ++--
>  drivers/gpu/drm/drm_print.c   | 4 ++--
>  drivers/gpu/drm/drm_vblank.c  | 6 +++---
>  include/drm/drm_print.h   | 5 +
>  8 files changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
> b/drivers/gpu/drm/drm_atomic_uapi.c
> index 7a26bfb5329c..0d466d3b0809 100644
> --- a/drivers/gpu/drm/drm_atomic_uapi.c
> +++ b/drivers/gpu/drm/drm_atomic_uapi.c
> @@ -1405,7 +1405,7 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
> } else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
> ret = drm_atomic_nonblocking_commit(state);
> } else {
> -   if (unlikely(drm_debug & DRM_UT_STATE))
> +   if (drm_debug_enabled(DRM_UT_STATE))
> drm_atomic_print_state(state);
>
> ret = drm_atomic_commit(state);
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 97216099a718..5b41dc167816 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1180,7 +1180,7 @@ static int drm_dp_mst_wait_tx_reply(struct 
> drm_dp_mst_branch *mstb,
> }
> }
>  out:
> -   if (unlikely(ret == -EIO && drm_debug & DRM_UT_DP)) {
> +   if (ret == -EIO && drm_debug_enabled(DRM_UT_DP)) {
> struct drm_printer p = drm_debug_printer(DBG_PREFIX);
>
> drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
> @@ -2321,7 +2321,7 @@ static int process_single_tx_qlock(struct 
> drm_dp_mst_topology_mgr *mgr,
> idx += tosend + 1;
>
> ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx);
> -   if (unlikely(ret && drm_debug & DRM_UT_DP)) {
> +   if (ret && drm_debug_enabled(DRM_UT_DP)) {
> struct drm_printer p = drm_debug_printer(DBG_PREFIX);
>
> drm_printf(&p, "sideband msg failed to send\n");
> @@ -2388,7 +2388,7 @@ static void drm_dp_queue_down_tx(struct 
> drm_dp_mst_topology_mgr *mgr,
> mutex_lock(&mgr->qlock);
> list_add_tail(&txmsg->next, &mgr->tx_msg_downq);
>
> -   if (unlikely(drm_debug & DRM_UT_DP)) {
> +   if (drm_debug_enabled(DRM_UT_DP)) {
> struct drm_printer p = drm_debug_printer(DBG_PREFIX);
>
> drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 3c9703b08491..0552175313cb 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -1651,7 +1651,7 @@ static void connector_bad_edid(struct drm_connector 
> *connector,
>  {
> int i;
>
> -   if (connector->bad_edid_counter++ && !(drm_debug & DRM_UT_KMS))
> +   if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
> return;
>
> dev_warn(connector->dev->dev,
> diff --git a/drivers/gpu/drm/drm_edid_load.c b/drivers/gpu/drm/drm_edid_load.c
> index d38b3b255926..37d8ba3ddb46 100644
> --- a/drivers/gpu/drm/drm_edid_load.c
> +++ b/drivers/gpu/drm/drm_edid_load.c
> @@ -175,7 +175,7 @@ static void *edid_load(struct drm_connector *connector, 
> const char *name,
> u8 *edid;
> int fwsize, builtin;
> int i, valid_extensions = 0;
> -   bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & 
> DRM_UT_KMS);
> +   bool print_bad_edid = !connector->bad_edid_counter || 
> drm_debug_enabled(DRM_UT_KMS);
>
> builtin = match_string(generic_edid_name, GENERIC_EDIDS, name);
> if (builtin >= 0) {
> diff --git a/drivers/gpu/drm/drm_mipi_dbi.c b/drivers/gpu/drm/drm_mipi_dbi.c
> index f8154316a3b0..ccfb5b33c5e3 100644
> --- a/drivers/gpu/drm/drm_mipi_dbi.c
> +++ b/drivers/gpu/drm/drm_mipi_dbi.c
> @@ -783,7 +783,7 @@ static int mipi_dbi_spi1e_transfer(struct mipi_dbi *dbi, 
> int dc,
> int i, ret;
> u8 *dst;
>
> -   if (drm_debug & DRM_UT_DRIVER)
> +   if (drm_debug_enabl

Re: [Intel-gfx] [PATCH v2 9/9] drm/print: rename drm_debug to __drm_debug to discourage use

2019-09-24 Thread Alex Deucher
On Tue, Sep 24, 2019 at 9:00 AM Jani Nikula  wrote:
>
> drm_debug_enabled() is the way to check. __drm_debug is now reserved for
> drm print code only. No functional changes.
>
> v2: Rebase on move unlikely() to drm_debug_enabled()
>
> Signed-off-by: Jani Nikula 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_print.c | 8 
>  include/drm/drm_print.h | 5 +++--
>  2 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_print.c b/drivers/gpu/drm/drm_print.c
> index a7c89ec5ff26..ca3c56b026f0 100644
> --- a/drivers/gpu/drm/drm_print.c
> +++ b/drivers/gpu/drm/drm_print.c
> @@ -37,11 +37,11 @@
>  #include 
>
>  /*
> - * drm_debug: Enable debug output.
> + * __drm_debug: Enable debug output.
>   * Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
>   */
> -unsigned int drm_debug;
> -EXPORT_SYMBOL(drm_debug);
> +unsigned int __drm_debug;
> +EXPORT_SYMBOL(__drm_debug);
>
>  MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug 
> category.\n"
>  "\t\tBit 0 (0x01)  will enable CORE messages (drm core code)\n"
> @@ -52,7 +52,7 @@ MODULE_PARM_DESC(debug, "Enable debug output, where each 
> bit enables a debug cat
>  "\t\tBit 5 (0x20)  will enable VBL messages (vblank code)\n"
>  "\t\tBit 7 (0x80)  will enable LEASE messages (leasing code)\n"
>  "\t\tBit 8 (0x100) will enable DP messages (displayport code)");
> -module_param_named(debug, drm_debug, int, 0600);
> +module_param_named(debug, __drm_debug, int, 0600);
>
>  void __drm_puts_coredump(struct drm_printer *p, const char *str)
>  {
> diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
> index 4618e90cd124..cde54900d593 100644
> --- a/include/drm/drm_print.h
> +++ b/include/drm/drm_print.h
> @@ -34,7 +34,8 @@
>
>  #include 
>
> -extern unsigned int drm_debug;
> +/* Do *not* use outside of drm_print.[ch]! */
> +extern unsigned int __drm_debug;
>
>  /**
>   * DOC: print
> @@ -296,7 +297,7 @@ static inline struct drm_printer drm_err_printer(const 
> char *prefix)
>
>  static inline bool drm_debug_enabled(unsigned int category)
>  {
> -   return unlikely(drm_debug & category);
> +   return unlikely(__drm_debug & category);
>  }
>
>  __printf(3, 4)
> --
> 2.20.1
>
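
Taken together with patches 1 and 2 earlier in the series, this leaves drivers with exactly one sanctioned way to test for a debug category: drm_debug_enabled(). A small sketch of the intended call-site pattern (hypothetical driver function, for illustration only; __drm_debug itself must not be referenced outside drm_print.[ch]):

#include <linux/types.h>
#include <linux/printk.h>
#include <drm/drm_print.h>

static void foo_dump_payload(const u8 *data, size_t len)
{
    /* Cheap check first: only pay for the hex dump when the DRIVER
     * debug category (bit 1, 0x02 of the drm.debug module parameter)
     * is enabled.
     */
    if (drm_debug_enabled(DRM_UT_DRIVER))
        print_hex_dump(KERN_DEBUG, "payload: ", DUMP_PREFIX_OFFSET,
                       16, 1, data, len, false);
}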

Re: [Intel-gfx] [PATCH] drm/dp: Set the connector's TILE property even for DP SST connectors

2019-03-13 Thread Alex Deucher
On Tue, Mar 12, 2019 at 10:15 PM Manasi Navare
 wrote:
>
> Current driver sets the tile property only for DP MST connectors.
> However there are some tiled displays where each SST connector
> carries a single tile. So we need to attach this property object
> for every connector and set it for every connector (DP SST and MST).
> Plus since the tile information is obtained as a result of EDID
> parsing, the best place to update tile property is where we update
> edid property.
> Also now we dont need to explicitly set this now for MST connectors.
>
> This has been tested with xrandr --props and modetest and verified
> that TILE property is exposed correctly.
>
> Cc: Dave Airlie 
> Cc: Jani Nikula 
> Cc: Daniel Vetter 
> Cc: Ville Syrjälä 
> Signed-off-by: Manasi Navare 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_connector.c   | 13 -
>  drivers/gpu/drm/drm_dp_mst_topology.c |  1 -
>  2 files changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
> index 07d65a16c623..2355124849db 100644
> --- a/drivers/gpu/drm/drm_connector.c
> +++ b/drivers/gpu/drm/drm_connector.c
> @@ -245,6 +245,7 @@ int drm_connector_init(struct drm_device *dev,
> INIT_LIST_HEAD(&connector->modes);
> mutex_init(&connector->mutex);
> connector->edid_blob_ptr = NULL;
> +   connector->tile_blob_ptr = NULL;
> connector->status = connector_status_unknown;
> connector->display_info.panel_orientation =
> DRM_MODE_PANEL_ORIENTATION_UNKNOWN;
> @@ -272,6 +273,9 @@ int drm_connector_init(struct drm_device *dev,
> drm_object_attach_property(&connector->base,
>config->non_desktop_property,
>0);
> +   drm_object_attach_property(&connector->base,
> +  config->tile_property,
> +  0);
>
> if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
> drm_object_attach_property(&connector->base, 
> config->prop_crtc_id, 0);
> @@ -1712,6 +1716,8 @@ EXPORT_SYMBOL(drm_connector_set_path_property);
>   * This looks up the tile information for a connector, and creates a
>   * property for userspace to parse if it exists. The property is of
>   * the form of 8 integers using ':' as a separator.
> + * This is used for dual port tiled displays with DisplayPort SST
> + * or DisplayPort MST connectors.
>   *
>   * Returns:
>   * Zero on success, errno on failure.
> @@ -1755,6 +1761,9 @@ EXPORT_SYMBOL(drm_connector_set_tile_property);
>   *
>   * This function creates a new blob modeset object and assigns its id to the
>   * connector's edid property.
> + * Since we also parse tile information from EDID's displayID block, we also
> + * set the connector's tile property here. See 
> drm_connector_set_tile_property()
> + * for more details.
>   *
>   * Returns:
>   * Zero on success, negative errno on failure.
> @@ -1796,7 +1805,9 @@ int drm_connector_update_edid_property(struct 
> drm_connector *connector,
>edid,
>&connector->base,
>
> dev->mode_config.edid_property);
> -   return ret;
> +   if (ret)
> +   return ret;
> +   return drm_connector_set_tile_property(connector);
>  }
>  EXPORT_SYMBOL(drm_connector_update_edid_property);
>
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index dc7ac0c60547..c630ed157994 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -3022,7 +3022,6 @@ struct edid *drm_dp_mst_get_edid(struct drm_connector 
> *connector, struct drm_dp_
> edid = drm_edid_duplicate(port->cached_edid);
> else {
> edid = drm_get_edid(connector, &port->aux.ddc);
> -   drm_connector_set_tile_property(connector);
> }
> port->has_audio = drm_detect_monitor_audio(edid);
> drm_dp_mst_topology_put_port(port);
> --
> 2.19.1
>
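
For anyone wiring up userspace: the TILE property is a blob holding eight ':'-separated integers. The field order in the sketch below is my reading of drm_connector_set_tile_property() — group id, single-monitor flag, horizontal and vertical tile counts, this tile's horizontal/vertical location, and the tile size in pixels — treat it as an assumption to double-check against the kernel source rather than a guarantee:

#include <stdio.h>

/* Assumed field order of the TILE blob "a:b:c:d:e:f:g:h". */
struct tile_info {
    int group_id, single_monitor;
    int num_h, num_v;        /* tiles across / down       */
    int h_loc, v_loc;        /* this connector's position */
    int h_size, v_size;      /* tile size in pixels       */
};

static int parse_tile_blob(const char *blob, struct tile_info *t)
{
    if (sscanf(blob, "%d:%d:%d:%d:%d:%d:%d:%d",
               &t->group_id, &t->single_monitor,
               &t->num_h, &t->num_v,
               &t->h_loc, &t->v_loc,
               &t->h_size, &t->v_size) != 8)
        return -1;
    return 0;
}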

Re: [Intel-gfx] [PATCH 21/21] drm/fb-helper: Unexport fill_{var, info}

2019-03-26 Thread Alex Deucher
On Tue, Mar 26, 2019 at 9:21 AM Daniel Vetter  wrote:
>
> Not used by drivers anymore.
>
> v2: Rebase
>
> Signed-off-by: Daniel Vetter 

Other than the spelling typos noted by Noralf, the series is:
Reviewed-by: Alex Deucher 


> ---
>  drivers/gpu/drm/drm_fb_helper.c | 38 +
>  include/drm/drm_fb_helper.h |  4 
>  2 files changed, 5 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 42423ca28991..6ae8d5fa142c 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -2037,21 +2037,8 @@ static int drm_fb_helper_single_fb_probe(struct 
> drm_fb_helper *fb_helper,
> return 0;
>  }
>
> -/**
> - * drm_fb_helper_fill_fix - initializes fixed fbdev information
> - * @info: fbdev registered by the helper
> - * @pitch: desired pitch
> - * @depth: desired depth
> - *
> - * Helper to fill in the fixed fbdev information useful for a non-accelerated
> - * fbdev emulations. Drivers which support acceleration methods which impose
> - * additional constraints need to set up their own limits.
> - *
> - * Drivers should call this (or their equivalent setup code) from their
> - * &drm_fb_helper_funcs.fb_probe callback.
> - */
> -void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
> -   uint32_t depth)
> +static void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
> +  uint32_t depth)
>  {
> info->fix.type = FB_TYPE_PACKED_PIXELS;
> info->fix.visual = depth == 8 ? FB_VISUAL_PSEUDOCOLOR :
> @@ -2066,24 +2053,10 @@ void drm_fb_helper_fill_fix(struct fb_info *info, 
> uint32_t pitch,
>
> info->fix.line_length = pitch;
>  }
> -EXPORT_SYMBOL(drm_fb_helper_fill_fix);
>
> -/**
> - * drm_fb_helper_fill_var - initalizes variable fbdev information
> - * @info: fbdev instance to set up
> - * @fb_helper: fb helper instance to use as template
> - * @fb_width: desired fb width
> - * @fb_height: desired fb height
> - *
> - * Sets up the variable fbdev metainformation from the given fb helper 
> instance
> - * and the drm framebuffer allocated in &drm_fb_helper.fb.
> - *
> - * Drivers should call this (or their equivalent setup code) from their
> - * &drm_fb_helper_funcs.fb_probe callback after having allocated the fbdev
> - * backing storage framebuffer.
> - */
> -void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper 
> *fb_helper,
> -   uint32_t fb_width, uint32_t fb_height)
> +static void drm_fb_helper_fill_var(struct fb_info *info,
> +  struct drm_fb_helper *fb_helper,
> +  uint32_t fb_width, uint32_t fb_height)
>  {
> struct drm_framebuffer *fb = fb_helper->fb;
>
> @@ -2103,7 +2076,6 @@ void drm_fb_helper_fill_var(struct fb_info *info, 
> struct drm_fb_helper *fb_helpe
> info->var.xres = fb_width;
> info->var.yres = fb_height;
>  }
> -EXPORT_SYMBOL(drm_fb_helper_fill_var);
>
>  /**
>   * drm_fb_helper_fill_info - initializes fbdev information
> diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
> index 9ef72f20662d..9ba9db5dc34d 100644
> --- a/include/drm/drm_fb_helper.h
> +++ b/include/drm/drm_fb_helper.h
> @@ -289,10 +289,6 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct 
> drm_fb_helper *fb_helper);
>
>  struct fb_info *drm_fb_helper_alloc_fbi(struct drm_fb_helper *fb_helper);
>  void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper);
> -void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper 
> *fb_helper,
> -   uint32_t fb_width, uint32_t fb_height);
> -void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
> -   uint32_t depth);
>  void drm_fb_helper_fill_info(struct fb_info *info,
>  struct drm_fb_helper *fb_helper,
>  struct drm_fb_helper_surface_size *sizes);
> --
> 2.20.1
>
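
With fill_fix()/fill_var() gone, drm_fb_helper_fill_info() is the single remaining setup entry point for drivers. A rough sketch of a &drm_fb_helper_funcs.fb_probe implementation using it (the framebuffer allocation and fb_ops are driver specific and only stubbed out here):

#include <linux/err.h>
#include <linux/fb.h>
#include <drm/drm_fb_helper.h>

static struct fb_ops foo_fb_ops;    /* driver's fbdev ops, elided */

static int foo_fb_probe(struct drm_fb_helper *helper,
                        struct drm_fb_helper_surface_size *sizes)
{
    struct fb_info *info;

    /* Driver specific: allocate/pin a backing buffer and point
     * helper->fb at the resulting drm_framebuffer (omitted).
     */

    info = drm_fb_helper_alloc_fbi(helper);
    if (IS_ERR(info))
        return PTR_ERR(info);

    info->fbops = &foo_fb_ops;

    /* Fills both the fixed and variable fbdev metadata from
     * helper->fb and the requested surface size; this replaces
     * the old fill_fix() + fill_var() pair.
     */
    drm_fb_helper_fill_info(info, helper, sizes);

    /* Driver specific: info->screen_base, info->screen_size, ... */
    return 0;
}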

Re: [Intel-gfx] [PATCH 0/5] drm: Aspect ratio fixes

2019-06-21 Thread Alex Deucher
On Thu, Jun 20, 2019 at 10:26 AM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Ilia pointed out some oddball crap in the i915 aspect ratio handling.
> While looking at that I noticed a bunch of fail in the core as well.
> This series aims to fix it all.
>
> Cc: Ilia Mirkin 
>
> Ville Syrjälä (5):
>   drm: Do not use bitwise OR to set picure_aspect_ratio
>   drm: Do not accept garbage mode aspect ratio flags
>   drm: WARN on illegal aspect ratio when converting a mode to umode

Patches 1-3:
Reviewed-by: Alex Deucher 

>   drm/i915: Do not override mode's aspect ratio with the prop value NONE
>   drm/i915: Drop redundant aspect ratio prop value initialization
>
>  drivers/gpu/drm/drm_modes.c   | 17 +++--
>  drivers/gpu/drm/i915/display/intel_hdmi.c |  5 ++---
>  drivers/gpu/drm/i915/display/intel_sdvo.c |  4 +---
>  3 files changed, 14 insertions(+), 12 deletions(-)
>
> --
> 2.21.0
>

Re: [Intel-gfx] linux-next: Tree for Jun 15 (drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c)

2022-06-15 Thread Alex Deucher
Pushed to drm-misc-next.

Alex

On Wed, Jun 15, 2022 at 7:26 PM Stephen Rothwell  wrote:
>
> Hi all,
>
> On Wed, 15 Jun 2022 13:52:34 -0700 Nathan Chancellor  
> wrote:
> >
> > On Wed, Jun 15, 2022 at 04:45:16PM -0400, Alex Deucher wrote:
> > > On Wed, Jun 15, 2022 at 4:24 PM Nathan Chancellor  
> > > wrote:
> > > >
> > > > On Wed, Jun 15, 2022 at 03:28:52PM -0400, Alex Deucher wrote:
> > > > > On Wed, Jun 15, 2022 at 3:01 PM Randy Dunlap  
> > > > > wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 6/14/22 23:01, Stephen Rothwell wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Changes since 20220614:
> > > > > > >
> > > > > >
> > > > > > on i386:
> > > > > > # CONFIG_DEBUG_FS is not set
> > > > > >
> > > > > >
> > > > > > ../drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c: In 
> > > > > > function ‘amdgpu_dm_crtc_late_register’:
> > > > > > ../drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:6599:2:
> > > > > >  error: implicit declaration of function ‘crtc_debugfs_init’; did 
> > > > > > you mean ‘amdgpu_debugfs_init’? 
> > > > > > [-Werror=implicit-function-declaration]
> > > > > >   crtc_debugfs_init(crtc);
> > > > > >   ^
> > > > > >   amdgpu_debugfs_init
> > > > > >
> > > > > >
> > > > > > Full randconfig file is attached.
> > > > >
> > > > > I tried building with your config and I can't repro this.  As Harry
> > > > > noted, that function and the whole secure display feature depend on
> > > > > debugfs.  It should never be built without CONFIG_DEBUG_FS.  See
> > > > > drivers/gpu/drm/amd/display/Kconfig:
> > > > >
> > > > > > config DRM_AMD_SECURE_DISPLAY
> > > > > > bool "Enable secure display support"
> > > > > > default n
> > > > > > depends on DEBUG_FS
> > > > > > depends on DRM_AMD_DC_DCN
> > > > > > help
> > > > > > Choose this option if you want to
> > > > > > support secure display
> > > > > >
> > > > > > This option enables the calculation
> > > > > > of crc of specific region via debugfs.
> > > > > > Cooperate with specific DMCU FW.
> > > > >
> > > > > amdgpu_dm_crtc_late_register is guarded by
> > > > > CONIG_DRM_AMD_SECURE_DISPLAY.  It's not clear to me how we could hit
> > > > > this.
> > > >
> > > > I think the problem is that you are not looking at the right tree.
> > > >
> > > > The kernel test robot reported [1] [2] this error is caused by commit
> > > > 4cd79f614b50 ("drm/amd/display: Move connector debugfs to drm"), which
> > > > is in the drm-misc tree on the drm-misc-next branch. That change removes
> > > > the #ifdef around amdgpu_dm_crtc_late_register(), meaning that
> > > > crtc_debugfs_init() can be called without CONFIG_DRM_AMD_SECURE_DISPLAY
> > > > and CONFIG_DEBUG_FS.
> > > >
> > > >   $ git show -s --format='%h ("%s")'
> > > >   abf0ba5a34ea ("drm/bridge: it6505: Add missing CRYPTO_HASH 
> > > > dependency")
> > > >
> > > >   $ make -skj"$(nproc)" ARCH=x86_64 mrproper defconfig
> > > >
> > > >   $ scripts/config -d BLK_DEV_IO_TRACE -d DEBUG_FS -e DRM_AMDGPU
> > > >
> > > >   $ make -skj"$(nproc)" ARCH=x86_64 olddefconfig 
> > > > drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.o
> > > >   drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c: In 
> > > > function ‘amdgpu_dm_crtc_late_register’:
> > > >   drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:6622:9: 
> > > > error: implicit declaration of function ‘crtc_debugfs_init’; did you 
> > > > mean ‘amdgpu_debugfs_init’? [-Werror=implicit-function-declaration]
> > > >6622 | crtc_debugfs_init(crtc);
> &
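
The underlying pattern here is a call site that lost its #ifdef while the callee stayed conditional on CONFIG_DRM_AMD_SECURE_DISPLAY (which itself depends on DEBUG_FS). A hedged sketch of the usual way to keep such callers buildable — a no-op stub when the option is off — purely for illustration, not the fix that was actually applied:

#include <drm/drm_crtc.h>

#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
void crtc_debugfs_init(struct drm_crtc *crtc);
#else
static inline void crtc_debugfs_init(struct drm_crtc *crtc) {}
#endif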

Re: [Intel-gfx] [PATCH] dma-buf: revert "return only unsignaled fences in dma_fence_unwrap_for_each v3"

2022-07-12 Thread Alex Deucher
On Tue, Jul 12, 2022 at 8:06 AM Christian König
 wrote:
>
> Hi Karolina,
>
> Am 12.07.22 um 14:04 schrieb Karolina Drobnik:
> > Hi Christian,
> >
> > On 12.07.2022 12:28, Christian König wrote:
> >> This reverts commit 8f61973718485f3e89bc4f408f929048b7b47c83.
> >>
> >> It turned out that this is not correct. Especially the sync_file info
> >> IOCTL needs to see even signaled fences to correctly report back their
> >> status to userspace.
> >>
> >> Instead add the filter in the merge function again where it makes sense.
> >>
> >> Signed-off-by: Christian König 
> >
> > After applying the patch, fence merging works and all sw_sync subtests
> > are passing. Thanks for taking care of this.
> >
> > Tested-by: Karolina Drobnik 
>
> can anybody give me an rb or at least an Acked-by as well so that I can
> push this upstream?

Patch makes sense.

Reviewed-by: Alex Deucher 

>
> Thanks,
> Christian.
>
> >
> >> ---
> >>   drivers/dma-buf/dma-fence-unwrap.c | 3 ++-
> >>   include/linux/dma-fence-unwrap.h   | 6 +-
> >>   2 files changed, 3 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/dma-buf/dma-fence-unwrap.c
> >> b/drivers/dma-buf/dma-fence-unwrap.c
> >> index 502a65ea6d44..7002bca792ff 100644
> >> --- a/drivers/dma-buf/dma-fence-unwrap.c
> >> +++ b/drivers/dma-buf/dma-fence-unwrap.c
> >> @@ -72,7 +72,8 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned
> >> int num_fences,
> >>   count = 0;
> >>   for (i = 0; i < num_fences; ++i) {
> >>   dma_fence_unwrap_for_each(tmp, &iter[i], fences[i])
> >> -++count;
> >> +if (!dma_fence_is_signaled(tmp))
> >> +++count;
> >>   }
> >> if (count == 0)
> >> diff --git a/include/linux/dma-fence-unwrap.h
> >> b/include/linux/dma-fence-unwrap.h
> >> index 390de1ee9d35..66b1e56fbb81 100644
> >> --- a/include/linux/dma-fence-unwrap.h
> >> +++ b/include/linux/dma-fence-unwrap.h
> >> @@ -43,14 +43,10 @@ struct dma_fence *dma_fence_unwrap_next(struct
> >> dma_fence_unwrap *cursor);
> >>* Unwrap dma_fence_chain and dma_fence_array containers and deep
> >> dive into all
> >>* potential fences in them. If @head is just a normal fence only
> >> that one is
> >>* returned.
> >> - *
> >> - * Note that signalled fences are opportunistically filtered out, which
> >> - * means the iteration is potentially over no fence at all.
> >>*/
> >>   #define dma_fence_unwrap_for_each(fence, cursor, head)\
> >>   for (fence = dma_fence_unwrap_first(head, cursor); fence;\
> >> - fence = dma_fence_unwrap_next(cursor))\
> >> -if (!dma_fence_is_signaled(fence))
> >> + fence = dma_fence_unwrap_next(cursor))
> >> struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
> >>  struct dma_fence **fences,
>
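
For readers following along: after this revert the iterator visits every leaf fence again, signaled or not, and filtering is the caller's job. A small sketch of the resulting caller-side pattern (hypothetical function, mirroring what the merge path above now does):

#include <linux/dma-fence.h>
#include <linux/dma-fence-unwrap.h>

/* Count the leaf fences inside a (possibly nested) container that are
 * still unsignaled. Callers that need to see *all* fences, such as the
 * sync_file info ioctl, simply drop the dma_fence_is_signaled() check.
 */
static unsigned int foo_count_pending(struct dma_fence *head)
{
    struct dma_fence_unwrap iter;
    struct dma_fence *f;
    unsigned int count = 0;

    dma_fence_unwrap_for_each(f, &iter, head)
        if (!dma_fence_is_signaled(f))
            ++count;

    return count;
}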


Re: [Intel-gfx] [PATCH 02/22] drm/amdgpu: Remove pointless on stack mode copies

2022-03-15 Thread Alex Deucher
Applied.  Thanks!

Alex

On Fri, Feb 18, 2022 at 11:28 AM Harry Wentland  wrote:
>
>
>
> On 2022-02-18 05:03, Ville Syrjala wrote:
> > From: Ville Syrjälä 
> >
> > These on stack copies of the modes appear to be pointless.
> > Just look at the originals directly.
> >
> > Cc: Harry Wentland 
> > Cc: Leo Li 
> > Cc: Rodrigo Siqueira 
> > Cc: Alex Deucher 
> > Cc: amd-...@lists.freedesktop.org
> > Cc: Nikola Cornij 
> > Cc: Aurabindo Pillai 
> > Signed-off-by: Ville Syrjälä 
>
> Reviewed-by: Harry Wentland 
>
> Harry
>
> > ---
> >  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 32 +--
> >  1 file changed, 16 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index 21dba337dab0..65aab0d086b6 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -10139,27 +10139,27 @@ static bool
> >  is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
> >struct drm_crtc_state *new_crtc_state)
> >  {
> > - struct drm_display_mode old_mode, new_mode;
> > + const struct drm_display_mode *old_mode, *new_mode;
> >
> >   if (!old_crtc_state || !new_crtc_state)
> >   return false;
> >
> > - old_mode = old_crtc_state->mode;
> > - new_mode = new_crtc_state->mode;
> > + old_mode = &old_crtc_state->mode;
> > + new_mode = &new_crtc_state->mode;
> >
> > - if (old_mode.clock   == new_mode.clock &&
> > - old_mode.hdisplay== new_mode.hdisplay &&
> > - old_mode.vdisplay== new_mode.vdisplay &&
> > - old_mode.htotal  == new_mode.htotal &&
> > - old_mode.vtotal  != new_mode.vtotal &&
> > - old_mode.hsync_start == new_mode.hsync_start &&
> > - old_mode.vsync_start != new_mode.vsync_start &&
> > - old_mode.hsync_end   == new_mode.hsync_end &&
> > - old_mode.vsync_end   != new_mode.vsync_end &&
> > - old_mode.hskew   == new_mode.hskew &&
> > - old_mode.vscan   == new_mode.vscan &&
> > - (old_mode.vsync_end - old_mode.vsync_start) ==
> > - (new_mode.vsync_end - new_mode.vsync_start))
> > + if (old_mode->clock   == new_mode->clock &&
> > + old_mode->hdisplay== new_mode->hdisplay &&
> > + old_mode->vdisplay== new_mode->vdisplay &&
> > + old_mode->htotal  == new_mode->htotal &&
> > + old_mode->vtotal  != new_mode->vtotal &&
> > + old_mode->hsync_start == new_mode->hsync_start &&
> > + old_mode->vsync_start != new_mode->vsync_start &&
> > + old_mode->hsync_end   == new_mode->hsync_end &&
> > + old_mode->vsync_end   != new_mode->vsync_end &&
> > + old_mode->hskew   == new_mode->hskew &&
> > + old_mode->vscan   == new_mode->vscan &&
> > + (old_mode->vsync_end - old_mode->vsync_start) ==
> > + (new_mode->vsync_end - new_mode->vsync_start))
> >   return true;
> >
> >   return false;
>


Re: [Intel-gfx] [PATCH 05/22] drm/radeon: Use drm_mode_copy()

2022-03-15 Thread Alex Deucher
Applied.  Thanks!

Alex

On Fri, Feb 18, 2022 at 5:04 AM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> struct drm_display_mode embeds a list head, so overwriting
> the full struct with another one will corrupt the list
> (if the destination mode is on a list). Use drm_mode_copy()
> instead which explicitly preserves the list head of
> the destination mode.
>
> Even if we know the destination mode is not on any list
> using drm_mode_copy() seems decent as it sets a good
> example. Bad examples of not using it might eventually
> get copied into code where preserving the list head
> actually matters.
>
> Obviously one case not covered here is when the mode
> itself is embedded in a larger structure and the whole
> structure is copied. But if we are careful when copying
> into modes embedded in structures I think we can be a
> little more reassured that bogus list heads haven't been
> propagated in.
>
> @is_mode_copy@
> @@
> drm_mode_copy(...)
> {
> ...
> }
>
> @depends on !is_mode_copy@
> struct drm_display_mode *mode;
> expression E, S;
> @@
> (
> - *mode = E
> + drm_mode_copy(mode, &E)
> |
> - memcpy(mode, E, S)
> + drm_mode_copy(mode, E)
> )
>
> @depends on !is_mode_copy@
> struct drm_display_mode mode;
> expression E;
> @@
> (
> - mode = E
> + drm_mode_copy(&mode, &E)
> |
> - memcpy(&mode, E, S)
> + drm_mode_copy(&mode, E)
> )
>
> @@
> struct drm_display_mode *mode;
> @@
> - &*mode
> + mode
>
> Cc: Alex Deucher 
> Cc: amd-...@lists.freedesktop.org
> Signed-off-by: Ville Syrjälä 
> ---
>  drivers/gpu/drm/radeon/radeon_connectors.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
> b/drivers/gpu/drm/radeon/radeon_connectors.c
> index a7925a8290b2..0cb1345c6ba4 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -777,7 +777,7 @@ static void radeon_fixup_lvds_native_mode(struct 
> drm_encoder *encoder,
> if (mode->type & DRM_MODE_TYPE_PREFERRED) {
> if (mode->hdisplay != native_mode->hdisplay ||
> mode->vdisplay != native_mode->vdisplay)
> -   memcpy(native_mode, mode, sizeof(*mode));
> +   drm_mode_copy(native_mode, mode);
> }
> }
>
> @@ -786,7 +786,7 @@ static void radeon_fixup_lvds_native_mode(struct 
> drm_encoder *encoder,
> list_for_each_entry_safe(mode, t, &connector->probed_modes, 
> head) {
> if (mode->hdisplay == native_mode->hdisplay &&
> mode->vdisplay == native_mode->vdisplay) {
> -   *native_mode = *mode;
> +   drm_mode_copy(native_mode, mode);
> drm_mode_set_crtcinfo(native_mode, 
> CRTC_INTERLACE_HALVE_V);
> DRM_DEBUG_KMS("Determined LVDS native mode 
> details from EDID\n");
> break;
> --
> 2.34.1
>
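
The cover-letter argument is easiest to see in code: struct drm_display_mode embeds a struct list_head, so a plain struct assignment overwrites the destination's list pointers while the list it sits on still references the old neighbours. A minimal illustrative sketch (hypothetical helper):

#include <drm/drm_modes.h>

static void foo_copy_native_mode(struct drm_display_mode *dst,
                                 const struct drm_display_mode *src)
{
    /* BAD if 'dst' is linked into a list such as
     * connector->probed_modes: the embedded list_head in 'dst' is
     * clobbered with 'src''s pointers and the list is corrupted.
     *
     *     *dst = *src;
     */

    /* GOOD: copies the timings but preserves dst->head. */
    drm_mode_copy(dst, src);
}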


Re: [Intel-gfx] [PATCH 04/22] drm/amdgpu: Use drm_mode_copy()

2022-03-15 Thread Alex Deucher
Applied.  Thanks!

Alex

On Fri, Feb 18, 2022 at 11:32 AM Harry Wentland  wrote:
>
>
>
> On 2022-02-18 05:03, Ville Syrjala wrote:
> > From: Ville Syrjälä 
> >
> > struct drm_display_mode embeds a list head, so overwriting
> > the full struct with another one will corrupt the list
> > (if the destination mode is on a list). Use drm_mode_copy()
> > instead which explicitly preserves the list head of
> > the destination mode.
> >
> > Even if we know the destination mode is not on any list
> > using drm_mode_copy() seems decent as it sets a good
> > example. Bad examples of not using it might eventually
> > get copied into code where preserving the list head
> > actually matters.
> >
> > Obviously one case not covered here is when the mode
> > itself is embedded in a larger structure and the whole
> > structure is copied. But if we are careful when copying
> > into modes embedded in structures I think we can be a
> > little more reassured that bogus list heads haven't been
> > propagated in.
> >
> > @is_mode_copy@
> > @@
> > drm_mode_copy(...)
> > {
> > ...
> > }
> >
> > @depends on !is_mode_copy@
> > struct drm_display_mode *mode;
> > expression E, S;
> > @@
> > (
> > - *mode = E
> > + drm_mode_copy(mode, &E)
> > |
> > - memcpy(mode, E, S)
> > + drm_mode_copy(mode, E)
> > )
> >
> > @depends on !is_mode_copy@
> > struct drm_display_mode mode;
> > expression E;
> > @@
> > (
> > - mode = E
> > + drm_mode_copy(&mode, &E)
> > |
> > - memcpy(&mode, E, S)
> > + drm_mode_copy(&mode, E)
> > )
> >
> > @@
> > struct drm_display_mode *mode;
> > @@
> > - &*mode
> > + mode
> >
> > Cc: Alex Deucher 
> > Cc: Harry Wentland 
> > Cc: Leo Li 
> > Cc: Rodrigo Siqueira 
> > Cc: amd-...@lists.freedesktop.org
> > Signed-off-by: Ville Syrjälä 
>
> Reviewed-by: Harry Wentland 
>
> Harry
>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c| 4 ++--
> >  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 +++---
> >  2 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > index fa20261aa928..673078faa27a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > @@ -626,7 +626,7 @@ amdgpu_connector_fixup_lcd_native_mode(struct 
> > drm_encoder *encoder,
> >   if (mode->type & DRM_MODE_TYPE_PREFERRED) {
> >   if (mode->hdisplay != native_mode->hdisplay ||
> >   mode->vdisplay != native_mode->vdisplay)
> > - memcpy(native_mode, mode, sizeof(*mode));
> > + drm_mode_copy(native_mode, mode);
> >   }
> >   }
> >
> > @@ -635,7 +635,7 @@ amdgpu_connector_fixup_lcd_native_mode(struct 
> > drm_encoder *encoder,
> >   list_for_each_entry_safe(mode, t, &connector->probed_modes, 
> > head) {
> >   if (mode->hdisplay == native_mode->hdisplay &&
> >   mode->vdisplay == native_mode->vdisplay) {
> > - *native_mode = *mode;
> > + drm_mode_copy(native_mode, mode);
> >   drm_mode_set_crtcinfo(native_mode, 
> > CRTC_INTERLACE_HALVE_V);
> >   DRM_DEBUG_KMS("Determined LVDS native mode 
> > details from EDID\n");
> >   break;
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index bd23c9e481eb..514280699ad5 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -6318,7 +6318,7 @@ get_highest_refresh_rate_mode(struct 
> > amdgpu_dm_connector *aconnector,
> >   }
> >   }
> >
> > - aconnector->freesync_vid_base = *m_pref;
> > + drm_mode_copy(&aconnector->freesync_vid_base, m_pref);
> >   return m_pref;
> >  }
> >
> > @@ -6432,8 +6432,8 @@ create_stream_for_sink(struct amdgpu_dm_connector 
> > *aconnector,
> >   recalculate_timing = is_freesync_video_mode(&mode, 
> > aconnector);
> >   if (recalculate_timing) {
> >   freesync_mode = 
> > get_highest_refresh_rate_mode(aconnector, false);
> > - saved_mode = mode;
> > - mode = *freesync_mode;
> > + drm_mode_copy(&saved_mode, &mode);
> > + drm_mode_copy(&mode, freesync_mode);
> >   } else {
> >   decide_crtc_timing_for_drm_display_mode(
> >   &mode, preferred_mode, scale);
>


Re: [Intel-gfx] [PATCH 00/22] drm: Review of mode copies

2022-03-15 Thread Alex Deucher
On Mon, Mar 14, 2022 at 6:12 PM Ville Syrjälä
 wrote:
>
> On Fri, Feb 18, 2022 at 12:03:41PM +0200, Ville Syrjala wrote:
> >   drm: Add drm_mode_init()
> >   drm/bridge: Use drm_mode_copy()
> >   drm/imx: Use drm_mode_duplicate()
> >   drm/panel: Use drm_mode_duplicate()
> >   drm/vc4: Use drm_mode_copy()
> These have been pushed to drm-misc-next.
>
> >   drm/amdgpu: Remove pointless on stack mode copies
> >   drm/amdgpu: Use drm_mode_init() for on-stack modes
> >   drm/amdgpu: Use drm_mode_copy()
> amdgpu ones are reviewed, but I'll leave them for the
> AMD folks to push to whichever tree they prefer.

I pulled patches 2, 4, 5 into my tree.  For 3, I'm happy to have it
land via drm-misc with the rest of the mode_init changes if you'd
prefer.

Alex



>
>
> The rest are still in need of review:
> >   drm/radeon: Use drm_mode_copy()
> >   drm/gma500: Use drm_mode_copy()
> >   drm/hisilicon: Use drm_mode_init() for on-stack modes
> >   drm/msm: Nuke weird on stack mode copy
> >   drm/msm: Use drm_mode_init() for on-stack modes
> >   drm/msm: Use drm_mode_copy()
> >   drm/mtk: Use drm_mode_init() for on-stack modes
> >   drm/rockchip: Use drm_mode_copy()
> >   drm/sti: Use drm_mode_copy()
> >   drm/tilcdc: Use drm_mode_copy()
> >   drm/i915: Use drm_mode_init() for on-stack modes
> >   drm/i915: Use drm_mode_copy()
> >   drm: Use drm_mode_init() for on-stack modes
> >   drm: Use drm_mode_copy()
>
> --
> Ville Syrjälä
> Intel


Re: [Intel-gfx] Commit messages (was: [PATCH v11] drm/amdgpu: add drm buddy support to amdgpu)

2022-03-23 Thread Alex Deucher
On Wed, Mar 23, 2022 at 10:00 AM Daniel Stone  wrote:
>
> On Wed, 23 Mar 2022 at 08:19, Christian König  
> wrote:
> > Am 23.03.22 um 09:10 schrieb Paul Menzel:
> > > Sorry, I disagree. The motivation needs to be part of the commit
> > > message. For example see recent discussion on the LWN article
> > > *Donenfeld: Random number generator enhancements for Linux 5.17 and
> > > 5.18* [1].
> > >
> > > How much the commit message should be extended, I do not know, but the
> > > current state is insufficient (too terse).
> >
> > Well the key point is it's not about you to judge that.
> >
> > If you want to complain about the commit message then come to me with
> > that and don't request information which isn't supposed to be publicly
> > available.
> >
> > So to make it clear: The information is intentionally hold back and not
> > made public.
>
> In that case, the code isn't suitable to be merged into upstream
> trees; it can be resubmitted when it can be explained.

So you are saying we need to publish the problematic RTL to be able to
fix a HW bug in the kernel?  That seems a little unreasonable.  Also,
links to internal documents or bug trackers don't provide much value
to the community since they can't access them.  In general, referencing
internal documents in commit messages is frowned upon.

Alex


Re: [Intel-gfx] Commit messages (was: [PATCH v11] drm/amdgpu: add drm buddy support to amdgpu)

2022-03-23 Thread Alex Deucher
On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone  wrote:
>
> Hi Alex,
>
> On Wed, 23 Mar 2022 at 14:42, Alex Deucher  wrote:
> > On Wed, Mar 23, 2022 at 10:00 AM Daniel Stone  wrote:
> > > On Wed, 23 Mar 2022 at 08:19, Christian König  
> > > wrote:
> > > > Well the key point is it's not about you to judge that.
> > > >
> > > > If you want to complain about the commit message then come to me with
> > > > that and don't request information which isn't supposed to be publicly
> > > > available.
> > > >
> > > > So to make it clear: The information is intentionally hold back and not
> > > > made public.
> > >
> > > In that case, the code isn't suitable to be merged into upstream
> > > trees; it can be resubmitted when it can be explained.
> >
> > So you are saying we need to publish the problematic RTL to be able to
> > fix a HW bug in the kernel?  That seems a little unreasonable.  Also,
> > links to internal documents or bug trackers don't provide much value
> > to the community since they can't access them.  In general, adding
> > internal documents to commit messages is frowned on.
>
> That's not what anyone's saying here ...
>
> No-one's demanding AMD publish RTL, or internal design docs, or
> hardware specs, or URLs to JIRA tickets no-one can access.
>
> This is a large and invasive commit with pretty big ramifications;
> containing exactly two lines of commit message, one of which just
> duplicates the subject.
>
> It cannot be the case that it's completely impossible to provide any
> justification, background, or details, about this commit being made.
> Unless, of course, it's to fix a non-public security issue, that is
> reasonable justification for eliding some of the details. But then
> again, 'huge change which is very deliberately opaque' is a really
> good way to draw a lot of attention to the commit, and it would be
> better to provide more detail about the change to help it slip under
> the radar.
>
> If dri-devel@ isn't allowed to inquire about patches which are posted,
> then CCing the list is just a façade; might as well just do it all
> internally and periodically dump out pull requests.

I think we are in agreement.  The withheld information Christian was
referring to was in another thread, where Christian and Paul were
discussing a workaround for a hardware bug:
https://www.spinics.net/lists/amd-gfx/msg75908.html

Alex




Re: [Intel-gfx] [PATCH v12] drm/amdgpu: add drm buddy support to amdgpu

2022-05-16 Thread Alex Deucher
On Mon, May 16, 2022 at 8:40 AM Mike Lothian  wrote:
>
> Hi
>
> The merge window for 5.19 will probably be opening next week, has
> there been any progress with this bug?

It took a while to find a combination of GPUs that would reproduce the
issue, but now that we can reproduce it, the bug is still being investigated.

Alex

>
> Thanks
>
> Mike
>
> On Mon, 2 May 2022 at 17:31, Mike Lothian  wrote:
> >
> > On Mon, 2 May 2022 at 16:54, Arunpravin Paneer Selvam
> >  wrote:
> > >
> > >
> > >
> > > On 5/2/2022 8:41 PM, Mike Lothian wrote:
> > > > On Wed, 27 Apr 2022 at 12:55, Mike Lothian  wrote:
> > > >> On Tue, 26 Apr 2022 at 17:36, Christian König 
> > > >>  wrote:
> > > >>> Hi Mike,
> > > >>>
> > > >>> sounds like somehow stitching together the SG table for PRIME doesn't
> > > >>> work any more with this patch.
> > > >>>
> > > >>> Can you try with P2P DMA disabled?
> > > >> -CONFIG_PCI_P2PDMA=y
> > > >> +# CONFIG_PCI_P2PDMA is not set
> > > >>
> > > >> If that's what you're meaning, then there's no difference, I'll upload
> > > >> my dmesg to the gitlab issue
> > > >>
> > > >>> Apart from that can you take a look Arun?
> > > >>>
> > > >>> Thanks,
> > > >>> Christian.
> > > > Hi
> > > >
> > > > Have you had any success in replicating this?
> > > Hi Mike,
> > > I couldn't replicate on my Raven APU machine. I see you have 2 cards
> > > initialized, one is Renoir
> > > and the other is Navy Flounder. Could you give some more details, are
> > > you running Gravity Mark
> > > on Renoir and what is your system RAM configuration?
> > > >
> > > > Cheers
> > > >
> > > > Mike
> > >
> > Hi
> >
> > It's a PRIME laptop, it failed on the RENOIR too, it caused a lockup,
> > but systemd managed to capture it, I'll attach it to the issue
> >
> > I've got 64GB RAM, the 6800M has 12GB VRAM
> >
> > Cheers
> >
> > Mike


Re: [Intel-gfx] Per file OOM badness

2022-05-31 Thread Alex Deucher
+ dri-devel

On Tue, May 31, 2022 at 6:00 AM Christian König
 wrote:
>
> Hello everyone,
>
> To summarize the issue I'm trying to address here: Processes can allocate
> resources through a file descriptor without being held responsible for it.
>
> Especially for the DRM graphics driver subsystem this is rather
> problematic. Modern games tend to allocate huge amounts of system memory
> through the DRM drivers to make it accessible to GPU rendering.
>
> But even outside of the DRM subsystem this problem exists and it is
> trivial to exploit. See the following simple example of
> using memfd_create():
>
>  fd = memfd_create("test", 0);
>  while (1)
>  write(fd, page, 4096);
>
> Compile this and you can bring down any standard desktop system within
> seconds.
>
> The background is that the OOM killer will kill every processes in the
> system, but just not the one which holds the only reference to the memory
> allocated by the memfd.
>
> Those problems where brought up on the mailing list multiple times now
> [1][2][3], but without any final conclusion how to address them. Since
> file descriptors are considered shared the process can not directly held
> accountable for allocations made through them. Additional to that file
> descriptors can also easily move between processes as well.
>
> So what this patch set does is to instead of trying to account the
> allocated memory to a specific process it adds a callback to struct
> file_operations which the OOM killer can use to query the specific OOM
> badness of this file reference. This badness is then divided by the
> file_count, so that every process using a shmem file, DMA-buf or DRM
> driver will get it's equal amount of OOM badness.
>
> Callbacks are then implemented for the two core users (memfd and DMA-buf)
> as well as 72 DRM based graphics drivers.
>
> The result is that the OOM killer can now much better judge if a process
> is worth killing to free up memory. Resulting a quite a bit better system
> stability in OOM situations, especially while running games.
>
> The only other possibility I can see would be to change the accounting of
> resources whenever references to the file structure change, but this would
> mean quite some additional overhead for a rather common operation.
>
> Additionally I think trying to limit device driver allocations using
> cgroups is orthogonal to this effort. While cgroups is very useful, it
> works on per process limits and tries to enforce a collaborative model on
> memory management while the OOM killer enforces a competitive model.
>
> Please comment and/or review; we have had that problem flying around for
> years now and are now at a point where we finally need to find a solution
> for this.
>
> Regards,
> Christian.
>
> [1] 
> https://lists.freedesktop.org/archives/dri-devel/2015-September/089778.html
> [2] https://lkml.org/lkml/2018/1/18/543
> [3] https://lkml.org/lkml/2021/2/4/799
>
>


Re: [Intel-gfx] [PATCH v2 03/29] drm/amdgpu: Don't register backlight when another backlight should be used

2022-07-20 Thread Alex Deucher
On Tue, Jul 12, 2022 at 3:39 PM Hans de Goede  wrote:
>
> Before this commit when we want userspace to use the acpi_video backlight
> device we register both the GPU's native backlight device and acpi_video's
> firmware acpi_video# backlight device. This relies on userspace preferring
> firmware type backlight devices over native ones.
>
> Registering 2 backlight devices for a single display really is
> undesirable, don't register the GPU's native backlight device when
> another backlight device should be used.
>
> Changes in v2:
> - To avoid linker errors when amdgpu is builtin and video_detect.c is in
>   a module, select ACPI_VIDEO and its deps if ACPI && X86 are enabled.
>   When these are not set, ACPI_VIDEO is disabled, ensuring the stubs
>   from acpi/video.h will be used.
>
> Signed-off-by: Hans de Goede 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/Kconfig   | 6 ++
>  drivers/gpu/drm/amd/amdgpu/atombios_encoders.c| 7 +++
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
>  3 files changed, 20 insertions(+)
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index aaa7ad1f0614..d65119860760 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -258,6 +258,12 @@ config DRM_AMDGPU
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> select INTERVAL_TREE
> +   # amdgpu depends on ACPI_VIDEO when X86 and ACPI are both enabled
> +   # for select to work, ACPI_VIDEO's dependencies must also be selected
> +   select INPUT if ACPI && X86
> +   select X86_PLATFORM_DEVICES if ACPI && X86
> +   select ACPI_WMI if ACPI && X86
> +   select ACPI_VIDEO if ACPI && X86
> help
>   Choose this option if you have a recent AMD Radeon graphics card.
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c 
> b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> index fa7421afb9a6..abf209e36fca 100644
> --- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> +++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> @@ -26,6 +26,8 @@
>
>  #include 
>
> +#include 
> +
>  #include 
>  #include 
>  #include "amdgpu.h"
> @@ -184,6 +186,11 @@ void amdgpu_atombios_encoder_init_backlight(struct 
> amdgpu_encoder *amdgpu_encode
> if (!(adev->mode_info.firmware_flags & 
> ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU))
> return;
>
> +   if (!acpi_video_backlight_use_native()) {
> +   DRM_INFO("Skipping amdgpu atom DIG backlight registration\n");
> +   return;
> +   }
> +
> pdata = kmalloc(sizeof(struct amdgpu_backlight_privdata), GFP_KERNEL);
> if (!pdata) {
> DRM_ERROR("Memory allocation failed\n");
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 5eb111d35793..3b03a95e59a8 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -86,6 +86,8 @@
>  #include 
>  #include 
>
> +#include 
> +
>  #include "ivsrcid/dcn/irqsrcs_dcn_1_0.h"
>
>  #include "dcn/dcn_1_0_offset.h"
> @@ -4050,6 +4052,11 @@ amdgpu_dm_register_backlight_device(struct 
> amdgpu_display_manager *dm)
> amdgpu_dm_update_backlight_caps(dm, dm->num_of_edps);
> dm->brightness[dm->num_of_edps] = AMDGPU_MAX_BL_LEVEL;
>
> +   if (!acpi_video_backlight_use_native()) {
> +   DRM_INFO("Skipping amdgpu DM backlight registration\n");
> +   return;
> +   }
> +
> props.max_brightness = AMDGPU_MAX_BL_LEVEL;
> props.brightness = AMDGPU_MAX_BL_LEVEL;
> props.type = BACKLIGHT_RAW;
> --
> 2.36.0
>


Re: [Intel-gfx] [PATCH v2 04/29] drm/radeon: Don't register backlight when another backlight should be used

2022-07-20 Thread Alex Deucher
On Tue, Jul 12, 2022 at 3:39 PM Hans de Goede  wrote:
>
> Before this commit when we want userspace to use the acpi_video backlight
> device we register both the GPU's native backlight device and acpi_video's
> firmware acpi_video# backlight device. This relies on userspace preferring
> firmware type backlight devices over native ones.
>
> Registering 2 backlight devices for a single display really is
> undesirable, don't register the GPU's native backlight device when
> another backlight device should be used.
>
> Changes in v2:
> - To avoid linker errors when radeon is builtin and video_detect.c is in
>   a module, select ACPI_VIDEO and its deps if ACPI && X86 are enabled.
>   When these are not set, ACPI_VIDEO is disabled, ensuring the stubs
>   from acpi/video.h will be used.
>
> Signed-off-by: Hans de Goede 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/Kconfig | 6 ++
>  drivers/gpu/drm/radeon/atombios_encoders.c  | 7 +++
>  drivers/gpu/drm/radeon/radeon_legacy_encoders.c | 7 +++
>  3 files changed, 20 insertions(+)
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index d65119860760..a07b76e06f84 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -234,6 +234,12 @@ config DRM_RADEON
> select HWMON
> select BACKLIGHT_CLASS_DEVICE
> select INTERVAL_TREE
> +   # radeon depends on ACPI_VIDEO when X86 and ACPI are both enabled
> +   # for select to work, ACPI_VIDEO's dependencies must also be selected
> +   select INPUT if ACPI && X86
> +   select X86_PLATFORM_DEVICES if ACPI && X86
> +   select ACPI_WMI if ACPI && X86
> +   select ACPI_VIDEO if ACPI && X86
> help
>   Choose this option if you have an ATI Radeon graphics card.  There
>   are both PCI and AGP versions.  You don't need to choose this to
> diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
> b/drivers/gpu/drm/radeon/atombios_encoders.c
> index c93040e60d04..958920230d6f 100644
> --- a/drivers/gpu/drm/radeon/atombios_encoders.c
> +++ b/drivers/gpu/drm/radeon/atombios_encoders.c
> @@ -32,6 +32,8 @@
>  #include 
>  #include 
>
> +#include 
> +
>  #include "atom.h"
>  #include "radeon_atombios.h"
>  #include "radeon.h"
> @@ -209,6 +211,11 @@ void radeon_atom_backlight_init(struct radeon_encoder 
> *radeon_encoder,
> if (!(rdev->mode_info.firmware_flags & 
> ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU))
> return;
>
> +   if (!acpi_video_backlight_use_native()) {
> +   DRM_INFO("Skipping radeon atom DIG backlight registration\n");
> +   return;
> +   }
> +
> pdata = kmalloc(sizeof(struct radeon_backlight_privdata), GFP_KERNEL);
> if (!pdata) {
> DRM_ERROR("Memory allocation failed\n");
> diff --git a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c 
> b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
> index 1a66fb969ee7..d24cedf20c47 100644
> --- a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
> +++ b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
> @@ -33,6 +33,8 @@
>  #include 
>  #include 
>
> +#include 
> +
>  #include "radeon.h"
>  #include "radeon_asic.h"
>  #include "radeon_legacy_encoders.h"
> @@ -387,6 +389,11 @@ void radeon_legacy_backlight_init(struct radeon_encoder 
> *radeon_encoder,
> return;
>  #endif
>
> +   if (!acpi_video_backlight_use_native()) {
> +   DRM_INFO("Skipping radeon legacy LVDS backlight 
> registration\n");
> +   return;
> +   }
> +
> pdata = kmalloc(sizeof(struct radeon_backlight_privdata), GFP_KERNEL);
> if (!pdata) {
> DRM_ERROR("Memory allocation failed\n");
> --
> 2.36.0
>


Re: [Intel-gfx] [PATCH v2 03/29] drm/amdgpu: Don't register backlight when another backlight should be used

2022-07-20 Thread Alex Deucher
On Wed, Jul 20, 2022 at 12:44 PM Alex Deucher  wrote:
>
> On Tue, Jul 12, 2022 at 3:39 PM Hans de Goede  wrote:
> >
> > Before this commit when we want userspace to use the acpi_video backlight
> > device we register both the GPU's native backlight device and acpi_video's
> > firmware acpi_video# backlight device. This relies on userspace preferring
> > firmware type backlight devices over native ones.
> >
> > Registering 2 backlight devices for a single display really is
> > undesirable, don't register the GPU's native backlight device when
> > another backlight device should be used.
> >
> > Changes in v2:
> > - To avoid linker errors when amdgpu is builtin and video_detect.c is in
> >   a module, select ACPI_VIDEO and its deps if ACPI && X86 are enabled.
> >   When these are not set, ACPI_VIDEO is disabled, ensuring the stubs
> >   from acpi/video.h will be used.
> >
> > Signed-off-by: Hans de Goede 
>
> Acked-by: Alex Deucher 

Actually, can you use dev_info for the messages below rather than
DRM_INFO?  That makes it easier to tell which GPU is affected in a
multi-GPU system.  With that changed,
Acked-by: Alex Deucher 
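
(For one of the hunks below, that suggestion would look something like this —
assuming the usual amdgpu pattern where adev->dev is the underlying struct
device:)

    if (!acpi_video_backlight_use_native()) {
            dev_info(adev->dev,
                     "Skipping amdgpu atom DIG backlight registration\n");
            return;
    }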

>
> > ---
> >  drivers/gpu/drm/Kconfig   | 6 ++
> >  drivers/gpu/drm/amd/amdgpu/atombios_encoders.c| 7 +++
> >  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
> >  3 files changed, 20 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index aaa7ad1f0614..d65119860760 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -258,6 +258,12 @@ config DRM_AMDGPU
> > select HWMON
> > select BACKLIGHT_CLASS_DEVICE
> > select INTERVAL_TREE
> > +   # amdgpu depends on ACPI_VIDEO when X86 and ACPI are both enabled
> > +   # for select to work, ACPI_VIDEO's dependencies must also be 
> > selected
> > +   select INPUT if ACPI && X86
> > +   select X86_PLATFORM_DEVICES if ACPI && X86
> > +   select ACPI_WMI if ACPI && X86
> > +   select ACPI_VIDEO if ACPI && X86
> > help
> >   Choose this option if you have a recent AMD Radeon graphics card.
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c 
> > b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> > index fa7421afb9a6..abf209e36fca 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> > @@ -26,6 +26,8 @@
> >
> >  #include 
> >
> > +#include 
> > +
> >  #include 
> >  #include 
> >  #include "amdgpu.h"
> > @@ -184,6 +186,11 @@ void amdgpu_atombios_encoder_init_backlight(struct 
> > amdgpu_encoder *amdgpu_encode
> > if (!(adev->mode_info.firmware_flags & 
> > ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU))
> > return;
> >
> > +   if (!acpi_video_backlight_use_native()) {
> > +   DRM_INFO("Skipping amdgpu atom DIG backlight 
> > registration\n");
> > +   return;
> > +   }
> > +
> > pdata = kmalloc(sizeof(struct amdgpu_backlight_privdata), 
> > GFP_KERNEL);
> > if (!pdata) {
> > DRM_ERROR("Memory allocation failed\n");
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > index 5eb111d35793..3b03a95e59a8 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > @@ -86,6 +86,8 @@
> >  #include 
> >  #include 
> >
> > +#include 
> > +
> >  #include "ivsrcid/dcn/irqsrcs_dcn_1_0.h"
> >
> >  #include "dcn/dcn_1_0_offset.h"
> > @@ -4050,6 +4052,11 @@ amdgpu_dm_register_backlight_device(struct 
> > amdgpu_display_manager *dm)
> > amdgpu_dm_update_backlight_caps(dm, dm->num_of_edps);
> > dm->brightness[dm->num_of_edps] = AMDGPU_MAX_BL_LEVEL;
> >
> > +   if (!acpi_video_backlight_use_native()) {
> > +   DRM_INFO("Skipping amdgpu DM backlight registration\n");
> > +   return;
> > +   }
> > +
> > props.max_brightness = AMDGPU_MAX_BL_LEVEL;
> > props.brightness = AMDGPU_MAX_BL_LEVEL;
> > props.type = BACKLIGHT_RAW;
> > --
> > 2.36.0
> >


Re: [Intel-gfx] [PATCH v2 09/29] ACPI: video: Make backlight class device registration a separate step

2022-07-20 Thread Alex Deucher
On Tue, Jul 12, 2022 at 3:40 PM Hans de Goede  wrote:
>
> On x86/ACPI boards the acpi_video driver will usually initializing before

initializing -> initialize

> the kms driver (except i915). This causes /sys/class/backlight/acpi_video0
> to show up and then the kms driver registers its own native backlight
> device after which the drivers/acpi/video_detect.c code unregisters
> the acpi_video0 device (when acpi_video_get_backlight_type()==native).
>
> This means that userspace briefly sees 2 devices and the disappearing of
> acpi_video0 after a brief time confuses the systemd backlight level
> save/restore code, see e.g.:
> https://bbs.archlinux.org/viewtopic.php?id=269920
>
> To fix this make backlight class device registration a separate step
> done by a new acpi_video_register_backlight() function. The intent is for
> this to be called by the drm/kms driver *after* it is done setting up its
> own native backlight device. So that acpi_video_get_backlight_type() knows
> if a native backlight will be available or not at acpi_video backlight
> registration time, avoiding the add + remove dance.
>
> Note the new acpi_video_register_backlight() function is also called from
> a delayed work to ensure that the acpi_video backlight device does get
> registered if necessary even if there is no drm/kms driver or when it is
> disabled.
>
> Signed-off-by: Hans de Goede 
> ---
>  drivers/acpi/acpi_video.c | 45 ---
>  include/acpi/video.h  |  2 ++
>  2 files changed, 44 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
> index 6944794797a5..c4c3a9e7ce69 100644
> --- a/drivers/acpi/acpi_video.c
> +++ b/drivers/acpi/acpi_video.c
> @@ -31,6 +31,12 @@
>  #define ACPI_VIDEO_BUS_NAME"Video Bus"
>  #define ACPI_VIDEO_DEVICE_NAME "Video Device"
>
> +/*
> + * Display probing is known to take up to 5 seconds, so delay the fallback
> + * backlight registration by 5 seconds + 3 seconds for some extra margin.
> + */
> +#define ACPI_VIDEO_REGISTER_BACKLIGHT_DELAY(8 * HZ)
> +
>  #define MAX_NAME_LEN   20
>
>  MODULE_AUTHOR("Bruno Ducrot");
> @@ -81,6 +87,9 @@ static LIST_HEAD(video_bus_head);
>  static int acpi_video_bus_add(struct acpi_device *device);
>  static int acpi_video_bus_remove(struct acpi_device *device);
>  static void acpi_video_bus_notify(struct acpi_device *device, u32 event);
> +static void acpi_video_bus_register_backlight_work(struct work_struct 
> *ignored);
> +static DECLARE_DELAYED_WORK(video_bus_register_backlight_work,
> +   acpi_video_bus_register_backlight_work);
>  void acpi_video_detect_exit(void);
>
>  /*
> @@ -1865,8 +1874,6 @@ static int acpi_video_bus_register_backlight(struct 
> acpi_video_bus *video)
> if (video->backlight_registered)
> return 0;
>
> -   acpi_video_run_bcl_for_osi(video);
> -
> if (acpi_video_get_backlight_type() != acpi_backlight_video)
> return 0;
>
> @@ -2092,7 +2099,11 @@ static int acpi_video_bus_add(struct acpi_device 
> *device)
> list_add_tail(&video->entry, &video_bus_head);
> mutex_unlock(&video_list_lock);
>
> -   acpi_video_bus_register_backlight(video);
> +   /*
> +* The userspace visible backlight_device gets registered separately
> +* from acpi_video_register_backlight().
> +*/
> +   acpi_video_run_bcl_for_osi(video);
> acpi_video_bus_add_notify_handler(video);
>
> return 0;
> @@ -2131,6 +2142,11 @@ static int acpi_video_bus_remove(struct acpi_device 
> *device)
> return 0;
>  }
>
> +static void acpi_video_bus_register_backlight_work(struct work_struct 
> *ignored)
> +{
> +   acpi_video_register_backlight();
> +}
> +
>  static int __init is_i740(struct pci_dev *dev)
>  {
> if (dev->device == 0x00D1)
> @@ -2241,6 +2257,17 @@ int acpi_video_register(void)
>  */
> register_count = 1;
>
> +   /*
> +* acpi_video_bus_add() skips registering the userspace visible
> +* backlight_device. The intend is for this to be registered by the
> +* drm/kms driver calling acpi_video_register_backlight() *after* it 
> is
> +* done setting up its own native backlight device. The delayed work
> +* ensures that acpi_video_register_backlight() always gets called
> +* eventually, in case there is no drm/kms driver or it is disabled.
> +*/
> +   schedule_delayed_work(&video_bus_register_backlight_work,
> + ACPI_VIDEO_REGISTER_BACKLIGHT_DELAY);
> +
>  leave:
> mutex_unlock(®ister_count_mutex);
> return ret;
> @@ -2251,6 +2278,7 @@ void acpi_video_unregister(void)
>  {
> mutex_lock(®ister_count_mutex);
> if (register_count) {
> +   cancel_delayed_work_sync(&video_bus_register_backlight_work);
> acpi_bus_unregister_driver(&acpi_video_bus);
>

Re: [Intel-gfx] [PATCH v2 13/29] drm/amdgpu: Register ACPI video backlight when skipping amdgpu backlight registration

2022-07-20 Thread Alex Deucher
On Tue, Jul 12, 2022 at 3:40 PM Hans de Goede  wrote:
>
> Typically the acpi_video driver will initialize before amdgpu, which
> used to cause /sys/class/backlight/acpi_video0 to get registered and then
> amdgpu would register its own amdgpu_bl# device later. After which
> the drivers/acpi/video_detect.c code unregistered the acpi_video0 device
> to avoid there being 2 backlight devices.
>
> This means that userspace used to briefly see 2 devices and the
> disappearing of acpi_video0 after a brief time confuses the systemd
> backlight level save/restore code, see e.g.:
> https://bbs.archlinux.org/viewtopic.php?id=269920
>
> To fix this the ACPI video code has been modified to make backlight class
> device registration a separate step, relying on the drm/kms driver to
> ask for the acpi_video backlight registration after it is done setting up
> its native backlight device.
>
> Add a call to the new acpi_video_register_backlight() when amdgpu skips
> registering its own backlight device because of either the firmware_flags
> or the acpi_video_get_backlight_type() return value. This ensures that
> if the acpi_video backlight device should be used, it will be available
> before the amdgpu drm_device gets registered with userspace.
>
> Signed-off-by: Hans de Goede 
> ---
>  drivers/gpu/drm/amd/amdgpu/atombios_encoders.c| 9 +++--
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
>  2 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c 
> b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> index abf209e36fca..45cd9268b426 100644
> --- a/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> +++ b/drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
> @@ -184,11 +184,11 @@ void amdgpu_atombios_encoder_init_backlight(struct 
> amdgpu_encoder *amdgpu_encode
> return;
>
> if (!(adev->mode_info.firmware_flags & 
> ATOM_BIOS_INFO_BL_CONTROLLED_BY_GPU))
> -   return;
> +   goto register_acpi_backlight;
>
> if (!acpi_video_backlight_use_native()) {
> DRM_INFO("Skipping amdgpu atom DIG backlight registration\n");
> -   return;
> +   goto register_acpi_backlight;
> }
>
> pdata = kmalloc(sizeof(struct amdgpu_backlight_privdata), GFP_KERNEL);
> @@ -225,6 +225,11 @@ void amdgpu_atombios_encoder_init_backlight(struct 
> amdgpu_encoder *amdgpu_encode
>  error:
> kfree(pdata);
> return;
> +
> +register_acpi_backlight:
> +   /* Try registering an ACPI video backlight device instead. */
> +   acpi_video_register_backlight();
> +   return;

Can drop the return here.  Either way,
Acked-by: Alex Deucher 
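
(i.e. with that return dropped, the tail of
amdgpu_atombios_encoder_init_backlight() would simply end as:)

    register_acpi_backlight:
            /* Try registering an ACPI video backlight device instead. */
            acpi_video_register_backlight();
    }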

>  }
>
>  void
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 3b03a95e59a8..a667e66a9842 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -4054,6 +4054,8 @@ amdgpu_dm_register_backlight_device(struct 
> amdgpu_display_manager *dm)
>
> if (!acpi_video_backlight_use_native()) {
> DRM_INFO("Skipping amdgpu DM backlight registration\n");
> +   /* Try registering an ACPI video backlight device instead. */
> +   acpi_video_register_backlight();
> return;
> }
>
> --
> 2.36.0
>


Re: [Intel-gfx] [PATCH v2 14/29] drm/radeon: Register ACPI video backlight when skipping radeon backlight registration

2022-07-20 Thread Alex Deucher
On Tue, Jul 12, 2022 at 3:40 PM Hans de Goede  wrote:
>
> Typically the acpi_video driver will initialize before radeon, which
> used to cause /sys/class/backlight/acpi_video0 to get registered and then
> radeon would register its own radeon_bl# device later. After which
> the drivers/acpi/video_detect.c code unregistered the acpi_video0 device
> to avoid there being 2 backlight devices.
>
> This means that userspace used to briefly see 2 devices and the
> disappearing of acpi_video0 after a brief time confuses the systemd
> backlight level save/restore code, see e.g.:
> https://bbs.archlinux.org/viewtopic.php?id=269920
>
> To fix this the ACPI video code has been modified to make backlight class
> device registration a separate step, relying on the drm/kms driver to
> ask for the acpi_video backlight registration after it is done setting up
> its native backlight device.
>
> Add a call to the new acpi_video_register_backlight() when radeon skips
> registering its own backlight device because of e.g. the firmware_flags
> or the acpi_video_get_backlight_type() return value. This ensures that
> if the acpi_video backlight device should be used, it will be available
> before the radeon drm_device gets registered with userspace.
>
> Signed-off-by: Hans de Goede 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/radeon_encoders.c | 11 ++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c 
> b/drivers/gpu/drm/radeon/radeon_encoders.c
> index 46549d5179ee..c1cbebb51be1 100644
> --- a/drivers/gpu/drm/radeon/radeon_encoders.c
> +++ b/drivers/gpu/drm/radeon/radeon_encoders.c
> @@ -30,6 +30,8 @@
>  #include 
>  #include 
>
> +#include 
> +
>  #include "radeon.h"
>  #include "radeon_atombios.h"
>  #include "radeon_legacy_encoders.h"
> @@ -167,7 +169,7 @@ static void radeon_encoder_add_backlight(struct 
> radeon_encoder *radeon_encoder,
> return;
>
> if (radeon_backlight == 0) {
> -   return;
> +   use_bl = false;
> } else if (radeon_backlight == 1) {
> use_bl = true;
> } else if (radeon_backlight == -1) {
> @@ -193,6 +195,13 @@ static void radeon_encoder_add_backlight(struct 
> radeon_encoder *radeon_encoder,
> else
> radeon_legacy_backlight_init(radeon_encoder, 
> connector);
> }
> +
> +   /*
> +* If there is no native backlight device (which may happen even when
> +* use_bl==true) try registering an ACPI video backlight device 
> instead.
> +*/
> +   if (!rdev->mode_info.bl_encoder)
> +   acpi_video_register_backlight();
>  }
>
>  void
> --
> 2.36.0
>


Re: [Intel-gfx] [PATCH 1/3] drm/amd/display: Fix merge conflict resolution in amdgpu_dm_plane.c

2022-08-10 Thread Alex Deucher
Acked-by: Alex Deucher 

On Mon, Aug 1, 2022 at 10:08 AM Simon Ser  wrote:
>
> Acked-by: Simon Ser 
>
> CC amd-gfx
>
> On Monday, August 1st, 2022 at 15:52, Imre Deak  wrote:
>
> > The API change introduced in
> >
> > commit 30c637151cfa ("drm/plane-helper: Export individual helpers")
> >
> > was missed in the conflict resolution of
> >
> > commit d93a13bd75b9 ("Merge remote-tracking branch 'drm-misc/drm-misc-next' 
> > into drm-tip")
> >
> > fix this up.
> >
> > Fixes: d93a13bd75b9 ("Merge remote-tracking branch 'drm-misc/drm-misc-next' 
> > into drm-tip")
> > Cc: Simon Ser cont...@emersion.fr
> >
> > Cc: Thomas Zimmermann tzimmerm...@suse.de
> >
> > Acked-by: Thomas Zimmermann tzimmerm...@suse.de
> >
> > Signed-off-by: Imre Deak imre.d...@intel.com
> >
> > ---
> > drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> > index 8cd25b2ea0dca..5eb5d31e591de 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> > @@ -1562,7 +1562,7 @@ int dm_drm_plane_get_property(struct drm_plane *plane,
> > static const struct drm_plane_funcs dm_plane_funcs = {
> > .update_plane = drm_atomic_helper_update_plane,
> > .disable_plane = drm_atomic_helper_disable_plane,
> > - .destroy = drm_primary_helper_destroy,
> > + .destroy = drm_plane_helper_destroy,
> > .reset = dm_drm_plane_reset,
> > .atomic_duplicate_state = dm_drm_plane_duplicate_state,
> > .atomic_destroy_state = dm_drm_plane_destroy_state,
> > --
> > 2.37.1


Re: [Intel-gfx] [PATCH 01/13] drm/connector: Add define for HDMI 1.4 Maximum Pixel Rate

2021-11-02 Thread Alex Deucher
On Tue, Nov 2, 2021 at 10:59 AM Maxime Ripard  wrote:
>
> A lot of drivers open-code the HDMI 1.4 maximum pixel rate in their
> driver to test whether the resolutions are supported or if the
> scrambling needs to be enabled.
>
> Let's create a common define for everyone to use it.
>
> Cc: Alex Deucher 
> Cc: amd-...@lists.freedesktop.org
> Cc: Andrzej Hajda 
> Cc: Benjamin Gaignard 
> Cc: "Christian König" 
> Cc: Emma Anholt 
> Cc: intel-gfx@lists.freedesktop.org
> Cc: Jani Nikula 
> Cc: Jernej Skrabec 
> Cc: Jerome Brunet 
> Cc: Jonas Karlman 
> Cc: Jonathan Hunter 
> Cc: Joonas Lahtinen 
> Cc: Kevin Hilman 
> Cc: Laurent Pinchart 
> Cc: linux-amlo...@lists.infradead.org
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-te...@vger.kernel.org
> Cc: Martin Blumenstingl 
> Cc: Neil Armstrong 
> Cc: "Pan, Xinhui" 
> Cc: Robert Foss 
> Cc: Rodrigo Vivi 
> Cc: Thierry Reding 
> Signed-off-by: Maxime Ripard 
> ---
>  drivers/gpu/drm/bridge/synopsys/dw-hdmi.c  | 4 ++--
>  drivers/gpu/drm/drm_edid.c | 2 +-
>  drivers/gpu/drm/i915/display/intel_hdmi.c  | 2 +-
>  drivers/gpu/drm/meson/meson_dw_hdmi.c  | 4 ++--
>  drivers/gpu/drm/radeon/radeon_encoders.c   | 2 +-

For radeon:
Acked-by: Alex Deucher 

Note that there are several instances of this in amdgpu as well:
drivers/gpu/drm/amd/amdgpu/amdgpu_encoders.c:    if (pixel_clock > 340000)
drivers/gpu/drm/amd/amdgpu/amdgpu_encoders.c:    if (pixel_clock > 340000)
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c:  if (mode->clock > 340000)
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c:  if (mode->clock > 340000)

Alex
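
(A follow-up conversion for those amdgpu spots would presumably be the same
kind of one-liner as in the hunks below, e.g. — illustrative only, not part of
this series:)

    -	if (mode->clock > 340000)
    +	if (mode->clock > DRM_HDMI_14_MAX_TMDS_CLK_KHZ)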

>  drivers/gpu/drm/sti/sti_hdmi_tx3g4c28phy.c | 2 +-
>  drivers/gpu/drm/tegra/sor.c| 8 
>  drivers/gpu/drm/vc4/vc4_hdmi.c | 4 ++--
>  include/drm/drm_connector.h| 2 ++
>  9 files changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c 
> b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
> index 62ae63565d3a..3a58db357be0 100644
> --- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
> +++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
> @@ -46,7 +46,7 @@
>  /* DW-HDMI Controller >= 0x200a are at least compliant with SCDC version 1 */
>  #define SCDC_MIN_SOURCE_VERSION0x1
>
> -#define HDMI14_MAX_TMDSCLK 340000000
> +#define HDMI14_MAX_TMDSCLK (DRM_HDMI_14_MAX_TMDS_CLK_KHZ * 1000)
>
>  enum hdmi_datamap {
> RGB444_8B = 0x01,
> @@ -1264,7 +1264,7 @@ static bool dw_hdmi_support_scdc(struct dw_hdmi *hdmi,
>  * for low rates is not supported either
>  */
> if (!display->hdmi.scdc.scrambling.low_rates &&
> -   display->max_tmds_clock <= 340000)
> +   display->max_tmds_clock <= DRM_HDMI_14_MAX_TMDS_CLK_KHZ)
> return false;
>
> return true;
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 7aa2a56a71c8..ec8fb2d098ae 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -4966,7 +4966,7 @@ static void drm_parse_hdmi_forum_vsdb(struct 
> drm_connector *connector,
> u32 max_tmds_clock = hf_vsdb[5] * 5000;
> struct drm_scdc *scdc = &hdmi->scdc;
>
> -   if (max_tmds_clock > 340000) {
> +   if (max_tmds_clock > DRM_HDMI_14_MAX_TMDS_CLK_KHZ) {
> display->max_tmds_clock = max_tmds_clock;
> DRM_DEBUG_KMS("HF-VSDB: max TMDS clock %d kHz\n",
> display->max_tmds_clock);
> diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> b/drivers/gpu/drm/i915/display/intel_hdmi.c
> index d2e61f6c6e08..0666203d52b7 100644
> --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> @@ -2226,7 +2226,7 @@ int intel_hdmi_compute_config(struct intel_encoder 
> *encoder,
> if (scdc->scrambling.low_rates)
> pipe_config->hdmi_scrambling = true;
>
> -   if (pipe_config->port_clock > 340000) {
> +   if (pipe_config->port_clock > DRM_HDMI_14_MAX_TMDS_CLK_KHZ) {
> pipe_config->hdmi_scrambling = true;
> pipe_config->hdmi_high_tmds_clock_ratio = true;
> }
> diff --git a/drivers/gpu/drm/meson/meson_dw_hdmi.c 
> b/drivers/gpu/drm/meson/meson_dw_hdmi.c
> index 0afbd1e70bfc..8078667aea0e 100644
> --- a/drivers/gpu/drm/meson/meson_dw_hdmi.c
> +++ b/drivers/gpu/drm/meson/meson_dw_hdmi.c
>

Re: [Intel-gfx] [PATCH v8 1/3] gpu: drm: separate panel orientation property creating and value setting

2022-02-18 Thread Alex Deucher
On Fri, Feb 18, 2022 at 7:13 AM Simon Ser  wrote:
>
> On Friday, February 18th, 2022 at 12:54, Hans de Goede  
> wrote:
>
> > On 2/18/22 12:39, Simon Ser wrote:
> > > On Friday, February 18th, 2022 at 11:38, Hans de Goede 
> > >  wrote:
> > >
> > >> What I'm reading in the above is that it is being considered to allow
> > >> changing the panel-orientation value after the connector has been made
> > >> available to userspace; and let userspace know about this through a 
> > >> uevent.
> > >>
> > >> I believe that this is a bad idea; it is important to keep in mind here
> > >> what userspace (e.g. plymouth) uses this property for. This property is
> > >> used to rotate the image being rendered / shown on the framebuffer to
> > >> adjust for the panel orientation.
> > >>
> > >> So now lets assume we apply the correct upside-down orientation later
> > >> on a device with an upside-down mounted LCD panel. Then on boot the
> > >> following could happen:
> > >>
> > >> 1. amdgpu exports a connector for the LCD panel to userspace without
> > >> setting panel-orient=upside-down
> > >> 2. plymouth sees this and renders its splash normally, but since the
> > >> panel is upside-down it will now actually show upside-down
> > >
> > > At this point amdgpu hasn't probed the connector yet. So the connector
> > > will be marked as disconnected, and plymouth shouldn't render anything.
> >
> > If before the initial probe of the connector there is a /dev/dri/card0
> > which plymouth can access, then plymouth may at this point decide
> > to disable any seemingly unused crtcs, which will make the screen go 
> > black...
> >
> > I'm not sure if plymouth will actually do this, but AFAICT this would
> > not be invalid behavior for a userspace kms consumer to do and I
> > believe it is likely that mutter will disable unused crtcs.
> >
> > IMHO it is just a bad idea to register /dev/dri/card0 with userspace
> > before the initial connector probe is done. Nothing good can come
> > of that.
> >
> > If all the exposed connectors initially are going to show up as
> > disconnected anyways what is the value in registering /dev/dri/card0
> > with userspace early ?
>
> OK. I'm still unsure how I feel about this, but I think I agree with
> you. That said, the amdgpu architecture is quite involved with multiple
> abstraction levels, so I don't think I'm equipped to write a patch to
> fix this...
>
> cc Daniel Vetter: can you confirm probing all connectors is a good thing
> to do on driver module load?

I don't think it's a big deal to change, but at least my
understanding, albeit this was back in the early KMS days, was that
probing was driven by things outside of the driver.  I.e., there is no
need to probe displays if nothing is going to use them.  If you want
to use the displays, you'd call probe first before trying to use them
so you know what is available.

Alex

>
> > >> I guess the initial modeline is inherited from the video-bios, but
> > >> what about the physical size? Note that you cannot just change the
> > >> physical size later either, that gets used to determine the hidpi
> > >> scaling factor in the bootsplash, and changing that after the initial
> > >> bootsplash display will also look ugly
> > >>
> > >> b) Why you need the edid for the panel-orientation property at all,
> > >> typically the edid prom is part of the panel and the panel does not
> > >> know that it is mounted e.g. upside down at all, that is a property
> > >> of the system as a whole not of the panel as a standalone unit so
> > >> in my experience getting panel-orient info is something which comes
> > >> from the firmware /video-bios not from edid ?
> > >
> > > This is an internal DRM thing. The orientation quirks logic uses the
> > > mode size advertised by the EDID.
> >
> > The DMI based quirking does, yes. But e.g. the quirk code directly
> > reading this from the Intel VBT does not rely on the mode.
> >
> > But if you are planning on using a DMI based quirk for the steamdeck
> > then yes that needs the mode.
> >
> > The mode check is there for 2 reasons:
> >
> > 1. To avoid also applying the quirk to external displays, but
> > I think that that is also solved in most drivers by only checking for
> > a quirk at all on the eDP connector
> >
> > 2. Some laptop models ship with different panels in different batches;
> > some of these are portrait (so need a panel-orient setting) and others
> > are landscape.
>
> That makes sense. So yeah the EDID mode based matching logic needs to
> stay to accomodate for these cases.
>
> > > I agree that at least in the Steam
> > > Deck case it may not make a lot of sense to use any info from the
> > > EDID, but that's needed for the current status quo.
> >
> > We could extend the DMI quirk mechanism to allow quirks which don't
> > do the mode check, for use on devices where we can guarantee neither
> > 1 nor 2 happens, then amdgpu could call the quirk code early simply
> > passing 0x0 as resolution.
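
(As a rough illustration of that idea, a mode-check-free entry in
drm_panel_orientation_quirks.c could look something like the sketch below; the
0x0 sentinel, the vendor/product match strings and the entry itself are
hypothetical, only the surrounding struct and macro names follow the existing
quirk table:)

    static const struct drm_dmi_panel_orientation_data any_size_rightside_up = {
    	.width = 0,	/* 0x0: skip the mode-size check */
    	.height = 0,
    	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
    };

    	{	/* Hypothetical handheld with a portrait-mounted panel */
    		.matches = {
    			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ExampleVendor"),
    			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ExampleProduct"),
    		},
    		.driver_data = (void *)&any_size_rightside_up,
    	},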
>
> Yeah. But per the above amd

Re: [Intel-gfx] [PATCH v12 1/6] drm: Add arch arm64 for drm_clflush_virt_range

2022-03-02 Thread Alex Deucher
On Wed, Mar 2, 2022 at 10:55 AM Michael Cheng  wrote:
>
> Thanks for the feedback Robin!
>
> Sorry, my choice of words wasn't that great, but what I meant is to
> understand how ARM flushes a range of dcache for device drivers, not
> an equivalent of x86 clflush.
>
> I believe the concern is that if the CPU writes an update, that update might
> only be sitting in the CPU cache and never make it to device memory
> where the device can see it; there are specific places that we are
> supposed to flush the CPU caches to make sure our updates are visible to
> the hardware.
>
> +Matt Roper
>
> Matt, Lucas, any feed back here?

MMIO (e.g., PCI BARs, etc.) should be mapped uncached.  If it's not
you'll have a lot of problems using a GPU on that architecture.  One
thing that you may want to check is if your device has its own caches
or write queues on the BAR aperture.  You may have to flush them after
CPU access to the BAR to make sure CPU updates land in device memory.
For system memory, PCI, per the spec, should be cache coherent with
the CPU.  If it's not, you'll have a lot of trouble using a GPU on
that platform.

Alex
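
(For the non-coherent-system case Robin mentions, the portable tool is the
streaming DMA API rather than raw cache-maintenance instructions. A minimal,
driver-agnostic sketch, where dev, buf and size stand in for the device and a
kmalloc'ed buffer:)

    #include <linux/dma-mapping.h>

    /* dma_map_single() makes the CPU's initial writes visible to the device */
    dma_addr_t addr = dma_map_single(dev, buf, size, DMA_BIDIRECTIONAL);

    if (dma_mapping_error(dev, addr))
            return -ENOMEM;

    /* ... device works on the buffer ... */

    /* hand ownership back to the CPU before reading/updating it again */
    dma_sync_single_for_cpu(dev, addr, size, DMA_BIDIRECTIONAL);
    /* ... CPU reads/updates buf ... */
    dma_sync_single_for_device(dev, addr, size, DMA_BIDIRECTIONAL);

    /* ... device works on the buffer again ... */

    dma_unmap_single(dev, addr, size, DMA_BIDIRECTIONAL);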

>
> On 2022-03-02 4:49 a.m., Robin Murphy wrote:
> > On 2022-02-25 19:27, Michael Cheng wrote:
> >> Hi Robin,
> >>
> >> [ +arm64 maintainers for their awareness, which would have been a
> >> good thing to do from the start ]
> >>
> >>   * Thanks for adding the arm64 maintainer and sorry I didn't rope them
> >> in sooner.
> >>
> >> Why does i915 need to ensure the CPU's instruction cache is coherent
> >> with its data cache? Is it a self-modifying driver?
> >>
> >>   * Also thanks for pointing this out. Initially I was using
> >> dcache_clean_inval_poc, which seems to be equivalent to what
> >> x86 is doing for dcache flushing, but it was giving me build errors
> >> since it's not on the global list of kernel symbols. And after
> >> revisiting the documentation for caches_clean_inval_pou, it won't
> >> fly for what we are trying to do. Moving forward, what would you (or
> >> someone in the ARM community) suggest we do? Could it be possible to
> >> export dcache_clean_inval_poc as a global symbol?
> >
> > Unlikely, unless something with a legitimate need for CPU-centric
> > cache maintenance like kexec or CPU hotplug ever becomes modular.
> >
> > In the case of a device driver, it's not even the basic issues of
> > assuming to find direct equivalents to x86 semantics in other CPU
> > architectures, or effectively reinventing parts of the DMA API, it's
> > even bigger than that. Once you move from being integrated in a single
> > vendor's system architecture to being on a discrete card, you
> > fundamentally *no longer have any control over cache coherency*.
> > Whether the host CPU architecture happens to be AArch64, RISC-V, or
> > whatever doesn't really matter, you're at the mercy of 3rd-party PCIe
> > and interconnect IP vendors, and SoC integrators. You'll find yourself
> > in systems where PCIe simply cannot snoop any caches, where you'd
> > better have the correct DMA API calls in place to have any hope of
> > even the most basic functionality working properly; you'll find
> > yourself in systems where even if the PCIe root complex claims to
> > support No Snoop, your uncached traffic will still end up snooping
> > stale data that got prefetched back into caches you thought you'd
> > invalidated; you'll find yourself in systems where your memory
> > attributes may or may not get forcibly rewritten by an IOMMU depending
> > on the kernel config and/or command line.
> >
> > It's not about simply finding a substitute for clflush, it's that the
> > reasons you have for using clflush in the first place can no longer be
> > assumed to be valid.
> >
> > Robin.
> >
> >> On 2022-02-25 10:24 a.m., Robin Murphy wrote:
> >>> [ +arm64 maintainers for their awareness, which would have been a
> >>> good thing to do from the start ]
> >>>
> >>> On 2022-02-25 03:24, Michael Cheng wrote:
>  Add arm64 support for drm_clflush_virt_range. caches_clean_inval_pou
>  performs a flush by first performing a clean, follow by an
>  invalidation
>  operation.
> 
>  v2 (Michael Cheng): Use correct macro for cleaning and invalidation
>  the
>  dcache. Thanks Tvrtko for the suggestion.
> 
>  v3 (Michael Cheng): Replace asm/cacheflush.h with linux/cacheflush.h
> 
>  v4 (Michael Cheng): Arm64 does not export dcache_clean_inval_poc as a
>  symbol that could be use by other modules, thus use
>  caches_clean_inval_pou instead. Also this version
>  removes include for cacheflush, since its already
>  included base on architecture type.
> 
>  Signed-off-by: Michael Cheng 
>  Reviewed-by: Matt Roper 
>  ---
>    drivers/gpu/drm/drm_cache.c | 5 +
>    1 file changed, 5 insertions(+)
> 
>  diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/g

Re: [Intel-gfx] [RFC v2 1/2] drm/doc/rfc: VM_BIND feature design document

2022-03-09 Thread Alex Deucher
On Mon, Mar 7, 2022 at 3:30 PM Niranjana Vishwanathapura
 wrote:
>
> VM_BIND design document with description of intended use cases.
>
> Signed-off-by: Niranjana Vishwanathapura 
> ---
>  Documentation/gpu/rfc/i915_vm_bind.rst | 210 +
>  Documentation/gpu/rfc/index.rst|   4 +
>  2 files changed, 214 insertions(+)
>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.rst
>
> diff --git a/Documentation/gpu/rfc/i915_vm_bind.rst 
> b/Documentation/gpu/rfc/i915_vm_bind.rst
> new file mode 100644
> index ..cdc6bb25b942
> --- /dev/null
> +++ b/Documentation/gpu/rfc/i915_vm_bind.rst
> @@ -0,0 +1,210 @@
> +==
> +I915 VM_BIND feature design and use cases
> +==
> +
> +VM_BIND feature
> +
> +DRM_I915_GEM_VM_BIND/UNBIND ioctls allows UMD to bind/unbind GEM buffer
> +objects (BOs) or sections of a BOs at specified GPU virtual addresses on
> +a specified address space (VM).
> +
> +These mappings (also referred to as persistent mappings) will be persistent
> +across multiple GPU submissions (execbuff) issued by the UMD, without user
> +having to provide a list of all required mappings during each submission
> +(as required by older execbuff mode).
> +
> +VM_BIND ioctl defers binding the mappings until the next execbuff submission
> +where it will be required, or immediately if I915_GEM_VM_BIND_IMMEDIATE
> +flag is set (useful if mapping is required for an active context).
> +
> +VM_BIND feature is advertised to user via I915_PARAM_HAS_VM_BIND.
> +User has to opt-in for VM_BIND mode of binding for an address space (VM)
> +during VM creation time via I915_VM_CREATE_FLAGS_USE_VM_BIND extension.
> +A VM in VM_BIND mode will not support older execbuff mode of binding.
> +
> +UMDs can still send BOs of these persistent mappings in execlist of execbuff
> +for specifying BO dependencies (implicit fencing) and to use BO as a batch,
> +but those BOs should be mapped ahead via vm_bind ioctl.
> +
> +VM_BIND features include,
> +- Multiple Virtual Address (VA) mappings can map to the same physical pages
> +  of an object (aliasing).
> +- VA mapping can map to a partial section of the BO (partial binding).
> +- Support capture of persistent mappings in the dump upon GPU error.
> +- TLB is flushed upon unbind completion. Batching of TLB flushes in some
> +  usecases will be helpful.
> +- Asynchronous vm_bind and vm_unbind support.
> +- VM_BIND uses user/memory fence mechanism for signaling bind completion
> +  and for signaling batch completion in long running contexts (explained
> +  below).
> +
> +VM_PRIVATE objects
> +--
> +By default, BOs can be mapped on multiple VMs and can also be dma-buf
> +exported. Hence these BOs are referred to as Shared BOs.
> +During each execbuff submission, the request fence must be added to the
> +dma-resv fence list of all shared BOs mapped on the VM.
> +
> +VM_BIND feature introduces an optimization where user can create BO which
> +is private to a specified VM via I915_GEM_CREATE_EXT_VM_PRIVATE flag during
> +BO creation. Unlike Shared BOs, these VM private BOs can only be mapped on
> +the VM they are private to and can't be dma-buf exported.
> +All private BOs of a VM share the dma-resv object. Hence during each execbuff
> +submission, they need only one dma-resv fence list updated. Thus the fast
> +path (where required mappings are already bound) submission latency is O(1)
> +w.r.t the number of VM private BOs.
> +
> +VM_BIND locking hierarchy
> +-
> +VM_BIND locking order is as below.
> +
> +1) A vm_bind mutex will protect vm_bind lists. This lock is taken in vm_bind/
> +   vm_unbind ioctl calls, in the execbuff path and while releasing the 
> mapping.
> +
> +   In future, when GPU page faults are supported, we can potentially use a
> +   rwsem instead, so that multiple pagefault handlers can take the read side
> +   lock to lookup the mapping and hence can run in parallel.
> +
> +2) The BO's dma-resv lock will protect i915_vma state and needs to be held
> +   while binding a vma and while updating dma-resv fence list of a BO.
> +   The private BOs of a VM will all share a dma-resv object.
> +
> +   This lock is held in vm_bind call for immediate binding, during vm_unbind
> +   call for unbinding and during execbuff path for binding the mapping and
> +   updating the dma-resv fence list of the BO.
> +
> +3) Spinlock/s to protect some of the VM's lists.
> +
> +We will also need support for bulk LRU movement of persistent mappings to
> +avoid additional latencies in execbuff path.
> +
> +GPU page faults
> +
> +Both older execbuff mode and the newer VM_BIND mode of binding will require
> +using dma-fence to ensure residency.
> +In future when GPU page faults are supported, no dma-fence usage is required
> +as residency is purely managed by installing and removing/invalidating ptes.
> +
> +
> +User/Memory Fence

Re: [Intel-gfx] [PATCH 01/21] MAINTAINERS: Add entry for fbdev core

2022-02-02 Thread Alex Deucher
Acked-by: Alex Deucher 

On Wed, Feb 2, 2022 at 6:31 AM Maxime Ripard  wrote:
>
> On Mon, Jan 31, 2022 at 10:05:32PM +0100, Daniel Vetter wrote:
> > Ever since Tomi extracted the core code in 2014 it's been defacto me
> > maintaining this, with help from others from dri-devel and sometimes
> > Linus (but those are mostly merge conflicts):
> >
> > $ git shortlog -ns  drivers/video/fbdev/core/ | head -n5
> > 35  Daniel Vetter
> > 23  Linus Torvalds
> > 10  Hans de Goede
> >  9  Dave Airlie
> >  6  Peter Rosin
> >
> > I think ideally we'd also record that the various firmware fb drivers
> > (efifb, vesafb, ...) are also maintained in drm-misc because for the
> > past few years the patches have either been to fix handover issues
> > with drm drivers, or caused handover issues with drm drivers. So any
> > other tree just doesn't make sense. But also, there's plenty of
> > outdated MAINTAINER entries for these with people and git trees that
> > haven't been active in years, so maybe let's just leave them alone.
> > And furthermore distros are now adopting simpledrm as the firmware fb
> > driver, so hopefully the need to care about the fbdev firmware drivers
> > will go down going forward.
> >
> > Note that drm-misc is group maintained, I expect that to continue like
> > we've done before, so no new expectations that patches all go through
> > my hands. That would be silly. This also means I'm happy to put any
> > other volunteer's name in the M: line, but otherwise git log says I'm
> > the one who's stuck with this.
> >
> > Cc: Dave Airlie 
> > Cc: Jani Nikula 
> > Cc: Linus Torvalds 
> > Cc: Linux Fbdev development list 
> > Cc: Pavel Machek 
> > Cc: Sam Ravnborg 
> > Cc: Greg Kroah-Hartman 
> > Cc: Javier Martinez Canillas 
> > Cc: DRI Development 
> > Cc: Linux Kernel Mailing List 
> > Cc: Claudio Suarez 
> > Cc: Tomi Valkeinen 
> > Cc: Geert Uytterhoeven 
> > Cc: Thomas Zimmermann 
> > Cc: Daniel Vetter 
> > Cc: Sven Schnelle 
> > Cc: Gerd Hoffmann 
> > Signed-off-by: Daniel Vetter 
>
> Acked-by: Maxime Ripard 
>
> Maxime


Re: [Intel-gfx] [PATCH 6/6] drm/amdgpu: use dma_fence_chain_contained

2022-02-04 Thread Alex Deucher
On Fri, Feb 4, 2022 at 5:04 AM Christian König
 wrote:
>
> Instead of manually extracting the fence.
>
> Signed-off-by: Christian König 

Reviewed-by: Alex Deucher 
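
(For context, dma_fence_chain_contained() is expected to boil down to exactly
the open-coded pattern being removed here, roughly:)

    static inline struct dma_fence *
    dma_fence_chain_contained(struct dma_fence *fence)
    {
            struct dma_fence_chain *chain = to_dma_fence_chain(fence);

            return chain ? chain->fence : fence;
    }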

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> index f7d8487799b2..40e06745fae9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> @@ -261,10 +261,9 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct 
> amdgpu_sync *sync,
>
> dma_resv_for_each_fence(&cursor, resv, true, f) {
> dma_fence_chain_for_each(f, f) {
> -   struct dma_fence_chain *chain = to_dma_fence_chain(f);
> +   struct dma_fence *tmp = dma_fence_chain_contained(f);
>
> -   if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
> -  chain->fence : f)) {
> +   if (amdgpu_sync_test_fence(adev, mode, owner, tmp)) {
> r = amdgpu_sync_fence(sync, f);
> dma_fence_put(f);
> if (r)
> --
> 2.25.1
>


Re: [Intel-gfx] [PATCH 0/6] Remove unused declarations for gpu/drm

2022-09-13 Thread Alex Deucher
Pushed patches 1-5 to drm-misc-next.

Alex

On Tue, Sep 13, 2022 at 2:14 AM Christian König
 wrote:
>
> Nice cleanup. Acked-by: Christian König  for
> the whole series.
>
> Thanks,
> Christian.
>
> Am 13.09.22 um 04:48 schrieb Gaosheng Cui:
> > This series contains a few cleanup patches, to remove unused
> > declarations which have been removed. Thanks!
> >
> > Gaosheng Cui (6):
> >drm/vmwgfx: remove unused vmw_bo_is_vmw_bo() declaration
> >drm/radeon/r600_cs: remove r600_cs_legacy_get_tiling_conf()
> >  declaration
> >drm/radeon: remove unused declarations for radeon
> >drm/gma500: remove unused declarations in psb_intel_drv.h
> >drm/amd/pm: remove unused declarations in hardwaremanager.h
> >drm/i915: remove unused i915_gem_lmem_obj_ops declaration
> >
> >   drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h | 2 --
> >   drivers/gpu/drm/gma500/psb_intel_drv.h | 5 -
> >   drivers/gpu/drm/i915/gem/i915_gem_lmem.h   | 2 --
> >   drivers/gpu/drm/radeon/r600_cs.c   | 2 --
> >   drivers/gpu/drm/radeon/radeon.h| 3 ---
> >   drivers/gpu/drm/radeon/radeon_mode.h   | 1 -
> >   drivers/gpu/drm/vmwgfx/vmwgfx_drv.h| 1 -
> >   7 files changed, 16 deletions(-)
> >
>


Re: [Intel-gfx] [PATCH TRIVIAL v2] gpu: Fix Kconfig indentation

2019-10-07 Thread Alex Deucher
On Mon, Oct 7, 2019 at 7:39 AM Jani Nikula  wrote:
>
> On Fri, 04 Oct 2019, Krzysztof Kozlowski  wrote:
> >  drivers/gpu/drm/i915/Kconfig |  12 +-
> >  drivers/gpu/drm/i915/Kconfig.debug   | 144 +++
>
> Please split these out to a separate patch. Can't speak for others, but
> the patch looks like it'll be conflicts galore and a problem to manage
> if merged in one big lump.

Yes, it would be nice to have the amd patch separate as well.

Alex

>
> BR,
> Jani.
>
>
> --
> Jani Nikula, Intel Open Source Graphics Center
> ___
> amd-gfx mailing list
> amd-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] linux-next: build failure after merge of the drm-misc tree

2019-10-09 Thread Alex Deucher
Applied.  thanks!

Alex

On Tue, Oct 8, 2019 at 8:36 PM Stephen Rothwell  wrote:
>
> Hi all,
>
> After merging the drm-misc tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
>
> In file included from drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_module.c:25:
> drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_priv.h:40:10: fatal error: 
> drm/drmP.h: No such file or directory
>40 | #include 
>   |  ^~~~
> In file included from drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_chardev.c:38:
> drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_priv.h:40:10: fatal error: 
> drm/drmP.h: No such file or directory
>40 | #include 
>   |  ^~~~
> In file included from drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device.c:26:
> drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_priv.h:40:10: fatal error: 
> drm/drmP.h: No such file or directory
>40 | #include 
>   |  ^~~~
> In file included from drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_topology.c:34:
> drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_priv.h:40:10: fatal error: 
> drm/drmP.h: No such file or directory
>40 | #include 
>   |  ^~~~
>
>
> Caused by commit
>
>   4e98f871bcff ("drm: delete drmP.h + drm_os_linux.h")
>
> interacting with commit
>
>   6b855f7b83d2 ("drm/amdkfd: Check against device cgroup")
>
> from the amdgpu tree.
>
> I added the following merge fix patch for today:
>
> From: Stephen Rothwell 
> Date: Wed, 9 Oct 2019 11:24:38 +1100
> Subject: [PATCH] drm/amdkfd: update for drmP.h removal
>
> Signed-off-by: Stephen Rothwell 
> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_priv.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h 
> b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> index b8b4485c8f74..41bc0428bfc0 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> @@ -37,7 +37,9 @@
>  #include 
>  #include 
>  #include 
> -#include 
> +#include 
> +#include 
> +#include 
>  #include 
>
>  #include "amd_shared.h"
> @@ -49,8 +51,6 @@
>  /* GPU ID hash width in bits */
>  #define KFD_GPU_ID_HASH_WIDTH 16
>
> -struct drm_device;
> -
>  /* Use upper bits of mmap offset to store KFD driver specific information.
>   * BITS[63:62] - Encode MMAP type
>   * BITS[61:46] - Encode gpu_id. To identify to which GPU the offset belongs 
> to
> --
> 2.23.0
>
> --
> Cheers,
> Stephen Rothwell
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] linux-next: build failure after merge of the drm-misc tree

2019-10-16 Thread Alex Deucher
Applied.  Thanks!

Alex

On Tue, Oct 15, 2019 at 8:22 PM Stephen Rothwell  wrote:
>
> Hi all,
>
> After merging the drm-misc tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c:23:10: fatal error: drm/drmP.h: No 
> such file or directory
>23 | #include 
>   |  ^~~~
>
> Caused by commit
>
>   4e98f871bcff ("drm: delete drmP.h + drm_os_linux.h")
>
> interacting with commit
>
>   8b8c294c5d37 ("drm/amdgpu: add function to check tmz capability (v4)")
>
> from the amdgpu tree.
>
> I applied the following merge fix patch for today (which should also
> apply to the amdgpu tree).
>
> From: Stephen Rothwell 
> Date: Wed, 16 Oct 2019 11:17:32 +1100
> Subject: [PATCH] drm/amdgpu: fix up for amdgpu_tmz.c and removal of drm/drmP.h
>
> Signed-off-by: Stephen Rothwell 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
> index 14a55003dd81..823527a0fa47 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
> @@ -20,7 +20,10 @@
>   * OTHER DEALINGS IN THE SOFTWARE.
>   */
>
> -#include 
> +#include 
> +
> +#include 
> +
>  #include "amdgpu.h"
>  #include "amdgpu_tmz.h"
>
> --
> 2.23.0
>
> --
> Cheers,
> Stephen Rothwell
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH] drm/amdgpu/dm: Do not throw an error for a display with no audio

2019-11-14 Thread Alex Deucher
On Thu, Nov 14, 2019 at 4:23 PM Harry Wentland  wrote:
>
> On 2019-11-14 3:44 p.m., Chris Wilson wrote:
> > An old display with no audio may not have an EDID with a CEA block, or
> > it may simply be too old to support audio. This is not a driver error,
> > so don't flag it as such.
> >
> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112140
> > References: ae2a3495973e ("drm/amd: be quiet when no SAD block is found")
> > Signed-off-by: Chris Wilson 
>
> Reviewed-by: Harry Wentland 
>

Applied.  thanks!

Alex

> Harry
>
> > Cc: Harry Wentland 
> > Cc: Jean Delvare 
> > Cc: Alex Deucher 
> > ---
> >  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 2 --
> >  1 file changed, 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
> > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > index 11e5784aa62a..04808dbecab3 100644
> > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
> > @@ -97,8 +97,6 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
> >   (struct edid *) edid->raw_edid);
> >
> >   sad_count = drm_edid_to_sad((struct edid *) edid->raw_edid, &sads);
> > - if (sad_count < 0)
> > - DRM_ERROR("Couldn't read SADs: %d\n", sad_count);
> >   if (sad_count <= 0)
> >   return result;
> >
> >
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] drm core/helpers and MIT license

2019-11-15 Thread Alex Deucher
On Tue, Nov 12, 2019 at 10:03 AM Daniel Vetter  wrote:
>
> Hi all,
>
> Dave and me chatted about this last week on irc. Essentially we have:
>
> $ git grep SPDX.*GPL -- ':(glob)drivers/gpu/drm/*c'
> drivers/gpu/drm/drm_client.c:// SPDX-License-Identifier: GPL-2.0
> drivers/gpu/drm/drm_damage_helper.c:// SPDX-License-Identifier: GPL-2.0 OR MIT
> drivers/gpu/drm/drm_dp_cec.c:// SPDX-License-Identifier: GPL-2.0
> drivers/gpu/drm/drm_edid_load.c:// SPDX-License-Identifier: GPL-2.0-or-later
> drivers/gpu/drm/drm_fb_cma_helper.c:// SPDX-License-Identifier: 
> GPL-2.0-or-later
> drivers/gpu/drm/drm_format_helper.c:/* SPDX-License-Identifier: GPL-2.0 */
> drivers/gpu/drm/drm_gem_cma_helper.c:// SPDX-License-Identifier:
> GPL-2.0-or-later
> drivers/gpu/drm/drm_gem_framebuffer_helper.c://
> SPDX-License-Identifier: GPL-2.0-or-later
> drivers/gpu/drm/drm_gem_shmem_helper.c:// SPDX-License-Identifier: GPL-2.0
> drivers/gpu/drm/drm_gem_ttm_helper.c:// SPDX-License-Identifier:
> GPL-2.0-or-later
> drivers/gpu/drm/drm_gem_vram_helper.c:// SPDX-License-Identifier:
> GPL-2.0-or-later
> drivers/gpu/drm/drm_hdcp.c:// SPDX-License-Identifier: GPL-2.0
> drivers/gpu/drm/drm_lease.c:// SPDX-License-Identifier: GPL-2.0-or-later
> drivers/gpu/drm/drm_mipi_dbi.c:// SPDX-License-Identifier: GPL-2.0-or-later
> drivers/gpu/drm/drm_of.c:// SPDX-License-Identifier: GPL-2.0-only
> drivers/gpu/drm/drm_simple_kms_helper.c:// SPDX-License-Identifier:
> GPL-2.0-or-later
> drivers/gpu/drm/drm_sysfs.c:// SPDX-License-Identifier: GPL-2.0-only
> drivers/gpu/drm/drm_vma_manager.c:// SPDX-License-Identifier: GPL-2.0 OR MIT
> drivers/gpu/drm/drm_vram_helper_common.c:// SPDX-License-Identifier:
> GPL-2.0-or-later
> drivers/gpu/drm/drm_writeback.c:// SPDX-License-Identifier: GPL-2.0
>
> One is GPL+MIT, so ok, and one is a default GPL-only header from
> Greg's infamous patch (so could probably be changed to MIT license
> header). I only looked at .c sources, since headers are worse wrt
> having questionable default headers. So about 18 files with clear GPL
> licenses thus far in drm core/helpers.
>
> Looking at where that code came from, it is mostly from GPL-only
> drivers (we have a lot of those nowadays), so seems legit non-MIT
> licensed. Question is now what do we do:
>
> - Nothing, which means GPL will slowly encroach on drm core/helpers,
> which is roughly the same as ...
>
> - Throw in the towel on MIT drm core officially. Same as above, except
> lets just make it official.
>
> - Try to counter this, which means at least a) relicensing a bunch of
> stuff b) rewriting a bunch of stuff c) making sure that's ok with
> everyone, there's a lot of GPL-by-default for the kernel (that's how
> we got most of the above code through merged drivers I think). I
> suspect that whomever cares will need to put in the work to make this
> happen (since it will need a pile of active resistance at least).
>

I'd like to try and keep as much MIT as possible.  I'd be willing to
help with the re-licensing effort.

Alex

> Cc maintainers/driver teams who might care most about this.
>
> Also if people could cc *bsd, they probably care and I don't know best
> contacts for graphics stuff (or anything else really at all).
>
> Cheers, Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v2 2/2] drm: share address space for dma bufs

2019-11-22 Thread Alex Deucher
On Fri, Nov 22, 2019 at 4:17 AM Daniel Vetter  wrote:
>
> On Fri, Nov 22, 2019 at 7:37 AM Gerd Hoffmann  wrote:
> >
> > Use the shared address space of the drm device (see drm_open() in
> > drm_file.c) for dma-bufs too.  That removes a difference betweem drm
> > device mmap vmas and dma-buf mmap vmas and fixes corner cases like
> > dropping ptes (using madvise(DONTNEED) for example) not working
> > properly.
> >
> > Also remove amdgpu driver's private dmabuf update.  It is not needed
> > any more now that we are doing this for everybody.
> >
> > Signed-off-by: Gerd Hoffmann 
>
> Reviewed-by: Daniel Vetter 
>
> But I think you want at least an ack from amd guys for double checking here.
> -Daniel

Looks correct to me.

Reviewed-by: Alex Deucher 


> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 +---
> >  drivers/gpu/drm/drm_prime.c | 4 +++-
> >  2 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > index d5bcdfefbad6..586db4fb46bd 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> > @@ -361,10 +361,8 @@ struct dma_buf *amdgpu_gem_prime_export(struct 
> > drm_gem_object *gobj,
> > return ERR_PTR(-EPERM);
> >
> > buf = drm_gem_prime_export(gobj, flags);
> > -   if (!IS_ERR(buf)) {
> > -   buf->file->f_mapping = gobj->dev->anon_inode->i_mapping;
> > +   if (!IS_ERR(buf))
> > buf->ops = &amdgpu_dmabuf_ops;
> > -   }
> >
> > return buf;
> >  }
> > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > index a9633bd241bb..c3fc341453c0 100644
> > --- a/drivers/gpu/drm/drm_prime.c
> > +++ b/drivers/gpu/drm/drm_prime.c
> > @@ -240,6 +240,7 @@ void drm_prime_destroy_file_private(struct 
> > drm_prime_file_private *prime_fpriv)
> >  struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
> >   struct dma_buf_export_info *exp_info)
> >  {
> > +   struct drm_gem_object *obj = exp_info->priv;
> > struct dma_buf *dma_buf;
> >
> > dma_buf = dma_buf_export(exp_info);
> > @@ -247,7 +248,8 @@ struct dma_buf *drm_gem_dmabuf_export(struct drm_device 
> > *dev,
> > return dma_buf;
> >
> > drm_dev_get(dev);
> > -   drm_gem_object_get(exp_info->priv);
> > +   drm_gem_object_get(obj);
> > +   dma_buf->file->f_mapping = obj->dev->anon_inode->i_mapping;
> >
> > return dma_buf;
> >  }
> > --
> > 2.18.1
> >
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v3 1/9] drm/amd/display: Align macro name as per DP spec

2020-01-06 Thread Alex Deucher
On Fri, Jan 3, 2020 at 6:53 PM Manasi Navare  wrote:
>
> Harry, Jani - Since this also updates the AMD driver file, should this be 
> merged through
> AMD tree and then backmerged to drm-misc ?

Take it through whatever tree is easiest for you.

Alex

>
> Manasi
>
> On Mon, Dec 30, 2019 at 09:45:15PM +0530, Animesh Manna wrote:
> > [Why]:
> > Align with the DP spec; we want to follow the same naming convention.
> >
> > [How]:
> > Changed the macro name of the dpcd address used for getting requested
> > test-pattern.
> >
> > Cc: Harry Wentland 
> > Cc: Alex Deucher 
> > Reviewed-by: Harry Wentland 
> > Signed-off-by: Animesh Manna 
> > ---
> >  drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +-
> >  include/drm/drm_dp_helper.h  | 2 +-
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
> > b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > index 42aa889fd0f5..1a6109be2fce 100644
> > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
> > @@ -2491,7 +2491,7 @@ static void dp_test_send_phy_test_pattern(struct 
> > dc_link *link)
> >   /* get phy test pattern and pattern parameters from DP receiver */
> >   core_link_read_dpcd(
> >   link,
> > - DP_TEST_PHY_PATTERN,
> > + DP_PHY_TEST_PATTERN,
> >   &dpcd_test_pattern.raw,
> >   sizeof(dpcd_test_pattern));
> >   core_link_read_dpcd(
> > diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
> > index 8f8f3632e697..d6e560870fb1 100644
> > --- a/include/drm/drm_dp_helper.h
> > +++ b/include/drm/drm_dp_helper.h
> > @@ -699,7 +699,7 @@
> >  # define DP_TEST_CRC_SUPPORTED   (1 << 5)
> >  # define DP_TEST_COUNT_MASK  0xf
> >
> > -#define DP_TEST_PHY_PATTERN 0x248
> > +#define DP_PHY_TEST_PATTERN 0x248
> >  #define DP_TEST_80BIT_CUSTOM_PATTERN_7_00x250
> >  #define  DP_TEST_80BIT_CUSTOM_PATTERN_15_8   0x251
> >  #define  DP_TEST_80BIT_CUSTOM_PATTERN_23_16  0x252
> > --
> > 2.24.0
> >
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/dp: Add current maximum eDP link rate to sink_rate array.

2020-01-09 Thread Alex Deucher
On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner
 wrote:
>
> If the current eDP link rate, as read from hw, provides a
> higher bandwidth than the standard link rates, then add the
> current link rate to the link_rates array for consideration
> in future mode-sets.
>
> These initial current eDP link settings have been set up by
> firmware during boot, so they should work on the eDP panel.
> Therefore use them if the firmware thinks they are good and
> they provide higher link bandwidth, e.g., to enable higher
> resolutions / color depths.
>
> This fixes a problem found on the MacBookPro 2017 Retina panel:
>
> The panel reports 10 bpc color depth in its EDID, and the UEFI
> firmware chooses link settings at boot which support enough
> bandwidth for 10 bpc (324000 kbit/sec to be precise), but the
> DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible,
> so intel_dp_set_sink_rates() would cap at that. This restricts
> achievable color depth to 8 bpc, not providing the full color
> depth of the panel. With this commit, we can use firmware setting
> and get the full 10 bpc advertised by the Retina panel.

Would it make more sense to just add a quirk for this particular
panel?  Would there be cases where the link was programmed wrong and
then we end up using that additional link speed as supported?

Alex
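
For illustration only, a quirk along those lines would probably be keyed on the
panel's EDID manufacturer/product id rather than on whatever link rate the
firmware left programmed.  The sketch below is not existing i915 code: the
table, the helper and the id bytes are placeholders/assumptions (it only relies
on the mfg_id/prod_code fields of struct edid from <drm/drm_edid.h>).

/*
 * Illustrative sketch only -- not existing i915 code.  An eDP link-rate
 * quirk table keyed on the panel's EDID ids; the helper is hypothetical
 * and the id bytes below are placeholders, not the real panel ids.
 */
struct edp_rate_quirk {
        u8 mfg_id[2];           /* compared against struct edid.mfg_id */
        u8 prod_code[2];        /* compared against struct edid.prod_code */
        int extra_rate_khz;     /* additional sink rate to advertise */
};

static const struct edp_rate_quirk edp_rate_quirks[] = {
        /* placeholder entry for the 2017 MacBookPro Retina panel */
        { { 0x06, 0x10 }, { 0x00, 0x00 }, 324000 },
};

static int edp_rate_quirk_extra_rate(const struct edid *edid)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(edp_rate_quirks); i++) {
                const struct edp_rate_quirk *q = &edp_rate_quirks[i];

                if (!memcmp(q->mfg_id, edid->mfg_id, sizeof(q->mfg_id)) &&
                    !memcmp(q->prod_code, edid->prod_code,
                            sizeof(q->prod_code)))
                        return q->extra_rate_khz;
        }

        return 0;       /* no quirk: keep the standard sink rates */
}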

>
> Signed-off-by: Mario Kleiner 
> Cc: Daniel Vetter 
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 23 +++
>  1 file changed, 23 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 2f31d226c6eb..aa3e0b5108c6 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
>  {
> struct drm_i915_private *dev_priv =
> to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
> +   int max_rate;
> +   u8 link_bw;
>
> /* this function is meant to be called only once */
> WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0);
> @@ -4433,6 +4435,27 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
> else
> intel_dp_set_sink_rates(intel_dp);
>
> +   /*
> +* If the firmware programmed a rate higher than the standard sink 
> rates
> +* during boot, then add that rate as a valid sink rate, as fw knows
> +* this is a good rate and we get extra bandwidth.
> +*
> +* Helps, e.g., on the Apple MacBookPro 2017 Retina panel, which is 
> only
> +* eDP 1.1, but supports the unusual rate of 324000 kHz at bootup, for
> +* 10 bpc / 30 bit color depth.
> +*/
> +   if (!intel_dp->use_rate_select &&
> +   (drm_dp_dpcd_read(&intel_dp->aux, DP_LINK_BW_SET, &link_bw, 1) == 
> 1) &&
> +   (link_bw > 0) && (intel_dp->num_sink_rates < 
> DP_MAX_SUPPORTED_RATES)) {
> +   max_rate = drm_dp_bw_code_to_link_rate(link_bw);
> +   if (max_rate > intel_dp->sink_rates[intel_dp->num_sink_rates 
> - 1]) {
> +   intel_dp->sink_rates[intel_dp->num_sink_rates] = 
> max_rate;
> +   intel_dp->num_sink_rates++;
> +   DRM_DEBUG_KMS("Adding max bandwidth eDP rate %d 
> kHz.\n",
> + max_rate);
> +   }
> +   }
> +
> intel_dp_set_common_rates(intel_dp);
>
> /* Read the eDP DSC DPCD registers */
> --
> 2.24.0
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/dp: Add current maximum eDP link rate to sink_rate array.

2020-01-09 Thread Alex Deucher
On Thu, Jan 9, 2020 at 11:47 AM Mario Kleiner
 wrote:
>
> On Thu, Jan 9, 2020 at 4:40 PM Alex Deucher  wrote:
>>
>> On Thu, Jan 9, 2020 at 10:08 AM Mario Kleiner
>>  wrote:
>> >
>> > If the current eDP link rate, as read from hw, provides a
>> > higher bandwidth than the standard link rates, then add the
>> > current link rate to the link_rates array for consideration
>> > in future mode-sets.
>> >
>> > These initial current eDP link settings have been set up by
>> > firmware during boot, so they should work on the eDP panel.
>> > Therefore use them if the firmware thinks they are good and
>> > they provide higher link bandwidth, e.g., to enable higher
>> > resolutions / color depths.
>> >
>> > This fixes a problem found on the MacBookPro 2017 Retina panel:
>> >
>> > The panel reports 10 bpc color depth in its EDID, and the UEFI
>> > firmware chooses link settings at boot which support enough
>> > bandwidth for 10 bpc (324000 kbit/sec to be precise), but the
>> > DP_MAX_LINK_RATE dpcd register only reports 2.7 Gbps as possible,
>> > so intel_dp_set_sink_rates() would cap at that. This restricts
>> > achievable color depth to 8 bpc, not providing the full color
>> > depth of the panel. With this commit, we can use firmware setting
>> > and get the full 10 bpc advertised by the Retina panel.
>>
>> Would it make more sense to just add a quirk for this particular
>> panel?  Would there be cases where the link was programmed wrong and
>> then we end up using that additional link speed as supported?
>>
>> Alex
>>
>
> Not sure. This MBP 2017 is the only non-ancient laptop I have right now. I'd
> assume many other Apple Retina panels would behave similarly. The panel's DPCD
> regs report DP 1.1 and eDP 1.3, so the flexible table with additional modes
> from eDP 1.4+ does not exist. According to Wikipedia, eDP 1.4 was introduced
> in February 2013 and this is a mid-2017 machine, so Apple seems to be quite
> behind. Therefore I assume we'd need a lot of quirks over time.
>
> That said:
>
> 1. The logic in amdgpu's DC for the same purpose is a bit different from the
> Intel side.
>
> 2. DC allows overriding DP link settings; that's how I initially tested this,
> so one could do the "quirk" via something like that in a bootup script. So on
> AMD one could work around the lack of the patch and of quirks.
>
> 3. I spent a lot of time with a photo-meter, testing the quality of the 10 
> bit: It turns out that running the panel at 8 bit + AMD's spatial dithering 
> that kicks in gives better results than running the panel in native 10 bit. 
> Maybe the panel is not really a 10 bit one, but just pretends to be and then 
> uses its own dithering to achieve 10 bit. So at least on AMD one is better 
> off precision-wise with the 8 bit panel default with this specific panel.
>
> On Intel however, we don't do dithering for > 6 bpc panels atm., so using the
> panel at 10 bpc is currently the only way to get a 10 bit display. The reason
> we don't dither on > 6 bpc panels is that there are some oddities in the way
> Intel hw dithers at higher bit depths - it also dithers pixel values where it
> shouldn't. That makes it impossible to get an identity passthrough of an 8 bpc
> framebuffer to the outputs, which kills all kinds of special display equipment
> that needs that identity passthrough to work.
>

As Harry mentioned in the other thread, won't this only work if the
display was brought up by the vbios?  In the suspend/resume case,
won't we just fall back to 2.7Gbps?

Alex

> -mario
>
>> >
>> > Signed-off-by: Mario Kleiner 
>> > Cc: Daniel Vetter 
>> > ---
>> >  drivers/gpu/drm/i915/display/intel_dp.c | 23 +++
>> >  1 file changed, 23 insertions(+)
>> >
>> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
>> > b/drivers/gpu/drm/i915/display/intel_dp.c
>> > index 2f31d226c6eb..aa3e0b5108c6 100644
>> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
>> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
>> > @@ -4368,6 +4368,8 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
>> >  {
>> > struct drm_i915_private *dev_priv =
>> > to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
>> > +   int max_rate;
>> > +   u8 link_bw;
>> >
>> > /* this function is meant to be called only once */
>> > WARN_ON(intel_dp->dpcd[DP_DPCD_REV] != 0);
>

Re: [Intel-gfx] [PATCH 02/23] drm/amdgpu: Convert to struct drm_crtc_helper_funcs.get_scanout_position()

2020-01-13 Thread Alex Deucher
On Fri, Jan 10, 2020 at 4:21 AM Thomas Zimmermann  wrote:
>
> The callback struct drm_driver.get_scanout_position() is deprecated in
> favor of struct drm_crtc_helper_funcs.get_scanout_position(). Convert
> amdgpu over.
>

I would prefer to just change the signature of
amdgpu_display_get_crtc_scanoutpos() to match the new API rather than
wrapping it again.

Alex

> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 12 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 11 ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  5 +
>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c|  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v11_0.c|  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 +
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  3 ++-
>  9 files changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 4e699071d144..a1e769d4417d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -914,3 +914,15 @@ int amdgpu_display_crtc_idx_to_irq_type(struct 
> amdgpu_device *adev, int crtc)
> return AMDGPU_CRTC_IRQ_NONE;
> }
>  }
> +
> +bool amdgpu_crtc_get_scanout_position(struct drm_crtc *crtc,
> +   bool in_vblank_irq, int *vpos,
> +   int *hpos, ktime_t *stime, ktime_t *etime,
> +   const struct drm_display_mode *mode)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> + stime, etime, mode);
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 3f6f14ce1511..0749285dd1c7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1367,16 +1367,6 @@ int amdgpu_file_to_fpriv(struct file *filp, struct 
> amdgpu_fpriv **fpriv)
> return 0;
>  }
>
> -static bool
> -amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
> -bool in_vblank_irq, int *vpos, int *hpos,
> -ktime_t *stime, ktime_t *etime,
> -const struct drm_display_mode *mode)
> -{
> -   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> - stime, etime, mode);
> -}
> -
>  static struct drm_driver kms_driver = {
> .driver_features =
> DRIVER_USE_AGP | DRIVER_ATOMIC |
> @@ -1391,7 +1381,6 @@ static struct drm_driver kms_driver = {
> .enable_vblank = amdgpu_enable_vblank_kms,
> .disable_vblank = amdgpu_disable_vblank_kms,
> .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> -   .get_scanout_position = amdgpu_get_crtc_scanout_position,
> .irq_handler = amdgpu_irq_handler,
> .ioctls = amdgpu_ioctls_kms,
> .gem_free_object_unlocked = amdgpu_gem_object_free,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> index eb9975f4decb..37ba07e2feb5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> @@ -612,6 +612,11 @@ void amdgpu_panel_mode_fixup(struct drm_encoder *encoder,
>  struct drm_display_mode *adjusted_mode);
>  int amdgpu_display_crtc_idx_to_irq_type(struct amdgpu_device *adev, int 
> crtc);
>
> +bool amdgpu_crtc_get_scanout_position(struct drm_crtc *crtc,
> +   bool in_vblank_irq, int *vpos,
> +   int *hpos, ktime_t *stime, ktime_t *etime,
> +   const struct drm_display_mode *mode);
> +
>  /* fbdev layer */
>  int amdgpu_fbdev_init(struct amdgpu_device *adev);
>  void amdgpu_fbdev_fini(struct amdgpu_device *adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> index 40d2ac723dd6..bdc1e0f036d4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> @@ -2685,6 +2685,7 @@ static const struct drm_crtc_helper_funcs 
> dce_v10_0_crtc_helper_funcs = {
> .prepare = dce_v10_0_crtc_prepare,
> .commit = dce_v10_0_crtc_commit,
> .disable = dce_v10_0_crtc_disable,
> +   .get_scanout_position = amdgpu_crtc_get_scanout_position,
>  };
>
>  static int dce_v10_0_crtc_init(struct amdgpu_device *adev, int index)
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
> index 898ef72d423c..0319da5f7

Re: [Intel-gfx] [PATCH 05/23] drm/radeon: Convert to struct drm_crtc_helper_funcs.get_scanout_position()

2020-01-13 Thread Alex Deucher
On Fri, Jan 10, 2020 at 4:22 AM Thomas Zimmermann  wrote:
>
> The callback struct drm_driver.get_scanout_position() is deprecated in
> favor of struct drm_crtc_helper_funcs.get_scanout_position(). Convert
> radeon over.
>

I'd prefer to just change the signature of
radeon_get_crtc_scanoutpos() to match the new API.

Alex

> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/radeon/atombios_crtc.c  |  1 +
>  drivers/gpu/drm/radeon/radeon_display.c | 13 +
>  drivers/gpu/drm/radeon/radeon_drv.c | 11 ---
>  drivers/gpu/drm/radeon/radeon_legacy_crtc.c |  3 ++-
>  drivers/gpu/drm/radeon/radeon_mode.h|  6 ++
>  5 files changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c 
> b/drivers/gpu/drm/radeon/atombios_crtc.c
> index da2c9e295408..447d74b78f19 100644
> --- a/drivers/gpu/drm/radeon/atombios_crtc.c
> +++ b/drivers/gpu/drm/radeon/atombios_crtc.c
> @@ -2232,6 +2232,7 @@ static const struct drm_crtc_helper_funcs 
> atombios_helper_funcs = {
> .prepare = atombios_crtc_prepare,
> .commit = atombios_crtc_commit,
> .disable = atombios_crtc_disable,
> +   .get_scanout_position = radeon_get_crtc_scanout_position,
>  };
>
>  void radeon_atombios_init_crtc(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/radeon/radeon_display.c 
> b/drivers/gpu/drm/radeon/radeon_display.c
> index 962575e27cde..7187158b9963 100644
> --- a/drivers/gpu/drm/radeon/radeon_display.c
> +++ b/drivers/gpu/drm/radeon/radeon_display.c
> @@ -1978,3 +1978,16 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, 
> unsigned int pipe,
>
> return ret;
>  }
> +
> +bool
> +radeon_get_crtc_scanout_position(struct drm_crtc *crtc,
> +bool in_vblank_irq, int *vpos, int *hpos,
> +ktime_t *stime, ktime_t *etime,
> +const struct drm_display_mode *mode)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return radeon_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> + stime, etime, mode);
> +}
> diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
> b/drivers/gpu/drm/radeon/radeon_drv.c
> index fd74e2611185..1f597f166bff 100644
> --- a/drivers/gpu/drm/radeon/radeon_drv.c
> +++ b/drivers/gpu/drm/radeon/radeon_drv.c
> @@ -563,16 +563,6 @@ static const struct file_operations 
> radeon_driver_kms_fops = {
>  #endif
>  };
>
> -static bool
> -radeon_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
> -bool in_vblank_irq, int *vpos, int *hpos,
> -ktime_t *stime, ktime_t *etime,
> -const struct drm_display_mode *mode)
> -{
> -   return radeon_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> - stime, etime, mode);
> -}
> -
>  static struct drm_driver kms_driver = {
> .driver_features =
> DRIVER_USE_AGP | DRIVER_GEM | DRIVER_RENDER,
> @@ -585,7 +575,6 @@ static struct drm_driver kms_driver = {
> .enable_vblank = radeon_enable_vblank_kms,
> .disable_vblank = radeon_disable_vblank_kms,
> .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> -   .get_scanout_position = radeon_get_crtc_scanout_position,
> .irq_preinstall = radeon_driver_irq_preinstall_kms,
> .irq_postinstall = radeon_driver_irq_postinstall_kms,
> .irq_uninstall = radeon_driver_irq_uninstall_kms,
> diff --git a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c 
> b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> index a1985a552794..8817fd033cd0 100644
> --- a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> +++ b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> @@ -,7 +,8 @@ static const struct drm_crtc_helper_funcs 
> legacy_helper_funcs = {
> .mode_set_base_atomic = radeon_crtc_set_base_atomic,
> .prepare = radeon_crtc_prepare,
> .commit = radeon_crtc_commit,
> -   .disable = radeon_crtc_disable
> +   .disable = radeon_crtc_disable,
> +   .get_scanout_position = radeon_get_crtc_scanout_position,
>  };
>
>
> diff --git a/drivers/gpu/drm/radeon/radeon_mode.h 
> b/drivers/gpu/drm/radeon/radeon_mode.h
> index fd470d6bf3f4..06c4c527d376 100644
> --- a/drivers/gpu/drm/radeon/radeon_mode.h
> +++ b/drivers/gpu/drm/radeon/radeon_mode.h
> @@ -881,6 +881,12 @@ extern int radeon_get_crtc_scanoutpos(struct drm_device 
> *dev, unsigned int pipe,
>   ktime_t *stime, ktime_t *etime,
>   const struct drm_display_mode *mode);
>
> +extern bool radeon_get_crtc_scanout_position(struct drm_crtc *crtc,
> +bool in_vblank_irq, int *vpos,
> +int *hpos, ktime_t *stime,
> +   

Re: [Intel-gfx] [PATCH 12/23] drm/amdgpu: Convert to CRTC VBLANK callbacks

2020-01-13 Thread Alex Deucher
On Fri, Jan 10, 2020 at 4:22 AM Thomas Zimmermann  wrote:
>
> VBLANK callbacks in struct drm_driver are deprecated in favor of
> their equivalents in struct drm_crtc_funcs. Convert amdgpu over.

I think I'd prefer to just update the signatures of the relevant
functions rather than wrapping them.

Alex
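
For illustration, updating one of the handlers in place to the drm_crtc_funcs
signature would look roughly like the sketch below (the irq call is taken from
the existing handler visible in the diff; the adev lookup is an assumption
about the current code, not an actual patch).  The counter and disable
handlers would change the same way.

int amdgpu_enable_vblank_kms(struct drm_crtc *crtc)
{
        /* sketch: derive the device/pipe from the crtc instead of taking
         * (dev, pipe) parameters, so no wrapper is needed */
        struct amdgpu_device *adev = crtc->dev->dev_private;
        int idx = amdgpu_display_crtc_idx_to_irq_type(adev, crtc->index);

        return amdgpu_irq_get(adev, &adev->crtc_irq, idx);
}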

>
> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  3 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 24 +++
>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c|  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v11_0.c|  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c |  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c |  4 
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  4 
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  2 ++
>  9 files changed, 49 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 81a531b652aa..c1262ab588c9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1197,6 +1197,9 @@ int amdgpu_device_resume(struct drm_device *dev, bool 
> fbcon);
>  u32 amdgpu_get_vblank_counter_kms(struct drm_device *dev, unsigned int pipe);
>  int amdgpu_enable_vblank_kms(struct drm_device *dev, unsigned int pipe);
>  void amdgpu_disable_vblank_kms(struct drm_device *dev, unsigned int pipe);
> +u32 amdgpu_crtc_get_vblank_counter(struct drm_crtc *crtc);
> +int amdgpu_crtc_enable_vblank(struct drm_crtc *crtc);
> +void amdgpu_crtc_disable_vblank(struct drm_crtc *crtc);
>  long amdgpu_kms_compat_ioctl(struct file *filp, unsigned int cmd,
>  unsigned long arg);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 0749285dd1c7..9baa1ddf8693 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1377,10 +1377,6 @@ static struct drm_driver kms_driver = {
> .postclose = amdgpu_driver_postclose_kms,
> .lastclose = amdgpu_driver_lastclose_kms,
> .unload = amdgpu_driver_unload_kms,
> -   .get_vblank_counter = amdgpu_get_vblank_counter_kms,
> -   .enable_vblank = amdgpu_enable_vblank_kms,
> -   .disable_vblank = amdgpu_disable_vblank_kms,
> -   .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> .irq_handler = amdgpu_irq_handler,
> .ioctls = amdgpu_ioctls_kms,
> .gem_free_object_unlocked = amdgpu_gem_object_free,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index 60591dbc2097..efe4671fb032 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> @@ -1174,6 +1174,14 @@ u32 amdgpu_get_vblank_counter_kms(struct drm_device 
> *dev, unsigned int pipe)
> return count;
>  }
>
> +u32 amdgpu_crtc_get_vblank_counter(struct drm_crtc *crtc)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return amdgpu_get_vblank_counter_kms(dev, pipe);
> +}
> +
>  /**
>   * amdgpu_enable_vblank_kms - enable vblank interrupt
>   *
> @@ -1191,6 +1199,14 @@ int amdgpu_enable_vblank_kms(struct drm_device *dev, 
> unsigned int pipe)
> return amdgpu_irq_get(adev, &adev->crtc_irq, idx);
>  }
>
> +int amdgpu_crtc_enable_vblank(struct drm_crtc *crtc)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return amdgpu_enable_vblank_kms(dev, pipe);
> +}
> +
>  /**
>   * amdgpu_disable_vblank_kms - disable vblank interrupt
>   *
> @@ -1207,6 +1223,14 @@ void amdgpu_disable_vblank_kms(struct drm_device *dev, 
> unsigned int pipe)
> amdgpu_irq_put(adev, &adev->crtc_irq, idx);
>  }
>
> +void amdgpu_crtc_disable_vblank(struct drm_crtc *crtc)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   amdgpu_disable_vblank_kms(dev, pipe);
> +}
> +
>  const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
> DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, 
> DRM_AUTH|DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, 
> DRM_AUTH|DRM_RENDER_ALLOW),
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> index bdc1e0f036d4..8e62f46f0bfd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> @@ -2494,6 +2494,10 @@ static const struct drm_crtc_funcs 
> dce_v10_0_crtc_funcs = {
> .set_config = amdgpu_display_crtc_set_config,
> .destroy = dce_v10_0_crtc_destroy,
> .page_flip_target = amdgpu_display_crtc_page_flip_target,
> +   .get_vblank_counter = amdgpu_crtc_get_vblank_counter,
> +   .enable_vblank = amdgpu_crtc_enable_vblank,
> + 

Re: [Intel-gfx] [PATCH 17/23] drm/radeon: Convert to CRTC VBLANK callbacks

2020-01-13 Thread Alex Deucher
On Fri, Jan 10, 2020 at 4:22 AM Thomas Zimmermann  wrote:
>
> VBLANK callbacks in struct drm_driver are deprecated in favor of
> their equivalents in struct drm_crtc_funcs. Convert radeon over.
>
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/radeon_display.c | 12 --
>  drivers/gpu/drm/radeon/radeon_drv.c |  7 --
>  drivers/gpu/drm/radeon/radeon_kms.c | 29 ++---
>  3 files changed, 26 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_display.c 
> b/drivers/gpu/drm/radeon/radeon_display.c
> index 7187158b9963..9116975b6eb9 100644
> --- a/drivers/gpu/drm/radeon/radeon_display.c
> +++ b/drivers/gpu/drm/radeon/radeon_display.c
> @@ -45,6 +45,10 @@
>  #include "atom.h"
>  #include "radeon.h"
>
> +u32 radeon_get_vblank_counter_kms(struct drm_crtc *crtc);
> +int radeon_enable_vblank_kms(struct drm_crtc *crtc);
> +void radeon_disable_vblank_kms(struct drm_crtc *crtc);
> +
>  static void avivo_crtc_load_lut(struct drm_crtc *crtc)
>  {
> struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
> @@ -458,7 +462,7 @@ static void radeon_flip_work_func(struct work_struct 
> *__work)
> (DRM_SCANOUTPOS_VALID | DRM_SCANOUTPOS_IN_VBLANK) &&
> (!ASIC_IS_AVIVO(rdev) ||
> ((int) (work->target_vblank -
> -   dev->driver->get_vblank_counter(dev, work->crtc_id)) > 0)))
> +   crtc->funcs->get_vblank_counter(crtc)) > 0)))
> usleep_range(1000, 2000);
>
> /* We borrow the event spin lock for protecting flip_status */
> @@ -574,7 +578,7 @@ static int radeon_crtc_page_flip_target(struct drm_crtc 
> *crtc,
> }
> work->base = base;
> work->target_vblank = target - (uint32_t)drm_crtc_vblank_count(crtc) +
> -   dev->driver->get_vblank_counter(dev, work->crtc_id);
> +   crtc->funcs->get_vblank_counter(crtc);
>
> /* We borrow the event spin lock for protecting flip_work */
> spin_lock_irqsave(&crtc->dev->event_lock, flags);
> @@ -666,6 +670,10 @@ static const struct drm_crtc_funcs radeon_crtc_funcs = {
> .set_config = radeon_crtc_set_config,
> .destroy = radeon_crtc_destroy,
> .page_flip_target = radeon_crtc_page_flip_target,
> +   .get_vblank_counter = radeon_get_vblank_counter_kms,
> +   .enable_vblank = radeon_enable_vblank_kms,
> +   .disable_vblank = radeon_disable_vblank_kms,
> +   .get_vblank_timestamp = drm_crtc_calc_vbltimestamp_from_scanoutpos,
>  };
>
>  static void radeon_crtc_init(struct drm_device *dev, int index)
> diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
> b/drivers/gpu/drm/radeon/radeon_drv.c
> index 1f597f166bff..49ce2e7d5f9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_drv.c
> +++ b/drivers/gpu/drm/radeon/radeon_drv.c
> @@ -119,9 +119,6 @@ void radeon_driver_postclose_kms(struct drm_device *dev,
>  int radeon_suspend_kms(struct drm_device *dev, bool suspend,
>bool fbcon, bool freeze);
>  int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon);
> -u32 radeon_get_vblank_counter_kms(struct drm_device *dev, unsigned int pipe);
> -int radeon_enable_vblank_kms(struct drm_device *dev, unsigned int pipe);
> -void radeon_disable_vblank_kms(struct drm_device *dev, unsigned int pipe);
>  void radeon_driver_irq_preinstall_kms(struct drm_device *dev);
>  int radeon_driver_irq_postinstall_kms(struct drm_device *dev);
>  void radeon_driver_irq_uninstall_kms(struct drm_device *dev);
> @@ -571,10 +568,6 @@ static struct drm_driver kms_driver = {
> .postclose = radeon_driver_postclose_kms,
> .lastclose = radeon_driver_lastclose_kms,
> .unload = radeon_driver_unload_kms,
> -   .get_vblank_counter = radeon_get_vblank_counter_kms,
> -   .enable_vblank = radeon_enable_vblank_kms,
> -   .disable_vblank = radeon_disable_vblank_kms,
> -   .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> .irq_preinstall = radeon_driver_irq_preinstall_kms,
> .irq_postinstall = radeon_driver_irq_postinstall_kms,
> .irq_uninstall = radeon_driver_irq_uninstall_kms,
> diff --git a/drivers/gpu/drm/radeon/radeon_kms.c 
> b/drivers/gpu/drm/radeon/radeon_kms.c
> index d24f23a81656..cab891f86dc0 100644
> --- a/drivers/gpu/drm/radeon/radeon_kms.c
> +++ b/drivers/gpu/drm/radeon/radeon_kms.c
> @@ -739,14 +739,15 @@ void radeon_driver_postclose_kms(struct drm_device *dev,
>  /**
>   * radeon_get_vblank_counter_kms - get frame count
>   *
> - * @dev: drm dev pointe

Re: [Intel-gfx] [PATCH 02/23] drm/amdgpu: Convert to struct drm_crtc_helper_funcs.get_scanout_position()

2020-01-15 Thread Alex Deucher
On Wed, Jan 15, 2020 at 4:41 AM Thomas Zimmermann  wrote:
>
> Hi
>
> On 13.01.20 at 19:52, Alex Deucher wrote:
> > On Fri, Jan 10, 2020 at 4:21 AM Thomas Zimmermann  
> > wrote:
> >>
> >> The callback struct drm_driver.get_scanout_position() is deprecated in
> >> favor of struct drm_crtc_helper_funcs.get_scanout_position(). Convert
> >> amdgpu over.
> >>
> >
> > I would prefer to just change the signature of
> > amdgpu_display_get_crtc_scanoutpos() to match the new API rather than
> > wrapping it again.
>
> While trying to adapt the signature, I found that
> amdgpu_display_get_crtc_scanoutpos() requires a flags argument that is
> not mappable to the callback API. That wrapper function is necessary.
>

No worries.  We can clean them up later.  Wrapping is fine.

Alex

> Best regards
> Thomas
>
> >
> > Alex
> >
> >> Signed-off-by: Thomas Zimmermann 
> >> ---
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 12 
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 11 ---
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  5 +
> >>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c|  1 +
> >>  drivers/gpu/drm/amd/amdgpu/dce_v11_0.c|  1 +
> >>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c |  1 +
> >>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c |  1 +
> >>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 +
> >>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  3 ++-
> >>  9 files changed, 24 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >> index 4e699071d144..a1e769d4417d 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> >> @@ -914,3 +914,15 @@ int amdgpu_display_crtc_idx_to_irq_type(struct 
> >> amdgpu_device *adev, int crtc)
> >> return AMDGPU_CRTC_IRQ_NONE;
> >> }
> >>  }
> >> +
> >> +bool amdgpu_crtc_get_scanout_position(struct drm_crtc *crtc,
> >> +   bool in_vblank_irq, int *vpos,
> >> +   int *hpos, ktime_t *stime, ktime_t *etime,
> >> +   const struct drm_display_mode *mode)
> >> +{
> >> +   struct drm_device *dev = crtc->dev;
> >> +   unsigned int pipe = crtc->index;
> >> +
> >> +   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> >> + stime, etime, mode);
> >> +}
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> index 3f6f14ce1511..0749285dd1c7 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> @@ -1367,16 +1367,6 @@ int amdgpu_file_to_fpriv(struct file *filp, struct 
> >> amdgpu_fpriv **fpriv)
> >> return 0;
> >>  }
> >>
> >> -static bool
> >> -amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int 
> >> pipe,
> >> -bool in_vblank_irq, int *vpos, int *hpos,
> >> -ktime_t *stime, ktime_t *etime,
> >> -const struct drm_display_mode *mode)
> >> -{
> >> -   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> >> - stime, etime, mode);
> >> -}
> >> -
> >>  static struct drm_driver kms_driver = {
> >> .driver_features =
> >> DRIVER_USE_AGP | DRIVER_ATOMIC |
> >> @@ -1391,7 +1381,6 @@ static struct drm_driver kms_driver = {
> >> .enable_vblank = amdgpu_enable_vblank_kms,
> >> .disable_vblank = amdgpu_disable_vblank_kms,
> >> .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> >> -   .get_scanout_position = amdgpu_get_crtc_scanout_position,
> >> .irq_handler = amdgpu_irq_handler,
> >> .ioctls = amdgpu_ioctls_kms,
> >> .gem_free_object_unlocked = amdgpu_gem_object_free,
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> >> index eb9975f4decb..37ba07e2feb5 100644
> >> --- a/drivers/gpu

Re: [Intel-gfx] [PATCH 1/4] drm/mst: Don't do atomic checks over disabled managers

2020-01-17 Thread Alex Deucher
] RSP: 0018:c90001687b58 EFLAGS: 00010246
> > [  305.970382] RAX:  RBX: 003f RCX:
> > 
> > [  305.977571] RDX:  RSI: 88849fba8cb8 RDI:
> > 
> > [  305.984747] RBP:  R08:  R09:
> > 0001
> > [  305.991921] R10: c900016879a0 R11: c900016879a5 R12:
> > 
> > [  305.999099] R13:  R14: 8884905c9bc0 R15:
> > 
> > [  306.006271] FS:  () GS:88849fb8()
> > knlGS:
> > [  306.014407] CS:  0010 DS:  ES:  CR0: 80050033
> > [  306.020185] CR2: 0030 CR3: 00048b3aa003 CR4:
> > 00760ee0
> > [  306.027404] PKRU: 5554
> > [  306.030127] BUG: sleeping function called from invalid context at
> > include/linux/percpu-rwsem.h:38
> > [  306.039049] in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 183,
> > name: kworker/3:2
> > [  306.047272] INFO: lockdep is turned off.
> > [  306.051217] irq event stamp: 77505
> > [  306.054647] hardirqs last  enabled at (77505): []
> > _raw_spin_unlock_irqrestore+0x47/0x60
> > [  306.064270] hardirqs last disabled at (77504): []
> > _raw_spin_lock_irqsave+0xf/0x50
> > [  306.073404] softirqs last  enabled at (77402): []
> > __do_softirq+0x389/0x47f
> > [  306.081885] softirqs last disabled at (77395): []
> > irq_exit+0xa9/0xc0
> > [  306.089859] CPU: 3 PID: 183 Comm: kworker/3:2 Tainted:
> > G  D   5.5.0-rc6+ #1404
> > [  306.098167] Hardware name: Intel Corporation Ice Lake Client
> > Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS
> > ICLSFWR1.R00.3201.A00.1905140358 05/14/2019
> > [  306.111882] Workqueue: events drm_dp_delayed_destroy_work
> > [  306.117314] Call Trace:
> > [  306.119780]  dump_stack+0x71/0xa0
> > [  306.123135]  ___might_sleep.cold+0xf7/0x10b
> > [  306.127399]  exit_signals+0x2b/0x360
> > [  306.131014]  do_exit+0xa7/0xc70
> > [  306.134189]  ? kthread+0x100/0x140
> > [  306.137615]  rewind_stack_do_exit+0x17/0x20
> >
> > Fixes: cd82d82cbc04 ("drm/dp_mst: Add branch bandwidth validation to MST
> > atomic check")
> > Cc: Mikita Lipski 
> > Cc: Alex Deucher 
> > Cc: Lyude Paul 
> > Signed-off-by: José Roberto de Souza 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 4b74193b89ce..38bf111e5f9b 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -5034,6 +5034,9 @@ int drm_dp_mst_atomic_check(struct drm_atomic_state
> > *state)
> >   int i, ret = 0;
> >
> >   for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
> > + if (!mgr->mst_state)
> > + continue;
> > +
> >   ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr,
> > mst_state);
> >   if (ret)
> >   break;
> --
> Cheers,
> Lyude Paul
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 5/6] drm/amdgpu: add support for exporting VRAM using DMA-buf v2

2020-03-11 Thread Alex Deucher
On Wed, Mar 11, 2020 at 9:52 AM Christian König
 wrote:
>
> We should be able to do this now after checking all the prerequisites.
>
> v2: fix entry count in the sgt
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c  | 56 ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  | 12 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 97 
>  3 files changed, 151 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index bbf67800c8a6..43d8ed7dbd00 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -276,14 +276,21 @@ static struct sg_table *amdgpu_dma_buf_map(struct 
> dma_buf_attachment *attach,
> struct dma_buf *dma_buf = attach->dmabuf;
> struct drm_gem_object *obj = dma_buf->priv;
> struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> +   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> struct sg_table *sgt;
> long r;
>
> if (!bo->pin_count) {
> -   /* move buffer into GTT */
> +   /* move buffer into GTT or VRAM */
> struct ttm_operation_ctx ctx = { false, false };
> +   unsigned domains = AMDGPU_GEM_DOMAIN_GTT;
>
> -   amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
> +   if (bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
> +   attach->peer2peer) {
> +   bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
> +   domains |= AMDGPU_GEM_DOMAIN_VRAM;
> +   }
> +   amdgpu_bo_placement_from_domain(bo, domains);
> r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
> if (r)
> return ERR_PTR(r);
> @@ -293,20 +300,34 @@ static struct sg_table *amdgpu_dma_buf_map(struct 
> dma_buf_attachment *attach,
> return ERR_PTR(-EBUSY);
> }
>
> -   sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages, bo->tbo.num_pages);
> -   if (IS_ERR(sgt))
> -   return sgt;
> -
> -   if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
> - DMA_ATTR_SKIP_CPU_SYNC))
> -   goto error_free;
> +   switch (bo->tbo.mem.mem_type) {
> +   case TTM_PL_TT:
> +   sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
> +   bo->tbo.num_pages);
> +   if (IS_ERR(sgt))
> +   return sgt;
> +
> +   if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
> + DMA_ATTR_SKIP_CPU_SYNC))
> +   goto error_free;
> +   break;
> +
> +   case TTM_PL_VRAM:
> +   r = amdgpu_vram_mgr_alloc_sgt(adev, &bo->tbo.mem, attach->dev,
> + dir, &sgt);
> +   if (r)
> +   return ERR_PTR(r);
> +   break;
> +   default:
> +   return ERR_PTR(-EINVAL);
> +   }
>
> return sgt;
>
>  error_free:
> sg_free_table(sgt);
> kfree(sgt);
> -   return ERR_PTR(-ENOMEM);
> +   return ERR_PTR(-EBUSY);
>  }
>
>  /**
> @@ -322,9 +343,18 @@ static void amdgpu_dma_buf_unmap(struct 
> dma_buf_attachment *attach,
>  struct sg_table *sgt,
>  enum dma_data_direction dir)
>  {
> -   dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
> -   sg_free_table(sgt);
> -   kfree(sgt);
> +   struct dma_buf *dma_buf = attach->dmabuf;
> +   struct drm_gem_object *obj = dma_buf->priv;
> +   struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> +   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> +
> +   if (sgt->sgl->page_link) {
> +   dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
> +   sg_free_table(sgt);
> +   kfree(sgt);
> +   } else {
> +   amdgpu_vram_mgr_free_sgt(adev, attach->dev, dir, sgt);
> +   }
>  }
>
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 7551f3729445..a99d813b23a5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -24,8 +24,9 @@
>  #ifndef __AMDGPU_TTM_H__
>  #define __AMDGPU_TTM_H__
>
> -#include "amdgpu.h"
> +#include 
>  #include 
> +#include "amdgpu.h"
>
>  #define AMDGPU_PL_GDS  (TTM_PL_PRIV + 0)
>  #define AMDGPU_PL_GWS  (TTM_PL_PRIV + 1)
> @@ -74,6 +75,15 @@ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager 
> *man);
>  int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
>
>  u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
> +int amdgpu_vram_mgr_allo

Re: [Intel-gfx] [PATCH 1/9] drm: Constify topology id

2020-03-13 Thread Alex Deucher
On Fri, Mar 13, 2020 at 12:21 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Make the topology id const since we don't want to change it.
>
> Signed-off-by: Ville Syrjälä 

Series is:
Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_connector.c | 4 ++--
>  include/drm/drm_connector.h | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
> index 644f0ad10671..462d8caa6e72 100644
> --- a/drivers/gpu/drm/drm_connector.c
> +++ b/drivers/gpu/drm/drm_connector.c
> @@ -2392,7 +2392,7 @@ EXPORT_SYMBOL(drm_mode_put_tile_group);
>   * tile group or NULL if not found.
>   */
>  struct drm_tile_group *drm_mode_get_tile_group(struct drm_device *dev,
> -  char topology[8])
> +  const char topology[8])
>  {
> struct drm_tile_group *tg;
> int id;
> @@ -2422,7 +2422,7 @@ EXPORT_SYMBOL(drm_mode_get_tile_group);
>   * new tile group or NULL.
>   */
>  struct drm_tile_group *drm_mode_create_tile_group(struct drm_device *dev,
> - char topology[8])
> + const char topology[8])
>  {
> struct drm_tile_group *tg;
> int ret;
> diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
> index 19ae6bb5c85b..fd543d1db9b2 100644
> --- a/include/drm/drm_connector.h
> +++ b/include/drm/drm_connector.h
> @@ -1617,9 +1617,9 @@ struct drm_tile_group {
>  };
>
>  struct drm_tile_group *drm_mode_create_tile_group(struct drm_device *dev,
> - char topology[8]);
> + const char topology[8]);
>  struct drm_tile_group *drm_mode_get_tile_group(struct drm_device *dev,
> -  char topology[8]);
> +  const char topology[8]);
>  void drm_mode_put_tile_group(struct drm_device *dev,
>  struct drm_tile_group *tg);
>
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 4/5] drm/amdgpu: utilize subconnector property for DP through atombios

2020-04-16 Thread Alex Deucher
On Wed, Apr 15, 2020 at 6:05 AM Jani Nikula  wrote:
>
>
> Alex, Harry, Christian, can you please eyeball this series and see if it
> makes sense for you?
>

Patches 4, 5 are:
Acked-by: Alex Deucher 
Feel free to take them through whichever tree you want.

Alex


> Thanks,
> Jani.
>
>
> On Tue, 07 Apr 2020, Jeevan B  wrote:
> > From: Oleg Vasilev 
> >
> > Since DP-specific information is stored in driver's structures, every
> > driver needs to implement subconnector property by itself.
> >
> > v2: rebase
> >
> > Cc: Alex Deucher 
> > Cc: Christian König 
> > Cc: David (ChunMing) Zhou 
> > Cc: amd-...@lists.freedesktop.org
> > Signed-off-by: Jeevan B 
> > Signed-off-by: Oleg Vasilev 
> > Reviewed-by: Emil Velikov 
> > Link: 
> > https://patchwork.freedesktop.org/patch/msgid/20190829114854.1539-6-oleg.vasi...@intel.com
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 10 ++
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/atombios_dp.c   | 18 +-
> >  3 files changed, 28 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > index f355d9a..71aade0 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> > @@ -26,6 +26,7 @@
> >
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include "amdgpu.h"
> > @@ -1405,6 +1406,10 @@ amdgpu_connector_dp_detect(struct drm_connector 
> > *connector, bool force)
> >   pm_runtime_put_autosuspend(connector->dev->dev);
> >   }
> >
> > + drm_dp_set_subconnector_property(&amdgpu_connector->base,
> > +  ret,
> > +  amdgpu_dig_connector->dpcd,
> > +  
> > amdgpu_dig_connector->downstream_ports);
> >   return ret;
> >  }
> >
> > @@ -1951,6 +1956,11 @@ amdgpu_connector_add(struct amdgpu_device *adev,
> >   if (has_aux)
> >   amdgpu_atombios_dp_aux_init(amdgpu_connector);
> >
> > + if (connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
> > + connector_type == DRM_MODE_CONNECTOR_eDP) {
> > + 
> > drm_mode_add_dp_subconnector_property(&amdgpu_connector->base);
> > + }
> > +
> >   return;
> >
> >  failed:
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> > index 37ba07e..04a430e 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> > @@ -469,6 +469,7 @@ struct amdgpu_encoder {
> >  struct amdgpu_connector_atom_dig {
> >   /* displayport */
> >   u8 dpcd[DP_RECEIVER_CAP_SIZE];
> > + u8 downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
> >   u8 dp_sink_type;
> >   int dp_clock;
> >   int dp_lane_count;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c 
> > b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
> > index 9b74cfd..900b272 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
> > @@ -328,6 +328,22 @@ static void amdgpu_atombios_dp_probe_oui(struct 
> > amdgpu_connector *amdgpu_connect
> > buf[0], buf[1], buf[2]);
> >  }
> >
> > +static void amdgpu_atombios_dp_ds_ports(struct amdgpu_connector 
> > *amdgpu_connector)
> > +{
> > + struct amdgpu_connector_atom_dig *dig_connector = 
> > amdgpu_connector->con_priv;
> > + int ret;
> > +
> > + if (dig_connector->dpcd[DP_DPCD_REV] > 0x10) {
> > + ret = drm_dp_dpcd_read(&amdgpu_connector->ddc_bus->aux,
> > +DP_DOWNSTREAM_PORT_0,
> > +dig_connector->downstream_ports,
> > +DP_MAX_DOWNSTREAM_PORTS);
> > + if (ret)
> > + memset(dig_connector->downstream_ports, 0,
> > +DP_MAX_DOWNSTREAM_PORTS);
> > + }
> > +}
> > +
> >  int amdgpu_atombios_dp_get_dpcd(struct amdgpu_connector *amdgpu_connector)
> >  {
> >   struct amdgpu_connector_atom_dig *dig_connector = 
> > amdgpu_c

Re: [Intel-gfx] [PATCH v2 7/9] PM: sleep: core: Rename DPM_FLAG_NEVER_SKIP

2020-04-22 Thread Alex Deucher
On Sat, Apr 18, 2020 at 1:11 PM Rafael J. Wysocki  wrote:
>
> From: "Rafael J. Wysocki" 
>
> Rename DPM_FLAG_NEVER_SKIP to DPM_FLAG_NO_DIRECT_COMPLETE which
> matches its purpose more closely.
>
> No functional impact.
>
> Signed-off-by: Rafael J. Wysocki 
> Acked-by: Bjorn Helgaas  # for PCI parts
> Acked-by: Jeff Kirsher 

Acked-by: Alex Deucher 
for radeon and amdgpu

Alex

> ---
>
> -> v2:
>* Rebased.
>* Added tags received so far.
>
> ---
>  Documentation/driver-api/pm/devices.rst|  6 +++---
>  Documentation/power/pci.rst| 10 +-
>  drivers/base/power/main.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c|  2 +-
>  drivers/gpu/drm/i915/intel_runtime_pm.c|  2 +-
>  drivers/gpu/drm/radeon/radeon_kms.c|  2 +-
>  drivers/misc/mei/pci-me.c  |  2 +-
>  drivers/misc/mei/pci-txe.c |  2 +-
>  drivers/net/ethernet/intel/e1000e/netdev.c |  2 +-
>  drivers/net/ethernet/intel/igb/igb_main.c  |  2 +-
>  drivers/net/ethernet/intel/igc/igc_main.c  |  2 +-
>  drivers/pci/pcie/portdrv_pci.c |  2 +-
>  include/linux/pm.h |  6 +++---
>  13 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/Documentation/driver-api/pm/devices.rst 
> b/Documentation/driver-api/pm/devices.rst
> index f66c7b9126ea..4ace0eba4506 100644
> --- a/Documentation/driver-api/pm/devices.rst
> +++ b/Documentation/driver-api/pm/devices.rst
> @@ -361,9 +361,9 @@ the phases are: ``prepare``, ``suspend``, 
> ``suspend_late``, ``suspend_noirq``.
> runtime PM disabled.
>
> This feature also can be controlled by device drivers by using the
> -   ``DPM_FLAG_NEVER_SKIP`` and ``DPM_FLAG_SMART_PREPARE`` driver power
> -   management flags.  [Typically, they are set at the time the driver is
> -   probed against the device in question by passing them to the
> +   ``DPM_FLAG_NO_DIRECT_COMPLETE`` and ``DPM_FLAG_SMART_PREPARE`` driver
> +   power management flags.  [Typically, they are set at the time the 
> driver
> +   is probed against the device in question by passing them to the
> :c:func:`dev_pm_set_driver_flags` helper function.]  If the first of
> these flags is set, the PM core will not apply the direct-complete
> procedure described above to the given device and, consequenty, to any
> diff --git a/Documentation/power/pci.rst b/Documentation/power/pci.rst
> index aa1c7fce6cd0..9e1408121bea 100644
> --- a/Documentation/power/pci.rst
> +++ b/Documentation/power/pci.rst
> @@ -1004,11 +1004,11 @@ including the PCI bus type.  The flags should be set 
> once at the driver probe
>  time with the help of the dev_pm_set_driver_flags() function and they should 
> not
>  be updated directly afterwards.
>
> -The DPM_FLAG_NEVER_SKIP flag prevents the PM core from using the 
> direct-complete
> -mechanism allowing device suspend/resume callbacks to be skipped if the 
> device
> -is in runtime suspend when the system suspend starts.  That also affects all 
> of
> -the ancestors of the device, so this flag should only be used if absolutely
> -necessary.
> +The DPM_FLAG_NO_DIRECT_COMPLETE flag prevents the PM core from using the
> +direct-complete mechanism allowing device suspend/resume callbacks to be 
> skipped
> +if the device is in runtime suspend when the system suspend starts.  That 
> also
> +affects all of the ancestors of the device, so this flag should only be used 
> if
> +absolutely necessary.
>
>  The DPM_FLAG_SMART_PREPARE flag instructs the PCI bus type to only return a
>  positive value from pci_pm_prepare() if the ->prepare callback provided by 
> the
> diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> index 3170d93e29f9..dbc1e5e7346b 100644
> --- a/drivers/base/power/main.c
> +++ b/drivers/base/power/main.c
> @@ -1844,7 +1844,7 @@ static int device_prepare(struct device *dev, 
> pm_message_t state)
> spin_lock_irq(&dev->power.lock);
> dev->power.direct_complete = state.event == PM_EVENT_SUSPEND &&
> (ret > 0 || dev->power.no_pm_callbacks) &&
> -   !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP);
> +   !dev_pm_test_driver_flags(dev, DPM_FLAG_NO_DIRECT_COMPLETE);
> spin_unlock_irq(&dev->power.lock);
> return 0;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index fd1dc3236eca..a9086ea1ab60 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> @@ -191,7 +191,7 @@ int amdgpu_driver_l

Re: [Intel-gfx] [PATCH v6 23/24] drm/radeon: Provide ddc symlink in connector sysfs directory

2019-07-26 Thread Alex Deucher
On Fri, Jul 26, 2019 at 1:29 PM Andrzej Pietrasiewicz
 wrote:
>
> Use the ddc pointer provided by the generic connector.
>
> Signed-off-by: Andrzej Pietrasiewicz 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/radeon_connectors.c | 142 +++--
>  1 file changed, 106 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
> b/drivers/gpu/drm/radeon/radeon_connectors.c
> index c60d1a44d22a..b3ad8d890801 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -1870,6 +1870,7 @@ radeon_add_atom_connector(struct drm_device *dev,
> struct radeon_connector_atom_dig *radeon_dig_connector;
> struct drm_encoder *encoder;
> struct radeon_encoder *radeon_encoder;
> +   struct i2c_adapter *ddc;
> uint32_t subpixel_order = SubPixelNone;
> bool shared_ddc = false;
> bool is_dp_bridge = false;
> @@ -1947,17 +1948,21 @@ radeon_add_atom_connector(struct drm_device *dev,
> radeon_connector->con_priv = radeon_dig_connector;
> if (i2c_bus->valid) {
> radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, 
> i2c_bus);
> -   if (radeon_connector->ddc_bus)
> +   if (radeon_connector->ddc_bus) {
> has_aux = true;
> -   else
> +   ddc = &radeon_connector->ddc_bus->adapter;
> +   } else {
> DRM_ERROR("DP: Failed to assign ddc bus! 
> Check dmesg for i2c errors.\n");
> +   }
> }
> switch (connector_type) {
> case DRM_MODE_CONNECTOR_VGA:
> case DRM_MODE_CONNECTOR_DVIA:
> default:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  &radeon_dp_connector_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_dp_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> connector->interlace_allowed = true;
> @@ -1979,8 +1984,10 @@ radeon_add_atom_connector(struct drm_device *dev,
> case DRM_MODE_CONNECTOR_HDMIA:
> case DRM_MODE_CONNECTOR_HDMIB:
> case DRM_MODE_CONNECTOR_DisplayPort:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  &radeon_dp_connector_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_dp_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> 
> drm_object_attach_property(&radeon_connector->base.base,
> @@ -2027,8 +2034,10 @@ radeon_add_atom_connector(struct drm_device *dev,
> break;
> case DRM_MODE_CONNECTOR_LVDS:
> case DRM_MODE_CONNECTOR_eDP:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  
> &radeon_lvds_bridge_connector_funcs, connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_lvds_bridge_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> 
> drm_object_attach_property(&radeon_connector->base.base,
> @@ -2042,13 +2051,18 @@ radeon_add_atom_connector(struct drm_device *dev,
> } else {
>  

Re: [Intel-gfx] [PATCH v6 22/24] drm/amdgpu: Provide ddc symlink in connector sysfs directory

2019-07-26 Thread Alex Deucher
On Fri, Jul 26, 2019 at 1:28 PM Andrzej Pietrasiewicz
 wrote:
>
> Use the ddc pointer provided by the generic connector.
>
> Signed-off-by: Andrzej Pietrasiewicz 

Note that this only covers the legacy display code.  The new DC
display code also needs to be converted.  See:
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
With those updated as well:
Acked-by: Alex Deucher 

> ---
>  .../gpu/drm/amd/amdgpu/amdgpu_connectors.c| 96 ++-
>  1 file changed, 70 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> index 73b2ede773d3..ece55c8fa673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
> @@ -1505,6 +1505,7 @@ amdgpu_connector_add(struct amdgpu_device *adev,
> struct amdgpu_connector_atom_dig *amdgpu_dig_connector;
> struct drm_encoder *encoder;
> struct amdgpu_encoder *amdgpu_encoder;
> +   struct i2c_adapter *ddc = NULL;
> uint32_t subpixel_order = SubPixelNone;
> bool shared_ddc = false;
> bool is_dp_bridge = false;
> @@ -1574,17 +1575,21 @@ amdgpu_connector_add(struct amdgpu_device *adev,
> amdgpu_connector->con_priv = amdgpu_dig_connector;
> if (i2c_bus->valid) {
> amdgpu_connector->ddc_bus = amdgpu_i2c_lookup(adev, 
> i2c_bus);
> -   if (amdgpu_connector->ddc_bus)
> +   if (amdgpu_connector->ddc_bus) {
> has_aux = true;
> -   else
> +   ddc = &amdgpu_connector->ddc_bus->adapter;
> +   } else {
> DRM_ERROR("DP: Failed to assign ddc bus! 
> Check dmesg for i2c errors.\n");
> +   }
> }
> switch (connector_type) {
> case DRM_MODE_CONNECTOR_VGA:
> case DRM_MODE_CONNECTOR_DVIA:
> default:
> -   drm_connector_init(dev, &amdgpu_connector->base,
> -  &amdgpu_connector_dp_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &amdgpu_connector->base,
> +   
> &amdgpu_connector_dp_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&amdgpu_connector->base,
>  
> &amdgpu_connector_dp_helper_funcs);
> connector->interlace_allowed = true;
> @@ -1602,8 +1607,10 @@ amdgpu_connector_add(struct amdgpu_device *adev,
> case DRM_MODE_CONNECTOR_HDMIA:
> case DRM_MODE_CONNECTOR_HDMIB:
> case DRM_MODE_CONNECTOR_DisplayPort:
> -   drm_connector_init(dev, &amdgpu_connector->base,
> -  &amdgpu_connector_dp_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &amdgpu_connector->base,
> +   
> &amdgpu_connector_dp_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&amdgpu_connector->base,
>  
> &amdgpu_connector_dp_helper_funcs);
> 
> drm_object_attach_property(&amdgpu_connector->base.base,
> @@ -1644,8 +1651,10 @@ amdgpu_connector_add(struct amdgpu_device *adev,
> break;
> case DRM_MODE_CONNECTOR_LVDS:
> case DRM_MODE_CONNECTOR_eDP:
> -   drm_connector_init(dev, &amdgpu_connector->base,
> -  &amdgpu_connector_edp_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &amdgpu_connector->base,
> +   
> &amdgpu_connector_edp_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&amdgpu_connector->base,
>   

Re: [Intel-gfx] [PATCH v6 22/24] drm/amdgpu: Provide ddc symlink in connector sysfs directory

2019-07-26 Thread Alex Deucher
On Fri, Jul 26, 2019 at 3:42 PM Andrzej Pietrasiewicz
 wrote:
>
> Hi Alex,
>
>
> W dniu 26.07.2019 o 21:28, Alex Deucher pisze:
> > On Fri, Jul 26, 2019 at 1:28 PM Andrzej Pietrasiewicz
> >  wrote:
> >>
> >> Use the ddc pointer provided by the generic connector.
> >>
> >> Signed-off-by: Andrzej Pietrasiewicz 
> >
> > Note that this only covers the legacy display code.  The new DC
> > display code also needs to be converted.  See:
> > drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
>
> In amdgpu_dm_connector_init() the ddc is &i2c->base, is it?

Yes.

>
> But it is not clear to me how can I find ddc pointer in
> dm_dp_add_mst_connector()?

+ Harry and Nick.

hmmm, not sure about MST.  Maybe just skip them for now.

Alex
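For the non-MST connectors, a minimal sketch of what the amdgpu_dm side
could look like, assuming (as noted above) that the ddc adapter available
in amdgpu_dm_connector_init() is &i2c->base; the surrounding variable and
funcs names are taken from that function and may not match the final
patch exactly:

        /*
         * Sketch only: switch to the _with_ddc variant so the generic
         * connector code can create the ddc symlink in sysfs.
         */
        res = drm_connector_init_with_ddc(dev,
                                          &aconnector->base,
                                          &amdgpu_dm_connector_funcs,
                                          connector_type,
                                          &i2c->base);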

>
> Andrzej
>
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH RESEND 02/14] drm/radeon: Provide ddc symlink in connector sysfs directory

2019-08-27 Thread Alex Deucher
On Mon, Aug 26, 2019 at 3:26 PM Andrzej Pietrasiewicz
 wrote:
>
> Use the ddc pointer provided by the generic connector.
>
> Signed-off-by: Andrzej Pietrasiewicz 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/radeon_connectors.c | 143 +++--
>  1 file changed, 107 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c 
> b/drivers/gpu/drm/radeon/radeon_connectors.c
> index c60d1a44d22a..62d37eddf99c 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -1870,6 +1870,7 @@ radeon_add_atom_connector(struct drm_device *dev,
> struct radeon_connector_atom_dig *radeon_dig_connector;
> struct drm_encoder *encoder;
> struct radeon_encoder *radeon_encoder;
> +   struct i2c_adapter *ddc = NULL;
> uint32_t subpixel_order = SubPixelNone;
> bool shared_ddc = false;
> bool is_dp_bridge = false;
> @@ -1947,17 +1948,21 @@ radeon_add_atom_connector(struct drm_device *dev,
> radeon_connector->con_priv = radeon_dig_connector;
> if (i2c_bus->valid) {
> radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, 
> i2c_bus);
> -   if (radeon_connector->ddc_bus)
> +   if (radeon_connector->ddc_bus) {
> has_aux = true;
> -   else
> +   ddc = &radeon_connector->ddc_bus->adapter;
> +   } else {
> DRM_ERROR("DP: Failed to assign ddc bus! 
> Check dmesg for i2c errors.\n");
> +   }
> }
> switch (connector_type) {
> case DRM_MODE_CONNECTOR_VGA:
> case DRM_MODE_CONNECTOR_DVIA:
> default:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  &radeon_dp_connector_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_dp_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> connector->interlace_allowed = true;
> @@ -1979,8 +1984,10 @@ radeon_add_atom_connector(struct drm_device *dev,
> case DRM_MODE_CONNECTOR_HDMIA:
> case DRM_MODE_CONNECTOR_HDMIB:
> case DRM_MODE_CONNECTOR_DisplayPort:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  &radeon_dp_connector_funcs, 
> connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_dp_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> 
> drm_object_attach_property(&radeon_connector->base.base,
> @@ -2027,8 +2034,10 @@ radeon_add_atom_connector(struct drm_device *dev,
> break;
> case DRM_MODE_CONNECTOR_LVDS:
> case DRM_MODE_CONNECTOR_eDP:
> -   drm_connector_init(dev, &radeon_connector->base,
> -  
> &radeon_lvds_bridge_connector_funcs, connector_type);
> +   drm_connector_init_with_ddc(dev, 
> &radeon_connector->base,
> +   
> &radeon_lvds_bridge_connector_funcs,
> +   connector_type,
> +   ddc);
> drm_connector_helper_add(&radeon_connector->base,
>  
> &radeon_dp_connector_helper_funcs);
> 
> drm_object_attach_property(&radeon_connector->base.base,
> @@ -2042,13 +2051,18 @@ radeon_add_atom_connector(struct drm_device *dev,
> } else {
> 

Re: [Intel-gfx] [PATCH v3 4/7] drm/i915: utilize subconnector property for DP

2019-08-29 Thread Alex Deucher
On Wed, Aug 28, 2019 at 10:27 AM Ville Syrjälä
 wrote:
>
> On Mon, Aug 26, 2019 at 04:22:13PM +0300, Oleg Vasilev wrote:
> > Since DP-specific information is stored in driver's structures, every
> > driver needs to implement subconnector property by itself.
> >
> > v2: updates to match previous commit changes
> >
> > Reviewed-by: Emil Velikov 
> > Tested-by: Oleg Vasilev 
> > Signed-off-by: Oleg Vasilev 
> > Cc: Ville Syrjälä 
> > Cc: intel-gfx@lists.freedesktop.org
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 6 ++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
> > b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 6da6a4859f06..9c97ece803eb 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -5434,6 +5434,10 @@ intel_dp_detect(struct drm_connector *connector,
> >   if (status != connector_status_connected && !intel_dp->is_mst)
> >   intel_dp_unset_edid(intel_dp);
> >
> > + drm_dp_set_subconnector_property(connector,
> > +  status,
> > +  intel_dp->dpcd,
> > +  intel_dp->downstream_ports);
> >   return status;
> >  }
> >
> > @@ -6332,6 +6336,8 @@ intel_dp_add_properties(struct intel_dp *intel_dp, 
> > struct drm_connector *connect
> >   struct drm_i915_private *dev_priv = to_i915(connector->dev);
> >   enum port port = dp_to_dig_port(intel_dp)->base.port;
> >
> > + drm_mode_add_dp_subconnector_property(connector);
>
> Maybe skip this for eDP?

Not sure if you have something similar, but there are AMD platforms
that contain eDP to LVDS bridges.  Then again, probably not a big deal
for the laptop panel.

Alex
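If skipping eDP does turn out to be the right call for i915, a minimal
sketch of that guard, assuming the existing intel_dp_is_edp() helper
(this is only an illustration, not the final shape of the patch):

        /*
         * Sketch: only expose the subconnector property on external DP;
         * an eDP panel is a fixed downstream device either way.
         */
        if (!intel_dp_is_edp(intel_dp))
                drm_mode_add_dp_subconnector_property(connector);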

>
> > +
> >   if (!IS_G4X(dev_priv) && port != PORT_A)
> >   intel_attach_force_audio_property(connector);
> >
> > --
> > 2.23.0
>
> --
> Ville Syrjälä
> Intel
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 4/8] drm/i915: Clear out spurious whitespace

2020-01-27 Thread Alex Deucher
Title should be s/i915/edid/, with that fixed:
Reviewed-by: Alex Deucher 


On Fri, Jan 24, 2020 at 3:03 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Nuke some whitespace that shouldn't be there.
>
> Signed-off-by: Ville Syrjälä 
> ---
>  drivers/gpu/drm/drm_edid.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index d6bce58b27ac..3df5744026b0 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -2842,7 +2842,7 @@ do_inferred_modes(struct detailed_timing *timing, void 
> *c)
> closure->modes += drm_dmt_modes_for_range(closure->connector,
>   closure->edid,
>   timing);
> -
> +
> if (!version_greater(closure->edid, 1, 1))
> return; /* GTF not defined yet */
>
> @@ -3084,7 +3084,7 @@ do_cvt_mode(struct detailed_timing *timing, void *c)
>
>  static int
>  add_cvt_modes(struct drm_connector *connector, struct edid *edid)
> -{
> +{
> struct detailed_mode_closure closure = {
> .connector = connector,
> .edid = edid,
> @@ -4342,7 +4342,7 @@ void drm_edid_get_monitor_name(struct edid *edid, char 
> *name, int bufsize)
>  {
> int name_length;
> char buf[13];
> -
> +
> if (bufsize <= 0)
> return;
>
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 5/8] drm/edid: Document why we don't bounds check the DispID CEA block start/end

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:03 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> After much head scratching I managed to convince myself that
> for_each_displayid_db() has already done the bounds checks for
> the DispID CEA data block. Which is why we don't need to repeat
> them in cea_db_offsets(). To avoid having to go through that
> pain again in the future add a comment which explains this fact.
>
> Cc: Andres Rodriguez 
> Signed-off-by: Ville Syrjälä 
> ---
>  drivers/gpu/drm/drm_edid.c | 4 
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 3df5744026b0..0369a54e3d32 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -4001,6 +4001,10 @@ cea_db_offsets(const u8 *cea, int *start, int *end)
>  *   no non-DTD data.
>  */
> if (cea[0] == DATA_BLOCK_CTA) {
> +   /*
> +* for_each_displayid_db() has already verified
> +* that these stay within expected bounds.
> +*/

I think the preferred format is to have the start of the comment be on
the first line after the /*.  With that fixed:
Acked-by: Alex Deucher 
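For anyone unfamiliar with the convention being referenced, a small
illustration of the multi-line comment format preferred by the kernel
coding-style document (outside of net/):

        /*
         * The comment text starts on the line after the opening marker,
         * and every continuation line is prefixed with " * ".
         */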

> *start = 3;
> *end = *start + cea[2];
> } else if (cea[0] == CEA_EXT) {
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 6/8] drm/edid: Add a FIXME about DispID CEA data block revision

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:02 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> I don't understand what the DispID CEA data block revision
> means. The spec doesn't say. I guess some DispID must have
> a value of >= 3 in there or else we generally wouldn't
> even parse the CEA data blocks. Or does all this code
> actually not do anything?
>
> Cc: Andres Rodriguez 
> Signed-off-by: Ville Syrjälä 
> ---
>  drivers/gpu/drm/drm_edid.c | 7 +++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 0369a54e3d32..fd9b724067a7 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -3977,6 +3977,13 @@ cea_db_tag(const u8 *db)
>  static int
>  cea_revision(const u8 *cea)
>  {
> +   /*
> +* FIXME is this correct for the DispID variant?
> +* The DispID spec doesn't really specify whether
> +* this is the revision of the CEA extension or
> +* the DispID CEA data block. And the only value
> +* given as an example is 0.
> +*/

Same comment as the previous patch regarding the comment formatting.

Alex

> return cea[1];
>  }
>
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 2/8] drm/edid: Don't accept any old garbage as a display descriptor

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:02 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Currently we assume any 18 byte descriptor to be a display descriptor
> if only the tag byte matches the expected value. But for detailed
> timing descriptors that same byte is just the lower 8 bits of
> hblank, and as such can match any display descriptor tag. To
> properly validate that the 18 byte descriptor is in fact a
> display descriptor we must also examine bytes 0-2 (just byte 1
> should actually suffice but the spec does say that bytes 0 and
> 2 must also always be zero for display descriptors so we check
> those too).
>
> Unlike Allen's original proposed patch to just fix is_rb() we
> roll this out across the board to fix everything.
>
> Cc: Allen Chen 
> Signed-off-by: Ville Syrjälä 

Acked-by: Alex Deucher 
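To make the failure mode concrete: in a detailed timing descriptor, bytes
0-1 are the pixel clock and byte 3 is the low 8 bits of horizontal
blanking, so a mode whose hblank low byte happens to equal a descriptor
tag (0xfd, EDID_DETAIL_MONITOR_RANGE, for example) satisfied the old
single-byte check. A minimal, self-contained sketch of the old versus new
test, using a made-up descriptor:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only; mirrors the is_display_descriptor() check above. */
static bool matches_monitor_range(const uint8_t d[18], bool check_prefix)
{
        if (check_prefix && (d[0] || d[1] || d[2]))
                return false;   /* new behaviour: not a display descriptor */
        return d[3] == 0xfd;    /* EDID_DETAIL_MONITOR_RANGE */
}

/*
 * Hypothetical DTD: non-zero pixel clock (bytes 0-1), hblank low byte
 * 0xfd. The old tag-only test (check_prefix = false) wrongly reports a
 * monitor range block; the new test (check_prefix = true) does not.
 */
static const uint8_t bogus_dtd[18] = { 0x28, 0x3c, 0x80, 0xfd };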

> ---
>  drivers/gpu/drm/drm_edid.c | 65 --
>  1 file changed, 41 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 1b6e544cf5c7..96ae1fde4ce2 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -2196,6 +2196,12 @@ struct drm_display_mode *drm_mode_find_dmt(struct 
> drm_device *dev,
>  }
>  EXPORT_SYMBOL(drm_mode_find_dmt);
>
> +static bool is_display_descriptor(const u8 d[18], u8 tag)
> +{
> +   return d[0] == 0x00 && d[1] == 0x00 &&
> +   d[2] == 0x00 && d[3] == tag;
> +}
> +
>  typedef void detailed_cb(struct detailed_timing *timing, void *closure);
>
>  static void
> @@ -2257,9 +2263,12 @@ static void
>  is_rb(struct detailed_timing *t, void *data)
>  {
> u8 *r = (u8 *)t;
> -   if (r[3] == EDID_DETAIL_MONITOR_RANGE)
> -   if (r[15] & 0x10)
> -   *(bool *)data = true;
> +
> +   if (!is_display_descriptor(r, EDID_DETAIL_MONITOR_RANGE))
> +   return;
> +
> +   if (r[15] & 0x10)
> +   *(bool *)data = true;
>  }
>
>  /* EDID 1.4 defines this explicitly.  For EDID 1.3, we guess, badly. */
> @@ -2279,7 +2288,11 @@ static void
>  find_gtf2(struct detailed_timing *t, void *data)
>  {
> u8 *r = (u8 *)t;
> -   if (r[3] == EDID_DETAIL_MONITOR_RANGE && r[10] == 0x02)
> +
> +   if (!is_display_descriptor(r, EDID_DETAIL_MONITOR_RANGE))
> +   return;
> +
> +   if (r[10] == 0x02)
> *(u8 **)data = r;
>  }
>
> @@ -2818,7 +2831,7 @@ do_inferred_modes(struct detailed_timing *timing, void 
> *c)
> struct detailed_non_pixel *data = &timing->data.other_data;
> struct detailed_data_monitor_range *range = &data->data.range;
>
> -   if (data->type != EDID_DETAIL_MONITOR_RANGE)
> +   if (!is_display_descriptor((const u8 *)timing, 
> EDID_DETAIL_MONITOR_RANGE))
> return;
>
> closure->modes += drm_dmt_modes_for_range(closure->connector,
> @@ -2897,10 +2910,11 @@ static void
>  do_established_modes(struct detailed_timing *timing, void *c)
>  {
> struct detailed_mode_closure *closure = c;
> -   struct detailed_non_pixel *data = &timing->data.other_data;
>
> -   if (data->type == EDID_DETAIL_EST_TIMINGS)
> -   closure->modes += drm_est3_modes(closure->connector, timing);
> +   if (!is_display_descriptor((const u8 *)timing, 
> EDID_DETAIL_EST_TIMINGS))
> +   return;
> +
> +   closure->modes += drm_est3_modes(closure->connector, timing);
>  }
>
>  /**
> @@ -2949,19 +2963,19 @@ do_standard_modes(struct detailed_timing *timing, 
> void *c)
> struct detailed_non_pixel *data = &timing->data.other_data;
> struct drm_connector *connector = closure->connector;
> struct edid *edid = closure->edid;
> +   int i;
>
> -   if (data->type == EDID_DETAIL_STD_MODES) {
> -   int i;
> -   for (i = 0; i < 6; i++) {
> -   struct std_timing *std;
> -   struct drm_display_mode *newmode;
> +   if (!is_display_descriptor((const u8 *)timing, EDID_DETAIL_STD_MODES))
> +   return;
>
> -   std = &data->data.timings[i];
> -   newmode = drm_mode_std(connector, edid, std);
> -   if (newmode) {
> -   drm_mode_probed_add(connector, newmode);
> -   closure->modes++;
> -   }
> +   for (i = 0; i < 6; i++) {
> +   struct std_timing *std = &data->data.timings[i];
> +

Re: [Intel-gfx] [PATCH 3/8] drm/edid: Introduce is_detailed_timing_descritor()

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:02 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Let's introduce is_detailed_timing_descriptor() as the opposite
> counterpart of is_display_descriptor().
>
> Cc: Allen Chen 
> Signed-off-by: Ville Syrjälä 

Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/drm_edid.c | 42 ++
>  1 file changed, 24 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 96ae1fde4ce2..d6bce58b27ac 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -2202,6 +2202,11 @@ static bool is_display_descriptor(const u8 d[18], u8 
> tag)
> d[2] == 0x00 && d[3] == tag;
>  }
>
> +static bool is_detailed_timing_descriptor(const u8 d[18])
> +{
> +   return d[0] != 0x00 || d[1] != 0x00;
> +}
> +
>  typedef void detailed_cb(struct detailed_timing *timing, void *closure);
>
>  static void
> @@ -3101,27 +3106,28 @@ do_detailed_mode(struct detailed_timing *timing, void 
> *c)
> struct detailed_mode_closure *closure = c;
> struct drm_display_mode *newmode;
>
> -   if (timing->pixel_clock) {
> -   newmode = drm_mode_detailed(closure->connector->dev,
> -   closure->edid, timing,
> -   closure->quirks);
> -   if (!newmode)
> -   return;
> +   if (!is_detailed_timing_descriptor((const u8 *)timing))
> +   return;
> +
> +   newmode = drm_mode_detailed(closure->connector->dev,
> +   closure->edid, timing,
> +   closure->quirks);
> +   if (!newmode)
> +   return;
>
> -   if (closure->preferred)
> -   newmode->type |= DRM_MODE_TYPE_PREFERRED;
> +   if (closure->preferred)
> +   newmode->type |= DRM_MODE_TYPE_PREFERRED;
>
> -   /*
> -* Detailed modes are limited to 10kHz pixel clock resolution,
> -* so fix up anything that looks like CEA/HDMI mode, but the 
> clock
> -* is just slightly off.
> -*/
> -   fixup_detailed_cea_mode_clock(newmode);
> +   /*
> +* Detailed modes are limited to 10kHz pixel clock resolution,
> +* so fix up anything that looks like CEA/HDMI mode, but the clock
> +* is just slightly off.
> +*/
> +   fixup_detailed_cea_mode_clock(newmode);
>
> -   drm_mode_probed_add(closure->connector, newmode);
> -   closure->modes++;
> -   closure->preferred = false;
> -   }
> +   drm_mode_probed_add(closure->connector, newmode);
> +   closure->modes++;
> +   closure->preferred = false;
>  }
>
>  /*
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 7/8] drm/edid: Constify lots of things

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:03 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> Let's try to make a lot more stuff const in the edid parser.
>
> The "downside" is that we can no longer mangle the EDID in the
> middle of the parsing to apply quirks (drm_mode_detailed()).
> I don't really think mangling the blob itself is such a great
> idea anyway so I won't miss that part. But if we do want it
> back I guess we should do the mangling in one explicit place
> before we otherwise parse the EDID.
>
> Signed-off-by: Ville Syrjälä 

I generally agree, but are there any userspace expectations that they
will be getting a corrected EDID in some cases?

Alex

> ---
>  drivers/gpu/drm/drm_connector.c |   4 +-
>  drivers/gpu/drm/drm_edid.c  | 303 ++--
>  include/drm/drm_connector.h |   4 +-
>  3 files changed, 176 insertions(+), 135 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
> index f632ca05960e..92a5cd6ff6b1 100644
> --- a/drivers/gpu/drm/drm_connector.c
> +++ b/drivers/gpu/drm/drm_connector.c
> @@ -2377,7 +2377,7 @@ EXPORT_SYMBOL(drm_mode_put_tile_group);
>   * tile group or NULL if not found.
>   */
>  struct drm_tile_group *drm_mode_get_tile_group(struct drm_device *dev,
> -  char topology[8])
> +  const u8 topology[8])
>  {
> struct drm_tile_group *tg;
> int id;
> @@ -2407,7 +2407,7 @@ EXPORT_SYMBOL(drm_mode_get_tile_group);
>   * new tile group or NULL.
>   */
>  struct drm_tile_group *drm_mode_create_tile_group(struct drm_device *dev,
> - char topology[8])
> + const u8 topology[8])
>  {
> struct drm_tile_group *tg;
> int ret;
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index fd9b724067a7..8e76efe1654d 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -88,7 +88,7 @@
>
>  struct detailed_mode_closure {
> struct drm_connector *connector;
> -   struct edid *edid;
> +   const struct edid *edid;
> bool preferred;
> u32 quirks;
> int modes;
> @@ -1584,8 +1584,8 @@ MODULE_PARM_DESC(edid_fixup,
>  "Minimum number of valid EDID header bytes (0-8, default 
> 6)");
>
>  static void drm_get_displayid(struct drm_connector *connector,
> - struct edid *edid);
> -static int validate_displayid(u8 *displayid, int length, int idx);
> + const struct edid *edid);
> +static int validate_displayid(const u8 *displayid, int length, int idx);
>
>  static int drm_edid_block_checksum(const u8 *raw_edid)
>  {
> @@ -2207,41 +2207,41 @@ static bool is_detailed_timing_descriptor(const u8 
> d[18])
> return d[0] != 0x00 || d[1] != 0x00;
>  }
>
> -typedef void detailed_cb(struct detailed_timing *timing, void *closure);
> +typedef void detailed_cb(const struct detailed_timing *timing, void 
> *closure);
>
>  static void
> -cea_for_each_detailed_block(u8 *ext, detailed_cb *cb, void *closure)
> +cea_for_each_detailed_block(const u8 *ext, detailed_cb *cb, void *closure)
>  {
> int i, n;
> u8 d = ext[0x02];
> -   u8 *det_base = ext + d;
> +   const u8 *det_base = ext + d;
>
> if (d < 4 || d > 127)
> return;
>
> n = (127 - d) / 18;
> for (i = 0; i < n; i++)
> -   cb((struct detailed_timing *)(det_base + 18 * i), closure);
> +   cb((const struct detailed_timing *)(det_base + 18 * i), 
> closure);
>  }
>
>  static void
> -vtb_for_each_detailed_block(u8 *ext, detailed_cb *cb, void *closure)
> +vtb_for_each_detailed_block(const u8 *ext, detailed_cb *cb, void *closure)
>  {
> unsigned int i, n = min((int)ext[0x02], 6);
> -   u8 *det_base = ext + 5;
> +   const u8 *det_base = ext + 5;
>
> if (ext[0x01] != 1)
> return; /* unknown version */
>
> for (i = 0; i < n; i++)
> -   cb((struct detailed_timing *)(det_base + 18 * i), closure);
> +   cb((const struct detailed_timing *)(det_base + 18 * i), 
> closure);
>  }
>
>  static void
> -drm_for_each_detailed_block(u8 *raw_edid, detailed_cb *cb, void *closure)
> +drm_for_each_detailed_block(const u8 *raw_edid, detailed_cb *cb, void 
> *closure)
>  {
> +   const struct edid *edid = (struct edid *)raw_edid;
> int i;
> -   struct edid *edid = (struct edid *)raw_edid;
>
> if (edid == NULL)
> return;
> @@ -2250,7 +2250,7 @@ drm_for_each_detailed_block(u8 *raw_edid, detailed_cb 
> *cb, void *closure)
> cb(&(edid->detailed_timings[i]), closure);
>
> for (i = 1; i <= raw_edid[0x7e]; i++) {
> -   u8 *ext = raw_edid + (i * EDID_LENGTH);
> +   const u8 *ext = raw_edid + (i * EDID_LENGTH);
> 

Re: [Intel-gfx] [PATCH 1/8] drm/edid: Check the number of detailed timing descriptors in the CEA ext block

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:03 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> CEA-861 says :
> "d = offset for the byte following the reserved data block.
>  If no data is provided in the reserved data block, then d=4.
>  If no DTDs are provided, then d=0."
>
> So let's not look for DTDs when d==0. In fact let's just make that
> <4 since those values would just mean that the DTDs overlap the block
> header. And let's also check that d isn't so big as to declare
> the descriptors to live past the block end, although the code
> does already survive that case as we'd just end up with a negative
> number of descriptors and the loop would not do anything.
>
> Cc: Allen Chen 
> Signed-off-by: Ville Syrjälä 

Acked-by: Alex Deucher 
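As a quick sanity check of the new bounds: with the usual d = 4 (header
only, no reserved data blocks) the descriptors start right after the
4-byte header and n = (127 - 4) / 18 = 6, so at most six DTDs fit before
the checksum byte. With d = 0 ("no DTDs provided") the old code would
still compute n = (127 - 0) / 18 = 7 and point det_base at the block
header itself, which is exactly the overlap the new d < 4 test rejects;
for d > 127 the subtraction already went negative and the loop never ran,
so that case merely becomes an explicit early return.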

> ---
>  drivers/gpu/drm/drm_edid.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 99769d6c9f84..1b6e544cf5c7 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -2201,10 +2201,13 @@ typedef void detailed_cb(struct detailed_timing 
> *timing, void *closure);
>  static void
>  cea_for_each_detailed_block(u8 *ext, detailed_cb *cb, void *closure)
>  {
> -   int i, n = 0;
> +   int i, n;
> u8 d = ext[0x02];
> u8 *det_base = ext + d;
>
> +   if (d < 4 || d > 127)
> +   return;
> +
> n = (127 - d) / 18;
> for (i = 0; i < n; i++)
> cb((struct detailed_timing *)(det_base + 18 * i), closure);
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 8/8] drm/edid: Dump bogus 18 byte descriptors

2020-01-27 Thread Alex Deucher
On Fri, Jan 24, 2020 at 3:03 PM Ville Syrjala
 wrote:
>
> From: Ville Syrjälä 
>
> I'm curious if there are any bogus 18 byte descriptors around.
> Let's dump them out if we encounter them.
>
> Not sure we'd actually want this, but at least I get to see
> if our CI has anything that hits this :)
>
> Signed-off-by: Ville Syrjälä 

Acked-by: Alex Deucher 
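(For readers unfamiliar with it: %*ph is the kernel's printf extension
for hex-dumping a small buffer, with the field width giving the number of
bytes, so the DRM_WARN below prints all 18 bytes of the offending
descriptor as hex.)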

> ---
>  drivers/gpu/drm/drm_edid.c | 22 +++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 8e76efe1654d..4d8303e56536 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -2202,6 +2202,12 @@ static bool is_display_descriptor(const u8 d[18], u8 
> tag)
> d[2] == 0x00 && d[3] == tag;
>  }
>
> +static bool is_any_display_descriptor(const u8 d[18])
> +{
> +   return d[0] == 0x00 && d[1] == 0x00 &&
> +   d[2] == 0x00;
> +}
> +
>  static bool is_detailed_timing_descriptor(const u8 d[18])
>  {
> return d[0] != 0x00 || d[1] != 0x00;
> @@ -2209,6 +2215,15 @@ static bool is_detailed_timing_descriptor(const u8 
> d[18])
>
>  typedef void detailed_cb(const struct detailed_timing *timing, void 
> *closure);
>
> +static void do_detailed_block(const u8 d[18], detailed_cb *cb, void *closure)
> +{
> +   if (!is_detailed_timing_descriptor(d) &&
> +   !is_any_display_descriptor(d))
> +   DRM_WARN("Unrecognized 18 byte descriptor: %*ph\n", 18, d);
> +
> +   cb((const struct detailed_timing *)d, closure);
> +}
> +
>  static void
>  cea_for_each_detailed_block(const u8 *ext, detailed_cb *cb, void *closure)
>  {
> @@ -2221,7 +2236,7 @@ cea_for_each_detailed_block(const u8 *ext, detailed_cb 
> *cb, void *closure)
>
> n = (127 - d) / 18;
> for (i = 0; i < n; i++)
> -   cb((const struct detailed_timing *)(det_base + 18 * i), 
> closure);
> +   do_detailed_block(det_base + 18 * i, cb, closure);
>  }
>
>  static void
> @@ -2234,7 +2249,7 @@ vtb_for_each_detailed_block(const u8 *ext, detailed_cb 
> *cb, void *closure)
> return; /* unknown version */
>
> for (i = 0; i < n; i++)
> -   cb((const struct detailed_timing *)(det_base + 18 * i), 
> closure);
> +   do_detailed_block(det_base + 18 * i, cb, closure);
>  }
>
>  static void
> @@ -2247,7 +2262,8 @@ drm_for_each_detailed_block(const u8 *raw_edid, 
> detailed_cb *cb, void *closure)
> return;
>
> for (i = 0; i < EDID_DETAILED_TIMINGS; i++)
> -   cb(&(edid->detailed_timings[i]), closure);
> +   do_detailed_block((const u8 *)&edid->detailed_timings[i],
> + cb, closure);
>
> for (i = 1; i <= raw_edid[0x7e]; i++) {
> const u8 *ext = raw_edid + (i * EDID_LENGTH);
> --
> 2.24.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v4 10/22] drm/radeon: Convert to struct drm_crtc_helper_funcs.get_scanout_position()

2020-02-03 Thread Alex Deucher
On Thu, Jan 23, 2020 at 9:00 AM Thomas Zimmermann  wrote:
>
> The callback struct drm_driver.get_scanout_position() is deprecated in
> favor of struct drm_crtc_helper_funcs.get_scanout_position(). Convert
> radeon over.
>
> v4:
> * 80-character line fixes
>
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/atombios_crtc.c  |  1 +
>  drivers/gpu/drm/radeon/radeon_display.c | 13 +
>  drivers/gpu/drm/radeon/radeon_drv.c | 11 ---
>  drivers/gpu/drm/radeon/radeon_legacy_crtc.c |  3 ++-
>  drivers/gpu/drm/radeon/radeon_mode.h|  6 ++
>  5 files changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c 
> b/drivers/gpu/drm/radeon/atombios_crtc.c
> index be583695427a..91811757104c 100644
> --- a/drivers/gpu/drm/radeon/atombios_crtc.c
> +++ b/drivers/gpu/drm/radeon/atombios_crtc.c
> @@ -2231,6 +2231,7 @@ static const struct drm_crtc_helper_funcs 
> atombios_helper_funcs = {
> .prepare = atombios_crtc_prepare,
> .commit = atombios_crtc_commit,
> .disable = atombios_crtc_disable,
> +   .get_scanout_position = radeon_get_crtc_scanout_position,
>  };
>
>  void radeon_atombios_init_crtc(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/radeon/radeon_display.c 
> b/drivers/gpu/drm/radeon/radeon_display.c
> index 856526cb2caf..2f641f3b39e7 100644
> --- a/drivers/gpu/drm/radeon/radeon_display.c
> +++ b/drivers/gpu/drm/radeon/radeon_display.c
> @@ -1978,3 +1978,16 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, 
> unsigned int pipe,
>
> return ret;
>  }
> +
> +bool
> +radeon_get_crtc_scanout_position(struct drm_crtc *crtc,
> +bool in_vblank_irq, int *vpos, int *hpos,
> +ktime_t *stime, ktime_t *etime,
> +const struct drm_display_mode *mode)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return radeon_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> + stime, etime, mode);
> +}
> diff --git a/drivers/gpu/drm/radeon/radeon_drv.c 
> b/drivers/gpu/drm/radeon/radeon_drv.c
> index fd74e2611185..1f597f166bff 100644
> --- a/drivers/gpu/drm/radeon/radeon_drv.c
> +++ b/drivers/gpu/drm/radeon/radeon_drv.c
> @@ -563,16 +563,6 @@ static const struct file_operations 
> radeon_driver_kms_fops = {
>  #endif
>  };
>
> -static bool
> -radeon_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
> -bool in_vblank_irq, int *vpos, int *hpos,
> -ktime_t *stime, ktime_t *etime,
> -const struct drm_display_mode *mode)
> -{
> -   return radeon_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> - stime, etime, mode);
> -}
> -
>  static struct drm_driver kms_driver = {
> .driver_features =
> DRIVER_USE_AGP | DRIVER_GEM | DRIVER_RENDER,
> @@ -585,7 +575,6 @@ static struct drm_driver kms_driver = {
> .enable_vblank = radeon_enable_vblank_kms,
> .disable_vblank = radeon_disable_vblank_kms,
> .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> -   .get_scanout_position = radeon_get_crtc_scanout_position,
> .irq_preinstall = radeon_driver_irq_preinstall_kms,
> .irq_postinstall = radeon_driver_irq_postinstall_kms,
> .irq_uninstall = radeon_driver_irq_uninstall_kms,
> diff --git a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c 
> b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> index a1985a552794..8817fd033cd0 100644
> --- a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> +++ b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
> @@ -,7 +,8 @@ static const struct drm_crtc_helper_funcs 
> legacy_helper_funcs = {
> .mode_set_base_atomic = radeon_crtc_set_base_atomic,
> .prepare = radeon_crtc_prepare,
> .commit = radeon_crtc_commit,
> -   .disable = radeon_crtc_disable
> +   .disable = radeon_crtc_disable,
> +   .get_scanout_position = radeon_get_crtc_scanout_position,
>  };
>
>
> diff --git a/drivers/gpu/drm/radeon/radeon_mode.h 
> b/drivers/gpu/drm/radeon/radeon_mode.h
> index fd470d6bf3f4..3a61530c1398 100644
> --- a/drivers/gpu/drm/radeon/radeon_mode.h
> +++ b/drivers/gpu/drm/radeon/radeon_mode.h
> @@ -881,6 +881,12 @@ extern int radeon_get_crtc_scanoutpos(struct drm_device 
> *dev, unsigned int pipe,
>   ktime_t *stime, ktime_t *etime,
>

Re: [Intel-gfx] [PATCH v4 04/22] drm/amdgpu: Convert to struct drm_crtc_helper_funcs.get_scanout_position()

2020-02-03 Thread Alex Deucher
On Thu, Jan 23, 2020 at 9:00 AM Thomas Zimmermann  wrote:
>
> The callback struct drm_driver.get_scanout_position() is deprecated in
> favor of struct drm_crtc_helper_funcs.get_scanout_position(). Convert
> amdgpu over.
>
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 12 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 11 ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  5 +
>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c|  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v11_0.c|  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  1 +
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  3 ++-
>  9 files changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 4e699071d144..a1e769d4417d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -914,3 +914,15 @@ int amdgpu_display_crtc_idx_to_irq_type(struct 
> amdgpu_device *adev, int crtc)
> return AMDGPU_CRTC_IRQ_NONE;
> }
>  }
> +
> +bool amdgpu_crtc_get_scanout_position(struct drm_crtc *crtc,
> +   bool in_vblank_irq, int *vpos,
> +   int *hpos, ktime_t *stime, ktime_t *etime,
> +   const struct drm_display_mode *mode)
> +{
> +   struct drm_device *dev = crtc->dev;
> +   unsigned int pipe = crtc->index;
> +
> +   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> + stime, etime, mode);
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index a9c4edca70c9..955b78f1bba4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1377,16 +1377,6 @@ int amdgpu_file_to_fpriv(struct file *filp, struct 
> amdgpu_fpriv **fpriv)
> return 0;
>  }
>
> -static bool
> -amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
> -bool in_vblank_irq, int *vpos, int *hpos,
> -ktime_t *stime, ktime_t *etime,
> -const struct drm_display_mode *mode)
> -{
> -   return amdgpu_display_get_crtc_scanoutpos(dev, pipe, 0, vpos, hpos,
> - stime, etime, mode);
> -}
> -
>  static struct drm_driver kms_driver = {
> .driver_features =
> DRIVER_USE_AGP | DRIVER_ATOMIC |
> @@ -1402,7 +1392,6 @@ static struct drm_driver kms_driver = {
> .enable_vblank = amdgpu_enable_vblank_kms,
> .disable_vblank = amdgpu_disable_vblank_kms,
> .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> -   .get_scanout_position = amdgpu_get_crtc_scanout_position,
> .irq_handler = amdgpu_irq_handler,
> .ioctls = amdgpu_ioctls_kms,
> .gem_free_object_unlocked = amdgpu_gem_object_free,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> index eb9975f4decb..37ba07e2feb5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
> @@ -612,6 +612,11 @@ void amdgpu_panel_mode_fixup(struct drm_encoder *encoder,
>  struct drm_display_mode *adjusted_mode);
>  int amdgpu_display_crtc_idx_to_irq_type(struct amdgpu_device *adev, int 
> crtc);
>
> +bool amdgpu_crtc_get_scanout_position(struct drm_crtc *crtc,
> +   bool in_vblank_irq, int *vpos,
> +   int *hpos, ktime_t *stime, ktime_t *etime,
> +   const struct drm_display_mode *mode);
> +
>  /* fbdev layer */
>  int amdgpu_fbdev_init(struct amdgpu_device *adev);
>  void amdgpu_fbdev_fini(struct amdgpu_device *adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> index 40d2ac723dd6..bdc1e0f036d4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
> @@ -2685,6 +2685,7 @@ static const struct drm_crtc_helper_funcs 
> dce_v10_0_crtc_helper_funcs = {
> .prepare = dce_v10_0_crtc_prepare,
> .commit = dce_v10_0_crtc_commit,
> .disable = dce_v10_0_crtc_disable,
> +   .get_scanout_position = amdgpu_crtc_get_scanout_position,
>  };

Re: [Intel-gfx] [PATCH v4 05/22] drm/amdgpu: Convert to CRTC VBLANK callbacks

2020-02-03 Thread Alex Deucher
On Thu, Jan 23, 2020 at 9:00 AM Thomas Zimmermann  wrote:
>
> VBLANK callbacks in struct drm_driver are deprecated in favor of
> their equivalents in struct drm_crtc_funcs. Convert amdgpu over.
>
> v2:
> * don't wrap existing functions; change signature instead
>
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  6 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  4 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  4 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 21 +++
>  drivers/gpu/drm/amd/amdgpu/dce_v10_0.c|  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v11_0.c|  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c |  4 
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c |  4 
>  drivers/gpu/drm/amd/amdgpu/dce_virtual.c  |  4 
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 +
>  10 files changed, 43 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index f42e8d467c12..2319fdfc42e5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1191,9 +1191,9 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
>  int amdgpu_device_ip_suspend(struct amdgpu_device *adev);
>  int amdgpu_device_suspend(struct drm_device *dev, bool fbcon);
>  int amdgpu_device_resume(struct drm_device *dev, bool fbcon);
> -u32 amdgpu_get_vblank_counter_kms(struct drm_device *dev, unsigned int pipe);
> -int amdgpu_enable_vblank_kms(struct drm_device *dev, unsigned int pipe);
> -void amdgpu_disable_vblank_kms(struct drm_device *dev, unsigned int pipe);
> +u32 amdgpu_get_vblank_counter_kms(struct drm_crtc *crtc);
> +int amdgpu_enable_vblank_kms(struct drm_crtc *crtc);
> +void amdgpu_disable_vblank_kms(struct drm_crtc *crtc);
>  long amdgpu_kms_compat_ioctl(struct file *filp, unsigned int cmd,
>  unsigned long arg);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index a1e769d4417d..ad9c9546a64f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -99,7 +99,7 @@ static void amdgpu_display_flip_work_func(struct 
> work_struct *__work)
>  & (DRM_SCANOUTPOS_VALID | DRM_SCANOUTPOS_IN_VBLANK)) ==
> (DRM_SCANOUTPOS_VALID | DRM_SCANOUTPOS_IN_VBLANK) &&
> (int)(work->target_vblank -
> - amdgpu_get_vblank_counter_kms(adev->ddev, 
> amdgpu_crtc->crtc_id)) > 0) {
> + amdgpu_get_vblank_counter_kms(crtc)) > 0) {
> schedule_delayed_work(&work->flip_work, 
> usecs_to_jiffies(1000));
> return;
> }
> @@ -219,7 +219,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc 
> *crtc,
> if (!adev->enable_virtual_display)
> work->base = amdgpu_bo_gpu_offset(new_abo);
> work->target_vblank = target - (uint32_t)drm_crtc_vblank_count(crtc) +
> -   amdgpu_get_vblank_counter_kms(dev, work->crtc_id);
> +   amdgpu_get_vblank_counter_kms(crtc);
>
> /* we borrow the event spin lock for protecting flip_wrok */
> spin_lock_irqsave(&crtc->dev->event_lock, flags);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 955b78f1bba4..bc2fa428013f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1388,10 +1388,6 @@ static struct drm_driver kms_driver = {
> .postclose = amdgpu_driver_postclose_kms,
> .lastclose = amdgpu_driver_lastclose_kms,
> .unload = amdgpu_driver_unload_kms,
> -   .get_vblank_counter = amdgpu_get_vblank_counter_kms,
> -   .enable_vblank = amdgpu_enable_vblank_kms,
> -   .disable_vblank = amdgpu_disable_vblank_kms,
> -   .get_vblank_timestamp = drm_calc_vbltimestamp_from_scanoutpos,
> .irq_handler = amdgpu_irq_handler,
> .ioctls = amdgpu_ioctls_kms,
> .gem_free_object_unlocked = amdgpu_gem_object_free,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index 60591dbc2097..98c196de27a4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> @@ -1110,14 +1110,15 @@ void amdgpu_driver_postclose_kms(struct drm_device 
> *dev,
>  /**
>   * amdgpu_get_vblank_counter_kms - get frame count
>   *
> - * @dev: drm dev pointer
> - *
