Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Lionel Landwerlin

On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
>On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura wrote:
>> VM_BIND and related uapi definitions
>>
>> v2: Ensure proper kernel-doc formatting with cross references.
>> Also add new uapi and documentation as per review comments
>> from Daniel.
>>
>> Signed-off-by: Niranjana Vishwanathapura 


>> ---
>>  Documentation/gpu/rfc/i915_vm_bind.h | 399 +++

>>  1 file changed, 399 insertions(+)
>>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h
>>
>> diff --git a/Documentation/gpu/rfc/i915_vm_bind.h b/Documentation/gpu/rfc/i915_vm_bind.h

>> new file mode 100644
>> index ..589c0a009107
>> --- /dev/null
>> +++ b/Documentation/gpu/rfc/i915_vm_bind.h
>> @@ -0,0 +1,399 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2022 Intel Corporation
>> + */
>> +
>> +/**
>> + * DOC: I915_PARAM_HAS_VM_BIND
>> + *
>> + * VM_BIND feature availability.
>> + * See typedef drm_i915_getparam_t param.
>> + */
>> +#define I915_PARAM_HAS_VM_BIND   57
>> +
>> +/**
>> + * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
>> + *
>> + * Flag to opt-in for VM_BIND mode of binding during VM creation.
>> + * See struct drm_i915_gem_vm_control flags.
>> + *
>> + * A VM in VM_BIND mode will not support the older execbuff mode of binding.
>> + * In VM_BIND mode, execbuff ioctl will not accept any execlist (ie., the
>> + * &drm_i915_gem_execbuffer2.buffer_count must be 0).
>> + * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
>> + * &drm_i915_gem_execbuffer2.batch_len must be 0.
>> + * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must be provided
>> + * to pass in the batch buffer addresses.
>> + *
>> + * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
>> + * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags must be 0
>> + * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag must always be
>> + * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
>> + * The buffers_ptr, buffer_count, batch_start_offset and batch_len fields
>> + * of struct drm_i915_gem_execbuffer2 are also not used and must be 0.
>> + */
>
>From that description, it seems we have:
>
>struct drm_i915_gem_execbuffer2 {
>    __u64 buffers_ptr;  -> must be 0 (new)
>    __u32 buffer_count; -> must be 0 (new)
>    __u32 batch_start_offset;   -> must be 0 (new)
>    __u32 batch_len;    -> must be 0 (new)
>    __u32 DR1;  -> must be 0 (old)
>    __u32 DR4;  -> must be 0 (old)
>    __u32 num_cliprects; (fences)   -> must be 0 since using extensions
>    __u64 cliprects_ptr; (fences, extensions) -> contains an actual pointer!
>    __u64 flags;    -> some flags must be 0 (new)
>    __u64 rsvd1; (context info) -> repurposed field (old)
>    __u64 rsvd2;    -> unused
>};
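The constraint set Paulo lists can be captured as a small userspace-side sanity check. This is an illustrative sketch only, not kernel code: the struct here is a simplified stand-in for the real uapi layout, and the flag bit values are mirrored from the i915 uapi header as assumptions rather than taken from this RFC.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for struct drm_i915_gem_execbuffer2; field names
 * match the listing above, layout details omitted for illustration. */
struct execbuffer2 {
	uint64_t buffers_ptr;
	uint32_t buffer_count;
	uint32_t batch_start_offset;
	uint32_t batch_len;
	uint32_t DR1;
	uint32_t DR4;
	uint32_t num_cliprects;
	uint64_t cliprects_ptr;	/* carries the extension chain in VM_BIND mode */
	uint64_t flags;
	uint64_t rsvd1;
	uint64_t rsvd2;
};

/* Flag values assumed from the i915 uapi header; illustrative only. */
#define EXEC_NO_RELOC       (1ull << 11)
#define EXEC_HANDLE_LUT     (1ull << 12)
#define EXEC_BATCH_FIRST    (1ull << 18)
#define EXEC_USE_EXTENSIONS (1ull << 21)

/* Returns true iff the struct satisfies the VM_BIND-mode constraints
 * described in the quoted kernel-doc: legacy fields zeroed, forbidden
 * flags clear, and batch addresses supplied via the extension chain. */
static bool vm_bind_mode_valid(const struct execbuffer2 *eb)
{
	if (eb->buffers_ptr || eb->buffer_count)
		return false;
	if (eb->batch_start_offset || eb->batch_len)
		return false;
	if (eb->DR1 || eb->DR4 || eb->num_cliprects)
		return false;
	if (eb->flags & (EXEC_NO_RELOC | EXEC_HANDLE_LUT | EXEC_BATCH_FIRST))
		return false;
	return (eb->flags & EXEC_USE_EXTENSIONS) && eb->cliprects_ptr;
}
```

Writing the rules down this way makes Paulo's point vivid: almost every field of the base struct is dead weight in VM_BIND mode, which is the argument for a fresh execbuffer3.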
>
>Based on that, why can't we just get drm_i915_gem_execbuffer3 instead
>of adding even more complexity to an already abused interface? While
>the Vulkan-like extension thing is really nice, I don't think what
>we're doing here is extending the ioctl usage, we're completely
>changing how the base struct should be interpreted based on how the VM
>was created (which is an entirely different ioctl).
>
>From Rusty Russell's API Design grading, drm_i915_gem_execbuffer2 is
>already at -6 without these changes. I think after vm_bind we'll need
>to create a -11 entry just to deal with this ioctl.
>

The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we need to be careful
with that as that goes beyond the VM_BIND support, including any future
requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really that different
than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy bits, leave the
older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we share code (where it even makes sense, probably
request setup/submit need to be shared, a

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Tvrtko Ursulin



On 07/06/2022 20:37, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:27:14AM +0100, Tvrtko Ursulin wrote:


On 17/05/2022 19:32, Niranjana Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
    Also add new uapi and documentation as per review comments
    from Daniel.

Signed-off-by: Niranjana Vishwanathapura 


---
 Documentation/gpu/rfc/i915_vm_bind.h | 399 +++
 1 file changed, 399 insertions(+)
 create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND    57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older execbuff mode of binding.
+ * In VM_BIND mode, execbuff ioctl will not accept any execlist (ie., the
+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must be provided
+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag must always be
+ * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, batch_start_offset and batch_len fields
+ * of struct drm_i915_gem_execbuffer2 are also not used and must be 0.
+ */
+#define I915_VM_CREATE_FLAGS_USE_VM_BIND    (1 << 0)
+
+/**
+ * DOC: I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING
+ *
+ * Flag to declare context as long running.
+ * See struct drm_i915_gem_context_create_ext flags.
+ *
+ * Usage of dma-fence expects that they complete in reasonable amount of time.
+ * Compute on the other hand can be long running. Hence it is not appropriate
+ * for compute contexts to export request completion dma-fence to user.
+ * The dma-fence usage will be limited to in-kernel consumption only.
+ * Compute contexts need to use user/memory fence.
+ *
+ * So, long running contexts do not support output fences. Hence,
+ * I915_EXEC_FENCE_OUT (See &drm_i915_gem_execbuffer2.flags) and
+ * I915_EXEC_FENCE_SIGNAL (See &drm_i915_gem_exec_fence.flags) are expected
+ * to be not used.
+ *
+ * DRM_I915_GEM_WAIT ioctl call is also not supported for objects mapped
+ * to long running contexts.
+ */
+#define I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING   (1u << 2)
+
+/* VM_BIND related ioctls */
+#define DRM_I915_GEM_VM_BIND    0x3d
+#define DRM_I915_GEM_VM_UNBIND    0x3e
+#define DRM_I915_GEM_WAIT_USER_FENCE    0x3f
+
+#define DRM_IOCTL_I915_GEM_VM_BIND	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_BIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_VM_UNBIND	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_UNBIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_WAIT_USER_FENCE	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT_USER_FENCE, struct drm_i915_gem_wait_user_fence)
+
+/**
+ * struct drm_i915_gem_vm_bind - VA to object mapping to bind.
+ *
+ * This structure is passed to VM_BIND ioctl and specifies the mapping of GPU
+ * virtual address (VA) range to the section of an object that should be bound
+ * in the device page table of the specified address space (VM).
+ * The VA range specified must be unique (ie., not currently bound) and can
+ * be mapped to whole object or a section of the object (partial binding).
+ * Multiple VA mappings can be created to the same section of the object
+ * (aliasing).
+ */
+struct drm_i915_gem_vm_bind {
+    /** @vm_id: VM (address space) id to bind */
+    __u32 vm_id;
+
+    /** @handle: Object handle */
+    __u32 handle;
+
+    /** @start: Virtual Address start to bind */
+    __u64 start;
+
+    /** @offset: Offset in object to bind */
+    __u64 offset;
+
+    /** @length: Length of mapping to bind */
+    __u64 length;


Does it support, or should it, an equivalent of EXEC_OBJECT_PAD_TO_SIZE?
Or if not, is userspace expected to map the remainder of the space to a
dummy object? In which case, would there be any alignment/padding issues
preventing the two binds from being placed next to each other?

I ask because someone from the compute side asked me about a problem with
their strategy of dealing with overfetch, and I suggested pad to size.
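The dummy-object alternative Tvrtko raises can be sketched as plain address arithmetic. This is a hypothetical userspace helper, not part of the proposed uapi: it assumes page-aligned sizes (as device page tables require) and computes the follow-on bind that would pad a mapping out to a target size, emulating EXEC_OBJECT_PAD_TO_SIZE with two adjacent VM_BINDs.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ull

/* Hypothetical helper: for an object of obj_size bound at va, compute the
 * start and length of a second bind (backed by a dummy BO) that pads the
 * mapping out to pad_to_size. Both sizes are assumed page-aligned, so the
 * two binds land exactly adjacent with no alignment gap between them. */
static void pad_with_dummy_bind(uint64_t va, uint64_t obj_size,
				uint64_t pad_to_size,
				uint64_t *dummy_start, uint64_t *dummy_len)
{
	assert(obj_size % PAGE_SIZE == 0);
	assert(pad_to_size % PAGE_SIZE == 0);
	assert(pad_to_size >= obj_size);

	*dummy_start = va + obj_size;		/* immediately after the object */
	*dummy_len = pad_to_size - obj_size;	/* remainder covered by dummy BO */
}
```

Whether this works in practice hinges on exactly the question asked above: the kernel must not impose padding between the two VA ranges, so overfetch past the real object lands in the dummy mapping rather than faulting.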




Thanks Tvrtko,
I t

Re: [Intel-gfx] [RFC v3 1/3] drm/doc/rfc: VM_BIND feature design document

2022-06-08 Thread Tvrtko Ursulin




On 07/06/2022 22:32, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:18:11AM -0700, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 12:12:03PM -0500, Jason Ekstrand wrote:

 On Fri, Jun 3, 2022 at 6:52 PM Niranjana Vishwanathapura
  wrote:

   On Fri, Jun 03, 2022 at 10:20:25AM +0300, Lionel Landwerlin wrote:
   >   On 02/06/2022 23:35, Jason Ekstrand wrote:
   >
   > On Thu, Jun 2, 2022 at 3:11 PM Niranjana Vishwanathapura
   >  wrote:
   >
    >   On Wed, Jun 01, 2022 at 01:28:36PM -0700, Matthew Brost wrote:

    >   >On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin wrote:
   >   >> On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
    >   >> > +VM_BIND/UNBIND ioctl will immediately start binding/unbinding the mapping in an
    >   >> > +async worker. The binding and unbinding will work like a special GPU engine.
    >   >> > +The binding and unbinding operations are serialized and will wait on specified
    >   >> > +input fences before the operation and will signal the output fences upon the
    >   >> > +completion of the operation. Due to serialization, completion of an operation
    >   >> > +will also indicate that all previous operations are also complete.
   >   >>
    >   >> I guess we should avoid saying "will immediately start
    >   >> binding/unbinding" if there are fences involved.
    >   >>
    >   >> And the fact that it's happening in an async worker seems to imply
    >   >> it's not immediate.
   >   >>
   >
   >   Ok, will fix.
    >   This was added because in the earlier design binding was deferred
    >   until next execbuff.
    >   But now it is non-deferred (immediate in that sense). But yah, this
    >   is confusing and I will fix it.
   >
   >   >>
    >   >> I have a question on the behavior of the bind operation when no
    >   >> input fence is provided. Let's say I do:
   >   >>
   >   >> VM_BIND (out_fence=fence1)
   >   >>
   >   >> VM_BIND (out_fence=fence2)
   >   >>
   >   >> VM_BIND (out_fence=fence3)
   >   >>
   >   >>
   >   >> In what order are the fences going to be signaled?
   >   >>
   >   >> In the order of VM_BIND ioctls? Or out of order?
   >   >>
    >   >> Because you wrote "serialized" I assume it's: in order
   >   >>
   >
    >   Yes, in the order of VM_BIND/UNBIND ioctls. Note that bind and
    >   unbind will use the same queue and hence are ordered.
   >
   >   >>
    >   >> One thing I didn't realize is that because we only get one
    >   >> "VM_BIND" engine, there is a disconnect from the Vulkan
    >   >> specification.
   >   >>
    >   >> In Vulkan VM_BIND operations are serialized but per engine.

   >   >>
   >   >> So you could have something like this :
   >   >>
   >   >> VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
   >   >>
   >   >> VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
   >   >>
   >   >>
   >   >> fence1 is not signaled
   >   >>
   >   >> fence3 is signaled
   >   >>
    >   >> So the second VM_BIND will proceed before the first VM_BIND.

   >   >>
   >   >>
    >   >> I guess we can deal with that scenario in userspace by doing
    >   >> the wait ourselves in one thread per engine.
   >   >>
   >   >> But then it makes the VM_BIND input fences useless.
   >   >>
   >   >>
    >   >> Daniel: what do you think? Should we rework this or just deal
    >   >> with wait fences in userspace?
   >   >>
   >   >
    >   >My opinion is rework this but make the ordering via an engine
    >   >param optional.
   >   >
    >   >e.g. A VM can be configured so all binds are ordered within the VM
    >   >
    >   >e.g. A VM can be configured so all binds accept an engine argument
    >   >(in the case of the i915 likely this is a gem context handle) and
    >   >binds ordered with respect to that engine.
    >   >
    >   >This gives UMDs options as the latter likely consumes more KMD
    >   >resources, so if a different UMD can live with binds being ordered
    >   >within the VM they can use a mode consuming less resources.
   >   >
   >
    >   I think we need to be careful here if we are looking for some out of
    >   (submission) order completion of vm_bind/unbind.
    >   In-order completion means, in a batch of binds and unbinds to be
    >   completed in-order, the user only needs to specify the in-fence for
    >   the first bind/unbind call and the out-fence for the last bind/unbind
    >   call. Also, the VA rel

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Tvrtko Ursulin



On 07/06/2022 22:25, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:42:08AM +0100, Tvrtko Ursulin wrote:


On 03/06/2022 07:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:

On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
 Also add new uapi and documentation as per review comments
 from Daniel.

Signed-off-by: Niranjana Vishwanathapura 



---
  Documentation/gpu/rfc/i915_vm_bind.h | 399 +++
  1 file changed, 399 insertions(+)
  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND   57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older execbuff mode of binding.
+ * In VM_BIND mode, execbuff ioctl will not accept any execlist (ie., the
+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must be provided
+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag must always be
+ * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, batch_start_offset and batch_len fields
+ * of struct drm_i915_gem_execbuffer2 are also not used and must be 0.
+ */


From that description, it seems we have:

struct drm_i915_gem_execbuffer2 {
    __u64 buffers_ptr;  -> must be 0 (new)
    __u32 buffer_count; -> must be 0 (new)
    __u32 batch_start_offset;   -> must be 0 (new)
    __u32 batch_len;    -> must be 0 (new)
    __u32 DR1;  -> must be 0 (old)
    __u32 DR4;  -> must be 0 (old)
    __u32 num_cliprects; (fences)   -> must be 0 since using extensions
    __u64 cliprects_ptr; (fences, extensions) -> contains an actual pointer!
    __u64 flags;    -> some flags must be 0 (new)
    __u64 rsvd1; (context info) -> repurposed field (old)
    __u64 rsvd2;    -> unused
};

Based on that, why can't we just get drm_i915_gem_execbuffer3 instead
of adding even more complexity to an already abused interface? While
the Vulkan-like extension thing is really nice, I don't think what
we're doing here is extending the ioctl usage, we're completely
changing how the base struct should be interpreted based on how the VM
was created (which is an entirely different ioctl).

From Rusty Russell's API Design grading, drm_i915_gem_execbuffer2 is
already at -6 without these changes. I think after vm_bind we'll need
to create a -11 entry just to deal with this ioctl.



The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we need to be
careful with that as that goes beyond the VM_BIND support, including
any future requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really that different
than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy bits, leave the
older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we share code (where it even makes sense, probably
request setup/submit need to be shared, anything else is probably
cleaner

[Intel-gfx] [PATCH v2 00/15] drm/edid: expand on struct drm_edid usage

2022-06-08 Thread Jani Nikula
v2 of [1], addressing review comments.

BR,
Jani.


[1] https://patchwork.freedesktop.org/series/104309/

Jani Nikula (15):
  drm/edid: fix CTA data block collection size for CTA version 3
  drm/edid: abstract cea data block collection size
  drm/edid: add block count and data helper functions for drm_edid
  drm/edid: keep track of alloc size in drm_do_get_edid()
  drm/edid: add new interfaces around struct drm_edid
  drm/edid: add drm_edid_connector_update()
  drm/probe-helper: abstract .get_modes() connector helper call
  drm/probe-helper: add drm_connector_helper_get_modes()
  drm/edid: add drm_edid_raw() to access the raw EDID data
  drm/i915/edid: convert DP, HDMI and LVDS to drm_edid
  drm/i915/bios: convert intel_bios_init_panel() to drm_edid
  drm/edid: do invalid block filtering in-place
  drm/edid: add HF-EEODB support to EDID read and allocation
  drm/edid: take HF-EEODB extension count into account
  drm/todo: add entry for converting the subsystem to struct drm_edid

 Documentation/gpu/todo.rst|  25 +
 drivers/gpu/drm/drm_connector.c   |   2 +
 drivers/gpu/drm/drm_edid.c| 572 --
 drivers/gpu/drm/drm_probe_helper.c|  63 +-
 drivers/gpu/drm/i915/display/intel_bios.c |  23 +-
 drivers/gpu/drm/i915/display/intel_bios.h |   4 +-
 .../gpu/drm/i915/display/intel_connector.c|   4 +-
 .../drm/i915/display/intel_display_types.h|   4 +-
 drivers/gpu/drm/i915/display/intel_dp.c   |  74 ++-
 drivers/gpu/drm/i915/display/intel_hdmi.c |  26 +-
 drivers/gpu/drm/i915/display/intel_lvds.c |  37 +-
 include/drm/drm_edid.h|  12 +
 include/drm/drm_probe_helper.h|   1 +
 13 files changed, 692 insertions(+), 155 deletions(-)

-- 
2.30.2



[Intel-gfx] [PATCH v2 02/15] drm/edid: abstract cea data block collection size

2022-06-08 Thread Jani Nikula
Add a function to get the cea data block collection size.

Cc: Ville Syrjälä 
Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 24 +---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index c57f6333ea7d..002816509fc8 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -4482,6 +4482,20 @@ __cea_db_iter_current_block(const struct cea_db_iter *iter)
return NULL;
 }
 
+/*
+ * References:
+ * - CTA-861-H section 7.3.3 CTA Extension Version 3
+ */
+static int cea_db_collection_size(const u8 *cta)
+{
+   u8 d = cta[2];
+
+   if (d < 4 || d > 127)
+   return 0;
+
+   return d - 4;
+}
+
 /*
  * References:
  * - VESA E-EDID v1.4
@@ -4492,15 +4506,19 @@ static const void *__cea_db_iter_edid_next(struct cea_db_iter *iter)
const u8 *ext;
 
drm_edid_iter_for_each(ext, &iter->edid_iter) {
+   int size;
+
/* Only support CTA Extension revision 3+ */
if (ext[0] != CEA_EXT || cea_revision(ext) < 3)
continue;
 
-   iter->index = 4;
-   iter->end = ext[2];
-   if (iter->end < 4 || iter->end > 127)
+   size = cea_db_collection_size(ext);
+   if (!size)
continue;
 
+   iter->index = 4;
+   iter->end = iter->index + size;
+
return ext;
}
 
-- 
2.30.2



[Intel-gfx] [PATCH v2 01/15] drm/edid: fix CTA data block collection size for CTA version 3

2022-06-08 Thread Jani Nikula
The CTA Data Block Collection is valid only for CTA extension version
3. In versions 1 and 2, it is a reserved block, which we ignore.

The DTD start offset (byte 2, or d in CTA-861 spec), which determines
the CTA Data Block Collection size, is specified slightly differently
for different versions:

Version 1:
d = offset for the byte following the reserved data block. If no
data is provided in the reserved data block, then d=4. If no DTDs
are provided, then d=0

Version 2:
d = offset for the byte following the reserved data block. If no
data is provided in the reserved data block, then d=4. If d=0, then
no detailed timing descriptors are provided, and no data is provided
in the reserved data block.

Version 3:
d = offset for the byte following the data block collection. If no
data is provided in the data block collection, then d=4. If d=0,
then no detailed timing descriptors are provided, and no data is
provided in the data block collection.

Ever since commit 9e50b9d55e9c ("drm: edid: Add some bounds checking"),
we've interpreted 0 to mean there are no DTDs but it's all Data
Blocks. Per the spec, Data Blocks are only valid for version 3, where we
should interpret 0 to mean there are no data blocks.

Follow the spec (and hope the EDIDs follow it too).
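The three per-version rules above reduce to a small pure function. This is a userspace re-implementation for illustration, not the kernel patch itself: the data block collection exists only in CTA extension version 3, and d must lie in [4, 127], so d == 0 now means "no data blocks" rather than the pre-fix assumption of "all data blocks".

```c
#include <assert.h>

/* Size in bytes of the CTA Data Block Collection, given the CTA extension
 * version and the DTD start offset byte d (byte 2 of the extension block).
 * Versions 1 and 2 carry a reserved block instead, so the size is 0 there;
 * for version 3, d outside [4, 127] (including d == 0) means no data blocks. */
static int cta_db_collection_size(int version, int d)
{
	if (version != 3)
		return 0;	/* reserved data block in versions 1 and 2 */
	if (d < 4 || d > 127)
		return 0;	/* includes d == 0: no data blocks at all */
	return d - 4;	/* bytes between the header and the first DTD */
}
```

Note how the old `d == 0 -> end = 127` behavior would instead have returned 123 here, scanning garbage as data blocks on version-3 EDIDs that declare none.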

Cc: Ville Syrjälä 
Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 929fc0e46751..c57f6333ea7d 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -4498,8 +4498,6 @@ static const void *__cea_db_iter_edid_next(struct cea_db_iter *iter)
 
iter->index = 4;
iter->end = ext[2];
-   if (iter->end == 0)
-   iter->end = 127;
if (iter->end < 4 || iter->end > 127)
continue;
 
-- 
2.30.2



[Intel-gfx] [PATCH v2 03/15] drm/edid: add block count and data helper functions for drm_edid

2022-06-08 Thread Jani Nikula
Add drm_edid based block count and data access helper functions that
take the EDID allocated size into account.

At the moment, the allocated size should always match the EDID size
indicated by the extension count, but this will change in the future.
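The clamping this patch introduces is simple arithmetic, sketched here as a standalone mirror of the helper's logic (names and the EDID_LENGTH value are taken from the patch; the function itself is illustrative, not the kernel code):

```c
#include <assert.h>
#include <stddef.h>

#define EDID_LENGTH 128	/* one EDID block, per the EDID spec */

/* Block count implied by the base block's extension count, limited by the
 * actual allocation size, mirroring drm_edid_block_count() in the patch. */
static int edid_block_count_clamped(int extensions, size_t alloc_size)
{
	int num_blocks = extensions + 1;	/* base block + extensions */
	int alloc_blocks = (int)(alloc_size / EDID_LENGTH);

	return num_blocks < alloc_blocks ? num_blocks : alloc_blocks;
}
```

Today both limits agree, so the min() is a no-op; once allocations can be larger than the declared extension count implies (as with HF-EEODB later in the series), the clamp is what keeps iteration within the buffer.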

Signed-off-by: Jani Nikula 
Reviewed-by: Andrzej Hajda 
---
 drivers/gpu/drm/drm_edid.c | 42 +++---
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 002816509fc8..f44ada4bfa5b 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -1613,6 +1613,35 @@ static const void *edid_extension_block_data(const struct edid *edid, int index)
return edid_block_data(edid, index + 1);
 }
 
+static int drm_edid_block_count(const struct drm_edid *drm_edid)
+{
+   int num_blocks;
+
+   /* Starting point */
+   num_blocks = edid_block_count(drm_edid->edid);
+
+   /* Limit by allocated size */
+   num_blocks = min(num_blocks, (int)drm_edid->size / EDID_LENGTH);
+
+   return num_blocks;
+}
+
+static int drm_edid_extension_block_count(const struct drm_edid *drm_edid)
+{
+   return drm_edid_block_count(drm_edid) - 1;
+}
+
+static const void *drm_edid_block_data(const struct drm_edid *drm_edid, int index)
+{
+   return edid_block_data(drm_edid->edid, index);
+}
+
+static const void *drm_edid_extension_block_data(const struct drm_edid *drm_edid,
+						 int index)
+{
+   return edid_extension_block_data(drm_edid->edid, index);
+}
+
 /*
  * Initializer helper for legacy interfaces, where we have no choice but to
  * trust edid size. Not for general purpose use.
@@ -1665,8 +1694,8 @@ static const void *__drm_edid_iter_next(struct drm_edid_iter *iter)
if (!iter->drm_edid)
return NULL;
 
-   if (iter->index < edid_block_count(iter->drm_edid->edid))
-   block = edid_block_data(iter->drm_edid->edid, iter->index++);
+   if (iter->index < drm_edid_block_count(iter->drm_edid))
+   block = drm_edid_block_data(iter->drm_edid, iter->index++);
 
return block;
 }
@@ -3574,22 +3603,21 @@ static int add_detailed_modes(struct drm_connector *connector,
 const u8 *drm_find_edid_extension(const struct drm_edid *drm_edid,
  int ext_id, int *ext_index)
 {
-   const struct edid *edid = drm_edid ? drm_edid->edid : NULL;
const u8 *edid_ext = NULL;
int i;
 
/* No EDID or EDID extensions */
-   if (!edid || !edid_extension_block_count(edid))
+   if (!drm_edid || !drm_edid_extension_block_count(drm_edid))
return NULL;
 
/* Find CEA extension */
-   for (i = *ext_index; i < edid_extension_block_count(edid); i++) {
-   edid_ext = edid_extension_block_data(edid, i);
+   for (i = *ext_index; i < drm_edid_extension_block_count(drm_edid); i++) {
+   edid_ext = drm_edid_extension_block_data(drm_edid, i);
if (edid_block_tag(edid_ext) == ext_id)
break;
}
 
-   if (i >= edid_extension_block_count(edid))
+   if (i >= drm_edid_extension_block_count(drm_edid))
return NULL;
 
*ext_index = i + 1;
-- 
2.30.2



[Intel-gfx] [PATCH v2 04/15] drm/edid: keep track of alloc size in drm_do_get_edid()

2022-06-08 Thread Jani Nikula
We'll want to return the allocated buffer size in the future. Keep track
of it.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 27 +++
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index f44ada4bfa5b..2beaa48301c1 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -2021,13 +2021,16 @@ bool drm_edid_is_valid(struct edid *edid)
 EXPORT_SYMBOL(drm_edid_is_valid);
 
 static struct edid *edid_filter_invalid_blocks(const struct edid *edid,
-  int invalid_blocks)
+  int invalid_blocks,
+  size_t *alloc_size)
 {
struct edid *new, *dest_block;
int valid_extensions = edid->extensions - invalid_blocks;
int i;
 
-   new = kmalloc(edid_size_by_blocks(valid_extensions + 1), GFP_KERNEL);
+   *alloc_size = edid_size_by_blocks(valid_extensions + 1);
+
+   new = kmalloc(*alloc_size, GFP_KERNEL);
if (!new)
goto out;
 
@@ -2140,7 +2143,8 @@ static void connector_bad_edid(struct drm_connector *connector,
 }
 
 /* Get override or firmware EDID */
-static struct edid *drm_get_override_edid(struct drm_connector *connector)
+static struct edid *drm_get_override_edid(struct drm_connector *connector,
+ size_t *alloc_size)
 {
struct edid *override = NULL;
 
@@ -2150,6 +2154,10 @@ static struct edid *drm_get_override_edid(struct drm_connector *connector)
if (!override)
override = drm_load_edid_firmware(connector);
 
+   /* FIXME: Get alloc size from deeper down the stack */
+   if (!IS_ERR_OR_NULL(override) && alloc_size)
+   *alloc_size = edid_size(override);
+
return IS_ERR(override) ? NULL : override;
 }
 
@@ -2169,7 +2177,7 @@ int drm_add_override_edid_modes(struct drm_connector *connector)
struct edid *override;
int num_modes = 0;
 
-   override = drm_get_override_edid(connector);
+   override = drm_get_override_edid(connector, NULL);
if (override) {
drm_connector_update_edid_property(connector, override);
num_modes = drm_add_edid_modes(connector, override);
@@ -2245,12 +2253,13 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
enum edid_block_status status;
int i, invalid_blocks = 0;
struct edid *edid, *new;
+   size_t alloc_size = EDID_LENGTH;
 
-   edid = drm_get_override_edid(connector);
+   edid = drm_get_override_edid(connector, &alloc_size);
if (edid)
goto ok;
 
-   edid = kmalloc(EDID_LENGTH, GFP_KERNEL);
+   edid = kmalloc(alloc_size, GFP_KERNEL);
if (!edid)
return NULL;
 
@@ -2278,7 +2287,8 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
if (!edid_extension_block_count(edid))
goto ok;
 
-   new = krealloc(edid, edid_size(edid), GFP_KERNEL);
+   alloc_size = edid_size(edid);
+   new = krealloc(edid, alloc_size, GFP_KERNEL);
if (!new)
goto fail;
edid = new;
@@ -2300,7 +2310,8 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
if (invalid_blocks) {
connector_bad_edid(connector, edid, edid_block_count(edid));
 
-   edid = edid_filter_invalid_blocks(edid, invalid_blocks);
+   edid = edid_filter_invalid_blocks(edid, invalid_blocks,
+ &alloc_size);
}
 
 ok:
-- 
2.30.2



[Intel-gfx] [PATCH v2 05/15] drm/edid: add new interfaces around struct drm_edid

2022-06-08 Thread Jani Nikula
Add new functions drm_edid_read(), drm_edid_read_ddc(), and
drm_edid_read_custom() to replace drm_get_edid() and drm_do_get_edid()
for reading the EDID. The transition is expected to happen over a fairly
long time.

Note that the new drm_edid_read*() functions do not do any of the
connector updates anymore. The reading and parsing will be completely
separated from each other.

Add new functions drm_edid_alloc(), drm_edid_dup(), and drm_edid_free()
for allocating and freeing drm_edid containers.

Cc: David Airlie 
Cc: Daniel Vetter 
Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 245 +
 include/drm/drm_edid.h |   9 ++
 2 files changed, 230 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 2beaa48301c1..2bdaf1e34a9d 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -2226,29 +2226,9 @@ static enum edid_block_status edid_block_read(void *block, unsigned int block_num,
return status;
 }
 
-/**
- * drm_do_get_edid - get EDID data using a custom EDID block read function
- * @connector: connector we're probing
- * @read_block: EDID block read function
- * @context: private data passed to the block read function
- *
- * When the I2C adapter connected to the DDC bus is hidden behind a device that
- * exposes a different interface to read EDID blocks this function can be used
- * to get EDID data using a custom block read function.
- *
- * As in the general case the DDC bus is accessible by the kernel at the I2C
- * level, drivers must make all reasonable efforts to expose it as an I2C
- * adapter and use drm_get_edid() instead of abusing this function.
- *
- * The EDID may be overridden using debugfs override_edid or firmware EDID
- * (drm_load_edid_firmware() and drm.edid_firmware parameter), in this priority
- * order. Having either of them bypasses actual EDID reads.
- *
- * Return: Pointer to valid EDID or NULL if we couldn't find any.
- */
-struct edid *drm_do_get_edid(struct drm_connector *connector,
-read_block_fn read_block,
-void *context)
+static struct edid *_drm_do_get_edid(struct drm_connector *connector,
+read_block_fn read_block, void *context,
+size_t *size)
 {
enum edid_block_status status;
int i, invalid_blocks = 0;
@@ -2315,14 +2295,125 @@ struct edid *drm_do_get_edid(struct drm_connector 
*connector,
}
 
 ok:
+   if (size)
+   *size = alloc_size;
+
return edid;
 
 fail:
kfree(edid);
return NULL;
 }
+
+/**
+ * drm_do_get_edid - get EDID data using a custom EDID block read function
+ * @connector: connector we're probing
+ * @read_block: EDID block read function
+ * @context: private data passed to the block read function
+ *
+ * When the I2C adapter connected to the DDC bus is hidden behind a device that
+ * exposes a different interface to read EDID blocks this function can be used
+ * to get EDID data using a custom block read function.
+ *
+ * As in the general case the DDC bus is accessible by the kernel at the I2C
+ * level, drivers must make all reasonable efforts to expose it as an I2C
+ * adapter and use drm_get_edid() instead of abusing this function.
+ *
+ * The EDID may be overridden using debugfs override_edid or firmware EDID
+ * (drm_load_edid_firmware() and drm.edid_firmware parameter), in this priority
+ * order. Having either of them bypasses actual EDID reads.
+ *
+ * Return: Pointer to valid EDID or NULL if we couldn't find any.
+ */
+struct edid *drm_do_get_edid(struct drm_connector *connector,
+read_block_fn read_block,
+void *context)
+{
+   return _drm_do_get_edid(connector, read_block, context, NULL);
+}
 EXPORT_SYMBOL_GPL(drm_do_get_edid);
 
+/* Allocate struct drm_edid container *without* duplicating the edid data */
+static const struct drm_edid *_drm_edid_alloc(const void *edid, size_t size)
+{
+   struct drm_edid *drm_edid;
+
+   if (!edid || !size || size < EDID_LENGTH)
+   return NULL;
+
+   drm_edid = kzalloc(sizeof(*drm_edid), GFP_KERNEL);
+   if (drm_edid) {
+   drm_edid->edid = edid;
+   drm_edid->size = size;
+   }
+
+   return drm_edid;
+}
+
+/**
+ * drm_edid_alloc - Allocate a new drm_edid container
+ * @edid: Pointer to raw EDID data
+ * @size: Size of memory allocated for EDID
+ *
+ * Allocate a new drm_edid container. Do not calculate edid size from edid;
+ * pass the actual size that has been allocated for the data. There is no
+ * validation of the raw EDID data against the size, but at least the EDID
+ * base block must fit in the buffer.
+ *
+ * The returned pointer must be freed using drm_edid_free().
+ *
+ * Return: drm_edid container, or NULL on errors
+ */
+const struct drm_edid *drm_edid_alloc(const 

[Intel-gfx] [PATCH v2 06/15] drm/edid: add drm_edid_connector_update()

2022-06-08 Thread Jani Nikula
Add a new function drm_edid_connector_update() to replace the
combination of calls drm_connector_update_edid_property() and
drm_add_edid_modes(). Usually they are called in the drivers in this
order; however, the former needs information from the latter.

Since the new drm_edid_read*() functions no longer call the connector
updates directly, and the read and update are separated, we'll need this
new function for the connector update.

This is all in drm_edid.c simply to keep struct drm_edid opaque.
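In driver terms, the change collapses the usual two-call sequence into one. A hedged before/after sketch (hypothetical driver context; `edid`, `drm_edid`, and `connector` are assumed locals):

```c
/* Before: property update and mode parsing are separate calls, usually
 * made in this order, even though the property update needs information
 * that only the mode parsing produces. */
drm_connector_update_edid_property(connector, edid);
count = drm_add_edid_modes(connector, edid);

/* After: one call parses the EDID, updates display info and modes, and
 * then updates the connector properties from the parsed results. */
count = drm_edid_connector_update(connector, drm_edid);
```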

Cc: David Airlie 
Cc: Daniel Vetter 
Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_connector.c |  2 +
 drivers/gpu/drm/drm_edid.c  | 71 +++--
 include/drm/drm_edid.h  |  2 +
 3 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 1c48d162c77e..ae9c640a641a 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -2088,6 +2088,8 @@ EXPORT_SYMBOL(drm_connector_set_tile_property);
  * set the connector's tile property here. See 
drm_connector_set_tile_property()
  * for more details.
  *
+ * This function is deprecated. Use drm_edid_connector_update() instead.
+ *
  * Returns:
  * Zero on success, negative errno on failure.
  */
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 2bdaf1e34a9d..952724788963 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -6143,8 +6143,8 @@ static int add_displayid_detailed_modes(struct 
drm_connector *connector,
return num_modes;
 }
 
-static int drm_edid_connector_update(struct drm_connector *connector,
-const struct drm_edid *drm_edid)
+static int _drm_edid_connector_update(struct drm_connector *connector,
+ const struct drm_edid *drm_edid)
 {
int num_modes = 0;
u32 quirks;
@@ -6207,6 +6207,67 @@ static int drm_edid_connector_update(struct 
drm_connector *connector,
return num_modes;
 }
 
+static void _drm_update_tile_info(struct drm_connector *connector,
+ const struct drm_edid *drm_edid);
+
+/**
+ * drm_edid_connector_update - Update connector information from EDID
+ * @connector: Connector
+ * @drm_edid: EDID
+ *
+ * Update the connector mode list, display info, ELD, HDR metadata, relevant
+ * properties, etc. from the passed-in EDID.
+ *
+ * If EDID is NULL, reset the information.
+ *
+ * Return: The number of modes added or 0 if we couldn't find any.
+ */
+int drm_edid_connector_update(struct drm_connector *connector,
+ const struct drm_edid *drm_edid)
+{
+   struct drm_device *dev = connector->dev;
+   const struct edid *old_edid = connector->edid_blob_ptr ?
+   connector->edid_blob_ptr->data : NULL;
+   const struct edid *edid = drm_edid ? drm_edid->edid : NULL;
+   size_t size = drm_edid ? drm_edid->size : 0;
+   int count, ret;
+
+   count = _drm_edid_connector_update(connector, drm_edid);
+
+   _drm_update_tile_info(connector, drm_edid);
+
+   if (old_edid && !drm_edid_are_equal(edid, old_edid)) {
+   connector->epoch_counter++;
+
+   drm_dbg_kms(dev, "[CONNECTOR:%d:%s] EDID changed, epoch counter 
%llu\n",
+   connector->base.id, connector->name,
+   connector->epoch_counter);
+   }
+
+   ret = drm_property_replace_global_blob(dev, &connector->edid_blob_ptr,
+  size, edid,
+  &connector->base,
+  dev->mode_config.edid_property);
+   if (ret)
+   drm_dbg_kms(dev, "[CONNECTOR:%d:%s] EDID property update failed 
(%d)\n",
+   connector->base.id, connector->name, ret);
+
+   ret = drm_object_property_set_value(&connector->base,
+   
dev->mode_config.non_desktop_property,
+   
connector->display_info.non_desktop);
+   if (ret)
+   drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Non-desktop property update 
failed (%d)\n",
+   connector->base.id, connector->name, ret);
+
+   ret = drm_connector_set_tile_property(connector);
+   if (ret)
+   drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Tile property update failed 
(%d)\n",
+   connector->base.id, connector->name, ret);
+
+   return count;
+}
+EXPORT_SYMBOL(drm_edid_connector_update);
+
 /**
  * drm_add_edid_modes - add modes from EDID data, if available
  * @connector: connector we're probing
@@ -6216,6 +6277,8 @@ static int drm_edid_connector_update(struct drm_connector 
*connector,
  * &drm_display_info structure and ELD in @connector with any information which
  * can be derived from the edid.
  *
+ * This function is deprecated. Use drm_edid_connector_update() instead.

[Intel-gfx] [PATCH v2 08/15] drm/probe-helper: add drm_connector_helper_get_modes()

2022-06-08 Thread Jani Nikula
Add a helper function to be used as the "default" .get_modes()
hook. This also works as an example of what the driver .get_modes()
hooks are supposed to do regarding the new drm_edid_read*() and
drm_edid_connector_update() calls.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_probe_helper.c | 34 ++
 include/drm/drm_probe_helper.h |  1 +
 2 files changed, 35 insertions(+)

diff --git a/drivers/gpu/drm/drm_probe_helper.c 
b/drivers/gpu/drm/drm_probe_helper.c
index a8d26b29bfa0..e6b8f2923aa7 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -1049,3 +1049,37 @@ int drm_connector_helper_get_modes_from_ddc(struct 
drm_connector *connector)
return count;
 }
 EXPORT_SYMBOL(drm_connector_helper_get_modes_from_ddc);
+
+/**
+ * drm_connector_helper_get_modes - Read EDID and update connector.
+ * @connector: The connector
+ *
+ * Read the EDID using drm_edid_read() (which requires that connector->ddc is
+ * set), and update the connector using the EDID.
+ *
+ * This can be used as the "default" connector helper .get_modes() hook if the
+ * driver does not need any special processing. This sets the example of what
+ * custom .get_modes() hooks should do regarding EDID read and connector
+ * update.
+ *
+ * Returns: Number of modes.
+ */
+int drm_connector_helper_get_modes(struct drm_connector *connector)
+{
+   const struct drm_edid *drm_edid;
+   int count;
+
+   drm_edid = drm_edid_read(connector);
+
+   /*
+* Unconditionally update the connector. If the EDID was read
+* successfully, fill in the connector information derived from the
+* EDID. Otherwise, if the EDID is NULL, clear the connector
+* information.
+*/
+   count = drm_edid_connector_update(connector, drm_edid);
+
+   drm_edid_free(drm_edid);
+
+   return count;
+}
+EXPORT_SYMBOL(drm_connector_helper_get_modes);
diff --git a/include/drm/drm_probe_helper.h b/include/drm/drm_probe_helper.h
index c80cab7a53b7..8075e02aa865 100644
--- a/include/drm/drm_probe_helper.h
+++ b/include/drm/drm_probe_helper.h
@@ -27,5 +27,6 @@ void drm_kms_helper_poll_enable(struct drm_device *dev);
 bool drm_kms_helper_is_poll_worker(void);
 
 int drm_connector_helper_get_modes_from_ddc(struct drm_connector *connector);
+int drm_connector_helper_get_modes(struct drm_connector *connector);
 
 #endif
-- 
2.30.2



[Intel-gfx] [PATCH v2 09/15] drm/edid: add drm_edid_raw() to access the raw EDID data

2022-06-08 Thread Jani Nikula
Unfortunately, there are still plenty of interfaces around that require
a struct edid pointer, and it's impossible to change them all at
once. Add an accessor to the raw EDID data to help the transition.

While there are no such cases now, be defensive against the raw EDID
extension count indicating a bigger EDID than is actually allocated.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 26 ++
 include/drm/drm_edid.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 952724788963..4e788c5cbf25 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -2333,6 +2333,32 @@ struct edid *drm_do_get_edid(struct drm_connector 
*connector,
 }
 EXPORT_SYMBOL_GPL(drm_do_get_edid);
 
+/**
+ * drm_edid_raw - Get a pointer to the raw EDID data.
+ * @drm_edid: drm_edid container
+ *
+ * Get a pointer to the raw EDID data.
+ *
+ * This is for transition only. Avoid using this like the plague.
+ *
+ * Return: Pointer to raw EDID data.
+ */
+const struct edid *drm_edid_raw(const struct drm_edid *drm_edid)
+{
+   if (!drm_edid || !drm_edid->size)
+   return NULL;
+
+   /*
+* Do not return pointers where relying on EDID extension count would
+* lead to buffer overflow.
+*/
+   if (WARN_ON(edid_size(drm_edid->edid) > drm_edid->size))
+   return NULL;
+
+   return drm_edid->edid;
+}
+EXPORT_SYMBOL(drm_edid_raw);
+
 /* Allocate struct drm_edid container *without* duplicating the edid data */
 static const struct drm_edid *_drm_edid_alloc(const void *edid, size_t size)
 {
diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
index aeb2fa95bc04..2181977ae683 100644
--- a/include/drm/drm_edid.h
+++ b/include/drm/drm_edid.h
@@ -597,6 +597,7 @@ drm_display_mode_from_cea_vic(struct drm_device *dev,
 const struct drm_edid *drm_edid_alloc(const void *edid, size_t size);
 const struct drm_edid *drm_edid_dup(const struct drm_edid *drm_edid);
 void drm_edid_free(const struct drm_edid *drm_edid);
+const struct edid *drm_edid_raw(const struct drm_edid *drm_edid);
 const struct drm_edid *drm_edid_read(struct drm_connector *connector);
 const struct drm_edid *drm_edid_read_ddc(struct drm_connector *connector,
 struct i2c_adapter *adapter);
-- 
2.30.2



[Intel-gfx] [PATCH v2 07/15] drm/probe-helper: abstract .get_modes() connector helper call

2022-06-08 Thread Jani Nikula
Abstract the .get_modes() connector helper call, including the
override/firmware EDID fallback, for clarity.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_probe_helper.c | 29 +++--
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/drm_probe_helper.c 
b/drivers/gpu/drm/drm_probe_helper.c
index 75a71649b64d..a8d26b29bfa0 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -354,6 +354,24 @@ drm_helper_probe_detect(struct drm_connector *connector,
 }
 EXPORT_SYMBOL(drm_helper_probe_detect);
 
+static int drm_helper_probe_get_modes(struct drm_connector *connector)
+{
+   const struct drm_connector_helper_funcs *connector_funcs =
+   connector->helper_private;
+   int count;
+
+   count = connector_funcs->get_modes(connector);
+
+   /*
+* Fallback for when DDC probe failed in drm_get_edid() and thus skipped
+* override/firmware EDID.
+*/
+   if (count == 0 && connector->status == connector_status_connected)
+   count = drm_add_override_edid_modes(connector);
+
+   return count;
+}
+
 static int __drm_helper_update_and_validate(struct drm_connector *connector,
uint32_t maxX, uint32_t maxY,
struct drm_modeset_acquire_ctx *ctx)
@@ -473,8 +491,6 @@ int drm_helper_probe_single_connector_modes(struct 
drm_connector *connector,
 {
struct drm_device *dev = connector->dev;
struct drm_display_mode *mode;
-   const struct drm_connector_helper_funcs *connector_funcs =
-   connector->helper_private;
int count = 0, ret;
enum drm_connector_status old_status;
struct drm_modeset_acquire_ctx ctx;
@@ -559,14 +575,7 @@ int drm_helper_probe_single_connector_modes(struct 
drm_connector *connector,
goto exit;
}
 
-   count = (*connector_funcs->get_modes)(connector);
-
-   /*
-* Fallback for when DDC probe failed in drm_get_edid() and thus skipped
-* override/firmware EDID.
-*/
-   if (count == 0 && connector->status == connector_status_connected)
-   count = drm_add_override_edid_modes(connector);
+   count = drm_helper_probe_get_modes(connector);
 
if (count == 0 && (connector->status == connector_status_connected ||
   connector->status == connector_status_unknown)) {
-- 
2.30.2



[Intel-gfx] [PATCH v2 10/15] drm/i915/edid: convert DP, HDMI and LVDS to drm_edid

2022-06-08 Thread Jani Nikula
Convert all the connectors that use cached connector edid and
detect_edid to drm_edid.

Signed-off-by: Jani Nikula 
---
 .../gpu/drm/i915/display/intel_connector.c|  4 +-
 .../drm/i915/display/intel_display_types.h|  4 +-
 drivers/gpu/drm/i915/display/intel_dp.c   | 74 ++-
 drivers/gpu/drm/i915/display/intel_hdmi.c | 26 ---
 drivers/gpu/drm/i915/display/intel_lvds.c | 37 +-
 5 files changed, 79 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_connector.c 
b/drivers/gpu/drm/i915/display/intel_connector.c
index 1dcc268927a2..d83b2a64f618 100644
--- a/drivers/gpu/drm/i915/display/intel_connector.c
+++ b/drivers/gpu/drm/i915/display/intel_connector.c
@@ -95,12 +95,12 @@ void intel_connector_destroy(struct drm_connector 
*connector)
 {
struct intel_connector *intel_connector = to_intel_connector(connector);
 
-   kfree(intel_connector->detect_edid);
+   drm_edid_free(intel_connector->detect_edid);
 
intel_hdcp_cleanup(intel_connector);
 
if (!IS_ERR_OR_NULL(intel_connector->edid))
-   kfree(intel_connector->edid);
+   drm_edid_free(intel_connector->edid);
 
intel_panel_fini(intel_connector);
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h 
b/drivers/gpu/drm/i915/display/intel_display_types.h
index 9b44358e8d9e..dffbc7d59587 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -590,8 +590,8 @@ struct intel_connector {
struct intel_panel panel;
 
/* Cached EDID for eDP and LVDS. May hold ERR_PTR for invalid EDID. */
-   struct edid *edid;
-   struct edid *detect_edid;
+   const struct drm_edid *edid;
+   const struct drm_edid *detect_edid;
 
/* Number of times hotplug detection was tried after an HPD interrupt */
int hotplug_retries;
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
b/drivers/gpu/drm/i915/display/intel_dp.c
index e198c6d7e3b5..64b6481225f1 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -3534,12 +3534,11 @@ static u8 intel_dp_autotest_edid(struct intel_dp 
*intel_dp)
intel_dp->aux.i2c_defer_count);
intel_dp->compliance.test_data.edid = 
INTEL_DP_RESOLUTION_FAILSAFE;
} else {
-   struct edid *block = intel_connector->detect_edid;
+   /* FIXME: Get rid of drm_edid_raw() */
+   const struct edid *block = 
drm_edid_raw(intel_connector->detect_edid);
 
-   /* We have to write the checksum
-* of the last block read
-*/
-   block += intel_connector->detect_edid->extensions;
+   /* We have to write the checksum of the last block read */
+   block += block->extensions;
 
if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_EDID_CHECKSUM,
   block->checksum) <= 0)
@@ -4418,7 +4417,7 @@ bool intel_digital_port_connected(struct intel_encoder 
*encoder)
return is_connected;
 }
 
-static struct edid *
+static const struct drm_edid *
 intel_dp_get_edid(struct intel_dp *intel_dp)
 {
struct intel_connector *intel_connector = intel_dp->attached_connector;
@@ -4429,18 +4428,22 @@ intel_dp_get_edid(struct intel_dp *intel_dp)
if (IS_ERR(intel_connector->edid))
return NULL;
 
-   return drm_edid_duplicate(intel_connector->edid);
+   return drm_edid_dup(intel_connector->edid);
} else
-   return drm_get_edid(&intel_connector->base,
-   &intel_dp->aux.ddc);
+   return drm_edid_read_ddc(&intel_connector->base,
+&intel_dp->aux.ddc);
 }
 
 static void
 intel_dp_update_dfp(struct intel_dp *intel_dp,
-   const struct edid *edid)
+   const struct drm_edid *drm_edid)
 {
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_connector *connector = intel_dp->attached_connector;
+   const struct edid *edid;
+
+   /* FIXME: Get rid of drm_edid_raw() */
+   edid = drm_edid_raw(drm_edid);
 
intel_dp->dfp.max_bpc =
drm_dp_downstream_max_bpc(intel_dp->dpcd,
@@ -4540,21 +4543,24 @@ intel_dp_set_edid(struct intel_dp *intel_dp)
 {
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_connector *connector = intel_dp->attached_connector;
-   struct edid *edid;
+   const struct drm_edid *drm_edid;
+   const struct edid *edid;
bool vrr_capable;
 
intel_dp_unset_edid(intel_dp);
-   edid = intel_dp_get_edid(intel_dp);
-   connector->detect_edid = edid;
+   drm_edid = intel_dp_get_edid(intel_dp);
+   connector->detect_edid = drm_edid;
 
vrr_c

[Intel-gfx] [PATCH v2 11/15] drm/i915/bios: convert intel_bios_init_panel() to drm_edid

2022-06-08 Thread Jani Nikula
Try to use struct drm_edid where possible, even if having to fall back
to looking into struct edid down low via drm_edid_raw().

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/i915/display/intel_bios.c | 23 ---
 drivers/gpu/drm/i915/display/intel_bios.h |  4 ++--
 drivers/gpu/drm/i915/display/intel_dp.c   |  2 +-
 drivers/gpu/drm/i915/display/intel_lvds.c |  2 +-
 4 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bios.c 
b/drivers/gpu/drm/i915/display/intel_bios.c
index c42b9e7d0dce..be0b4264d526 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -604,13 +604,13 @@ get_lfp_data_tail(const struct bdb_lvds_lfp_data *data,
 }
 
 static int opregion_get_panel_type(struct drm_i915_private *i915,
-  const struct edid *edid)
+  const struct drm_edid *drm_edid)
 {
return intel_opregion_get_panel_type(i915);
 }
 
 static int vbt_get_panel_type(struct drm_i915_private *i915,
- const struct edid *edid)
+ const struct drm_edid *drm_edid)
 {
const struct bdb_lvds_options *lvds_options;
 
@@ -629,12 +629,13 @@ static int vbt_get_panel_type(struct drm_i915_private 
*i915,
 }
 
 static int pnpid_get_panel_type(struct drm_i915_private *i915,
-   const struct edid *edid)
+   const struct drm_edid *drm_edid)
 {
const struct bdb_lvds_lfp_data *data;
const struct bdb_lvds_lfp_data_ptrs *ptrs;
const struct lvds_pnp_id *edid_id;
struct lvds_pnp_id edid_id_nodate;
+   const struct edid *edid = drm_edid_raw(drm_edid); /* FIXME */
int i, best = -1;
 
if (!edid)
@@ -675,7 +676,7 @@ static int pnpid_get_panel_type(struct drm_i915_private 
*i915,
 }
 
 static int fallback_get_panel_type(struct drm_i915_private *i915,
-  const struct edid *edid)
+  const struct drm_edid *drm_edid)
 {
return 0;
 }
@@ -688,12 +689,12 @@ enum panel_type {
 };
 
 static int get_panel_type(struct drm_i915_private *i915,
- const struct edid *edid)
+ const struct drm_edid *drm_edid)
 {
struct {
const char *name;
int (*get_panel_type)(struct drm_i915_private *i915,
- const struct edid *edid);
+ const struct drm_edid *drm_edid);
int panel_type;
} panel_types[] = {
[PANEL_TYPE_OPREGION] = {
@@ -716,7 +717,7 @@ static int get_panel_type(struct drm_i915_private *i915,
int i;
 
for (i = 0; i < ARRAY_SIZE(panel_types); i++) {
-   panel_types[i].panel_type = panel_types[i].get_panel_type(i915, 
edid);
+   panel_types[i].panel_type = panel_types[i].get_panel_type(i915, 
drm_edid);
 
drm_WARN_ON(&i915->drm, panel_types[i].panel_type > 0xf &&
panel_types[i].panel_type != 0xff);
@@ -747,7 +748,7 @@ static int get_panel_type(struct drm_i915_private *i915,
 static void
 parse_panel_options(struct drm_i915_private *i915,
struct intel_panel *panel,
-   const struct edid *edid)
+   const struct drm_edid *drm_edid)
 {
const struct bdb_lvds_options *lvds_options;
int panel_type;
@@ -759,7 +760,7 @@ parse_panel_options(struct drm_i915_private *i915,
 
panel->vbt.lvds_dither = lvds_options->pixel_dither;
 
-   panel_type = get_panel_type(i915, edid);
+   panel_type = get_panel_type(i915, drm_edid);
 
panel->vbt.panel_type = panel_type;
 
@@ -3092,11 +3093,11 @@ void intel_bios_init(struct drm_i915_private *i915)
 
 void intel_bios_init_panel(struct drm_i915_private *i915,
   struct intel_panel *panel,
-  const struct edid *edid)
+  const struct drm_edid *drm_edid)
 {
init_vbt_panel_defaults(panel);
 
-   parse_panel_options(i915, panel, edid);
+   parse_panel_options(i915, panel, drm_edid);
parse_generic_dtd(i915, panel);
parse_lfp_data(i915, panel);
parse_lfp_backlight(i915, panel);
diff --git a/drivers/gpu/drm/i915/display/intel_bios.h 
b/drivers/gpu/drm/i915/display/intel_bios.h
index b112200ae0a0..7bcc818e8d80 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.h
+++ b/drivers/gpu/drm/i915/display/intel_bios.h
@@ -32,8 +32,8 @@
 
 #include 
 
+struct drm_edid;
 struct drm_i915_private;
-struct edid;
 struct intel_bios_encoder_data;
 struct intel_crtc_state;
 struct intel_encoder;
@@ -234,7 +234,7 @@ struct mipi_pps_data {
 void intel_bios_init(struct drm_i915_private *dev_priv);
 void intel_bios_init_panel(struct drm_i915_private *dev_priv,
 

[Intel-gfx] [PATCH v2 12/15] drm/edid: do invalid block filtering in-place

2022-06-08 Thread Jani Nikula
Rewrite edid_filter_invalid_blocks() to filter invalid blocks
in-place. The main motivation is to not rely on passed-in information about
the invalid block count or the allocation size, which will be helpful in
follow-up work on HF-EEODB.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 43 --
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 4e788c5cbf25..77ec5b0e436d 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -2020,33 +2020,37 @@ bool drm_edid_is_valid(struct edid *edid)
 }
 EXPORT_SYMBOL(drm_edid_is_valid);
 
-static struct edid *edid_filter_invalid_blocks(const struct edid *edid,
-  int invalid_blocks,
+static struct edid *edid_filter_invalid_blocks(struct edid *edid,
   size_t *alloc_size)
 {
-   struct edid *new, *dest_block;
-   int valid_extensions = edid->extensions - invalid_blocks;
-   int i;
+   struct edid *new;
+   int i, valid_blocks = 0;
 
-   *alloc_size = edid_size_by_blocks(valid_extensions + 1);
+   for (i = 0; i < edid_block_count(edid); i++) {
+   const void *src_block = edid_block_data(edid, i);
 
-   new = kmalloc(*alloc_size, GFP_KERNEL);
-   if (!new)
-   goto out;
+   if (edid_block_valid(src_block, i == 0)) {
+   void *dst_block = (void *)edid_block_data(edid, 
valid_blocks);
 
-   dest_block = new;
-   for (i = 0; i < edid_block_count(edid); i++) {
-   const void *block = edid_block_data(edid, i);
+   memmove(dst_block, src_block, EDID_LENGTH);
+   valid_blocks++;
+   }
+   }
 
-   if (edid_block_valid(block, i == 0))
-   memcpy(dest_block++, block, EDID_LENGTH);
+   /* We already trusted the base block to be valid here... */
+   if (WARN_ON(!valid_blocks)) {
+   kfree(edid);
+   return NULL;
}
 
-   new->extensions = valid_extensions;
-   new->checksum = edid_block_compute_checksum(new);
+   edid->extensions = valid_blocks - 1;
+   edid->checksum = edid_block_compute_checksum(edid);
 
-out:
-   kfree(edid);
+   *alloc_size = edid_size_by_blocks(valid_blocks);
+
+   new = krealloc(edid, *alloc_size, GFP_KERNEL);
+   if (!new)
+   kfree(edid);
 
return new;
 }
@@ -2290,8 +2294,7 @@ static struct edid *_drm_do_get_edid(struct drm_connector 
*connector,
if (invalid_blocks) {
connector_bad_edid(connector, edid, edid_block_count(edid));
 
-   edid = edid_filter_invalid_blocks(edid, invalid_blocks,
- &alloc_size);
+   edid = edid_filter_invalid_blocks(edid, &alloc_size);
}
 
 ok:
-- 
2.30.2



[Intel-gfx] [PATCH v2 14/15] drm/edid: take HF-EEODB extension count into account

2022-06-08 Thread Jani Nikula
Take the HF-EEODB extension count override into account.

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 5cac357e50b0..b7b1f0639115 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -1629,6 +1629,19 @@ static int drm_edid_block_count(const struct drm_edid 
*drm_edid)
/* Starting point */
num_blocks = edid_block_count(drm_edid->edid);
 
+   /* HF-EEODB override */
+   if (drm_edid->size >= edid_size_by_blocks(2)) {
+   int eeodb;
+
+   /*
+* Note: HF-EEODB may specify a smaller extension count than the
+* regular one. Unlike in buffer allocation, here we can use it.
+*/
+   eeodb = edid_hfeeodb_block_count(drm_edid->edid);
+   if (eeodb)
+   num_blocks = eeodb;
+   }
+
/* Limit by allocated size */
num_blocks = min(num_blocks, (int)drm_edid->size / EDID_LENGTH);
 
-- 
2.30.2



[Intel-gfx] [PATCH v2 13/15] drm/edid: add HF-EEODB support to EDID read and allocation

2022-06-08 Thread Jani Nikula
HDMI 2.1 section 10.3.6 defines an HDMI Forum EDID Extension Override
Data Block, which may contain a different extension count than the base
block claims. Add support for reading more EDID data if available. The
extra blocks aren't parsed yet, though.

Hard-coding the EEODB parsing instead of using the iterators we have is
a bit of a bummer, but we have to be able to do this on a partially
allocated EDID while reading it.

v2:
- Check for CEA Data Block Collection size (Ville)
- Amend commit message and comment about hard-coded parsing

Signed-off-by: Jani Nikula 
---
 drivers/gpu/drm/drm_edid.c | 89 --
 1 file changed, 86 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 77ec5b0e436d..5cac357e50b0 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -1581,6 +1581,15 @@ static bool version_greater(const struct drm_edid 
*drm_edid,
(edid->version == version && edid->revision > revision);
 }
 
+static int edid_hfeeodb_extension_block_count(const struct edid *edid);
+
+static int edid_hfeeodb_block_count(const struct edid *edid)
+{
+   int eeodb = edid_hfeeodb_extension_block_count(edid);
+
+   return eeodb ? eeodb + 1 : 0;
+}
+
 static int edid_extension_block_count(const struct edid *edid)
 {
return edid->extensions;
@@ -2026,6 +2035,11 @@ static struct edid *edid_filter_invalid_blocks(struct 
edid *edid,
struct edid *new;
int i, valid_blocks = 0;
 
+   /*
+* Note: If the EDID uses HF-EEODB, but has invalid blocks, we'll revert
+* back to regular extension count here. We don't want to start
+* modifying the HF-EEODB extension too.
+*/
for (i = 0; i < edid_block_count(edid); i++) {
const void *src_block = edid_block_data(edid, i);
 
@@ -2235,7 +2249,7 @@ static struct edid *_drm_do_get_edid(struct drm_connector 
*connector,
 size_t *size)
 {
enum edid_block_status status;
-   int i, invalid_blocks = 0;
+   int i, num_blocks, invalid_blocks = 0;
struct edid *edid, *new;
size_t alloc_size = EDID_LENGTH;
 
@@ -2277,7 +2291,8 @@ static struct edid *_drm_do_get_edid(struct drm_connector 
*connector,
goto fail;
edid = new;
 
-   for (i = 1; i < edid_block_count(edid); i++) {
+   num_blocks = edid_block_count(edid);
+   for (i = 1; i < num_blocks; i++) {
void *block = (void *)edid_block_data(edid, i);
 
status = edid_block_read(block, i, read_block, context);
@@ -2288,11 +2303,31 @@ static struct edid *_drm_do_get_edid(struct 
drm_connector *connector,
if (status == EDID_BLOCK_READ_FAIL)
goto fail;
invalid_blocks++;
+   } else if (i == 1) {
+   /*
+* If the first EDID extension is a CTA extension, and
+* the first Data Block is HF-EEODB, override the
+* extension block count.
+*
+* Note: HF-EEODB could specify a smaller extension
+* count too, but we can't risk allocating a smaller
+* amount.
+*/
+   int eeodb = edid_hfeeodb_block_count(edid);
+
+   if (eeodb > num_blocks) {
+   num_blocks = eeodb;
+   alloc_size = edid_size_by_blocks(num_blocks);
+   new = krealloc(edid, alloc_size, GFP_KERNEL);
+   if (!new)
+   goto fail;
+   edid = new;
+   }
}
}
 
if (invalid_blocks) {
-   connector_bad_edid(connector, edid, edid_block_count(edid));
+   connector_bad_edid(connector, edid, num_blocks);
 
edid = edid_filter_invalid_blocks(edid, &alloc_size);
}
@@ -3825,6 +3860,7 @@ static int add_detailed_modes(struct drm_connector 
*connector,
 #define CTA_EXT_DB_HDR_STATIC_METADATA 6
 #define CTA_EXT_DB_420_VIDEO_DATA  14
 #define CTA_EXT_DB_420_VIDEO_CAP_MAP   15
+#define CTA_EXT_DB_HF_EEODB0x78
 #define CTA_EXT_DB_HF_SCDB 0x79
 
 #define EDID_BASIC_AUDIO   (1 << 6)
@@ -4884,6 +4920,12 @@ static bool cea_db_is_hdmi_forum_vsdb(const struct 
cea_db *db)
cea_db_payload_len(db) >= 7;
 }
 
+static bool cea_db_is_hdmi_forum_eeodb(const void *db)
+{
+   return cea_db_is_extended_tag(db, CTA_EXT_DB_HF_EEODB) &&
+   cea_db_payload_len(db) >= 2;
+}
+
 static bool cea_db_is_microsoft_vsdb(const struct cea_db *db)
 {
return cea_db_is_vendor(db, MICROSOFT_IEEE_OUI) &&
@@ -4918,6 +4960,47 @@ static bool cea_

[Intel-gfx] [PATCH v2 15/15] drm/todo: add entry for converting the subsystem to struct drm_edid

2022-06-08 Thread Jani Nikula
We need to stop duplicating EDID validation and parsing all over the
subsystem in various broken ways.

v2: Update to reflect drm_connector_helper_get_modes()

Cc: David Airlie 
Cc: Daniel Vetter 
Signed-off-by: Jani Nikula 
---
 Documentation/gpu/todo.rst | 25 +
 1 file changed, 25 insertions(+)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 513b20ccef1e..04ef31e3405f 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -480,6 +480,31 @@ Contact: Thomas Zimmermann 
 
 Level: Starter
 
+Convert core and drivers from struct edid to struct drm_edid
+
+
+Go through all drivers and drm core KMS code to convert all raw struct edid
+usage to the opaque struct drm_edid. See commit e4ccf9a777d3 ("drm/edid: add
+struct drm_edid container") for rationale.
+
+Convert drm_get_edid() and drm_do_get_edid() usage to drm_edid_read(),
+drm_edid_read_ddc(), or drm_edid_read_custom().
+
+Convert drm_add_edid_modes() and drm_connector_update_edid_property() to
+drm_edid_connector_update(). See drm_connector_helper_get_modes() for reference
+for converting the ->get_modes() hooks.
+
+Convert decentralized, direct struct edid parsing to centralized parsing in
+drm_edid.c. Prefer one-time parsing as part of drm_edid_connector_update() and
+storing the result in drm_connector->display_info over adding individual,
+exported parser functions.
+
+During the transition period, it may be necessary to use drm_edid_raw(), but do
+use it sparingly. Eventually, all of them need to go.
+
+Contact: Jani Nikula 
+
+Level: Intermediate
 
 Core refactorings
=================
-- 
2.30.2
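To make the conversion the TODO above describes concrete, here is a hedged before/after sketch for a driver ->get_modes() hook, loosely modeled on drm_connector_helper_get_modes(). It is not compilable outside a kernel tree; "foo" and its "ddc" adapter are hypothetical names, and the exact return-value semantics of drm_edid_connector_update() should be checked against the in-flight drm_edid work rather than taken from this sketch.

```c
/* Before: raw struct edid, property update and parsing done separately. */
static int foo_get_modes_old(struct drm_connector *connector)
{
	struct edid *edid = drm_get_edid(connector, foo->ddc); /* hypothetical ddc */
	int count;

	drm_connector_update_edid_property(connector, edid);
	count = drm_add_edid_modes(connector, edid);
	kfree(edid);

	return count;
}

/* After: opaque struct drm_edid, centralized one-time parsing. */
static int foo_get_modes_new(struct drm_connector *connector)
{
	const struct drm_edid *drm_edid = drm_edid_read(connector);
	int count;

	/* Updates the EDID property and display_info even on NULL read. */
	count = drm_edid_connector_update(connector, drm_edid);

	drm_edid_free(drm_edid);

	return count;
}
```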



Re: [Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost

2022-06-08 Thread Jani Nikula
On Tue, 07 Jun 2022, John Harrison  wrote:
> Oops. Just saw your follow up message. No worries!

Again, sorry for the noise, and for wasting your time!

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [PATCH] drm/i915/overlay: remove redundant GEM_BUG_ON()

2022-06-08 Thread Tvrtko Ursulin



On 16/05/2022 09:10, Jani Nikula wrote:

There's an early return for !engine->kernel_context.

Signed-off-by: Jani Nikula 
---
  drivers/gpu/drm/i915/display/intel_overlay.c | 2 --
  1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index ee46561b5ae8..79ed8bd04a07 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -1399,8 +1399,6 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
  
  	overlay->i915 = dev_priv;

overlay->context = engine->kernel_context;
-   GEM_BUG_ON(!overlay->context);
-
overlay->color_key = 0x0101fe;
overlay->color_key_enabled = true;
overlay->brightness = -19;


Reviewed-by: Tvrtko Ursulin 

Regards,

Tvrtko


Re: [Intel-gfx] [PATCH] drm/i915/pxp: fix sparse warning for not declared symbol

2022-06-08 Thread Tvrtko Ursulin



On 06/05/2022 13:04, Jani Nikula wrote:

Fix:

drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c:61:6: warning: symbol
'intel_pxp_debugfs_register' was not declared. Should it be static?

Sort and remove the redundant pxp prefixes from the includes while at
it.

Signed-off-by: Jani Nikula 
---
  drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c b/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
index c9da1015eb42..e888b5124a07 100644
--- a/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
+++ b/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
@@ -9,9 +9,10 @@
  #include 
  
  #include "gt/intel_gt_debugfs.h"

-#include "pxp/intel_pxp.h"
-#include "pxp/intel_pxp_irq.h"
  #include "i915_drv.h"
+#include "intel_pxp.h"
+#include "intel_pxp_debugfs.h"
+#include "intel_pxp_irq.h"
  
  static int pxp_info_show(struct seq_file *m, void *data)

  {


Reviewed-by: Tvrtko Ursulin 

Let's try to copy domain owners when doing cleanups to lessen the 
maintainer load, since it appears people are not really scanning the 
upstream mailing list these days. :(


Regards,

Tvrtko


Re: [Intel-gfx] [PATCH] drm/i915/uc: remove accidental static from a local variable

2022-06-08 Thread Tvrtko Ursulin



On 11/05/2022 10:46, Jani Nikula wrote:

The arrays are static const, but the pointer shouldn't be static.

Fixes: 3d832f370d16 ("drm/i915/uc: Allow platforms to have GuC but not HuC")
Cc: John Harrison 
Cc: Lucas De Marchi 
Cc: Daniele Ceraolo Spurio 
Signed-off-by: Jani Nikula 
---
  drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index 9361532726d6..d2c5c9367cc4 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -156,7 +156,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
[INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
[INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
};
-   static const struct uc_fw_platform_requirement *fw_blobs;
+   const struct uc_fw_platform_requirement *fw_blobs;
enum intel_platform p = INTEL_INFO(i915)->platform;
u32 fw_count;
u8 rev = INTEL_REVID(i915);


Reviewed-by: Tvrtko Ursulin 

Domain owners were copied in this case (mailman just dropped it from the 
mailing list copy?) on this one, but maybe needed extra prodding.


Regards,

Tvrtko


[Intel-gfx] ✗ Fi.CI.BUILD: failure for tests/kms_async_flips: first async flip discarded on i915 (rev2)

2022-06-08 Thread Patchwork
== Series Details ==

Series: tests/kms_async_flips: first async flip discarded on i915 (rev2)
URL   : https://patchwork.freedesktop.org/series/104876/
State : failure

== Summary ==

Error: patch https://patchwork.freedesktop.org/api/1.0/series/104876/revisions/2/mbox/ not applied
Applying: tests/kms_async_flips: first async flip discarded on i915
error: sha1 information is lacking or useless (tests/kms_async_flips.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 tests/kms_async_flips: first async flip discarded on i915
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".




[Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/xehp: Correct steering initialization

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/xehp: Correct steering initialization
URL   : https://patchwork.freedesktop.org/series/104842/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_11732_full -> Patchwork_104842v1_full


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_104842v1_full absolutely need 
to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_104842v1_full, please notify your bug team to allow 
them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (10 -> 13)
--

  Additional (3): shard-rkl shard-dg1 shard-tglu 

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_104842v1_full:

### IGT changes ###

 Possible regressions 

  * igt@i915_module_load@reload-with-fault-injection:
- shard-snb:  [PASS][1] -> [DMESG-WARN][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-snb6/igt@i915_module_l...@reload-with-fault-injection.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-snb2/igt@i915_module_l...@reload-with-fault-injection.html

  * igt@i915_pm_sseu@full-enable:
- shard-skl:  NOTRUN -> [FAIL][3]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-skl7/igt@i915_pm_s...@full-enable.html

  * igt@kms_flip@flip-vs-panning-vs-hang:
- shard-kbl:  NOTRUN -> [INCOMPLETE][4]
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-kbl1/igt@kms_f...@flip-vs-panning-vs-hang.html

  
Known issues


  Here are the changes found in Patchwork_104842v1_full that come from known 
issues:

### CI changes ###

 Issues hit 

  * boot:
- shard-glk:  ([PASS][5], [PASS][6], [PASS][7], [PASS][8], 
[PASS][9], [PASS][10], [PASS][11], [PASS][12], [PASS][13], [PASS][14], 
[PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19], [PASS][20], 
[PASS][21], [PASS][22], [PASS][23], [PASS][24], [PASS][25], [PASS][26], 
[PASS][27], [PASS][28]) -> ([PASS][29], [PASS][30], [PASS][31], [PASS][32], 
[PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], 
[PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43], [PASS][44], 
[PASS][45], [PASS][46], [PASS][47], [PASS][48], [FAIL][49], [PASS][50], 
[PASS][51], [PASS][52], [PASS][53]) ([i915#4392])
   [5]-[28]: per-shard boot logs under https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/
   [29]-[53]: per-shard boot logs under https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/edid: expand on struct drm_edid usage (rev3)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/edid: expand on struct drm_edid usage (rev3)
URL   : https://patchwork.freedesktop.org/series/104309/
State : warning

== Summary ==

Error: dim checkpatch failed
cbd046eb3347 drm/edid: fix CTA data block collection size for CTA version 3
57c0da5db445 drm/edid: abstract cea data block collection size
8a18cc54c495 drm/edid: add block count and data helper functions for drm_edid
339317982780 drm/edid: keep track of alloc size in drm_do_get_edid()
20f7e6373c64 drm/edid: add new interfaces around struct drm_edid
-:320: WARNING:LONG_LINE: line length of 118 exceeds 100 columns
#320: FILE: include/drm/drm_edid.h:604:
+   int (*read_block)(void *context, u8 *buf, unsigned int block, size_t len),

total: 0 errors, 1 warnings, 0 checks, 291 lines checked
7bb7d4b16112 drm/edid: add drm_edid_connector_update()
923ccd9981fa drm/probe-helper: abstract .get_modes() connector helper call
05977fe383c7 drm/probe-helper: add drm_connector_helper_get_modes()
-:44: WARNING:TYPO_SPELLING: 'succesfully' may be misspelled - perhaps 
'successfully'?
#44: FILE: drivers/gpu/drm/drm_probe_helper.c:1075:
+* succesfully, fill in the connector information derived from the
   ^^^

total: 0 errors, 1 warnings, 0 checks, 43 lines checked
b6e91df12a6b drm/edid: add drm_edid_raw() to access the raw EDID data
e76cd9721c85 drm/i915/edid: convert DP, HDMI and LVDS to drm_edid
-:260: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "drm_edid"
#260: FILE: drivers/gpu/drm/i915/display/intel_hdmi.c:2429:
+   intel_hdmi_dp_dual_mode_detect(connector, drm_edid != NULL);

total: 0 errors, 0 warnings, 1 checks, 308 lines checked
65d8f4dc11db drm/i915/bios: convert intel_bios_init_panel() to drm_edid
e789e7b1cbc0 drm/edid: do invalid block filtering in-place
bfe12401468e drm/edid: add HF-EEODB support to EDID read and allocation
60e4c8b9d502 drm/edid: take HF-EEODB extension count into account
6707dfd9931f drm/todo: add entry for converting the subsystem to struct drm_edid




[Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/edid: expand on struct drm_edid usage (rev3)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/edid: expand on struct drm_edid usage (rev3)
URL   : https://patchwork.freedesktop.org/series/104309/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgv_sriovmsg.h:314:49: error: static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Tvrtko Ursulin



On 08/06/2022 07:40, Lionel Landwerlin wrote:

On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura 
wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
 wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
>On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura wrote:
>> VM_BIND and related uapi definitions
>>
>> v2: Ensure proper kernel-doc formatting with cross references.
>> Also add new uapi and documentation as per review comments
>> from Daniel.
>>
>> Signed-off-by: Niranjana Vishwanathapura 


>> ---
>>  Documentation/gpu/rfc/i915_vm_bind.h | 399 
+++

>>  1 file changed, 399 insertions(+)
>>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h
>>
>> diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 
b/Documentation/gpu/rfc/i915_vm_bind.h

>> new file mode 100644
>> index ..589c0a009107
>> --- /dev/null
>> +++ b/Documentation/gpu/rfc/i915_vm_bind.h
>> @@ -0,0 +1,399 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2022 Intel Corporation
>> + */
>> +
>> +/**
>> + * DOC: I915_PARAM_HAS_VM_BIND
>> + *
>> + * VM_BIND feature availability.
>> + * See typedef drm_i915_getparam_t param.
>> + */
>> +#define I915_PARAM_HAS_VM_BIND   57
>> +
>> +/**
>> + * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
>> + *
>> + * Flag to opt-in for VM_BIND mode of binding during VM creation.
>> + * See struct drm_i915_gem_vm_control flags.
>> + *
>> + * A VM in VM_BIND mode will not support the older execbuff 
mode of binding.
>> + * In VM_BIND mode, execbuff ioctl will not accept any 
execlist (ie., the

>> + * &drm_i915_gem_execbuffer2.buffer_count must be 0).
>> + * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
>> + * &drm_i915_gem_execbuffer2.batch_len must be 0.
>> + * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must 
be provided

>> + * to pass in the batch buffer addresses.
>> + *
>> + * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
>> + * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags 
must be 0
>> + * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag 
must always be

>> + * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
>> + * The buffers_ptr, buffer_count, batch_start_offset and 
batch_len fields
>> + * of struct drm_i915_gem_execbuffer2 are also not used and 
must be 0.

>> + */
>
>From that description, it seems we have:
>
>struct drm_i915_gem_execbuffer2 {
>    __u64 buffers_ptr;  -> must be 0 (new)
>    __u32 buffer_count; -> must be 0 (new)
>    __u32 batch_start_offset;   -> must be 0 (new)
>    __u32 batch_len;    -> must be 0 (new)
>    __u32 DR1;  -> must be 0 (old)
>    __u32 DR4;  -> must be 0 (old)
>    __u32 num_cliprects; (fences)   -> must be 0 since using 
extensions
>    __u64 cliprects_ptr; (fences, extensions) -> contains an 
actual pointer!
>    __u64 flags;    -> some flags must be 0 
(new)

>    __u64 rsvd1; (context info) -> repurposed field (old)
>    __u64 rsvd2;    -> unused
>};
>
>Based on that, why can't we just get drm_i915_gem_execbuffer3 
instead

>of adding even more complexity to an already abused interface? While
>the Vulkan-like extension thing is really nice, I don't think what
>we're doing here is extending the ioctl usage, we're completely
>changing how the base struct should be interpreted based on how 
the VM

>was created (which is an entirely different ioctl).
>
>From Rusty Russell's API Design grading, drm_i915_gem_execbuffer2 is
>already at -6 without these changes. I think after vm_bind we'll 
need

>to create a -11 entry just to deal with this ioctl.
>

The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we need to be 
careful
with that as that goes beyond the VM_BIND support, including any 
future

requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really that different
than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy bits, leave the
older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we share code (where it even makes sense, p

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Lionel Landwerlin

On 08/06/2022 11:36, Tvrtko Ursulin wrote:


On 08/06/2022 07:40, Lionel Landwerlin wrote:

On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura 
wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
 wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
>On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura 
wrote:

>> VM_BIND and related uapi definitions
>>
>> v2: Ensure proper kernel-doc formatting with cross references.
>> Also add new uapi and documentation as per review comments
>> from Daniel.
>>
>> Signed-off-by: Niranjana Vishwanathapura 


>> ---
>>  Documentation/gpu/rfc/i915_vm_bind.h | 399 
+++

>>  1 file changed, 399 insertions(+)
>>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h
>>
>> diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 
b/Documentation/gpu/rfc/i915_vm_bind.h

>> new file mode 100644
>> index ..589c0a009107
>> --- /dev/null
>> +++ b/Documentation/gpu/rfc/i915_vm_bind.h
>> @@ -0,0 +1,399 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2022 Intel Corporation
>> + */
>> +
>> +/**
>> + * DOC: I915_PARAM_HAS_VM_BIND
>> + *
>> + * VM_BIND feature availability.
>> + * See typedef drm_i915_getparam_t param.
>> + */
>> +#define I915_PARAM_HAS_VM_BIND 57
>> +
>> +/**
>> + * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
>> + *
>> + * Flag to opt-in for VM_BIND mode of binding during VM 
creation.

>> + * See struct drm_i915_gem_vm_control flags.
>> + *
>> + * A VM in VM_BIND mode will not support the older execbuff 
mode of binding.
>> + * In VM_BIND mode, execbuff ioctl will not accept any 
execlist (ie., the

>> + * &drm_i915_gem_execbuffer2.buffer_count must be 0).
>> + * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
>> + * &drm_i915_gem_execbuffer2.batch_len must be 0.
>> + * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension 
must be provided

>> + * to pass in the batch buffer addresses.
>> + *
>> + * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
>> + * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags 
must be 0
>> + * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag 
must always be
>> + * set (See struct 
drm_i915_gem_execbuffer_ext_batch_addresses).
>> + * The buffers_ptr, buffer_count, batch_start_offset and 
batch_len fields
>> + * of struct drm_i915_gem_execbuffer2 are also not used and 
must be 0.

>> + */
>
>From that description, it seems we have:
>
>struct drm_i915_gem_execbuffer2 {
>    __u64 buffers_ptr;  -> must be 0 (new)
>    __u32 buffer_count; -> must be 0 (new)
>    __u32 batch_start_offset;   -> must be 0 (new)
>    __u32 batch_len;    -> must be 0 (new)
>    __u32 DR1;  -> must be 0 (old)
>    __u32 DR4;  -> must be 0 (old)
>    __u32 num_cliprects; (fences)   -> must be 0 since 
using extensions
>    __u64 cliprects_ptr; (fences, extensions) -> contains 
an actual pointer!
>    __u64 flags;    -> some flags must be 0 
(new)

>    __u64 rsvd1; (context info) -> repurposed field (old)
>    __u64 rsvd2;    -> unused
>};
>
>Based on that, why can't we just get drm_i915_gem_execbuffer3 
instead
>of adding even more complexity to an already abused interface? 
While

>the Vulkan-like extension thing is really nice, I don't think what
>we're doing here is extending the ioctl usage, we're completely
>changing how the base struct should be interpreted based on how 
the VM

>was created (which is an entirely different ioctl).
>
>From Rusty Russell's API Design grading, drm_i915_gem_execbuffer2 is
>already at -6 without these changes. I think after vm_bind 
we'll need

>to create a -11 entry just to deal with this ioctl.
>

The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we need to 
be careful
with that as that goes beyond the VM_BIND support, including any 
future

requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really that 
different

than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy bits, 
leave the

older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we s

Re: [Intel-gfx] [PATCH] drm/i915/xehp: Correct steering initialization

2022-06-08 Thread Balasubramani Vivekanandan
On 07.06.2022 10:57, Matt Roper wrote:
> Another mistake during the conversion to DSS bitmaps:  after retrieving
> the DSS ID intel_sseu_find_first_xehp_dss() we forgot to modulo it down
> to obtain which ID within the current gslice it is.
> 
> Fixes: b87d39019651 ("drm/i915/sseu: Disassociate internal subslice mask 
> representation from uapi")
> Cc: Balasubramani Vivekanandan 
> Signed-off-by: Matt Roper 
> ---
>  drivers/gpu/drm/i915/gt/intel_workarounds.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> index b7421f109c13..a5c0508c5b63 100644
> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> @@ -1177,8 +1177,8 @@ xehp_init_mcr(struct intel_gt *gt, struct i915_wa_list *wal)
>   }
>  
>   slice = __ffs(slice_mask);
> - subslice = intel_sseu_find_first_xehp_dss(sseu, GEN_DSS_PER_GSLICE, slice);
> - WARN_ON(subslice > GEN_DSS_PER_GSLICE);
> + subslice = intel_sseu_find_first_xehp_dss(sseu, GEN_DSS_PER_GSLICE, slice) %
> + GEN_DSS_PER_GSLICE;

Acked-by: Balasubramani Vivekanandan 
>  
>   __add_mcr_wa(gt, wal, slice, subslice);
>  
> -- 
> 2.35.3
> 


Re: [Intel-gfx] [PATCH] drm/i915/pxp: fix sparse warning for not declared symbol

2022-06-08 Thread Jani Nikula
On Wed, 08 Jun 2022, Tvrtko Ursulin  wrote:
> On 06/05/2022 13:04, Jani Nikula wrote:
>> Fix:
>> 
>> drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c:61:6: warning: symbol
>> 'intel_pxp_debugfs_register' was not declared. Should it be static?
>> 
>> Sort and remove the redundant pxp prefixes from the includes while at
>> it.
>> 
>> Signed-off-by: Jani Nikula 
>> ---
>>   drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c b/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
>> index c9da1015eb42..e888b5124a07 100644
>> --- a/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
>> +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.c
>> @@ -9,9 +9,10 @@
>>   #include 
>>   
>>   #include "gt/intel_gt_debugfs.h"
>> -#include "pxp/intel_pxp.h"
>> -#include "pxp/intel_pxp_irq.h"
>>   #include "i915_drv.h"
>> +#include "intel_pxp.h"
>> +#include "intel_pxp_debugfs.h"
>> +#include "intel_pxp_irq.h"
>>   
>>   static int pxp_info_show(struct seq_file *m, void *data)
>>   {
>
> Reviewed-by: Tvrtko Ursulin 

Thanks, pushed to din.

BR,
Jani.

>
> Let's try to copy domain owners when doing cleanups to lessen the 
> maintainer load, since it appears people are not really scanning the 
> upstream mailing list these days. :(
>
> Regards,
>
> Tvrtko

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [PATCH] drm/i915/overlay: remove redundant GEM_BUG_ON()

2022-06-08 Thread Jani Nikula
On Wed, 08 Jun 2022, Tvrtko Ursulin  wrote:
> On 16/05/2022 09:10, Jani Nikula wrote:
>> There's an early return for !engine->kernel_context.
>> 
>> Signed-off-by: Jani Nikula 
>> ---
>>   drivers/gpu/drm/i915/display/intel_overlay.c | 2 --
>>   1 file changed, 2 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
>> index ee46561b5ae8..79ed8bd04a07 100644
>> --- a/drivers/gpu/drm/i915/display/intel_overlay.c
>> +++ b/drivers/gpu/drm/i915/display/intel_overlay.c
>> @@ -1399,8 +1399,6 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
>>   
>>  overlay->i915 = dev_priv;
>>  overlay->context = engine->kernel_context;
>> -GEM_BUG_ON(!overlay->context);
>> -
>>  overlay->color_key = 0x0101fe;
>>  overlay->color_key_enabled = true;
>>  overlay->brightness = -19;
>
> Reviewed-by: Tvrtko Ursulin 

Thanks, pushed to din.

BR,
Jani.

>
> Regards,
>
> Tvrtko

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [PATCH] drm/i915/uc: remove accidental static from a local variable

2022-06-08 Thread Jani Nikula
On Wed, 08 Jun 2022, Tvrtko Ursulin  wrote:
> On 11/05/2022 10:46, Jani Nikula wrote:
>> The arrays are static const, but the pointer shouldn't be static.
>> 
>> Fixes: 3d832f370d16 ("drm/i915/uc: Allow platforms to have GuC but not HuC")
>> Cc: John Harrison 
>> Cc: Lucas De Marchi 
>> Cc: Daniele Ceraolo Spurio 
>> Signed-off-by: Jani Nikula 
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c 
>> b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
>> index 9361532726d6..d2c5c9367cc4 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
>> @@ -156,7 +156,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, 
>> struct intel_uc_fw *uc_fw)
>>  [INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
>>  [INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
>>  };
>> -static const struct uc_fw_platform_requirement *fw_blobs;
>> +const struct uc_fw_platform_requirement *fw_blobs;
>>  enum intel_platform p = INTEL_INFO(i915)->platform;
>>  u32 fw_count;
>>  u8 rev = INTEL_REVID(i915);
>
> Reviewed-by: Tvrtko Ursulin 

Thanks, pushed to drm-intel-gt-next.

BR,
Jani.


>
> Domain owners were copied in this case (mailman just dropped it from the 
> mailing list copy?) on this one, but maybe needed extra prodding.
>
> Regards,
>
> Tvrtko

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Tvrtko Ursulin



On 08/06/2022 09:45, Lionel Landwerlin wrote:

On 08/06/2022 11:36, Tvrtko Ursulin wrote:


On 08/06/2022 07:40, Lionel Landwerlin wrote:

On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura 
wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
 wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
>On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura 
wrote:

>> VM_BIND and related uapi definitions
>>
>> v2: Ensure proper kernel-doc formatting with cross references.
>> Also add new uapi and documentation as per review comments
>> from Daniel.
>>
>> Signed-off-by: Niranjana Vishwanathapura 


>> ---
>>  Documentation/gpu/rfc/i915_vm_bind.h | 399 
+++

>>  1 file changed, 399 insertions(+)
>>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h
>>
>> diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 
b/Documentation/gpu/rfc/i915_vm_bind.h

>> new file mode 100644
>> index ..589c0a009107
>> --- /dev/null
>> +++ b/Documentation/gpu/rfc/i915_vm_bind.h
>> @@ -0,0 +1,399 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2022 Intel Corporation
>> + */
>> +
>> +/**
>> + * DOC: I915_PARAM_HAS_VM_BIND
>> + *
>> + * VM_BIND feature availability.
>> + * See typedef drm_i915_getparam_t param.
>> + */
>> +#define I915_PARAM_HAS_VM_BIND 57
>> +
>> +/**
>> + * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
>> + *
>> + * Flag to opt-in for VM_BIND mode of binding during VM 
creation.

>> + * See struct drm_i915_gem_vm_control flags.
>> + *
>> + * A VM in VM_BIND mode will not support the older execbuff 
mode of binding.
>> + * In VM_BIND mode, execbuff ioctl will not accept any 
execlist (ie., the

>> + * &drm_i915_gem_execbuffer2.buffer_count must be 0).
>> + * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
>> + * &drm_i915_gem_execbuffer2.batch_len must be 0.
>> + * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension 
must be provided

>> + * to pass in the batch buffer addresses.
>> + *
>> + * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
>> + * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags 
must be 0
>> + * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag 
must always be
>> + * set (See struct 
drm_i915_gem_execbuffer_ext_batch_addresses).
>> + * The buffers_ptr, buffer_count, batch_start_offset and 
batch_len fields
>> + * of struct drm_i915_gem_execbuffer2 are also not used and 
must be 0.

>> + */
>
>From that description, it seems we have:
>
>struct drm_i915_gem_execbuffer2 {
>    __u64 buffers_ptr;  -> must be 0 (new)
>    __u32 buffer_count; -> must be 0 (new)
>    __u32 batch_start_offset;   -> must be 0 (new)
>    __u32 batch_len;    -> must be 0 (new)
>    __u32 DR1;  -> must be 0 (old)
>    __u32 DR4;  -> must be 0 (old)
>    __u32 num_cliprects; (fences)   -> must be 0 since 
using extensions
>    __u64 cliprects_ptr; (fences, extensions) -> contains 
an actual pointer!
>    __u64 flags;    -> some flags must be 0 
(new)

>    __u64 rsvd1; (context info) -> repurposed field (old)
>    __u64 rsvd2;    -> unused
>};
>
>Based on that, why can't we just get drm_i915_gem_execbuffer3 
instead
>of adding even more complexity to an already abused interface? 
While

>the Vulkan-like extension thing is really nice, I don't think what
>we're doing here is extending the ioctl usage, we're completely
>changing how the base struct should be interpreted based on how 
the VM

>was created (which is an entirely different ioctl).
>
>From Rusty Russell's API Design grading, 
drm_i915_gem_execbuffer2 is
>already at -6 without these changes. I think after vm_bind 
we'll need

>to create a -11 entry just to deal with this ioctl.
>

The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we need to 
be careful
with that as that goes beyond the VM_BIND support, including any 
future

requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really that 
different

than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy bits, 
leave the

older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, the

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Matthew Auld

On 08/06/2022 08:17, Tvrtko Ursulin wrote:


On 07/06/2022 20:37, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:27:14AM +0100, Tvrtko Ursulin wrote:


On 17/05/2022 19:32, Niranjana Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
    Also add new uapi and documentation as per review comments
    from Daniel.

Signed-off-by: Niranjana Vishwanathapura 


---
 Documentation/gpu/rfc/i915_vm_bind.h | 399 +++
 1 file changed, 399 insertions(+)
 create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 
b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND    57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older execbuff mode of 
binding.
+ * In VM_BIND mode, execbuff ioctl will not accept any execlist 
(ie., the

+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must be 
provided

+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag must 
always be

+ * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, batch_start_offset and batch_len 
fields

+ * of struct drm_i915_gem_execbuffer2 are also not used and must be 0.
+ */
+#define I915_VM_CREATE_FLAGS_USE_VM_BIND    (1 << 0)
+
+/**
+ * DOC: I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING
+ *
+ * Flag to declare context as long running.
+ * See struct drm_i915_gem_context_create_ext flags.
+ *
+ * Usage of dma-fence expects that they complete in reasonable 
amount of time.
+ * Compute on the other hand can be long running. Hence it is not 
appropriate
+ * for compute contexts to export request completion dma-fence to 
user.

+ * The dma-fence usage will be limited to in-kernel consumption only.
+ * Compute contexts need to use user/memory fence.
+ *
+ * So, long running contexts do not support output fences. Hence,
+ * I915_EXEC_FENCE_OUT (See &drm_i915_gem_execbuffer2.flags and
+ * I915_EXEC_FENCE_SIGNAL (See &drm_i915_gem_exec_fence.flags) are 
expected

+ * to be not used.
+ *
+ * DRM_I915_GEM_WAIT ioctl call is also not supported for objects 
mapped

+ * to long running contexts.
+ */
+#define I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING   (1u << 2)
+
+/* VM_BIND related ioctls */
+#define DRM_I915_GEM_VM_BIND    0x3d
+#define DRM_I915_GEM_VM_UNBIND    0x3e
+#define DRM_I915_GEM_WAIT_USER_FENCE    0x3f
+
+#define DRM_IOCTL_I915_GEM_VM_BIND    DRM_IOWR(DRM_COMMAND_BASE 
+ DRM_I915_GEM_VM_BIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_VM_UNBIND DRM_IOWR(DRM_COMMAND_BASE + 
DRM_I915_GEM_VM_UNBIND, struct drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_WAIT_USER_FENCE 
DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT_USER_FENCE, struct 
drm_i915_gem_wait_user_fence)

+
+/**
+ * struct drm_i915_gem_vm_bind - VA to object mapping to bind.
+ *
+ * This structure is passed to VM_BIND ioctl and specifies the 
mapping of GPU
+ * virtual address (VA) range to the section of an object that 
should be bound

+ * in the device page table of the specified address space (VM).
+ * The VA range specified must be unique (ie., not currently bound) 
and can
+ * be mapped to whole object or a section of the object (partial 
binding).
+ * Multiple VA mappings can be created to the same section of the 
object

+ * (aliasing).
+ */
+struct drm_i915_gem_vm_bind {
+    /** @vm_id: VM (address space) id to bind */
+    __u32 vm_id;
+
+    /** @handle: Object handle */
+    __u32 handle;
+
+    /** @start: Virtual Address start to bind */
+    __u64 start;
+
+    /** @offset: Offset in object to bind */
+    __u64 offset;
+
+    /** @length: Length of mapping to bind */
+    __u64 length;


Does it support, or should it, an equivalent of EXEC_OBJECT_PAD_TO_SIZE? 
Or if not, is userspace expected to map the remainder of the space to 
a dummy object? In which case, would there be any alignment/padding 
issues preventing the two binds from being placed next to each other?


I ask because someone from the compute side asked me about a problem 
with their strategy of dealing with overfetch and I suggested pad

Re: [Intel-gfx] [PATCH 1/2] drm/i915/opregion: add function to check if headless sku

2022-06-08 Thread Hogander, Jouni
On Mon, 2022-06-06 at 13:15 +, Souza, Jose wrote:
> On Mon, 2022-06-06 at 11:16 +0300, Jani Nikula wrote:
> > On Mon, 06 Jun 2022, "Hogander, Jouni" 
> > wrote:
> > > On Fri, 2022-06-03 at 16:32 +, Souza, Jose wrote:
> > > > On Fri, 2022-06-03 at 13:14 +, Hogander, Jouni wrote:
> > > > > On Fri, 2022-06-03 at 15:43 +0300, Jani Nikula wrote:
> > > > > > On Fri, 03 Jun 2022, Jouni Högander <
> > > > > > jouni.hogan...@intel.com>
> > > > > > wrote:
> > > > > > > Export headless sku bit (bit 13) from opregion->header-
> > > > > > > >pcon as
> > > > > > > an
> > > > > > > interface to check if our device is headless
> > > > > > > configuration.
> > > > > > > 
> > > > > > > Bspec: 53441
> > > > > > > Signed-off-by: Jouni Högander 
> > > > > > > ---
> > > > > > >  drivers/gpu/drm/i915/display/intel_opregion.c | 12
> > > > > > > 
> > > > > > >  drivers/gpu/drm/i915/display/intel_opregion.h |  7
> > > > > > > +++
> > > > > > >  2 files changed, 19 insertions(+)
> > > > > > > 
> > > > > > > diff --git
> > > > > > > a/drivers/gpu/drm/i915/display/intel_opregion.c
> > > > > > > b/drivers/gpu/drm/i915/display/intel_opregion.c
> > > > > > > index f31e8c3f8ce0..eab3f2e6b786 100644
> > > > > > > --- a/drivers/gpu/drm/i915/display/intel_opregion.c
> > > > > > > +++ b/drivers/gpu/drm/i915/display/intel_opregion.c
> > > > > > > @@ -53,6 +53,8 @@
> > > > > > >  #define MBOX_ASLE_EXTBIT(4)/* Mailbox #5 */
> > > > > > >  #define MBOX_BACKLIGHTBIT(5)/* Mailbox #2
> > > > > > > (valid from v3.x) */
> > > > > > > 
> > > > > > > +#define PCON_HEADLESS_SKUBIT(13)
> > > > > > 
> > > > > > Here we go again.
> > > > > > 
> > > > > > What does headless mean here? The spec does not say. Does
> > > > > > it have
> > > > > > display hardware? Apparently yes, since otherwise we
> > > > > > wouldn't be
> > > > > > here.
> > > > > 
> > > > > This is for a hybrid setup with several display hw and the
> > > > > panel won't be
> > > > > connected to the device driven by the i915 driver.
> > > > > 
> > > > > > We have INTEL_DISPLAY_ENABLED() which should do the right
> > > > > > thing
> > > > > > when
> > > > > > you
> > > > > > do have display hardware and have done output setup etc.
> > > > > > but want
> > > > > > to
> > > > > > force them disconnected, i.e. you take the hardware over
> > > > > > properly,
> > > > > > but
> > > > > > put it to sleep for power savings.
> > > > > > 
> > > > > > Maybe we should bolt this opregion check in that macro?
> > > > > > 
> > > > > > Maybe we need to use INTEL_DISPLAY_ENABLED() also to
> > > > > > prevent
> > > > > > polling.
> > > > > 
> > > > > Thank you for pointing this out. HAS_DISPLAY I already noticed
> > > > > and
> > > > > it's
> > > > > not suitable for what we want here. I think bolting this
> > > > > check into
> > > > > INTEL_DISPLAY_ENABLED as you suggested is enough. That will
> > > > > prevent
> > > > > waking up the hw into D0 state for polling.
> > > > 
> > > > A headless sku should not have any DDI ports enabled, much
> > > > easier
> > > > check for that.
> > > 
> > > Could you please clarify this a bit? What exactly you are
> > > thinking
> > > should be checked? Aren't DDI port also disabled when non-
> > > headless
> > > setup is in runtime suspend?
> > 
> > I also think "headless" and "DDI ports enabled" need clarification.
> > They
> > are overloaded terms.
> 
> In a properly setup headless sku, VBT should have all ports marked as
> disabled.

Should we take into account the headless bit in opregion possibly
conflicting with the VBT contents?

Now as you pointed this out, I started to think the
intel_opregion_headless_sku check could also go here as an additional
check:

if (!init_dp && !init_hdmi || intel_opregion_headless_sku()) {

This would prevent initializing any connectors, including setting them
up for polling?

> 
> intel_ddi_init() {
> ...
> 
> if (!init_dp && !init_hdmi) {
> drm_dbg_kms(&dev_priv->drm,
> "VBT says port %c is not DVI/HDMI/DP compatible, respect it\n",
> port_name(port));
> return;
> }
> 
> 
> All DDI should return earlier in the above.
> So you can use the number of enabled connectors to know if it is a
> headless sku or not.
> 
> So you can skip the polling in case there are no connectors.
> 
> > Seems to me the use case here could be the same as
> > INTEL_DISPLAY_ENABLED(), and that could benefit from polling
> > disable.
> > 
> > BR,
> > Jani.
> > 
> > 
> > > > > > I certainly would not want to add another mode that's
> > > > > > separate
> > > > > > from
> > > > > > HAS_DISPLAY() and INTEL_DISPLAY_ENABLED().
> > > > > 
> > > > > No need for this. I think we can go with
> > > > > INTEL_DISPLAY_ENABLED.
> > > > > > > +
> > > > > > >  struct opregion_header {
> > > > > > >  u8 signature[16];
> > > > > > >  u32 size;
> > > > > > > @@ -1135,6 +1137,16 @@ struct edid
> > > > > > > *intel_opregion_get_edid(struct
> > > > > > > intel_connector *intel_connector)
> > > > > > >  return new_edid;
> > > > > > >  }
> > > > > > > 
> > > > > > > 

Re: [Intel-gfx] [PATCH] drm/i915: More PVC+DG2 workarounds

2022-06-08 Thread Gupta, Anshuman



> -Original Message-
> From: Roper, Matthew D 
> Sent: Wednesday, June 8, 2022 6:21 AM
> To: intel-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org; Roper, Matthew D
> ; Gupta, Anshuman
> ; Nilawar, Badal ;
> Kumar Valsan, Prathap 
> Subject: [PATCH] drm/i915: More PVC+DG2 workarounds
> 
> A new PVC+DG2 workaround has appeared recently:
>  - Wa_16015675438
> 
> And a couple existing DG2 workarounds have been extended to PVC:
>  - Wa_14015795083
>  - Wa_18018781329
Looks good to me.
Reviewed-by: Anshuman Gupta 
Regards,
Anshuman Gupta.
> 
> Note that Wa_16015675438 asks us to program a register that is in the 0x2xxx
> range typically associated with the RCS engine, even though PVC does not have
> an RCS.  By default the GuC will think we've made a mistake and throw an
> exception when it sees this register on a CCS engine's save/restore list, so 
> we
> need to pass an extra GuC control flag to tell it that this is expected and 
> not a
> problem.
> 
> Signed-off-by: Anshuman Gupta 
> Signed-off-by: Badal Nilawar 
> Signed-off-by: Prathap Kumar Valsan 
> Signed-off-by: Matt Roper 
> ---
>  drivers/gpu/drm/i915/gt/intel_gt_regs.h |  1 +
>  drivers/gpu/drm/i915/gt/intel_workarounds.c | 24 +++--
>  drivers/gpu/drm/i915/gt/uc/intel_guc.c  |  4 
>  drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h |  1 +
>  4 files changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> index c8129a351731..226557018037 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> @@ -140,6 +140,7 @@
>  #define FF_SLICE_CS_CHICKEN2 _MMIO(0x20e4)
>  #define   GEN9_TSG_BARRIER_ACK_DISABLE   (1 << 8)
>  #define   GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE  (1 << 10)
> +#define   GEN12_PERF_FIX_BALANCING_CFE_DISABLE   REG_BIT(15)
> 
>  #define GEN9_CS_DEBUG_MODE1  _MMIO(0x20ec)
>  #define   FF_DOP_CLOCK_GATE_DISABLE  REG_BIT(1)
> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> index 1e7ca3863f20..e1e70eff9aac 100644
> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> @@ -1491,13 +1491,20 @@ dg2_gt_workarounds_init(struct intel_gt *gt, struct
> i915_wa_list *wal)
>   wa_write_clr(wal, GEN7_MISCCPCTL,
> GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
>  }
> 
> +static void
> +pvc_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
> +{
> + /* Wa_14015795083 */
> + wa_write_clr(wal, GEN7_MISCCPCTL,
> GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
> +}
> +
>  static void
>  gt_init_workarounds(struct intel_gt *gt, struct i915_wa_list *wal)  {
>   struct drm_i915_private *i915 = gt->i915;
> 
>   if (IS_PONTEVECCHIO(i915))
> - ; /* none yet */
> + pvc_gt_workarounds_init(gt, wal);
>   else if (IS_DG2(i915))
>   dg2_gt_workarounds_init(gt, wal);
>   else if (IS_XEHPSDV(i915))
> @@ -2082,12 +2089,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine,
> struct i915_wa_list *wal)
>* performance guide section.
>*/
>   wa_write_or(wal, XEHP_L3SCQREG7,
> BLEND_FILL_CACHING_OPT_DIS);
> -
> - /* Wa_18018781329:dg2 */
> - wa_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
> - wa_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
> - wa_write_or(wal, VDBX_MOD_CTRL, FORCE_MISS_FTLB);
> - wa_write_or(wal, VEBX_MOD_CTRL, FORCE_MISS_FTLB);
>   }
> 
>   if (IS_DG2_GRAPHICS_STEP(i915, G11, STEP_A0, STEP_B0)) { @@ -
> 2700,6 +2701,15 @@ general_render_compute_wa_init(struct intel_engine_cs
> *engine, struct i915_wa_li
> 
>   /* Wa_22014226127:dg2,pvc */
>   wa_write_or(wal, LSC_CHICKEN_BIT_0,
> DISABLE_D8_D16_COASLESCE);
> +
> + /* Wa_16015675438:dg2,pvc */
> + wa_masked_en(wal, FF_SLICE_CS_CHICKEN2,
> +GEN12_PERF_FIX_BALANCING_CFE_DISABLE);
> +
> + /* Wa_18018781329:dg2,pvc */
> + wa_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
> + wa_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
> + wa_write_or(wal, VDBX_MOD_CTRL, FORCE_MISS_FTLB);
> + wa_write_or(wal, VEBX_MOD_CTRL, FORCE_MISS_FTLB);
>   }
>  }
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> index 2c4ad4a65089..35887cb53201 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
> @@ -327,6 +327,10 @@ static u32 guc_ctl_wa_flags(struct intel_guc *guc)
>   IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_FOREVER))
>   flags |= GUC_WA_CONTEXT_ISOLATION;
> 
> + /* Wa_16015675438 */
> + if (!RCS_MASK(gt))
> + flags |= GUC_WA_RCS_REGS_IN_CCS_REGS_LIS

[Intel-gfx] [PULL] drm-misc-next

2022-06-08 Thread Thomas Zimmermann
Hi Dave and Daniel,

here's the first PR for drm-misc-next that will go into v5.20.

Best regards
Thomas

drm-misc-next-2022-06-08:
drm-misc-next for 5.20:

UAPI Changes:

 * connector: export bpc limits in debugfs

 * dma-buf: Print buffer name in debugfs

Cross-subsystem Changes:

 * dma-buf: Improve dma-fence handling; Cleanups

 * fbdev: Device-unregistering fixes

Core Changes:

 * client: Only use driver-validated modes to avoid blank screen

 * dp-aux: Make probing more reliable; Small fixes

 * edid: CEA data-block iterators; Introduce struct drm_edid; Many cleanups

 * gem: Don't use framebuffer format's non-existing color planes

 * probe-helper: Use 640x480 as DisplayPort fallback; Refactoring

 * scheduler: Don't kill jobs in interrupt context

Driver Changes:

 * amdgpu: Use atomic fence helpers in DM; Fix VRAM address calculation;
   Export CRTC bpc settings via debugfs

 * bridge: Add TI-DLPC3433;  anx7625: Fixes;  fy07024di26a30d: Optional
   GPIO reset;  icn6211: Cleanups;  ldb: Add reg and reg-name properties
   to bindings, Kconfig fixes;  lt9611: Fix display sensing;  lt9611uxc:
   Fixes;  nwl-dsi: Fixes;  ps8640: Cleanups;  st7735r: Fixes;  tc358767:
   DSI/DPI refactoring and DSI-to-eDP support, Fixes; ti-sn65dsi83:
   Fixes;

 * gma500: Cleanup connector I2C handling

 * hyperv: Unify VRAM allocation of Gen1 and Gen2

 * i915: export CRTC bpc settings via debugfs

 * meson: Support YUV422 output; Refcount fixes

 * mgag200: Support damage clipping; Support gamma handling; Protect
   concurrent HW access; Fixes to connector; Store model-specific limits
   in device-info structure; Cleanups

 * nouveau: Fixes and Cleanups

 * panel: Kconfig fixes

 * panfrost: Valhall support

 * r128: Fix bit-shift overflow

 * rockchip: Locking fixes in error path; Minor cleanups

 * ssd130x: Fix built-in linkage

 * ttm: Cleanups

 * udl: Always advertise VGA connector

 * fbdev/vesa: Support COMPILE_TEST

The following changes since commit 6071c4c2a319da360b0bf2bc397d4fefad10b2c8:

  drm/qxl: add drm_gem_plane_helper_prepare_fb (2022-05-05 12:30:10 +0200)

are available in the Git repository at:

  git://anongit.freedesktop.org/drm/drm-misc tags/drm-misc-next-2022-06-08

for you to fetch changes up to dfa687bffc8a4a21ed929c7dececf01b8f1f52ee:

  drm/bridge: lt9611uxc: Cancel only driver's work (2022-06-07 14:57:47 +0200)


drm-misc-next for 5.20:

UAPI Changes:

 * connector: export bpc limits in debugfs

 * dma-buf: Print buffer name in debugfs

Cross-subsystem Changes:

 * dma-buf: Improve dma-fence handling; Cleanups

 * fbdev: Device-unregistering fixes

Core Changes:

 * client: Only use driver-validated modes to avoid blank screen

 * dp-aux: Make probing more reliable; Small fixes

 * edid: CEA data-block iterators; Introduce struct drm_edid; Many cleanups

 * gem: Don't use framebuffer format's non-existing color planes

 * probe-helper: Use 640x480 as DisplayPort fallback; Refactoring

 * scheduler: Don't kill jobs in interrupt context

Driver Changes:

 * amdgpu: Use atomic fence helpers in DM; Fix VRAM address calculation;
   Export CRTC bpc settings via debugfs

 * bridge: Add TI-DLPC3433;  anx7625: Fixes;  fy07024di26a30d: Optional
   GPIO reset;  icn6211: Cleanups;  ldb: Add reg and reg-name properties
   to bindings, Kconfig fixes;  lt9611: Fix display sensing;  lt9611uxc:
   Fixes;  nwl-dsi: Fixes;  ps8640: Cleanups;  st7735r: Fixes;  tc358767:
   DSI/DPI refactoring and DSI-to-eDP support, Fixes; ti-sn65dsi83:
   Fixes;

 * gma500: Cleanup connector I2C handling

 * hyperv: Unify VRAM allocation of Gen1 and Gen2

 * i915: export CRTC bpc settings via debugfs

 * meson: Support YUV422 output; Refcount fixes

 * mgag200: Support damage clipping; Support gamma handling; Protect
   concurrent HW access; Fixes to connector; Store model-specific limits
   in device-info structure; Cleanups

 * nouveau: Fixes and Cleanups

 * panel: Kconfig fixes

 * panfrost: Valhall support

 * r128: Fix bit-shift overflow

 * rockchip: Locking fixes in error path; Minor cleanups

 * ssd130x: Fix built-in linkage

 * ttm: Cleanups

 * udl: Always advertise VGA connector

 * fbdev/vesa: Support COMPILE_TEST


Alyssa Rosenzweig (9):
  dt-bindings: Add compatible for Mali Valhall (JM)
  drm/panfrost: Handle HW_ISSUE_TTRX_2968_TTRX_3162
  drm/panfrost: Constify argument to has_hw_issue
  drm/panfrost: Handle HW_ISSUE_TTRX_3076
  drm/panfrost: Add HW_ISSUE_TTRX_3485 quirk
  drm/panfrost: Add "clean only safe" feature bit
  drm/panfrost: Don't set L2_MMU_CONFIG quirks
  drm/panfrost: Add Mali-G57 "Natt" support
  drm/panfrost: Add arm,mali-valhall-jm compatible

André Almeida (1):
  drm/vkms: Update vkms_composer_worker documentation

Bhanuprakash Modem (3):
  drm/debug: Expose connector's max supported bpc via debugfs
  drm/i915/display/d

[Intel-gfx] [PATCH i-g-t] tests/kms_async_flips: first async flip discarded on i915

2022-06-08 Thread Arun R Murthy
The i915 KMD will use the first async flip to update the watermarks as
per the watermark optimization in DISPLAY13. Hence the actual async
flips will only happen from the subsequent flips.
For the alternating sync/async test, a dummy async flip has to be done
to allow the KMD to perform the watermark-related updates before
writing to the surface base address.

Signed-off-by: Arun R Murthy 
---
 tests/kms_async_flips.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/tests/kms_async_flips.c b/tests/kms_async_flips.c
index 1701883b..67150e50 100644
--- a/tests/kms_async_flips.c
+++ b/tests/kms_async_flips.c
@@ -189,19 +189,18 @@ static void test_async_flip(data_t *data, bool 
alternate_sync_async)
 * In older platforms (<= Gen10), async address update 
bit is double buffered.
 * So flip timestamp can be verified only from the 
second flip.
 * The first async flip just enables the async address 
update.
+* In platforms greater than DISPLAY13 the first async 
flip will be discarded
+* in order to change the watermark levels as per the 
optimization. Hence the
+* subsequent async flips will actually do the 
asynchronous flips.
 */
if (is_i915_device(data->drm_fd)) {
-   uint32_t devid = 
intel_get_drm_devid(data->drm_fd);
+   ret = drmModePageFlip(data->drm_fd, 
data->crtc_id,
+ data->bufs[frame % 
4].fb_id,
+ flags, data);
 
-   if (IS_GEN9(devid) || IS_GEN10(devid)) {
-   ret = drmModePageFlip(data->drm_fd, 
data->crtc_id,
- data->bufs[frame 
% 4].fb_id,
- flags, data);
+   igt_assert(ret == 0);
 
-   igt_assert(ret == 0);
-
-   wait_flip_event(data);
-   }
+   wait_flip_event(data);
}
}
 
-- 
2.25.1



Re: [Intel-gfx] [RFC v3 2/3] drm/i915: Update i915 uapi documentation

2022-06-08 Thread Matthew Auld
On Tue, 17 May 2022 at 19:32, Niranjana Vishwanathapura
 wrote:
>
> Add some missing i915 uapi documentation which the new
> i915 VM_BIND feature documentation will refer to.
>
> Signed-off-by: Niranjana Vishwanathapura 
> ---
>  include/uapi/drm/i915_drm.h | 153 +++-
>  1 file changed, 116 insertions(+), 37 deletions(-)
>
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index a2def7b27009..8c834a31b56f 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -751,9 +751,16 @@ typedef struct drm_i915_irq_wait {
>
>  /* Must be kept compact -- no holes and well documented */
>
> +/**
> + * typedef drm_i915_getparam_t - Driver parameter query structure.

This one looks funny in the rendered HTML for some reason, since it
doesn't seem to emit the @param and @value; I guess kernel-doc doesn't
really understand typedefs?

Maybe make this "struct drm_i915_getparam - Driver parameter query structure." ?

> + */
>  typedef struct drm_i915_getparam {
> +   /** @param: Driver parameter to query. */
> __s32 param;
> -   /*
> +
> +   /**
> +* @value: Address of memory where queried value should be put.
> +*
>  * WARNING: Using pointers instead of fixed-size u64 means we need to 
> write
>  * compat32 code. Don't repeat this mistake.
>  */
> @@ -1239,76 +1246,114 @@ struct drm_i915_gem_exec_object2 {
> __u64 rsvd2;
>  };
>
> +/**
> + * struct drm_i915_gem_exec_fence - An input or output fence for the execbuff

s/execbuff/execbuf/, at least that seems to be what we use elsewhere, AFAICT.

> + * ioctl.
> + *
> + * The request will wait for input fence to signal before submission.
> + *
> + * The returned output fence will be signaled after the completion of the
> + * request.
> + */
>  struct drm_i915_gem_exec_fence {
> -   /**
> -* User's handle for a drm_syncobj to wait on or signal.
> -*/
> +   /** @handle: User's handle for a drm_syncobj to wait on or signal. */
> __u32 handle;
>
> +   /**
> +* @flags: Supported flags are,

are:

> +*
> +* I915_EXEC_FENCE_WAIT:
> +* Wait for the input fence before request submission.
> +*
> +* I915_EXEC_FENCE_SIGNAL:
> +* Return request completion fence as output
> +*/
> +   __u32 flags;
>  #define I915_EXEC_FENCE_WAIT(1<<0)
>  #define I915_EXEC_FENCE_SIGNAL  (1<<1)
>  #define __I915_EXEC_FENCE_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SIGNAL << 1))
> -   __u32 flags;
>  };
>
> -/*
> - * See drm_i915_gem_execbuffer_ext_timeline_fences.
> - */
> -#define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 0
> -
> -/*
> +/**
> + * struct drm_i915_gem_execbuffer_ext_timeline_fences - Timeline fences
> + * for execbuff.
> + *
>   * This structure describes an array of drm_syncobj and associated points for
>   * timeline variants of drm_syncobj. It is invalid to append this structure 
> to
>   * the execbuf if I915_EXEC_FENCE_ARRAY is set.
>   */
>  struct drm_i915_gem_execbuffer_ext_timeline_fences {
> +#define DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES 0
> +   /** @base: Extension link. See struct i915_user_extension. */
> struct i915_user_extension base;
>
> /**
> -* Number of element in the handles_ptr & value_ptr arrays.
> +* @fence_count: Number of element in the @handles_ptr & @value_ptr

s/element/elements/

> +* arrays.
>  */
> __u64 fence_count;
>
> /**
> -* Pointer to an array of struct drm_i915_gem_exec_fence of length
> -* fence_count.
> +* @handles_ptr: Pointer to an array of struct drm_i915_gem_exec_fence
> +* of length @fence_count.
>  */
> __u64 handles_ptr;
>
> /**
> -* Pointer to an array of u64 values of length fence_count. Values
> -* must be 0 for a binary drm_syncobj. A Value of 0 for a timeline
> -* drm_syncobj is invalid as it turns a drm_syncobj into a binary one.
> +* @values_ptr: Pointer to an array of u64 values of length
> +* @fence_count.
> +* Values must be 0 for a binary drm_syncobj. A Value of 0 for a
> +* timeline drm_syncobj is invalid as it turns a drm_syncobj into a
> +* binary one.
>  */
> __u64 values_ptr;
>  };
>
> +/**
> + * struct drm_i915_gem_execbuffer2 - Structure for execbuff submission
> + */
>  struct drm_i915_gem_execbuffer2 {
> -   /**
> -* List of gem_exec_object2 structs
> -*/
> +   /** @buffers_ptr: Pointer to a list of gem_exec_object2 structs */
> __u64 buffers_ptr;
> +
> +   /** @buffer_count: Number of elements in @buffers_ptr array */
> __u32 buffer_count;
>
> -   /** Offset in the batchbuffer to start execution from. */
> +   /**
> +* @batch_start_offset: Offset in the batchbuffer to start execution
> +* from.

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/edid: expand on struct drm_edid usage (rev3)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/edid: expand on struct drm_edid usage (rev3)
URL   : https://patchwork.freedesktop.org/series/104309/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11733 -> Patchwork_104309v3


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/index.html

Participating hosts (45 -> 42)
--

  Additional (3): fi-bxt-dsi bat-atsm-1 bat-dg1-5 
  Missing(6): fi-rkl-11600 bat-dg2-8 bat-adlm-1 bat-dg2-9 bat-rpls-2 
fi-bdw-samus 

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_104309v3:

### IGT changes ###

 Suppressed 

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_pm_rpm@module-reload:
- {bat-atsm-1}:   NOTRUN -> [DMESG-WARN][1] +12 similar issues
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-atsm-1/igt@i915_pm_...@module-reload.html

  * igt@i915_selftest@live@mman:
- {fi-jsl-1}: [PASS][2] -> [INCOMPLETE][3]
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/fi-jsl-1/igt@i915_selftest@l...@mman.html
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/fi-jsl-1/igt@i915_selftest@l...@mman.html

  * igt@i915_selftest@live@workarounds:
- {bat-atsm-1}:   NOTRUN -> [DMESG-FAIL][4]
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-atsm-1/igt@i915_selftest@l...@workarounds.html

  
Known issues


  Here are the changes found in Patchwork_104309v3 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@fbdev@nullptr:
- bat-dg1-5:  NOTRUN -> [SKIP][5] ([i915#2582]) +4 similar issues
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@fb...@nullptr.html

  * igt@gem_huc_copy@huc-copy:
- fi-bxt-dsi: NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#2190])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/fi-bxt-dsi/igt@gem_huc_c...@huc-copy.html

  * igt@gem_lmem_swapping@parallel-random-engines:
- fi-bxt-dsi: NOTRUN -> [SKIP][7] ([fdo#109271] / [i915#4613]) +3 
similar issues
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/fi-bxt-dsi/igt@gem_lmem_swapp...@parallel-random-engines.html

  * igt@gem_mmap@basic:
- bat-dg1-5:  NOTRUN -> [SKIP][8] ([i915#4083])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@gem_m...@basic.html

  * igt@gem_tiled_blits@basic:
- fi-bxt-dsi: NOTRUN -> [SKIP][9] ([fdo#109271]) +13 similar issues
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/fi-bxt-dsi/igt@gem_tiled_bl...@basic.html
- bat-dg1-5:  NOTRUN -> [SKIP][10] ([i915#4077]) +2 similar issues
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@gem_tiled_bl...@basic.html

  * igt@gem_tiled_pread_basic:
- bat-dg1-5:  NOTRUN -> [SKIP][11] ([i915#4079]) +1 similar issue
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@gem_tiled_pread_basic.html

  * igt@i915_pm_backlight@basic-brightness:
- bat-dg1-5:  NOTRUN -> [SKIP][12] ([i915#1155])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@i915_pm_backli...@basic-brightness.html

  * igt@i915_selftest@live@hangcheck:
- fi-snb-2600:[PASS][13] -> [INCOMPLETE][14] ([i915#3921])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/fi-snb-2600/igt@i915_selftest@l...@hangcheck.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/fi-snb-2600/igt@i915_selftest@l...@hangcheck.html
- bat-dg1-6:  [PASS][15] -> [DMESG-FAIL][16] ([i915#4494] / 
[i915#4957])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html

  * igt@i915_suspend@basic-s2idle-without-i915:
- bat-dg1-5:  NOTRUN -> [INCOMPLETE][17] ([i915#6011])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@i915_susp...@basic-s2idle-without-i915.html

  * igt@kms_addfb_basic@basic-x-tiled-legacy:
- bat-dg1-5:  NOTRUN -> [SKIP][18] ([i915#4212]) +7 similar issues
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@kms_addfb_ba...@basic-x-tiled-legacy.html

  * igt@kms_addfb_basic@basic-y-tiled-legacy:
- bat-dg1-5:  NOTRUN -> [SKIP][19] ([i915#4215])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/bat-dg1-5/igt@kms_addfb_ba...@basic-y-tiled-legacy.html

  * igt@kms_busy@basic:
- bat-dg1-5:  

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: More PVC+DG2 workarounds

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915: More PVC+DG2 workarounds
URL   : https://patchwork.freedesktop.org/series/104866/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11733_full -> Patchwork_104866v1_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 13)
--

  No changes in participating hosts

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_104866v1_full:

### IGT changes ###

 Possible regressions 

  * 
{igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-a-hdmi-a-3}
 (NEW):
- {shard-dg1}:NOTRUN -> [SKIP][1] +3 similar issues
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-18/igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0...@pipe-a-hdmi-a-3.html

  
 Suppressed 

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@kms_cursor_legacy@pipe-a-torture-move:
- {shard-dg1}:NOTRUN -> [WARN][2]
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-13/igt@kms_cursor_leg...@pipe-a-torture-move.html

  * igt@kms_draw_crc@draw-method-rgb565-pwrite-xtiled:
- {shard-dg1}:NOTRUN -> [FAIL][3] +2 similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-13/igt@kms_draw_...@draw-method-rgb565-pwrite-xtiled.html

  
New tests
-

  New tests have been introduced between CI_DRM_11733_full and 
Patchwork_104866v1_full:

### New IGT tests (4) ###

  * 
igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-a-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.03] s

  * 
igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-b-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.03] s

  * 
igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-c-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.03] s

  * 
igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-d-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.03] s

  

Known issues


  Here are the changes found in Patchwork_104866v1_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_ctx_isolation@preservation-s3@vcs0:
- shard-kbl:  [PASS][4] -> [DMESG-WARN][5] ([i915#180]) +6 similar 
issues
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl4/igt@gem_ctx_isolation@preservation...@vcs0.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-kbl6/igt@gem_ctx_isolation@preservation...@vcs0.html

  * igt@gem_eio@kms:
- shard-tglb: [PASS][6] -> [FAIL][7] ([i915#5784])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb6/igt@gem_...@kms.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-tglb2/igt@gem_...@kms.html

  * igt@gem_exec_balancer@parallel-bb-first:
- shard-iclb: [PASS][8] -> [SKIP][9] ([i915#4525])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-iclb1/igt@gem_exec_balan...@parallel-bb-first.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-iclb3/igt@gem_exec_balan...@parallel-bb-first.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
- shard-tglb: [PASS][10] -> [FAIL][11] ([i915#2842])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb5/igt@gem_exec_fair@basic-none-sh...@rcs0.html
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-tglb1/igt@gem_exec_fair@basic-none-sh...@rcs0.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
- shard-apl:  [PASS][12] -> [FAIL][13] ([i915#2842])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-apl1/igt@gem_exec_fair@basic-none-s...@rcs0.html
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-apl8/igt@gem_exec_fair@basic-none-s...@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
- shard-kbl:  [PASS][14] -> [FAIL][15] ([i915#2842]) +1 similar 
issue
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl6/igt@gem_exec_fair@basic-n...@rcs0.html
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-kbl7/igt@gem_exec_fair@basic-n...@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
- shard-iclb: NOTRUN -> [FAIL][16] ([i915#2842])
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-iclb1/igt@gem_exec_fair@basic-p...@vcs1.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
- shard-glk:  [PASS][17] -> [FAIL][18] ([i915#2842]) +1 similar 
issue
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-glk8/igt@gem_exec_fair@basic-throt...@rcs0.htm

Re: [Intel-gfx] [PATCH i-g-t 9/9] lib/i915: request CPU_ACCESS for fb objects

2022-06-08 Thread Das, Nirmoy
Patch 6 is missing a commit message; with that and the GitLab.Pipeline
warning fixed, the series LGTM.


Reviewed-by: Nirmoy Das 

On 5/25/2022 8:37 PM, Matthew Auld wrote:

kms_frontbuffer_tracking@basic falls over if the fb needs to be migrated
from non-mappable device memory, to the mappable part, due to being
temporarily pinned for scanout, when hitting the CPU fault handler,
which just gives us SIGBUS. If the device has a small BAR let's attempt
to use the mappable portion, if possible.

Signed-off-by: Matthew Auld 
Cc: Thomas Hellström 
---
  lib/ioctl_wrappers.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/ioctl_wrappers.c b/lib/ioctl_wrappers.c
index 09eb3ce7..7713e78b 100644
--- a/lib/ioctl_wrappers.c
+++ b/lib/ioctl_wrappers.c
@@ -635,7 +635,8 @@ uint32_t gem_buffer_create_fb_obj(int fd, uint64_t size)
uint32_t handle;
  
  	if (gem_has_lmem(fd))

-   handle = gem_create_in_memory_regions(fd, size, REGION_LMEM(0));
+   handle = gem_create_with_cpu_access_in_memory_regions(fd, size,
+ REGION_LMEM(0));
else
handle = gem_create(fd, size);
  


Re: [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/xehp: Correct steering initialization

2022-06-08 Thread Matt Roper
On Wed, Jun 08, 2022 at 08:23:34AM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915/xehp: Correct steering initialization
> URL   : https://patchwork.freedesktop.org/series/104842/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_11732_full -> Patchwork_104842v1_full
> 
> 
> Summary
> ---
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_104842v1_full absolutely need 
> to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_104842v1_full, please notify your bug team to allow 
> them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   
> 
> Participating hosts (10 -> 13)
> --
> 
>   Additional (3): shard-rkl shard-dg1 shard-tglu 
> 
> Possible new issues
> ---
> 
>   Here are the unknown changes that may have been introduced in 
> Patchwork_104842v1_full:
> 
> ### IGT changes ###
> 
>  Possible regressions 
> 
>   * igt@i915_module_load@reload-with-fault-injection:
> - shard-snb:  [PASS][1] -> [DMESG-WARN][2]
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-snb6/igt@i915_module_l...@reload-with-fault-injection.html
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-snb2/igt@i915_module_l...@reload-with-fault-injection.html

Filesystem panic; not related to i915.

> 
>   * igt@i915_pm_sseu@full-enable:
> - shard-skl:  NOTRUN -> [FAIL][3]
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-skl7/igt@i915_pm_s...@full-enable.html

This Xe_HP change wouldn't have affected the behavior of SKL.  I don't
see a preexisting bug that matches this though.

> 
>   * igt@kms_flip@flip-vs-panning-vs-hang:
> - shard-kbl:  NOTRUN -> [INCOMPLETE][4]
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-kbl1/igt@kms_f...@flip-vs-panning-vs-hang.html

  <2>[  427.216493] softdog: Initiating panic
  <0>[  427.216501] Kernel panic - not syncing: Software Watchdog Timer expired

It doesn't seem like anything was actually hung before the watchdog
fired though.  Not sure what the cause is, but not related to this
patch.

Patch applied to drm-intel-gt-next.  Thanks Bala for the review.


Matt
> 
>   
> Known issues
> 
> 
>   Here are the changes found in Patchwork_104842v1_full that come from known 
> issues:
> 
> ### CI changes ###
> 
>  Issues hit 
> 
>   * boot:
> - shard-glk:  ([PASS][5], [PASS][6], [PASS][7], [PASS][8], 
> [PASS][9], [PASS][10], [PASS][11], [PASS][12], [PASS][13], [PASS][14], 
> [PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19], [PASS][20], 
> [PASS][21], [PASS][22], [PASS][23], [PASS][24], [PASS][25], [PASS][26], 
> [PASS][27], [PASS][28]) -> ([PASS][29], [PASS][30], [PASS][31], [PASS][32], 
> [PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], 
> [PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43], [PASS][44], 
> [PASS][45], [PASS][46], [PASS][47], [PASS][48], [FAIL][49], [PASS][50], 
> [PASS][51], [PASS][52], [PASS][53]) ([i915#4392])
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
>[6]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
>[7]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
>[8]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
>[9]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
>[10]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
>[11]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
>[12]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
>[13]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
>[14]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
>[15]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
>[16]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
>[17]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
>[18]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
>[19]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
>[20]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
>[21]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
>[22]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
>[23]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
>[24]: 
> https://intel-gf

Re: [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: More PVC+DG2 workarounds

2022-06-08 Thread Matt Roper
On Wed, Jun 08, 2022 at 12:24:04PM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915: More PVC+DG2 workarounds
> URL   : https://patchwork.freedesktop.org/series/104866/
> State : success
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_11733_full -> Patchwork_104866v1_full
> 
> 
> Summary
> ---
> 
>   **SUCCESS**
> 
>   No regressions found.

Applied to drm-intel-gt-next.  Thanks Anshuman for the review.


Matt

> 
>   
> 
> Participating hosts (13 -> 13)
> --
> 
>   No changes in participating hosts
> 
> Possible new issues
> ---
> 
>   Here are the unknown changes that may have been introduced in 
> Patchwork_104866v1_full:
> 
> ### IGT changes ###
> 
>  Possible regressions 
> 
>   * 
> {igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-a-hdmi-a-3}
>  (NEW):
> - {shard-dg1}:NOTRUN -> [SKIP][1] +3 similar issues
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-18/igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0...@pipe-a-hdmi-a-3.html
> 
>   
>  Suppressed 
> 
>   The following results come from untrusted machines, tests, or statuses.
>   They do not affect the overall result.
> 
>   * igt@kms_cursor_legacy@pipe-a-torture-move:
> - {shard-dg1}:NOTRUN -> [WARN][2]
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-13/igt@kms_cursor_leg...@pipe-a-torture-move.html
> 
>   * igt@kms_draw_crc@draw-method-rgb565-pwrite-xtiled:
> - {shard-dg1}:NOTRUN -> [FAIL][3] +2 similar issues
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-dg1-13/igt@kms_draw_...@draw-method-rgb565-pwrite-xtiled.html
> 
>   
> New tests
> -
> 
>   New tests have been introduced between CI_DRM_11733_full and 
> Patchwork_104866v1_full:
> 
> ### New IGT tests (4) ###
> 
>   * 
> igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-a-hdmi-a-3:
> - Statuses : 1 skip(s)
> - Exec time: [0.03] s
> 
>   * 
> igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-b-hdmi-a-3:
> - Statuses : 1 skip(s)
> - Exec time: [0.03] s
> 
>   * 
> igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-c-hdmi-a-3:
> - Statuses : 1 skip(s)
> - Exec time: [0.03] s
> 
>   * 
> igt@kms_plane_scaling@plane-downscale-with-modifiers-factor-0-25@pipe-d-hdmi-a-3:
> - Statuses : 1 skip(s)
> - Exec time: [0.03] s
> 
>   
> 
> Known issues
> 
> 
>   Here are the changes found in Patchwork_104866v1_full that come from known 
> issues:
> 
> ### IGT changes ###
> 
>  Issues hit 
> 
>   * igt@gem_ctx_isolation@preservation-s3@vcs0:
> - shard-kbl:  [PASS][4] -> [DMESG-WARN][5] ([i915#180]) +6 
> similar issues
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl4/igt@gem_ctx_isolation@preservation...@vcs0.html
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-kbl6/igt@gem_ctx_isolation@preservation...@vcs0.html
> 
>   * igt@gem_eio@kms:
> - shard-tglb: [PASS][6] -> [FAIL][7] ([i915#5784])
>[6]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb6/igt@gem_...@kms.html
>[7]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-tglb2/igt@gem_...@kms.html
> 
>   * igt@gem_exec_balancer@parallel-bb-first:
> - shard-iclb: [PASS][8] -> [SKIP][9] ([i915#4525])
>[8]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-iclb1/igt@gem_exec_balan...@parallel-bb-first.html
>[9]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-iclb3/igt@gem_exec_balan...@parallel-bb-first.html
> 
>   * igt@gem_exec_fair@basic-none-share@rcs0:
> - shard-tglb: [PASS][10] -> [FAIL][11] ([i915#2842])
>[10]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb5/igt@gem_exec_fair@basic-none-sh...@rcs0.html
>[11]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-tglb1/igt@gem_exec_fair@basic-none-sh...@rcs0.html
> 
>   * igt@gem_exec_fair@basic-none-solo@rcs0:
> - shard-apl:  [PASS][12] -> [FAIL][13] ([i915#2842])
>[12]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-apl1/igt@gem_exec_fair@basic-none-s...@rcs0.html
>[13]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-apl8/igt@gem_exec_fair@basic-none-s...@rcs0.html
> 
>   * igt@gem_exec_fair@basic-none@rcs0:
> - shard-kbl:  [PASS][14] -> [FAIL][15] ([i915#2842]) +1 similar 
> issue
>[14]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl6/igt@gem_exec_fair@basic-n...@rcs0.html
>[15]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104866v1/shard-kbl7/igt@gem_exec_fair@basic-n...@rcs0.html
> 
>   * igt@gem_exec_fair@basic-pace@vcs1:
> 

Re: [Intel-gfx] [RFC PATCH v2 00/27] DRM.debug on DYNAMIC_DEBUG, add trace events

2022-06-08 Thread Daniel Vetter
On Mon, Jun 06, 2022 at 08:59:36AM -0600, jim.cro...@gmail.com wrote:
> On Wed, May 25, 2022 at 9:02 AM Daniel Vetter  wrote:
> 
> > On Mon, May 16, 2022 at 04:56:13PM -0600, Jim Cromie wrote:
> > > DRM.debug API is 23 macros, issuing 10 exclusive categories of debug
> > > messages.  By rough count, they are used 5140 times in the kernel.
> > > These all call drm_dbg or drm_devdbg, which call drm_debug_enabled(),
> > > which checks bits in global __drm_debug.  Some of these are page-flips
> > > and vblanks, and get called often.
> > >
> > > DYNAMIC_DEBUG (with CONFIG_JUMP_LABEL) is built to avoid this kind of
> > > work, with NOOPd jump/callsites.
> > >
> > > This patchset is RFC because:
> > > - it touches 2.5 subsystems: dyndbg, drm, tracefs (new events)
> > > - dyndbg class support is built for drm, needs it for validation
> > > - new api, used by drm
> > > - big memory impact, with 5100 new pr-debug callsites.
> > > - drm class bikeshedding opportunities
> > > - others, names etc.
> >
> > Thanks a lot for keeping on pushing this!
> 
> 
> > >
> > > DYNAMIC_DEBUG:
> > >
> 
> 
> 
> > > RFC:
> > >
> > > dynamic_debug_register_classes() cannot act early enough to be in
> > > effect at module-load.  So this will not work as you'd reasonably
> > > expect:
> > >
> > >   modprobe test_dynamic_debug dyndbg='+pfm; class FOO +pfmlt'
> > >
> > > The 1st query:+pfm will be enabled during load, but in the 2nd query,
> > > "class FOO" will be unknown at load time.  Early class enablement
> > > would be nice.  DYNAMIC_DEBUG_CLASSES is a static initializer, which
> > > is certainly early enough, but Im missing a trick, suggestions?
> >
> > So maybe I'm just totally overloading this work here so feel free to
> > ignore or postpone, but: Could we do the dynamic_debug_register_classes()
> > automatically at module load as a new special section? And then throw in a
> > bit of kbuild so that in a given subsystem every driver gets the same
> > class names by default and everything would just work, without having to
> > sprinkle calls to dynamic_debug_register_classes() all over the place?
> >
> 
> This is now done; Ive added __dyndbg_classes section.
> load_module() now grabs it from the .ko
> and ddebug_add_module() attaches it to the module's ddebug_table record.
> For builtins, dynamic_debug_init feeds the builtin class-maps to
> ddebug_add_module
> 
> bash-5.1# modprobe test_dynamic_debug dyndbg="class FOO +p"
> [   88.374722] dyndbg: class[0]: nm:test_dynamic_debug base:20 len:7 ty:1
> [   88.375158] dyndbg:  0: EMERG
> [   88.375345] dyndbg:  1: DANGER
> [   88.375540] dyndbg:  2: ERROR
> [   88.375726] dyndbg:  3: WARNING
> [   88.375930] dyndbg:  4: NOTICE
> [   88.376130] dyndbg:  5: INFO
> [   88.376310] dyndbg:  6: DEBUG
> [   88.376499] dyndbg: class[1]: nm:test_dynamic_debug base:12 len:3 ty:1
> [   88.376903] dyndbg:  0: ONE
> [   88.377079] dyndbg:  1: TWO
> [   88.377253] dyndbg:  2: THREE
> [   88.377441] dyndbg: class[2]: nm:test_dynamic_debug base:8 len:3 ty:0
> [   88.377837] dyndbg:  0: bing
> [   88.378022] dyndbg:  1: bong
> [   88.378203] dyndbg:  2: boom
> [   88.378387] dyndbg: class[3]: nm:test_dynamic_debug base:4 len:3 ty:0
> [   88.378800] dyndbg:  0: Foo
> [   88.378986] dyndbg:  1: Bar
> [   88.379167] dyndbg:  2: Buzz
> [   88.379348] dyndbg: class[4]: nm:test_dynamic_debug base:0 len:3 ty:0
> [   88.379757] dyndbg:  0: FOO
> [   88.379938] dyndbg:  1: BAR
> [   88.380136] dyndbg:  2: BUZZ
> [   88.380410] dyndbg: module:test_dynamic_debug attached 5 classes
> [   88.380881] dyndbg:  24 debug prints in module test_dynamic_debug
> [   88.381315] dyndbg: module: test_dynamic_debug dyndbg="class FOO +p"
> [   88.381714] dyndbg: query 0: "class FOO +p" mod:test_dynamic_debug
> [   88.382109] dyndbg: split into words: "class" "FOO" "+p"
> [   88.382445] dyndbg: op='+'
> [   88.382616] dyndbg: flags=0x1
> [   88.382802] dyndbg: *flagsp=0x1 *maskp=0x
> [   88.383101] dyndbg: parsed: func="" file="" module="test_dynamic_debug"
> format="" lineno=0-0 class=FOO
> [   88.383740] dyndbg: applied: func="" file="" module="test_dynamic_debug"
> format="" lineno=0-0 class=FOO
> [   88.384324] dyndbg: processed 1 queries, with 2 matches, 0 errs
> bash-5.1#
> 
> so it's working at module-load time.

Awesome!

> > For the entire class approach, did you spot another subsystem that could
> > benefit from this and maybe make a more solid case that this is something
> > good?
> >
> 
> I had been working on the premise that ~4k drm*dbg callsites was a good
> case.

Oh I'm happy with just drm, but occasionally we've done stuff in drm that
the wider kernel community found a bit silly. So bit more acks/validation
from outside the dri-devel echo chamber would be great, whatever form it
is.

> verbosity-levels - with x missing.
> 
> the next revision adds something, which "kinda works".
> But I think I'll rip it out, and do this simpler approach instead:
> 
> implement a verbose/levels  param & callback, which take

[Intel-gfx] [CI] drm/i915/pvc: Add register steering

2022-06-08 Thread Matt Roper
Ponte Vecchio no longer has MSLICE or LNCF steering, but the bspec does
document several new types of multicast register ranges.  Fortunately,
most of the different MCR types all provide valid values at instance
(0,0) so there's no need to read fuse registers and calculate a
non-terminated instance.  We'll lump all of those range types (BSLICE,
HALFBSLICE, TILEPSMI, CC, and L3BANK) into a single category called
"INSTANCE0" to keep things simple.  We'll also perform explicit steering
for each of these multicast register types, even if the implicit
steering setup for COMPUTE/DSS ranges would have worked too; this is
based on guidance from our hardware architects who suggested that we
move away from implicit steering and start explicitly steer all MCR
register accesses on modern platforms (we'll work on transitioning
COMPUTE/DSS to explicit steering in the future).

Note that there's one additional MCR range type defined in the bspec
(SQIDI) that we don't handle here.  Those ranges use a different
steering control register that we never touch; since instance 0 is also
always a valid setting there, we can just ignore those ranges.

Finally, we'll rename the HAS_MSLICES() macro to HAS_MSLICE_STEERING().
PVC hardware still has units referred to as mslices, but there's no
register steering based on mslice for this platform.

v2:
 - Rebase on other recent changes
 - Swap two table rows to keep table sorted & easy to read.  (Harish)

Bspec: 67609
Signed-off-by: Matt Roper 
Reviewed-by: Harish Chegondi 
---
 drivers/gpu/drm/i915/gt/intel_gt.c  | 50 ++---
 drivers/gpu/drm/i915/gt/intel_gt_types.h|  7 +++
 drivers/gpu/drm/i915/gt/intel_workarounds.c | 16 +++
 drivers/gpu/drm/i915/i915_drv.h |  3 +-
 drivers/gpu/drm/i915/i915_pci.c |  3 +-
 drivers/gpu/drm/i915/intel_device_info.h|  2 +-
 6 files changed, 71 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index ddfb98f70489..f33290358c51 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -106,6 +106,7 @@ static const char * const intel_steering_types[] = {
"L3BANK",
"MSLICE",
"LNCF",
+   "INSTANCE 0",
 };
 
 static const struct intel_mmio_range icl_l3bank_steering_table[] = {
@@ -133,6 +134,27 @@ static const struct intel_mmio_range dg2_lncf_steering_table[] = {
{},
 };
 
+/*
+ * We have several types of MCR registers on PVC where steering to (0,0)
+ * will always provide us with a non-terminated value.  We'll stick them
+ * all in the same table for simplicity.
+ */
+static const struct intel_mmio_range pvc_instance0_steering_table[] = {
+   { 0x004000, 0x004AFF }, /* HALF-BSLICE */
+   { 0x008800, 0x00887F }, /* CC */
+   { 0x008A80, 0x008AFF }, /* TILEPSMI */
+   { 0x00B000, 0x00B0FF }, /* HALF-BSLICE */
+   { 0x00B100, 0x00B3FF }, /* L3BANK */
+   { 0x00C800, 0x00CFFF }, /* HALF-BSLICE */
+   { 0x00D800, 0x00D8FF }, /* HALF-BSLICE */
+   { 0x00DD00, 0x00DDFF }, /* BSLICE */
+   { 0x00E900, 0x00E9FF }, /* HALF-BSLICE */
+   { 0x00EC00, 0x00EEFF }, /* HALF-BSLICE */
+   { 0x00F000, 0x00FFFF }, /* HALF-BSLICE */
+   { 0x024180, 0x0241FF }, /* HALF-BSLICE */
+   {},
+};
+
 int intel_gt_init_mmio(struct intel_gt *gt)
 {
struct drm_i915_private *i915 = gt->i915;
@@ -146,7 +168,7 @@ int intel_gt_init_mmio(struct intel_gt *gt)
 * An mslice is unavailable only if both the meml3 for the slice is
 * disabled *and* all of the DSS in the slice (quadrant) are disabled.
 */
-   if (HAS_MSLICES(i915)) {
+   if (HAS_MSLICE_STEERING(i915)) {
gt->info.mslice_mask =

intel_slicemask_from_xehp_dssmask(gt->info.sseu.subslice_mask,
  GEN_DSS_PER_MSLICE);
@@ -158,7 +180,9 @@ int intel_gt_init_mmio(struct intel_gt *gt)
drm_warn(&i915->drm, "mslice mask all zero!\n");
}
 
-   if (IS_DG2(i915)) {
+   if (IS_PONTEVECCHIO(i915)) {
+   gt->steering_table[INSTANCE0] = pvc_instance0_steering_table;
+   } else if (IS_DG2(i915)) {
gt->steering_table[MSLICE] = xehpsdv_mslice_steering_table;
gt->steering_table[LNCF] = dg2_lncf_steering_table;
} else if (IS_XEHPSDV(i915)) {
@@ -172,7 +196,11 @@ int intel_gt_init_mmio(struct intel_gt *gt)
GEN10_L3BANK_MASK;
if (!gt->info.l3bank_mask) /* should be impossible! */
drm_warn(&i915->drm, "L3 bank mask is all zero!\n");
-   } else if (HAS_MSLICES(i915)) {
+   } else if (GRAPHICS_VER(i915) >= 11) {
+   /*
+* We expect all modern platforms to have at least some
+* type o
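For readers following along: the steering tables in the patch above are scanned linearly until a range containing the register offset is found, with a zeroed sentinel entry terminating the table. A minimal standalone sketch of that lookup, mirroring the intel_mmio_range layout but not the driver's actual helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors struct intel_mmio_range from intel_gt_types.h */
struct intel_mmio_range {
	uint32_t start;
	uint32_t end;
};

/* Return true if @offset falls in one of the table's ranges.  The
 * table is terminated by a zeroed sentinel entry, as in the patch. */
static bool range_table_contains(const struct intel_mmio_range *table,
				 uint32_t offset)
{
	for (; table->start || table->end; table++)
		if (offset >= table->start && offset <= table->end)
			return true;
	return false;
}

/* A few rows borrowed from pvc_instance0_steering_table above. */
static const struct intel_mmio_range example_table[] = {
	{ 0x004000, 0x004AFF },	/* HALF-BSLICE */
	{ 0x008800, 0x00887F },	/* CC */
	{ 0x00DD00, 0x00DDFF },	/* BSLICE */
	{},
};
```

A register in a matching range gets the table's steering type applied; anything outside all tables falls back to implicit steering.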

Re: [Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost

2022-06-08 Thread Dixit, Ashutosh
On Tue, 07 Jun 2022 16:15:19 -0700, John Harrison wrote:
>
> On 6/7/2022 15:29, Dixit, Ashutosh wrote:
> > On Sat, 14 May 2022 23:05:06 -0700, Vinay Belgaumkar wrote:
> >> SLPC min/max frequency updates require H2G calls. We are seeing
> >> timeouts when GuC channel is backed up and it is unable to respond
> >> in a timely fashion causing warnings and affecting CI.
> >>
> >> This is seen when waitboosting happens during a stress test.
> >> This patch updates the waitboost path to use a non-blocking
> >> H2G call instead, which returns as soon as the message is
> >> successfully transmitted.
> > Overall I think this patch is trying to paper over problems in the blocking
> > H2G CT interface (specifically the 1 second timeout in
> > wait_for_ct_request_update()). So I think we should address that problem in
> > the interface directly rather than having each client (SLPC and any future
> > client) work around the problem. Following points:
> >
> > 1. This patch seems to assume that it is 'ok' to ignore the return code
> > from FW for a waitboost request (arguing waitboost is best effort so
> > it's ok to 'fire and forget'). But the return code is still useful
> > e.g. in cases where we see performance issues and want to go back and
> > investigate if FW rejected any waitboost requests.
>
> You still get errors reported in the GuC log. Indeed, some errors (or at
> least error reasons) are only visible in the log not in the return code.

OK, so we at least have this method for debug available.

> > 2. We are already seeing that a 1 second timeout is not sufficient. So why
> > not simply increase that timeout?
> >
> > 3. In fact if we are saying that the CT interface is a "reliable" interface
> > (implying no message loss), to ensure reliability that timeout should
> > not simply be increased, it should be made "infinite" (in quotes).
> >
> > 4. Maybe it would have been best to not have a "blocking" H2G interface at
> > all (with the wait in wait_for_ct_request_update()). Just have an
> > asynchronous interface (which mirrors the actual interface between FW
> > and i915) in which clients register callbacks which are invoked when FW
> > responds. If this is too big a change we can probably continue with the
> > current blocking interface after increasing the timeout as mentioned
> > above.
> >
> > 5. Finally, the waitboost request is just the most likely to get stuck at
> > the back of a full CT queue since it happens during normal
> > operation. Actually any request, say one initiated from sysfs, can also
> > get similarly stuck at the back of a full queue. So any solution should
> > also address that situation (where the return code is needed and
> > similarly for a future client of the "blocking" (REQUEST/RESPONSE)
> > interface).
> The blocking interface is only intended for init time operations, not
> runtime.

In that case we should probably have code to enforce this in i915.

> Stuff where the operation is meant to be synchronous and the KMD
> should not proceed until it has an ack back from the GuC that the update
> has taken place. All runtime operations are expected to be asynchronous. If
> a response is required, then it should be sent via an async
> callback. E.g. context de-registration is a 'fire and forget' H2G call but
> gets a 'deregistration complete' G2H notification when it is safe for the
> KMD to free up the associated storage.

At present all GuC interactions in intel_guc_slpc.c (in i915) do *not*
follow this. They use the REQUEST/RESPONSE FW interface which is pushed
through the blocking H2G CT interface in i915. If we are serious about this,
it needs a GuC FW change to use the bi-directional EVENTs used in the
asynchronous interface (with corresponding changes in intel_guc_slpc.c).

> There is an 'errors only' H2G mechanism. That will not send an ack back in
> the case of a successful H2G but will send back an error notification in
> the case of a failure. All async H2Gs should really be using that
> mechanism. I think Michal W did post a patch for it and I was meant to be
reviewing it but it dropped off my radar due to other higher priorities.

These I believe are referred to as FAST_REQUESTs in GuC FW. That success
is not communicated back to the KMD might be an issue in cases where the
KMD needs to know whether a particular operation was successful (such as
for operations initiated via sysfs).

Thanks.
--
Ashutosh


[Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/pvc: Add register steering (rev2)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/pvc: Add register steering (rev2)
URL   : https://patchwork.freedesktop.org/series/104691/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.




[Intel-gfx] ✓ Fi.CI.IGT: success for drm/edid: expand on struct drm_edid usage (rev3)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/edid: expand on struct drm_edid usage (rev3)
URL   : https://patchwork.freedesktop.org/series/104309/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11733_full -> Patchwork_104309v3_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 13)
--

  No changes in participating hosts

Known issues


  Here are the changes found in Patchwork_104309v3_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_ctx_persistence@legacy-engines-hostile:
- shard-snb:  NOTRUN -> [SKIP][1] ([fdo#109271] / [i915#1099])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-snb5/igt@gem_ctx_persiste...@legacy-engines-hostile.html

  * igt@gem_eio@in-flight-contexts-10ms:
- shard-tglb: [PASS][2] -> [TIMEOUT][3] ([i915#3063]) +1 similar 
issue
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb7/igt@gem_...@in-flight-contexts-10ms.html
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-tglb3/igt@gem_...@in-flight-contexts-10ms.html

  * igt@gem_eio@unwedge-stress:
- shard-skl:  [PASS][4] -> [TIMEOUT][5] ([i915#3063])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-skl2/igt@gem_...@unwedge-stress.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-skl1/igt@gem_...@unwedge-stress.html

  * igt@gem_exec_balancer@parallel-contexts:
- shard-iclb: [PASS][6] -> [SKIP][7] ([i915#4525])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-iclb1/igt@gem_exec_balan...@parallel-contexts.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-iclb3/igt@gem_exec_balan...@parallel-contexts.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
- shard-iclb: [PASS][8] -> [FAIL][9] ([i915#2842])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-iclb3/igt@gem_exec_fair@basic-none-sh...@rcs0.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-iclb6/igt@gem_exec_fair@basic-none-sh...@rcs0.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
- shard-apl:  [PASS][10] -> [FAIL][11] ([i915#2842])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-apl1/igt@gem_exec_fair@basic-none-s...@rcs0.html
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-apl7/igt@gem_exec_fair@basic-none-s...@rcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
- shard-tglb: [PASS][12] -> [FAIL][13] ([i915#2842])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-tglb5/igt@gem_exec_fair@basic-pace-sh...@rcs0.html
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-tglb3/igt@gem_exec_fair@basic-pace-sh...@rcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-kbl:  [PASS][14] -> [FAIL][15] ([i915#2842]) +2 similar 
issues
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl4/igt@gem_exec_fair@basic-pace-s...@rcs0.html
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-kbl4/igt@gem_exec_fair@basic-pace-s...@rcs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
- shard-glk:  [PASS][16] -> [FAIL][17] ([i915#2842]) +1 similar 
issue
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-glk8/igt@gem_exec_fair@basic-throt...@rcs0.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-glk1/igt@gem_exec_fair@basic-throt...@rcs0.html

  * igt@gem_lmem_swapping@basic:
- shard-skl:  NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#4613])
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-skl4/igt@gem_lmem_swapp...@basic.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
- shard-glk:  [PASS][19] -> [FAIL][20] ([i915#644])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-glk7/igt@gem_pp...@flink-and-close-vma-leak.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-glk8/igt@gem_pp...@flink-and-close-vma-leak.html

  * igt@gem_softpin@noreloc-s3:
- shard-kbl:  [PASS][21] -> [DMESG-WARN][22] ([i915#180]) +1 
similar issue
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-kbl7/igt@gem_soft...@noreloc-s3.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-kbl6/igt@gem_soft...@noreloc-s3.html

  * igt@gem_workarounds@suspend-resume-context:
- shard-skl:  [PASS][23] -> [INCOMPLETE][24] ([i915#4939] / 
[i915#5129])
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11733/shard-skl1/igt@gem_workarou...@suspend-resume-context.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104309v3/shard-skl4/igt@gem_workarou...@suspen

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Niranjana Vishwanathapura

On Wed, Jun 08, 2022 at 08:34:36AM +0100, Tvrtko Ursulin wrote:


On 07/06/2022 22:25, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:42:08AM +0100, Tvrtko Ursulin wrote:


On 03/06/2022 07:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana 
Vishwanathapura wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
 wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:

On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
 Also add new uapi and documentation as per review comments
 from Daniel.

Signed-off-by: Niranjana Vishwanathapura



---
  Documentation/gpu/rfc/i915_vm_bind.h | 399

+++

  1 file changed, 399 insertions(+)
  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h

b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND   57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older

execbuff mode of binding.
+ * In VM_BIND mode, execbuff ioctl will not accept 
any

execlist (ie., the

+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES 
extension

must be provided

+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of

&drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. 
I915_EXEC_USE_EXTENSIONS

flag must always be

+ * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, 
batch_start_offset and

batch_len fields
+ * of struct drm_i915_gem_execbuffer2 are also not 
used

and must be 0.

+ */


From that description, it seems we have:

struct drm_i915_gem_execbuffer2 {
    __u64 buffers_ptr;  -> must be 0 (new)
    __u32 buffer_count; -> must be 0 (new)
    __u32 batch_start_offset;   -> must be 0 (new)
    __u32 batch_len;    -> must be 0 (new)
    __u32 DR1;  -> must be 0 (old)
    __u32 DR4;  -> must be 0 (old)
    __u32 num_cliprects; (fences)   -> must be 0 
since

using extensions

    __u64 cliprects_ptr; (fences, extensions) ->

contains an actual pointer!
    __u64 flags;    -> some flags 
must be 0 (new)

    __u64 rsvd1; (context info) -> repurposed field (old)
    __u64 rsvd2;    -> unused
};

Based on that, why can't we just get 
drm_i915_gem_execbuffer3 instead
of adding even more complexity to an already abused 
interface? While

the Vulkan-like extension thing is really nice, I don't think what
we're doing here is extending the ioctl usage, we're completely
changing how the base struct should be interpreted 
based on

how the VM

was created (which is an entirely different ioctl).

From Rusty Russell's API Design grading, drm_i915_gem_execbuffer2 is
already at -6 without these changes. I think after 
vm_bind we'll need

to create a -11 entry just to deal with this ioctl.



The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think we 
need to be careful
with that as that goes beyond the VM_BIND support, 
including any future

requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really 
that different

than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy 
bits, leave the

older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we share code (where it even makes sense, probably
request setup/

[Intel-gfx] [PATCH] drm/i915/display: Fix handling of enable_psr parameter

2022-06-08 Thread José Roberto de Souza
Commit 3cf050762534 ("drm/i915/bios: Split VBT data into per-panel vs.
global parts") cause PSR to be disabled when enable_psr has the
default value and there is at least one DP port that do not supports
PSR.

That was happening because intel_psr_init() is called for every DP
port and then enable_psr is globally set to 0 based on the PSR support
of the DP port.

Fix this by dropping the enable_psr overwrite and using the VBT PSR value
when enable_psr is left at its default.

Fixes: 3cf050762534 ("drm/i915/bios: Split VBT data into per-panel vs. global 
parts")
Cc: Ville Syrjälä 
Cc: Jani Nikula 
Cc: Jouni Högander 
Cc: Mika Kahola 
Signed-off-by: José Roberto de Souza 
---
 drivers/gpu/drm/i915/display/intel_psr.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_psr.c 
b/drivers/gpu/drm/i915/display/intel_psr.c
index aedb3e0e69ecd..7d61c55184e51 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.c
+++ b/drivers/gpu/drm/i915/display/intel_psr.c
@@ -86,10 +86,13 @@
 
 static bool psr_global_enabled(struct intel_dp *intel_dp)
 {
+   struct intel_connector *connector = intel_dp->attached_connector;
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
case I915_PSR_DEBUG_DEFAULT:
+   if (i915->params.enable_psr == -1)
+   return connector->panel.vbt.psr.enable;
return i915->params.enable_psr;
case I915_PSR_DEBUG_DISABLE:
return false;
@@ -2394,10 +2397,6 @@ void intel_psr_init(struct intel_dp *intel_dp)
 
intel_dp->psr.source_support = true;
 
-   if (dev_priv->params.enable_psr == -1)
-   if (!connector->panel.vbt.psr.enable)
-   dev_priv->params.enable_psr = 0;
-
/* Set link_standby x link_off defaults */
if (DISPLAY_VER(dev_priv) < 12)
/* For new platforms up to TGL let's respect VBT back again */
-- 
2.36.1



Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Niranjana Vishwanathapura

On Wed, Jun 08, 2022 at 09:54:24AM +0100, Tvrtko Ursulin wrote:


On 08/06/2022 09:45, Lionel Landwerlin wrote:

On 08/06/2022 11:36, Tvrtko Ursulin wrote:


On 08/06/2022 07:40, Lionel Landwerlin wrote:

On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana 
Vishwanathapura wrote:

On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:

On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:


On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
 wrote:


On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
On Tue, 2022-05-17 at 11:32 -0700, Niranjana 

Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
 Also add new uapi and documentation as per review comments
 from Daniel.

Signed-off-by: Niranjana Vishwanathapura 



---
  Documentation/gpu/rfc/i915_vm_bind.h | 399 

+++

  1 file changed, 399 insertions(+)
  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 

b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND 57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding 

during VM creation.

+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older 

execbuff mode of binding.
+ * In VM_BIND mode, execbuff ioctl will not accept 

any execlist (ie., the

+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES 

extension must be provided

+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of 

&drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. 

I915_EXEC_USE_EXTENSIONS flag must always be
+ * set (See struct 

drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, 

batch_start_offset and batch_len fields
+ * of struct drm_i915_gem_execbuffer2 are also not 

used and must be 0.

+ */


From that description, it seems we have:

struct drm_i915_gem_execbuffer2 {
    __u64 buffers_ptr;  -> must be 0 (new)
    __u32 buffer_count; -> must be 0 (new)
    __u32 batch_start_offset;   -> must be 0 (new)
    __u32 batch_len;    -> must be 0 (new)
    __u32 DR1;  -> must be 0 (old)
    __u32 DR4;  -> must be 0 (old)
    __u32 num_cliprects; (fences)   -> must be 0 

since using extensions
    __u64 cliprects_ptr; (fences, extensions) -> 

contains an actual pointer!
    __u64 flags;    -> some flags 

must be 0 (new)

    __u64 rsvd1; (context info) -> repurposed field (old)
    __u64 rsvd2;    -> unused
};

Based on that, why can't we just get 

drm_i915_gem_execbuffer3 instead
of adding even more complexity to an already abused 

interface? While

the Vulkan-like extension thing is really nice, I don't think what
we're doing here is extending the ioctl usage, we're completely
changing how the base struct should be interpreted 

based on how the VM

was created (which is an entirely different ioctl).

From Rusty Russell's API Design grading,

drm_i915_gem_execbuffer2 is
already at -6 without these changes. I think after 

vm_bind we'll need

to create a -11 entry just to deal with this ioctl.



The only change here is removing the execlist support for VM_BIND
mode (other than natural extensions).
Adding a new execbuffer3 was considered, but I think 
we need to be careful
with that as that goes beyond the VM_BIND support, 
including any future

requirements (as we don't want an execbuffer4 after VM_BIND).


Why not? it's not like adding extensions here is really 
that different

than adding new ioctls.

I definitely think this deserves an execbuffer3 without even
considering future requirements. Just  to burn down the old
requirements and pointless fields.

Make execbuffer3 be vm bind only, no relocs, no legacy 
bits, leave the

older sw on execbuf2 for ever.


I guess another point in favour of execbuf3 would be that it's less
midlayer. If we share the entry point then there's quite a few vfuncs
needed to cleanly split out the vm_bind paths from the legacy
reloc/softpin paths.

If we invert this and do execbuf3, then there's the existing ioctl
vfunc, and then we share code (where it even makes sense, pro

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/xehp: Correct steering initialization

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/xehp: Correct steering initialization
URL   : https://patchwork.freedesktop.org/series/104842/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11732_full -> Patchwork_104842v1_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Participating hosts (10 -> 13)
--

  Additional (3): shard-rkl shard-dg1 shard-tglu 

Known issues


  Here are the changes found in Patchwork_104842v1_full that come from known 
issues:

### CI changes ###

 Issues hit 

  * boot:
- shard-glk:  ([PASS][1], [PASS][2], [PASS][3], [PASS][4], 
[PASS][5], [PASS][6], [PASS][7], [PASS][8], [PASS][9], [PASS][10], [PASS][11], 
[PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], 
[PASS][18], [PASS][19], [PASS][20], [PASS][21], [PASS][22], [PASS][23], 
[PASS][24]) -> ([PASS][25], [PASS][26], [PASS][27], [PASS][28], [PASS][29], 
[PASS][30], [PASS][31], [PASS][32], [PASS][33], [PASS][34], [PASS][35], 
[PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], 
[PASS][42], [PASS][43], [PASS][44], [FAIL][45], [PASS][46], [PASS][47], 
[PASS][48], [PASS][49]) ([i915#4392])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk9/boot.html
   [26]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk9/boot.html
   [27]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk8/boot.html
   [28]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk8/boot.html
   [29]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk7/boot.html
   [30]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk7/boot.html
   [31]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [32]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [33]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [34]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [35]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [36]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [37]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [38]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [39]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [40]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [41]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk3/boot.html
   [42]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk3/boot.html
   [43]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_10484

[Intel-gfx] [PATCH v2 0/8] drm/i915: ttm for internal

2022-06-08 Thread Robert Beckett
This series refactors i915's internal buffer backend to use ttm.
It uses ttm's pool allocator to allocate volatile pages in place of the
old code which rolled its own via alloc_pages.
This is continuing progress to align all backends on using ttm.

v2: - commit message improvements to add detail
- fix i915_ttm_shrink to purge !is_shmem volatile buffers
- limit ttm pool allocator to using dma32 on i965G[M]
- fix mman selftest to always use smem buffers
- create new internal memory region
- make internal backend allocate from internal region
- Fixed various issues with tests and i915 ttm usage as a result
  of supporting regions other than lmem via ttm.

Robert Beckett (8):
  drm/i915/ttm: dont trample cache_level overrides during ttm move
  drm/i915: add gen6 ppgtt dummy creation function
  drm/i915: setup ggtt scratch page after memory regions
  drm/i915: allow volatile buffers to use ttm pool allocator
  drm/i915: limit ttm to dma32 for i965G[M]
  drm/i915/gem: further fix mman selftest
  drm/i915/ttm: trust snooping when gen 6+ when deciding default
cache_level
  drm/i915: internal buffers use ttm backend

 drivers/gpu/drm/i915/gem/i915_gem_internal.c  | 187 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.h  |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_object.c|   1 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c   |   8 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h   |   1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c  |  13 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c|  20 +-
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c  |  43 +++-
 drivers/gpu/drm/i915/gt/intel_gt_gmch.c   |  20 +-
 drivers/gpu/drm/i915/gt/intel_gt_gmch.h   |   6 +
 drivers/gpu/drm/i915/i915_driver.c|  16 +-
 drivers/gpu/drm/i915/i915_pci.c   |   4 +-
 drivers/gpu/drm/i915/intel_memory_region.c|   8 +-
 drivers/gpu/drm/i915/intel_memory_region.h|   2 +
 drivers/gpu/drm/i915/intel_region_ttm.c   |   7 +-
 .../gpu/drm/i915/selftests/mock_gem_device.c  |   2 +-
 17 files changed, 123 insertions(+), 221 deletions(-)

-- 
2.25.1



[Intel-gfx] [PATCH v2 1/8] drm/i915/ttm: dont trample cache_level overrides during ttm move

2022-06-08 Thread Robert Beckett
Various places within the driver override the default chosen cache_level.
Before ttm, these overrides were permanent until explicitly changed again
or for the lifetime of the buffer.

TTM movement code came along and decided that it could make that
decision at that time, which is usually well after object creation, so
overrode the cache_level decision and reverted it back to its default
decision.

Add logic to indicate whether the caching mode has been set by anything
other than the move logic. If so, assume that the code that overrode the
defaults knows best and keep it.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c   | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c  | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 9 ++---
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c 
b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..519887769c08 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -125,6 +125,7 @@ void i915_gem_object_set_cache_coherency(struct 
drm_i915_gem_object *obj,
struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
obj->cache_level = cache_level;
+   obj->ttm.cache_level_override = true;
 
if (cache_level != I915_CACHE_NONE)
obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h 
b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 2c88bdb8ff7c..6632ed52e919 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -605,6 +605,7 @@ struct drm_i915_gem_object {
struct i915_gem_object_page_iter get_io_page;
struct drm_i915_gem_object *backup;
bool created:1;
+   bool cache_level_override:1;
} ttm;
 
/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 4c25d9b2f138..27d59639177f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1241,6 +1241,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region 
*mem,
i915_gem_object_init_memory_region(obj, mem);
i915_ttm_adjust_domains_after_move(obj);
i915_ttm_adjust_gem_after_move(obj);
+   obj->ttm.cache_level_override = false;
i915_gem_object_unlock(obj);
 
return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c 
b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index a10716f4e717..4c1de0b4a10f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -123,9 +123,12 @@ void i915_ttm_adjust_gem_after_move(struct 
drm_i915_gem_object *obj)
obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ? 
I915_BO_FLAG_IOMEM :
I915_BO_FLAG_STRUCT_PAGE;
 
-   cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource,
-  bo->ttm);
-   i915_gem_object_set_cache_coherency(obj, cache_level);
+   if (!obj->ttm.cache_level_override) {
+   cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),
+  bo->resource, bo->ttm);
+   i915_gem_object_set_cache_coherency(obj, cache_level);
+   obj->ttm.cache_level_override = false;
+   }
 }
 
 /**
-- 
2.25.1



[Intel-gfx] [PATCH v2 2/8] drm/i915: add gen6 ppgtt dummy creation function

2022-06-08 Thread Robert Beckett
Internal gem objects will soon just be volatile system memory region
objects.
To enable this, create a separate dummy object creation function
for gen6 ppgtt. The object only exists as a fake object pointing to ggtt
and gains no benefit in going via the internal backend.
Instead, create a dummy gem object and avoid having to maintain a custom
ops api in the internal backend, which makes later refactoring easier.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 43 ++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c 
b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
index 1bb766c79dcb..f3b660cfeb7f 100644
--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
@@ -372,6 +372,45 @@ static const struct drm_i915_gem_object_ops 
pd_dummy_obj_ops = {
.put_pages = pd_dummy_obj_put_pages,
 };
 
+static struct drm_i915_gem_object *
+i915_gem_object_create_dummy(struct drm_i915_private *i915, phys_addr_t size)
+{
+   static struct lock_class_key lock_class;
+   struct drm_i915_gem_object *obj;
+   unsigned int cache_level;
+
+   GEM_BUG_ON(!size);
+   GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
+
+   if (overflows_type(size, obj->base.size))
+   return ERR_PTR(-E2BIG);
+
+   obj = i915_gem_object_alloc();
+   if (!obj)
+   return ERR_PTR(-ENOMEM);
+
+   drm_gem_private_object_init(&i915->drm, &obj->base, size);
+   i915_gem_object_init(obj, &pd_dummy_obj_ops, &lock_class, 0);
+   obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
+
+   /*
+* Mark the object as volatile, such that the pages are marked as
+* dontneed whilst they are still pinned. As soon as they are unpinned
+* they are allowed to be reaped by the shrinker, and the caller is
+* expected to repopulate - the contents of this object are only valid
+* whilst active and pinned.
+*/
+   i915_gem_object_set_volatile(obj);
+
+   obj->read_domains = I915_GEM_DOMAIN_CPU;
+   obj->write_domain = I915_GEM_DOMAIN_CPU;
+
+   cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
+   i915_gem_object_set_cache_coherency(obj, cache_level);
+
+   return obj;
+}
+
 static struct i915_page_directory *
 gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
 {
@@ -383,9 +422,7 @@ gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
if (unlikely(!pd))
return ERR_PTR(-ENOMEM);
 
-   pd->pt.base = __i915_gem_object_create_internal(ppgtt->base.vm.gt->i915,
-   &pd_dummy_obj_ops,
-   I915_PDES * SZ_4K);
pd->pt.base = i915_gem_object_create_dummy(ppgtt->base.vm.gt->i915, I915_PDES * SZ_4K);
if (IS_ERR(pd->pt.base)) {
err = PTR_ERR(pd->pt.base);
pd->pt.base = NULL;
-- 
2.25.1



[Intel-gfx] [PATCH v2 3/8] drm/i915: setup ggtt scratch page after memory regions

2022-06-08 Thread Robert Beckett
Reorder scratch page allocation so that memory regions are available
to allocate the buffers from.

Signed-off-by: Robert Beckett 
Reviewed-by: Thomas Hellström 
---
 drivers/gpu/drm/i915/gt/intel_gt_gmch.c | 20 ++--
 drivers/gpu/drm/i915/gt/intel_gt_gmch.h |  6 ++
 drivers/gpu/drm/i915/i915_driver.c  | 16 ++--
 3 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
index 18e488672d1b..5411df1734ac 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
@@ -440,8 +440,6 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
struct drm_i915_private *i915 = ggtt->vm.i915;
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
phys_addr_t phys_addr;
-   u32 pte_flags;
-   int ret;
 
GEM_WARN_ON(pci_resource_len(pdev, 0) != gen6_gttmmadr_size(i915));
phys_addr = pci_resource_start(pdev, 0) + gen6_gttadr_offset(i915);
@@ -463,6 +461,24 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
}
 
kref_init(&ggtt->vm.resv_ref);
+
+   return 0;
+}
+
+/**
+ * i915_ggtt_setup_scratch_page - setup ggtt scratch page
+ * @i915: i915 device
+ */
+int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
+{
+   struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
+   u32 pte_flags;
+   int ret;
+
+   /* gen5- scratch setup currently happens in @intel_gtt_init */
+   if (GRAPHICS_VER(i915) <= 5)
+   return 0;
+
ret = setup_scratch_page(&ggtt->vm);
if (ret) {
drm_err(&i915->drm, "Scratch setup failed\n");
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
index 75ed55c1f30a..c6b79cb78637 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
@@ -15,6 +15,7 @@ int intel_gt_gmch_gen6_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen8_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen5_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915);
+int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915);
 
 /* Stubs for non-x86 platforms */
 #else
@@ -41,6 +42,11 @@ static inline int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915)
/* No HW should be enabled for this case yet, return fail */
return -ENODEV;
 }
+
+static inline int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
+{
+   return 0;
+}
 #endif
 
 #endif /* __INTEL_GT_GMCH_H__ */
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index d26dcca7e654..4e8a92ffbfe9 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -69,6 +69,7 @@
 #include "gem/i915_gem_mman.h"
 #include "gem/i915_gem_pm.h"
 #include "gt/intel_gt.h"
+#include "gt/intel_gt_gmch.h"
 #include "gt/intel_gt_pm.h"
 #include "gt/intel_rc6.h"
 
@@ -605,12 +606,16 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 
ret = intel_gt_tiles_init(dev_priv);
if (ret)
-   goto err_mem_regions;
+   goto err_ggtt;
+
+   ret = i915_ggtt_setup_scratch_page(dev_priv);
+   if (ret)
+   goto err_ggtt;
 
ret = i915_ggtt_enable_hw(dev_priv);
if (ret) {
drm_err(&dev_priv->drm, "failed to enable GGTT\n");
-   goto err_mem_regions;
+   goto err_ggtt;
}
 
pci_set_master(pdev);
@@ -662,11 +667,10 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
if (pdev->msi_enabled)
pci_disable_msi(pdev);
-err_mem_regions:
-   intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
i915_ggtt_driver_release(dev_priv);
i915_gem_drain_freed_objects(dev_priv);
+   intel_memory_regions_driver_release(dev_priv);
i915_ggtt_driver_late_release(dev_priv);
 err_perf:
i915_perf_fini(dev_priv);
@@ -912,9 +916,9 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
intel_modeset_driver_remove_nogem(i915);
 out_cleanup_hw:
i915_driver_hw_remove(i915);
-   intel_memory_regions_driver_release(i915);
i915_ggtt_driver_release(i915);
i915_gem_drain_freed_objects(i915);
+   intel_memory_regions_driver_release(i915);
i915_ggtt_driver_late_release(i915);
 out_cleanup_mmio:
i915_driver_mmio_release(i915);
@@ -971,9 +975,9 @@ static void i915_driver_release(struct drm_device *dev)
 
i915_gem_driver_release(dev_priv);
 
-   intel_memory_regions_driver_release(dev_priv);
i915_ggtt_driver_release(dev_priv);
i915_gem_drain_freed_objects(dev_priv);
+   intel_memory_regions_driver_release(dev_priv);
i915_ggtt_driver_late_release(dev_priv);
 
i9

[Intel-gfx] [PATCH v2 4/8] drm/i915: allow volatile buffers to use ttm pool allocator

2022-06-08 Thread Robert Beckett
Internal/volatile buffers should not be shmem backed.
If a volatile buffer is requested, allow ttm to use the pool allocator
to provide volatile pages as backing.
Fix i915_ttm_shrink to handle !is_shmem volatile buffers by purging.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 27d59639177f..8edce04a0509 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -309,7 +309,8 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
page_flags |= TTM_TT_FLAG_ZERO_ALLOC;
 
caching = i915_ttm_select_tt_caching(obj);
-   if (i915_gem_object_is_shrinkable(obj) && caching == ttm_cached) {
+   if (i915_gem_object_is_shrinkable(obj) && caching == ttm_cached &&
+   !i915_gem_object_is_volatile(obj)) {
page_flags |= TTM_TT_FLAG_EXTERNAL |
  TTM_TT_FLAG_EXTERNAL_MAPPABLE;
i915_tt->is_shmem = true;
@@ -531,9 +532,9 @@ static int i915_ttm_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
if (!bo->ttm || bo->resource->mem_type != TTM_PL_SYSTEM)
return 0;
 
-   GEM_BUG_ON(!i915_tt->is_shmem);
+   GEM_BUG_ON(!i915_tt->is_shmem && obj->mm.madv != I915_MADV_DONTNEED);
 
-   if (!i915_tt->filp)
+   if (i915_tt->is_shmem && !i915_tt->filp)
return 0;
 
ret = ttm_bo_wait_ctx(bo, &ctx);
-- 
2.25.1



[Intel-gfx] [PATCH v2 7/8] drm/i915/ttm: trust snooping when gen 6+ when deciding default cache_level

2022-06-08 Thread Robert Beckett
By default, i915_ttm_cache_level() selects I915_CACHE_LLC if HAS_SNOOP.
This diverges from the existing backend code, which only considers
HAS_LLC.
Testing shows that trusting snooping on gen5 and earlier is unreliable,
so limit it to gen6+.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 4c1de0b4a10f..e6cce35edb65 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -46,7 +46,9 @@ static enum i915_cache_level
 i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res,
 struct ttm_tt *ttm)
 {
-   return ((HAS_LLC(i915) || HAS_SNOOP(i915)) &&
+   bool can_snoop = HAS_SNOOP(i915) && (GRAPHICS_VER(i915) > 5);
+
+   return ((HAS_LLC(i915) || can_snoop) &&
!i915_ttm_gtt_binds_lmem(res) &&
ttm->caching == ttm_cached) ? I915_CACHE_LLC :
I915_CACHE_NONE;
-- 
2.25.1



[Intel-gfx] [PATCH v2 6/8] drm/i915/gem: further fix mman selftest

2022-06-08 Thread Robert Beckett
In commit 450cede7f380 ("drm/i915/gem: Fix the mman selftest") we fixed up
the mman selftest to allocate user buffers via smem only if we have lmem;
otherwise it used internal buffers.

As the commit message asserts, we should only be using buffers that
userland should be able to create.
Internal buffers are not intended to be used by userland.

Instead, fix the code to always create buffers from smem.
In the case of integrated, this will create them from the shmem non-ttm
backend, which is fine.

This also fixes up the code to allow conversion of internal backend to
ttm without breaking this test.

Signed-off-by: Robert Beckett 
---
 .../gpu/drm/i915/gem/selftests/i915_gem_mman.c  | 17 ++---
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 5bc93a1ce3e3..ee2ad1281f97 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -594,17 +594,12 @@ static enum i915_mmap_type default_mapping(struct drm_i915_private *i915)
 }
 
 static struct drm_i915_gem_object *
-create_sys_or_internal(struct drm_i915_private *i915,
-  unsigned long size)
+create_sys(struct drm_i915_private *i915, unsigned long size)
 {
-   if (HAS_LMEM(i915)) {
-   struct intel_memory_region *sys_region =
-   i915->mm.regions[INTEL_REGION_SMEM];
+   struct intel_memory_region *sys_region =
+   i915->mm.regions[INTEL_REGION_SMEM];
 
-   return __i915_gem_object_create_user(i915, size, &sys_region, 1);
-   }
-
-   return i915_gem_object_create_internal(i915, size);
+   return __i915_gem_object_create_user(i915, size, &sys_region, 1);
 }
 
 static bool assert_mmap_offset(struct drm_i915_private *i915,
@@ -615,7 +610,7 @@ static bool assert_mmap_offset(struct drm_i915_private *i915,
u64 offset;
int ret;
 
-   obj = create_sys_or_internal(i915, size);
+   obj = create_sys(i915, size);
if (IS_ERR(obj))
return expected && expected == PTR_ERR(obj);
 
@@ -717,7 +712,7 @@ static int igt_mmap_offset_exhaustion(void *arg)
}
 
/* Fill the hole, further allocation attempts should then fail */
-   obj = create_sys_or_internal(i915, PAGE_SIZE);
+   obj = create_sys(i915, PAGE_SIZE);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
pr_err("Unable to create object for reclaimed hole\n");
-- 
2.25.1



[Intel-gfx] [PATCH v2 5/8] drm/i915: limit ttm to dma32 for i965G[M]

2022-06-08 Thread Robert Beckett
i965G[M] cannot relocate objects above 4GiB.
Ensure ttm uses dma32 on these systems.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/intel_region_ttm.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index 62ff77445b01..fd2ecfdd8fa1 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -32,10 +32,15 @@
 int intel_region_ttm_device_init(struct drm_i915_private *dev_priv)
 {
struct drm_device *drm = &dev_priv->drm;
+   bool use_dma32 = false;
+
+   /* i965g[m] cannot relocate objects above 4GiB. */
+   if (IS_I965GM(dev_priv) || IS_I965G(dev_priv))
+   use_dma32 = true;
 
return ttm_device_init(&dev_priv->bdev, i915_ttm_driver(),
   drm->dev, drm->anon_inode->i_mapping,
-  drm->vma_offset_manager, false, false);
+  drm->vma_offset_manager, false, use_dma32);
 }
 
 /**
-- 
2.25.1



[Intel-gfx] [PATCH v2 8/8] drm/i915: internal buffers use ttm backend

2022-06-08 Thread Robert Beckett
Create a kernel-only internal memory region that uses the ttm pool
allocator to allocate volatile system pages.
Refactor the internal buffer backend to simply allocate from this new
region.

Signed-off-by: Robert Beckett 
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  | 187 +-
 drivers/gpu/drm/i915/gem/i915_gem_internal.h  |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h   |   1 +
 .../drm/i915/gem/selftests/i915_gem_mman.c|   3 +
 drivers/gpu/drm/i915/i915_pci.c   |   4 +-
 drivers/gpu/drm/i915/intel_memory_region.c|   8 +-
 drivers/gpu/drm/i915/intel_memory_region.h|   2 +
 .../gpu/drm/i915/selftests/mock_gem_device.c  |   2 +-
 8 files changed, 21 insertions(+), 191 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index c698f95af15f..a83751867ac7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -4,188 +4,9 @@
  * Copyright © 2014-2016 Intel Corporation
  */
 
-#include 
-#include 
-#include 
-
+#include "gem/i915_gem_internal.h"
+#include "gem/i915_gem_region.h"
 #include "i915_drv.h"
-#include "i915_gem.h"
-#include "i915_gem_internal.h"
-#include "i915_gem_object.h"
-#include "i915_scatterlist.h"
-#include "i915_utils.h"
-
-#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
-#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
-
-static void internal_free_pages(struct sg_table *st)
-{
-   struct scatterlist *sg;
-
-   for (sg = st->sgl; sg; sg = __sg_next(sg)) {
-   if (sg_page(sg))
-   __free_pages(sg_page(sg), get_order(sg->length));
-   }
-
-   sg_free_table(st);
-   kfree(st);
-}
-
-static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
-{
-   struct drm_i915_private *i915 = to_i915(obj->base.dev);
-   struct sg_table *st;
-   struct scatterlist *sg;
-   unsigned int sg_page_sizes;
-   unsigned int npages;
-   int max_order;
-   gfp_t gfp;
-
-   max_order = MAX_ORDER;
-#ifdef CONFIG_SWIOTLB
-   if (is_swiotlb_active(obj->base.dev->dev)) {
-   unsigned int max_segment;
-
-   max_segment = swiotlb_max_segment();
-   if (max_segment) {
-   max_segment = max_t(unsigned int, max_segment,
-   PAGE_SIZE) >> PAGE_SHIFT;
-   max_order = min(max_order, ilog2(max_segment));
-   }
-   }
-#endif
-
-   gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
-   if (IS_I965GM(i915) || IS_I965G(i915)) {
-   /* 965gm cannot relocate objects above 4GiB. */
-   gfp &= ~__GFP_HIGHMEM;
-   gfp |= __GFP_DMA32;
-   }
-
-create_st:
-   st = kmalloc(sizeof(*st), GFP_KERNEL);
-   if (!st)
-   return -ENOMEM;
-
-   npages = obj->base.size / PAGE_SIZE;
-   if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-   kfree(st);
-   return -ENOMEM;
-   }
-
-   sg = st->sgl;
-   st->nents = 0;
-   sg_page_sizes = 0;
-
-   do {
-   int order = min(fls(npages) - 1, max_order);
-   struct page *page;
-
-   do {
-   page = alloc_pages(gfp | (order ? QUIET : MAYFAIL),
-  order);
-   if (page)
-   break;
-   if (!order--)
-   goto err;
-
-   /* Limit subsequent allocations as well */
-   max_order = order;
-   } while (1);
-
-   sg_set_page(sg, page, PAGE_SIZE << order, 0);
-   sg_page_sizes |= PAGE_SIZE << order;
-   st->nents++;
-
-   npages -= 1 << order;
-   if (!npages) {
-   sg_mark_end(sg);
-   break;
-   }
-
-   sg = __sg_next(sg);
-   } while (1);
-
-   if (i915_gem_gtt_prepare_pages(obj, st)) {
-   /* Failed to dma-map try again with single page sg segments */
-   if (get_order(st->sgl->length)) {
-   internal_free_pages(st);
-   max_order = 0;
-   goto create_st;
-   }
-   goto err;
-   }
-
-   __i915_gem_object_set_pages(obj, st, sg_page_sizes);
-
-   return 0;
-
-err:
-   sg_set_page(sg, NULL, 0, 0);
-   sg_mark_end(sg);
-   internal_free_pages(st);
-
-   return -ENOMEM;
-}
-
-static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
-  struct sg_table *pages)
-{
-   i915_gem_gtt_finish_pages(obj, pages);
-   internal_free_pages(pages);
-
-   obj->mm.dirty = false;
-
-   __start_cpu_write(obj);
-}
-
-static const struct drm_i915_

Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Matthew Brost
On Wed, Jun 08, 2022 at 10:12:45AM +0300, Lionel Landwerlin wrote:
> On 03/06/2022 09:53, Niranjana Vishwanathapura wrote:
> > On Wed, Jun 01, 2022 at 10:08:35PM -0700, Niranjana Vishwanathapura
> > wrote:
> > > On Wed, Jun 01, 2022 at 11:27:17AM +0200, Daniel Vetter wrote:
> > > > On Wed, 1 Jun 2022 at 11:03, Dave Airlie  wrote:
> > > > > 
> > > > > On Tue, 24 May 2022 at 05:20, Niranjana Vishwanathapura
> > > > >  wrote:
> > > > > > 
> > > > > > On Thu, May 19, 2022 at 04:07:30PM -0700, Zanoni, Paulo R wrote:
> > > > > > >On Tue, 2022-05-17 at 11:32 -0700, Niranjana Vishwanathapura wrote:
> > > > > > >> VM_BIND and related uapi definitions
> > > > > > >>
> > > > > > >> v2: Ensure proper kernel-doc formatting with cross references.
> > > > > > >> Also add new uapi and documentation as per review comments
> > > > > > >> from Daniel.
> > > > > > >>
> > > > > > >> Signed-off-by: Niranjana Vishwanathapura
> > > > > > 
> > > > > > >> ---
> > > > > > >>  Documentation/gpu/rfc/i915_vm_bind.h | 399
> > > > > > +++
> > > > > > >>  1 file changed, 399 insertions(+)
> > > > > > >>  create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h
> > > > > > >>
> > > > > > >> diff --git a/Documentation/gpu/rfc/i915_vm_bind.h
> > > > > > b/Documentation/gpu/rfc/i915_vm_bind.h
> > > > > > >> new file mode 100644
> > > > > > >> index ..589c0a009107
> > > > > > >> --- /dev/null
> > > > > > >> +++ b/Documentation/gpu/rfc/i915_vm_bind.h
> > > > > > >> @@ -0,0 +1,399 @@
> > > > > > >> +/* SPDX-License-Identifier: MIT */
> > > > > > >> +/*
> > > > > > >> + * Copyright © 2022 Intel Corporation
> > > > > > >> + */
> > > > > > >> +
> > > > > > >> +/**
> > > > > > >> + * DOC: I915_PARAM_HAS_VM_BIND
> > > > > > >> + *
> > > > > > >> + * VM_BIND feature availability.
> > > > > > >> + * See typedef drm_i915_getparam_t param.
> > > > > > >> + */
> > > > > > >> +#define I915_PARAM_HAS_VM_BIND   57
> > > > > > >> +
> > > > > > >> +/**
> > > > > > >> + * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
> > > > > > >> + *
> > > > > > >> + * Flag to opt-in for VM_BIND mode of binding during VM 
> > > > > > >> creation.
> > > > > > >> + * See struct drm_i915_gem_vm_control flags.
> > > > > > >> + *
> > > > > > >> + * A VM in VM_BIND mode will not support the older
> > > > > > execbuff mode of binding.
> > > > > > >> + * In VM_BIND mode, execbuff ioctl will not accept
> > > > > > any execlist (ie., the
> > > > > > >> + * &drm_i915_gem_execbuffer2.buffer_count must be 0).
> > > > > > >> + * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
> > > > > > >> + * &drm_i915_gem_execbuffer2.batch_len must be 0.
> > > > > > >> + * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES
> > > > > > extension must be provided
> > > > > > >> + * to pass in the batch buffer addresses.
> > > > > > >> + *
> > > > > > >> + * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
> > > > > > >> + * I915_EXEC_BATCH_FIRST of
> > > > > > &drm_i915_gem_execbuffer2.flags must be 0
> > > > > > >> + * (not used) in VM_BIND mode.
> > > > > > I915_EXEC_USE_EXTENSIONS flag must always be
> > > > > > >> + * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
> > > > > > >> + * The buffers_ptr, buffer_count, batch_start_offset
> > > > > > and batch_len fields
> > > > > > >> + * of struct drm_i915_gem_execbuffer2 are also not
> > > > > > used and must be 0.
> > > > > > >> + */
> > > > > > >
> > > > > > >From that description, it seems we have:
> > > > > > >
> > > > > > >struct drm_i915_gem_execbuffer2 {
> > > > > > >    __u64 buffers_ptr;  -> must be 0 (new)
> > > > > > >    __u32 buffer_count; -> must be 0 (new)
> > > > > > >    __u32 batch_start_offset;   -> must be 0 (new)
> > > > > > >    __u32 batch_len;    -> must be 0 (new)
> > > > > > >    __u32 DR1;  -> must be 0 (old)
> > > > > > >    __u32 DR4;  -> must be 0 (old)
> > > > > > >    __u32 num_cliprects; (fences)   -> must be 0
> > > > > > since using extensions
> > > > > > >    __u64 cliprects_ptr; (fences, extensions) ->
> > > > > > contains an actual pointer!
> > > > > > >    __u64 flags;    -> some flags
> > > > > > must be 0 (new)
> > > > > > >    __u64 rsvd1; (context info) -> repurposed field (old)
> > > > > > >    __u64 rsvd2;    -> unused
> > > > > > >};
> > > > > > >
> > > > > > >Based on that, why can't we just get
> > > > > > drm_i915_gem_execbuffer3 instead
> > > > > > >of adding even more complexity to an already abused interface? 
> > > > > > >While
> > > > > > >the Vulkan-like extension thing is really nice, I don't think what
> > > > > > >we're doing here is extending the ioctl usage, we're completely
> > > > > > >changing how the base struct should be interpreted
> > > > > > based on how the VM
> > > > > > >was created (which is an entirely different ioctl).
> > > > > > >
> > > > > > >From Rust

[Intel-gfx] i915 w/5.19: wild flickering glitching technicolor pyrotechnics on resumption from suspend

2022-06-08 Thread Jason A. Donenfeld
Hi,

Using the i7-11850H's iGPU, when I wake my Thinkpad X1 Extreme Gen 4
from suspend on 5.19-rc1, the display shows wild flickering with
strobing colors and more artifacts than real pixels in an utter
disaster explosion of pantone, as though bombs were dropped on the
leprechauns at the base of the rainbow. That sounds ridiculous, but
the strobing pattern really is a sight to be seen.

I can only stop it by rebooting.

I took a video, but I worry about having private info in my video
buffer, so if you'd like to see it, email me privately.

The system is otherwise operational and fine. Nothing in dmesg, even
with i915 debugging enabled, and if it weren't for the display
artifacts, I believe the actual rendering in video memory would be
fine. So this sort of looks like a display driver or some sort of psr
problem.

Let me know if there's anything else I can do to provide more debug
info. It's pretty trivial to reproduce: I just put my laptop to sleep
and wake it back up.

Jason


Re: [Intel-gfx] [RFC v3 3/3] drm/doc/rfc: VM_BIND uapi definition

2022-06-08 Thread Niranjana Vishwanathapura

On Wed, Jun 08, 2022 at 10:12:05AM +0100, Matthew Auld wrote:

On 08/06/2022 08:17, Tvrtko Ursulin wrote:


On 07/06/2022 20:37, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:27:14AM +0100, Tvrtko Ursulin wrote:


On 17/05/2022 19:32, Niranjana Vishwanathapura wrote:

VM_BIND and related uapi definitions

v2: Ensure proper kernel-doc formatting with cross references.
    Also add new uapi and documentation as per review comments
    from Daniel.

Signed-off-by: Niranjana Vishwanathapura 


---
 Documentation/gpu/rfc/i915_vm_bind.h | 399 +++
 1 file changed, 399 insertions(+)
 create mode 100644 Documentation/gpu/rfc/i915_vm_bind.h

diff --git a/Documentation/gpu/rfc/i915_vm_bind.h 
b/Documentation/gpu/rfc/i915_vm_bind.h

new file mode 100644
index ..589c0a009107
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_vm_bind.h
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+/**
+ * DOC: I915_PARAM_HAS_VM_BIND
+ *
+ * VM_BIND feature availability.
+ * See typedef drm_i915_getparam_t param.
+ */
+#define I915_PARAM_HAS_VM_BIND    57
+
+/**
+ * DOC: I915_VM_CREATE_FLAGS_USE_VM_BIND
+ *
+ * Flag to opt-in for VM_BIND mode of binding during VM creation.
+ * See struct drm_i915_gem_vm_control flags.
+ *
+ * A VM in VM_BIND mode will not support the older execbuff 
mode of binding.
+ * In VM_BIND mode, execbuff ioctl will not accept any 
execlist (ie., the

+ * &drm_i915_gem_execbuffer2.buffer_count must be 0).
+ * Also, &drm_i915_gem_execbuffer2.batch_start_offset and
+ * &drm_i915_gem_execbuffer2.batch_len must be 0.
+ * DRM_I915_GEM_EXECBUFFER_EXT_BATCH_ADDRESSES extension must 
be provided

+ * to pass in the batch buffer addresses.
+ *
+ * Additionally, I915_EXEC_NO_RELOC, I915_EXEC_HANDLE_LUT and
+ * I915_EXEC_BATCH_FIRST of &drm_i915_gem_execbuffer2.flags must be 0
+ * (not used) in VM_BIND mode. I915_EXEC_USE_EXTENSIONS flag 
must always be

+ * set (See struct drm_i915_gem_execbuffer_ext_batch_addresses).
+ * The buffers_ptr, buffer_count, batch_start_offset and 
batch_len fields

+ * of struct drm_i915_gem_execbuffer2 are also not used and must be 0.
+ */
+#define I915_VM_CREATE_FLAGS_USE_VM_BIND    (1 << 0)
+
+/**
+ * DOC: I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING
+ *
+ * Flag to declare context as long running.
+ * See struct drm_i915_gem_context_create_ext flags.
+ *
+ * Usage of dma-fence expects that they complete in 
reasonable amount of time.
+ * Compute on the other hand can be long running. Hence it is 
not appropriate
+ * for compute contexts to export request completion 
dma-fence to user.

+ * The dma-fence usage will be limited to in-kernel consumption only.
+ * Compute contexts need to use user/memory fence.
+ *
+ * So, long running contexts do not support output fences. Hence,
+ * I915_EXEC_FENCE_OUT (See &drm_i915_gem_execbuffer2.flags and
+ * I915_EXEC_FENCE_SIGNAL (See 
&drm_i915_gem_exec_fence.flags) are expected

+ * to be not used.
+ *
+ * DRM_I915_GEM_WAIT ioctl call is also not supported for 
objects mapped

+ * to long running contexts.
+ */
+#define I915_CONTEXT_CREATE_FLAGS_LONG_RUNNING   (1u << 2)
+
+/* VM_BIND related ioctls */
+#define DRM_I915_GEM_VM_BIND    0x3d
+#define DRM_I915_GEM_VM_UNBIND    0x3e
+#define DRM_I915_GEM_WAIT_USER_FENCE    0x3f
+
+#define DRM_IOCTL_I915_GEM_VM_BIND    
DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_BIND, struct 
drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_VM_UNBIND 
DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_UNBIND, struct 
drm_i915_gem_vm_bind)
+#define DRM_IOCTL_I915_GEM_WAIT_USER_FENCE 
DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT_USER_FENCE, 
struct drm_i915_gem_wait_user_fence)

+
+/**
+ * struct drm_i915_gem_vm_bind - VA to object mapping to bind.
+ *
+ * This structure is passed to VM_BIND ioctl and specifies 
the mapping of GPU
+ * virtual address (VA) range to the section of an object 
that should be bound

+ * in the device page table of the specified address space (VM).
+ * The VA range specified must be unique (ie., not currently 
bound) and can
+ * be mapped to whole object or a section of the object 
(partial binding).
+ * Multiple VA mappings can be created to the same section of 
the object

+ * (aliasing).
+ */
+struct drm_i915_gem_vm_bind {
+    /** @vm_id: VM (address space) id to bind */
+    __u32 vm_id;
+
+    /** @handle: Object handle */
+    __u32 handle;
+
+    /** @start: Virtual Address start to bind */
+    __u64 start;
+
+    /** @offset: Offset in object to bind */
+    __u64 offset;
+
+    /** @length: Length of mapping to bind */
+    __u64 length;


Does it support, or should it, equivalent of 
EXEC_OBJECT_PAD_TO_SIZE? Or if not userspace is expected to map 
the remainder of the space to a dummy object? In which case 
would there be any alignment/padding issues preventing the two 
bind to be placed next to each other?


I ask because someone from the compute side asked me about a 
problem

Re: [Intel-gfx] [RFC v3 1/3] drm/doc/rfc: VM_BIND feature design document

2022-06-08 Thread Niranjana Vishwanathapura

On Wed, Jun 08, 2022 at 08:33:25AM +0100, Tvrtko Ursulin wrote:



On 07/06/2022 22:32, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 11:18:11AM -0700, Niranjana Vishwanathapura wrote:

On Tue, Jun 07, 2022 at 12:12:03PM -0500, Jason Ekstrand wrote:

 On Fri, Jun 3, 2022 at 6:52 PM Niranjana Vishwanathapura
  wrote:

   On Fri, Jun 03, 2022 at 10:20:25AM +0300, Lionel Landwerlin wrote:
   >   On 02/06/2022 23:35, Jason Ekstrand wrote:
   >
   > On Thu, Jun 2, 2022 at 3:11 PM Niranjana Vishwanathapura
   >  wrote:
   >
   >   On Wed, Jun 01, 2022 at 01:28:36PM -0700, Matthew 
Brost wrote:

   >   >On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin
   wrote:
   >   >> On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
   >   >> > +VM_BIND/UNBIND ioctl will immediately start
   binding/unbinding
   >   the mapping in an
   >   >> > +async worker. The binding and unbinding will 
work like a

   special
   >   GPU engine.
   >   >> > +The binding and unbinding operations are serialized and
   will
   >   wait on specified
   >   >> > +input fences before the operation and will signal the
   output
   >   fences upon the
   >   >> > +completion of the operation. Due to serialization,
   completion of
   >   an operation
   >   >> > +will also indicate that all previous operations 
are also

   >   complete.
   >   >>
   >   >> I guess we should avoid saying "will immediately start
   >   binding/unbinding" if
   >   >> there are fences involved.
   >   >>
   >   >> And the fact that it's happening in an async 
worker seem to

   imply
   >   it's not
   >   >> immediate.
   >   >>
   >
   >   Ok, will fix.
   >   This was added because in earlier design binding was deferred
   until
   >   next execbuff.
   >   But now it is non-deferred (immediate in that sense). 
But yah,

   this is
   >   confusing
   >   and will fix it.
   >
   >   >>
   >   >> I have a question on the behavior of the bind 
operation when

   no
   >   input fence
   >   >> is provided. Let say I do :
   >   >>
   >   >> VM_BIND (out_fence=fence1)
   >   >>
   >   >> VM_BIND (out_fence=fence2)
   >   >>
   >   >> VM_BIND (out_fence=fence3)
   >   >>
   >   >>
   >   >> In what order are the fences going to be signaled?
   >   >>
   >   >> In the order of VM_BIND ioctls? Or out of order?
   >   >>
   >   >> Because you wrote "serialized I assume it's : in order
   >   >>
   >
   >   Yes, in the order of VM_BIND/UNBIND ioctls. Note that bind and unbind
   >   will use the same queue and hence are ordered.
   >
   >   >>
   >   >> One thing I didn't realize is that because we only get one
   >   >> "VM_BIND" engine, there is a disconnect from the Vulkan
   >   >> specification.
   >   >>
   >   >> In Vulkan VM_BIND operations are serialized but per engine.
   >   >>
   >   >> So you could have something like this :
   >   >>
   >   >> VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
   >   >>
   >   >> VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
   >   >>
   >   >>
   >   >> fence1 is not signaled
   >   >>
   >   >> fence3 is signaled
   >   >>
   >   >> So the second VM_BIND will proceed before the first VM_BIND.
   >   >>
   >   >>
   >   >> I guess we can deal with that scenario in userspace by doing the
   >   >> wait ourselves in one thread per engines.
   >   >>
   >   >> But then it makes the VM_BIND input fences useless.
   >   >>
   >   >>
   >   >> Daniel : what do you think? Should be rework this or just deal
   >   >> with wait fences in userspace?
   >   >>
   >   >
   >   >My opinion is rework this but make the ordering via an engine param
   >   >optional.
   >   >
   >   >e.g. A VM can be configured so all binds are ordered within the VM
   >   >
   >   >e.g. A VM can be configured so all binds accept an engine argument
   >   >(in the case of the i915 likely this is a gem context handle) and
   >   >binds ordered with respect to that engine.
   >   >
   >   >This gives UMDs options as the later likely consumes more KMD
   >   >resources so if a different UMD can live with binds being ordered
   >   >within the VM they can use a mode consuming less resources.
   >   >
   >
   >   I think we need to be careful here if we are looking for some out of
   >   (submission) order completion of vm_bind/unbind.
   >   In-order completion means, in a batch of binds and unbinds to be
   >   completed in-order, user only needs to specify in-fence for the
   >   first bind/unbind call and the our-fen

Re: [Intel-gfx] [RFC v3 1/3] drm/doc/rfc: VM_BIND feature design document

2022-06-08 Thread Jason Ekstrand
On Wed, Jun 8, 2022 at 4:44 PM Niranjana Vishwanathapura <
niranjana.vishwanathap...@intel.com> wrote:

> On Wed, Jun 08, 2022 at 08:33:25AM +0100, Tvrtko Ursulin wrote:
> >
> >
> >On 07/06/2022 22:32, Niranjana Vishwanathapura wrote:
> >>On Tue, Jun 07, 2022 at 11:18:11AM -0700, Niranjana Vishwanathapura
> wrote:
> >>>On Tue, Jun 07, 2022 at 12:12:03PM -0500, Jason Ekstrand wrote:
>  On Fri, Jun 3, 2022 at 6:52 PM Niranjana Vishwanathapura
>   wrote:
> 
>    On Fri, Jun 03, 2022 at 10:20:25AM +0300, Lionel Landwerlin wrote:
>    >   On 02/06/2022 23:35, Jason Ekstrand wrote:
>    >
>    > On Thu, Jun 2, 2022 at 3:11 PM Niranjana Vishwanathapura
>    >  wrote:
>    >
>    >   On Wed, Jun 01, 2022 at 01:28:36PM -0700, Matthew
> Brost wrote:
>    >   >On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin
>    wrote:
>    >   >> On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
>    >   >> > +VM_BIND/UNBIND ioctl will immediately start
>    binding/unbinding
>    >   the mapping in an
>    >   >> > +async worker. The binding and unbinding will
> work like a
>    special
>    >   GPU engine.
>    >   >> > +The binding and unbinding operations are serialized
> and
>    will
>    >   wait on specified
>    >   >> > +input fences before the operation and will signal the
>    output
>    >   fences upon the
>    >   >> > +completion of the operation. Due to serialization,
>    completion of
>    >   an operation
>    >   >> > +will also indicate that all previous operations
> are also
>    >   complete.
>    >   >>
>    >   >> I guess we should avoid saying "will immediately start
>    >   binding/unbinding" if
>    >   >> there are fences involved.
>    >   >>
>    >   >> And the fact that it's happening in an async
> worker seem to
>    imply
>    >   it's not
>    >   >> immediate.
>    >   >>
>    >
>    >   Ok, will fix.
>    >   This was added because in earlier design binding was
> deferred
>    until
>    >   next execbuff.
>    >   But now it is non-deferred (immediate in that sense).
> But yah,
>    this is
>    >   confusing
>    >   and will fix it.
>    >
>    >   >>
>    >   >> I have a question on the behavior of the bind
> operation when
>    no
>    >   input fence
>    >   >> is provided. Let say I do :
>    >   >>
>    >   >> VM_BIND (out_fence=fence1)
>    >   >>
>    >   >> VM_BIND (out_fence=fence2)
>    >   >>
>    >   >> VM_BIND (out_fence=fence3)
>    >   >>
>    >   >>
>    >   >> In what order are the fences going to be signaled?
>    >   >>
>    >   >> In the order of VM_BIND ioctls? Or out of order?
>    >   >>
>    >   >> Because you wrote "serialized I assume it's : in order
>    >   >>
>    >
>    >   Yes, in the order of VM_BIND/UNBIND ioctls. Note that
> bind and
>    unbind
>    >   will use
>    >   the same queue and hence are ordered.
>    >
>    >   >>
>    >   >> One thing I didn't realize is that because we only get
> one
>    >   "VM_BIND" engine,
>    >   >> there is a disconnect from the Vulkan specification.
>    >   >>
>    >   >> In Vulkan VM_BIND operations are serialized but
> per engine.
>    >   >>
>    >   >> So you could have something like this :
>    >   >>
>    >   >> VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
>    >   >>
>    >   >> VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
>    >   >>
>    >   >>
>    >   >> fence1 is not signaled
>    >   >>
>    >   >> fence3 is signaled
>    >   >>
>    >   >> So the second VM_BIND will proceed before the
> first VM_BIND.
>    >   >>
>    >   >>
>    >   >> I guess we can deal with that scenario in
> userspace by doing
>    the
>    >   wait
>    >   >> ourselves in one thread per engines.
>    >   >>
>    >   >> But then it makes the VM_BIND input fences useless.
>    >   >>
>    >   >>
>    >   >> Daniel : what do you think? Should be rework this or just
>    deal with
>    >   wait
>    >   >> fences in userspace?
>    >   >>
>    >   >
>    >   >My opinion is rework this but make the ordering via
> an engine
>    param
>    >   optional.
>    >   >
>    >   >e.g. A VM can be configured so all binds are ordered
> within the
> >>

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915: ttm for internal

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915: ttm for internal
URL   : https://patchwork.freedesktop.org/series/104909/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.




[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/xehp: Correct steering initialization

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/xehp: Correct steering initialization
URL   : https://patchwork.freedesktop.org/series/104842/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11732_full -> Patchwork_104842v1_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Participating hosts (10 -> 13)
--

  Additional (3): shard-rkl shard-dg1 shard-tglu 

Known issues


  Here are the changes found in Patchwork_104842v1_full that come from known 
issues:

### CI changes ###

 Issues hit 

  * boot:
- shard-glk:  ([PASS][1], [PASS][2], [PASS][3], [PASS][4], 
[PASS][5], [PASS][6], [PASS][7], [PASS][8], [PASS][9], [PASS][10], [PASS][11], 
[PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], 
[PASS][18], [PASS][19], [PASS][20], [PASS][21], [PASS][22], [PASS][23], 
[PASS][24]) -> ([PASS][25], [PASS][26], [PASS][27], [PASS][28], [PASS][29], 
[PASS][30], [PASS][31], [PASS][32], [PASS][33], [PASS][34], [PASS][35], 
[PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], 
[PASS][42], [PASS][43], [PASS][44], [FAIL][45], [PASS][46], [PASS][47], 
[PASS][48], [PASS][49]) ([i915#4392])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk7/boot.html
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk6/boot.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk5/boot.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk4/boot.html
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk3/boot.html
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk2/boot.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk8/boot.html
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk9/boot.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11732/shard-glk1/boot.html
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk9/boot.html
   [26]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk9/boot.html
   [27]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk8/boot.html
   [28]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk8/boot.html
   [29]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk7/boot.html
   [30]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk7/boot.html
   [31]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [32]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [33]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk6/boot.html
   [34]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [35]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [36]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk5/boot.html
   [37]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [38]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [39]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [40]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk4/boot.html
   [41]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk3/boot.html
   [42]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104842v1/shard-glk3/boot.html
   [43]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_10484

Re: [Intel-gfx] [RFC v3 1/3] drm/doc/rfc: VM_BIND feature design document

2022-06-08 Thread Niranjana Vishwanathapura

On Wed, Jun 08, 2022 at 04:55:38PM -0500, Jason Ekstrand wrote:

  On Wed, Jun 8, 2022 at 4:44 PM Niranjana Vishwanathapura
   wrote:

On Wed, Jun 08, 2022 at 08:33:25AM +0100, Tvrtko Ursulin wrote:
>
>
>On 07/06/2022 22:32, Niranjana Vishwanathapura wrote:
>>On Tue, Jun 07, 2022 at 11:18:11AM -0700, Niranjana Vishwanathapura
wrote:
>>>On Tue, Jun 07, 2022 at 12:12:03PM -0500, Jason Ekstrand wrote:
 On Fri, Jun 3, 2022 at 6:52 PM Niranjana Vishwanathapura
  wrote:

   On Fri, Jun 03, 2022 at 10:20:25AM +0300, Lionel Landwerlin
wrote:
   >   On 02/06/2022 23:35, Jason Ekstrand wrote:
   >
   > On Thu, Jun 2, 2022 at 3:11 PM Niranjana Vishwanathapura
   >  wrote:
   >
   >   On Wed, Jun 01, 2022 at 01:28:36PM -0700, Matthew
Brost wrote:
   >   >On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel
Landwerlin
   wrote:
   >   >> On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
   >   >> > +VM_BIND/UNBIND ioctl will immediately start
   binding/unbinding
   >   the mapping in an
   >   >> > +async worker. The binding and unbinding will
work like a
   special
   >   GPU engine.
   >   >> > +The binding and unbinding operations are serialized
and
   will
   >   wait on specified
   >   >> > +input fences before the operation and will signal
the
   output
   >   fences upon the
   >   >> > +completion of the operation. Due to serialization,
   completion of
   >   an operation
   >   >> > +will also indicate that all previous operations
are also
   >   complete.
   >   >>
   >   >> I guess we should avoid saying "will immediately start
   >   binding/unbinding" if
   >   >> there are fences involved.
   >   >>
   >   >> And the fact that it's happening in an async
worker seem to
   imply
   >   it's not
   >   >> immediate.
   >   >>
   >
   >   Ok, will fix.
   >   This was added because in earlier design binding was
deferred
   until
   >   next execbuff.
   >   But now it is non-deferred (immediate in that sense).
But yah,
   this is
   >   confusing
   >   and will fix it.
   >
   >   >>
   >   >> I have a question on the behavior of the bind
operation when
   no
   >   input fence
   >   >> is provided. Let say I do :
   >   >>
   >   >> VM_BIND (out_fence=fence1)
   >   >>
   >   >> VM_BIND (out_fence=fence2)
   >   >>
   >   >> VM_BIND (out_fence=fence3)
   >   >>
   >   >>
   >   >> In what order are the fences going to be signaled?
   >   >>
   >   >> In the order of VM_BIND ioctls? Or out of order?
   >   >>
   >   >> Because you wrote "serialized I assume it's : in order
   >   >>
   >
   >   Yes, in the order of VM_BIND/UNBIND ioctls. Note that
bind and
   unbind
   >   will use
   >   the same queue and hence are ordered.
   >
   >   >>
   >   >> One thing I didn't realize is that because we only get
one
   >   "VM_BIND" engine,
   >   >> there is a disconnect from the Vulkan specification.
   >   >>
   >   >> In Vulkan VM_BIND operations are serialized but
per engine.
   >   >>
   >   >> So you could have something like this :
   >   >>
   >   >> VM_BIND (engine=rcs0, in_fence=fence1,
out_fence=fence2)
   >   >>
   >   >> VM_BIND (engine=ccs0, in_fence=fence3,
out_fence=fence4)
   >   >>
   >   >>
   >   >> fence1 is not signaled
   >   >>
   >   >> fence3 is signaled
   >   >>
   >   >> So the second VM_BIND will proceed before the
first VM_BIND.
   >   >>
   >   >>
   >   >> I guess we can deal with that scenario in
userspace by doing
   the
   >   wait
   >   >> ourselves in one thread per engines.
   >   >>
   >   >> But then it makes the VM_BIND input fences useless.
   >   >>
   >   >>
   >   >> Daniel : what do you think? Should be rework this or
just
   deal with
   >   wait
 

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/pvc: Add register steering (rev2)

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/pvc: Add register steering (rev2)
URL   : https://patchwork.freedesktop.org/series/104691/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11740 -> Patchwork_104691v2


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/index.html

Participating hosts (43 -> 44)
--

  Additional (5): bat-adlm-1 fi-icl-u2 bat-adlp-4 bat-atsm-1 bat-jsl-1 
  Missing(4): fi-kbl-soraka bat-dg2-8 bat-dg2-9 fi-bdw-samus 

Known issues


  Here are the changes found in Patchwork_104691v2 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@gem_huc_copy@huc-copy:
- fi-icl-u2:  NOTRUN -> [SKIP][1] ([i915#2190])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@gem_huc_c...@huc-copy.html

  * igt@gem_lmem_swapping@parallel-random-engines:
- bat-adlp-4: NOTRUN -> [SKIP][2] ([i915#4613]) +3 similar issues
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@gem_lmem_swapp...@parallel-random-engines.html

  * igt@gem_lmem_swapping@random-engines:
- fi-icl-u2:  NOTRUN -> [SKIP][3] ([i915#4613]) +3 similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@gem_lmem_swapp...@random-engines.html

  * igt@gem_tiled_pread_basic:
- bat-adlp-4: NOTRUN -> [SKIP][4] ([i915#3282])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@gem_tiled_pread_basic.html

  * igt@i915_selftest@live@hangcheck:
- bat-dg1-6:  [PASS][5] -> [DMESG-FAIL][6] ([i915#4494] / 
[i915#4957])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11740/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html

  * igt@i915_selftest@live@requests:
- fi-pnv-d510:[PASS][7] -> [DMESG-FAIL][8] ([i915#4528])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11740/fi-pnv-d510/igt@i915_selftest@l...@requests.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-pnv-d510/igt@i915_selftest@l...@requests.html

  * igt@i915_suspend@basic-s3-without-i915:
- fi-icl-u2:  NOTRUN -> [SKIP][9] ([i915#5903])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@i915_susp...@basic-s3-without-i915.html
- bat-adlp-4: NOTRUN -> [SKIP][10] ([i915#5903])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@i915_susp...@basic-s3-without-i915.html

  * igt@kms_chamelium@common-hpd-after-suspend:
- fi-hsw-4770:NOTRUN -> [SKIP][11] ([fdo#109271] / [fdo#111827])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-hsw-4770/igt@kms_chamel...@common-hpd-after-suspend.html

  * igt@kms_chamelium@dp-crc-fast:
- bat-adlp-4: NOTRUN -> [SKIP][12] ([fdo#111827]) +8 similar issues
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@kms_chamel...@dp-crc-fast.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-icl-u2:  NOTRUN -> [SKIP][13] ([fdo#111827]) +8 similar issues
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@kms_chamel...@hdmi-hpd-fast.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
- bat-adlp-4: NOTRUN -> [SKIP][14] ([i915#4103]) +1 similar issue
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@kms_cursor_leg...@basic-busy-flip-before-cursor-legacy.html

  * igt@kms_force_connector_basic@force-connector-state:
- fi-icl-u2:  NOTRUN -> [WARN][15] ([i915#6008])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@kms_force_connector_ba...@force-connector-state.html

  * igt@kms_force_connector_basic@force-load-detect:
- bat-adlp-4: NOTRUN -> [SKIP][16] ([i915#4093]) +3 similar issues
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/bat-adlp-4/igt@kms_force_connector_ba...@force-load-detect.html
- fi-icl-u2:  NOTRUN -> [SKIP][17] ([fdo#109285])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@kms_force_connector_ba...@force-load-detect.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
- fi-icl-u2:  NOTRUN -> [SKIP][18] ([fdo#109278]) +2 similar issues
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@kms_pipe_crc_ba...@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_setmode@basic-clone-single-crtc:
- fi-icl-u2:  NOTRUN -> [SKIP][19] ([i915#3555])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104691v2/fi-icl-u2/igt@kms_setm...@bas

Re: [Intel-gfx] [PATCH v2 2/2] vfio: Replace the iommu notifier with a device list

2022-06-08 Thread Tian, Kevin
> From: Jason Gunthorpe 
> Sent: Wednesday, June 8, 2022 7:27 PM
> 
> On Wed, Jun 08, 2022 at 03:47:12AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Wednesday, June 8, 2022 7:02 AM
> > >
> > > Instead of bouncing the function call to the driver op through a blocking
> > > notifier just have the iommu layer call it directly.
> > >
> > > Register each device that is being attached to the iommu with the lower
> > > driver which then threads them on a linked list and calls the appropriate
> > > driver op at the right time.
> > >
> > > Currently the only use is if dma_unmap() is defined.
> > >
> > > Also, fully lock all the debugging tests on the pinning path that a
> > > dma_unmap is registered.
> > >
> > ...
> > > Signed-off-by: Jason Gunthorpe 
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c
> > > b/drivers/vfio/vfio_iommu_type1.c
> > > index c13b9290e35759..4ddb1f1abd238b 100644
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -67,7 +67,8 @@ struct vfio_iommu {
> > >   struct list_headiova_list;
> > >   struct mutexlock;
> > >   struct rb_root  dma_list;
> > > - struct blocking_notifier_head notifier;
> > > + struct list_headdevice_list;
> > > + struct mutexdevice_list_lock;
> >
> > emulated_device_list or unmap_device_list, to be more accurate?
> 
> I don't think it is worth getting too specific here, the fact it only
> has dma_unmap devices is a bit of a shortcut to save some work.
> 
> > btw a few lines downward there is already a emulated_iommu_groups.
> > the two sounds a bit overlapping given mdev's iommu group is faked
> > and singleton. Wonder whether it's cleaner to just reuse the existing 
> > field...
> 
> The locking is more complicated on this since we still have check
> every device in the group if it is opened or not while launching the
> callback and prevent it from opening/closing while the callback is
> running.
> 
> Since I plan to delete the dirty tracking which will drop the
> emulated_iommu_groups too I would leave it like this.
> 

Makes sense.

Reviewed-by: Kevin Tian 


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/display: Fix handling of enable_psr parameter

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915/display: Fix handling of enable_psr parameter
URL   : https://patchwork.freedesktop.org/series/104907/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11740 -> Patchwork_104907v1


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/index.html

Participating hosts (43 -> 45)
--

  Additional (5): fi-icl-u2 bat-adlp-4 bat-atsm-1 bat-jsl-2 bat-jsl-1 
  Missing(3): bat-dg2-8 bat-dg2-9 fi-bdw-samus 

Known issues


  Here are the changes found in Patchwork_104907v1 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@gem_huc_copy@huc-copy:
- fi-icl-u2:  NOTRUN -> [SKIP][1] ([i915#2190])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-icl-u2/igt@gem_huc_c...@huc-copy.html

  * igt@gem_lmem_swapping@parallel-random-engines:
- bat-adlp-4: NOTRUN -> [SKIP][2] ([i915#4613]) +3 similar issues
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-adlp-4/igt@gem_lmem_swapp...@parallel-random-engines.html

  * igt@gem_lmem_swapping@random-engines:
- fi-icl-u2:  NOTRUN -> [SKIP][3] ([i915#4613]) +3 similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-icl-u2/igt@gem_lmem_swapp...@random-engines.html

  * igt@gem_tiled_pread_basic:
- bat-adlp-4: NOTRUN -> [SKIP][4] ([i915#3282])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-adlp-4/igt@gem_tiled_pread_basic.html

  * igt@i915_selftest@live@gem:
- fi-blb-e6850:   NOTRUN -> [DMESG-FAIL][5] ([i915#4528])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-blb-e6850/igt@i915_selftest@l...@gem.html

  * igt@i915_selftest@live@gt_heartbeat:
- fi-skl-6700k2:  [PASS][6] -> [DMESG-FAIL][7] ([i915#5334])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11740/fi-skl-6700k2/igt@i915_selftest@live@gt_heartbeat.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-skl-6700k2/igt@i915_selftest@live@gt_heartbeat.html

  * igt@i915_selftest@live@hangcheck:
- bat-dg1-5:  NOTRUN -> [DMESG-FAIL][8] ([i915#4494] / [i915#4957])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-dg1-5/igt@i915_selftest@l...@hangcheck.html
- bat-dg1-6:  [PASS][9] -> [DMESG-FAIL][10] ([i915#4494] / 
[i915#4957])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11740/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-dg1-6/igt@i915_selftest@l...@hangcheck.html

  * igt@i915_selftest@live@late_gt_pm:
- fi-bsw-nick:[PASS][11] -> [DMESG-FAIL][12] ([i915#3428])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11740/fi-bsw-nick/igt@i915_selftest@live@late_gt_pm.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-bsw-nick/igt@i915_selftest@live@late_gt_pm.html

  * igt@i915_suspend@basic-s2idle-without-i915:
- bat-dg1-5:  NOTRUN -> [INCOMPLETE][13] ([i915#6011])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-dg1-5/igt@i915_susp...@basic-s2idle-without-i915.html

  * igt@i915_suspend@basic-s3-without-i915:
- fi-icl-u2:  NOTRUN -> [SKIP][14] ([i915#5903])
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-icl-u2/igt@i915_susp...@basic-s3-without-i915.html
- bat-adlp-4: NOTRUN -> [SKIP][15] ([i915#5903])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-adlp-4/igt@i915_susp...@basic-s3-without-i915.html

  * igt@kms_chamelium@common-hpd-after-suspend:
- fi-hsw-4770:NOTRUN -> [SKIP][16] ([fdo#109271] / [fdo#111827])
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-hsw-4770/igt@kms_chamel...@common-hpd-after-suspend.html
- fi-pnv-d510:NOTRUN -> [SKIP][17] ([fdo#109271])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-pnv-d510/igt@kms_chamel...@common-hpd-after-suspend.html

  * igt@kms_chamelium@dp-crc-fast:
- bat-adlp-4: NOTRUN -> [SKIP][18] ([fdo#111827]) +8 similar issues
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-adlp-4/igt@kms_chamel...@dp-crc-fast.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-icl-u2:  NOTRUN -> [SKIP][19] ([fdo#111827]) +8 similar issues
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/fi-icl-u2/igt@kms_chamel...@hdmi-hpd-fast.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
- bat-adlp-4: NOTRUN -> [SKIP][20] ([i915#4103]) +1 similar issue
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104907v1/bat-adlp-4/igt@kms_cursor_leg...@basic-busy-flip-before-cursor-legac

[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: ttm for internal

2022-06-08 Thread Patchwork
== Series Details ==

Series: drm/i915: ttm for internal
URL   : https://patchwork.freedesktop.org/series/104909/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_11741 -> Patchwork_104909v1


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_104909v1 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_104909v1, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/index.html

Participating hosts (44 -> 45)
--

  Additional (2): bat-adlm-1 fi-rkl-11600 
  Missing(1): fi-bdw-samus 

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_104909v1:

### CI changes ###

 Possible regressions 

  * boot:
- fi-ivb-3770:[PASS][1] -> [FAIL][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/fi-ivb-3770/boot.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-ivb-3770/boot.html

  

### IGT changes ###

 Possible regressions 

  * igt@i915_selftest@live@coherency:
- fi-bsw-nick:[PASS][3] -> [DMESG-FAIL][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/fi-bsw-nick/igt@i915_selftest@l...@coherency.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-bsw-nick/igt@i915_selftest@l...@coherency.html
- fi-bsw-kefka:   [PASS][5] -> [DMESG-FAIL][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/fi-bsw-kefka/igt@i915_selftest@l...@coherency.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-bsw-kefka/igt@i915_selftest@l...@coherency.html
- fi-bsw-n3050:   [PASS][7] -> [DMESG-FAIL][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/fi-bsw-n3050/igt@i915_selftest@l...@coherency.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-bsw-n3050/igt@i915_selftest@l...@coherency.html

  
 Suppressed 

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@kms_busy@basic@flip:
- {bat-dg2-9}:[PASS][9] -> [DMESG-WARN][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/bat-dg2-9/igt@kms_busy@ba...@flip.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/bat-dg2-9/igt@kms_busy@ba...@flip.html

  
Known issues


  Here are the changes found in Patchwork_104909v1 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@gem_huc_copy@huc-copy:
- fi-rkl-11600:   NOTRUN -> [SKIP][11] ([i915#2190])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@gem_huc_c...@huc-copy.html

  * igt@gem_lmem_swapping@basic:
- fi-rkl-11600:   NOTRUN -> [SKIP][12] ([i915#4613]) +3 similar issues
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@gem_lmem_swapp...@basic.html

  * igt@gem_tiled_pread_basic:
- fi-rkl-11600:   NOTRUN -> [SKIP][13] ([i915#3282])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@gem_tiled_pread_basic.html

  * igt@i915_pm_backlight@basic-brightness:
- fi-rkl-11600:   NOTRUN -> [SKIP][14] ([i915#3012])
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@i915_pm_backli...@basic-brightness.html

  * igt@i915_selftest@live@gtt:
- fi-bdw-5557u:   [PASS][15] -> [INCOMPLETE][16] ([i915#5685])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11741/fi-bdw-5557u/igt@i915_selftest@l...@gtt.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-bdw-5557u/igt@i915_selftest@l...@gtt.html

  * igt@i915_suspend@basic-s3-without-i915:
- fi-rkl-11600:   NOTRUN -> [INCOMPLETE][17] ([i915#5982])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@i915_susp...@basic-s3-without-i915.html

  * igt@kms_chamelium@common-hpd-after-suspend:
- fi-pnv-d510:NOTRUN -> [SKIP][18] ([fdo#109271])
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-pnv-d510/igt@kms_chamel...@common-hpd-after-suspend.html
- fi-blb-e6850:   NOTRUN -> [SKIP][19] ([fdo#109271])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-blb-e6850/igt@kms_chamel...@common-hpd-after-suspend.html

  * igt@kms_chamelium@hdmi-edid-read:
- fi-rkl-11600:   NOTRUN -> [SKIP][20] ([fdo#111827]) +7 similar issues
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_104909v1/fi-rkl-11600/igt@kms_chamel...@hdmi-edid-read.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legac

Re: [Intel-gfx] [PATCH] drm/i915/display: Fix handling of enable_psr parameter

2022-06-08 Thread Hogander, Jouni
On Wed, 2022-06-08 at 13:33 -0700, José Roberto de Souza wrote:
> Commit 3cf050762534 ("drm/i915/bios: Split VBT data into per-panel
> vs.
> global parts") causes PSR to be disabled when enable_psr has the
> default value and there is at least one DP port that does not support
> PSR.
> 
> That was happening because intel_psr_init() is called for every DP
> port, and enable_psr was then globally set to 0 based on the PSR
> support of that one port.
> 
> Here we drop the enable_psr overwrite and use the per-panel VBT PSR
> value when enable_psr is set to its default.

Reviewed-by: Jouni Högander 

> 
> Fixes: 3cf050762534 ("drm/i915/bios: Split VBT data into per-panel
> vs. global parts")
> Cc: Ville Syrjälä 
> Cc: Jani Nikula 
> Cc: Jouni Högander 
> Cc: Mika Kahola 
> Signed-off-by: José Roberto de Souza 
> ---
>  drivers/gpu/drm/i915/display/intel_psr.c | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_psr.c
> b/drivers/gpu/drm/i915/display/intel_psr.c
> index aedb3e0e69ecd..7d61c55184e51 100644
> --- a/drivers/gpu/drm/i915/display/intel_psr.c
> +++ b/drivers/gpu/drm/i915/display/intel_psr.c
> @@ -86,10 +86,13 @@
>  
>  static bool psr_global_enabled(struct intel_dp *intel_dp)
>  {
> + struct intel_connector *connector = intel_dp-
> >attached_connector;
>   struct drm_i915_private *i915 = dp_to_i915(intel_dp);
>  
>   switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
>   case I915_PSR_DEBUG_DEFAULT:
> + if (i915->params.enable_psr == -1)
> + return connector->panel.vbt.psr.enable;
>   return i915->params.enable_psr;
>   case I915_PSR_DEBUG_DISABLE:
>   return false;
> @@ -2394,10 +2397,6 @@ void intel_psr_init(struct intel_dp *intel_dp)
>  
>   intel_dp->psr.source_support = true;
>  
> - if (dev_priv->params.enable_psr == -1)
> - if (!connector->panel.vbt.psr.enable)
> - dev_priv->params.enable_psr = 0;
> -
>   /* Set link_standby x link_off defaults */
>   if (DISPLAY_VER(dev_priv) < 12)
>   /* For new platforms up to TGL let's respect VBT back
> again */