Hi,
On 01/05/2020 00:15, Matt Roper wrote:
We're seeing some CI errors indicating that a workaround did not apply
properly on EHL/JSL. The workaround in question updates a multicast
register; the failures are only seen on specific CI machines, and they
only seem to happen on reset
On 30/04/2020 19:33, Chris Wilson wrote:
While a perf event is open, keep a reference to the module so we don't
remove the driver internals mid-sampling.
Testcase: igt/perf_pmu/module-unload
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: sta...@vger.kernel.org
---
drivers/gpu/drm/i915/
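The usual kernel pattern for keeping the module pinned while a perf event is open is to take a module reference at event init and drop it when the event is destroyed; the sketch below is illustrative only, and the names are not the actual i915_pmu hooks.

#include <linux/module.h>
#include <linux/perf_event.h>

/* Illustrative sketch only; the real hooks live in i915_pmu.c. */
static void example_pmu_event_destroy(struct perf_event *event)
{
        /* Drop the reference taken when the event was initialised. */
        module_put(THIS_MODULE);
}

static int example_pmu_event_init(struct perf_event *event)
{
        /* Pin the module so the driver internals cannot vanish mid-sampling. */
        if (!try_module_get(THIS_MODULE))
                return -ENODEV;

        event->destroy = example_pmu_event_destroy;
        return 0;
}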
== Series Details ==
Series: drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76793/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8403_full -> Patchwork_17535_full
Summary
---
*
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, which would consume precious ring space and
risk a potential stall.
Testcase: igt
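Purely as an illustration of the chaining idea described above (none of these names come from the i915 sources): when the current command buffer fills up, link a fresh buffer onto it and keep appending, instead of closing the batch and paying for a new request per target.

#include <linux/slab.h>
#include <linux/types.h>

#define BATCH_DWORDS 1024

/* Hypothetical fixed-size command batch that can be chained. */
struct batch {
        struct batch *next;     /* hardware would follow a chained jump here */
        unsigned int used;
        u32 cmds[BATCH_DWORDS];
};

/* Reserve @dwords of space, chaining a new batch when the current one is full. */
static u32 *batch_reserve(struct batch **cur, unsigned int dwords)
{
        struct batch *b = *cur;

        if (b->used + dwords > BATCH_DWORDS) {
                struct batch *next = kzalloc(sizeof(*next), GFP_KERNEL);

                if (!next)
                        return NULL;

                b->next = next;
                *cur = b = next;
        }

        b->used += dwords;
        return &b->cmds[b->used - dwords];
}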
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold any locks or other resources, we
may trigger
As we only restore the default context state upon banning a context, we
only need enough of the state to run the ring and nothing more. That is,
we only need our bare protocontext.
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Mika Kuoppala
Cc: Andi Shyti
---
drivers/gpu/drm/i915/gt/intel
gdb uses ptrace() to peek and poke bytes of the target's address space.
The driver must implement a vm_ops->access() handler or else gdb will
be unable to inspect the pointer and will report it as out-of-bounds.
Worse than useless as it causes immediate suspicion of the valid GTT
pointer, distracting t
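A minimal sketch of such a handler, assuming the vma's backing storage is a plain kernel buffer stashed in vm_private_data (the real i915 handler has to go through the GTT mapping instead):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>

static int example_vm_access(struct vm_area_struct *vma, unsigned long addr,
                             void *buf, int len, int write)
{
        void *base = vma->vm_private_data;      /* assumed backing buffer */
        unsigned long offset = addr - vma->vm_start;

        if (offset >= vma->vm_end - vma->vm_start)
                return -EINVAL;

        len = min_t(int, len, vma->vm_end - addr);
        if (write)
                memcpy(base + offset, buf, len);
        else
                memcpy(buf, base + offset, len);

        return len;     /* bytes transferred, which is what ptrace/gdb expect */
}

static const struct vm_operations_struct example_vm_ops = {
        .access = example_vm_access,
};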
== Series Details ==
Series: Rebased Big Joiner patch series for 8K 2p1p (rev2)
URL : https://patchwork.freedesktop.org/series/76791/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8404_full -> Patchwork_17536_full
Summary
-
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, which would consume precious ring space and
risk a potential stall.
Testcase: igt
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold any locks or other resources, we
may trigger
If at first we don't succeed, try try again.
Not all engines may support the MI ops we need to perform asynchronous
relocation patching, and so we end up falling back to a synchronous
operation that has a liability of blocking. However, Tvrtko pointed out
we don't need to use the same engine to per
== Series Details ==
Series: series starting with [1/4] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76812/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8405 -> Patchwork_17537
Sum
On 2020-04-30 16:10:16 [-0600], Jason A. Donenfeld wrote:
> Sometimes it's not okay to use SIMD registers, the conditions for which
> have changed subtly from kernel release to kernel release. Usually the
> pattern is to check for may_use_simd() and then fall back to using
> something slower in the
== Series Details ==
Series: series starting with [1/3] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76813/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405 -> Patchwork_17538
Sum
From: Sebastian Andrzej Siewior
> Sent: 01 May 2020 11:42
> On 2020-04-30 16:10:16 [-0600], Jason A. Donenfeld wrote:
> > Sometimes it's not okay to use SIMD registers, the conditions for which
> > have changed subtly from kernel release to kernel release. Usually the
> > pattern is to check for ma
Those arguments are already set as eb.file and eb.args, so kill off
the extra arguments. This will allow us to move eb_pin_engine() to
after we have reserved all BOs.
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 17 +++--
1 file changed, 7 inserti
We inadvertently create a dependency on mmap_sem with a whole chain.
This breaks any user who wants to take a lock and call rcu_barrier(),
while also taking that lock inside mmap_sem:
<4> [604.892532] ==
<4> [604.892534] WARNING: possible circul
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gt/intel_renderstate.c | 2 +-
drivers/gpu/drm/i915/i915_vma.c | 9 -
drivers/gpu/drm/i915/i915_vma.h | 1 +
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_rend
This function does not use intel_context_create_request, so it has
to use the same locking order as normal code. This is required to
shut up lockdep in selftests.
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 15 ---
1 file changed, 12 insertions(+), 3
Execbuffer submission will perform its own WW locking, and we
cannot rely on the implicit lock there.
This also makes it clear that the GVT code will get a lockdep splat when
multiple batchbuffer shadows need to be performed in the same instance,
so fix that up.
Signed-off-by: Maarten Lankhorst
---
We want to lock all gem objects, including the engine context objects,
so rework the throttling to ensure that we can do this. Now we only throttle
once, but can take eb_pin_engine while acquiring objects. This means we
will have to drop the lock to wait. If we don't have to throttle we can
still take
i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory
eviction. We don't use it yet, but let's start adding the definition
first.
To use it, we have to pass a non-NULL ww to gem_object_lock, and don't
unlock directly. It is done in i915_gem_ww_ctx_fini.
Changes since v1:
- Change ww_
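Roughly, the usage pattern described above becomes a retry loop like the sketch below; only gem_object_lock and i915_gem_ww_ctx_fini are named in the message, so the init/backoff helpers and the work function are assumptions.

static int example_use_ww(struct drm_i915_gem_object *obj)
{
        struct i915_gem_ww_ctx ww;
        int err;

        i915_gem_ww_ctx_init(&ww, true);        /* assumed helper; true = interruptible */
retry:
        err = i915_gem_object_lock(obj, &ww);   /* pass a non-NULL ww, don't unlock directly */
        if (!err)
                err = do_pin_or_evict(obj);     /* hypothetical work needing the lock */
        if (err == -EDEADLK) {
                /* Drop every lock taken under this ww context, then start over. */
                err = i915_gem_ww_ctx_backoff(&ww);     /* assumed helper */
                if (!err)
                        goto retry;
        }
        i915_gem_ww_ctx_fini(&ww);              /* unlocking happens here */
        return err;
}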
As a preparation step for full object locking and wait/wound handling
during pin and object mapping, ensure that we always pass the ww context
in i915_gem_execbuffer.c to i915_vma_pin, and use lockdep to ensure this
happens.
This also requires changing the order of eb_parse slightly, to ensure
we pass
We have the ordering of timeline->mutex vs resv_lock wrong, so
convert i915_pin_vma and intel_context_pin as well to
future-proof this.
We may need to do future changes to do this more transaction-like,
and only get down to a single i915_gem_ww_ctx, but for now this
should work.
Signed-off-by: M
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gem/i915_gem_domain.c | 65 --
drivers/gpu/drm/i915/gem/i915_gem_object.h | 1 +
2 files changed, 49 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
b/drivers/gpu/drm/i915/gem/i
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gem/i915_gem_mman.c | 51 +++-
1 file changed, 33 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index b39c24dae64e..e35e8d0b6938 100644
We want to start using ww locking in intel_context_pin; for this
we need to lock multiple objects, and the single i915_gem_object_lock
is not enough.
Convert to using ww-waiting, and make sure we always pin intel_context_state,
even if we don't have a renderstate object.
Signed-off-by: Maarten La
This reverts commit 0f1dd02295f35dcdcbaafcbcbbec0753884ab974.
This conflicts with the ww mutex handling, which needs to drop
the references after gpu submission anyway, because otherwise we
may risk unlocking a BO after first freeing it.
Signed-off-by: Maarten Lankhorst
---
.../gpu/drm/i915/gem/
Now that we have changed execbuf submission slightly to allow us to do all
pinning in one place, we can simply add ww versions on top of
struct_mutex. All we have to do is a separate path for -EDEADLK
handling, which needs to unpin all gem bo's before dropping the lock,
then starting over.
This fin
We want to introduce backoff logic, but we need to lock the
pool object as well for command parsing. Because of this, we
will need backoff logic for the engine pool obj, move the batch
validation up slightly to eb_lookup_vmas, and the actual command
parsing in a separate function which can get call
This is the last part outside of selftests that still doesn't use the
correct lock ordering of timeline->mutex vs resv_lock.
With gem fixed, there are a few places that still get locking wrong:
- gvt/scheduler.c
- i915_perf.c
- Most if not all selftests.
Changes since v1:
- Add intel_engine_pm_get/
Instead of using intel_context_create_request(), use intel_context_pin()
and i915_request_create() directly.
Now all those calls are gone outside of selftests. :)
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gt/intel_workarounds.c | 43 ++---
1 file changed, 29 insert
Instead of doing everything inside of pin_mutex, we move all pinning
outside. Because i915_active has its own reference counting and
pinning has the same issues vs mutexes, we make sure
everything is pinned first, so the pinning in i915_active only needs
to bump refcounts. This allows us
This is required if we want to pass a ww context in intel_context_pin
and gen6_ppgtt_pin().
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 55 ++-
.../drm/i915/gem/selftests/i915_gem_context.c | 22 +++-
2 files changed, 48 insertions(+),
Some i915 selftests still use i915_vma_lock() as inner lock, and
intel_context_create_request() takes intel_timeline->mutex as the outer lock.
Fortunately for selftests this is not an issue; they should be fixed,
but we can move ahead and clean up lockdep now.
Signed-off-by: Maarten Lankhorst
---
drivers/g
This reverts commit 7dc8f1143778 ("drm/i915/gem: Drop relocation
slowpath"). We need the slowpath relocation for taking ww-mutex
inside the page fault handler, and we will take this mutex when
pinning all objects.
Cc: Chris Wilson
Cc: Matthew Auld
Signed-off-by: Maarten Lankhorst
---
.../gpu/d
The lock here should be interruptible, so we can back off if needed.
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
b/drivers/gpu/drm/i91
Make sure vma_lock is not used as inner lock when kernel context is used,
and add ww handling where appropriate.
Signed-off-by: Maarten Lankhorst
---
.../i915/gem/selftests/i915_gem_coherency.c | 26 ++--
.../drm/i915/gem/selftests/i915_gem_mman.c| 41 ++-
drivers/g
We want to get rid of intel_context_pin(); convert
intel_context_create_request() first. :)
Signed-off-by: Maarten Lankhorst
---
drivers/gpu/drm/i915/gt/intel_context.c | 20 +++-
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_context
In order to allow userspace to rely on timeslicing to reorder their
batches, we must support preemption of those user batches. Declare
timeslicing as an explicit property that is a combination of having
both kernel support and HW support.
Suggested-by: Tvrtko Ursulin
Fixes: 8ee36e048c98 ("drm/i915
== Series Details ==
Series: series starting with [01/24] perf/core: Only copy-to-user after
completely unlocking all locks, v3.
URL : https://patchwork.freedesktop.org/series/76816/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
c7e83b1d5e8c perf/core: Only copy-to-user after
On 01/05/2020 11:18, Chris Wilson wrote:
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold
Quoting Tvrtko Ursulin (2020-05-01 13:33:14)
>
> On 01/05/2020 11:18, Chris Wilson wrote:
> > +
> > + err = 0;
> > + if (rq->engine->emit_init_breadcrumb)
> > + err = rq->engine->emit_init_breadcrumb(rq);
> > + if (!err)
> > + err = rq->engine->emit_bb_start(rq,
Quoting Chris Wilson (2020-05-01 13:38:03)
> Quoting Tvrtko Ursulin (2020-05-01 13:33:14)
> >
> > On 01/05/2020 11:18, Chris Wilson wrote:
> > > +
> > > + err = 0;
> > > + if (rq->engine->emit_init_breadcrumb)
> > > + err = rq->engine->emit_init_breadcrumb(rq);
> > > + if (
On 01/05/2020 11:18, Chris Wilson wrote:
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, consuming precious rin
On 01/05/2020 11:19, Chris Wilson wrote:
If at first we don't succeed, try try again.
Not all engines may support the MI ops we need to perform asynchronous
relocation patching, and so we end up falling back to a synchronous
operation that has a liability of blocking. However, Tvrtko pointed ou
Quoting Tvrtko Ursulin (2020-05-01 13:47:36)
>
> On 01/05/2020 11:19, Chris Wilson wrote:
> If you are not worried about the context create dance on SNB, and it is
> limited to VCS, then neither am I.
In the short term, since it's limited to vcs on SNB, that means it is
just a plain kmalloc (a
== Series Details ==
Series: series starting with [01/24] perf/core: Only copy-to-user after
completely unlocking all locks, v3.
URL : https://patchwork.freedesktop.org/series/76816/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8405 -> Patchwork_17539
===
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, which would consume precious ring space and
risk a potential stall.
v2: Propagate
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold any locks or other resources, we
may trigger
If at first we don't succeed, try try again.
Not all engines may support the MI ops we need to perform asynchronous
relocation patching, and so we end up falling back to a synchronous
operation that has a liability of blocking. However, Tvrtko pointed out
we don't need to use the same engine to pe
== Series Details ==
Series: drm/i915/gt: Make timeslicing an explicit engine property
URL : https://patchwork.freedesktop.org/series/76817/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405 -> Patchwork_17540
Summary
On 01/05/2020 14:02, Chris Wilson wrote:
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold
On 01/05/2020 14:02, Chris Wilson wrote:
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, consuming precious rin
On 01/05/2020 13:22, Chris Wilson wrote:
In order to allow userspace to rely on timeslicing to reorder their
batches, we must support preemption of those user batches. Declare
timeslicing as an explicit property that is a combination of having
both kernel support and HW support.
Suggested-by: T
== Series Details ==
Series: series starting with [1/3] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76813/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405_full -> Patchwork_17538_full
===
== Series Details ==
Series: series starting with [1/3] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76818/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405 -> Patchwork_17541
Sum
If at first we don't succeed, try try again.
Not all engines may support the MI ops we need to perform asynchronous
relocation patching, and so we end up falling back to a synchronous
operation that has a liability of blocking. However, Tvrtko pointed out
we don't need to use the same engine to pe
As we can now keep chaining together a relocation batch to process any
number of relocations, we can keep building that relocation batch for
all of the target vma. This avoids emitting a new request into the
ring for each target, which would consume precious ring space and
risk a potential stall.
v2: Propagate
The ring is a precious resource: we anticipate using only a few hundred
bytes for a request, and only try to reserve that before we start. If we
go beyond our guess in building the request, then instead of waiting at
the start of execbuf before we hold any locks or other resources, we
may trigger
On 01/05/2020 09:42, Chris Wilson wrote:
gdb uses ptrace() to peek and poke bytes of the target's address space.
The driver must implement a vm_ops->access() handler or else gdb will
be unable to inspect the pointer and will report it as out-of-bounds.
Worse than useless as it causes immediate suspic
gdb uses ptrace() to peek and poke bytes of the target's address space.
The driver must implement a vm_ops->access() handler or else gdb will
be unable to inspect the pointer and will report it as out-of-bounds.
Worse than useless as it causes immediate suspicion of the valid GTT
pointer, distracting t
On Thu, 30 Apr 2020 at 20:42, Chris Wilson wrote:
>
> gdb uses ptrace() to peek and poke bytes of the target's address space.
> The kernel must implement a vm_ops->access() handler or else gdb will
> be unable to inspect the pointer and report it as out-of-bounds. Worse
> than useless as it cause
Quoting Matthew Auld (2020-05-01 15:58:29)
> On Thu, 30 Apr 2020 at 20:42, Chris Wilson wrote:
> > + ptrace(PTRACE_ATTACH, pid, NULL, NULL);
> > + for (int i = 0; i < OBJECT_SIZE / sizeof(long); i++) {
> > + long ret;
> > +
> > + ret = ptrace(PTRACE_PEEKDATA
On Thu, 30 Apr 2020 at 20:51, Chris Wilson wrote:
>
> gdb uses ptrace() to peek and poke bytes of the target's address space.
> The kernel must implement a vm_ops->access() handler or else gdb will
> be unable to inspect the pointer and report it as out-of-bounds. Worse
> than useless as it cause
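For context, a minimal userspace peek loop along the lines of what the test quoted above exercises (the names and sizes here are illustrative, not the actual IGT code):

#include <errno.h>
#include <stdio.h>
#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Read 'size' bytes starting at 'addr' in process 'pid' via PTRACE_PEEKDATA. */
static int peek_range(pid_t pid, unsigned long addr, size_t size)
{
        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL))
                return -1;
        waitpid(pid, NULL, 0);          /* wait for the tracee to stop */

        for (size_t i = 0; i < size / sizeof(long); i++) {
                long word;

                errno = 0;
                word = ptrace(PTRACE_PEEKDATA, pid,
                              (void *)(addr + i * sizeof(long)), NULL);
                if (word == -1 && errno) {
                        /* Without vm_ops->access() this fails (-EIO) on GTT mmaps. */
                        perror("PTRACE_PEEKDATA");
                        break;
                }
                printf("%#lx: %#lx\n", addr + i * sizeof(long),
                       (unsigned long)word);
        }

        return ptrace(PTRACE_DETACH, pid, NULL, NULL);
}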
Thanks
Mikhail Voldman
-Original Message-
From: Ramalingam C
Sent: Thursday, April 30, 2020 12:0
== Series Details ==
Series: drm/i915/gt: Make timeslicing an explicit engine property
URL : https://patchwork.freedesktop.org/series/76817/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405_full -> Patchwork_17540_full
Su
== Series Details ==
Series: series starting with [1/3] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76818/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8405_full -> Patchwork_17541_full
===
On 30.04.2020 20:30, Daniel Vetter wrote:
> On Thu, Apr 30, 2020 at 5:38 PM Sean Paul wrote:
>>
>> On Wed, Apr 29, 2020 at 4:57 AM Jani Nikula
>> wrote:
>>>
>>> On Tue, 28 Apr 2020, Michal Orzel wrote:
As suggested by the TODO list for the kernel DRM subsystem, replace
the deprecat
== Series Details ==
Series: series starting with [CI,1/3] drm/i915/gem: Use chained reloc batches
URL : https://patchwork.freedesktop.org/series/76821/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8406 -> Patchwork_17542
On Fri, May 01, 2020 at 09:01:42AM +0100, Tvrtko Ursulin wrote:
>
> Hi,
>
> On 01/05/2020 00:15, Matt Roper wrote:
> > We're seeing some CI errors indicating that a workaround did not apply
> > properly on EHL/JSL. The workaround in question is updating a multicast
> > register, the failures are
== Series Details ==
Series: drm/i915: Implement vm_ops->access for gdb access into mmaps (rev4)
URL : https://patchwork.freedesktop.org/series/76783/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8406 -> Patchwork_17543
Su
Sometimes it's not okay to use SIMD registers, the conditions for which
have changed subtly from kernel release to kernel release. Usually the
pattern is to check for may_use_simd() and then fall back to using
something slower in the unlikely case SIMD registers aren't available.
So, this patch fixe
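The pattern being referred to looks roughly like this (the SIMD fast path is a hypothetical stand-in):

#include <linux/string.h>
#include <asm/fpu/api.h>
#include <asm/simd.h>

/* Guard any FPU/SSE usage with may_use_simd() and keep a scalar fallback. */
static void copy_with_optional_simd(void *dst, const void *src, size_t len)
{
        if (may_use_simd()) {
                kernel_fpu_begin();
                simd_fast_copy(dst, src, len);  /* hypothetical SIMD fast path */
                kernel_fpu_end();
        } else {
                memcpy(dst, src, len);          /* slower but always-safe fallback */
        }
}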
RKL re-uses the same stolen memory registers as TGL and ICL.
Bspec: 52055
Bspec: 49589
Bspec: 49636
Cc: Lucas De Marchi
Signed-off-by: Matt Roper
---
arch/x86/kernel/early-quirks.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.
Rocket Lake (RKL) is another gen12 platform, so the driver support is
mostly a straightforward evolution of our existing Tiger Lake support.
One area of this patch series that's a bit non-intuitive and warrants
some extra explanation is the output handling. All four of RKL's output
ports use comb
Cc: Anusha Srivatsa
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_csr.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/display/intel_csr.c
b/drivers/gpu/drm/i915/display/intel_csr.c
index 3112572cfb7d..319932b03e88 100644
--
The RKL platform has different memory characteristics from past
platforms. Update the values used by our memory bandwidth calculations
accordingly.
Bspec: 53998
Cc: James Ausmus
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_bw.c | 10 +-
1 file changed, 9 insertions(
Note that the 192000 clock frequencies can be achieved with different
pairs of ratio+divider, which is something we haven't encountered
before. If any of those ratios were common with other legal cdclk
values, then it would mean we could avoid triggering full modesets if we
just needed to change t
Since the number of platforms with this restriction is growing, let's
separate out the platform logic into a has_phy_misc() function.
Bspec: 50107
Signed-off-by: Matt Roper
---
.../gpu/drm/i915/display/intel_combo_phy.c| 30 +++
1 file changed, 17 insertions(+), 13 deletions
When Rocket Lake is paired with a TGP PCH, the last two outputs utilize
the TC1 and TC2 hpd pins, even though these are combo outputs.
Bspec: 49181
Cc: Lucas De Marchi
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_dp.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletion
RKL only has five universal planes, plus a cursor. Since the
bottom-most universal plane is considered the primary plane, set the
number of sprites available on this platform to 4.
In general, the plane capabilities of the remaining planes stay the same
as TGL. However the NV12 Y-plane support m
RKL uses a slightly different bit layout for the DPCLKA_CFGCR0 register.
Bspec: 50287
Cc: Aditya Swarup
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_ddi.c | 18 +++---
drivers/gpu/drm/i915/display/intel_display.c | 15 ---
drivers/gpu/drm/i915/i91
Rocket Lake can pair with either TGP or CMP.
Cc: Lucas De Marchi
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/intel_pch.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_pch.c b/drivers/gpu/drm/i915/intel_pch.c
index 20ab9a5023b5..102
Certain combo PHYs act as a compensation master to other PHYs and need
to be initialized with a special irefgen bit in the PORT_COMP_DW8
register. Previously PHY A was the only compensation master (for PHYs
B & C), but RKL adds a fourth PHY which is slaved to PHY C instead.
Bspec: 49291
Cc: Lucas
If HTI (also sometimes called HDPORT) is enabled at startup, it may be
using some of the PHYs and DPLLs, making them unavailable for general
usage. Let's read out the HDPORT_STATE register and avoid making use of
resources that HTI is already using.
Bspec: 49189
Bspec: 53707
Cc: Lucas De Marchi
S
From: Lucas De Marchi
RKL uses the DDI A, DDI B, DDI USBC1, DDI USBC2 from the DE point of
view, so all DDI/pipe/transcoder register use these indexes to refer to
them. Combo phy and IO functions follow another namespace that we keep
as "enum phy". The VBT in theory would use the DE point of view
Introduce the basic platform definition, macros, and PCI IDs.
Bspec: 44501
Cc: Lucas De Marchi
Cc: Caz Yokoyama
Cc: Aditya Swarup
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/i915_drv.h | 8
drivers/gpu/drm/i915/i915_pci.c | 10 ++
drivers/gpu/drm/i91
RKL uses DDI's A, B, TC1, and TC2 which need to map to combo PHY's A-D.
Bspec: 49181
Cc: Imre Deak
Cc: Aditya Swarup
Cc: Lucas De Marchi
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_display.c | 34
drivers/gpu/drm/i915/i915_reg.h | 4 ++-
From: José Roberto de Souza
RKL doesn't have PSR2 HW tracking; it was replaced by software/manual
tracking. The driver is required to track the areas that need updating
and program hardware to send selective updates.
So until the software tracking is implemented, PSR2 needs to be disabled
for pl
RKL uses the same BW_BUDDY programming table as TGL, but programs the
values into a single BUDDY0 set of registers rather than the
BUDDY1/BUDDY2 sets used by TGL.
Bspec: 49218
Cc: Aditya Swarup
Signed-off-by: Matt Roper
---
.../drm/i915/display/intel_display_power.c| 44 +++-
RKL and TGL share some general gen12 workarounds, but each platform also
has its own platform-specific workarounds.
Cc: Matt Atwood
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_sprite.c | 5 +-
drivers/gpu/drm/i915/gt/intel_workarounds.c | 88 +
2 files
RKL power wells are similar to TGL power wells, but have some important
differences:
* PG1 now has pipe A's VDSC (rather than sticking it in PG2)
* PG2 no longer exists
* DDI-C (aka TC-1) moves from PG1 -> PG3
* PG5 no longer exists due to the lack of a fourth pipe
Also note that what we refe
Rocket Lake has a third DPLL (called 'DPLL4') that must be used to
enable a third display. Unlike EHL's variant of DPLL4, the RKL variant
behaves the same as DPLL0/1. And despite its name, the DPLL4 registers
are offset as if it were DPLL2, so no extra offset handling is needed
either.
Bspec: 49
There are a couple places in our driver that loop over transcoders A..D
for gen11+; since RKL only has three pipes/transcoders, this can lead to
unclaimed register reads/writes. We should add checks for transcoder
existence where appropriate.
Cc: Aditya Swarup
Signed-off-by: Matt Roper
---
dri
From: Aditya Swarup
RKL doesn't have DSI outputs, so we shouldn't try to read out the DSI
transcoder registers.
Signed-off-by: Aditya Swarup
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/display/intel_display.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/g
RKL uses the same GuC and HuC as TGL and should load the same firmwares.
Bspec: 50668
Cc: Anusha Srivatsa
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
b/drivers/gpu/drm/i91
The pin mapping for the final two outputs varies according to which PCH
is present on the platform: with TGP the pins are remapped into the TC
range, whereas with CMP they stay in the traditional combo output range.
Bspec: 49181
Cc: Aditya Swarup
Signed-off-by: Matt Roper
---
drivers/gpu/drm/i
> -Original Message-
> From: Roper, Matthew D
> Sent: Friday, May 1, 2020 10:37 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Roper, Matthew D ; Srivatsa, Anusha
>
> Subject: [PATCH 03/23] drm/i915/rkl: Re-use TGL GuC/HuC firmware
>
> RKL uses the same GuC and HuC as TGL and should l
== Series Details ==
Series: drm/i915: check to see if SIMD registers are available before using SIMD
URL : https://patchwork.freedesktop.org/series/76825/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
962131944f1f drm/i915: check to see if SIMD registers are available before
> -Original Message-
> From: Roper, Matthew D
> Sent: Friday, May 1, 2020 10:37 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Roper, Matthew D ; Srivatsa, Anusha
>
> Subject: [PATCH 04/23] drm/i915/rkl: Load DMC firmware for Rocket Lake
>
> Cc: Anusha Srivatsa
> Signed-off-by: Matt