Hi Maarten, Maxime, and Thomas -
Here's the DP-HDMI2.1 PCON support topic pull consisting of the series
[1]. The series is split roughly 50-50 between drm helpers and i915, so
a topic branch seemed to be the right way to go.
I'll also pull this to drm-intel-next once you've merged to
drm-misc-n
> On 2020.12.18 17:05:31 +0800, Xiong Zhang wrote:
> > From: Zhenyu Wang
> >
> > Some VMMs like Hyper-V and crosvm don't supply any ISA bridge to their
> > guest; when IGD passthrough is used on these VMMs, the guest i915
> > display may not work, as guest i915 detects the PCH_NONE pch type.
> >
> >
On Fri, 18 Dec 2020, Xiong Zhang wrote:
> From: Zhenyu Wang
>
> Some VMMs like Hyper-V and crosvm don't supply any ISA bridge to their guest;
> when IGD passthrough is used on these VMMs, the guest i915 display may
> not work, as guest i915 detects the PCH_NONE pch type.
>
> When i915 runs as guest,
On Wed, 23 Dec 2020, "Sharma, Swati2" wrote:
> On 23-Dec-20 12:24 PM, Shankar, Uma wrote:
>>
>>
>>> -Original Message-
>>> From: Nautiyal, Ankit K
>>> Sent: Wednesday, December 23, 2020 11:27 AM
>>> To: Jani Nikula ; Sharma, Swati2
>>> ; Shankar, Uma
>>> Cc: intel-gfx@lists.freedesktop
On Mon, 14 Dec 2020 at 10:10, Chris Wilson wrote:
>
> The caller determines if the failure is an error or not, so avoid
> warning when we will try again and succeed. For example,
>
> <7> [111.319321] [drm:intel_guc_fw_upload [i915]] GuC status 0x20
> <3> [111.319340] i915 :00:02.0: [drm] *ERRO
drivers/gpu/drm/i915/display/intel_dp.c:6922 intel_dp_update_420() warn: should
this be a bitwise op?
drivers/gpu/drm/i915/display/intel_dp.c:6922 intel_dp_update_420() warn: should
this be a bitwise op?
drivers/gpu/drm/i915/display/intel_dp.c:6923 intel_dp_update_420() warn: should
this be a bi
On Wed, 23 Dec 2020, Chris Wilson wrote:
> drivers/gpu/drm/i915/display/intel_dp.c:6922 intel_dp_update_420() warn:
> should this be a bitwise op?
> drivers/gpu/drm/i915/display/intel_dp.c:6922 intel_dp_update_420() warn:
> should this be a bitwise op?
> drivers/gpu/drm/i915/display/intel_dp.c:6
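For reference, the class of bug smatch is flagging here, sketched with
hypothetical flag names rather than the actual intel_dp ones:

#include <linux/bits.h>
#include <linux/types.h>

#define FLAG_YCBCR_420 BIT(0) /* hypothetical flags, for illustration only */
#define FLAG_YCBCR_444 BIT(1)

static u32 conversion_flags(void)
{
        u32 wrong = FLAG_YCBCR_420 || FLAG_YCBCR_444; /* logical OR: evaluates to 1 */
        u32 right = FLAG_YCBCR_420 | FLAG_YCBCR_444;  /* bitwise OR: both bits set */

        (void)wrong;
        return right;
}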
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.
v2: Keep a single virtua
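A minimal sketch of the pattern, with names loosely modelled on (but not
copied from) the i915 structures:

#include <linux/kernel.h>
#include <linux/rbtree.h>

struct virtual_engine {
        struct ve_node { struct rb_node rb; } nodes[8]; /* one per sibling */
        int prio;
};

/* The rb_node sits in a per-sibling array, so the container_of()
 * conversion uses a variable offset; do it once, store the local,
 * and reuse it instead of re-converting at each use. */
static int first_virtual_prio(struct rb_node *rb, unsigned int id)
{
        struct virtual_engine *ve =
                container_of(rb, struct virtual_engine, nodes[id].rb);

        return ve->prio;
}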
When we know that we are inside the timeline mutex, or inside the
submission flow (under active.lock or the holder's rcu lock), we know
that the rq->hwsp is stable and we can use the simpler direct version.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 2 +-
If any engine asks for the tasklet to be kicked from the CS interrupt,
do so. Currently, this is used by the execlists scheduler backends to
feed in the next request to the HW, and similarly could be used by a
ring scheduler, as will be seen in the next patch.
Signed-off-by: Chris Wilson
Reviewed
In anticipation of wanting to be able to call pi from underneath an
engine's active.lock, rework the priority inheritance to primarily work
along an engine's priority queue, delegating any other engine that the
chain may traverse to a worker. This reduces the global spinlock from
governing the mult
To support legacy ring buffer scheduling, we want a virtual ringbuffer
for each client. These rings are purely for holding the requests as they
are being constructed on the CPU and never accessed by the GPU, so they
should not be bound into the GGTT, and we can use plain old WB mapped
pages.
As th
When cloning the engines from the source context, we need to ensure that
the engines are not freed as we copy them, and that the flags we clone
from the source correspond with the engines we copy across. To do this
we need only take a reference to the src->engines, rather than hold the
src->engine_
Allow multiple requests to be queued onto a virtual engine, whereas
before we only allowed a single request to be queued at a time. The
advantage of keeping just one request in the queue was to ensure that we
always decided late which engine to use. However, with the introduction
of the virtual dea
Replace the priolist rbtree with a skiplist. The crucial difference is
that walking and removing the first element of a skiplist is O(1), but
O(lgN) for an rbtree, as we need to rebalance on remove. This is a
hindrance for submission latency as it occurs between picking a request
for the priolist a
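A generic sketch (not the i915 structure) of why removing the first
element of a skiplist is O(1):

#include <stddef.h>

#define SL_MAX_LEVELS 8

struct sl_node {
        struct sl_node *next[SL_MAX_LEVELS];
};

struct skiplist {
        struct sl_node head;
        int levels;
};

/* Popping the head never rebalances anything: unlink the first node
 * from each level on which it appears. An rb_erase() of the leftmost
 * node may instead rotate O(lg N) nodes to restore the red-black
 * invariants. */
static struct sl_node *skiplist_pop_first(struct skiplist *sl)
{
        struct sl_node *first = sl->head.next[0];
        int lvl;

        if (!first)
                return NULL;

        for (lvl = 0; lvl < sl->levels; lvl++) {
                if (sl->head.next[lvl] == first)
                        sl->head.next[lvl] = first->next[lvl];
        }

        return first;
}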
Originally, we used the signal->lock as a means of following the
previous link in its timeline and peeking at the previous fence.
However, we have replaced the explicit serialisation with a series of
very careful probes that anticipate the links being deleted and the
fences recycled before we are a
Make the ability to suspend and resume a request and its dependents
generic.
Signed-off-by: Chris Wilson
---
.../drm/i915/gt/intel_execlists_submission.c | 148 +-
drivers/gpu/drm/i915/i915_scheduler.c | 120 ++
drivers/gpu/drm/i915/i915_scheduler.h |
When we are not using semaphores with a context/engine, we can simply
reuse the same seqno location across wraps, but we still require each
timeline to have its own address. For LRC submission, each context is
prefixed by a per-process HWSP, which provides us with a unique location
for each context
Exercise rescheduling priority inheritance around a sequence of requests
that wrap around all the engines.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/selftests/i915_scheduler.c | 219 ++
1 file changed, 219 insertions(+)
diff --git a/drivers/gpu/drm/i915/selftests/i915_s
Treat the dependency between bonded requests as weak and leave the
remainder of the pair on the GPU if one hangs.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists
Build a bare-bones scheduler to sit on top of the global legacy ringbuffer
submission. This virtual execlists scheme should be applicable to all
older platforms.
A key problem we have with the legacy ring buffer submission is that it
only allows for FIFO queuing. All clients share the global request
The core of the scheduling algorithm is that we compute the topological
order of the fence DAG. Knowing that we have a DAG, we should be able to
use a DFS to compute the topological sort in linear time. However,
during the conversion of the recursive algorithm into an iterative one,
the memoizatio
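The shape of the intended linear-time algorithm, as a generic sketch (not
the i915 code): an iterative DFS where each node remembers how far its
edge list has been walked, so no edge is visited twice.

#include <stdbool.h>

struct node {
        struct node **signalers; /* the nodes this one depends on */
        int nr_signalers;
        int next_child;          /* per-node cursor: the memoization */
        bool visited;
};

/* Emit nodes in topological order in O(V + E): a node is emitted only
 * once all of its signalers have been emitted. */
static int toposort(struct node *root, struct node **out)
{
        struct node *stack[64]; /* bounded for the sketch */
        int top = 0, n = 0;

        root->visited = true;
        stack[top++] = root;
        while (top) {
                struct node *node = stack[top - 1];

                if (node->next_child < node->nr_signalers) {
                        struct node *child =
                                node->signalers[node->next_child++];

                        if (!child->visited) {
                                child->visited = true;
                                stack[top++] = child;
                        }
                } else {
                        out[n++] = node; /* all dependencies emitted */
                        top--;
                }
        }

        return n;
}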
Rather than having special case code for opportunistically calling
process_csb() and performing a direct submit while holding the engine
spinlock for submitting the request, simply call the tasklet directly.
This allows us to retain the direct submission path, including the CS
draining to allow fas
As context-in/out is now always serialised, we do not have to worry
about concurrent enabling/disable of the busy-stats and can reduce the
atomic_t active to a plain unsigned int, and the seqlock to a seqcount.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c| 8 ++-
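For reference, the seqcount pattern the stats presumably move to (a
generic sketch): with writers already serialised externally, a full
seqlock's internal spinlock is redundant.

#include <linux/ktime.h>
#include <linux/seqlock.h>

struct engine_stats {
        unsigned int active; /* was atomic_t; now externally serialised */
        seqcount_t lock;
        ktime_t total;
};

/* Writer side: already serialised (by the tasklet), so only the
 * sequence counter is needed to keep readers consistent. */
static void stats_update(struct engine_stats *stats, ktime_t dt)
{
        write_seqcount_begin(&stats->lock);
        stats->total = ktime_add(stats->total, dt);
        write_seqcount_end(&stats->lock);
}

/* Reader side: retry if a writer was active in between. */
static ktime_t stats_read(struct engine_stats *stats)
{
        unsigned int seq;
        ktime_t total;

        do {
                seq = read_seqcount_begin(&stats->lock);
                total = stats->total;
        } while (read_seqcount_retry(&stats->lock, seq));

        return total;
}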
Avoid the full blown memory barrier of test_and_set_bit() by noting the
completed request and removing it from the lists.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_request.c | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/i915/
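One common way to shave that barrier, sketched here for illustration (the
patch itself goes further and tracks the completed request so no atomic is
needed at all):

#include <linux/bitops.h>

/* A plain, barrier-free test_bit() first means the expensive locked
 * read-modify-write only runs when the bit is actually clear. */
static bool mark_once(unsigned long *flags, int bit)
{
        if (test_bit(bit, flags))
                return false; /* already marked; skip the barrier */

        return !test_and_set_bit(bit, flags);
}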
Since we are not using any internal priority levels, and in the next few
patches will introduce a new index for which the optimisation is not so
clear cut, discard the small table within the priolist.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/gt/intel_engine_heartbeat.c | 2 +-
.../drm/i
Lift the ability to defer a request until later from execlists into the
common layer.
Signed-off-by: Chris Wilson
---
.../drm/i915/gt/intel_execlists_submission.c | 55 ++--
drivers/gpu/drm/i915/i915_scheduler.c | 66 ---
drivers/gpu/drm/i915/i915_scheduler.h
Since schedule-in and schedule-out are now both always under the tasklet
bitlock, we can reduce the individual atomic operations to simple
instructions and worry less.
This notably eliminates the race observed with intel_context_inflight in
__engine_unpark().
Closes: https://gitlab.freedesktop.or
The first "scheduler" was a topographical sorting of requests into
priority order. The execution order was deterministic, the earliest
submitted, highest priority request would be executed first. Priority
inheritance ensured that inversions were kept at bay, and allowed us to
dynamically boost prio
If we allow for per-client timelines, even with legacy ring submission,
we open the door to a world full of possibilities [scheduling and
semaphores].
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/gen6_engine_cs.c | 89 +---
drivers/gpu/drm/i915/gt/gen8_engine_cs.c |
The issue with stale virtual breadcrumbs remains. Now we have the problem
that if the irq-signaler is still referencing the stale breadcrumb as we
transfer it to a new sibling, the list becomes spaghetti. This is a very
small window, but that doesn't stop it from being hit infrequently. To
prevent the li
We assume that both timestamps are driven off the same clock [reported
to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is
so by reading the timestamp registers around a busywait (on an otherwise
idle engine so there should be no preemptions).
v2: Icelake (not ehl, nor tgl) see
Extract the scheduler lists into a related structure, stop sprawling
over struct intel_engine_cs.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 26 +-
drivers/gpu/drm/i915/gt/intel_engine_types.h | 8 +
.../drm/i915/gt/intel_execlists_submission
Switch over from FIFO global submission to the priority-sorted
topological scheduler. At the cost of more busy work on the CPU to
keep the GPU supplied with the next packet of requests, this allows us
to reorder requests around submission stalls.
This also enables the timer based RPS, with the e
Move the scheduling tasklists out of the execlists backend into the
per-engine scheduling bookkeeping.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine.h| 14 -
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 11 ++--
drivers/gpu/drm/i915/gt/intel_engine_types.h
A quick test to verify that the backend accepts each type of timeline
and can use them to track and control request emission.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/selftest_timeline.c | 105
1 file changed, 105 insertions(+)
diff --git a/drivers/gpu/drm/i9
The current implementation of walking the children of a deferred
request lacks the backtracking required to reduce the dfs to linear.
Having pulled it from execlists into the common layer, we can reuse the
dfs code for priority inheritance.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i
As we do not have any internal priority levels, the priority can be set
directly from the user values.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/display/intel_display.c | 4 +-
drivers/gpu/drm/i915/gem/i915_gem_context.c | 6 +--
.../i915/gem/selftests/i915_gem_object_blt.c | 4
Now that we are careful to always force-restore contexts upon rewinding
(where necessary), we can restore our optimisation to skip over
completed active execlists when dequeuing.
References: 35f3fd8182ba ("drm/i915/execlists: Workaround switching back to a
completed context")
References: 8ab3a381
As a topological sort, we expect it to run in linear graph time,
O(V+E). In removing the recursion, it is no longer a DFS but rather a
BFS, and performs as O(VE). Let's demonstrate how bad this is with a few
examples, and build a few test cases to verify a potential fix.
Signed-off-by: Chris Wilso
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Kconfig.profile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/Kconfig.profile
b/drivers/gpu/drm/i915/Kconfig.profile
index 35bbe2b80596..3eacea42b19f 100644
--- a/drivers/gpu/drm/i915/Kconfig.profile
In the process of preparing to reuse the request submission logic for
other backends, lift it out of the execlists backend.
While this operates on the common structs, we do have a bit of backend
knowledge, which is harmless for !lrc but still unsightly.
Signed-off-by: Chris Wilson
---
drivers/g
For a modeset/pageflip, there is a very precise deadline by which the
frame must be completed in order to hit the vblank and be shown. While
we don't pass along that exact information, we can at least inform the
scheduler that this request-chain needs to be completed asap.
Signed-off-by: Chris Wil
Since we use a flag within i915_request.flags to indicate when we have
boosted the request (so that we only apply the boost once), this can be
used as the serialisation with i915_request_retire() to avoid having to
explicitly take the i915_request.lock which is more heavily contended.
Signed-off-b
Now that the tasklet completely controls scheduling of the requests, and
we postpone scheduling out the old requests, we can keep a hanging
virtual request bound to the engine on which it hung, and remove it from
the queue. On release, it will be returned to the same engine and remain
in its queue u
This was removed in commit 478ffad6d690 ("drm/i915: drop
engine_pin/unpin_breadcrumbs_irq") as the last user had been removed,
but now there is a promise of a new user in the next patch.
Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuoppala
---
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 24
Take a snapshot of the ctx->engines, so we can avoid taking the
ctx->engines_mutex for a mere read in get_engines().
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 39 +
1 file changed, 8 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/
Wrap cmpxchg64 with a try_cmpxchg()-esque helper. Hiding the old-value
dance in the helper allows for cleaner code.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_utils.h | 32 +++
1 file changed, 32 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_uti
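The helper presumably has roughly this shape, modelled on try_cmpxchg()
(a sketch; the actual i915_utils.h naming may differ):

#include <linux/atomic.h>
#include <linux/types.h>

/* Returns true on success; on failure, *old is updated with the value
 * observed, so the caller can simply loop without re-reading. */
static inline bool try_cmpxchg64_sketch(u64 *ptr, u64 *old, u64 new)
{
        u64 cur = cmpxchg64(ptr, *old, new);

        if (cur == *old)
                return true;

        *old = cur;
        return false;
}

/* Example caller: a monotonic maximum without the old-value dance. */
static inline void set_if_greater(u64 *ptr, u64 val)
{
        u64 old = *ptr;

        do {
                if (val <= old)
                        return;
        } while (!try_cmpxchg64_sketch(ptr, &old, val));
}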
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try and optimise t
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We cannot rebind
the context to a new sibling while it is inflight as the context save
will conflict, hence we wait. As we cannot then use any other sibling
while the con
In preparation for removing the has_initial_breadcrumb field, add a
helper function for the existing callers.
Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuoppala
---
drivers/gpu/drm/i915/gt/gen8_engine_cs.c| 2 +-
drivers/gpu/drm/i915/gt/intel_ring_submission.c | 4 ++--
drivers/gpu/
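The helper is presumably a thin predicate over the existing field, along
these lines (signature assumed from the description):

/* Wrap the field access so callers no longer touch the member
 * directly, easing its eventual removal. */
static inline bool
intel_timeline_has_initial_breadcrumb(const struct intel_timeline *tl)
{
        return tl->has_initial_breadcrumb;
}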
Inside schedule_out, we do extra work upon idling the context, such as
updating the runtime, kicking off retires, kicking virtual engines.
However, if we are processing a series of single requests per
context, we may find ourselves scheduling out the context, only to
immediately schedule it bac
Looking to the future, we want to set the scheduling attributes
explicitly and so replace the generic engine->schedule() with the more
direct i915_request_set_priority().
What it loses in removing the 'schedule' name from the function, it
gains in having an explicit entry point with a stated goal.
Extract the scheduling queue from "execlists" into the per-engine
scheduling structs, for reuse by other backends.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/gem/i915_gem_context_types.h | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_wait.c | 1 +
drivers/gpu/drm/i915/gt/intel_engine_cs.
Since schedule-in/out is now entirely serialised by the tasklet bitlock,
we do not need to worry about concurrent in/out operations and so reduce
the atomic operations to plain instructions.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c| 2 +-
drivers/gpu/
Currently, we construct and tear down the i915_dependency chains using a
global spinlock. As the lists are entirely local, it should be possible
to use a double-lock with an explicit nesting [signaler -> waiter,
always] and so avoid the costly convenience of a global spinlock.
Signed-off-by: Chris
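A sketch of the locking scheme described, with hypothetical structure
names; the fixed signaler-then-waiter order is what makes lockdep's
nesting annotation valid:

#include <linux/list.h>
#include <linux/spinlock.h>

struct sched_node {
        spinlock_t lock;
        struct list_head waiters;   /* nodes waiting on us */
        struct list_head signalers; /* nodes we wait on */
};

struct dependency {
        struct list_head wait_link;
        struct list_head signal_link;
};

/* Two per-node locks, always taken signaler first then waiter,
 * replace the single global spinlock. */
static void add_dependency(struct sched_node *signal,
                           struct sched_node *wait,
                           struct dependency *dep)
{
        spin_lock(&signal->lock);
        spin_lock_nested(&wait->lock, SINGLE_DEPTH_NESTING);

        list_add(&dep->signal_link, &signal->waiters);
        list_add(&dep->wait_link, &wait->signalers);

        spin_unlock(&wait->lock);
        spin_unlock(&signal->lock);
}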
Pull the GT clock information [used to derive CS timestamps and PM
interval] under the GT so that it is local to its users. In doing so, we
consolidate the two references for the same information, of which the
runtime-info took note of a potential clock source override and scaling
factors.
Signed-
Pull the individual strands of creating a custom heartbeat request into
a pair of common functions. This will reduce the number of changes we
will need to make in future.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/gt/intel_engine_heartbeat.c | 59 +--
1 file changed, 41 i
As we know when we expect the heartbeat to be checked for completion,
pass this information along as its deadline. We still do not complain if
the deadline is missed, at least until we have tried a few times, but it
will allow for quicker hang detection on systems where deadlines are
adhered to.
S
Couple up the context in/out accounting to record how long each engine
is busy handling requests. This is exposed to userspace for more accurate
measurements, and also enables our soft-rps timer.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_ring_scheduler.c | 6 ++
1 file ch
Having recognised that we do not change the sibling until we schedule
out, we can then defer the decision to resubmit the virtual engine from
the unwind of the active queue to scheduling out of the virtual context.
This improves our resilience in virtual engine scheduling, and should
eliminate the r
Let's only wait for the list iterator when decoupling the virtual
breadcrumb, as the signaling of all the requests may take a long time,
during which we do not want to keep the tasklet spinning.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_breadcrumbs
Currently we know that the timeline status page is at most a page in
size, and so we can preserve the lower 12 bits of the offset when
relocating the status page in the GGTT. If we want to use a larger
object, such as the context state, we may not necessarily use a position
within the first page and
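What preserving the low 12 bits amounts to, sketched with the usual GGTT
helpers (assumed, not quoted from the patch):

#include <linux/mm.h>

/* The page backing the status dword may move within the GGTT, but the
 * offset inside that page (the low 12 bits) must survive relocation
 * so the stored CS address stays valid. */
static u32 hwsp_address(const struct i915_vma *vma, u32 offset)
{
        return i915_ggtt_offset(vma) + offset_in_page(offset);
}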
Lift the busy-stats context-in/out implementation out of intel_lrc, so
that we can reuse it for other scheduler implementations.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine_stats.h | 49 +++
.../drm/i915/gt/intel_execlists_submission.c | 34 +---
In the next patch, we remove the strict priority system and continuously
re-evaluate the relative priority of tasks. As such we need to enable
the timeslice whenever there is more than one context in the pipeline.
This simplifies the decision and removes some of the tweaks to suppress
timeslicing,
A key problem with legacy ring buffer submission is that it is inherently a
FIFO queue across all clients; if one blocks, they all block. A
scheduler allows us to avoid that limitation, and ensures that all
clients can submit in parallel, removing the resource contention of the
global ringbuffer.
Hav
Relative timelines are relative to either the global or per-process
HWSP, and so we can replace the absolute addressing with store-index
variants for position invariance.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 98 +--
In the process of preparing to reuse the request submission logic for
other backends, lift it out of the execlists backend. It already
operates on the common structs, so it is just a matter of moving and renaming.
Signed-off-by: Chris Wilson
---
.../drm/i915/gt/intel_execlists_submission.c | 55 +
Explicitly differentiate between the absolute and relative timelines,
and the global HWSP and ppHWSP relative offsets. When using a timeline
that is relative to a known status page, we can replace the absolute
addressing in the commands with indexed variants.
Signed-off-by: Chris Wilson
Reviewed-
This is preliminary work for supporting multiple eDP PSR and
DP Panel Replay instances. It refactors the singleton PSR implementation
into one that can support multiple transcoders, and moves and renames
drm_i915_private's i915_psr structure to intel_dp's intel_psr structure.
It also causes changes in PSR interrupt handlin
In order to support the PSR state of each transcoder, this adds
i915_psr_status to the debugfs sub-directory of each transcoder.
v2: Change the symbolic permissions 'S_IRUGO' to octal
permissions '0444'
v5: Addressed Jani Nikula's review comments
- Remove the Gen12 check for i915_psr_statu
> -Original Message-
> From: Gwan-gyeong Mun
> Sent: Wednesday, December 23, 2020 5:08 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Gupta, Anshuman ; Nikula, Jani
>
> Subject: [PATCH v8 1/2] drm/i915/display: Support PSR Multiple Instances
>
> It is a preliminary work for supporting
> -Original Message-
> From: Gwan-gyeong Mun
> Sent: Wednesday, December 23, 2020 5:08 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Gupta, Anshuman ; Nikula, Jani
>
> Subject: [PATCH v8 2/2] drm/i915/display: Support Multiple Transcoders'
> PSR status on debugfs
>
> In order to suppo
I've tested this patch with the SelectiveFetch / PSR Selective Update IGT
test.
igt patch: Add FB_DAMAGE_CLIPS prop and new test for Selective fetch
: https://patchwork.freedesktop.org/series/84696/ (this igt patch is
under review)
When I checked this patch at the code level against the bspec, it looked
As we shrink an object, also see if we can prune the dma-resv of idle
fences it is maintaining a reference to.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Makefile| 1 +
drivers/gpu/drm/i915/dma_resv_utils.c| 17 +
drivers/gpu/drm/i915/dma_resv_u
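A sketch of the pruning helper using the dma-resv API of that era; the
trylock keeps the shrinker from ever blocking on a contended reservation:

#include <linux/dma-resv.h>

/* If every fence in the reservation has signaled, drop them all so
 * the idle fences (and whatever they pin) can be freed. */
static void dma_resv_prune_sketch(struct dma_resv *resv)
{
        if (!dma_resv_trylock(resv))
                return;

        if (dma_resv_test_signaled_rcu(resv, true))
                dma_resv_add_excl_fence(resv, NULL);

        dma_resv_unlock(resv);
}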
If we want to reuse a fence that is in active use by the GPU, we have to
wait an uncertain amount of time, but if we reuse an inactive fence, we
can change it right away. Loop through the list of available fences
twice, ignoring any active fences on the first pass.
Signed-off-by: Chris Wilson
---
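The two-pass scan, sketched generically; fence_is_active() stands in for
the real pin/activity check:

#include <linux/list.h>
#include <linux/types.h>

struct fence_reg {
        struct list_head link;
        bool active; /* stand-in for the real activity tracking */
};

static bool fence_is_active(const struct fence_reg *fence)
{
        return fence->active;
}

/* First pass considers only idle fences, which can be reassigned
 * immediately; the second falls back to any fence, for which the
 * caller must then wait. */
static struct fence_reg *fence_find_sketch(struct list_head *fences)
{
        struct fence_reg *fence;
        int pass;

        for (pass = 0; pass < 2; pass++) {
                list_for_each_entry(fence, fences, link) {
                        if (pass == 0 && fence_is_active(fence))
                                continue;

                        return fence;
                }
        }

        return NULL;
}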
Pull the GT clock information [used to derive CS timestamps and PM
interval] under the GT so that is it local to the users. In doing so, we
consolidate the two references for the same information, of which the
runtime-info took note of a potential clock source override and scaling
factors.
Signed-
We assume that both timestamps are driven off the same clock [reported
to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is
so by reading the timestamp registers around a busywait (on an otherwise
idle engine so there should be no preemptions).
v2: Icelake (not ehl, nor tgl) see
== Series Details ==
Series: drm/i915/display: Bitwise or the conversion colour specifier together
URL : https://patchwork.freedesktop.org/series/85177/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
a91fc9febda0 drm/i915/display: Bitwise or the conversion colour specifier
toge
== Series Details ==
Series: drm/i915/display: Bitwise or the conversion colour specifier together
URL : https://patchwork.freedesktop.org/series/85177/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_9515 -> Patchwork_19199
== Series Details ==
Series: series starting with [01/62] drm/i915/gt: Replace direct submit with
direct call to tasklet
URL : https://patchwork.freedesktop.org/series/85184/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
11ff28d8f6d3 drm/i915/gt: Replace direct submit with dir
== Series Details ==
Series: series starting with [01/62] drm/i915/gt: Replace direct submit with
direct call to tasklet
URL : https://patchwork.freedesktop.org/series/85184/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit wo
Chris Wilson writes:
> Pull the GT clock information [used to derive CS timestamps and PM
> interval] under the GT so that it is local to its users. In doing so, we
> consolidate the two references for the same information, of which the
> runtime-info took note of a potential clock source overrid
== Series Details ==
Series: series starting with [v8,1/2] drm/i915/display: Support PSR Multiple
Instances
URL : https://patchwork.freedesktop.org/series/85185/
State : failure
== Summary ==
Applying: drm/i915/display: Support PSR Multiple Instances
Using index info to reconstruct a base tre
== Series Details ==
Series: series starting with [v9,1/5] drm: Add function to convert rect in
16.16 fixed format to regular format (rev2)
URL : https://patchwork.freedesktop.org/series/85092/
State : failure
== Summary ==
Applying: drm: Add function to convert rect in 16.16 fixed format to
== Series Details ==
Series: series starting with [01/62] drm/i915/gt: Replace direct submit with
direct call to tasklet
URL : https://patchwork.freedesktop.org/series/85184/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_9515 -> Patchwork_19200
===
== Series Details ==
Series: series starting with [1/2] drm/i915/gt: Prefer recycling an idle fence
URL : https://patchwork.freedesktop.org/series/85186/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
602d4a02d905 drm/i915/gt: Prefer recycling an idle fence
109a6a7f1a64 drm/i915
Chris Wilson writes:
> We assume that both timestamps are driven off the same clock [reported
> to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is
> so by reading the timestamp registers around a busywait (on an otherwise
> idle engine so there should be no preemptions).
>
>
Quoting Mika Kuoppala (2020-12-23 14:56:06)
> Chris Wilson writes:
> > + d_ctx *= RUNTIME_INFO(engine->i915)->cs_timestamp_frequency_hz;
> > + if (IS_ICELAKE(engine->i915))
> > + d_ring *= 1250; /* Fixed 80ns for icl ctx timestamp? */
>
> This is...weird. But I am not goin
== Series Details ==
Series: series starting with [1/2] drm/i915/gt: Prefer recycling an idle fence
URL : https://patchwork.freedesktop.org/series/85186/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_9515 -> Patchwork_19203
We just need the context image from the logical state to force eviction
of many contexts, so simplify by avoiding the GEM context container.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/selftests/i915_gem_evict.c| 18 +-
1 file changed, 5 insertions(+), 13 deletions(-)
d
On Fri, 04 Dec 2020, Lyude Paul wrote:
> A while ago we ran into issues while trying to enable the eDP backlight
> control interface as defined by VESA, in order to make the DPCD
> backlight controls on newer laptop panels work. The issue ended up being
> much more complicated however, as we also
The shadow batch needs to be in the user visible ppGTT, so make sure we
are not leaking anything, if we can guess where the shadow will be
placed.
Signed-off-by: Matthew Auld
---
tests/i915/gen9_exec_parse.c | 86
1 file changed, 86 insertions(+)
diff --git
On Fri, 04 Dec 2020, Lyude Paul wrote:
> Currently, every different type of backlight hook that i915 supports is
> pretty straight forward - you have a backlight, probably through PWM
> (but maybe DPCD), with a single set of platform-specific hooks that are
> used for controlling it.
>
> HDR backl
On Wed, 23 Dec 2020 at 15:45, Chris Wilson wrote:
>
> We just need the context image from the logical state to force eviction
> of many contexts, so simplify by avoiding the GEM context container.
>
> Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
On Fri, 04 Dec 2020, Lyude Paul wrote:
> So-recently a bunch of laptops on the market have started using DPCD
> backlight controls instead of the traditional DDI backlight controls.
> Originally we thought we had this handled by adding VESA backlight
> control support to i915, but the story ended
On Fri, 04 Dec 2020, Lyude Paul wrote:
> Since we now support controlling panel backlights through DPCD using
> both the standard VESA interface, and Intel's proprietary HDR backlight
> interface, we should allow the user to be able to explicitly choose
> between one or the other in the event that
== Series Details ==
Series: series starting with [1/2] drm/i915/selftests: Confirm CS_TIMESTAMP /
CTX_TIMESTAMP share a clock
URL : https://patchwork.freedesktop.org/series/85187/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
cfe0bee68e21 drm/i915/selftests: Confirm CS_TIMEST
The shadow batch is an internal object, which doesn't have any page
clearing, and since the batch_len might be within a page, we should take
care to clear any unused dwords before making it visible in the ppGTT.
Testcase: igt/gen9_exec_parse/shadow-peek
Signed-off-by: Matthew Auld
---
drivers/gp
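The clearing likely reduces to zeroing the tail of the final page, along
these lines (pointer and length names hypothetical):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>

/* The shadow is allocated from internal, uncleared pages, so scrub
 * everything between the end of the copied batch and the end of the
 * page before the object becomes visible in the user's ppGTT. */
static void clear_shadow_tail(void *shadow_map, unsigned long batch_len)
{
        unsigned long tail = round_up(batch_len, PAGE_SIZE) - batch_len;

        if (tail)
                memset(shadow_map + batch_len, 0, tail);
}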
== Series Details ==
Series: series starting with [1/2] drm/i915/selftests: Confirm CS_TIMESTAMP /
CTX_TIMESTAMP share a clock
URL : https://patchwork.freedesktop.org/series/85187/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_9517 -> Patchwork_19204
=