On 19.05.20 at 15:27, Daniel Vetter wrote:
Do it unconditionally, there's a separate peek function with
dma_fence_is_signaled() which can be called from atomic context.
v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)
Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, g
On 2020-05-19 at 18:16:21 -0400, Sean Paul wrote:
> From: Sean Paul
>
> We're seeing some R0' mismatches in the field, particularly with
I think you want to say Vprime verification? A delay is added in between
the retries for Vprime verification.
-Ram
> repeaters. I'm guessing the (already lenient) 3
Selftest failure as usual, and as usual not related to the patch.
Best Regards,
Lisovskiy Stanislav
From: Patchwork
Sent: Wednesday, May 20, 2020 2:59 AM
To: Lisovskiy, Stanislav
Cc: intel-gfx@lists.freedesktop.org
Subject: ✗ Fi.CI.BAT: failure for Cons
== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17713_full
Summary
---
== Series Details ==
Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for
timeslicing virtual engines
URL : https://patchwork.freedesktop.org/series/77414/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17712_full
=
== Series Details ==
Series: drm/i915/gt: Trace the CS interrupt
URL : https://patchwork.freedesktop.org/series/77441/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17720
Summary
---
**SUCCESS**
== Series Details ==
Series: drm/i915/hdcp: Add additional R0' wait
URL : https://patchwork.freedesktop.org/series/77439/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17719
Summary
---
**SUCCESS**
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17718
Summary
---
Cc'ing x...@kernel.org and maintainers
On Wed, May 6, 2020 at 4:52 AM Srivatsa, Anusha
wrote:
>
>
>
> > -Original Message-
> > From: Intel-gfx On Behalf Of Matt
> > Roper
> > Sent: Tuesday, May 5, 2020 4:22 AM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: De Marchi, Lucas
> > Subject:
On Wed, May 20, 2020 at 12:25:25AM +0300, Stanislav Lisovskiy wrote:
> According to BSpec max BW per slice is calculated using formula
> Max BW = CDCLK * 64. Currently when calculating min CDCLK we
> account only per plane requirements, however in order to avoid
> FIFO underruns we need to estimate
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won't be checked separately.
-
+drivers/
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks
a2e2a5f43cd7 drm/i915: E
== Series Details ==
Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL : https://patchwork.freedesktop.org/series/77382/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17717
Summary
---
== Series Details ==
Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL : https://patchwork.freedesktop.org/series/77382/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
e71c461a0da4 drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC
-:26: CHECK:PARENTHESIS_ALIGNMENT
== Series Details ==
Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL : https://patchwork.freedesktop.org/series/77428/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17716
Summary
---
**SUCCESS**
== Series Details ==
Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL : https://patchwork.freedesktop.org/series/77428/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
60adbb75a3d8 drm/i915/ehl: Wa_22010271021
-:12: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit descripti
== Series Details ==
Series: drm/i915/gem: Suppress some random warnings
URL : https://patchwork.freedesktop.org/series/77431/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17715
Summary
---
**SUCCE
We have traces for the semaphore and the error, but not the far more
frequent CS interrupts. This is likely to be too much, but for the
purpose of live_unlite_preempt it may answer a question or two.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_gt_irq.c | 6 +-
1 file change
From: Sean Paul
We're seeing some R0' mismatches in the field, particularly with
repeaters. I'm guessing the (already lenient) 300ms wait time isn't
enough for some setups. So add an additional wait when R0' is
mismatched.
Signed-off-by: Sean Paul
---
drivers/gpu/drm/i915/display/intel_hdcp.c
== Series Details ==
Series: drm/i915/gem: Suppress some random warnings
URL : https://patchwork.freedesktop.org/series/77431/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
c47e2d0db533 drm/i915/gem: Suppress some random warnings
-:62: CHECK:COMPARISON_TO_NULL: Comparison to NU
Quoting Daniel Vetter (2020-05-19 14:27:56)
> Do it unconditionally, there's a separate peek function with
> dma_fence_is_signaled() which can be called from atomic context.
>
> v2: Consensus calls for an unconditional might_sleep (Chris,
> Christian)
>
> Full audit:
> - dma-fence.h: Uses MAX_SCHE
According to BSpec max BW per slice is calculated using formula
Max BW = CDCLK * 64. Currently when calculating min CDCLK we
account only per plane requirements, however in order to avoid
FIFO underruns we need to estimate accumulated BW consumed by
all planes(ddb entries basically) residing on tha
== Series Details ==
Series: drm/i915: Neuter virtual rq->engine on retire
URL : https://patchwork.freedesktop.org/series/77425/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17714
Summary
---
**SUC
Start our preparations for guaranteeing endless execution.
First, we just want to estimate the direct userspace dispatch overhead
of running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the GTT a
This is a permanent w/a for JSL/EHL. It is to be applied to the
PCH types on JSL/EHL, i.e. JSP/MCC.
Bspec: 52888
v2: Fixed the wrong usage of logical OR (Ville)
Signed-off-by: Swathi Dhanavanthri
---
drivers/gpu/drm/i915/i915_irq.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --
Maybe we can add JSL to the comment too.
Other than that looks good to me.
Reviewed-by: Swathi Dhanavanthri
-Original Message-
From: Intel-gfx On Behalf Of Matt
Atwood
Sent: Tuesday, May 19, 2020 9:26 AM
To: intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [PATCH] drm/i915/ehl: Wa_
Leave the error propagation in place, but limit the warnings to only
show up in CI if the unlikely errors are hit.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +--
drivers/gpu/drm/i915/gem/i915_gem_phys.c | 3 +--
drivers/gpu/drm/i915/gem/i915_gem_shm
== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17713
Summary
---
**
== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
aa2f5c93ddcf dma-fence: add might_sleep annotation to _wait()
-:16: WARNING:TYPO_SPELLING: 'TIMOUT'
== Series Details ==
Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for
timeslicing virtual engines
URL : https://patchwork.freedesktop.org/series/77414/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17712
===
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev14)
URL : https://patchwork.freedesktop.org/series/74739/
State : failure
== Summary ==
Applying: drm/i915: Decouple cdclk calculation from modeset checks
Applying: drm/i915: Extract cdclk requirements checking to
Quoting Chris Wilson (2020-05-19 18:00:04)
> Quoting Chris Wilson (2020-05-19 15:51:31)
> > We do not hold a reference to rq->engine, and so if it is a virtual
> > engine it may have already been freed by the time we free the request.
> > The last reference we hold on the virtual engine is via rq->
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev10)
URL : https://patchwork.freedesktop.org/series/77308/
State : failure
== Summary ==
Applying: drm/i915/selftests: Measure dispatch latency
Using index info to reconstruct a base tree...
M drivers/gpu/drm/i9
Quoting Chris Wilson (2020-05-19 15:51:31)
> We do not hold a reference to rq->engine, and so if it is a virtual
> engine it may have already been freed by the time we free the request.
> The last reference we hold on the virtual engine is via rq->context,
> and that is released on request retireme
Reflect recent Bspec changes.
Bspec: 33451
Signed-off-by: Matt Atwood
---
drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 90a2b9e399b0..fa1e1565
On 18-05-2020 at 14:12, Animesh Manna wrote:
> Pre-allocate command buffer in atomic_commit using intel_dsb_prepare
> function which also includes pinning and map in cpu domain.
>
> No functional change in dsb write/commit functions.
>
> Now dsb get/put function is removed and ref-count mechanism
We do not hold a reference to rq->engine, and so if it is a virtual
engine it may have already been freed by the time we free the request.
The last reference we hold on the virtual engine is via rq->context,
and that is released on request retirement. So if we find ourselves
retiring a virtual requ
Chris Wilson writes:
> When we look at i915_request_is_started() we must be careful in case we
> are using a request that does not have the initial-breadcrumb and
> instead the is-started is being compared against the end of the previous
> request. This will make wait_for_submit() declare that a
system. BTW, we also suggest using the '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
url:
https://github.com/0day-ci/linux/commits/Swathi-Dhanavanthri/drm-i915-ehl-Extend-w-a-14010685332-to-JSP-MCC/20200519-184947
b
> -Original Message-
> From: Jani Nikula
> Sent: 19 May 2020 19:12
> To: Laxminarayan Bharadiya, Pankaj
> ; dan...@ffwll.ch; intel-
> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
> ; Vivi, Rodrigo ;
> David Airlie ; Ville Syrjälä
> ; Chris
> Wilson ; Deak
Chris Wilson writes:
> Check for integer overflow in the priority chain, rather than against a
> type-constricted max-priority check.
>
> Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuoppala
> ---
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 del
Chris Wilson writes:
> Since we temporarily disable the heartbeat and restore back to the
> default value, we can use the stored defaults on the engine and avoid
> using a local.
>
> Signed-off-by: Chris Wilson
> ---
Reviewed-by: Mika Kuoppala
> drivers/gpu/drm/i915/gt/selftest_hangcheck.c |
On Fri, 08 May 2020, "Laxminarayan Bharadiya, Pankaj"
wrote:
>> -Original Message-
>> From: Jani Nikula
>> Sent: 08 May 2020 12:19
>> To: Laxminarayan Bharadiya, Pankaj
>> ; dan...@ffwll.ch; intel-
>> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
>> ; Viv
Chris Wilson writes:
s/supressing/suppressing
> We recorded the execlists->queue_priority_hint update for the inflight
> request without kicking the tasklet. The next submitted request then
> failed to be scheduled as it had a lower priority than the hint, leaving
> the HW runnning with only the
Do it unconditionally, there's a separate peek function with
dma_fence_is_signaled() which can be called from atomic context.
v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)
Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, good chance this sleeps
- dma-resv.c: Timeout a
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev9)
URL : https://patchwork.freedesktop.org/series/77308/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17709
Summary
---
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other tas
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 200
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.
Testcase: igt/gem_exec_balancer/sliced
F
From: Stanislav Lisovskiy
So let's support it.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_display.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/display/intel_display.c
b/drivers/gpu/drm/i915
On Gen11+, whenever we might exceed DBuf bandwidth we may need to
recalculate CDCLK, which DBuf bandwidth is scaled with.
The total DBuf BW used may change based on particular plane needs.
Thus, to calculate whether CDCLK needs to be changed, it is no longer
enough to check the plane configuration and plane
According to BSpec max BW per slice is calculated using formula
Max BW = CDCLK * 64. Currently when calculating min CDCLK we
account only per plane requirements, however in order to avoid
FIFO underruns we need to estimate accumulated BW consumed by
all planes(ddb entries basically) residing on tha
We need to calculate cdclk after watermarks/ddb has been calculated
as with recent hw CDCLK needs to be adjusted accordingly to DBuf
requirements, which is not possible with current code organization.
Setting CDCLK according to DBuf BW requirements and not just rejecting
if it doesn't satisfy BW r
We need to calculate cdclk after watermarks/ddb has been calculated
as with recent hw CDCLK needs to be adjusted accordingly to DBuf
requirements, which is not possible with current code organization.
Setting CDCLK according to DBuf BW requirements and not just rejecting
if it doesn't satisfy BW r
No need to bump up CDCLK now, as it is now correctly
calculated, accounting for DBuf BW as BSpec says.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_cdclk.c | 12
1 file changed, 12 deletions(-)
diff --git a/drivers/gpu/drm/i9
From: Stanislav Lisovskiy
Checking with hweight8 whether the plane configuration has
changed seems to be wrong, as different plane configs
can result in the same hamming weight.
So let's check the bitmask itself.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/displa
We now quite often need to iterate only particular dbuf slices
in a mask, whether they are active or related to a particular crtc.
v2: - Minor code refactoring
v3: - Use enum for max slices instead of macro
Let's make our life a bit easier and use a macro for that.
Reviewed-by: Manasi Navare
Signed
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
v2: Refactor all the instruction building into emitters.
v3: Make the error handling, if not perfect, at least consistent.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Joona
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL : https://patchwork.freedesktop.org/series/77320/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17708
Summary
---
**F
Quoting Mika Kuoppala (2020-05-19 13:47:31)
> Chris Wilson writes:
>
> > A useful metric of the system's health is how fast we can tell the GPU
> > to do various actions, so measure our latency.
> >
> > v2: Refactor all the instruction building into emitters.
> >
> > Signed-off-by: Chris Wilson
Chris Wilson writes:
> A useful metric of the system's health is how fast we can tell the GPU
> to do various actions, so measure our latency.
>
> v2: Refactor all the instruction building into emitters.
>
> Signed-off-by: Chris Wilson
> Cc: Mika Kuoppala
> Cc: Joonas Lahtinen
Not much nitpic
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL : https://patchwork.freedesktop.org/series/77320/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
4686c5234501 drm/i915/selftests: Measure CS_TIMESTAMP
-:68: CHECK:USLEEP_RANGE: usleep_range is prefe
On Tue, May 19, 2020 at 11:46:54AM +0100, Chris Wilson wrote:
> Quoting Ville Syrjälä (2020-05-19 11:42:45)
> > On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > > expectations.
> >
> > Looks ok for everythi
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
v2: Refactor all the instruction building into emitters.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Joonas Lahtinen
---
drivers/gpu/drm/i915/selftests/i915_request.c |
On Fri, May 15, 2020 at 08:36:31PM +, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915: Fix AUX power domain toggling across TypeC mode resets
> URL : https://patchwork.freedesktop.org/series/77280/
> State : success
Thanks for the review, pushed to -dinq.
>
> == Summary ==
>
Quoting Mika Kuoppala (2020-05-19 11:43:16)
> Chris Wilson writes:
> > +static void supervisor_dispatch(struct supervisor *sv, uint64_t addr)
> > +{
> > + WRITE_ONCE(*sv->dispatch, 64 << 10);
>
> addr << 10 ?
addr :)
-Chris
Quoting Ville Syrjälä (2020-05-19 11:42:45)
> On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > expectations.
>
> Looks ok for everything except g4x/ilk. Those would need something
> like
> https://patchwork.f
Chris Wilson writes:
> Start our preparations for guaranteeing endless execution.
>
> First, we just want to estimate the 'ultra-low latency' dispatch overhead
> by running an endless chain of batch buffers. The legacy binding process
> here will be replaced by async VM_BIND, but for the moment th
On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> Count the number of CS_TIMESTAMP ticks and check that it matches our
> expectations.
Looks ok for everything except g4x/ilk. Those would need something
like
https://patchwork.freedesktop.org/patch/355944/?series=74145&rev=1
+ read TIM
Start our preparations for guaranteeing endless execution.
First, we just want to estimate the 'ultra-low latency' dispatch overhead
by running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the GTT
On 19/05/2020 07:31, Chris Wilson wrote:
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual reque
On 19/05/2020 07:31, Chris Wilson wrote:
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/self
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8498_full -> Patchwork_17707_full
=
On 2020.05.18 22:00:52 +0100, Chris Wilson wrote:
> Quoting Aishwarya Ramakrishnan (2020-05-18 16:03:36)
> > Prefer ARRAY_SIZE instead of using sizeof
> >
> > Fixes coccicheck warning: Use ARRAY_SIZE
> >
> > Signed-off-by: Aishwarya Ramakrishnan
> Reviewed-by: Chris Wilson
Applied, thanks!
--
On Mon, May 18, 2020 at 05:58:32PM -0700, Swathi Dhanavanthri wrote:
> This is a permanent w/a for JSL/EHL. It is to be applied to the
> PCH types on JSL/EHL, i.e. JSP/MCC.
> Bspec: 52888
>
> Signed-off-by: Swathi Dhanavanthri
> ---
> drivers/gpu/drm/i915/i915_irq.c | 4 ++--
> 1 file changed, 2 in
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8498 -> Patchwork_17707
===
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each c
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
364ab8bd9968 drm/i915: Don't set queue-priority