Is there another place to report this kind of issue? I didn't get any
feedback. Did I make mistakes in my mail? I already tried the Ubuntu
bug-tracker (see below) with no success.
Thank you,
Arno
On 11.05.20 at 10:18, Arno wrote:
My laptop (core m5-6y54) starts flickering after returning from
Please make a gitlab issue if not already done. Lakshmi, please guide.
From: Intel-gfx On Behalf Of Arno
Sent: Monday, 18 May 2020 10:07
To: intel-gfx@lists.freedesktop.org; ch...@chris-wilson.co.uk
Subject: Re: [Intel-gfx] intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR*
CPU pipe A F
Arno, I have created bug https://gitlab.freedesktop.org/drm/intel/-/issues/1900
for this issue.
Now discussions can happen in the bug report directly. Thanks for reporting the
issue.
Thanks,
Lakshmi.
From: Saarinen, Jani
Sent: Monday, May 18, 2020 10:21 AM
To: Arno ; intel-gfx@lists.freedesktop
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try and optimise t
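A minimal sketch of the per-context tracker idea, assuming a simplified context structure (the 'saturated' mask is modelled on the i915 field of the same name; the rest is invented for illustration): each context records the engines on which semaphore waits proved counter-productive, and the submission path checks that mask before emitting a semaphore.

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t engine_mask_t;

struct context {
	/* engines on which semaphore busy-waits have proven wasteful */
	engine_mask_t saturated;
};

/* Record that waiting on `engine` via semaphores was not worth it. */
static void mark_saturated(struct context *ctx, engine_mask_t engine)
{
	ctx->saturated |= engine;
}

/* Only use a semaphore if the signalling engine is not marked saturated. */
static bool may_use_semaphore(const struct context *ctx, engine_mask_t signaler)
{
	return !(ctx->saturated & signaler);
}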
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve in a local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.
Signed-off-by: Chris Wil
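To illustrate the container_of point, a self-contained C sketch (the struct layout is invented, not the real virtual_engine): because the virtual engine keeps one rb_node per sibling, converting a node back to the engine needs an offset that depends on the sibling index, so doing that conversion once and keeping the result in a local avoids repeating the arithmetic.

#include <stddef.h>
#include <stdio.h>

#define NUM_SIBLINGS 4

struct rb_node_stub { int colour; };

/* Toy stand-in for virtual_engine: one rb_node per sibling engine. */
struct virtual_engine {
	int prio;
	struct { struct rb_node_stub rb; } nodes[NUM_SIBLINGS];
};

/* container_of with a variable offset: which node we came from depends on id. */
static struct virtual_engine *to_ve(struct rb_node_stub *rb, int sibling_id)
{
	size_t offset = offsetof(struct virtual_engine, nodes) +
			(size_t)sibling_id * sizeof(((struct virtual_engine *)0)->nodes[0]);

	return (struct virtual_engine *)((char *)rb - offset);
}

int main(void)
{
	struct virtual_engine engine = { .prio = 7 };
	struct rb_node_stub *rb = &engine.nodes[2].rb;

	/* Convert once, keep the result in a local, reuse it afterwards. */
	struct virtual_engine *ve = to_ve(rb, 2);

	printf("prio = %d\n", ve->prio);
	return 0;
}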
In order to keep all the tasklets in the same execution lists, and so
FIFO ordered, be consistent and use the same priority for all.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_lrc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We cannot rebind
the context to a new sibling while it is inflight as the context save
will conflict, hence we wait. As we cannot then use any other sibling
while the con
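A tiny sketch of the rule that constraint implies, using stub types rather than the real i915 structures (the field names are modelled on the commit text): a virtual request may only be submitted to a sibling when its context is idle or already inflight on that same sibling; otherwise the submission has to wait.

#include <stdbool.h>
#include <stddef.h>

struct engine { int id; };

struct virtual_context {
	/* sibling the context is currently executing on, or NULL when idle */
	const struct engine *inflight;
};

/*
 * Submitting to a different sibling while the context is still inflight
 * would clash with the pending context save, so the check is strict.
 */
static bool can_submit_to_sibling(const struct virtual_context *vc,
				  const struct engine *sibling)
{
	return !vc->inflight || vc->inflight == sibling;
}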
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other tas
Having recognised that we do not change the sibling until we schedule
out, we can then defer the decision to resubmit the virtual engine from
the unwind of the active queue to scheduling out of the virtual context.
By keeping the unwind order intact on the local engine, we can preserve
data depend
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 179 +
1
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.
Testcase: igt/gem_exec_balancer/sliced
F
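A small sketch of the priority decision that fix changes (illustrative stand-ins, not the real execlists structures): the priority used to decide whether to timeslice the running request must be the maximum of the head of the normal queue and the first pending virtual request, not the normal queue alone.

#include <limits.h>

#define INVALID_PRIO INT_MIN

/* Head priority of the engine's own priority queue, if it has anything. */
static int queue_prio(const int *queue, int count)
{
	return count ? queue[0] : INVALID_PRIO;
}

/*
 * Timeslicing switch priority: also consider the first pending virtual
 * request, so a runnable virtual request can trigger a timeslice too.
 */
static int switch_prio(const int *queue, int count, int first_virtual_prio)
{
	int prio = queue_prio(queue, count);

	return prio > first_virtual_prio ? prio : first_virtual_prio;
}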
Quoting Chris Wilson (2020-05-18 08:57:47)
> @@ -5519,7 +5537,9 @@ static void virtual_submission_tasklet(unsigned long
> data)
> submit_engine:
> GEM_BUG_ON(RB_EMPTY_NODE(&node->rb));
> node->prio = prio;
> - if (first && prio > sibling->execlists.qu
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev2)
URL : https://patchwork.freedesktop.org/series/77320/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
07967d1266af drm/i915/selftests: Measure CS_TIMESTAMP
-:67: CHECK:USLEEP_RANGE: usleep_range is prefe
On 2020-05-15 at 11:40:29 +0530, Anshuman Gupta wrote:
> Content Protection property should be updated as per the kernel's
> internal state. For example, if Content Protection is disabled
> by userspace, the CP property should be set to UNDESIRED so that
> reauthentication will not happen until userspace re
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev2)
URL : https://patchwork.freedesktop.org/series/77320/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8493 -> Patchwork_17679
Summary
---
**F
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Joonas Lahtinen
---
drivers/gpu/drm/i915/selftests/i915_request.c | 802 ++
1 file changed, 802 insertions(+)
== Series Details ==
Series: drm/i915: Fix dbuf slice mask when turning off all the pipes
URL : https://patchwork.freedesktop.org/series/77322/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8493 -> Patchwork_17680
Summary
-
Quoting Chris Wilson (2020-05-16 14:31:02)
> Count the number of CS_TIMESTAMP ticks and check that it matches our
> expectations.
>
> Signed-off-by: Chris Wilson
> Cc: Ville Syrjälä
ilk:
<6> [197.410742] rcs0: TIMESTAMP 0 cycles [0ns] in 1001322ns [12517 cycles],
using CS clock frequency of 12
== Series Details ==
Series: series starting with [1/2] drm/i915: Remove PIN_UPDATE for i915_vma_pin
URL : https://patchwork.freedesktop.org/series/77323/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8493 -> Patchwork_17681
On Fri, 15 May 2020, "Mun, Gwan-gyeong" wrote:
> Hi Ville,
> Thank you for notifying me that. I definitely missed the crash.
> Sorry for that.
> Daniel and Jani, I'm debugging the crash case.
> If you are available, please do not merge the current version.
It has been merged, and that's the oops i
On 18/05/2020 09:14, Chris Wilson wrote:
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This all
== Series Details ==
Series: drm/i915: Fix dbuf slice mask when turning off all the pipes
URL : https://patchwork.freedesktop.org/series/77322/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8493_full -> Patchwork_17680_full
Quoting Tvrtko Ursulin (2020-05-18 10:53:22)
>
> On 18/05/2020 09:14, Chris Wilson wrote:
> > When we introduced the saturated workload detection to tell us to back
> > off from semaphore usage [semaphores have a noticeable impact on
> > contended bus cycles with the CPU for some heavy workloads],
On 18/05/2020 09:14, Chris Wilson wrote:
In order to keep all the tasklets in the same execution lists and so
fifo ordered, be consistent and use the same priority for all.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_lrc.c | 4 ++--
1 file changed, 2 insertions(+), 2 del
On 18/05/2020 09:14, Chris Wilson wrote:
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/self
> -Original Message-
> From: Intel-gfx On Behalf Of
> Anshuman Gupta
> Sent: Wednesday, May 13, 2020 5:49 PM
> To: intel-gfx@lists.freedesktop.org
> Subject: [Intel-gfx] [PATCH 1/2] drm/i915/hdcp: Add update_pipe early return
>
> Currently intel_hdcp_update_pipe() is also getting calle
-Original Message-
From: Shankar, Uma
Sent: Monday, May 18, 2020 3:45 PM
To: Gupta, Anshuman ; intel-gfx@lists.freedesktop.org
Subject: RE: [Intel-gfx] [PATCH 1/2] drm/i915/hdcp: Add update_pipe early return
> -Original Message-
> From: Intel-gfx On Behalf Of
> Anshuman Gu
Quoting Tvrtko Ursulin (2020-05-18 11:12:29)
>
> On 18/05/2020 09:14, Chris Wilson wrote:
> > Make sure that we can execute a virtual request on an already busy
> > engine, and conversely that we can execute a normal request if the
> > engines are already fully occupied by virtual requests.
> >
>
Tvrtko spotted that some selftests were using 'break' not 'continue',
which will fail for discontiguous engine layouts such as on Icelake
(which may have vcs0 and vcs2).
Reported-by: Tvrtko Ursulin
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 68 +
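A self-contained illustration of the bug class being fixed: with a discontiguous engine layout the per-class array has holes, so a missing engine has to be skipped with 'continue'; a 'break' stops the walk at the first hole and, for example, never reaches vcs2 when vcs1 is absent.

#include <stdio.h>

struct engine { const char *name; };

int main(void)
{
	/* Discontiguous layout, as on Icelake: vcs1 absent, vcs2 present. */
	struct engine vcs0 = { "vcs0" }, vcs2 = { "vcs2" };
	struct engine *engines[] = { &vcs0, NULL, &vcs2 };
	unsigned int i;

	for (i = 0; i < sizeof(engines) / sizeof(engines[0]); i++) {
		if (!engines[i])
			continue; /* 'break' here would never test vcs2 */

		printf("testing %s\n", engines[i]->name);
	}

	return 0;
}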
On 18/05/2020 09:14, Chris Wilson wrote:
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtua
On 14-05-2020 at 16:58, Animesh Manna wrote:
> Pre-allocate command buffer in atomic_commit using intel_dsb_prepare
> function which also includes pinning and map in cpu domain.
>
> No functional change in dsb write/commit functions.
>
> Now dsb get/put function is removed and ref-count mechanism
== Series Details ==
Series: drm/i915/display: Return error from dbuf allocation failure
URL : https://patchwork.freedesktop.org/series/77325/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8493 -> Patchwork_17682
Summary
--
On 18/05/2020 09:14, Chris Wilson wrote:
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual reque
On 18/05/2020 11:29, Chris Wilson wrote:
Tvrtko spotted that some selftests were using 'break' not 'continue',
which will fail for discontiguous engine layouts such as on Icelake
(which may have vcs0 and vcs2).
Reported-by: Tvrtko Ursulin
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
Quoting Tvrtko Ursulin (2020-05-18 11:36:15)
>
> On 18/05/2020 09:14, Chris Wilson wrote:
> > @@ -5519,7 +5537,7 @@ static void virtual_submission_tasklet(unsigned long
> > data)
> > submit_engine:
> > GEM_BUG_ON(RB_EMPTY_NODE(&node->rb));
> > node->prio = prio;
> >
On 18/05/2020 09:14, Chris Wilson wrote:
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves
Quoting Tvrtko Ursulin (2020-05-18 11:51:40)
>
> On 18/05/2020 09:14, Chris Wilson wrote:
> > @@ -2125,9 +2128,10 @@ static void execlists_dequeue(struct intel_engine_cs
> > *engine)
> >* find itself trying to jump back into a context it has just
> >* completed and barf.
> >
Currently intel_hdcp_update_pipe() also gets called for non-hdcp
connectors and goes through its conditional code flow, which is completely
unnecessary for non-hdcp connectors, therefore it makes sense to
have an early return. No functional change.
Reviewed-by: Uma Shankar
Signed-off-by: Ansh
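For illustration, a hedged sketch of such an early return (the hdcp->shim test is an assumption about how a non-hdcp connector is detected here, not lifted from the actual patch):

/*
 * Sketch only: bail out of intel_hdcp_update_pipe() early for connectors
 * without HDCP support instead of walking the conditional flow below.
 * Whether the real patch keys off hdcp->shim is an assumption.
 */
static void intel_hdcp_update_pipe_sketch(struct intel_connector *connector)
{
	struct intel_hdcp *hdcp = &connector->hdcp;

	if (!hdcp->shim)
		return; /* non-hdcp connector: nothing to update */

	/* ... existing enable/disable/link-check flow continues here ... */
}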
No functional change.
Anshuman Gupta (2):
drm/i915/hdcp: Add update_pipe early return
drm/i915/hdcp: No direct access to power_well desc
drivers/gpu/drm/i915/display/intel_hdcp.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
--
2.26.0
_
HDCP code doesn't need to access power_well internals;
instead it should use intel_display_power_well_is_enabled()
to get the status of the desired power_well.
No functional change.
v2:
- used with_intel_runtime_pm instead of get/put. [Jani]
Cc: Jani Nikula
Signed-off-by: Anshuman Gupta
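A hedged sketch of the resulting pattern, assuming the helpers named above; the power-well id and function name are placeholders for illustration, not the actual patch:

/*
 * Sketch only: take a runtime-PM wakeref via the scoped macro and query the
 * power well through the helper instead of touching the power_well
 * descriptor directly. SKL_DISP_PW_1 is just an example id.
 */
static bool hdcp_power_well_enabled_sketch(struct drm_i915_private *i915)
{
	intel_wakeref_t wakeref;
	bool enabled = false;

	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
		enabled = intel_display_power_well_is_enabled(i915,
							      SKL_DISP_PW_1);

	return enabled;
}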
Added the changes in the next version, thanks for the review.
Regards,
Animesh
On 18-05-2020 16:01, Maarten Lankhorst wrote:
On 14-05-2020 at 16:58, Animesh Manna wrote:
Pre-allocate command buffer in atomic_commit using intel_dsb_prepare
function which also includes pinning and map in cpu domain.
N
== Series Details ==
Series: drm/i915/display: Return error from dbuf allocation failure
URL : https://patchwork.freedesktop.org/series/77325/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8493_full -> Patchwork_17682_full
== Series Details ==
Series: drm/i915/selftests: Change priority overflow detection
URL : https://patchwork.freedesktop.org/series/77326/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17683
Summary
---
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context
URL : https://patchwork.freedesktop.org/series/77343/
State : failure
== Summary ==
CALLscripts/checksyscalls.sh
CALLscripts/atomic/check-atomics.sh
DESCEND obj
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context
URL : https://patchwork.freedesktop.org/series/77344/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
a07fc7f12a37 drm/i915: Move saturated workload detection
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context
URL : https://patchwork.freedesktop.org/series/77344/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won
Pre-allocate the command buffer in atomic_commit using the intel_dsb_prepare
function, which also includes pinning and mapping in the CPU domain.
No functional change in the dsb write/commit functions.
Now the dsb get/put functions are removed and the ref-count mechanism is
not needed. The dsb APIs below were added to do the respective job
From: Ville Syrjälä
The current dbuf slice computation only happens when there are
active pipes. If we are turning off all the pipes we just leave
the dbuf slice mask at its previous value, which may be something
other than BIT(S1). If runtime PM kicks in, it will however
turn off everything
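A minimal sketch of the fix that description implies (constant names assumed, not taken from the patch): when no pipes remain active, fall back to just the first dbuf slice instead of carrying over the stale mask.

#include <stdint.h>

#define BIT(n)	(1u << (n))

enum dbuf_slice { DBUF_S1, DBUF_S2 };

/*
 * Sketch only: with no active pipes there is nothing to derive the mask
 * from, so return just slice 1 rather than whatever the previous state had.
 */
static uint32_t dbuf_slices_sketch(uint32_t active_pipes)
{
	if (!active_pipes)
		return BIT(DBUF_S1);

	/* placeholder for the real per-pipe slice computation */
	return BIT(DBUF_S1) | BIT(DBUF_S2);
}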
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context
URL : https://patchwork.freedesktop.org/series/77344/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17685
From: Ville Syrjälä
Dbuf slice tracking busted across runtime PM. Back to the
drawing board.
This reverts commit 70b1a26f299c729cc1a5099374cc02568b05ec7d.
Signed-off-by: Ville Syrjälä
---
drivers/gpu/drm/i915/intel_pm.c | 26 +++---
1 file changed, 7 insertions(+), 19 dele
From: Ville Syrjälä
Dbuf slice tracking busted across runtime PM. Back to the
drawing board.
This reverts commit 3cf43cdc63fbc3df19ea8398e9b8717ab44a6304.
Signed-off-by: Ville Syrjälä
---
drivers/gpu/drm/i915/display/intel_display.c | 67 ++-
.../drm/i915/display/intel_display_power.c
From: Ville Syrjälä
Dbuf slice tracking busted across runtime PM. Back to the
drawing board.
This reverts commit c7c0e7ebe4d9963573f81399374e4e95f37fd8e3.
Signed-off-by: Ville Syrjälä
---
drivers/gpu/drm/i915/display/intel_display.c | 41 +++-
drivers/gpu/drm/i915/intel_pm.c
From: Ville Syrjälä
Dbuf slice tracking busted across runtime PM. Back to the
drawing board.
This reverts commit 0cde0e0ff5f5ebd27507069250728c763c14ac81.
Signed-off-by: Ville Syrjälä
---
drivers/gpu/drm/i915/intel_pm.c | 7 +++
drivers/gpu/drm/i915/intel_pm.h | 1 +
2 files changed, 8 in
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve in a local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.
v2: Keep a single virtua
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev5)
URL : https://patchwork.freedesktop.org/series/77308/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
960f5e78842f drm/i915/selftests: Measure dispatch latency
-:727: WARNING:MEMORY_BARRIER: memory b
On 18/05/2020 09:14, Chris Wilson wrote:
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We can not rebind
the context to a new sibling while it is inflight as the context save
will conflict, hence we wait. As we can
Quoting Tvrtko Ursulin (2020-05-18 13:53:29)
>
> On 18/05/2020 09:14, Chris Wilson wrote:
> > Once a virtual engine has been bound to a sibling, it will remain bound
> > until we finally schedule out the last active request. We can not rebind
> > the context to a new sibling while it is inflight a
On 18/05/2020 13:33, Chris Wilson wrote:
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves
Quoting Tvrtko Ursulin (2020-05-18 14:01:27)
>
> On 18/05/2020 13:33, Chris Wilson wrote:
> > +static struct virtual_engine *
> > +first_virtual_engine(struct intel_engine_cs *engine)
> > +{
> > + struct intel_engine_execlists *el = &engine->execlists;
> > + struct rb_node *rb = rb_first_c
Quoting Ville Syrjala (2020-05-18 13:13:54)
> From: Ville Syrjälä
>
> The current dbuf slice computation only happens when there are
> active pipes. If we are turning off all the pipes we just leave
> the dbuf slice mask at its previous value, which may be something
> other than BIT(S1). If runt
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev5)
URL : https://patchwork.freedesktop.org/series/77308/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17686
Summary
---
== Series Details ==
Series: drm/i915/selftests: Refactor sibling selection
URL : https://patchwork.freedesktop.org/series/77352/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17687
Summary
---
**SU
On Mon, May 18, 2020 at 09:33:29AM +0300, Ville Syrjälä wrote:
> On Sun, May 17, 2020 at 03:12:49PM +0300, Lisovskiy, Stanislav wrote:
> > On Sat, May 16, 2020 at 07:15:42PM +0300, Ville Syrjala wrote:
> > > From: Ville Syrjälä
> > >
> > > The current dbuf slice computation only happens when ther
== Series Details ==
Series: drm/i915/selftests: Change priority overflow detection
URL : https://patchwork.freedesktop.org/series/77326/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494_full -> Patchwork_17683_full
Summa
On Mon, May 18, 2020 at 03:23:00PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Dbuf slice tracking busted across runtime PM. Back to the
> drawing board.
>
> This reverts commit 70b1a26f299c729cc1a5099374cc02568b05ec7d.
>
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/i
== Series Details ==
Series: HDCP minor refactoring (rev2)
URL : https://patchwork.freedesktop.org/series/77224/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17688
Summary
---
**SUCCESS**
No reg
== Series Details ==
Series: series starting with [1/4] drm/i915/params: don't expose
inject_probe_failure in debugfs (rev2)
URL : https://patchwork.freedesktop.org/series/77272/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
0df592070475 drm/i915/params: don't expose inject_pr
== Series Details ==
Series: series starting with [1/4] drm/i915/params: don't expose
inject_probe_failure in debugfs (rev2)
URL : https://patchwork.freedesktop.org/series/77272/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17689
===
On Fri, May 15, 2020 at 10:48 AM Ramalingam C wrote:
>
> On 2020-04-29 at 15:54:46 -0400, Sean Paul wrote:
> > From: Sean Paul
> >
> > Changes in v6:
> > -Rebased on -tip
> > -Disabled HDCP over MST on GEN12
> > -Addressed Lyude's review comments in the QUERY_STREAM_ENCRYPTION_STATUS
> > patch
>
On Mon, May 18, 2020 at 03:23:03PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Dbuf slice tracking busted across runtime PM. Back to the
> drawing board.
>
> This reverts commit 3cf43cdc63fbc3df19ea8398e9b8717ab44a6304.
>
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/d
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
v2: Refactor all the instruction building into emitters.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Joonas Lahtinen
---
drivers/gpu/drm/i915/selftests/i915_request.c |
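The emitters mentioned in v2 follow the usual batch-building shape that also shows up in the review quote further down (cs = emit_timestamp_store(cs, ...)): each helper writes a few dwords at the current command-stream pointer and hands back the advanced pointer. A self-contained sketch of that shape, with an invented opcode and helper name:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* Invented opcode for the sketch; not a real command-streamer instruction. */
#define FAKE_STORE_DWORD 0xdead0000u

/* Emit one "store value at offset" command, return the advanced pointer. */
static u32 *emit_store(u32 *cs, u32 offset, u32 value)
{
	*cs++ = FAKE_STORE_DWORD;
	*cs++ = offset;
	*cs++ = value;
	return cs;
}

int main(void)
{
	u32 batch[16];
	u32 *cs = batch;

	/* Emitters chain naturally: each call appends and hands back cs. */
	cs = emit_store(cs, 0x100, 1);
	cs = emit_store(cs, 0x104, 2);

	printf("batch length: %u dwords\n", (unsigned int)(cs - batch));
	return 0;
}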
On Mon, May 18, 2020 at 03:23:01PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Dbuf slice tracking busted across runtime PM. Back to the
> drawing board.
>
> This reverts commit c7c0e7ebe4d9963573f81399374e4e95f37fd8e3.
>
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/d
On Mon, May 18, 2020 at 03:23:02PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> Dbuf slice tracking busted across runtime PM. Back to the
> drawing board.
>
> This reverts commit 0cde0e0ff5f5ebd27507069250728c763c14ac81.
>
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/i
On 18/05/2020 14:00, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-05-18 13:53:29)
On 18/05/2020 09:14, Chris Wilson wrote:
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We can not rebind
the context to a new
== Series Details ==
Series: drm/i915: Fix dbuf slice mask when turning off all the pipes (rev2)
URL : https://patchwork.freedesktop.org/series/77322/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17690
Su
== Series Details ==
Series: series starting with [1/4] Revert "drm/i915: Clean up dbuf debugs
during .atomic_check()"
URL : https://patchwork.freedesktop.org/series/77358/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
4de007ca5a0f Revert "drm/i915: Clean up dbuf debugs during
Chris Wilson writes:
> A useful metric of the system's health is how fast we can tell the GPU
> to do various actions, so measure our latency.
>
> v2: Refactor all the instruction building into emitters.
>
> Signed-off-by: Chris Wilson
> Cc: Mika Kuoppala
> Cc: Joonas Lahtinen
> ---
> drivers
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev5)
URL : https://patchwork.freedesktop.org/series/77308/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494_full -> Patchwork_17686_full
Summary
Quoting Mika Kuoppala (2020-05-18 16:07:47)
> Chris Wilson writes:
> > + cs = emit_timestamp_store(cs, ce, offset + i * sizeof(u32));
>
> Are the dual writes here so that when you kick the semaphore, you get the
> latest no matter which side you were on?
We wait on the first write in
Quoting Chris Wilson (2020-05-18 16:14:43)
> Quoting Mika Kuoppala (2020-05-18 16:07:47)
> > Chris Wilson writes:
> > > + cs = emit_timestamp_store(cs, ce, offset + i * sizeof(u32));
> >
> > Are the dual writes here so that when you kick the semaphore, you get the
> > latest no matter
== Series Details ==
Series: series starting with [1/4] Revert "drm/i915: Clean up dbuf debugs
during .atomic_check()"
URL : https://patchwork.freedesktop.org/series/77358/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17691
=
Quoting Tvrtko Ursulin (2020-05-18 15:55:46)
>
> On 18/05/2020 14:00, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-05-18 13:53:29)
> >>
> >> On 18/05/2020 09:14, Chris Wilson wrote:
> >>> Once a virtual engine has been bound to a sibling, it will remain bound
> >>> until we finally schedul
== Series Details ==
Series: drm/i915/selftests: Refactor sibling selection
URL : https://patchwork.freedesktop.org/series/77352/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494_full -> Patchwork_17687_full
Summary
-
Quoting Chris Wilson (2020-05-18 16:40:15)
> Quoting Tvrtko Ursulin (2020-05-18 15:55:46)
> >
> > On 18/05/2020 14:00, Chris Wilson wrote:
> > > Quoting Tvrtko Ursulin (2020-05-18 13:53:29)
> > >>
> > >> On 18/05/2020 09:14, Chris Wilson wrote:
> > >>> Once a virtual engine has been bound to a sib
== Series Details ==
Series: drm/i915/dsb: Pre allocate and late cleanup of cmd buffer (rev9)
URL : https://patchwork.freedesktop.org/series/73036/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17692
Summa
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context (rev2)
URL : https://patchwork.freedesktop.org/series/77344/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
ebeb414a1e8b drm/i915: Move saturated workload det
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context (rev2)
URL : https://patchwork.freedesktop.org/series/77344/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each com
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev7)
URL : https://patchwork.freedesktop.org/series/77308/
State : failure
== Summary ==
Applying: drm/i915/selftests: Measure dispatch latency
error: corrupt patch at line 23
error: could not build fake ancestor
hint:
== Series Details ==
Series: series starting with [1/8] drm/i915: Move saturated workload detection
back to the context (rev2)
URL : https://patchwork.freedesktop.org/series/77344/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8494 -> Patchwork_17693
=
== Series Details ==
Series: HDCP minor refactoring (rev2)
URL : https://patchwork.freedesktop.org/series/77224/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8494_full -> Patchwork_17688_full
Summary
---
**SUCCESS**