== Series Details ==
Series: drm/i915: HDCP: retry link integrity check on failure
URL : https://patchwork.freedesktop.org/series/76917/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17572_full
Summary
We need new PCode request commands and reply codes
to be added as a preparation patch for restricting
QGV points for the new SAGV support.
v2: - Extracted those changes into separate patch
(Ville Syrjälä)
v3: - Moved new PCode masks to another place from
PCode commands(Ville)
v4: - Move
According to BSpec 53998, we should try to
restrict QGV points which can't provide
enough bandwidth for the desired display configuration.
Currently we just compare against all of
them and take the minimum (worst case).
v2: Fixed wrong PCode reply mask, removed hardcoded
values.
v3: Forbid si
On 05/05/2020 03:09, D Scott Phillips wrote:
D Scott Phillips writes:
Previously we set HDC_PIPELINE_FLUSH in dword 1 of gen12
pipe_control commands. HDC Pipeline flush actually resides in
dword 0, and the bit we were setting in dword 1 was Indirect State
Pointers Disable, which invalidates in
On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> From: Oliver Barta
>
> A single Ri mismatch doesn't automatically mean that the link integrity
> is broken. Update and check of Ri and Ri' are done asynchronously. In
> case an update happens just between the read of Ri' and the check against
Chris Wilson writes:
> Use a local to shrink a line under 80 columns, and refactor the common
> emit_xcs_breadcrumb() wrapper of ggtt-write.
>
> Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuoppala
> ---
> drivers/gpu/drm/i915/gt/intel_lrc.c | 34 +
> 1 file cha
== Series Details ==
Series: drm/i915/gt: Small tidy of gen8+ breadcrumb emission
URL : https://patchwork.freedesktop.org/series/76918/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17573_full
Summary
== Series Details ==
Series: SAGV support for Gen12+ (rev34)
URL : https://patchwork.freedesktop.org/series/75129/
State : failure
== Summary ==
Applying: drm/i915: Introduce skl_plane_wm_level accessor.
Applying: drm/i915: Use bw state for per crtc SAGV evaluation
Using index info to reconstr
On 05/05/2020 00:34, Matt Roper wrote:
On Mon, May 04, 2020 at 12:43:54PM +0100, Tvrtko Ursulin wrote:
On 02/05/2020 05:57, Matt Roper wrote:
Reads of multicast registers give the value associated with
slice/subslice 0 by default unless we manually steer the reads to a
different slice/subslic
== Series Details ==
Series: drm/i915/gt: Stop holding onto the pinned_default_state (rev2)
URL : https://patchwork.freedesktop.org/series/76738/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17574_full
===
If we cannot trust the reset will flush out the CS event queue such that
process_csb() reports an accurate view of HW, we will need to search the
active and pending contexts to determine which was actually running at
the time we issued the reset.
Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuop
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of the created entries in the DMA address space. However the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of the entries passed to dma_map_sg. The
sg_table->nent
Chris Wilson writes:
> Quoting Mika Kuoppala (2020-04-30 16:47:30)
>> Flush TDL and L3.
>>
>> Signed-off-by: Mika Kuoppala
>
> That's very misnamed bit!
>
> There's a comment that this must be paired with the corresponding pc in
> the same HW dispatch.
Not for gen12.
-Mika
As patches 2, 3 and 7 were already pushed, I can't send individual
patches now, because the same patch fails to apply twice.
So I will _have to_ resend the whole series again.
Best Regards,
Lisovskiy Stanislav
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
On Tue, May 05, 2020 at 07:55:00AM +0200, Michał Orzeł wrote:
>
>
> On 04.05.2020 13:53, Daniel Vetter wrote:
> > On Fri, May 01, 2020 at 05:49:33PM +0200, Michał Orzeł wrote:
> >>
> >>
> >> On 30.04.2020 20:30, Daniel Vetter wrote:
> >>> On Thu, Apr 30, 2020 at 5:38 PM Sean Paul wrote:
>
>
Chris Wilson writes:
> As we only restore the default context state upon banning a context, we
> only need enough of the state to run the ring and nothing more. That is
> we only need our bare protocontext.
>
> Signed-off-by: Chris Wilson
> Cc: Tvrtko Ursulin
> Cc: Mika Kuoppala
> Cc: Andi Shy
On Mon, May 04, 2020 at 09:41:16PM +, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915/tgl+: Fix interrupt handling for DP AUX transactions
> URL : https://patchwork.freedesktop.org/series/76892/
> State : success
Pushed to -dinq, thanks for the review and re-reporting.
>
> ==
Quoting Mika Kuoppala (2020-05-05 10:12:49)
> > @@ -4166,8 +4163,6 @@ static void __execlists_reset(struct intel_engine_cs
> > *engine, bool stalled)
> >* image back to the expected values to skip over the guilty request.
> >*/
> > __i915_request_reset(rq, stalled);
> > -
Quoting Chris Wilson (2020-05-05 10:21:46)
> Quoting Mika Kuoppala (2020-05-05 10:12:49)
> > > @@ -4166,8 +4163,6 @@ static void __execlists_reset(struct
> > > intel_engine_cs *engine, bool stalled)
> > >* image back to the expected values to skip over the guilty
> > > request.
> > >
Starting from TGL we need to have separate wm0
values for SAGV and non-SAGV, which affects
how calculations are done.
v2: Remove long lines
v3: Removed COLOR_PLANE enum references
v4, v5, v6: Fixed rebase conflict
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_displa
Introduce platform dependent SAGV checking in
combination with bandwidth state pipe SAGV mask.
v2, v3, v4, v5, v6: Fix rebase conflict
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/intel_pm.c | 30 --
1 file changed, 28 insertions(+), 2 deletions(-)
di
For Gen11+ platforms BSpec suggests disabling specific
QGV points separately, depending on bandwidth limitations
and current display configuration. Thus it required adding
a new PCode request for disabling QGV points and some
refactoring of already existing SAGV code.
Also had to refactor intel_can
Even if one client is blocked on a resource, that should not impact
another client.
Signed-off-by: Chris Wilson
---
tests/i915/gem_ctx_exec.c | 122 +-
1 file changed, 121 insertions(+), 1 deletion(-)
diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ct
For the future Gen12 SAGV implementation we need to
seamlessly alter the calculated wm levels, depending
on whether we are allowed to enable SAGV or not.
So this accessor will give additional flexibility
to do that.
Currently this accessor still simply works as a
"pass-through" function. This will be
Flip the switch and enable SAGV support
for Gen12 also.
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/intel_pm.c | 4
1 file changed, 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 5d0aab515e2a..a12f1d0a0be2 100644
--- a/dr
On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> Introduce platform dependent SAGV checking in
> combination with bandwidth state pipe SAGV mask.
>
> v2, v3, v4, v5, v6: Fix rebase conflict
>
> Signed-off-by: Stanislav Lisovskiy
> ---
> drivers/gpu/drm/i915/intel_pm.c | 30
== Series Details ==
Series: drm/i915/execlists: Record the active CCID from before reset
URL : https://patchwork.freedesktop.org/series/76946/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8427 -> Patchwork_17580
Summary
-
On Tue, May 05, 2020 at 01:22:44PM +0300, Stanislav Lisovskiy wrote:
> Starting from TGL we need to have a separate wm0
> values for SAGV and non-SAGV which affects
> how calculations are done.
>
> v2: Remove long lines
> v3: Removed COLOR_PLANE enum references
> v4, v5, v6: Fixed rebase conflict
On Tue, May 05, 2020 at 01:42:46PM +0300, Ville Syrjälä wrote:
> On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> > Introduce platform dependent SAGV checking in
> > combination with bandwidth state pipe SAGV mask.
> >
> > v2, v3, v4, v5, v6: Fix rebase conflict
> >
> > Sign
On Tue, May 05, 2020 at 01:22:45PM +0300, Stanislav Lisovskiy wrote:
> We need a new PCode request commands and reply codes
> to be added as a prepartion patch for QGV points
> restricting for new SAGV support.
>
> v2: - Extracted those changes into separate patch
> (Ville Syrjälä)
>
> v3:
igt_require_gem() is a peculiarity of i915/, move it out of the core.
Signed-off-by: Chris Wilson
---
lib/Makefile.sources| 2 +
lib/i915/gem.c | 80 +
lib/i915/gem.h | 30
lib/i915/gem
== Series Details ==
Series: Prefer drm_WARN* over WARN* (rev3)
URL : https://patchwork.freedesktop.org/series/75543/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17575_full
Summary
---
**SUCC
On 2020-05-05 at 14:06:51 +0200, Oliver Barta wrote:
> On Tue, May 5, 2020 at 9:38 AM Ramalingam C wrote:
> >
> > On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> > > From: Oliver Barta
> > >
> > > A single Ri mismatch doesn't automatically mean that the link integrity
> > > is broken. Upda
We need each test in an isolated context, so that bad results from one
test do not interfere with the next. In particular, we want to clean up
the device and reset it to the defaults so that they are known for the
next test, and the test can focus on behaviour it wants to control.
Signed-off-by: C
We recorded the dependencies for WAIT_FOR_SUBMIT in order that we could
correctly perform priority inheritance from the parallel branches to the
common trunk. However, for the purpose of timeslicing and reset
handling, the dependency is weak -- as the pair of requests are
allowed to run in paral
While we ordinarily do not skip submit-fences due to the accompanying
hook that we want to callback on execution, a submit-fence on the same
timeline is meaningless.
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
drivers/gpu/drm/i915/i915_request.c | 3 +++
1 file changed, 3 insertions(+)
== Series Details ==
Series: Introduce Rocket Lake (rev4)
URL : https://patchwork.freedesktop.org/series/76826/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17577_full
Summary
---
**FAILURE**
Signed-off-by: Chris Wilson
---
include/drm-uapi/i915_drm.h | 8 +---
lib/i915/gem_scheduler.c| 15 +++
lib/i915/gem_scheduler.h| 1 +
3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/include/drm-uapi/i915_drm.h b/include/drm-uapi/i915_drm.h
index 2b55af13
When we allow a wait on a future fence, it must autoexpire if the
fence is never signaled by userspace. Also put future fences to work, as
the intention is to use them, along with WAIT_SUBMIT and semaphores, for
userspace to perform its own fine-grained scheduling. Or simply run
concurrent c
== Series Details ==
Series: SAGV support for Gen12+ (rev35)
URL : https://patchwork.freedesktop.org/series/75129/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17581
Summary
---
**SUCCESS**
No r
If a syncobj has not yet been assigned, treat it as a future fence and
install and wait upon a dma-fence-proxy. The proxy will be replaced by
the real fence later, and that fence will be responsible for signaling
our waiter.
Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signe
Quoting Chris Wilson (2020-05-05 14:48:19)
> +static void await_proxy_work(struct work_struct *work)
> +{
> + struct await_proxy *ap = container_of(work, typeof(*ap), work);
> + struct i915_request *rq = ap->request;
> +
> + del_timer_sync(&ap->timer);
> +
> + if (ap->fence)
== Series Details ==
Series: drm/i915/tgl: Put HDC flush pipe_control bit in the right dword
URL : https://patchwork.freedesktop.org/series/76925/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17578_full
==
We need to calculate cdclk after watermarks/ddb has been calculated
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with current code organization.
Setting CDCLK according to DBuf BW requirements and not just rejecting
if it doesn't satisfy BW r
We now quite often need to iterate only particular DBuf slices
in a mask, whether they are active or related to a particular crtc.
v2: - Minor code refactoring
v3: - Use enum for max slices instead of macro
Let's make our life a bit easier and use a macro for that.
Signed-off-by: Stanislav Lisovskiy
No need to bump up CDCLK anymore, as it is now correctly
calculated, accounting for DBuf BW as BSpec says.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_cdclk.c | 12
1 file changed, 12 deletions(-)
diff --git a/drivers/gpu/drm/i9
According to BSpec, max BW per slice is calculated using the formula
Max BW = CDCLK * 64. Currently when calculating min CDCLK we
account only for per-plane requirements, however in order to avoid
FIFO underruns we need to estimate the accumulated BW consumed by
all planes (ddb entries basically) residing on tha
In Gen11+, whenever we might exceed DBuf bandwidth we might need to
recalculate CDCLK, with which DBuf bandwidth is scaled.
Total Dbuf bw used might change based on particular plane needs.
In intel_atomic_check_planes we try to filter out the cases when
we definitely don't need to recalculate requir
On Tue, May 05, 2020 at 10:20:58AM +0530, Anshuman Gupta wrote:
> On 2020-05-04 at 15:52:13 -0700, Matt Roper wrote:
> > RKL power wells are similar to TGL power wells, but have some important
> > differences:
> >
> > * PG1 now has pipe A's VDSC (rather than sticking it in PG2)
> > * PG2 no long
== Series Details ==
Series: series starting with [1/2] drm/i915: Mark concurrent submissions with a
weak-dependency
URL : https://patchwork.freedesktop.org/series/76953/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17582
===
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL : https://patchwork.freedesktop.org/series/74739/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
b9fba3b4f20a drm/i915: Decouple cdclk calculation from modeset checks
0debc96ba00f drm/i915: Fo
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL : https://patchwork.freedesktop.org/series/74739/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17583
Summary
---
== Series Details ==
Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a
weak-dependency (rev3)
URL : https://patchwork.freedesktop.org/series/76912/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
d336bdf650f8 drm/i915: Mark concurrent submissions wi
On Tue, May 05, 2020 at 07:39:04AM -0700, Matt Roper wrote:
> On Tue, May 05, 2020 at 10:20:58AM +0530, Anshuman Gupta wrote:
> > On 2020-05-04 at 15:52:13 -0700, Matt Roper wrote:
> > > RKL power wells are similar to TGL power wells, but have some important
> > > differences:
> > >
> > > * PG1 n
Replacing an inter-engine fence with a semaphore reduced the HW
execution latency, but that comes at a cost. For normal fences, we are
able to propagate the metadata such as errors along with the signaling.
For semaphores, we are missing this error propagation so add it in the
back channel we use t
On Tue, May 5, 2020 at 3:27 AM Oliver Barta wrote:
>
> On Mon, May 4, 2020 at 10:24 PM Sean Paul wrote:
> >
> > On Mon, May 4, 2020 at 1:32 PM Oliver Barta wrote:
> > >
> > > From: Oliver Barta
> > >
> > > A single Ri mismatch doesn't automatically mean that the link integrity
> > > is broken.
== Series Details ==
Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a
weak-dependency (rev3)
URL : https://patchwork.freedesktop.org/series/76912/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17584
On 2020-05-01 11:20, Jason Gunthorpe wrote:
From: Jason Gunthorpe
hmm_vma_walk->last is supposed to be updated after every write to the
pfns, so that it can be returned by hmm_range_fault(). However, this is
not done consistently. Fortunately nothing checks the return code of
hmm_range_fault()
On 2020-05-01 11:20, Jason Gunthorpe wrote:
From: Jason Gunthorpe
This is just an alias for HMM_PFN_ERROR, nothing cares that the error was
because of a special page vs any other error case.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
Acked-by: Felix Kuehling
Reviewed-by: Ch
On Tue, May 5, 2020 at 9:38 AM Ramalingam C wrote:
>
> On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> > From: Oliver Barta
> >
> > A single Ri mismatch doesn't automatically mean that the link integrity
> > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > case an
On Mon, May 4, 2020 at 10:24 PM Sean Paul wrote:
>
> On Mon, May 4, 2020 at 1:32 PM Oliver Barta wrote:
> >
> > From: Oliver Barta
> >
> > A single Ri mismatch doesn't automatically mean that the link integrity
> > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > case an
On 2020-05-01 11:20, Jason Gunthorpe wrote:
From: Jason Gunthorpe
Presumably the intent here was that hmm_range_fault() could put the data
into some HW specific format and thus avoid some work. However, nothing
actually does that, and it isn't clear how anything actually could do that
as hmm_ra
Lionel Landwerlin writes:
> On 05/05/2020 03:09, D Scott Phillips wrote:
>> D Scott Phillips writes:
>>
>>> Previously we set HDC_PIPELINE_FLUSH in dword 1 of gen12
>>> pipe_control commands. HDC Pipeline flush actually resides in
>>> dword 0, and the bit we were setting in dword 1 was Indirect
== Series Details ==
Series: drm/i915: Propagate fence->error across semaphores
URL : https://patchwork.freedesktop.org/series/76968/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17585
Summary
---
Quoting Chris Wilson (2020-05-05 17:13:02)
> Replacing an inter-engine fence with a semaphore reduced the HW
> execution latency, but that comes at a cost. For normal fences, we are
> able to propagate the metadata such as errors along with the signaling.
> For semaphores, we are missing this error
On Wed, Apr 29, 2020 at 12:20 PM Ramalingam C wrote:
>
> On 2020-04-29 at 10:46:29 -0400, Sean Paul wrote:
> > On Wed, Apr 29, 2020 at 10:22 AM Ramalingam C
> > wrote:
> > >
> > > On 2020-04-29 at 09:58:16 -0400, Sean Paul wrote:
> > > > On Wed, Apr 29, 2020 at 9:50 AM Ramalingam C
> > > > wro
Hi Chris,
On Mon, May 04, 2020 at 05:48:48AM +0100, Chris Wilson wrote:
> As we only restore the default context state upon banning a context, we
> only need enough of the state to run the ring and nothing more. That is
> we only need our bare protocontext.
>
> Signed-off-by: Chris Wilson
> Cc:
== Series Details ==
Series: drm/i915/execlists: Record the active CCID from before reset
URL : https://patchwork.freedesktop.org/series/76946/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8427_full -> Patchwork_17580_full
Quoting Andi Shyti (2020-05-05 21:08:03)
> Hi Chris,
>
> On Mon, May 04, 2020 at 05:48:48AM +0100, Chris Wilson wrote:
> > As we only restore the default context state upon banning a context, we
> > only need enough of the state to run the ring and nothing more. That is
> > we only need our bare p
On Tue, May 05, 2020 at 02:01:16PM +0300, Ville Syrjälä wrote:
> On Tue, May 05, 2020 at 01:42:46PM +0300, Ville Syrjälä wrote:
> > On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> > > Introduce platform dependent SAGV checking in
> > > combination with bandwidth state pipe SA
Mika Kuoppala writes:
> Aim for completeness for invalidating everything
> and mark state pointers stale.
>
> Signed-off-by: Mika Kuoppala
nak, this breaks iris. indirect state disable removes push constant
state from the render context, not just invalidating it
ephemerally. iris is depending
Mika Kuoppala writes:
> HDC pipeline flush is bit on the first dword of
> the PIPE_CONTROL, not the second. Make it so.
>
> Signed-off-by: Mika Kuoppala
Fixes: 4aa0b5d457f5 ("drm/i915/tgl: Add HDC Pipeline Flush")
On Fri, May 01, 2020 at 11:32:17PM +, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915/icp: Add Wa_14010685332
> URL : https://patchwork.freedesktop.org/series/76841/
> State : failure
>
> == Summary ==
>
> CI Bug Log - changes from CI_DRM_8407_full -> Patchwork_17547_full
> ==
As a means for a small code consolidation, but primarily to start
thinking more carefully about internal-vs-external linkage, pull the
pair of i915_sw_fence_await_dma_fence() calls into a common routine.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_request.c | 16 ++--
1
Allow the callers to supply a dma-fence-proxy for asynchronous waiting on
future fences.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_syncobj.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 4
Let userspace know if they can trust timeslicing by including it as part
of the I915_PARAM_HAS_SCHEDULER::I915_SCHEDULER_CAP_TIMESLICING
v2: Only declare timeslicing if we can safely preempt userspace.
Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Link: https://gitlab.freed
Just tidy up the return handling for completed dma-fences.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_sw_fence.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c
b/drivers/gpu/drm/i915/i915_sw_fence.c
index 7daf81
This timeout is only used in one place, to provide a tiny bit of grace
for slow igts to clean up after themselves. If we are a bit stricter and
opt to kill outstanding requests rather than wait, we can speed up igt by
not waiting for 200ms after a hang.
Signed-off-by: Chris Wilson
---
drivers/gpu/d
We allow exported sync_file fences to be used as submit fences, but they
are not the only source of user fences. We also accept an array of
syncobj, and as with sync_file these are dma_fences underneath and so
feature the same set of controls. The submit-fence allows for a request
to be scheduled a
The downside of using semaphores is that we lose metadata passing
along the signaling chain. This is particularly nasty when we
need to pass along a fatal error such as EFAULT or EDEADLK. For
fatal errors we want to scrub the request before it is executed,
which means that we cannot preload the req
These were used to set various timeouts for the reset procedure
(deciding when the engine was dead, and even if the reset itself was not
making forward progress). No longer used.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_drv.h | 7 ---
1 file changed, 7 deletions(-)
diff --g
We need to preserve fatal errors from fences that are being terminated
as we hook them up.
Fixes: ef4688497512 ("drm/i915: Propagate fence errors")
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Matthew Auld
---
drivers/gpu/drm/i915/i915_request.c | 4 +++-
1 file changed, 3 insertions(+),
Expose the hardcoded timeout for unsignaled foreign fences as a Kconfig
option, primarily to allow brave systems to disable the timeout and
solely rely on correct signaling.
Signed-off-by: Chris Wilson
Cc: Joonas Lahtinen
---
drivers/gpu/drm/i915/Kconfig.profile | 12
dri
Often we need to create a fence for a future event that has not yet been
associated with a fence. We can store a proxy fence, a placeholder, in
the timeline and replace it later when the real fence is known. Any
listeners that attach to the proxy fence will automatically be signaled
when the real f
== Series Details ==
Series: series starting with [01/14] drm/i915: Mark concurrent submissions with
a weak-dependency
URL : https://patchwork.freedesktop.org/series/76973/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
c4828948d40e drm/i915: Mark concurrent submissions with a
== Series Details ==
Series: series starting with [01/14] drm/i915: Mark concurrent submissions with
a weak-dependency
URL : https://patchwork.freedesktop.org/series/76973/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8433 -> Patchwork_17586
=
== Series Details ==
Series: SAGV support for Gen12+ (rev35)
URL : https://patchwork.freedesktop.org/series/75129/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17581_full
Summary
---
**SUCCESS
== Series Details ==
Series: series starting with [1/2] drm/i915: Mark concurrent submissions with a
weak-dependency
URL : https://patchwork.freedesktop.org/series/76953/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17582_full
=
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL : https://patchwork.freedesktop.org/series/74739/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17583_full
Summar
== Series Details ==
Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a
weak-dependency (rev3)
URL : https://patchwork.freedesktop.org/series/76912/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17584_full
==
== Series Details ==
Series: drm/i915: Propagate fence->error across semaphores
URL : https://patchwork.freedesktop.org/series/76968/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17585_full
Summary
-