On Wed, May 13, 2020 at 9:02 AM Christian König
wrote:
>
> On 12.05.20 at 10:59, Daniel Vetter wrote:
> > This is a bit tricky: since ->notifier_lock is held while calling
> > dma_fence_wait, we must ensure that the read side (i.e.
> > dma_fence_begin_signalling) is on the same side. If we mi
Could our scheduling now be good enough that we avoid unnecessary
semaphores and do not waste bus cycles checking old results? Judging by
local runs of the examples from last year, possibly!
References: ca6e56f654e7 ("drm/i915: Disable semaphore busywaits on saturated
systems")
Signed-off-by: Chr
Now that atomic64_fetch_add() exists we can use it to return the base
context id, rather than the atomic64_add_return(N) - N concoction.
Suggested-by: Mika Kuoppala
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
---
drivers/dma-buf/dma-fence.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(
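For illustration, a minimal sketch of the allocator after such a change
(assuming the usual shape of dma_fence_context_alloc(); the fetch_add line
is the whole point):

/* Reserve @num consecutive fence context ids and return the base id.
 * atomic64_fetch_add() returns the counter value *before* the add,
 * which is exactly the base of the freshly reserved range -- no need
 * for the old atomic64_add_return(num) - num dance.
 */
u64 dma_fence_context_alloc(unsigned num)
{
        WARN_ON(!num);
        return atomic64_fetch_add(num, &dma_fence_context_counter);
}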
Sometimes we have to be very careful not to allocate underneath a mutex
(or spinlock) and yet still want to track activity. Enter
i915_active_acquire_for_context(). This raises the activity counter on
i915_active prior to use and ensures that the fence-tree contains a slot
for the context.
Signed-
These were used to set various timeouts for the reset procedure
(deciding when the engine was dead, and even if the reset itself was not
making forward progress). No longer used.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_drv.h | 7 ---
1 file changed, 7 deletions(-)
diff --g
The second try at staging the transfer of the breadcrumb. In part one,
we realised we could not simply move to the second engine as we were
only holding the breadcrumb lock on the first. So in commit 6c81e21a4742
("drm/i915/gt: Stage the transfer of the virtual breadcrumb"), we
removed it from the
Let userspace know if they can trust timeslicing by including it as part
of the I915_PARAM_HAS_SCHEDULER::I915_SCHEDULER_CAP_TIMESLICING
v2: Only declare timeslicing if we can safely preempt userspace.
Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Link: https://gitlab.freed
Allocate a few dma fence context ids that we can use to associate async work
[for the CPU] launched on behalf of this context. For extra fun, we allow
a configurable concurrency width.
A current example would be that we spawn an unbound worker for every
userptr get_pages. In the future, we wish to
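As a sketch of how such a reserved range could be consumed (all names below
are invented for illustration, not taken from the patch):

struct ctx_async {
        u64 base;               /* first fence context id in our range */
        unsigned int width;     /* configurable concurrency width */
};

static void ctx_async_init(struct ctx_async *ca, unsigned int width)
{
        ca->base = dma_fence_context_alloc(width);
        ca->width = width;
}

/* Each async worker slot gets its own fence timeline. */
static u64 ctx_async_context(const struct ctx_async *ca, unsigned int slot)
{
        return ca->base + (slot % ca->width);
}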
It is reasonably common for userspace (even modern drivers like iris) to
reuse an active address for a new buffer. This would cause the
application to stall under its mutex (originally struct_mutex) until the
old batches were idle and it could synchronously remove the stale PTE.
However, we can que
Often we need to create a fence for a future event that has not yet been
associated with a fence. We can store a proxy fence, a placeholder, in
the timeline and replace it later when the real fence is known. Any
listeners that attach to the proxy fence will automatically be signaled
when the real f
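The intended flow is roughly the following (helper names as proposed for the
dma-fence-proxy; since the proxy never landed upstream, treat the exact API
as an assumption):

/* Publish a placeholder now... */
struct dma_fence *proxy = dma_fence_create_proxy();     /* assumed API */

timeline_store(timeline, point, proxy);                 /* hypothetical */

/* ...and retarget it once the real fence exists; every callback
 * attached to the proxy is now tracking @real. */
dma_fence_replace_proxy(proxy, real);                   /* assumed API */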
We allow exported sync_file fences to be used as submit fences, but they
are not the only source of user fences. We also accept an array of
syncobj, and as with sync_file these are dma_fences underneath and so
feature the same set of controls. The submit-fence allows for a request
to be scheduled a
Currently, if an error is raised we always call the cleanup locally
[and skip the main work callback]. However, some future users may need
to take a mutex to cleanup and so we cannot immediately execute the
cleanup as we may still be in interrupt context.
With the execute-immediate flag, for most
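The underlying pattern is the usual one for cleanup that may sleep (generic
sketch, names invented):

struct my_fence_work {
        int error;
        struct work_struct cleanup_work;        /* runs my_fence_work_cleanup() */
};

static void my_fence_work_cleanup(struct my_fence_work *f);

static void my_fence_work_set_error(struct my_fence_work *f, int error)
{
        f->error = error;
        if (in_interrupt())
                schedule_work(&f->cleanup_work);        /* defer: may take a mutex */
        else
                my_fence_work_cleanup(f);               /* safe to run inline */
}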
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try and optimise t
The initial-breadcrumb is used to mark the end of the awaiting and the
beginning of the user payload. We verify that we do not start the user
payload before all signalers have completed, checking our semaphore setup
by looking for the initial breadcrumb being written too early. We also
want to ensure
It is illegal to wait on another vma while holding the vm->mutex, as
that easily leads to ABBA deadlocks (we wait on a second vma that waits
on us to release the vm->mutex). So while the vm->mutex exists, move the
waiting outside of the lock into the async binding pipeline.
Signed-off-by: Chris
Since there can only be one of in_fence/exec_fence, just use the single
in_fence local.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/gem/i915_gem_execbuffer.c| 24 ---
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbu
By providing the default values configured into the kernel via sysfs, it
is much more convenient for userspace to restore those sane defaults, or
at least know what is considered a good baseline. This is useful, for
example, to clean up after any failed userspace prior to commencing new
jobs.
/sys/c
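As a sketch of the mechanism (the attribute name and value source here are
illustrative, not the patch's):

/* Expose a kernel-configured default as a read-only sysfs file. */
static ssize_t heartbeat_default_show(struct kobject *kobj,
                                      struct kobj_attribute *attr, char *buf)
{
        return snprintf(buf, PAGE_SIZE, "%d\n",
                        CONFIG_DRM_I915_HEARTBEAT_INTERVAL);
}

static struct kobj_attribute heartbeat_default =
        __ATTR(heartbeat_interval_ms, 0444, heartbeat_default_show, NULL);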
While this does not appear to fix any issues, the backend itself knows
when it wants to emit a breadcrumb, so let it make the final call.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/selftests/i915_perf.c | 3 +--
drivers/gpu/drm/i915/selftests/igt_spinner.c | 3 +--
2 files changed, 2
In preparation for making eb_vma bigger and heavier to run in parallel,
we need to stop applying an in-place swap() to reorder around ww_mutex
deadlocks. Keep the array intact and reorder the locks using a dedicated
list.
Signed-off-by: Chris Wilson
---
.../gpu/drm/i915/gem/i915_gem_execbuffer.c
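For context, the classic ww_mutex acquire/backoff loop that such a list
feeds (a generic sketch; the i915 specifics and all names here differ from
the actual patch):

struct ww_acquire_ctx ctx;
struct eb_lock *lk, *contended = NULL;

ww_acquire_init(&ctx, &my_ww_class);
retry:
        list_for_each_entry(lk, &eb->lock_list, link) {
                if (lk == contended) {          /* taken via lock_slow below */
                        contended = NULL;
                        continue;
                }
                if (ww_mutex_lock(&lk->mutex, &ctx) == -EDEADLK) {
                        unlock_all_held(eb);    /* hypothetical: drop everything */
                        /* Rotate the contended lock to the head of the list
                         * and sleep on it -- reordering the list instead of
                         * swapping eb_vma array entries in place. */
                        list_move(&lk->link, &eb->lock_list);
                        ww_mutex_lock_slow(&lk->mutex, &ctx);
                        contended = lk;
                        goto retry;
                }
        }
        ww_acquire_done(&ctx);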
Treat the dependency between bonded requests as weak and leave the
remainder of the pair on the GPU if one hangs.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_lrc.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c
b/drivers/gpu/drm/i
Allow the callers to supply a dma-fence-proxy for asynchronous waiting on
future fences.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_syncobj.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 4
This timeout is only used in one place, to provide a tiny bit of grace
for slow igt to clean up after themselves. If we are a bit stricter and
opt to kill outstanding requests rather than wait, we can speed up igt by
not waiting for 200ms after a hang.
Signed-off-by: Chris Wilson
---
drivers/gpu/d
If a syncobj has not yet been assigned, treat it as a future fence and
install and wait upon a dma-fence-proxy. The proxy will be replaced by
the real fence later, and that fence will be responsible for signaling
our waiter.
Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signe
Now that we have fast timeslicing on semaphores, we no longer need to
prioritise non-semaphore work as we will yield any work blocked on a
semaphore to the next in the queue. Previously with no timeslicing,
blocking on the semaphore caused extremely bad scheduling with multiple
clients utilising m
Since a few rearrangements ago, we have an explicit reference to the
containing intel_context from inside the active reference and can drop
our own reference-handling dance around releasing the i915_active.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_context.c | 8
1
== Series Details ==
Series: series starting with [01/24] drm/i915/gt: Transfer old virtual
breadcrumbs to irq_worker
URL : https://patchwork.freedesktop.org/series/77206/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
cef179c6c753 drm/i915/gt: Transfer old virtual breadcrumbs
On 12/05/2020 17:20, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-05-12 17:07:23)
On 12/05/2020 16:52, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-05-12 16:17:30)
On 12/05/2020 14:22, Chris Wilson wrote:
- spin_lock(&old->breadcrumbs.irq_lock);
- if (!list_empty(&ve->cont
Quoting Tvrtko Ursulin (2020-05-13 09:10:48)
>
> On 12/05/2020 17:20, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-05-12 17:07:23)
> >>
> >> On 12/05/2020 16:52, Chris Wilson wrote:
> >>> Quoting Tvrtko Ursulin (2020-05-12 16:17:30)
>
> On 12/05/2020 14:22, Chris Wilson wrote:
>
On 31-Jan-20 4:50 PM, Ville Syrjälä wrote:
On Thu, Jan 30, 2020 at 08:07:07PM +, Souza, Jose wrote:
On Thu, 2020-01-30 at 19:25 +0200, Ville Syrjälä wrote:
On Thu, Jan 16, 2020 at 05:58:37PM -0800, José Roberto de Souza
wrote:
TGL timeouts when disabling MST transcoder and fifo underruns
== Series Details ==
Series: series starting with [01/24] drm/i915/gt: Transfer old virtual
breadcrumbs to irq_worker
URL : https://patchwork.freedesktop.org/series/77206/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8471 -> Patchwork_17639
==
On Tue, May 12, 2020 at 11:19 AM Chris Wilson wrote:
> Quoting Daniel Vetter (2020-05-12 10:08:47)
> > On Tue, May 12, 2020 at 10:04:22AM +0100, Chris Wilson wrote:
> > > Quoting Daniel Vetter (2020-05-12 09:59:29)
> > > > Design is similar to the lockdep annotations for workers, but with
> > > >
Upon gt resume, we first poison then sanitize the engine. However, our
testing shows that gen9 will very rarely retain the poisoned value from
the HWSP mappings of the execlists status registers. This suggests that
it is reading back from the HWSP, so rejig the register reset.
Signed-off-by: Chris
== Series Details ==
Series: drm/i915/gt: Reset execlists registers before HWSP
URL : https://patchwork.freedesktop.org/series/77207/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8471 -> Patchwork_17640
Summary
---
Chris Wilson writes:
> Upon gt resume, we first poison then sanitize the engine. However, our
> testing shows that gen9 will very rarely retain the poisoned value from
> the HWSP mappings of the execlists status registers. This suggests that
> it is reading back from the HWSP, so rejig the regist
For future Gen12 SAGV implementation we need to
seamlessly alter the calculated wm levels, depending
on whether we are allowed to enable SAGV or not.
So this accessor will give additional flexibility
to do that.
Currently this accessor is still simply working
as a "pass-through" function. This will be
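Something along these lines (a reconstruction of the pass-through form; the
real signature takes more surrounding state):

/* Pure pass-through for now; gives us a single place to later pick
 * between SAGV and non-SAGV watermark levels. */
static const struct skl_wm_level *
skl_plane_wm_level(const struct skl_plane_wm *wm, int level)
{
        return &wm->wm[level];
}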
Seems that only skl needs to have SAGV turned off
for multipipe scenarios, so let's do it this way.
If anything blows up - we can always revert this patch.
v2: Changed if condition to look better (Ville).
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/intel_pm.c | 13 -
Introduce platform dependent SAGV checking in
combination with bandwidth state pipe SAGV mask.
This is preparation for adding TGL support, which
requires a different way of SAGV checking.
v2, v3, v4, v5, v6: Fix rebase conflict
v7: - Nuke icl specific function, use skl
for icl as well, gen sp
For Gen11+ platforms BSpec suggests disabling specific
QGV points separately, depending on bandwidth limitations
and current display configuration. This required adding
a new PCode request for disabling QGV points and some
refactoring of already existing SAGV code.
Also had to refactor intel_can
Starting from TGL we need to have separate wm0
values for SAGV and non-SAGV, which affects
how calculations are done.
v2: Remove long lines
v3: Removed COLOR_PLANE enum references
v4, v5, v6: Fixed rebase conflict
v7: - Removed skl_plane_wm_level accessor from skl_allocate_pipe_ddb(Ville)
- R
Flip the switch and enable SAGV support
for Gen12 also.
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/intel_pm.c | 4
1 file changed, 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index ce5a5262471d..cb70db0cb58b 100644
--- a/dr
According to BSpec 53998, we should try to
restrict qgv points, which can't provide
enough bandwidth for desired display configuration.
Currently we are just comparing against all of
those and taking the minimum (worst case).
v2: Fixed wrong PCode reply mask, removed hardcoded
values.
v3: Forbid si
Quoting Mika Kuoppala (2020-05-13 10:32:37)
> Chris Wilson writes:
>
> > Upon gt resume, we first poison then sanitize the engine. However, our
> > testing shows that gen9 will very rarely retain the poisoned value from
> > the HWSP mappings of the execlists status registers. This suggests that
>
Upon gt resume, we first poison then sanitize the engine. However, our
testing shows that gen9 will very rarely retain the poisoned value from
the HWSP mappings of the execlists status registers. This suggests that
it is reading back from the HWSP, so rejig the register reset.
v2: Maybe RING_CONTE
== Series Details ==
Series: SAGV support for Gen12+ (rev37)
URL : https://patchwork.freedesktop.org/series/75129/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
1a28a5c25a7d drm/i915: Introduce skl_plane_wm_level accessor.
c93186da4f39 drm/i915: Extract skl SAGV checking
197c50
This fixes the following use-after-free problem in case an MST down
message times out while waiting for its response:
[ 449.022841] [drm:drm_dp_mst_wait_tx_reply.isra.26] timedout msg send
80ba7fa2 2 0
[ 449.022898] [ cut here ]
[ 449.022903] list_add co
== Series Details ==
Series: SAGV support for Gen12+ (rev37)
URL : https://patchwork.freedesktop.org/series/75129/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8471 -> Patchwork_17641
Summary
---
**SUCCESS**
No r
If our CPU client is very slow to notice that the GPU spinner has
started, we may consume the full heartbeat interval without noticing.
This is bad if we are trying to test that a client that yields within the
heartbeat interval is not selected for termination.
Closes: https://gitlab.freedesktop.or
== Series Details ==
Series: drm/i915/gt: Reset execlists registers before HWSP (rev3)
URL : https://patchwork.freedesktop.org/series/77207/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8471 -> Patchwork_17642
Summary
-Original Message-
From: Navare, Manasi D
Sent: Wednesday, May 13, 2020 11:05 AM
To: intel-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Cc: Modem, Bhanuprakash ; Navare, Manasi D
; Jani Nikula ; Ville
Syrjälä
Subject: [PATCH v5 3/3] drm/i915/dp: Expose connector VRR info
== Series Details ==
Series: SAGV support for Gen12+ (rev37)
URL : https://patchwork.freedesktop.org/series/75129/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8471_full -> Patchwork_17641_full
Summary
---
**SUCCESS
On Wed, May 13, 2020 at 01:58:47PM +0530, Sharma, Swati2 wrote:
>
>
> On 31-Jan-20 4:50 PM, Ville Syrjälä wrote:
> > On Thu, Jan 30, 2020 at 08:07:07PM +, Souza, Jose wrote:
> >> On Thu, 2020-01-30 at 19:25 +0200, Ville Syrjälä wrote:
> >>> On Thu, Jan 16, 2020 at 05:58:37PM -0800, José Rober
On Wed, May 13, 2020 at 03:09:52PM +0300, Ville Syrjälä wrote:
> On Wed, May 13, 2020 at 01:58:47PM +0530, Sharma, Swati2 wrote:
> >
> >
> > On 31-Jan-20 4:50 PM, Ville Syrjälä wrote:
> > > On Thu, Jan 30, 2020 at 08:07:07PM +, Souza, Jose wrote:
> > >> On Thu, 2020-01-30 at 19:25 +0200, Vill
== Series Details ==
Series: series starting with [v5,1/3] drm/dp: DRM DP helper for reading Ignore
MSA from DPCD (rev2)
URL : https://patchwork.freedesktop.org/series/77204/
State : failure
== Summary ==
Applying: drm/dp: DRM DP helper for reading Ignore MSA from DPCD
Applying: drm/i915/dp:
== Series Details ==
Series: drm/dp_mst: Fix timeout handling of MST down messages
URL : https://patchwork.freedesktop.org/series/77216/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8472 -> Patchwork_17643
Summary
---
== Series Details ==
Series: drm/i915/gt: Reset execlists registers before HWSP (rev3)
URL : https://patchwork.freedesktop.org/series/77207/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8471_full -> Patchwork_17642_full
Su
It is possible for a residual tasklet to be pending execution as we
resume (whether that's some prior test kicking off the tasklet, or if we
are in a suspend/resume stress test). As such, we do not want that
tasklet to execute in the middle of our sanitization, such that it sees
the poisoned state.
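The shape of the fix is roughly (sketch; the sanitize helper is a stand-in):

/* Keep the submission tasklet from executing while we scrub state. */
tasklet_disable(&engine->execlists.tasklet);

sanitize_engine_state(engine);          /* hypothetical stand-in */

tasklet_enable(&engine->execlists.tasklet);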
Currently intel_hdcp_update_pipe() is also getting called for non-hdcp
connectors and goes through its conditional code flow, which is completely
unnecessary for them, so it makes sense to
have an early return. No functional change.
Signed-off-by: Anshuman Gupta
---
drivers/gp
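The change described above amounts to a guard at the top of
intel_hdcp_update_pipe() (sketch):

/* Bail out early for connectors without HDCP support; the rest of
 * the function's conditional flow is unchanged. */
if (!connector->hdcp.shim)
        return;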
No functional change.
Anshuman Gupta (2):
drm/i915/hdcp: Add update_pipe early return
drm/i915/hdcp: No direct access to power_well desc
drivers/gpu/drm/i915/display/intel_hdcp.c | 24 ++-
1 file changed, 10 insertions(+), 14 deletions(-)
--
2.26.0
HDCP code doesn't need to access power_well internals;
instead it should use intel_display_power_well_is_enabled()
to get the status of the desired power_well.
No functional change.
Cc: Jani Nikula
Signed-off-by: Anshuman Gupta
---
drivers/gpu/drm/i915/display/intel_hdcp.c | 16
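In other words (sketch; the specific power-well id below is an assumption):

/* Ask the display power code instead of peeking at
 * power_well->desc internals. */
bool enabled = intel_display_power_well_is_enabled(dev_priv,
                                                   SKL_DISP_PW_1);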
Chris Wilson writes:
> Now that atomic64_fetch_add() exists we can use it to return the base
> context id, rather than the atomic64_add_return(N) - N concoction.
>
> Suggested-by: Mika Kuoppala
> Signed-off-by: Chris Wilson
> Cc: Mika Kuoppala
> ---
> drivers/dma-buf/dma-fence.c | 2 +-
> 1 f
Chris Wilson writes:
> These were used to set various timeouts for the reset procedure
> (deciding when the engine was dead, and even if the reset itself was not
> making forward progress). No longer used.
>
> Signed-off-by: Chris Wilson
> ---
> drivers/gpu/drm/i915/i915_drv.h | 7 ---
> 1
== Series Details ==
Series: drm/i915/gt: Suspend tasklets before resume sanitization
URL : https://patchwork.freedesktop.org/series/77223/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
a06430f11a80 drm/i915/gt: Suspend tasklets before resume sanitization
-:13: WARNING:COMMIT_L
On Wed, May 13, 2020 at 01:31:55PM +0300, Imre Deak wrote:
> This fixes the following use-after-free problem in case an MST down
> message times out while waiting for its response:
>
> [ 449.022841] [drm:drm_dp_mst_wait_tx_reply.isra.26] timedout msg send
> 80ba7fa2 2 0
> [ 449.
We have traces for the semaphore and the error, but not the far more
frequent CS interrupts. This is likely to be too much, but for the
purpose of live_unlite_preempt it may answer a question or two.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_gt_irq.c | 6 +-
1 file change
It is possible for a residual tasklet to be pending execution as we
resume (whether that's some prior test kicking off the tasklet, or if we
are in a suspend/resume stress test). As such, we do not want that
tasklet to execute in the middle of our sanitization, such that it sees
the poisoned state.
== Series Details ==
Series: drm/i915/gt: Suspend tasklets before resume sanitization
URL : https://patchwork.freedesktop.org/series/77223/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8472 -> Patchwork_17645
Summary
-
On Wed, May 13, 2020 at 03:48:58PM +0300, Ville Syrjälä wrote:
> On Wed, May 13, 2020 at 01:31:55PM +0300, Imre Deak wrote:
> > This fixes the following use-after-free problem in case an MST down
> > message times out while waiting for its response:
> >
> > [ 449.022841] [drm:drm_dp_mst_w
Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
returns the number of created entries in the DMA address space.
However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and
dma_unmap_sg() must be made with the original number of entries
passed to the dma_
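The pattern being fixed, per the DMA API documentation (a generic example,
not from any one driver):

/* dma_map_sg() may coalesce entries and returns how many DMA address
 * space entries were actually created. Use that count to program the
 * device, but always pass the ORIGINAL nents back to
 * dma_sync_sg_for_*() and dma_unmap_sg(). */
int nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
if (nents <= 0)
        return -ENOMEM;
sgt->nents = nents;                     /* for programming the device */

/* ... DMA happens ... */

dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);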
On Wed, May 13, 2020 at 12:38:13PM +0300, Stanislav Lisovskiy wrote:
> Seems that only skl needs to have SAGV turned off
> for multipipe scenarios, so let's do it this way.
Commit msg still a bit misleading, but meh, pushed 1-3 anyway. Thanks.
>
> If anything blows up - we can always revert this
Chris Wilson writes:
> By providing the default values configured into the kernel via sysfs, it
> is much more convenient for userspace to restore those sane defaults, or
> at least know what is considered a good baseline. This is useful, for
> example, to clean up after any failed userspace prior
On Wed, May 13, 2020 at 12:38:14PM +0300, Stanislav Lisovskiy wrote:
> Starting from TGL we need to have separate wm0
> values for SAGV and non-SAGV, which affects
> how calculations are done.
>
> v2: Remove long lines
> v3: Removed COLOR_PLANE enum references
> v4, v5, v6: Fixed rebase conflict
On Wed, May 13, 2020 at 12:38:15PM +0300, Stanislav Lisovskiy wrote:
> According to BSpec 53998, we should try to
> restrict qgv points, which can't provide
> enough bandwidth for desired display configuration.
>
> Currently we are just comparing against all of
> those and taking the minimum (worst case)
From: Linus Torvalds
commit 594cc251fdd0d231d342d88b2fdff4bc42fb0690 upstream.
Originally, the rule used to be that you'd have to do access_ok()
separately, and then user_access_begin() before actually doing the
direct (optimized) user access.
But experience has shown that people then decide no
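After that commit the canonical pattern is (simplified illustration of the
upstream rule):

int put_two_words(u32 __user *p, u32 a, u32 b)
{
        /* user_access_begin() now performs the access_ok() check
         * itself and returns false on failure. */
        if (!user_access_begin(p, 2 * sizeof(u32)))
                return -EFAULT;
        unsafe_put_user(a, &p[0], efault);
        unsafe_put_user(b, &p[1], efault);
        user_access_end();
        return 0;

efault:
        user_access_end();
        return -EFAULT;
}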
On 12.05.20 at 10:59, Daniel Vetter wrote:
This is a bit tricky: since ->notifier_lock is held while calling
dma_fence_wait, we must ensure that the read side (i.e.
dma_fence_begin_signalling) is on the same side. If we mix this up,
lockdep complains, and that's again why we want to have the
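The annotation pattern under discussion, as it later landed upstream:

bool cookie;

/* Everything between begin/end is part of the fence signalling
 * critical path; lockdep will complain about locks taken here that
 * are also held around dma_fence_wait(). */
cookie = dma_fence_begin_signalling();

/* ... the work that leads to the fence firing ... */
dma_fence_signal(fence);

dma_fence_end_signalling(cookie);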
This patch fixes CVE-2018-20669 in the 4.19 tree.
On 13/05/20, 11:36 AM, "Greg KH" wrote:
On Wed, May 13, 2020 at 07:19:21AM +0530, ashwin-h wrote:
> From: Linus Torvalds
>
> commit 594cc251fdd0d231d342d88b2fdff4bc42fb0690 upstream.
>
> Originally, the rule used to be tha
== Series Details ==
Series: drm/dp_mst: Fix timeout handling of MST down messages
URL : https://patchwork.freedesktop.org/series/77216/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8472_full -> Patchwork_17643_full
Summar
Quoting Ville Syrjala (2020-03-02 14:39:38)
> From: Ville Syrjälä
>
> Bunch of places use a 64bit divisor needlessly. Switch
> to 32bit divisor.
>
> Cc: Lionel Landwerlin
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/i915_perf.c | 11 +--
> 1 file changed, 5 insertions(+
== Series Details ==
Series: HDCP minor refactoring
URL : https://patchwork.freedesktop.org/series/77224/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8474 -> Patchwork_17646
Summary
---
**SUCCESS**
No regression
Quoting Ville Syrjala (2020-03-02 14:39:39)
> From: Ville Syrjälä
>
> kHz isn't accurate enough for storing the CS timestamp
> frequency on some of the platforms. Store the value
> in Hz instead.
>
> Cc: Lionel Landwerlin
> Signed-off-by: Ville Syrjälä
> ---
> drivers/gpu/drm/i915/i915_debugf
== Series Details ==
Series: series starting with [CI,1/2] drm/i915/gt: Suspend tasklets before
resume sanitization
URL : https://patchwork.freedesktop.org/series/77226/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
97e29f1cb47b drm/i915/gt: Suspend tasklets before resume sani
On 02/03/2020 16:39, Ville Syrjala wrote:
From: Ville Syrjälä
kHz isn't accurate enough for storing the CS timestamp
frequency on some of the platforms. Store the value
in Hz instead.
Cc: Lionel Landwerlin
Signed-off-by: Ville Syrjälä
Probably the only patch in this series where I'm qualif
Quoting Ville Syrjala (2020-03-02 14:39:42)
> From: Ville Syrjälä
>
> Pull the code to do the CS timestamp ns<->ticks conversion into
> helpers and use them all over.
Reviewed-by: Chris Wilson
> The check in i915_perf_noa_delay_set() seems a bit dubious,
> so we switch it to do what I assume
== Series Details ==
Series: drm/i915/gt: Suspend tasklets before resume sanitization
URL : https://patchwork.freedesktop.org/series/77223/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8472_full -> Patchwork_17645_full
Sum
== Series Details ==
Series: make 'user_access_begin()' do 'access_ok()'
URL : https://patchwork.freedesktop.org/series/77233/
State : failure
== Summary ==
Applying: make 'user_access_begin()' do 'access_ok()'
error: sha1 information is lacking or useless (arch/x86/include/asm/uaccess.h).
err
== Series Details ==
Series: series starting with [CI,1/2] drm/i915/gt: Suspend tasklets before
resume sanitization
URL : https://patchwork.freedesktop.org/series/77226/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8474 -> Patchwork_17647
Chris Wilson writes:
> It is possible for a residual tasklet to be pending execution as we
> resume (whether that's some prior test kicking off the tasklet, or if we
> are in a suspend/resume stress test). As such, we do not want that
> tasklet to execute in the middle of our sanitization, such t
When we allow a wait on a future fence, it must autoexpire if the
fence is never signaled by userspace. Also put future fences to work, as
the intention is to use them, along with WAIT_SUBMIT and semaphores, for
userspace to perform its own fine-grained scheduling. Or simply run
concurrent c
Signed-off-by: Chris Wilson
---
include/drm-uapi/i915_drm.h | 8 +---
lib/i915/gem_scheduler.c| 15 +++
lib/i915/gem_scheduler.h| 1 +
3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/include/drm-uapi/i915_drm.h b/include/drm-uapi/i915_drm.h
index 2b55af13
Chris Wilson writes:
> Now that we have fast timeslicing on semaphores, we no longer need to
> prioritise none-semaphore work as we will yield any work blocked on a
> sempahore to the next in the queue. Previously with no timeslicing,
sempahore is back at blocking again :)
> blocking on the sem
Ping for merging this? If there are no issues, I'd prefer to pull in
next gvt-next and tag the final pull sooner rather than later.
Regards, Joonas
Quoting Joonas Lahtinen (2020-04-30 15:49:04)
> Hi Dave & Daniel,
>
> Fix for performance regression GitLab #1698: Iris Plus 655 and
> 4K screen. Missing w
Now that we have fast timeslicing on semaphores, we no longer need to
prioritise non-semaphore work as we will yield any work blocked on a
semaphore to the next in the queue. Previously with no timeslicing,
blocking on the semaphore caused extremely bad scheduling with multiple
clients utilising m