== Series Details ==
Series: series starting with [01/10] drm/i915: Seal races between async GPU
cancellation, retirement and signaling
URL : https://patchwork.freedesktop.org/series/59912/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Sparse version: v0.5.2
Commit: drm/i915: Seal
== Series Details ==
Series: drm/i915: Allow multiple user handles to the same VM
URL : https://patchwork.freedesktop.org/series/59913/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_5995 -> Patchwork_12867
Summary
---
== Series Details ==
Series: series starting with [01/10] drm/i915: Seal races between async GPU
cancellation, retirement and signaling
URL : https://patchwork.freedesktop.org/series/59912/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_5995 -> Patchwork_12868
== Series Details ==
Series: drm/i915/gen11: enable support for headerless msgs (rev4)
URL : https://patchwork.freedesktop.org/series/59839/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
95faef437aa7 drm/i915/gen11: enable support for headerless msgs
-:6: WARNING:TYPO_SPELLING:
On Wed, 24 Apr 2019, Chris Wilson wrote:
> Start partitioning off the code that talks to the hardware (GT) from the
> uapi layers and move the device facing code under gt/
>
> One casualty is s/intel_ringbuffer.h/intel_engine.h/ with the plan to
> subdivide that header and body further (and split
== Series Details ==
Series: series starting with [1/2] drm/i915/icl: Factor out combo PHY lane
power setup helper
URL : https://patchwork.freedesktop.org/series/59893/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_5991_full -> Patchwork_12863_full
===
== Series Details ==
Series: drm/i915/gen11: enable support for headerless msgs (rev4)
URL : https://patchwork.freedesktop.org/series/59839/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_5995 -> Patchwork_12869
Summary
On Wed, 24 Apr 2019, Imre Deak wrote:
> Factor out the combo PHY lane power configuration code to a separate
> helper; it will be also needed by the next patch adding the same
> configuration for DDI ports.
>
> While at it also add support to handle lane reversal which wasn't
> needed for DSI, but
drm_fb_helper_is_bound() is used to check if DRM userspace is in control.
This is done by looking at the fb on the primary plane. By the time
fb-helper gets around to committing, it's possible that the facts have
changed.
Avoid this race by holding the drm_device->master_mutex lock while
committin
The Intel CI [1] was not happy with the previous version and I don't
know which part it didn't like. So I'll split up the series and feed it
piece by piece until I know where the problem is.
Noralf.
[1] https://patchwork.freedesktop.org/series/58597/
Noralf Trønnes (1):
drm/fb-helper: Avoid ra
== Series Details ==
Series: drm/i915: Move GraphicsTechnology files under gt/
URL : https://patchwork.freedesktop.org/series/59900/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_5991_full -> Patchwork_12864_full
Summary
--
== Series Details ==
Series: drm/fb-helper: Move modesetting code to drm_client (rev5)
URL : https://patchwork.freedesktop.org/series/58597/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_5996 -> Patchwork_12870
Summary
On Thu, Apr 25, 2019 at 11:30:36AM +0300, Jani Nikula wrote:
> On Wed, 24 Apr 2019, Imre Deak wrote:
> > Factor out the combo PHY lane power configuration code to a separate
> > helper; it will be also needed by the next patch adding the same
> > configuration for DDI ports.
> >
> > While at it al
We no longer track the execution order along the engine and so no longer
need to enforce ordering of retire along the engine.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/i915_request.c | 128 +++-
1 file changed, 52 insertions(+), 76 deletions(-)
diff --git a/dr
Use i915_gem_object_lock() to guard the LUT and active reference to
allow us to break free of struct_mutex for handling GEM_CLOSE.
Testcase: igt/gem_close_race
Testcase: igt/gem_exec_parallel
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 76 ++-
An old optimisation to reduce the number of atomics per batch sadly
relies on struct_mutex for coordination. In order to remove struct_mutex
from serialising object/context closing, always taking and releasing an
active reference on first use / last use greatly simplifies the locking.
Signed-off-b
Continuing the decluttering of i915_gem.c by moving the object busy
checking into its own file.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Makefile| 1 +
drivers/gpu/drm/i915/gem/i915_gem_busy.c | 138 +++
drivers/gpu/drm/i915/i915_gem.c | 128
To continue the onslaught of removing the assumption of a global
execution ordering, another casualty is the engine->timeline. Without an
actual timeline to track, it is overkill and we can replace it with a
much less grand plain list. We still need a list of requests inflight,
for the simple purpo
If we have multiple contexts of equal priority pending execution,
activate a timer to demote the currently executing context in favour of
the next in the queue when that timeslice expires. This enforces
fairness between contexts (so long as they allow preemption -- forced
preemption, in the future,
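As an aside, a minimal sketch of the timer pattern described above, using the kernel hrtimer API; the structure and function names (sched_engine, timeslice_expired, start_timeslice) are illustrative placeholders, not the actual i915 implementation:

#include <linux/hrtimer.h>
#include <linux/kernel.h>
#include <linux/ktime.h>

/* Illustrative only: not the i915 structures. */
struct sched_engine {
	struct hrtimer timeslice;
	bool need_preempt;		/* polled by the submission tasklet */
};

static enum hrtimer_restart timeslice_expired(struct hrtimer *timer)
{
	struct sched_engine *se = container_of(timer, struct sched_engine, timeslice);

	/*
	 * The currently executing context has used up its slice while an
	 * equal-priority context is queued; ask the submission path to
	 * preempt and rotate to the next context in the queue.
	 */
	WRITE_ONCE(se->need_preempt, true);
	return HRTIMER_NORESTART;
}

static void start_timeslice(struct sched_engine *se, unsigned int slice_ms)
{
	/* Arm only when another context of equal priority is waiting. */
	hrtimer_start(&se->timeslice, ms_to_ktime(slice_ms), HRTIMER_MODE_REL);
}

static void init_timeslice(struct sched_engine *se)
{
	hrtimer_init(&se->timeslice, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	se->timeslice.function = timeslice_expired;
}

On expiry the submission path would observe need_preempt, unwind the current context and resubmit with the next equal-priority context at the head.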
When using a global seqno, we required a precise stop-the-world event to
handle preemption and unwind the global seqno counter. To accomplish
this, we would preempt to a special out-of-band context and wait for the
machine to report that it was idle. Given an idle machine, we could very
precisely s
As a lockmap takes a reference for every ww_mutex used together, this
can be an arbitrarily large number and under control of userspace --
easily overflowing the limit of 4096.
Signed-off-by: Chris Wilson
---
include/linux/lockdep.h | 4 ++--
kernel/locking/lockdep.c | 15 +--
2 fi
Continuing the decluttering of i915_gem.c by moving the client self
throttling into its own file.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Makefile| 1 +
drivers/gpu/drm/i915/gem/i915_gem_throttle.c | 74
drivers/gpu/drm/i915/i915_drv.h
Continuing the decluttering of i915_gem.c by moving the object wait
decomposition into its own file.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/gem/i915_gem_object.h | 8 +
drivers/gpu/drm/i915/gem/i915_gem_wait.c | 276 ++
Continuing the theme of separating out the GEM clutter.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/Makefile | 26 +--
drivers/gpu/drm/i915/Makefile.header-test | 2 -
.../gpu/drm/i915/{ => gem}/i915_gem_clflush.c | 27 +++
drivers/gpu/drm/i915/gem/i
Having allowed the user to define a set of engines that they will want
to only use, we go one step further and allow them to bind those engines
into a single virtual instance. Submitting a batch to the virtual engine
will then forward it to any one of the set in a manner as best to
distribute load.
Our eventual goal is to rid request construction of struct_mutex, with
the short term step of lifting the struct_mutex requirements into the
higher levels (i.e. the caller must ensure that the context is already
pinned into the GTT). In this patch, we pin GVT's shadow context upon
allocation and so
Rename the engine this HW context is currently active upon (that we are
flying upon) to disambiguate between the mixture of different active
terms (and prevent conflict in future patches).
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_context_types.h | 2 +-
drivers/gpu/drm/i915
Tidy up the cleanup sequence by always ensuring that the tasklet is
flushed on parking (before we clean up). The parking provides a
convenient point to ensure that the backend is truly idle.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_lrc.c | 25 +++--
driv
We need to keep the context image pinned in memory until after the GPU
has finished writing into it. Since it continues to write as we signal
the final breadcrumb, we need to keep it pinned until the request after
it is complete. Currently we know the order in which requests execute on
each engine,
Having hidden the partially exposed new ABI from the PR, put it back again
for completion of context recovery. A significant part of context
recovery is the ability to reuse as much of the old context as is
feasible (to avoid expensive reconstruction). The biggest chunk kept
hidden at the moment is fi
The SINGLE_TIMELINE flag can be used to create a context such that all
engine instances within that context share a common timeline. This can
be useful for mixing operations between real and virtual engines, or
when using a composite context for a single client API context.
Signed-off-by: Chris Wi
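For illustration, a userspace sketch of requesting such a flag at context creation; this assumes the flag is exposed as I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE via the extended context-create ioctl in the installed uapi headers, which may not match the final ABI:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/i915_drm.h>

/* Sketch only: assumes DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT and
 * I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE are present in the uapi
 * headers; the final ABI may differ. */
static int create_single_timeline_context(int fd, uint32_t *ctx_id)
{
	struct drm_i915_gem_context_create_ext create;

	memset(&create, 0, sizeof(create));
	create.flags = I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE;

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create))
		return -errno;

	*ctx_id = create.ctx_id;
	return 0;
}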
Remove the modification of the "constant" device info by promoting the
inconsistent intel_engine static table into an initialisation error.
Now, if we add a new engine into the device_info, we must first add that
engine information into the intel_engines.
Signed-off-by: Chris Wilson
---
drivers/
We no longer need to track the active intel_contexts within each engine,
allowing us to drop a tricky mutex_lock from inside unpin (which may
occur inside fs_reclaim).
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_context.c | 11 +--
In the next patch, we require the engine vfuncs setup prior to
initialising the pinned kernel contexts, so split the vfunc setup from
the engine initialisation and call it earlier.
v2: s/setup_xcs/setup_common/ for intel_ring_submission_setup()
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Urs
Having transitioned GEM over to using intel_context as its primary means
of tracking the GEM context and engine combined and using
i915_request_create(), we can move the older i915_request_alloc()
helper function into selftests/ where the remaining users are confined.
Signed-off-by: Chris Wilson
Over the last few years, we have debated how to extend the user API to
support an increase in the number of engines, that may be sparse and
even be heterogeneous within a class (not all video decoders created
equal). We settled on using (class, instance) tuples to identify a
specific engine, with a
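For reference, the (class, instance) tuple is expressed in the uapi as a small struct; a sketch of naming the second video decode engine (the values here are purely illustrative):

#include <drm/i915_drm.h>

/* Engines are addressed by (class, instance) rather than legacy ring
 * flags; classes cover render, copy, video decode and video enhance.
 * Purely illustrative: name the second video decode engine (vcs1). */
static const struct i915_engine_class_instance vcs1 = {
	.engine_class    = I915_ENGINE_CLASS_VIDEO,
	.engine_instance = 1,
};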
Make the engine responsible for cleaning itself up!
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_engine.h | 4 ++
drivers/gpu/drm/i915/gt/intel_engine_cs.c| 63 ++--
drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 +-
drivers/gpu/drm/i915/gt/intel_lr
Having removed the urge to modify the engine_mask at runtime, we can
promote the num_engines from a runtime calculation to a static and push
it into the device_info tables.
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Jani Nikula
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 3 --
Split the plain old shmem object into its own file to start decluttering
i915_gem.c
v2: Lose the confusing, hysterical raisins, suffix of _gtt.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 3 +-
drivers/gpu/drm/i915/gem/i915_gem_ob
Our scatterlist utility routines can be pulled out of i915_gem.h for a
bit more decluttering.
v2: Push I915_GTT_PAGE_SIZE out of i915_scatterlist itself and into the
caller.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers
Combine the (i915_gem_context, intel_engine) pair into a single parameter,
the intel_context, for convenience and later simplification.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
.../gpu/drm/i915/selftests/i915_gem_context.c | 74 +++
1 file changed, 44 insertions(+),
Simplify the setup slightly for the sseu selftests to use the actual
kernel_context.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
.../gpu/drm/i915/selftests/i915_gem_context.c | 17 -
1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/i
In the next patch, we will want to configure the slave request
depending on which physical engine the master request is executed on.
For this, we introduce a callback from the execute fence to convey this
information.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i
A usecase arose out of handling context recovery in mesa, whereby they
wish to recreate a context with fresh logical state but preserving all
other details of the original. Currently, they create a new context and
iterate over which bits they want to copy across, but it would be much more
convenient i
Allow the user to direct which physical engines of the virtual engine
they wish to execute on, as sometimes it is necessary to override the
load balancing algorithm.
v2: Only kick the virtual engines on context-out if required
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
drivers/gpu/drm
For convenience in avoiding inline spaghetti, keep the type definition
as a separate header.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/gem/Makefile | 1 +
drivers/gpu/drm/i915/gem/Makefile.
Move the intel_context_instance() to the caller so that we can decouple
ourselves from one context instance per engine.
v2: Rename pin_lock() to lock_pinned(), hopefully that is clearer.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_context.c |
Some users require that when a master batch is executed on one particular
engine, a companion batch is run simultaneously on a specific slave
engine. For this purpose, we introduce virtual engine bonding, allowing
maps of master:slaves to be constructed to constrain which physical
engines a virtual
We switched to a tree of per-engine HW context to accommodate the
introduction of virtual engines. However, we plan to also support
multiple instances of the same engine within the GEM context, defeating
our use of the engine as a key to looking up the HW context. Just
allocate a logical per-engine
Currently there is an underlying assumption that i915_request_unsubmit()
is synchronous wrt the GPU -- that is the request is no longer in flight
as we remove it. In the near future that may change, and this may upset
our signaling as we can process an interrupt for that request while it
is no long
We want to pass an intel_context into intel_context_pin() and that
requires us to first be able to lookup the intel_context!
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/intel_context.c| 37 +++---
drivers/gpu/drm/i915/gt/intel_contex
Currently the code for manipulating the pages on an object still
resides in i915_gem.c; move it to i915_gem_object.c.
Signed-off-by: Chris Wilson
Cc: Joonas Lahtinen
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 4 +-
.../gpu/drm/i915/{ => gem}/i915_gem_obj
On Thu, Apr 25, 2019 at 10:31 AM Noralf Trønnes wrote:
>
> The Intel CI [1] was not happy with the previous version and I don't
> know which part it didn't like. So I'll split up the series and feed it
> piece by piece until I know where the problem is.
You can also send stuff to intel-gfx-try...
On Thu, 25 Apr 2019, Chris Wilson wrote:
> Having removed the urge to modify the engine_mask at runtime, we can
> promote the num_engines from a runtime calculation to a static and push
> it into the device_info tables.
\o/
Acked-by: Jani Nikula
>
> Signed-off-by: Chris Wilson
> Cc: Tvrtko
Declutter i915_drv/gem.h by moving the ioctl API into its own header.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/gem/i915_gem_ioctls.h | 52 ++
drivers/gpu/drm/i915/i915_drv.c| 1 +
drivers/gpu/drm/i915/i915_drv.h|
Continuing the decluttering of i915_gem.c, that of the read/write
domains, perhaps the biggest of GEM's follies?
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/gem/i915_gem_domain.c| 784 +
There is a desire to split a task onto two engines and have them run at
the same time, e.g. scanline interleaving to spread the workload evenly.
Through the use of the out-fence from the first execbuf, we can
coordinate secondary execbuf to only become ready simultaneously with
the first, so that w
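A hedged userspace sketch of that coordination, assuming the out-fence of the first execbuf is handed to the second as a submit fence (I915_EXEC_FENCE_OUT is existing ABI; the submit-fence flag is the new ABI proposed here and its name and semantics may differ in the final version):

#include <xf86drm.h>
#include <drm/i915_drm.h>

/* Sketch: submit two batches so that the second becomes ready only when
 * the first is submitted to hardware. Error handling omitted. */
static void submit_pair(int fd, struct drm_i915_gem_execbuffer2 *first,
			struct drm_i915_gem_execbuffer2 *second)
{
	int submit_fence;

	/* Request an out-fence fd for the first batch (returned in the
	 * upper 32 bits of rsvd2). */
	first->flags |= I915_EXEC_FENCE_OUT;
	drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2_WR, first);
	submit_fence = first->rsvd2 >> 32;

	/* Gate the second batch on the *submission* of the first (rather
	 * than its completion) by passing that fence as a submit fence. */
	second->flags |= I915_EXEC_FENCE_SUBMIT;
	second->rsvd2 = submit_fence;	/* in-fence fd lives in the lower 32 bits */
	drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2_WR, second);
}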
Use the per-object local lock to control the cache domain of the
individual GEM objects, not struct_mutex. This is a huge leap forward
for us in terms of object-level synchronisation; execbuffers are
coordinated using the ww_mutex and pread/pwrite is finally fully
serialised again.
Signed-off-by:
Continuing the decluttering of i915_gem.c, this time the legacy physical
object.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 2 +
drivers/gpu/drm/i915/gem/i915_gem_object.h| 11 +-
.../gpu/drm/i915/gem/i915_gem_object_types.h
Continuing the decluttering of i915_gem.c, now the turn of do_mmap and
the faulthandlers
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/gem/i915_gem_mman.c | 505
drivers/gpu/drm/i915/ge
On 06-03-2019 at 23:43, Rodrigo Siqueira wrote:
> On 03/01, Maarten Lankhorst wrote:
>> Convert vkms to using __drm_atomic_helper_crtc_reset(), instead of
>> writing its own version. Instead of open coding destroy_state(),
>> call it directly for freeing the old state.
>>
>> Signed-off-by: Maarten
- Remove the extra array member of stack_dump_trace[] along with the
ARRAY_SIZE - 1 initialization for struct stack_trace :: max_entries.
Both are historical leftovers of no value. The stack tracer never exceeds
the array and there is no extra storage requirement either.
- Make variables wh
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
The original code in all printing functions is really wrong. It allocates a
storage array on stack which is unused because depot_fetch_stack() does not
store anything in it. It overwrites the entries po
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
---
kernel/latencytop.c | 17 ++---
1 file changed, 2 insertions(+), 15 deletions(-)
--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -1
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
The original code in all printing functions is really wrong. It allocates a
storage array on stack which is unused because depot_fetch_stack() does not
store anything in it. It overwrites the entries po
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner
Acked-by: Catalin Marinas
Cc: linux...@kvack.org
---
mm/kmemleak.c | 24 +++-
1 file changed, 3 insertions(+), 21 deletions(-)
--- a/mm/kmemleak.
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Cc: io...@lists.linux-foundation.org
Cc: Robin Murphy
Cc: Marek Szyprowski
---
kernel/dma/debug.c | 14 ++
1
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Acked-by: Christoph Lameter
Cc: Andrew Morton
Cc: Pekka Enberg
Cc: linux...@kvack.org
Cc: David Rientjes
---
mm/slub.c | 12
1 file change
There is only one caller which hands in save_trace as a function pointer.
Signed-off-by: Thomas Gleixner
---
kernel/locking/lockdep.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2158,8 +2158,7 @@ check_
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface. This results in less storage space and
indirection.
Signed-off-by: Thomas Gleixner
Cc: dm-de...@redhat.com
Cc: Mike Snitzer
Cc: Alasdair Kergon
---
drivers/md/persistent-data/dm-block-ma
No more users of the struct stack_trace based interfaces. Remove them.
Remove the macro stubs for !CONFIG_STACKTRACE as well as they are pointless
because the storage on the call sites is conditional on CONFIG_STACKTRACE
already. No point to be 'smart'.
Signed-off-by: Thomas Gleixner
---
includ
Replace the indirection through struct stack_trace by using the storage
array based interfaces and storing the information in a small lockdep
specific data structure.
Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra (Intel)
---
include/linux/lockdep.h |9 +--
kernel/locking/lock
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Reviewed-by: Johannes Thumshirn
Acked-by: David Sterba
Cc: Chris Mason
Cc: Josef Bacik
Cc: linux-bt...@vger.kernel.org
---
fs/btrfs/ref-verify.c | 15 +
All operations with stack traces are based on struct stack_trace. That's a
horrible construct as the struct is a kitchen sink for input and
output. Quite some usage sites embed it into their own data structures
which creates weird indirections.
There is absolutely no point in doing so. For all use
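To make the point concrete, a before/after sketch of a typical call site conversion (stack_trace_save() being the storage-array interface introduced by this series; error handling and surrounding context elided):

#include <linux/stacktrace.h>

#define NR_ENTRIES 16

/* Before: the kitchen-sink struct carries input (max_entries, skip)
 * and output (entries, nr_entries) in one object. */
static unsigned int old_style(unsigned long *store)
{
	struct stack_trace trace = {
		.entries	= store,
		.max_entries	= NR_ENTRIES,
		.skip		= 1,
	};

	save_stack_trace(&trace);
	return trace.nr_entries;
}

/* After: a plain storage array in, the number of saved entries out. */
static unsigned int new_style(unsigned long *store)
{
	return stack_trace_save(store, NR_ENTRIES, 1);
}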
No more users of the struct stack_trace based interfaces.
Signed-off-by: Thomas Gleixner
Acked-by: Alexander Potapenko
---
include/linux/stackdepot.h |4
lib/stackdepot.c | 20
2 files changed, 24 deletions(-)
--- a/include/linux/stackdepot.h
+++ b/inc
The per cpu stack trace buffer usage pattern is odd at best. The buffer has
room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
interrupts or exceptions nest after the per cpu buffer was acquired the
stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
in kerne
It's only used in trace.c and there is absolutely no point in compiling it
in when user space stack traces are not supported.
Signed-off-by: Thomas Gleixner
Reviewed-by: Steven Rostedt
---
kernel/trace/trace.c | 14 --
kernel/trace/trace.h |8
2 files changed, 8 inser
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Reviewed-by: Alexey Dobriyan
Cc: Andrew Morton
---
fs/proc/base.c | 14 +-
1 file changed, 5 insertions(+), 9 deletions(-)
--- a/fs/proc/bas
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Cc: Akinobu Mita
---
lib/fault-inject.c | 12 +++-
1 file changed, 3 insertions(+), 9 deletions(-)
--- a/lib/fault-inject.c
+++ b/lib/fault-injec
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
Signed-off-by: Thomas Gleixner
Cc: dm-de...@redhat.com
Cc: Mike Snitzer
Cc: Alasdair Kergon
---
drivers/md/dm-bufio.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions
The struct stack_trace indirection in the stack depot functions is a truly
pointless exercise which requires horrible code at the callsites.
Provide interfaces based on plain storage arrays.
Signed-off-by: Thomas Gleixner
Acked-by: Alexander Potapenko
---
V3: Fix kernel-doc
---
include/linux/
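A minimal usage sketch of the array-based depot interface proposed here (names as introduced by this series; treat the exact signatures as subject to the posted patches):

#include <linux/kernel.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/* Save the current stack into the depot and print it back again. */
static void depot_roundtrip_example(gfp_t gfp)
{
	unsigned long entries[16];
	unsigned long *stored;
	unsigned int nr_entries;
	depot_stack_handle_t handle;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	handle = stack_depot_save(entries, nr_entries, gfp);
	if (!handle)
		return;		/* depot allocation failed */

	nr_entries = stack_depot_fetch(handle, &stored);
	stack_trace_print(stored, nr_entries, 0);
}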
This is an update to V2 which can be found here:
https://lkml.kernel.org/r/20190418084119.056416...@linutronix.de
Changes vs. V2:
- Fixed the kernel-doc issue pointed out by Mike
- Removed the '-1' oddity from the tracer
- Restricted the tracer nesting to 4
- Restored the lockdep ma
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner
Reviewed-by: Steven Rostedt (VMware)
---
kernel/trace/trace.c | 40 +---
1 file changed, 13 insertions(+), 27 deletions(-)
--- a/
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner
Acked-by: Dmitry Vyukov
Acked-by: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: kasan-...@googlegroups.com
Cc: linux...@kvack.org
---
mm/kasan/common.c | 30 +
Signed-off-by: Thomas Gleixner
---
kernel/locking/lockdep.c |9 -
1 file changed, 4 insertions(+), 5 deletions(-)
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1522,10 +1522,9 @@ static inline int class_equal(struct loc
}
static noinline int print_circular_bug
Simplify the stack retrieval code by using the storage array based
interface.
Signed-off-by: Thomas Gleixner
Reviewed-by: Steven Rostedt (VMware)
---
kernel/trace/trace_stack.c | 37 -
1 file changed, 16 insertions(+), 21 deletions(-)
--- a/kernel/trace/tr
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner
---
kernel/backtracetest.c | 11 +++
1 file changed, 3 insertions(+), 8 deletions(-)
--- a/kernel/backtracetest.c
+++ b/kernel/backtracetest.c
@@ -48,19 +48,1
All architectures which support stacktrace carry duplicated code and
do the stack storage and filtering at the architecture side.
Provide a consolidated interface with a callback function for consuming the
stack entries provided by the architecture specific stack walker. This
removes lots of dupli
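A schematic of that callback-driven design with purely illustrative names (stack_consume_fn, stack_store, example_walk_frames); the real arch_stack_walk() interface and its exact arguments are defined in the patches themselves:

#include <linux/types.h>

/*
 * Schematic of the consolidated design: the architecture-specific walker
 * produces raw entries and hands each one to a consumer callback; the
 * generic code decides how to filter and store them. Names here are
 * illustrative, not the actual kernel interface.
 */
typedef bool (*stack_consume_fn)(void *cookie, unsigned long addr);

struct stack_store {
	unsigned long *entries;
	unsigned int len, size;
};

/* Generic consumer: copy entries into a plain array, stop when full. */
static bool store_entry(void *cookie, unsigned long addr)
{
	struct stack_store *s = cookie;

	if (s->len >= s->size)
		return false;
	s->entries[s->len++] = addr;
	return true;
}

/* An architecture walker would loop over frames and do roughly this: */
static void example_walk_frames(const unsigned long *frames, unsigned int n,
				stack_consume_fn consume, void *cookie)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (!consume(cookie, frames[i]))
			break;
}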
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner
Acked-by: Miroslav Benes
---
kernel/livepatch/transition.c | 22 +-
1 file changed, 9 insertions(+), 13 deletions(-)
--- a/kernel/livepatch/trans
The indirection through struct stack_trace is not necessary at all. Use the
storage array based interface.
Signed-off-by: Thomas Gleixner
Tested-by: Tom Zanussi
Reviewed-by: Tom Zanussi
Acked-by: Steven Rostedt (VMware)
---
kernel/trace/trace_events_hist.c | 12 +++-
1 file changed,
Replace the stack_trace_save*() functions with the new arch_stack_walk()
interfaces.
Signed-off-by: Thomas Gleixner
Cc: linux-a...@vger.kernel.org
---
arch/x86/Kconfig |1
arch/x86/kernel/stacktrace.c | 116 +++
2 files changed, 20 insert
* Thomas Gleixner wrote:
> - if (unlikely(!ret))
> + if (unlikely(!ret)) {
> + if (!trace->nr_entries) {
> + /*
> + * If save_trace fails here, the printing might
> + * trigger a WARN but because of the !nr_entries
On 25/04/2019 10:19, Chris Wilson wrote:
Having removed the urge to modify the engine_mask at runtime, we can
promote the num_engines from a runtime calculation to a static and push
it into the device_info tables.
What about fused off engines (intel_device_info_init_mmio)?
I don't see the pat
Quoting Tvrtko Ursulin (2019-04-25 11:20:47)
>
> On 25/04/2019 10:19, Chris Wilson wrote:
> > Having removed the urge to modify the engine_mask at runtime, we can
> > promote the num_engines from a runtime calculation to a static and push
> > it into the device_info tables.
>
> What about fused o
On 25/04/2019 10:19, Chris Wilson wrote:
Currently there is an underlying assumption that i915_request_unsubmit()
is synchronous wrt the GPU -- that is the request is no longer in flight
as we remove it. In the near future that may change, and this may upset
our signaling as we can process an in
Chris Wilson writes:
> Check that we can reorder batches around userspace sempahore waits by
semaphore
> injecting a semaphore that is only released by a later context.
>
> Signed-off-by: Chris Wilson
> ---
> tests/i915/gem_exec_schedule.c | 143 +
> 1 file cha
Quoting Tvrtko Ursulin (2019-04-25 11:35:01)
>
> On 25/04/2019 10:19, Chris Wilson wrote:
> > Currently there is an underlying assumption that i915_request_unsubmit()
> > is synchronous wrt the GPU -- that is the request is no longer in flight
> > as we remove it. In the near future that may chang
On 25/04/2019 11:30, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2019-04-25 11:20:47)
On 25/04/2019 10:19, Chris Wilson wrote:
Having removed the urge to modify the engine_mask at runtime, we can
promote the num_engines from a runtime calculation to a static and push
it into the device_info t
On 26-02-2019 at 17:17, Matt Roper wrote:
> On Tue, Feb 26, 2019 at 08:26:36AM +0100, Maarten Lankhorst wrote:
>> Hey,
>>
>> Op 21-02-2019 om 01:28 schreef Matt Roper:
>>> Some display controllers can be programmed to present non-black colors
>>> for pixels not covered by any plane (or pixels cove
== Series Details ==
Series: series starting with [CI,1/5] drm/i915: Introduce struct intel_wakeref
URL : https://patchwork.freedesktop.org/series/59904/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_5992_full -> Patchwork_12865_full
===