Re: [Intel-gfx] [PATCH] drm/i915: Avoid using ctx->file_priv during construction

2019-03-30 Thread Jordan Justen
I think the change is focused mainly on setting the vm param, so perhaps the subject should mention that. Maybe something like: "drm/i915: Avoid using ctx->file_priv for VM param during construction". I guess a similar issue could arise if other context params are added later. Hopefully any issu…

[Intel-gfx] [PATCH i-g-t] kms_busy: Use igt_waitchildren_timeout()

2019-03-30 Thread Chris Wilson
Replace the convoluted raising of SIGALRM from the child, paired with an interruptible sleep in the parent, with the equivalent and far more natural igt_waitchildren_timeout(). Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103182 Signed-off-by: Chris Wilson --- tests/kms_busy.c | 31 +---

Re: [Intel-gfx] [PATCH 13/16] drm/fb-helper: Avoid race with DRM userspace

2019-03-30 Thread Noralf Trønnes
On 28.03.2019 09.17, Daniel Vetter wrote: > On Tue, Mar 26, 2019 at 06:55:43PM +0100, Noralf Trønnes wrote: >> drm_fb_helper_is_bound() is used to check if DRM userspace is in control. >> This is done by looking at the fb on the primary plane. By the time >> fb-helper gets around to committing, …

Re: [Intel-gfx] [PATCH 06/16] drm/i915/fbdev: Move intel_fb_initial_config() to fbdev helper

2019-03-30 Thread Noralf Trønnes
On 27.03.2019 14.33, Jani Nikula wrote: > On Tue, 26 Mar 2019, Noralf Trønnes wrote: >> It is generic code and having it in the helper will let other drivers >> benefit from it. >> >> One change was necessary assuming this to be true: >> INTEL_INFO(dev_priv)->num_pipes == dev->mode_config.num_c…

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Avoid using ctx->file_priv during construction

2019-03-30 Thread Patchwork
== Series Details == Series: drm/i915: Avoid using ctx->file_priv during construction URL : https://patchwork.freedesktop.org/series/58769/ State : success == Summary == CI Bug Log - changes from CI_DRM_5841_full -> Patchwork_12642_full Sum

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Avoid using ctx->file_priv during construction

2019-03-30 Thread Patchwork
== Series Details == Series: drm/i915: Avoid using ctx->file_priv during construction URL : https://patchwork.freedesktop.org/series/58769/ State : success == Summary == CI Bug Log - changes from CI_DRM_5841 -> Patchwork_12642 Summary -

[Intel-gfx] [PATCH i-g-t] i915: Add gem_vm_create

2019-03-30 Thread Chris Wilson
Exercise basic creation and swapping between new address spaces. v2: Check isolation that the same vm_id on different fd are indeed different VM. v3: Cross-over check with CREATE_EXT_SETPARAM Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin --- lib/Makefile.sources | 2 + lib/i915/gem_vm

[Intel-gfx] [PATCH] drm/i915: Avoid using ctx->file_priv during construction

2019-03-30 Thread Chris Wilson
As we only set ctx->file_priv on registering the GEM context after construction, it is invalid to try and use it in the middle for setting various parameters. Indeed, we put the file_priv into struct create_ext so that we have the right file_private available without having to look at ctx->file_pri…

Re: [Intel-gfx] [CI 2/4] drm/i915: Create/destroy VM (ppGTT) for use with contexts

2019-03-30 Thread Chris Wilson
Quoting Jordan Justen (2019-03-30 09:46:49) > On 2019-03-22 02:23:23, Chris Wilson wrote: > > > > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c > > b/drivers/gpu/drm/i915/i915_gem_context.c > > index 00dec72f6875..d0a56c8d0bb9 100644 > > --- a/drivers/gpu/drm/i915/i915_gem_context.c > > ++

Re: [Intel-gfx] [CI 2/4] drm/i915: Create/destroy VM (ppGTT) for use with contexts

2019-03-30 Thread Jordan Justen
On 2019-03-22 02:23:23, Chris Wilson wrote: > > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c > b/drivers/gpu/drm/i915/i915_gem_context.c > index 00dec72f6875..d0a56c8d0bb9 100644 > --- a/drivers/gpu/drm/i915/i915_gem_context.c > +++ b/drivers/gpu/drm/i915/i915_gem_context.c > + > +static

Re: [Intel-gfx] [PATCH v5 3/5] drm/i915: Watchdog timeout: Ringbuffer command emission for gen8+

2019-03-30 Thread Chris Wilson
Quoting Carlos Santa (2019-03-22 23:41:16) > From: Michel Thierry > > Emit the required commands into the ring buffer for starting and > stopping the watchdog timer before/after batch buffer start during > batch buffer submission. I'm expecting to see some discussion of how this is handled acros…

Re: [Intel-gfx] [PATCH v5 3/5] drm/i915: Watchdog timeout: Ringbuffer command emission for gen8+

2019-03-30 Thread Chris Wilson
Quoting Carlos Santa (2019-03-22 23:41:16) > static int gen8_emit_bb_start(struct i915_request *rq, > u64 offset, u32 len, > const unsigned int flags) > { > + struct intel_engine_cs *engine = rq->engine; > + struct i915_gem_c

Re: [Intel-gfx] [PATCH v5 1/5] drm/i915: Add engine reset count in get-reset-stats ioctl

2019-03-30 Thread Chris Wilson
Quoting Carlos Santa (2019-03-22 23:41:14) > From: Michel Thierry > > Users/tests relying on the total reset count will start seeing a smaller > number since most of the hangs can be handled by engine reset. > Note that if reset engine x, context a running on engine y will be unaware > and unaffe…

Re: [Intel-gfx] [PATCH v2] drm/i915/guc: Retry GuC load for all load failures

2019-03-30 Thread Chris Wilson
Quoting Chris Wilson (2019-03-30 08:01:28) > Quoting Robert M. Fosha (2019-03-29 23:17:46) > > Currently we only retry to load GuC firmware if the load fails due to > > timeout. On Gen9 GuC loading may fail for different reasons, not just > > hang/timeout. Direction from the GuC team is to retry fo…

Re: [Intel-gfx] [PATCH v2] drm/i915/guc: Retry GuC load for all load failures

2019-03-30 Thread Chris Wilson
Quoting Robert M. Fosha (2019-03-29 23:17:46) > Currently we only retry to load GuC firmware if the load fails due to > timeout. On Gen9 GuC loading may fail for different reasons, not just > hang/timeout. Direction from the GuC team is to retry for all cases of > GuC load failure on Gen9, not just…

Re: [Intel-gfx] [PATCH] drm/i915: Engine relative MMIO

2019-03-30 Thread Chris Wilson
Quoting john.c.harri...@intel.com (2019-03-30 00:10:45) > From: John Harrison > > With virtual engines, it is no longer possible to know which specific > physical engine a given request will be executed on at the time that > request is generated. This means that the request itself must be engine…