On Sun, 25 Sep 2011 11:40:14 +0200 Daniel Vetter <dan...@ffwll.ch> wrote:
> On Sat, Sep 24, 2011 at 03:23:05PM -0700, Ben Widawsky wrote:
>
> Can you extract that horrible condition into e.g.
> NEEDS_ILK_IOMMU_WORKAROUND or something like that? I think future me
> will go wtf on this without the context ...

Sure.

> > +
> >  	intel_i9xx_setup_flush();
> >
> >  	return 0;
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 61ce1b7..d369e48 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2268,27 +2268,39 @@ static int i915_ring_idle(struct intel_ring_buffer *ring)
> >  	return i915_wait_request(ring, i915_gem_next_request_seqno(ring));
> >  }
> >
> > +static int
> > +flush_rings(drm_i915_private_t *dev_priv)
> > +{
> > +	int i, ret;
> > +
> > +	/* Flush everything onto the inactive list. */
> > +	for (i = 0; i < I915_NUM_RINGS; i++) {
> > +		ret = i915_ring_idle(&dev_priv->ring[i]);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  int
> >  i915_gpu_idle(struct drm_device *dev)
> >  {
> >  	drm_i915_private_t *dev_priv = dev->dev_private;
> >  	bool lists_empty;
> > -	int ret, i;
> >
> >  	lists_empty = (list_empty(&dev_priv->mm.flushing_list) &&
> >  		       list_empty(&dev_priv->mm.active_list));
> >  	if (lists_empty) {
> > -		WARN_ON(!(I915_READ(MI_MODE) & MI_RINGS_IDLE));
> > -		return 0;
> > -
> > -	/* Flush everything onto the inactive list. */
> > -	for (i = 0; i < I915_NUM_RINGS; i++) {
> > -		ret = i915_ring_idle(&dev_priv->ring[i]);
> > -		if (ret)
> > -			return ret;
> > +		/* If we require idle maps, enforce the ring is idle */
> > +		bool ring_idle = (I915_READ(MI_MODE) & MI_RINGS_IDLE) != 0;
> > +		if (!ring_idle && dev_priv->mm.gtt->do_idle_maps)
> > +			return flush_rings(dev_priv);
> > +		else
> > +			return 0;
>
> Can you extract this hack into its own patch? This way
> - the actual work-around would stand out more clearly
> - we put more emphasis on the fact that we seem to lose track of
>   things.

From your/Chris' comments on patch 1, I will remove this hunk
completely, and patch 1 will simply always idle the rings.

> >  	}
> >
> > -	return 0;
> > +	return flush_rings(dev_priv);
> >  }
> >
> >  static int sandybridge_write_fence_reg(struct drm_i915_gem_object *obj,
> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > index 7a709cd..0c6226b 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > @@ -49,10 +49,39 @@ static unsigned int cache_level_to_agp_type(struct drm_device *dev,
> >  	}
> >  }
> >
> > +static bool do_idling(struct drm_i915_private *dev_priv)
> > +{
> > +	bool ret = dev_priv->mm.interruptible;
> > +
> > +	if (dev_priv->mm.gtt->do_idle_maps) {
> > +		dev_priv->mm.interruptible = false;
> > +		if (i915_gpu_idle(dev_priv->dev))
> > +			DRM_ERROR("couldn't idle GPU %d\n", ret);
> > +	}
>
> I don't like the uninterruptible sleep in core gem - this is only for
> modesetting where backtracking is just not really feasible. I've
> quickly checked the code and I haven't seen any problems in handing
> back the -ERESTARTSYS to gem_object_unbind.

While I agree that we could probably return at idle time, we have to
guarantee that no GPU activity occurs until the unmap has completed. I
checked the code and don't really understand how it could occur, but I
believe it *can* occur, since I've seen it happen in the idle function.
Do you have another proposal for how to fix this, or any proof that the
GPU can't wake up to do display work while the unmap is in progress?
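
To make the requirement concrete, here is roughly the shape I have in
mind for bracketing the unmap. Only do_idling(), mm.interruptible and
do_idle_maps come from the hunk quoted above; undo_idling() and the
unbind call site below (i915_gem_gtt_unbind_object, intel_gtt_clear_range,
intel_gtt_unmap_memory) are just a sketch of where this is headed, not
code from this series:

static void undo_idling(struct drm_i915_private *dev_priv, bool interruptible)
{
	if (dev_priv->mm.gtt->do_idle_maps)
		dev_priv->mm.interruptible = interruptible;
}

/* Sketch of the unbind path: the GPU has to stay idle until the
 * IOMMU unmap has finished. */
void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj)
{
	struct drm_device *dev = obj->base.dev;
	struct drm_i915_private *dev_priv = dev->dev_private;
	bool interruptible;

	/* Force the GPU idle (uninterruptibly) before touching the maps. */
	interruptible = do_idling(dev_priv);

	intel_gtt_clear_range(obj->gtt_space->start >> PAGE_SHIFT,
			      obj->base.size >> PAGE_SHIFT);

	if (obj->sg_list) {
		intel_gtt_unmap_memory(obj->sg_list, obj->num_sg);
		obj->sg_list = NULL;
	}

	/* Only restore interruptibility after the unmap has completed,
	 * so nothing can submit work and wake the GPU in between. */
	undo_idling(dev_priv, interruptible);
}

The point is that interruptibility is only restored after the unmap
returns, so no new work can wake the GPU while the IOMMU unmap is in
flight.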
Ben