On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 55518e3..3bc1479 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
> 	gen6_set_rps(dev_priv->dev, new_delay);
> 	dev_priv->cur_delay = new_delay;
> 
> -	/*
> -	 * rps_lock not held here because clearing is non-destructive. There is
> -	 * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> -	 * by holding struct_mutex for the duration of the write.
> -	 */
> -	I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> +	I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
> 	mutex_unlock(&dev_priv->dev->struct_mutex);
> }
For this to work we'd need to hold the rps_lock (to avoid racing with the
irq handler). But imo my approach is conceptually simpler: the work func
grabs all outstanding PM interrupts and then enables them again in one go
(protected by rps_lock). And because the dev_priv->wq workqueue is
single-threaded (there's no point in using multiple threads when all work
items grab dev->struct_mutex anyway), we also cannot make a mess by running
work items in the wrong order (or in parallel). See the sketch below.
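Roughly like this minimal sketch (illustrative only, not the exact patch;
the locking pattern is what matters, the surrounding names just mirror
the i915 code of the time and details may differ):

static void gen6_pm_rps_work(struct work_struct *work)
{
	drm_i915_private_t *dev_priv = container_of(work, drm_i915_private_t,
						    rps_work);
	u32 pm_iir;

	/* Atomically take ownership of all outstanding PM interrupt bits
	 * and unmask them again in one go, under rps_lock, so nothing can
	 * race with the irq handler. */
	spin_lock_irq(&dev_priv->rps_lock);
	pm_iir = dev_priv->pm_iir;
	dev_priv->pm_iir = 0;
	I915_WRITE(GEN6_PMIMR, 0);
	spin_unlock_irq(&dev_priv->rps_lock);

	if (!pm_iir)
		return;

	/* struct_mutex serializes the actual rps frobbing. */
	mutex_lock(&dev_priv->dev->struct_mutex);
	/* ... compute new_delay from pm_iir and call gen6_set_rps() ... */
	mutex_unlock(&dev_priv->dev->struct_mutex);
}

-Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48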