On Tue, Nov 07, 2017 at 06:45:36PM +0200, Jani Nikula wrote:
> On Tue, 07 Nov 2017, Daniel Vetter <daniel.vet...@ffwll.ch> wrote:
> > Now that we have CI and pm_rpm fully passes (I guess the audio
> > folks have implemented proper runtime PM for snd-hda, hooray, please
> > confirm), it's time to enable this again by default.
> >
> > The real goal here is to have only one configuration that we fully
> > support, instead of tons of different configs with every
> > user/customer tuning it differently. And really, power management
> > should work by default, and should be enabled everywhere it is safe
> > to do so.
> >
> > v2: Completely new commit message, a few years passed since v1 ...
> 
> I suppose this is something that could use more than a single round of
> IGT CI before merging...?

The dedicated pm_rpm tests explicitly enable this (with an autosuspend
delay of 0) on all the machines we run in the shards, and they have been
green for a long time. So more CI rounds would only catch accidental
fallout everywhere else (timing shifts, essentially), which I think is a
fairly low risk.

Hence I'm not terribly worried about this one here.
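
For concreteness, here's a rough sketch of what "enable this with a
timeout of 0" amounts to through the standard runtime-PM sysfs knobs.
This is not the actual IGT helper code, and the device path and function
names are made up for illustration:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static void sysfs_write(const char *path, const char *val)
  {
  	int fd = open(path, O_WRONLY);

  	if (fd < 0)
  		return;
  	write(fd, val, strlen(val));
  	close(fd);
  }

  /* dev is e.g. "/sys/bus/pci/devices/0000:00:02.0" */
  static void force_runtime_pm(const char *dev)
  {
  	char path[128];

  	/* no idle grace period before runtime suspend kicks in */
  	snprintf(path, sizeof(path), "%s/power/autosuspend_delay_ms", dev);
  	sysfs_write(path, "0");

  	/* let the kernel runtime suspend the device on its own */
  	snprintf(path, sizeof(path), "%s/power/control", dev);
  	sysfs_write(path, "auto");
  }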
 
> BR,
> Jani.
> 
> >
> > Cc: Takashi Iwai <ti...@suse.de>
> > Cc: Liam Girdwood <liam.r.girdw...@intel.com>
> > Cc: "Yang, Libin" <libin.y...@intel.com>
> > Cc: "Lin, Mengdong" <mengdong....@intel.com>
> > Cc: "Li, Jocelyn" <jocelyn...@intel.com>
> > Cc: "Kaskinen, Tanu" <tanu.kaski...@intel.com>
> > Cc: "Zanoni, Paulo R" <paulo.r.zan...@intel.com>
> > Signed-off-by: Daniel Vetter <daniel.vet...@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_runtime_pm.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index 8315499452dc..dc24d008d8d4 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -3232,7 +3232,7 @@ void intel_runtime_pm_enable(struct drm_i915_private *dev_priv)
> >     struct pci_dev *pdev = dev_priv->drm.pdev;
> >     struct device *kdev = &pdev->dev;
> >  
> > -   pm_runtime_set_autosuspend_delay(kdev, 10000); /* 10s */
> > +   pm_runtime_set_autosuspend_delay(kdev, 100); /* 100ms */

Regarding the data requested for this: on bxt (probably the slowest box
we have), looking at dmesg from the pm_rpm tests I get the following values:

- device suspend: 3-4ms
- device resume: 10-11ms

So the grand total is roughly 15ms for a full suspend/resume transition.

On top of that, the display will keep us out of runtime pm, as will even
a mildly busy GT. Given that, I don't think the 100ms value is
aggressive at all.
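
For anyone not familiar with the autosuspend flavour of runtime PM: the
device is not suspended on the final put, the delay instead restarts
from the last "busy" mark. A simplified sketch (not the actual i915
wrapper code) of the usual access pattern:

  #include <linux/pm_runtime.h>

  static void touch_hw(struct device *kdev)
  {
  	pm_runtime_get_sync(kdev);		/* resume if suspended */

  	/* ... poke the hardware ... */

  	pm_runtime_mark_last_busy(kdev);	/* restart the 100ms clock */
  	pm_runtime_put_autosuspend(kdev);	/* suspend 100ms after idle */
  }

So back-to-back accesses within the 100ms window never pay the ~15ms
transition cost.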
-Daniel


> >     pm_runtime_mark_last_busy(kdev);
> >  
> >     /*
> > @@ -3251,6 +3251,8 @@ void intel_runtime_pm_enable(struct drm_i915_private *dev_priv)
> >             pm_runtime_use_autosuspend(kdev);
> >     }
> >  
> > +   pm_runtime_allow(kdev);
> > +
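
Side note on why this call is needed at all: if I read the PCI core
correctly (worth double-checking), devices come out of enumeration with
runtime PM forbidden, i.e. pci_pm_init() does roughly

  	pm_runtime_forbid(&dev->dev);
  	pm_runtime_set_active(&dev->dev);
  	pm_runtime_enable(&dev->dev);

so without the pm_runtime_allow() here userspace would have to opt in by
writing "auto" to the device's power/control sysfs file.
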
> >     /*
> >      * The core calls the driver load handler with an RPM reference held.
> >      * We drop that here and will reacquire it during unloading in
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
