On Wed, 2018-05-09 at 05:21 -0700, Rodrigo Vivi wrote:
> On Wed, May 09, 2018 at 10:13:21AM +0300, Jani Nikula wrote:
> > We've opted to use the maximum link rate and lane count for eDP
> > panels, because typically the maximum supported configuration
> > reported by the panel has matched the native resolution requirements
> > of the panel, and optimizing the link has led to problems.
> > 
> > With the eDP 1.4 rate select method and DSC features, this is
> > decreasingly the case, and there is a need to optimize the link
> > parameters. Moreover, eDP 1.3 already states that a fast link with
> > fewer lanes is preferred over a wide and slow one. (Wide and slow
> > should still be more reliable for longer cable lengths.)
> > 
> > Additionally, there have been reports of panels failing on arbitrary
> > link configurations, although arguably all configurations they claim
> > to support should work.
> > 
> > Optimize eDP 1.4+ link config fast and narrow.
> > 
> > Side note: The implementation has a near duplicate of the link config
> > function, with just the two inner for loops turned inside out. Perhaps
> > there would be a way to make this, say, more table driven to reduce
> > the duplication, but it seems like that would lead to duplication in
> > the table generation. We'll also have to see how the link config
> > optimization for DSC turns out.
> > 
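Just to make the table-driven idea concrete, here is a rough, untested
sketch (not part of this patch; the struct and helper names below are
made up): a single walker could consume a pre-sorted candidate list,
but the slow+wide and fast+narrow policies would then need two
differently ordered table generators, which is where the duplication
comes back.

struct link_config {
        int clock;              /* index into intel_dp->common_rates[] */
        int lane_count;
};

static bool
intel_dp_try_link_configs(struct intel_dp *intel_dp,
                          struct intel_crtc_state *pipe_config,
                          const struct link_config_limits *limits,
                          const struct link_config *table, int n)
{
        struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
        int bpp, i, mode_rate, link_clock, link_avail;

        /* Walk the candidates in the order the policy generated them. */
        for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
                mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
                                                   bpp);

                for (i = 0; i < n; i++) {
                        link_clock = intel_dp->common_rates[table[i].clock];
                        link_avail = intel_dp_max_data_rate(link_clock,
                                                            table[i].lane_count);

                        if (mode_rate <= link_avail) {
                                pipe_config->lane_count = table[i].lane_count;
                                pipe_config->pipe_bpp = bpp;
                                pipe_config->port_clock = link_clock;
                                return true;
                        }
                }
        }

        return false;
}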
> > Cc: Ville Syrjälä <ville.syrj...@linux.intel.com>
> > Cc: Manasi Navare <manasi.d.nav...@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.v...@intel.com>
> 
> Cc: Matt Atwood <matthew.s.atw...@intel.com>
> 
> I believe Matt is interested in this and knows who could test this
> for us.
I'm in the process of working with my counterpart at Google to quantify
how much power is actually saved, with patches for both chromeos-4.14
and chromeos-4.4, across multiple boards and multiple systems.
> 
> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105267
> > Signed-off-by: Jani Nikula <jani.nik...@intel.com>
> 
> This matches my understanding of the eDP 1.4 spec, and I believe this
> is the way to go, so
> 
> Acked-by: Rodrigo Vivi <rodrigo.v...@intel.com>
> 
> but probably better to get a proper review and wait for someone
> to test...
> 
> > 
> > ---
> > 
> > Untested. It's possible this helps the referenced bug. The downside
> > is that this patch has a bunch of dependencies that are too much to
> > backport to stable kernels. If the patch works, we may need to
> > consider hacking together an uglier backport.
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c | 73 ++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 62 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index dde92e4af5d3..1ec62965ece3 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -1768,6 +1768,42 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
> >          return false;
> >  }
> >  
> > +/* Optimize link config in order: max bpp, min lanes, min clock */
> > +static bool
> > +intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> > +                                  struct intel_crtc_state *pipe_config,
> > +                                  const struct link_config_limits *limits)
I personally called this intel_dp_compute_link_config_narrow as a
counterpart to the "wide" implementation.
> > +{
> > +        struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > +        int bpp, clock, lane_count;
> > +        int mode_rate, link_clock, link_avail;
> > +
> > +        for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
> > +                mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> > +                                                   bpp);
> > +
> > +                for (lane_count = limits->min_lane_count;
> > +                     lane_count <= limits->max_lane_count;
> > +                     lane_count <<= 1) {
> > +                        for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
> > +                                link_clock = intel_dp->common_rates[clock];
> > +                                link_avail = intel_dp_max_data_rate(link_clock,
> > +                                                                    lane_count);
> > +
> > +                                if (mode_rate <= link_avail) {
> > +                                        pipe_config->lane_count = lane_count;
> > +                                        pipe_config->pipe_bpp = bpp;
> > +                                        pipe_config->port_clock = link_clock;
> > +
> > +                                        return true;
> > +                                }
> > +                        }
> > +                }
> > +        }
> > +
> > +        return false;
> > +}
> > +
> > +
> >  static bool
> >  intel_dp_compute_link_config(struct intel_encoder *encoder,
> >                          struct intel_crtc_state *pipe_config)
> > @@ -1792,13 +1828,15 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> >          limits.min_bpp = 6 * 3;
> >          limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
> >  
> > -        if (intel_dp_is_edp(intel_dp)) {
> > +        if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
> >                  /*
> >                   * Use the maximum clock and number of lanes the eDP panel
> > -                 * advertizes being capable of. The panels are generally
> > -                 * designed to support only a single clock and lane
> > -                 * configuration, and typically these values correspond to the
> > -                 * native resolution of the panel.
> > +                 * advertizes being capable of. The eDP 1.3 and earlier panels
> > +                 * are generally designed to support only a single clock and
> > +                 * lane configuration, and typically these values correspond to
> > +                 * the native resolution of the panel. With eDP 1.4 rate select
> > +                 * and DSC, this is decreasingly the case, and we need to be
> > +                 * able to select less than maximum link config.
> >                   */
> >                  limits.min_lane_count = limits.max_lane_count;
> >                  limits.min_clock = limits.max_clock;
> > @@ -1812,12 +1850,25 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> >                   intel_dp->common_rates[limits.max_clock],
> >                   limits.max_bpp, adjusted_mode->crtc_clock);
> >  
> > -        /*
> > -         * Optimize for slow and wide. This is the place to add alternative
> > -         * optimization policy.
> > -         */
> > -        if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits))
> > -                return false;
> > +        if (intel_dp_is_edp(intel_dp)) {
> > +                /*
> > +                 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
> > +                 * section A.1: "It is recommended that the minimum number of
> > +                 * lanes be used, using the minimum link rate allowed for that
> > +                 * lane configuration."
> > +                 *
> > +                 * Note that we use the max clock and lane count for eDP 1.3 and
> > +                 * earlier, and fast vs. wide is irrelevant.
> > +                 */
This is where I got hung up on *many* eDP panels: this will break many
pre-eDP 1.4 panels. I gated mine on a DPCD revision compare against
DP_DPCD_REV.
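Roughly what I mean, as an untested sketch (the helper name is made up,
and note the patch as posted keys off the eDP DPCD revision via
edp_dpcd[0] vs. DP_EDP_14 instead):

/*
 * Hypothetical gate, assuming intel_dp->dpcd[] has already been read;
 * 0x14 is the raw DPCD_REV value reported by a DP 1.4 receiver.
 */
static bool intel_dp_can_link_config_fast(struct intel_dp *intel_dp)
{
        return intel_dp_is_edp(intel_dp) &&
                intel_dp->dpcd[DP_DPCD_REV] >= 0x14;
}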
> > +                if (!intel_dp_compute_link_config_fast(intel_dp, pipe_config,
> > +                                                       &limits))
> > +                        return false;
> > +        } else {
> > +                /* Optimize for slow and wide. */
> > +                if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config,
> > +                                                       &limits))
> > +                        return false;
> > +        }
> >  
> >     DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
> >                   pipe_config->lane_count, pipe_config->port_clock,
> > -- 
> > 2.11.0
> > 
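For anyone trying to picture the practical difference between the two
policies, here's a back-of-the-envelope example (my numbers, not from
the patch; units assume intel_dp_link_required() and
intel_dp_max_data_rate() both work in kB/s, with the link symbol clock
in kHz):

#include <stdio.h>

int main(void)
{
        /* 1920x1080@60: pixel clock 148500 kHz at 24 bpp */
        int mode_rate = 148500 * 24 / 8;        /* 445500 kB/s needed */

        /* slow and wide: lowest rate that fits, as many lanes as needed */
        printf("wide:   4 lanes @ RBR  (162000 kHz) -> %d kB/s\n", 4 * 162000);
        /* fast and narrow: fewest lanes that fit, whatever rate it takes */
        printf("narrow: 1 lane  @ HBR2 (540000 kHz) -> %d kB/s\n", 1 * 540000);
        printf("mode rate needed                    -> %d kB/s\n", mode_rate);

        return 0;
}

Both configurations carry the mode; the narrow one gets to shut down
three lanes, which is where the power savings should come from.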
I'm honestly glad that someone else cares about this. I know that with
the wide optimization on a product I was working on we saved 200 uW,
and I'm hoping for bigger savings with the narrow/fast implementation.

-Matt
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
