Hey,

On 19-07-16 at 18:21, Ville Syrjälä wrote:
> On Tue, Jul 19, 2016 at 04:50:49PM +0100, Chris Wilson wrote:
>> On Tue, Jul 19, 2016 at 06:25:42PM +0300, Ville Syrjälä wrote:
>>> On Tue, Jul 19, 2016 at 05:14:23PM +0200, Maarten Lankhorst wrote:
>>>> num_levels should be level+1, not level, else num_levels - 1 becomes
>>>> negative. This resulted in bogus watermarks being written to the first
>>>> 255 levels like below:
>>>>
>>>> [drm] Setting FIFO watermarks - C: plane=0, cursor=0, sprite0=0, sprite1=0, SR: plane=0, cursor=0 level=255 cxsr=0
>>>> [drm:chv_set_memory_dvfs [i915]] *ERROR* timed out waiting for Punit DDR DVFS request
>>>> [drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe C FIFO underrun
>>>> [drm:chv_set_memory_dvfs [i915]] *ERROR* timed out waiting for Punit DDR DVFS request
>>>>
>>>> Testcase: kms_atomic_transition
>>>> Fixes: 262cd2e154c2 ("drm/i915: CHV DDR DVFS support and another watermark rewrite")
>>>> Cc: sta...@vger.kernel.org
>>>> Cc: Ville Syrjälä <ville.syrj...@linux.intel.com>
>>>> Signed-off-by: Maarten Lankhorst <maarten.lankho...@linux.intel.com>
>>>> ---
>>>> Urgent fix for watermark support. This is definitely a pre-requisite for this series.
>>>> With this I've noticed that patch "[RFC 3/8] drm/i915/vlv: Move fifo_size from
>>>> intel_plane_wm_parameters to vlv_wm_state" introduces a regression with invalid FIFO split.
>>>>
>>>> I need to find out what's going wrong in that patch before this series can be applied.
>>>>
>>>>  drivers/gpu/drm/i915/intel_pm.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
>>>> index 376c60b98515..8defdcc54529 100644
>>>> --- a/drivers/gpu/drm/i915/intel_pm.c
>>>> +++ b/drivers/gpu/drm/i915/intel_pm.c
>>>> @@ -1148,7 +1148,7 @@ static void vlv_compute_wm(struct intel_crtc *crtc)
>>>>                    }
>>>>            }
>>>>  
>>>> -          wm_state->num_levels = level;
>>>> +          wm_state->num_levels = level + 1;
>>> Nope. The loop above breaks when the current level is bad, hence level-1
>>> is actually the highest usable level.
>> Without knowing the limits of plane->wm.fifo_size, it looks like it can
>> break on level == 0 though.
> Hmm. That shouldn't be possible. So looks like a bug snuck in.
>
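(Side note for readers following along: a minimal standalone sketch of the loop
shape being discussed here; the names and values are made up and this is not the
actual intel_pm.c code. The loop breaks at the first level whose watermark no
longer fits, so 'level' ends up one past the highest usable level and
num_levels = level is already the count of valid levels. If the break can happen
at level 0, num_levels becomes 0 and "num_levels - 1" goes negative, which is
presumably how the bogus level=255 shows up in the log above.)

#include <stdio.h>

int main(void)
{
        const int fifo_size = 511;      /* assumed per-plane FIFO budget */
        int level, num_levels;

        for (level = 0; level < 8; level++) {
                int wm = 100 * (level + 1);     /* stand-in for vlv_compute_wm_level() */

                if (wm > fifo_size)
                        break;          /* first level that does not fit */
        }

        num_levels = level;             /* levels 0 .. level-1 are usable */
        printf("num_levels = %d, highest usable level = %d\n",
               num_levels, num_levels - 1);
        return 0;
}
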
>> Might as well set that hack to paranoid levels:
>>
>> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
>> index 630b116988f6..e8c2874b8629 100644
>> --- a/drivers/gpu/drm/i915/intel_pm.c
>> +++ b/drivers/gpu/drm/i915/intel_pm.c
>> @@ -1124,11 +1124,13 @@ static void vlv_compute_wm(struct intel_crtc *crtc)
>>                 /* normal watermarks */
>>                 for (level = 0; level < wm_state->num_levels; level++) {     
>>                         int wm = vlv_compute_wm_level(plane, crtc, state, level);
>> -                       int max_wm = plane->base.type == DRM_PLANE_TYPE_CURSOR ? 63 : 511;
>> -
>>                         /* hack */
>> -                       if (WARN_ON(level == 0 && wm > max_wm))
>> -                               wm = max_wm;
> Actually this should just be 
>
> if (WARN_ON(level == 0 && wm > fifo_size))
>       wm = fifo_size;
>
> assuming we want to keep the hack around for now.
>
> Eventually we'll want to make it just return an error though.
Yes, that's what I came up with.
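For what it's worth, a toy standalone sketch of that "return an error" direction
(the helper name and values are made up, this is not a proposed patch):

#include <errno.h>
#include <stdio.h>

/* Reject a level 0 watermark that does not fit, instead of clamping it. */
static int check_level0_wm(int wm, int fifo_size)
{
        if (wm > fifo_size)
                return -EINVAL; /* level 0 must always fit in the FIFO */
        return 0;
}

int main(void)
{
        printf("wm=100, fifo=511 -> %d\n", check_level0_wm(100, 511));
        printf("wm=600, fifo=511 -> %d\n", check_level0_wm(600, 511));
        return 0;
}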

But we still need to clamp further, probably to plane->wm.fifo_size.
However, sr_fifo_size is also clamped to 511 because of the level calculations here.

Below that, it sets sr[level].plane = min(sr_fifo_size, wm[level].plane),
which seems weird. How can this ever end up being anything other than
wm[level].plane, since sr_fifo_size >= max_wm is always true?

I guess vlv_invert_wms will invert it, but then we could simply set it only
there, or remove the min().
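
As a sanity check, a toy standalone snippet (values assumed: both sr_fifo_size
and the non-cursor max_wm capped at 511) showing that the min() can never take
effect as long as the per-level plane watermarks are bounded by max_wm:

#include <assert.h>
#include <stdio.h>

int main(void)
{
        const int max_wm = 511;         /* non-cursor per-level cap */
        const int sr_fifo_size = 511;   /* clamped to the same bound */
        int wm_plane;

        for (wm_plane = 0; wm_plane <= max_wm; wm_plane++) {
                int sr_plane = sr_fifo_size < wm_plane ? sr_fifo_size : wm_plane;

                assert(sr_plane == wm_plane);   /* min() never picks sr_fifo_size */
        }
        printf("min(sr_fifo_size, wm) == wm for every wm <= %d\n", max_wm);
        return 0;
}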

~Maarten