The total (nominal) link bandwidth, which we store in terms of PBN, is
a function of link rate and lane count. Currently, however, we hardcode
it to 2560 PBN, which results in an incorrect computation of the total
slot count.

E.g., for a 2-lane HBR2 configuration driving a 4k@60Hz, 24bpp mode:
  nominal link bw = 1080 MBps = 1280 PBN = 64 slots
  required bw = 533.25 MHz * 3 Bpp = 1599.75 MBps = 1896 PBN
     with +0.6% margin = 1907.376 PBN = 96 slots
  This exceeds the maximum possible 64 slots, but because we compute the
  available bandwidth as 2560 PBN = 128 slots, we fail to return an
  error. (A small sketch of this arithmetic follows below.)

So, let's fix this by calculating the total link bandwidth as

  link bw (PBN) = BW per time slot (PBN) * max. time slots

where max. time slots is 64.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandi...@intel.com>
---
 drivers/gpu/drm/drm_dp_mst_topology.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 04e4571..26dfd99 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -2038,9 +2038,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
                        ret = -EINVAL;
                        goto out_unlock;
                }
-
-               mgr->total_pbn = 2560;
-               mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
+               mgr->total_pbn = 64 * mgr->pbn_div;
+               mgr->total_slots = 64;
                mgr->avail_slots = mgr->total_slots;
 
                /* add initial branch device at LCT 1 */
-- 
2.7.4
