On Wed, Oct 08, 2025 at 05:06:43PM +0300, Ville Syrjälä wrote:
> On Wed, Oct 08, 2025 at 02:04:03PM +0200, Maxime Ripard wrote:
> > The DP MST implementation relies on a drm_private_obj that is
> > initialized by allocating and initializing a state and then passing it
> > to drm_atomic_private_obj_init().
> >
> > Since we're gradually moving away from that pattern to the more
> > established one relying on a reset implementation, let's migrate this
> > instance to the new pattern.
> >
> > Signed-off-by: Maxime Ripard <[email protected]>
> > ---
> > drivers/gpu/drm/display/drm_dp_mst_topology.c | 39 ++++++++++++++++++---------
> > 1 file changed, 26 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > index 64e5c176d5cce9df9314f77a0b4c97662c30c070..255fbdcea9f0b6376d15439e3da1dc02be472a20 100644
> > --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
> > @@ -5181,10 +5181,34 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj,
> >
> > kfree(mst_state->commit_deps);
> > kfree(mst_state);
> > }
> >
> > +static void drm_dp_mst_reset(struct drm_private_obj *obj)
> > +{
> > + struct drm_dp_mst_topology_mgr *mgr =
> > + to_dp_mst_topology_mgr(obj);
> > + struct drm_dp_mst_topology_state *mst_state;
> > +
> > + if (obj->state) {
> > + drm_dp_mst_destroy_state(obj, obj->state);
> > + obj->state = NULL;
>
> I'm not a big fan of this "free+reallocate without any way to handle
> failures" pattern.
>
> Currently i915 does not do any of this, and so there are no unhandled
> points of failure. But the mst and tunneling changes here force it
> on i915 as well.
>
> I think for the common things it would be nicer to just implement
> the reset as just that "a reset of the current state", and leave
> the "let's play fast and loose with kmalloc() failures" to the
> drivers that want it.
>
> That said I haven't even thought through whether this mst and
> tunneling state reset might have some undesired side effects
> since we previously did none of it. I suspect it should be fine,
> but at least the mst code does some questionable things with
> the state so not 100% sure.
>
> Imre, do you recall if we might somehow depend on preserving
> the state across drm_mode_config_reset()?
Yes, the stream payload info in the MST state and the stream BW info in
the tunnel state are computed for the active streams (i.e. CRTCs) before
suspend, and this info is reused after resume. These active streams must
be restored to their pre-suspend state after resume, which requires the
above payload/BW info.

The restore should happen without recomputing the state for the CRTCs,
so the payload/BW info should also be preserved across suspend/resume.

The CRTC/plane/connector objects, which already have the reset semantics
this patch now adds for private objects, do preserve their state across
suspend/resume; see drm_atomic_helper_duplicate_state() and
drm_atomic_helper_commit_duplicated_state().
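
To illustrate with a rough sketch (foo_* is a made-up driver here, not
real code), this is roughly the suspend/resume flow that the
drm_atomic_helper_suspend() / drm_atomic_helper_resume() pair provides,
using the two helpers above internally:

#include <linux/err.h>

#include <drm/drm_atomic_helper.h>

/* Hypothetical driver struct, for illustration only. */
struct foo_device {
	struct drm_device drm;
	struct drm_atomic_state *suspend_state;
};

static int foo_suspend(struct foo_device *foo)
{
	/*
	 * Duplicates the state of all objects (including private objs
	 * such as the MST/tunnel state) via
	 * drm_atomic_helper_duplicate_state() and keeps it around.
	 */
	foo->suspend_state = drm_atomic_helper_suspend(&foo->drm);

	return PTR_ERR_OR_ZERO(foo->suspend_state);
}

static int foo_resume(struct foo_device *foo)
{
	/*
	 * Re-commits the pre-suspend state via
	 * drm_atomic_helper_commit_duplicated_state() without
	 * recomputing it, so the payload/BW info computed before
	 * suspend has to survive until this point.
	 */
	return drm_atomic_helper_resume(&foo->drm, foo->suspend_state);
}
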
> > + }
> > +
> > + mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
> > + if (!mst_state)
> > + return;
> > +
> > + __drm_atomic_helper_private_obj_reset(obj, &mst_state->base);
> > +
> > + mst_state->total_avail_slots = 63;
> > + mst_state->start_slot = 1;
> > +
> > + mst_state->mgr = mgr;
> > + INIT_LIST_HEAD(&mst_state->payloads);
> > +}
> > +
> > static bool drm_dp_mst_port_downstream_of_branch(struct drm_dp_mst_port *port,
> > struct drm_dp_mst_branch *branch)
> > {
> > while (port->parent) {
> > if (port->parent == branch)
> > @@ -5619,10 +5643,11 @@ int drm_dp_mst_atomic_check(struct drm_atomic_state *state)
> > EXPORT_SYMBOL(drm_dp_mst_atomic_check);
> >
> > const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs = {
> > .atomic_duplicate_state = drm_dp_mst_duplicate_state,
> > .atomic_destroy_state = drm_dp_mst_destroy_state,
> > + .reset = drm_dp_mst_reset,
> > };
> > EXPORT_SYMBOL(drm_dp_mst_topology_state_funcs);
> >
> > /**
> > * drm_atomic_get_mst_topology_state: get MST topology state
> > @@ -5705,12 +5730,10 @@ EXPORT_SYMBOL(drm_atomic_get_new_mst_topology_state);
> > int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
> > struct drm_device *dev, struct drm_dp_aux *aux,
> > int max_dpcd_transaction_bytes, int max_payloads,
> > int conn_base_id)
> > {
> > - struct drm_dp_mst_topology_state *mst_state;
> > -
> > mutex_init(&mgr->lock);
> > mutex_init(&mgr->qlock);
> > mutex_init(&mgr->delayed_destroy_lock);
> > mutex_init(&mgr->up_req_lock);
> > mutex_init(&mgr->probe_lock);
> > @@ -5740,22 +5763,12 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
> > mgr->aux = aux;
> > mgr->max_dpcd_transaction_bytes = max_dpcd_transaction_bytes;
> > mgr->max_payloads = max_payloads;
> > mgr->conn_base_id = conn_base_id;
> >
> > - mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
> > - if (mst_state == NULL)
> > - return -ENOMEM;
> > -
> > - mst_state->total_avail_slots = 63;
> > - mst_state->start_slot = 1;
> > -
> > - mst_state->mgr = mgr;
> > - INIT_LIST_HEAD(&mst_state->payloads);
> > -
> > drm_atomic_private_obj_init(dev, &mgr->base,
> > - &mst_state->base,
> > + NULL,
> > &drm_dp_mst_topology_state_funcs);
> >
> > return 0;
> > }
> > EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
> >
> > --
> > 2.51.0
>
> --
> Ville Syrjälä
> Intel