Hi Dan, Rob, Helge

> > -   while (1) {
> > +   for_each_endpoint_of_node(np, epn) {
> >             struct v4l2_fwnode_endpoint v4l2_epn = { .bus_type = 0 };
> > -
> > -           epn = of_graph_get_next_endpoint(np, epn);
> > -           if (!epn)
> > -                   return 0;
> > +           int ret;
> >  
> >             ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(epn),
> >                                              &v4l2_epn);
> >             if (ret) {
> > -                   ret = -EINVAL;
> > +                   of_node_put(epn);
> >                     dev_err(dev, "Could not parse the endpoint\n");
> > -                   break;
> > +                   return -EINVAL;
> >             }
> >  
> >             subdev_entity = devm_kzalloc(dev, sizeof(*subdev_entity),
> >                                          GFP_KERNEL);
> >             if (!subdev_entity) {
> > -                   ret = -ENOMEM;
> > -                   break;
> > +                   of_node_put(epn);
> > +                   return -ENOMEM;
> >             }
> >             subdev_entity->epn = epn;
> 
> This code is an example of what Laurent was talking about.  We're
> storing "subdev_entity->epn = epn" but then not incrementing the
> refcount.  Perhaps it's not necessary?
> 
> The difference between this and _scoped() would be if we stored epn and
> then returned.  I feel like that's less common.  We could detect that
> sort of thing using static analysis if it turned out to be an issue.

Thank you for pointing out a good sample, Dan.

But I wonder whether this patch-set should include and use a _scoped() macro?
If the above is just a comment, the _scoped() macro can be a separate patch-set.
If the above is pointing out an issue, v4 needs to include the _scoped() macro.
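
For reference, a minimal sketch of what such a helper could look like,
assuming the __free(device_node) cleanup class from <linux/of.h> and the
existing of_graph_get_next_endpoint(); the name
for_each_endpoint_of_node_scoped() is only illustrative here, not an
existing macro:

	#define for_each_endpoint_of_node_scoped(parent, child)			\
		for (struct device_node *child __free(device_node) =		\
		     of_graph_get_next_endpoint(parent, NULL);			\
		     child != NULL;						\
		     child = of_graph_get_next_endpoint(parent, child))

With something like that, the early returns in the quoted hunk above would
no longer need the explicit of_node_put(epn) calls, because the cleanup
releases whatever the loop variable still holds on exit.  Storing the node
would still need an explicit reference if it has to outlive the iteration,
e.g.:

	/* hold our own reference on the endpoint we keep */
	subdev_entity->epn = of_node_get(epn);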

Thank you for your help !!
Best regards
---
Kuninori Morimoto
