Neeraj Kumar wrote:
> Updated mutex_lock() with guard(mutex)()

You are missing the 'why' justification here.
The detail is that __pmem_label_update() is getting more complex and this
change helps to reduce the complexity later.  However...

[snip]

> @@ -998,9 +998,8 @@ static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
> 		label_ent = kzalloc(sizeof(*label_ent), GFP_KERNEL);
> 		if (!label_ent)
> 			return -ENOMEM;
> -		mutex_lock(&nd_mapping->lock);
> +		guard(mutex)(&nd_mapping->lock);
> 		list_add_tail(&label_ent->list, &nd_mapping->labels);
> -		mutex_unlock(&nd_mapping->lock);

... this change is of little value.

And...

> 	}
>
> 	if (ndd->ns_current == -1 || ndd->ns_next == -1)
> @@ -1039,7 +1038,7 @@ static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
> 	if (!preamble_next(ndd, &nsindex, &free, &nslot))
> 		return 0;
>
> -	mutex_lock(&nd_mapping->lock);
> +	guard(mutex)(&nd_mapping->lock);
> 	list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
> 		struct nd_namespace_label *nd_label = label_ent->label;
>
> @@ -1061,7 +1060,6 @@ static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
> 		nd_mapping_free_labels(nd_mapping);
> 		dev_dbg(ndd->dev, "no more active labels\n");
> 	}
> -	mutex_unlock(&nd_mapping->lock);

... this technically changes the scope of the lock to include writing the
index under the lock.  It does not affect anything AFAICS, but really these
last two changes should be dropped from this patch.

Ira

>
> 	return nd_label_write_index(ndd, ndd->ns_next,
> 			nd_inc_seq(__le32_to_cpu(nsindex->seq)), 0);
> --
> 2.34.1