On Mon, Dec 08, 2025 at 01:27:32PM +0100, Martin Wilck wrote:
> On Thu, 2025-12-04 at 14:33 -0500, Benjamin Marzinski wrote:
> > On Wed, Dec 03, 2025 at 08:19:33AM -0800, Brian Bunker wrote:
>
> > How about simplifying the purge thread slightly. When the checker gets
> > a path it is supposed to purge, it sets purge_path, just like now. I
> > don't think we need to track purge_retries at all. When the purge loop
> > is triggered, it locks the vecs lock and finds a single path to purge,
> > like now. The only info it needs to save is the udev device (with a
> > new refcount taken, like now) and a dup of pp->fd (you need this to
> > guarantee that the devnode won't get reused if the device is deleted
> > and a new one is re-added). You could also save the name, but
> > udev_device_get_devnode() will get it for you. Like I mentioned above,
> > you need to have a pthread cleanup handler that gets run to clean
> > these up if they are set and the thread is cancelled, just like how
> > the vecs lock is handled. Then purgeloop() clears pp->purge_path,
> > drops the vecs lock, does the sysfs delete, and cleans up the
> > purge_info by calling pthread_cleanup_pop(). Finally it loops back to
> > reacquiring the vecs lock and finding the next path to delete. I don't
> > really see the value in all the stats tracking. If it goes through all
> > the paths and doesn't find one in need of purging, it waits, just like
> > now.
>
> I'd go one step further and suggest an approach similar to what we do
> in the uevent handler, for which we have a separate list of uevents
> that shares no data with the main multipath threads.
>
> Roughly like this:
>
> enum purge_state {
>         PURGE_REQUESTED = 1,
>         PURGE_DONE,
>         PURGE_ERROR
> };
>
> struct path_to_purge {
>         struct list_head queue;
>         struct udev_device *udev;  /* one ref taken while on the list */
>         int fd;                    /* dup of pp->fd */
>         enum purge_state state;
> };
>
> // global list_head for the queue of paths to be purged.
> static LIST_HEAD(purge_queue);
>
> If multipathd detects a path to purge in the checker thread, it
> allocates and adds a path_to_purge entry with state PURGE_REQUESTED to
> the queue and triggers the purge thread e.g. with a pthreads condition
> variable.
>
> The purge thread then walks through the list (without modifying the
> list itself), sends a delete request to the kernel and atomically sets
> the state field to PURGE_DONE when successful.
>
> multipathd's checker thread walks the list next time and removes
> entries that have state != PURGE_REQUESTED, closing the fd and
> unrefing the struct udev. IOW the resource allocation/freeing for this list
> and its elements would be handled by the checker alone. The checker
> would not immediately drop internal references to the path; this would
> be handled by the remove uevent handler as usual.
>
> The purge thread would never write to the list, except to the state
> attribute of list members (which we could protect from being freed via
> RCU).
>
> This would make the purge thread almost an independent process.
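
Just so we're talking about the same thing, here's roughly how I read
it (a sketch only; purge_lock, purge_cond and sysfs_delete_device() are
names I'm making up, and I'm ignoring for now how the list walk itself
is protected):

static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t purge_cond = PTHREAD_COND_INITIALIZER;

/* checker side: queue a disconnected path and wake the purge thread */
static void request_purge(struct path *pp)
{
        struct path_to_purge *ptp;

        ptp = calloc(1, sizeof(*ptp));
        if (!ptp)
                return;
        ptp->udev = udev_device_ref(pp->udev);  /* ref held while queued */
        ptp->fd = dup(pp->fd);                  /* keeps the devnode pinned */
        ptp->state = PURGE_REQUESTED;

        pthread_mutex_lock(&purge_lock);
        list_add_tail(&ptp->queue, &purge_queue);
        pthread_cond_signal(&purge_cond);
        pthread_mutex_unlock(&purge_lock);
}

/* purge thread: walk the list, never add or remove entries, only flip
 * the state field */
static void purge_queued_paths(void)
{
        struct path_to_purge *ptp;

        list_for_each_entry(ptp, &purge_queue, queue) {
                if (ptp->state != PURGE_REQUESTED)
                        continue;
                if (sysfs_delete_device(ptp->udev) == 0)
                        ptp->state = PURGE_DONE;
                else
                        ptp->state = PURGE_ERROR;
        }
}

/* checker side, next time around: reap finished entries */
static void reap_purged_paths(void)
{
        struct path_to_purge *ptp, *tmp;

        list_for_each_entry_safe(ptp, tmp, &purge_queue, queue) {
                if (ptp->state == PURGE_REQUESTED)
                        continue;
                list_del(&ptp->queue);
                close(ptp->fd);
                udev_device_unref(ptp->udev);
                free(ptp);
        }
}
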
I'm a little confused about this design. Let's say a path gets removed
while on this list. If the purgeloop doesn't modify the list, then how
does the remove function know that it's safe to remove the item? The
purgeloop could be in the process of notifying sysfs right now, making
use of the udev device. Similarly, multipathd will change a path's udev
device when it gets a new uevent. How does it know if it's safe to
update the pointer on this list? Someone has to unref the purgeloop's
extra ref, and multipathd can't, once it's updated pp->udev and
forgotten the old value. You might be able to solve this with more
purge_states, so that the purgeloop could signal that it's actually
working on the device, but I'm pretty sure you'd need actual locking to
avoid races here. Aside from the timing issue, we're adding multiple
places where multipathd needs to check and modify this list as part of
other actions, which seems like it gets pretty fiddly.

If we instead say that the purgeloop is in charge of cleaning up the
list, there would still need to be locking, because multipathd might
want to take items off the list if they become reconnected or removed
before the thread gets around to purging them. In theory, you could
simply not modify the list in those cases and let the purgeloop run on
those paths, but then you would be removing paths that have since
reconnected, and possibly keeping a number of references to udev
devices and fds for paths that are gone (assuming that deleting devices
can actually block for a chunk of time). Also, multipathd would need to
check the list so paths didn't get added multiple times if the checker
re-ran before the path was purged, unless, again, you were fine with
keeping unnecessary extra references to the device.

My original idea was to try to keep the purgeloop from needing to grab
the vecs lock, but I just kept coming up with extra complications that
this would cause. So I settled on something where the normal multipath
code just needs to set a path variable to communicate with the
purgeloop, and the purgeloop just needs to hold the vecs lock while
looking through the path list until it finds a device in need of
purging.
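
The loop body I have in mind looks roughly like this (very much a
sketch: the pthread cleanup handlers for the lock, the udev ref and the
dup'd fd are left out, and wait_for_purge_trigger() and
sysfs_delete_device() are placeholders for however the waiting and the
sysfs write actually get done):

static void *purgeloop(void *ap)
{
        struct vectors *vecs = (struct vectors *)ap;
        struct path *pp;
        struct udev_device *udev;
        int fd, i;

        while (1) {
                udev = NULL;
                fd = -1;

                lock(&vecs->lock);  /* usual cleanup push/pop not shown */
                vector_foreach_slot(vecs->pathvec, pp, i) {
                        if (!pp->purge_path)
                                continue;
                        pp->purge_path = 0;
                        udev = udev_device_ref(pp->udev);
                        fd = dup(pp->fd);  /* keep the devnode from being
                                            * reused out from under us */
                        break;
                }
                unlock(&vecs->lock);

                if (!udev) {
                        /* nothing flagged; sleep until the checker
                         * flags another path */
                        wait_for_purge_trigger();
                        continue;
                }

                /* do the slow part without holding the vecs lock */
                sysfs_delete_device(udev);
                if (fd >= 0)
                        close(fd);
                udev_device_unref(udev);
        }
        return NULL;
}
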
My other option was to make the purgeloop do much more work. The idea
was that multipathd would just put the pathname on a list when it found
a disconnected path, and the purge loop would pull items off that list.
This would still need locking, but you could probably do it with RCU
locking. Alternatively, you could use sockets to send the pathnames to
the purgeloop thread, to avoid the locking and possibly make the
purgeloop its own process completely.
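
If the socket route looks better, something as simple as a socketpair
would do (a sketch; SOCK_SEQPACKET keeps each pathname as one message,
and purge_one_path() is sketched further down):

static int purge_sock[2];

/* at startup: one end for the checker, one for the purgeloop */
static int setup_purge_socket(void)
{
        return socketpair(AF_UNIX, SOCK_SEQPACKET, 0, purge_sock);
}

/* checker side: hand off a kernel name like "sdc" */
static void send_purge_request(const char *name)
{
        if (send(purge_sock[0], name, strlen(name) + 1, MSG_DONTWAIT) < 0)
                condlog(1, "failed to queue %s for purging", name);
}

/* purgeloop side: block until the next pathname arrives */
static void purge_requests(void)
{
        char name[64];
        ssize_t len;

        while ((len = recv(purge_sock[1], name, sizeof(name) - 1, 0)) > 0) {
                name[len] = '\0';
                purge_one_path(name);  /* see below */
        }
}
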
For each path, the purgeloop would
1. open the path device (to make sure it didn't get deleted and reused)
2. check the path holders to make sure it was part of a multipath device
3. run a TUR check on the path to make sure that it was still
disconnected
4. delete the path, if so.
5. close the path device.

This is completely disconnected from multipathd, minus the pathname
communication, and should be able to make the right decisions about
paths that were removed or reconnected, without sharing any resources
that need cleaning up. It just makes the purgeloop duplicate a bunch of
work that multipathd already did.
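
In code, each of those steps would look something like this (a rough
sketch; path_still_disconnected() stands in for the TUR check over
SG_IO):

#include <dirent.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int path_still_disconnected(int fd);  /* TUR via SG_IO, not shown */

/* step 2: is there a dm-* holder under /sys/block/<name>/holders/ ? */
static int held_by_dm(const char *name)
{
        char dirname[PATH_MAX];
        DIR *dir;
        struct dirent *d;
        int found = 0;

        snprintf(dirname, sizeof(dirname), "/sys/block/%s/holders", name);
        dir = opendir(dirname);
        if (!dir)
                return 0;
        while ((d = readdir(dir)) != NULL)
                if (!strncmp(d->d_name, "dm-", 3))
                        found = 1;
        closedir(dir);
        return found;
}

static void purge_one_path(const char *name)
{
        char devnode[PATH_MAX], attr[PATH_MAX];
        int fd, sysfs_fd;

        /* 1. open the device, so the devnode can't be deleted and
         *    reused underneath us while we're looking at it */
        snprintf(devnode, sizeof(devnode), "/dev/%s", name);
        fd = open(devnode, O_RDONLY | O_NONBLOCK);
        if (fd < 0)
                return;

        /* 2. + 3. only purge if it still belongs to a multipath map
         *    and the TUR says it is still disconnected */
        if (held_by_dm(name) && path_still_disconnected(fd)) {
                /* 4. ask the kernel to delete the SCSI device */
                snprintf(attr, sizeof(attr), "/sys/block/%s/device/delete",
                         name);
                sysfs_fd = open(attr, O_WRONLY);
                if (sysfs_fd >= 0) {
                        /* a failed write just means the checker will
                         * flag the path again next time */
                        (void)write(sysfs_fd, "1", 1);
                        close(sysfs_fd);
                }
        }

        /* 5. drop our reference to the devnode */
        close(fd);
}
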
Thoughts? Am I misunderstanding your design?
-Ben
> Opinions?
>
> Martin