On 08.04.22 10:04, Fabian Grünbichler wrote:
> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>> If two RBD storages use the same pool, but connect to different
>> clusters, we cannot say to which cluster the mapped RBD image belongs
>> if krbd is used. To avoid potential data loss, we need to verify that
>> no other storage is configured that could have a volume mapped under
>> the same path before we create the image.
>>
>> The ambiguous mapping is in
>> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>>
>> Once we can tell the clusters apart in the mapping, we can remove
>> these checks again.
>>
>> See bug #3969 for more information on the root cause.
>>
>> Signed-off-by: Aaron Lauterer <a.laute...@proxmox.com>
>
> Acked-by: Fabian Grünbichler <f.gruenbich...@proxmox.com>
> Reviewed-by: Fabian Grünbichler <f.gruenbich...@proxmox.com>
>
> (small nit below, and given the rather heavy-handed approach a 2nd ack
> might not hurt.. IMHO, a few easily fixable false-positives beat more
> users actually running into this with move disk/volume and losing
> data..)

The obvious question to me is: why bother with this workaround when we
can already make udev create the symlink? Patching the rules file
and/or binary shipped by ceph-common, or shipping our own script +
rule, would seem relatively simple.
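
Roughly what I have in mind (an untested sketch only; the file names
50-rbd-pve.rules and pve-rbdnamer as well as the /dev/rbd-pve/ prefix
are made up here, and it assumes the kernel exposes cluster_fsid and
pool_ns for mapped devices under /sys/bus/rbd/devices/<id>/):

# /usr/lib/udev/rules.d/50-rbd-pve.rules (hypothetical name)
KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/libexec/pve-rbdnamer %k", SYMLINK+="rbd-pve/%c"
KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="partition", PROGRAM="/usr/libexec/pve-rbdnamer %k", SYMLINK+="rbd-pve/%c-part%n"

#!/bin/sh
# /usr/libexec/pve-rbdnamer (hypothetical name): print
# "<fsid>/<pool>[/<ns>]/<image>" for a mapped rbd device.
# Called by udev with the kernel device name (e.g. "rbd0") as $1.
DEV="$1"
# "rbd0" -> "0", "rbd0p1" -> "0"
ID="$(echo "$DEV" | sed 's/^rbd//; s/p[0-9]*$//')"
SYS="/sys/bus/rbd/devices/$ID"

FSID="$(cat "$SYS/cluster_fsid")"   # assumption: attribute is available
POOL="$(cat "$SYS/pool")"
IMAGE="$(cat "$SYS/name")"
NS=""
[ -r "$SYS/pool_ns" ] && NS="$(cat "$SYS/pool_ns")"

if [ -n "$NS" ]; then
    echo "$FSID/$POOL/$NS/$IMAGE"
else
    echo "$FSID/$POOL/$IMAGE"
fi

With something like that in place the mapping would live under
/dev/rbd-pve/<cluster fsid>/<pool>/... and be unambiguous per cluster,
so the extra checks in the storage plugin could be dropped again.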