On 11.04.22 11:08, Aaron Lauterer wrote:
> On 4/11/22 09:39, Thomas Lamprecht wrote:
>> The obvious question to me is: why bother with this workaround when we can
>> make udev create the symlink now already?
>>
>> Patching the rules file and/or binary shipped by ceph-common, or shipping our
>> own such script + rule, would seem relatively simple.
>
> The thinking was to implement a stop gap to have more time to consider a
> solution that we can upstream.
If the stop-gap were trivial and non-intrusive (compared to the Correct
Solution™), or if it were really pressing, sure, but IMO the situation here
isn't exactly /that/ clear cut.

> Fabian might have some more thoughts on it, but yeah, right now we could patch
> the udev rules and the ceph-rbdnamer which is called by the rule to create
> the current paths and then additionally the cluster-specific ones.
> Unfortunately, it seems like the unwieldy cluster fsid is the only identifier
> we have for the cluster.

The unwieldiness doesn't really matter for us here though; this is just an
internal check.

> Some more (smaller) changes might be necessary, if the implementation we
> manage to upstream ends up a bit different. But that should not be much of
> an issue AFAICT.

We can always ship our downstream solution as whatever we want and sync up on
a major release, so not a real problem.

FWIW, with the following patch applied to the storage repo, the symlinks get
created (it may need a trigger to reload the udev rules, or do it manually
with `udevadm control -R`). We'd then only need a check to prioritize the
/dev/rbd-pve/$fsid/... paths first; do you have time to give that a go?
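A rough sketch of what that prioritization could look like (the helper name and
the optional base-dir parameter are made up here purely for illustration, so
the sketch can be exercised without mapped rbd devices; the real check would
of course live in the storage plugin itself):

```shell
#!/bin/sh
# Hypothetical lookup order: prefer the cluster-scoped symlink created by
# the udev rule from the patch below, fall back to the stock /dev/rbd path.
# The base directory defaults to /dev and is a parameter only for testing.
rbd_dev_path() {
    fsid=$1; pool=$2; image=$3; base=${4:-/dev}
    if [ -e "$base/rbd-pve/$fsid/$pool/$image" ]; then
        echo "$base/rbd-pve/$fsid/$pool/$image"
    else
        echo "$base/rbd/$pool/$image"
    fi
}
```

So e.g. `rbd_dev_path <fsid> rbd vm-100-disk-0` would return the rbd-pve path
once udev has created the symlink, and the legacy path otherwise.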
diff --git a/Makefile b/Makefile
index 431db16..029b586 100644
--- a/Makefile
+++ b/Makefile
@@ -41,6 +41,7 @@ install: PVE pvesm.1 pvesm.bash-completion pvesm.zsh-completion
 	install -d ${DESTDIR}${SBINDIR}
 	install -m 0755 pvesm ${DESTDIR}${SBINDIR}
 	make -C PVE install
+	make -C udev-rbd install
 	install -d ${DESTDIR}/usr/share/man/man1
 	install -m 0644 pvesm.1 ${DESTDIR}/usr/share/man/man1/
 	gzip -9 -n ${DESTDIR}/usr/share/man/man1/pvesm.1
diff --git a/udev-rbd/50-rbd-pve.rules b/udev-rbd/50-rbd-pve.rules
new file mode 100644
index 0000000..79432df
--- /dev/null
+++ b/udev-rbd/50-rbd-pve.rules
@@ -0,0 +1,2 @@
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c"
+KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="partition", PROGRAM="/usr/libexec/ceph-rbdnamer-pve %k", SYMLINK+="rbd-pve/%c-part%n"
diff --git a/udev-rbd/Makefile b/udev-rbd/Makefile
new file mode 100644
index 0000000..065933b
--- /dev/null
+++ b/udev-rbd/Makefile
@@ -0,0 +1,21 @@
+PACKAGE=libpve-storage-perl
+
+DESTDIR=
+PREFIX=/usr
+LIBEXECDIR=${PREFIX}/libexec
+LIBDIR=${PREFIX}/lib
+
+all:
+
+.PHONY: install
+install: 50-rbd-pve.rules ceph-rbdnamer-pve
+	install -d ${DESTDIR}${LIBEXECDIR}
+	install -m 0755 ceph-rbdnamer-pve ${DESTDIR}${LIBEXECDIR}
+	install -d ${DESTDIR}${LIBDIR}/udev/rules.d
+	install -m 0644 50-rbd-pve.rules ${DESTDIR}${LIBDIR}/udev/rules.d
+
+.PHONY: clean
+clean:
+
+.PHONY: distclean
+distclean: clean
diff --git a/udev-rbd/ceph-rbdnamer-pve b/udev-rbd/ceph-rbdnamer-pve
new file mode 100755
index 0000000..23dd626
--- /dev/null
+++ b/udev-rbd/ceph-rbdnamer-pve
@@ -0,0 +1,24 @@
+#!/bin/sh
+
+DEV=$1
+NUM=`echo $DEV | sed 's#p.*##g; s#[a-z]##g'`
+POOL=`cat /sys/devices/rbd/$NUM/pool`
+CLUSTER_FSID=`cat /sys/devices/rbd/$NUM/cluster_fsid`
+
+if [ -f /sys/devices/rbd/$NUM/pool_ns ]; then
+	NAMESPACE=`cat /sys/devices/rbd/$NUM/pool_ns`
+else
+	NAMESPACE=""
+fi
+IMAGE=`cat /sys/devices/rbd/$NUM/name`
+SNAP=`cat /sys/devices/rbd/$NUM/current_snap`
+
+echo -n "/$CLUSTER_FSID/$POOL"
+
+if [ -n "$NAMESPACE" ]; then
+	echo -n "/$NAMESPACE"
+fi
+echo -n "/$IMAGE"
+if [ "$SNAP" != "-" ]; then
+	echo -n "@$SNAP"
+fi

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
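For reference, the name the script prints (which udev then turns into a
/dev/rbd-pve/<fsid>/... symlink via the SYMLINK+= rules) can be simulated
without a mapped device; the helper function and all values below are made-up
illustrations of the output format only:

```shell
#!/bin/sh
# Reproduces the ceph-rbdnamer-pve output format from given attributes,
# instead of reading them from /sys/devices/rbd/$NUM/*.
rbd_pve_name() {
    fsid=$1; pool=$2; ns=$3; image=$4; snap=$5
    printf '/%s/%s' "$fsid" "$pool"
    if [ -n "$ns" ]; then
        printf '/%s' "$ns"
    fi
    printf '/%s' "$image"
    # The kernel reports "-" in current_snap when no snapshot is mapped.
    if [ "$snap" != "-" ]; then
        printf '@%s' "$snap"
    fi
    echo
}
```

So a device in pool `rbd`, image `vm-100-disk-0` and no namespace or snapshot
would end up as `/dev/rbd-pve/<fsid>/rbd/vm-100-disk-0`.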