[pve-devel] [PATCH manager] ceph-after-pve-cluster: enable for ceph-volume, disable for ceph-disk

2022-07-18 Thread Aaron Lauterer
The ceph-disk service seems to have been removed with Octopus (v15),
and we did not yet have a corresponding drop-in for ceph-volume, which
could lead to startup issues in cases where the pve-cluster service did
not start fast enough.
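
For reference, the ceph-after-pve-cluster.conf drop-in installed by this
Makefile is presumably just a small systemd ordering override along
these lines (exact contents may differ):

    [Unit]
    After=pve-cluster.service

Placed in e.g. ${SERVICEDIR}/ceph-volume@.service.d/, it orders every
ceph-volume@ instance after pve-cluster.service (which provides
/etc/pve), so these units no longer race the cluster filesystem at boot.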

Signed-off-by: Aaron Lauterer 
---
 services/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/services/Makefile b/services/Makefile
index b46c7119..75809a37 100644
--- a/services/Makefile
+++ b/services/Makefile
@@ -25,8 +25,8 @@ install: ${SERVICES}
 	install -m 0644 ceph-after-pve-cluster.conf ${SERVICEDIR}/ceph-mgr@.service.d
 	install -d ${SERVICEDIR}/ceph-osd@.service.d
 	install -m 0644 ceph-after-pve-cluster.conf ${SERVICEDIR}/ceph-osd@.service.d
-	install -d ${SERVICEDIR}/ceph-disk@.service.d
-	install -m 0644 ceph-after-pve-cluster.conf ${SERVICEDIR}/ceph-disk@.service.d
+	install -d ${SERVICEDIR}/ceph-volume@.service.d
+	install -m 0644 ceph-after-pve-cluster.conf ${SERVICEDIR}/ceph-volume@.service.d
 	install -d ${SERVICEDIR}/ceph-mds@.service.d
 	install -m 0644 ceph-after-pve-cluster.conf ${SERVICEDIR}/ceph-mds@.service.d
 	install -d ${DESTDIR}/usr/share/doc/${PACKAGE}/examples/
-- 
2.30.2
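
A quick way to check that the new drop-in is actually picked up after
installation (the instance name below is made up; list the real ones
with systemctl list-units 'ceph-volume@*' --all):

    # example only, substitute a real ceph-volume@ instance
    systemctl cat 'ceph-volume@lvm-0-0b1c2d3e.service'
    # the output should now show the drop-in with After=pve-cluster.service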






Re: [pve-devel] [PATCH V2 manager 1/3] fix #2822: add iscsi, lvm, lvmthin & zfs storage for all cluster nodes

2022-07-18 Thread Fabian Ebner
On 18.07.22 16:33, Stefan Hrdlicka wrote:
> Hi Fabian,
> 
> if I remove the automatic node restriction, I would probably have to
> adjust something in the Perl code, because otherwise I get an error
> when I try to add new storages on other nodes, since it then checks
> whether the storage is actually available/exists. That is the case
> with ZFS or LVMThin, for example.
> 
> What do you think?
> 

Please always post such discussions about patches/development to the
developer list, so others can see it too ;) Also, please use
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

For non-German speakers (the quote above was originally in German):
Stefan is concerned that users will run into errors when the node is
not automatically restricted.

Well, it actually *is* an error if the storage is configured for all
nodes but not available on the current node (i.e. the one it's being
added from).

But you do have a point, of course: why would a user even change the
node to scan if the storage is already available on the current node?

I'd still rather not change the default restriction behavior (see
below). Maybe we should just auto-restrict if the user actively changed
the node to scan?
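
For context, such a restriction ends up as the 'nodes' property of the
storage entry in /etc/pve/storage.cfg, roughly like this (names and
properties are just an example):

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images
            nodes pve1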

> regards, Stefan
> 
> 
> On 6/28/22 12:33, Fabian Ebner wrote:
>> On 22.06.22 16:39, Stefan Hrdlicka wrote:
>>> This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage
>>> options where a cluster node needs to be chosen. By default the
>>> current node is selected. It restricts the storage to be available
>>> only on the selected node.
>>
>> I don't think we should change the default restriction, especially for
>> iSCSI and (shared) LVM, but also for local ones, as in many cases
>> cluster nodes will be set up with similar storage and the new default
>> might trip up some people.
>>
>>>

