in being one 6 and the other 9 TB.
Any clue to that?
regards,
felix
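One common cause worth ruling out for a 6 TB versus 9 TB gap between two pools on the same OSDs is a different replica count (for example size 3 versus size 2), because the MAX AVAIL that ceph df reports is roughly the usable raw capacity divided by the pool's size; 18 TB of raw space would then show up as about 6 TB for one pool and 9 TB for the other. The snippet below is only a rough diagnostic sketch, not taken from this thread: it assumes the ceph CLI is reachable from Python and uses the JSON output of ceph df and ceph osd pool get to put each pool's replica size next to its reported MAX AVAIL.

#!/usr/bin/env python3
# Illustrative sketch, not from the original mails: compare each pool's
# replica size with the MAX AVAIL that "ceph df" reports.
import json
import subprocess

def ceph_json(*args):
    # Run a ceph CLI command and parse its JSON output.
    return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))

df = ceph_json("df")
for pool in df["pools"]:
    name = pool["name"]
    size = ceph_json("osd", "pool", "get", name, "size")["size"]
    max_avail_tb = pool["stats"]["max_avail"] / 1e12  # rough decimal TB
    # With replicated pools on the same OSDs, MAX AVAIL scales with 1/size,
    # so two pools can legitimately show different numbers.
    print(f"{name}: size={size} max_avail~{max_avail_tb:.1f} TB")

If both pools turn out to have the same size, the difference would have to come from elsewhere, for example different CRUSH rules or device classes.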
Sent: Tuesday, 05 April 2022 at 10:44
From: "Robert Sander"
To: ceph-users@ceph.io
Subject: [ceph-users] Re: loosing one node from a 3-node cluster
Hi,
On 05.04.22 at 02:53, Felix Joussein wrote:
>
Yes, each node has one monitor, manager and mds running.
regards,
Felix
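For what it's worth, with one monitor per node a 3-node cluster keeps monitor quorum with a single node down (2 of 3 mons), but not with two nodes down. Purely as an illustration, assuming the ceph CLI is available, the quorum state can be read from Python as in the sketch below; the quorum_status command and its quorum_names/monmap fields are standard Ceph, the rest is made up for this example.

#!/usr/bin/env python3
# Illustrative sketch, not from the thread: report how many monitors are in
# quorum versus how many the monmap knows about.  Note that the command only
# answers while a quorum exists at all.
import json
import subprocess

status = json.loads(subprocess.check_output(["ceph", "quorum_status", "--format", "json"]))
in_quorum = len(status["quorum_names"])
known = len(status["monmap"]["mons"])
print(f"{in_quorum} of {known} monitors in quorum")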
Sent: Tuesday, 05 April 2022 at 03:00
From: "Wolfpaw - Dale Corse"
To: "'Felix Joussein'", ceph-users@ceph.io
Subject: RE: [ceph-users] loosing one node from a 3-node cluster
Hi Everyone,
I have been running a 3-node Proxmox+Ceph cluster in my home lab, serving as RBD storage for virtual machines, for 2 years now.
When I installed it, I did some testing to ensure that if one node failed, the remaining two nodes would keep the system up while the third node was being replaced.
Rec
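Whether client I/O keeps flowing with one of the three nodes down depends largely on each pool's size and min_size: a replicated pool with size 3 and min_size 2 tolerates one host failure, while min_size equal to size means I/O pauses as soon as one replica is missing. The sketch below is only illustrative (ceph osd pool ls and ceph osd pool get are real commands, the flagging logic is an assumption of mine):

#!/usr/bin/env python3
# Illustrative sketch, not from the original post: list size/min_size per pool
# and flag pools that would block I/O with a single replica unavailable.
import json
import subprocess

def ceph_json(*args):
    return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))

for pool in ceph_json("osd", "pool", "ls"):
    size = ceph_json("osd", "pool", "get", pool, "size")["size"]
    min_size = ceph_json("osd", "pool", "get", pool, "min_size")["min_size"]
    note = "" if size - min_size >= 1 else "  <- would pause with one replica down"
    print(f"{pool}: size={size} min_size={min_size}{note}")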
Hi, after upgrading Ceph to 15.2.14 on a Proxmox 6.4.13 3-node cluster (Debian), I get a health warning:
Module 'volumes' has failed dependency: /lib/python3/dist-packages/cephfs.cpython-37m-x86_64-linux-gnu.so: undefined symbol: ceph_abort_conn
What am I missing?
I have already checked that pytho
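An undefined ceph_abort_conn symbol in the cephfs Python binding often points to the binding and the libcephfs library coming from different Ceph releases, for instance an older libcephfs2 left behind by the upgrade. The sketch below is only a starting point and not from this thread: it prints the installed Debian package versions (the package names are the usual Debian/Proxmox ones) and tries the import so that a mismatch becomes visible.

#!/usr/bin/env python3
# Illustrative sketch, not from the original mail: surface the versions of the
# packages involved in the "undefined symbol: ceph_abort_conn" failure.
import subprocess

def pkg_version(pkg):
    # Debian/Proxmox specific: read the installed version via dpkg-query.
    try:
        return subprocess.check_output(
            ["dpkg-query", "-W", "-f", "${Version}", pkg], text=True)
    except subprocess.CalledProcessError:
        return "not installed"

for pkg in ("python3-cephfs", "libcephfs2", "ceph-mgr", "ceph-common"):
    print(f"{pkg}: {pkg_version(pkg)}")

try:
    import cephfs  # fails with the same undefined-symbol error on a mismatch
    print("cephfs binding imports cleanly")
except ImportError as exc:
    print(f"cephfs binding failed to import: {exc}")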