[ceph-users] Re: loosing one node from a 3-node cluster

2022-04-05 Thread Felix Joussein
in being one 6 and the other 9 TB. Any clue to that? Regards, Felix   Sent: Tuesday, 05 April 2022 at 10:44 From: "Robert Sander" To: ceph-users@ceph.io Subject: [ceph-users] Re: loosing one node from a 3-node cluster Hi, on 05.04.22 at 02:53 Felix Joussein wrote: >

[ceph-users] Re: loosing one node from a 3-node cluster

2022-04-05 Thread Robert Sander
Hi, on 05.04.22 at 02:53 Felix Joussein wrote: As the command outputs below show, ceph-iso_metadata consumes 19TB according to ceph df; however, the mounted ceph-iso filesystem is only 9.2TB big. The values nearly add up. ceph-vm has 2.7 TiB stored and 8.3 TiB used (3x replication). cep
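Robert's point is that `ceph df` reports raw capacity consumed across all replicas, so with a 3x replicated pool the USED column is roughly three times STORED. A minimal sketch of that arithmetic, using the figures quoted in the thread (the helper name is illustrative, not a Ceph API):

```python
# Sketch: why `ceph df` USED can be ~3x what the mounted filesystem reports.
# With a replicated pool of size 3, every byte stored by clients occupies
# three bytes of raw cluster capacity.

def raw_used(stored_tib: float, replication: int = 3) -> float:
    """Approximate raw capacity consumed by a replicated pool."""
    return stored_tib * replication

# Figures quoted in the thread for the ceph-vm pool:
stored = 2.7                # TiB shown as STORED
used = raw_used(stored)     # ~8.1 TiB, close to the 8.3 TiB ceph df shows
print(f"{used:.1f} TiB raw used")
```

The small gap between the computed 8.1 TiB and the reported 8.3 TiB is expected: `ceph df` also accounts for metadata and allocation overhead on the OSDs.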

[ceph-users] Re: loosing one node from a 3-node cluster

2022-04-04 Thread Felix Joussein
Yes, each node has one monitor, manager and MDS running. Regards, Felix       Sent: Tuesday, 05 April 2022 at 03:00 From: "Wolfpaw - Dale Corse" To: "'Felix Joussein'" , ceph-users@ceph.io Subject: RE: [ceph-users] loosing one node from a 3-node cluster Hi Felix,   Where are yo

[ceph-users] Re: loosing one node from a 3-node cluster

2022-04-04 Thread Wolfpaw - Dale Corse
Hi Felix, Where are your monitors located? Do you have one on each node? Dale Corse CEO/CTO Cell: 780-504-1756 24/7 NOC: 888-965-3729 www.wolfpaw.com
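Dale's question matters because Ceph monitors form a Paxos quorum: with one monitor per node in a 3-node cluster, losing one node still leaves a majority (2 of 3), but losing two stalls the cluster. A hedged sketch of the majority arithmetic (the function is illustrative, not part of Ceph):

```python
# Sketch: a monitor quorum needs a strict majority of the configured monitors.

def has_quorum(total_mons: int, alive_mons: int) -> bool:
    """True if the surviving monitors still form a strict majority."""
    return alive_mons > total_mons // 2

# 3-node cluster, one monitor per node:
print(has_quorum(3, 2))  # one node down: quorum holds
print(has_quorum(3, 1))  # two nodes down: cluster blocks I/O
```

This is why a 3-monitor cluster tolerates exactly one failed node: the design trades availability under double failures for consistency.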