[ceph-users] Re: Changing failure domain

2020-01-06 Thread Francois Legrand
Thanks again for your answer. I still have a few questions before going on. It seems that some metadata remains on the original data pool, preventing its deletion (http://ceph.com/geen-categorie/ceph-pool-migration/ and https://www.spinics.net/lists/ceph-users/msg41374.html). Thus does do
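The behaviour described above is a known CephFS property: every file's backtrace is stored on the filesystem's first data pool, so that pool can never be fully emptied or detached. A hedged sketch of the layout-based migration the linked posts describe, with pool and path names as assumptions:

    # Create a new data pool and attach it to the filesystem
    ceph osd pool create cephfs_data_new 128
    ceph fs add_data_pool cephfs cephfs_data_new

    # Point a directory's layout at the new pool; only newly written
    # files land there, so existing files must be copied to move them
    setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/dir

    # Even after migration, backtrace metadata keeps the original
    # default data pool pinned, so removing it from the fs will fail
    ceph fs rm_data_pool cephfs cephfs_data_old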

[ceph-users] Re: [Ceph-users] Re: MDS failing under load with large cache sizes

2020-01-06 Thread Janek Bevendorff
Hi, my MDS failed again, but this time I cannot recover it by deleting the mds*_openfiles.0 object. The startup behaviour is also different. Both inode count and cache size stay at zero while the MDS is replaying. When I set the MDS log level to 7, I get tons of these messages: 2020-01-06 11:59:
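For context, the recovery step referenced above (from earlier messages in this thread) removes the MDS open-file-table objects from the metadata pool so that replay does not reload an oversized table. A minimal sketch, assuming the metadata pool is named cephfs_metadata and the failing MDS is rank 0:

    # With the MDS stopped, delete the rank-0 open file table object
    # (objects follow the mds<rank>_openfiles.<n> naming pattern)
    rados -p cephfs_metadata rm mds0_openfiles.0

    # Raise the MDS debug level to watch replay, as done in the post
    ceph tell mds.<id> config set debug_mds 7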

[ceph-users] Re: Balancing PGs across OSDs

2020-01-06 Thread Lars Täuber
Hi Konstantin, Mon, 23 Dec 2019 13:47:55 +0700 Konstantin Shalygin ==> Lars Täuber: > On 12/18/19 2:16 PM, Lars Täuber wrote: > > the situation after moving the PGs with osdmaptool is not really better > > than without: > > > > $ ceph osd df class hdd > > […] > > MIN/MAX VAR: 0.86/1.08 STDDEV
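The osdmaptool run being evaluated here is the offline upmap optimization; roughly, and with the pool name as an assumption, it looks like this:

    # Export the current osdmap and compute pg-upmap-items entries
    ceph osd getmap -o osdmap.bin
    osdmaptool osdmap.bin --upmap upmap.sh --upmap-pool cephfs_data

    # Review, then apply the generated commands to the cluster
    source upmap.sh

    # Compare per-OSD utilisation afterwards, as in the quoted output
    ceph osd df class hdd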

[ceph-users] Re: [Ceph-users] Re: MDS failing under load with large cache sizes

2020-01-06 Thread Janek Bevendorff
Update: turns out I just had to wait for an hour. The MDSs were sending beacons regularly, so the MONs didn't try to kill them and instead let them finish doing whatever they were doing. Unlike the other bug where the number of open files outgrows what the MDS can handle, this incident allowed "se
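The observation that regular beacons kept the MONs from failing the replaying MDS suggests giving the daemon explicit headroom. A hedged example (the value is illustrative, not from the thread):

    # Default mds_beacon_grace is 15s; a larger grace stops the MONs
    # from marking a busy, still-beaconing MDS laggy and replacing it
    ceph config set global mds_beacon_grace 600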

[ceph-users] RBD Mirroring down+unknown

2020-01-06 Thread miguel . castillo
Happy New Year Ceph Community! I'm in the process of figuring out RBD mirroring with Ceph and having a really tough time with it. I'm trying to set up just one-way mirroring right now on some test systems (bare-metal servers, all Debian 9). The first cluster is 3 nodes, and the 2nd cluster is 2
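For reference, one-way pool-mode mirroring is set up roughly as follows; the cluster names, pool name, and client id are assumptions, not details from the post:

    # On both clusters: enable pool-level mirroring for the image pool
    rbd mirror pool enable rbd pool

    # On the backup cluster: register the primary cluster as a peer
    rbd mirror pool peer add rbd client.rbd-mirror@primary

    # One-way replication needs rbd-mirror running only on the backup
    # cluster, with the primary's ceph.conf and keyring available there
    systemctl enable --now ceph-rbd-mirror@backup.service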

[ceph-users] Re: RBD Mirroring down+unknown

2020-01-06 Thread Jason Dillaman
On Mon, Jan 6, 2020 at 4:59 PM wrote: > > Happy New Year Ceph Community! > > I'm in the process of figuring out RBD mirroring with Ceph and having a > really tough time with it. I'm trying to set up just one way mirroring right > now on some test systems (baremetal servers, all Debian 9). The fi
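As background for the reply: a down+unknown mirror status generally means no rbd-mirror daemon is reporting health for that pool. The usual first check, with the pool name assumed, is:

    # Per-image mirroring state; "down+unknown" typically indicates
    # that rbd-mirror is not running or cannot reach the peer cluster
    rbd mirror pool status rbd --verbose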