[ceph-users] Re: bunch of " received unsolicited reservation grant from osd" messages in log

2021-12-17 Thread Kenneth Waegeman
Hi all, I'm also seeing these messages spamming the logs after updating from Octopus to Pacific 16.2.7. Any clue yet what this means? Thanks!! Kenneth On 29/10/2021 22:21, Alexander Y. Fomichev wrote: Hello. After upgrading to 'pacific' I found the log spammed by messages like this: ... active+c

[ceph-users] Re: v15.2.10 Octopus released

2021-03-18 Thread Kenneth Waegeman
Hi, The 15.2.10 image is not yet on Docker Hub? (An hour ago the *14*.2.10 image was pushed.) Thanks! Kenneth On 18/03/2021 15:10, David Galloway wrote: We're happy to announce the 10th backport release in the Octopus series. We recommend users to update to this release. For a detailed release notes

[ceph-users] Re: Container deployment - Ceph-volume activation

2021-03-12 Thread Kenneth Waegeman
Hi, The osd activate command will probably be nice in the future, but for now I'm doing it like this: ceph-volume activate --all, followed by: for id in `ls -1 /var/lib/ceph/osd`; do echo cephadm adopt --style legacy --name ${id/ceph-/osd.}; done. It's not ideal because you still need the ceph rpms installed and s
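A minimal sketch of that workaround, assuming the OSDs were originally created with ceph-volume and their LVM metadata survived the reinstall (the echo in the original post only prints the adopt commands as a dry run; drop it once they look right):

    # bring up all ceph-volume (LVM) OSDs found on this node
    ceph-volume lvm activate --all

    # hand each locally activated OSD over to cephadm management
    for id in $(ls -1 /var/lib/ceph/osd); do
        cephadm adopt --style legacy --name "${id/ceph-/osd.}"
    done

Later releases also gained a `ceph cephadm osd activate <host>` command, which is presumably the "osd activate" the post alludes to.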

[ceph-users] Re: ceph version of new daemons deployed with orchestrator

2021-02-26 Thread Kenneth Waegeman
- add the host as usual; the orchestrator will use the configured v15 image, which on the new host corresponds to v15.2.6. Hope it helps. Best, Tobi. On 26.02.2021 at 11:16, Kenneth Waegeman wrote: Hi all, I am running a cluster managed by orchestrator/cephadm. I installed a new host for OSDs yesterday, th
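For reference, a hedged sketch of how one might check and pin the container image the orchestrator uses for newly deployed daemons, so a new OSD host comes up on the same version as the rest of the cluster (the image tag below is illustrative):

    # see which image cephadm is currently configured to deploy
    ceph config dump | grep container_image

    # pin an explicit image so newly created daemons match the cluster
    ceph config set global container_image quay.io/ceph/ceph:v15.2.9

    # or move the whole cluster to a specific version in one go
    ceph orch upgrade start --ceph-version 15.2.9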

[ceph-users] ceph version of new daemons deployed with orchestrator

2021-02-26 Thread Kenneth Waegeman
Hi all, I am running a cluster managed by orchestrator/cephadm. I installed a new host for OSDs yesterday, and the OSD daemons were automatically created using drivegroups service specs (https://docs.ceph.com/en/latest/cephadm/drivegroups/#drivegroups
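For context, a rough sketch of the kind of drivegroup/OSD service spec being described here, applied through the orchestrator; the service_id and device filter are illustrative, and the exact YAML layout can differ between releases:

    # a simple OSD spec that claims every available drive on matching hosts
    cat > osd-spec.yml <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: '*'
    data_devices:
      all: true
    EOF

    # hand the spec to the orchestrator; newly added hosts matching the
    # placement get their OSDs created automatically
    ceph orch apply osd -i osd-spec.yml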

[ceph-users] Re: reinstalling node with orchestrator/cephadm

2021-02-12 Thread Kenneth Waegeman
On 08/02/2021 16:52, Kenneth Waegeman wrote: Hi Eugen, all, Thanks for sharing your results! Since we have multiple clusters, and clusters with 500+ OSDs, this solution is not feasible for us. In the meantime I created an issue for this: https://tracker.ceph.com/issues/49159 Hi all, For

[ceph-users] Re: reinstalling node with orchestrator/cephadm

2021-02-08 Thread Kenneth Waegeman
Maybe someone is working on it though. Regards, Eugen Quoting Kenneth Waegeman: Hi all, I'm running a 15.2.8 cluster using ceph orch with all daemons adopted to cephadm. I tried reinstalling an OSD node. Is there a way to make ceph orch/cephadm activate the devices on this node again

[ceph-users] reinstalling node with orchestrator/cephadm

2021-02-03 Thread Kenneth Waegeman
Hi all, I'm running a 15.2.8 cluster using ceph orch with all daemons adopted to cephadm. I tried reinstalling an OSD node. Is there a way to make ceph orch/cephadm activate the devices on this node again, ideally automatically? I tried running `cephadm ceph-volume -- lvm activate --all` but t

[ceph-users] Re: cephfs change/migrate default data pool

2020-05-08 Thread Kenneth Waegeman
On Wed, Apr 29, 2020 at 5:56 AM Kenneth Waegeman wrote: I read in some release notes that it is recommended to have your default data pool replicated and to use erasure-coded pools as additional pools through layouts. We still have a cephfs with about 1 PB of usage and an EC default pool. Is there a way to change t

[ceph-users] Re: cephfs change/migrate default data pool

2020-05-07 Thread Kenneth Waegeman
Does someone have an idea/experience whether this is possible? :) On 29/04/2020 14:56, Kenneth Waegeman wrote: Hi all, I read in some release notes that it is recommended to have your default data pool replicated and to use erasure-coded pools as additional pools through layouts. We still have a cephfs with

[ceph-users] cephfs change/migrate default data pool

2020-04-29 Thread Kenneth Waegeman
Hi all, I read in some release notes that it is recommended to have your default data pool replicated and to use erasure-coded pools as additional pools through layouts. We still have a cephfs with about 1 PB of usage and an EC default pool. Is there a way to change the default pool or some other kind of mig
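For reference, a sketch of the layout mechanism those release notes point at: the erasure-coded pool is attached as an additional data pool and directories are pointed at it, while the (ideally replicated) default pool keeps holding the backtrace metadata. Pool, filesystem, and path names here are illustrative:

    # EC pools need overwrites enabled before CephFS can write to them
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true

    # attach the EC pool as an additional data pool of the file system
    ceph fs add_data_pool cephfs cephfs_data_ec

    # direct new files under this directory to the EC pool via a file layout
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive

Swapping out the default data pool of an existing file system is a different matter; as far as I know it generally means creating a new file system (or copying the data), which is essentially what this thread is asking about.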

[ceph-users] Re: regularly 'no space left on device' when deleting on cephfs

2019-09-11 Thread Kenneth Waegeman
On 11/09/2019 04:14, Yan, Zheng wrote: On Wed, Sep 11, 2019 at 6:51 AM Kenneth Waegeman wrote: We sync the file system without preserving hard links. But we take snapshots after each sync, so I guess deleted files which are still referenced by snapshots can also end up in the stray directories? [root

[ceph-users] Re: regularly 'no space left on device' when deleting on cephfs

2019-09-10 Thread Kenneth Waegeman
We sync the file system without preserving hard links. But we take snapshots after each sync, so I guess deleted files which are still referenced by snapshots can also end up in the stray directories? [root@mds02 ~]# ceph daemon mds.mds02 perf dump | grep -i 'stray\|purge'     "finisher-PurgeQueue": {
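A hedged sketch of how one might inspect stray and purge activity on the MDS in this situation; the daemon name mds.mds02 comes from the post, while the config change at the end is an illustrative mitigation for ENOSPC on deletes rather than something the thread confirms:

    # number of entries currently parked in the MDS stray directories
    ceph daemon mds.mds02 perf dump mds_cache | grep num_strays

    # purge queue progress (items enqueued vs. actually purged)
    ceph daemon mds.mds02 perf dump purge_queue

    # if deletes return ENOSPC because a stray dirfrag hits its size limit,
    # the limit (default 100000) can be raised; change with care
    ceph config set mds mds_bal_fragment_size_max 200000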