[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
Hi Paul, I don't really have a good answer to your question, but maybe this approach can help track down the clients. Each MDS client has an average "uptime" metric stored in the MDS: storage01:~ # ceph tell mds.cephfs.storage04.uxkclk session ls ... "id": 409348719, ... "upt
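Eugen's suggestion can be scripted: dump the session list from the MDS and sort clients by their uptime so that recently connected clients surface first. A minimal sketch, assuming each session entry carries the "uptime" field Eugen mentions (the `fetch_sessions` helper and the canned sample data are illustrative, not from the thread):

```python
import json
import subprocess

def newest_sessions(sessions, n=5):
    """Return the n sessions with the smallest uptime (most recently connected)."""
    return sorted(sessions, key=lambda s: s.get("uptime", float("inf")))[:n]

def fetch_sessions(mds_name):
    """Run `ceph tell mds.<name> session ls` and parse its JSON output."""
    out = subprocess.check_output(
        ["ceph", "tell", f"mds.{mds_name}", "session", "ls"])
    return json.loads(out)

if __name__ == "__main__":
    # Canned data instead of a live cluster; on a real cluster you would use
    # fetch_sessions("cephfs.storage04.uxkclk") or similar.
    sample = [
        {"id": 409348719, "uptime": 86400.0},
        {"id": 409348720, "uptime": 120.5},
    ]
    for s in newest_sessions(sample):
        print(s["id"], s["uptime"])
```

Sorting ascending by uptime is just a convenience for spotting clients that (re)connected around the time the metadata growth started.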

[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
I just read your message again; you only mention newly created files, not new clients. So my suggestion probably won't help you in this case, but it might help others. :-) Quoting Eugen Block: Hi Paul, I don't really have a good answer to your question, but maybe this approach can hel

[ceph-users] Re: Multisite: metadata behind on shards

2024-05-13 Thread Christian Rohmann
On 13.05.24 5:26 AM, Szabo, Istvan (Agoda) wrote: Wonder what is the mechanism behind the sync mechanism because I need to restart all the gateways every 2 days on the remote sites to keep them in sync. (Octopus 15.2.7) We've also seen lots of those issues with stuck RGWs with earlier vers
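One way to watch for stuck shards without eyeballing the output every time is to parse `radosgw-admin sync status` periodically and alert when shards stay behind. A rough sketch; the exact output format varies by release, so the regex below is an assumption tuned to lines like `behind shards: [23,45]`:

```python
import re

def behind_shards(sync_status_text):
    """Collect shard IDs from any 'behind shards: [...]' lines in the output."""
    shards = set()
    for match in re.finditer(r"behind shards:\s*\[([0-9,\s]*)\]", sync_status_text):
        ids = match.group(1)
        shards.update(int(s) for s in ids.split(",") if s.strip())
    return sorted(shards)

if __name__ == "__main__":
    # Assumed sample of `radosgw-admin sync status` output, for illustration only.
    sample = """
      metadata sync syncing
                    full sync: 0/64 shards
                    incremental sync: 64/64 shards
                    metadata is behind on 2 shards
                    behind shards: [23,45]
    """
    print(behind_shards(sample))  # prints [23, 45]
```

Run from cron, this makes "how long has shard N been behind?" answerable, which helps decide whether a gateway restart is actually clearing anything.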

[ceph-users] cephfs-data-scan orphan objects while mds active?

2024-05-13 Thread Olli Rajala
Hi, I suspect that I have some orphan objects on a data pool after quite haphazardly evicting and removing a cache pool after deleting 17TB of files from cephfs. I have forward scrubbed the mds and the filesystem is in clean state. This is a production system and I'm curious if it would be safe t

[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-13 Thread vladimir franciz blando
Hi, If I follow the guide, it only says to define the mons on the ansible hosts file under the section [mons] which I did with this example (not real IPs) [mons] vlad-ceph1 monitor_address=192.168.1.1 ansible_user=ceph vlad-ceph2 monitor_address=192.168.1.2 ansible_user=ceph vlad-ceph3 monitor_ad
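For readability, the inventory fragment quoted above would normally be laid out one host per line. A sketch of the [mons] section as described (the hostnames and placeholder IPs are the ones from the message; the third host's address is truncated there, so it is left as a comment rather than guessed):

```ini
[mons]
vlad-ceph1 monitor_address=192.168.1.1 ansible_user=ceph
vlad-ceph2 monitor_address=192.168.1.2 ansible_user=ceph
# vlad-ceph3 follows the same pattern (its monitor_address is truncated in the quoted message)
```

Note that take-over-existing-cluster.yml may also expect other groups (e.g. [osds]) to be populated, depending on the ceph-ansible version.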

[ceph-users] Re: Upgrading Ceph Cluster OS

2024-05-13 Thread Götz Reinicke
Hi, > On 11.05.2024 at 15:54, Nima AbolhassanBeigi wrote: > > Hi, > > We want to upgrade the OS version of our production ceph cluster by > reinstalling the OS on the server. From which OS to which OS would you like to upgrade? What's your ceph version? Regards, Goetz

[ceph-users] Re: Upgrading Ceph Cluster OS

2024-05-13 Thread Michel Jouvin
Nima, Can you also specify the Ceph version you are using and whether your current configuration is cephadm-based? Michel On 13/05/2024 at 15:19, Götz Reinicke wrote: Hi, On 11.05.2024 at 15:54, Nima AbolhassanBeigi wrote: Hi, We want to upgrade the OS version of our production ceph

[ceph-users] Re: Ceph User + Community Meeting and Survey [May 23]

2024-05-13 Thread Laura Flores
Thanks to everyone who has already completed the survey. There is still time this week to get your voice heard in the upcoming User + Dev meeting if you haven't done so already! Take the survey here: https://docs.google.com/forms/d/e/1FAIpQLSet7HyqfREYCSYZxA1ggvBchDN7GZh1av4WG86MLbVK1gyhaw/viewfor

[ceph-users] Re: cephfs-data-scan orphan objects while mds active?

2024-05-13 Thread Gregory Farnum
The cephfs-data-scan tools are built with the expectation that they'll be run offline. Some portion of them could be run without damaging the live filesystem (NOT all, and I'd have to dig in to check which is which), but they will detect inconsistencies that don't really exist (due to updates that
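Gregory's point about offline operation is the key constraint: if you do decide to run the scan tools, the documented pattern is to take the filesystem down first so no active MDS is mutating metadata underneath them. A hedged outline only, not a full recovery procedure (substitute your own filesystem and pool names; which scan passes you need depends on what you are repairing):

```
# Take the filesystem offline so no MDS is active while the tools run
ceph fs fail <fs_name>

# Run the relevant cephfs-data-scan passes against the data pool, e.g.:
#   cephfs-data-scan scan_extents <data_pool>
#   cephfs-data-scan scan_inodes <data_pool>

# Bring the filesystem back once the scan is complete
ceph fs set <fs_name> joinable true
```

On a production system, testing this on a copy or waiting for a maintenance window is the safer path, given the false inconsistencies Gregory describes.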