[ceph-users] Re: memory leak in mds?

2024-08-18 Thread Venky Shankar
[cc Xiubo] On Fri, Aug 16, 2024 at 8:10 PM Dario Graña wrote: > > Hi all, > We’re experiencing an issue with CephFS. I think we are facing this issue > . The main symptom is that the MDS > starts using a lot of memory within a few minutes and finally it gets
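A minimal set of diagnostics for this kind of MDS memory growth (a sketch only; the mds daemon name is a placeholder, and the daemon socket command assumes shell access on the MDS host):

ceph fs status
ceph tell mds.<name> heap stats             # tcmalloc heap usage of the MDS process
ceph daemon mds.<name> cache status         # on the MDS host: cache usage vs. the configured limit
ceph config get mds mds_cache_memory_limit  # target the MDS tries to stay under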

[ceph-users] Re: memory leak in mds?

2024-08-18 Thread Frédéric Nass
Hi Dario, A workaround may be to downgrade the client's kernel or ceph-fuse version to a version lower than those listed in Enrico's comment #22, I believe. Can't say for sure though, since I couldn't verify it myself. Cheers, Frédéric. From: Dario Graña Sent: Fri
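To compare the clients actually connected against the versions listed in that comment, something along these lines should do (sketch; the mds daemon name is a placeholder):

ceph tell mds.<name> session ls   # per-client metadata, including kernel_version / ceph_version
ceph-fuse --version               # on a FUSE client
uname -r                          # on a kernel client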

[ceph-users] Re: squid release codename

2024-08-18 Thread Alfredo Rezinovsky
Sorry for initiating this. On Sat, Aug 17, 2024 at 10:11, Anthony D'Atri () wrote: > > It's going to wreak havoc on search engines that can't tell when > > someone's looking up Ceph versus the long-established Squid Proxy. > > Search engines are way smarter than that, and I daresay that people

[ceph-users] Re: ceph device ls missing disks

2024-08-18 Thread Alfredo Rezinovsky
No, I don't monitor drives on non-CEPH nodes. All my non-CEPH nodes are disposable. Only CEPH has data I can't lose. On Thu, Aug 15, 2024 at 20:49, Anthony D'Atri () wrote: > Do you monitor OS drives on non-Ceph nodes? > > > > > On Aug 15, 2024, at 8:17 AM, Alfredo Rezinovsky > wrote: > >
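For reference, Ceph only tracks devices its own daemons sit on; a quick way to see what it currently knows (sketch; host and device id are placeholders):

ceph device monitoring on
ceph device ls
ceph device ls-by-host <hostname>
ceph device get-health-metrics <devid>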

[ceph-users] Re: The snaptrim queue of PGs has not decreased for several days.

2024-08-18 Thread Eugen Block
Can you share the current ceph status? Are the OSDs reporting anything suspicious? How is the disk utilization? Quoting Giovanna Ratini: More information: The snaptrim takes a lot of time but the objects_trimmed are "0":  "objects_trimmed": 0, "snaptrim_duration": 500.5807601752, I
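A sketch of the checks being asked for here (the pg id is a placeholder; iostat assumes the sysstat package on the OSD hosts):

ceph status
ceph osd df                                                          # per-OSD utilization
ceph pg <pgid> query | grep -E 'objects_trimmed|snaptrim_duration'   # per-PG trimming progress
iostat -x 2                                                          # on an OSD host: disk utilization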

[ceph-users] Data recovery after resharding mishap

2024-08-18 Thread Gauvain Pocentek
Hello list, We have made a mistake and dynamically resharded a bucket in a multi-site RGW setup running Quincy (support for this was only added in Reef). So we now have ~200 million objects still stored in the rados cluster, but completely removed from the bucket index (basically ceph has created
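The object data should still be addressable by the old bucket marker even with the index gone; a hedged sketch for confirming that before attempting any rebuild (pool and bucket names are placeholders, and whether the newer rgw-restore-bucket-index tooling is available for your Quincy release would need checking):

radosgw-admin metadata get bucket:<bucket-name>                  # note the bucket marker / bucket_id
radosgw-admin bucket stats --bucket <bucket-name>
rados -p <zone>.rgw.buckets.data ls | grep '^<marker>_' | head   # head objects are named <marker>_<object-name>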

[ceph-users] Bug with Cephadm module osd service preventing orchestrator start

2024-08-18 Thread benjaminmhuth
Hey there, so I went to upgrade my Ceph cluster from 18.2.2 to 18.2.4 and encountered a problem with my managers. After they had been upgraded, my ceph orch module broke because the cephadm module would not load. This obviously halted the upgrade because you can't really update without the orchestra
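When the cephadm mgr module refuses to load, these usually surface the underlying traceback (sketch; nothing here is specific to 18.2.4):

ceph health detail              # look for MGR_MODULE_ERROR
ceph mgr module ls              # shows the module state and its error string
ceph crash ls-new
ceph log last 100 debug cephadm # recent cephadm module log entries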

[ceph-users] The snaptrim queue of PGs has not decreased for several days.

2024-08-18 Thread Giovanna Ratini
Hello all, We use Ceph (v18.2.2) and Rook (1.14.3) as the CSI for a Kubernetes environment. Last week, we had a problem with the MDS falling behind on trimming every 4-5 days (GitHub issue link ). We resolved the issue using the steps outlined in the
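To see how far behind the snaptrim queue actually is and which throttles apply, something like this (sketch; defaults differ between HDD and SSD OSDs):

ceph pg ls snaptrim | head        # PGs actively trimming
ceph pg ls snaptrim_wait | wc -l  # rough count of PGs queued for trimming (includes a header line)
ceph config get osd osd_snap_trim_sleep
ceph config get osd osd_pg_max_concurrent_snap_trims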

[ceph-users] Re: weird outage of ceph

2024-08-18 Thread Anthony D'Atri
> > You may want to look into https://github.com/digitalocean/pgremapper to get > the situation under control first. > > -- > Alex Gorbachev > ISS Not a bad idea. >> We had a really weird outage of Ceph today and I wonder how it came about. >> The problem seems to have started around midnigh
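For context, the usual pgremapper workflow is roughly the following (a sketch based on the project README; it relies on pg-upmap, so clients must be luminous or newer):

ceph osd set norebalance
pgremapper cancel-backfill --yes   # pins PGs to their current OSDs via pg-upmap entries
ceph osd unset norebalance
# afterwards, remove the upmap entries gradually (or let the balancer do it) at a controlled pace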

[ceph-users] CephFS troubleshooting

2024-08-18 Thread Eugenio Tampieri
Hello, I'm writing to troubleshoot an otherwise functional Ceph Quincy cluster that has issues with CephFS. I cannot mount it with ceph-fuse (it gets stuck), and if I mount it with NFS I can list the directories but I cannot read or write anything. Here's the output of ceph -s cluster: id:
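A few checks that usually separate a capability problem from an MDS problem here (sketch; client name, fs name and mountpoint are placeholders):

ceph fs status
ceph health detail
ceph auth get client.<name>                 # mds caps need rw; osd caps need 'allow rw tag cephfs data=<fsname>'
ceph tell mds.<fsname>:0 client ls          # sessions, stuck requests, client versions
ceph-fuse -d -n client.<name> /mnt/cephfs   # foreground/debug mount to see where it hangs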

[ceph-users] Re: Identify laggy PGs

2024-08-18 Thread Boris
Good to know. Everything is BlueStore and usually 5 spinners share an SSD for block.db. Memory should not be a problem. We plan with 4GB / OSD with a minimum of 256GB memory. The primary affinity is a nice idea. I only thought about it in our S3 cluster, because the index is on SAS AND SATA SSDs a
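If you do try it, primary affinity is a runtime, reversible setting (sketch; the osd id and weight are placeholders):

ceph osd primary-affinity osd.<id> 0.5   # 0.0 = avoid being primary, 1.0 = default
ceph osd tree                            # the PRI-AFF column shows the current values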

[ceph-users] orch adoption and disk encryption without cephx?

2024-08-18 Thread Boris
Hi, I have some legacy clusters that I cannot move to cephx due to customer workload. (We have a plan to move everything iteratively to a new cluster, but that might still take a lot of time.) I would like to adopt the orchestrator and use the Ceph disk encryption feature. Is this possible without us
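Assuming the adoption itself goes through, encryption is expressed in the OSD service spec and only applies to OSDs created after the spec is in place; adopted OSDs keep their existing on-disk state. A hedged sketch (service id, placement and device filter are placeholders; whether any of this works with cephx disabled is exactly the open question):

cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: encrypted_hdds
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
encrypted: true
EOF
ceph orch apply -i osd-spec.yaml --dry-run   # review the preview before applying for real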