[ceph-users] Re: problem with mgr prometheus module

2024-06-05 Thread Dario Graña
At the moment I've found that the mgr daemon works fine when I move it to an OSD node. All nodes have the same OS version, so I can conclude that the problem is limited to the nodes that normally run mgr. I'm still investigating what's happening, but at least I got the monitoring back. Regards. O
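
For anyone hitting something similar, a couple of quick checks might help (assuming the prometheus module's default port 9283; the hostname is a placeholder):

    ceph mgr stat                                 # which mgr is active?
    ceph mgr module ls | grep -A2 prometheus      # is the module enabled?
    curl -s http://mgr-host.example:9283/metrics | head   # does the exporter answer?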

[ceph-users] Re: Rebalance OSDs after adding disks?

2024-06-05 Thread tpDev Tester
Hi, On 30.05.2024 at 08:58, Robert Sander wrote: ... Please show the output of ceph ... Sorry for the PM. Short update: I started from scratch with a new cluster, Reef instead of Quincy, and this time I used an RBD instead of a filesystem for the first test, and the rebalancing took place as expected
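
As a side note, a rebalance after adding disks can be watched with a few standard commands (generic, not specific to this cluster):

    ceph -s               # recovery/backfill progress
    ceph osd df tree      # per-OSD utilization and PG counts
    ceph balancer status  # whether the balancer is on and in which mode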

[ceph-users] Re: CORS Problems

2024-06-05 Thread mailing-lists
Hi, so there is a problem alright! I do not have a separate nginx proxy in front of my rgw/ingress. As far as I understand, the "ingress" service is only keepalived and haproxy. I am not sure how to strip the query parameters from OPTIONS requests within those containers... I will probably just disable the redirec
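
An untested sketch of how the same query-stripping idea might look in haproxy terms (cephadm generates the ingress haproxy.cfg, so any such tweak would need to survive regeneration; METH_OPTIONS is one of haproxy's predefined ACLs):

    # Drop the query string from CORS preflight requests,
    # mirroring the nginx workaround posted in this thread
    http-request set-uri %[path] if METH_OPTIONS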

[ceph-users] Re: CORS Problems

2024-06-05 Thread Reid Guyett
Hi, There is a bug with preflight on PUT requests: https://tracker.ceph.com/issues/64308. We have worked around it by stripping the query parameters of OPTIONS requests to the RGWs. Nginx proxy config:

    if ($request_method = OPTIONS) {
        rewrite ^\/(.+)$ /$1? break;
    }

Regards, Reid On Wed, Jun
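
For context, a minimal sketch of where such a rewrite might sit in a larger proxy config (server name and upstream address are placeholders, not from Reid's setup); the trailing "?" in the rewrite is what drops the query string:

    server {
        listen 443 ssl;
        server_name rgw.example.com;              # placeholder
        location / {
            # Drop query parameters from CORS preflight requests
            # to work around https://tracker.ceph.com/issues/64308
            if ($request_method = OPTIONS) {
                rewrite ^\/(.+)$ /$1? break;      # trailing ? strips the query string
            }
            proxy_pass http://127.0.0.1:8080;     # placeholder RGW endpoint
        }
    }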

[ceph-users] Re: CORS Problems

2024-06-05 Thread mailing-lists
OK, sorry for the spam, apparently this hasn't been working for a month... Forget this mail. Sorry! On 05.06.24 17:41, mailing-lists wrote: Dear Cephers, I am facing a problem. I have updated our Ceph cluster from 17.2.3 to 17.2.7 last week and I've just gotten complaints about a website that is

[ceph-users] CORS Problems

2024-06-05 Thread mailing-lists
Dear Cephers, I am facing a problem. I have updated our Ceph cluster from 17.2.3 to 17.2.7 last week and I've just gotten complaints about a website that is not able to use S3 via CORS anymore (GET works, PUT does not). I am using cephadm and I have deployed 3 RGWs + 2 ingress services. The
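
One way to reproduce such a CORS failure from the command line is a manual preflight request (hostnames and bucket are placeholders):

    curl -i -X OPTIONS "https://rgw.example.com/mybucket/myobject" \
         -H "Origin: https://app.example.com" \
         -H "Access-Control-Request-Method: PUT"
    # a working setup should answer with Access-Control-Allow-* headers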

[ceph-users] Re: Error EINVAL: check-host failed - Failed to add host

2024-06-05 Thread Eugen Block
Can you paste the output of

    ls -l /var/lib/ceph/

on cephhost01? It says it can't write to that directory:

    Unable to write cephhost01:/var/lib/ceph/d5d1b7c6-232f-11ef-9ea1-a73759ab75e5/cephadm.2b9d7d139a9cb40289f2358faf49a109fc297c0a25

Which distro are you using? Quoting isnraj...@yahoo.
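
To inspect ownership and permissions along that path (the fsid is taken from the error above), something like this might help:

    ls -ld /var/lib/ceph
    ls -ld /var/lib/ceph/d5d1b7c6-232f-11ef-9ea1-a73759ab75e5
    stat -c '%U:%G %a %n' /var/lib/ceph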

[ceph-users] Re: Error EINVAL: check-host failed - Failed to add host

2024-06-05 Thread isnraju26
Yes, it looks like it's an issue related to SSH, but I'm not sure where the problem is or how to fix it.

    root@cephhost01:~# cephadm --verbose check-host --expect-hostname cephhost01
    cephadm ['--verbose', 'check-host', '--expect-host
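
If the orchestrator's SSH connection is the suspect, the cephadm troubleshooting docs describe reproducing it by hand, roughly like this (the temp file paths are arbitrary):

    ceph cephadm get-ssh-config > /tmp/cephadm_ssh_config
    ceph config-key get mgr/cephadm/ssh_identity_key > /tmp/cephadm_id_key
    chmod 0600 /tmp/cephadm_id_key
    ssh -F /tmp/cephadm_ssh_config -i /tmp/cephadm_id_key root@cephhost01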

[ceph-users] degraded objects when setting different CRUSH rule on a pool, why?

2024-06-05 Thread Stefan Kooman
Hi, TL;DR: Selecting a different CRUSH rule (stretch_rule, no device class) for pool SSD results in degraded objects (unexpected) and misplaced objects (expected). Why would Ceph drop up to two healthy copies? Consider this two-data-center cluster:

    ID  CLASS  WEIGHT  TYPE NAME  S
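
A minimal sketch of the kind of change described in the TL;DR (pool and rule names taken from there; the effect on degraded vs. misplaced counts is the open question):

    ceph osd pool set SSD crush_rule stretch_rule   # switch the pool's rule
    ceph -s                                         # degraded vs. misplaced counts
    ceph pg dump pgs_brief | head                   # per-PG up/acting sets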

[ceph-users] CephFS metadata pool size

2024-06-05 Thread Lars Köppel
Hello everyone, we have a cluster with 72 HDDs of 16 TB and 3 SSDs of 4 TB each, across 3 nodes. The 3 SSDs are used to store the metadata for the CephFS filesystem. After the update to 18.2.2, the size of the metadata pool went from around 2 TiB to over 3.5 TiB, filling up the OSDs. After a few days t
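
A few generic commands to track where such metadata growth sits (illustrative, not from Lars' report):

    ceph df detail               # per-pool STORED vs. USED
    ceph fs status               # metadata/data pool usage per filesystem
    ceph osd df tree class ssd   # fill level of the SSD (metadata) OSDs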

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-05 Thread Eugen Block
Do you have osd_scrub_auto_repair set to true? Quoting Petr Bena: Hello, I wanted to try out (in a lab Ceph setup) what exactly is going to happen when part of the data on an OSD disk gets corrupted. I created a simple test where I was going through the block device data until I found something
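
For reference, a small sketch of the settings and commands involved (a generic illustration, not Petr's exact test; the pgid is a placeholder):

    ceph config get osd osd_scrub_auto_repair        # defaults to false
    ceph config set osd osd_scrub_auto_repair true   # repair during (deep-)scrub automatically
    # or repair a specific inconsistent PG by hand:
    ceph health detail | grep inconsistent
    ceph pg repair <pgid>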

[ceph-users] Re: Error EINVAL: check-host failed - Failed to add host

2024-06-05 Thread Eugen Block
It sounded like you were pointing towards the SSH key as a possible root cause; you didn't mention that it worked on other clusters. Then you'll need to compare the settings between a working host and the failing one, and check the cephadm.log for more details. You could also execute the check-h