At the moment I've found that the mgr daemon works fine when I move it to
an OSD node. All nodes have the same OS version, so I can conclude that the
problem is limited to the nodes that normally run mgr. I'm still
investigating what's happening, but at least I got the monitoring back.
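For reference, a mgr daemon can be pinned to specific hosts via the orchestrator. A minimal sketch, assuming a cephadm-managed cluster and the placeholder hostnames osd-node-01 and osd-node-02:
# Restrict mgr placement to the named hosts (hostnames are placeholders)
ceph orch apply mgr --placement="osd-node-01 osd-node-02"
# Confirm where the mgr daemons ended up
ceph orch ps | grep mgr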
Regards.
O
Hi,
On 30.05.2024 at 08:58, Robert Sander wrote:
... Please show the output of ceph ...
sorry for the PM. Short update: I started from scratch with a new cluster,
Reef instead of Quincy, and this time I used an RBD instead of a filesystem
for the first test, and the rebalancing took place as expected.
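For anyone wanting to reproduce that quick RBD test, a minimal sketch, assuming a placeholder pool name rbdtest:
# Create a pool, initialise it for RBD and write some test data
ceph osd pool create rbdtest
rbd pool init rbdtest
rbd create rbdtest/testimg --size 10G
rbd bench --io-type write rbdtest/testimg --io-total 1G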
Hi,
so there is a problem, alright!
I do not have a separate nginx proxy in front of my rgw/ingress. As far
as I understand, the "ingress" service is only keepalived and haproxy. I
am not sure how to strip the query parameters from OPTIONS requests
within those containers...
I will probably just disable the redirec
Hi,
There is a bug with preflight on PUT requests:
https://tracker.ceph.com/issues/64308.
We have worked around it by stripping the query parameters of OPTIONS
requests to the RGWs.
Nginx proxy config:
# Strip the query string from OPTIONS (preflight) requests before
# they are proxied to the RGWs:
if ($request_method = OPTIONS) {
    rewrite ^\/(.+)$ /$1? break;
}
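To check whether the workaround is in effect, the preflight can be replayed by hand; a minimal sketch with placeholder endpoint, bucket and origin:
# Send a CORS preflight as the browser would for a multipart PUT
curl -si -X OPTIONS "https://rgw.example.com/mybucket/mykey?uploadId=test&partNumber=1" \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: PUT"
# With the rewrite in place the RGW should answer with the expected
# Access-Control-Allow-* headers instead of failing the preflight.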
Regards,
Reid
On Wed, Jun
OK, sorry for spam, apparently this hasn't been working for a month...
Forget this mail. Sorry!
On 05.06.24 17:41, mailing-lists wrote:
Dear Cephers,
I am facing a problem. I have updated our Ceph cluster from 17.2.3 to
17.2.7 last week and I've just gotten complaints about a website that is
not able to use S3 via CORS anymore (GET works, PUT does not).
I am using cephadm and I have deployed 3 RGWs + 2 ingress services.
The
Can you paste the output of:
ls -l /var/lib/ceph/
on cephhost01? It says it can't write to that directory:
Unable to write
cephhost01:/var/lib/ceph/d5d1b7c6-232f-11ef-9ea1-a73759ab75e5/cephadm.2b9d7d139a9cb40289f2358faf49a109fc297c0a25
Which distro are you using?
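A minimal sketch of further checks on that host (the fsid directory is a placeholder):
# Ownership and permissions of the cephadm data directory
ls -ld /var/lib/ceph /var/lib/ceph/<fsid>
# A read-only or full filesystem would also explain "Unable to write"
findmnt --target /var/lib/ceph
df -h /var/lib/ceph
# On SELinux-based distros, denials are another possible cause
getenforce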
Quoting isnraj...@yahoo.
Yes, it looks like it's an issue related to SSH, but I'm not sure where the
problem is or how to fix it.
root@cephhost01:~# cephadm --verbose check-host --expect-hostname cephhost01
cephadm ['--verbose', 'check-host', '--expect-host
Hi,
TL;DR:
Selecting a different CRUSH rule (stretch_rule, no device class) for
pool SSD results in degraded objects (unexpected) and misplaced objects
(expected). Why would Ceph drop up to two healthy copies?
Consider this two data center cluster:
ID CLASS WEIGHT TYPE NAME S
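For context, the rule switch itself was presumably applied with something like the following (pool and rule names as in the TL;DR):
# Assign the stretch_rule CRUSH rule to the pool and watch the result
ceph osd pool set SSD crush_rule stretch_rule
ceph -s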
Hello everyone,
we have a cluster with 72 HDDs of 16 TB each and 3 SSDs of 4 TB each across
3 nodes.
The 3 SSDs are used to store the metadata for the CephFS filesystem. After
the update to 18.2.2 the size of the metadata pool went from around 2 TiB
to over 3.5 TiB, filling up the OSDs.
After a few days t
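A minimal sketch of the commands useful for watching this (nothing here is specific to our cluster):
# Per-pool usage, including the CephFS metadata pool
ceph df detail
# Filesystem and MDS view of the pools
ceph fs status
# Fill level of the SSD OSDs backing the metadata pool
ceph osd df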
Do you have osd_scrub_auto_repair set to true?
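For reference, the current value can be checked (and changed) with:
ceph config get osd osd_scrub_auto_repair
# ceph config set osd osd_scrub_auto_repair false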
Quoting Petr Bena:
Hello,
I wanted to try out (in a lab Ceph setup) what exactly is going to happen
when parts of the data on an OSD disk get corrupted. I created a simple
test where I was going through the block device data until I found
something
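A minimal sketch of how the outcome of such a corruption can be inspected, assuming the affected PG id is known:
# Force a deep scrub of the PG that holds the corrupted object
ceph pg deep-scrub <pgid>
# Once the scrub has run, list what it found
ceph health detail
rados list-inconsistent-obj <pgid> --format=json-pretty
# Repair on request (this is what osd_scrub_auto_repair would do automatically)
ceph pg repair <pgid>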
It sounded like you were pointing towards the SSH key as a possible
root cause; you didn't mention that it worked on other clusters. Then
you'll need to compare the settings between a working host and the
failing one and check the cephadm.log for more details.
You could also execute the check-h
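A minimal sketch of such a comparison, assuming placeholder hostnames goodhost and badhost:
# Run the cephadm host check against both hosts
ceph cephadm check-host goodhost
ceph cephadm check-host badhost
# Inspect the SSH key and config the orchestrator actually uses
ceph cephadm get-pub-key
ceph cephadm get-ssh-config
# Compare how the hosts are registered
ceph orch host ls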