Hi Robert,
Thank you for the help. We had previously referred to the link:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
But we were not able to configure mon_dns_srv_name correctly.
We found the following link:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storag
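For anyone following along, this is roughly the shape of what we were attempting (the zone, hostnames, and port below are placeholders, not our real values):
```
# 1) Publish one SRV record per monitor in the DNS zone, e.g.:
#    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
#    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
#    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon3.example.com.
# 2) Drop the mon_host list from ceph.conf and keep only the service name
#    (mon_dns_srv_name defaults to "ceph-mon", so it only needs setting
#    if you use a different record name):
#    [global]
#    mon_dns_srv_name = ceph-mon
# 3) Check that the records actually resolve from a client node:
dig +short _ceph-mon._tcp.example.com SRV
```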
Thank you, Reed. I tried your solution but it didn't work; the warning emails
are still arriving. Two possible reasons:
1) I issued `ceph health mute OSD_SLOW_PING_TIME_BACK --sticky` while the
warning was not active, so it had no effect
2) according to this
(https://people.redhat.com/bhubbard/n
Well, I guess the mute is now active:
```
# ceph health detail
HEALTH_WARN 4 OSD(s) have spurious read errors; (muted: OSD_SLOW_PING_TIME_BACK
OSD_SLOW_PING_TIME_FRONT)
```
but I still get emails from the alert module reporting about
OSD_SLOW_PING_TIME_BACK/FRONT. Is this expected?
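For reference, here is a sketch of the commands involved (TTL omitted, so the mutes do not expire on their own):
```
# Mute both heartbeat warnings; --sticky keeps the mute in place even if
# the alert clears and later re-triggers.
ceph health mute OSD_SLOW_PING_TIME_BACK --sticky
ceph health mute OSD_SLOW_PING_TIME_FRONT --sticky

# Verify the mutes are listed
ceph health detail

# Remove a mute again if needed
ceph health unmute OSD_SLOW_PING_TIME_BACK
```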
_
Hi everyone,
I have a Ceph test cluster and a Proxmox test cluster (to try upgrades in test
before production).
My Ceph cluster is made up of three servers running Debian 11, with two
separate networks (cluster_network and public_network, in VLANs).
It runs Ceph version 16.2.10 (deployed with cephadm and Docker).
E
We are finally going to upgrade our Ceph from Nautilus to Octopus, before
looking at moving onward. We are still on Ubuntu 18.04, so once on Octopus, we
will then upgrade the OS to 20.04, ready for the next upgrade.
Unfortunately, we have already upgraded our rados gateways to Ubuntu 20.04,
la
>
> Question:
> What does the future hold with regard to cephadm vs rpm/deb packages? If it is
> now suggested to use cephadm and thus containers to deploy new clusters, what
> does the future hold? Is there an intent, at some time in the future, to no
> longer support rpm/deb packages for Linux sy
Odd; usually cephadm handles the permissions. You may have to work through
some `chown ceph.ceph -R /dev/sd{x}` commands.
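Something along these lines, for example (the device names are placeholders for your OSD data devices):
```
# Check current ownership of the OSD devices (placeholders; adjust to yours)
ls -l /dev/sdb /dev/sdc

# If they show root:root rather than ceph:ceph, fix the ownership
chown -R ceph:ceph /dev/sdb /dev/sdc
```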
You are correct: no OSDs means no CRUSH map.
You should review https://docs.ceph.com/en/quincy/rados/operations/crush-map/
You can check the CRUSH rules with `ceph osd crush dump`.
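For example (the output paths are just placeholders):
```
# List the CRUSH rules by name
ceph osd crush rule ls

# Dump buckets, rules, and device classes as JSON
ceph osd crush dump

# Or extract and decompile the map for a plain-text view
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
```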
I
> On 4 Feb 2023, at 00:03, Thomas Cannon wrote the following:
>
>
> Hello Ceph community.
>
> The company that recently hired me has a 3-node Ceph cluster that has been
> running and stable. I am the new lone administrator here, do not know Ceph,
> and this is my first experie
On Thu, Feb 2, 2023 at 7:56 PM Eugen Block wrote:
> Hi,
>
> > I have a cluster with approximately one billion objects and when I run a
> > PG query, it shows that I have 27,000 objects per PG.
>
> Which query is that? Can you provide more details about that cluster and
> pool?
>
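(For reference, per-PG object counts are usually read with something like the commands below; the pool name is a placeholder.)
```
# Pool-level totals (objects) and the PG count, pool name is a placeholder
ceph df detail
ceph osd pool get mypool pg_num

# Per-PG object counts for that pool
ceph pg ls-by-pool mypool
```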
Thanks for you