[ceph-users] Ceph Crash Module "RADOS permission denied"

2024-10-29 Thread mailing-lists
Hey Cephers, I was investigating some other issue when I stumbled across this. I am not sure if this is "as intended" or faulty. This is a cephadm cluster on reef 18.2.4, containerized with Docker. The ceph-crash module states that it can't find its key and that it can't access RADOS. Pre-
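
A hedged sketch of how to check whether the crash client key the container complains about exists in the cluster and matches what was deployed; the fsid placeholder and the keyring path are assumptions about a typical cephadm layout, not details from the post:

    # list crash-related auth entities known to the cluster
    ceph auth ls | grep -A3 client.crash

    # the per-host key cephadm is expected to have deployed
    ceph auth get client.crash.$(hostname)

    # compare with the keyring in the crash daemon's data directory (path assumed)
    cat /var/lib/ceph/<fsid>/crash.$(hostname)/keyring

    # if the entity is missing entirely, the documented crash profile caps are:
    ceph auth get-or-create client.crash.$(hostname) mon 'profile crash' mgr 'profile crash'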

[ceph-users] Re: Issue with Recovery Throughput Not Visible in Ceph Dashboard After Upgrade to 19.2.0 (Squid)

2024-10-21 Thread mailing-lists
Hey there, I have that problem too, although I got it from updating 17.2.7 to 18.2.4. After I read this mail I fiddled around a bit, and Prometheus does not have ceph_osd_recovery_ops. Then I looked into the files in /var/lib/ceph/xyz/prometheus.node-name/etc/prometheus/prometheus.yml
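
A rough way to confirm from the outside whether the metric is really gone and to have cephadm regenerate the Prometheus configuration; the host name is a placeholder and 9095 is only the usual cephadm default port:

    # ask Prometheus directly for the recovery metric
    curl -s 'http://prom-host:9095/api/v1/query?query=ceph_osd_recovery_ops'

    # check which monitoring daemons cephadm is running
    ceph orch ps | grep -E 'prometheus|ceph-exporter'

    # have cephadm rewrite prometheus.yml and restart the daemon
    ceph orch redeploy prometheus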

[ceph-users] WAL on NVMe/SSD not used after OSD/HDD replace

2024-09-27 Thread mailing-lists
Dear Ceph-users, I have a problem that I'd like your input on. Preface: I have a test cluster and a production cluster. Both are set up the same and both have the same "issue". I am running Ubuntu 22.04 and deployed ceph 17.2.3 via cephadm. Upgraded to 17.2.7 later on, which i
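
A hedged sketch of checks on the OSD host to see whether the replacement OSD actually got a WAL/DB on the fast device; the OSD id is a placeholder:

    # show how ceph-volume laid out the OSD (look for block.db / block.wal entries)
    cephadm ceph-volume lvm list 123

    # does the NVMe/SSD volume group still have free extents for a new WAL/DB LV?
    vgs -o vg_name,vg_size,vg_free
    lvs -o lv_name,vg_name,lv_size,devices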

[ceph-users] Re: Grafana dashboards is missing data

2024-09-04 Thread Frank de Bot (lists)
Hi Sake, Do you have the config mgr/cephadm/secure_monitoring_stack set to true? If so, this pull request will fix your problem: https://github.com/ceph/ceph/pull/58402 Regards, Frank Sake Ceph wrote: After the upgrade from 17.2.7 to 18.2.4 a lot of graphs are empty. For example the Osd laten
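
For reference, a sketch of how that setting can be inspected and, if need be, temporarily turned off until the fix is deployed; the option name comes from the mail above, the rest is generic ceph config usage:

    ceph config get mgr mgr/cephadm/secure_monitoring_stack

    # possible stop-gap until the patched release is installed
    ceph config set mgr mgr/cephadm/secure_monitoring_stack false
    ceph orch reconfig prometheus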

[ceph-users] ceph orchestrator upgrade quincy to reef, missing ceph-exporter

2024-08-02 Thread Frank de Bot (lists)
Hi, When upgrading a cephadm-deployed quincy cluster to reef, no ceph-exporter service gets launched. It is new in reef (from the release notes: ceph-exporter: Now the performance metrics for Ceph daemons are exported by ceph-exporter, which deploys on each daemon rather than using prom
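
A minimal sketch, assuming the service simply was not scheduled during the upgrade, of how to check for and add it afterwards:

    # any ceph-exporter daemons present after the upgrade?
    ceph orch ls ceph-exporter
    ceph orch ps | grep ceph-exporter

    # schedule ceph-exporter (reef normally places one per host)
    ceph orch apply ceph-exporter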

[ceph-users] wrong public_ip after blackout / poweroutage

2024-06-14 Thread mailing-lists
Dear Cephers, after a succession of unfortunate events, we suffered a complete datacenter blackout today. Ceph came back up _nearly_ perfectly. Health was OK and all services were online, but we were having weird problems. Weird as in, we could sometimes map RBDs and sometimes not,

[ceph-users] Re: CORS Problems

2024-06-05 Thread mailing-lists
com/issues/64308. We have worked around it by stripping the query parameters of OPTIONS requests to the RGWs. Nginx proxy config: if ($request_method = OPTIONS) {     rewrite ^\/(.+)$ /$1? break; } Regards, Reid On Wed, Jun 5, 2024 at 12:10 PM mailing-lists wrote: OK, sorry for spam,
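
A hedged way to verify such a workaround from the client side with a hand-crafted preflight request; origin, endpoint, bucket and query string are placeholders:

    curl -si -X OPTIONS \
        -H 'Origin: https://app.example.com' \
        -H 'Access-Control-Request-Method: PUT' \
        'https://rgw.example.com/mybucket/object?partNumber=1&uploadId=abc' \
      | grep -i 'access-control'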

[ceph-users] Re: CORS Problems

2024-06-05 Thread mailing-lists
OK, sorry for spam, apparently this hasn't been working for a month... Forget this mail. Sorry! On 05.06.24 17:41, mailing-lists wrote: Dear Cephers, I am facing a problem. I have updated our ceph cluster from 17.2.3 to 17.2.7 last week and I've just gotten complaints about a we

[ceph-users] CORS Problems

2024-06-05 Thread mailing-lists
Dear Cephers, I am facing a problem. I have updated our ceph cluster from 17.2.3 to 17.2.7 last week and I've just gotten complaints about a website that is no longer able to use S3 via CORS (GET works, PUT does not). I am using cephadm and I have deployed 3 RGWs + 2 ingress services. The
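
Not the poster's setup, but a sketch of how a bucket CORS rule that allows PUT can be applied and read back against RGW with the AWS CLI; endpoint, bucket and origin are placeholders:

    aws --endpoint-url https://rgw.example.com s3api put-bucket-cors \
        --bucket mybucket \
        --cors-configuration '{"CORSRules": [{"AllowedOrigins": ["https://app.example.com"],
                                              "AllowedMethods": ["GET", "PUT"],
                                              "AllowedHeaders": ["*"],
                                              "MaxAgeSeconds": 3000}]}'

    # confirm RGW stored the rule
    aws --endpoint-url https://rgw.example.com s3api get-bucket-cors --bucket mybucket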

[ceph-users] Timeout in Dashboard

2023-05-23 Thread mailing-lists
Hey all, I'm facing a "minor" problem. I do not always get results when going to the dashboard under Block->Images, in the Images or Namespaces tab. The little refresh button keeps spinning, and sometimes after several minutes it will finally show something. That is odd, because from the sh

[ceph-users] Re: Do not use SSDs with (small) SLC cache

2023-02-21 Thread mailing-lists
Dear Michael, unfortunately I don't have an explanation for your problem, but I was surprised that you experience a drop in performance that this SSD shouldn't have. Your SSD drives (Samsung 870 EVO) should not get slower on large writes. You can verify this in the post you've attached [1] o
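
A sketch of a sustained sequential-write fio run that would expose an SLC-cache fall-off if there were one; the device path is a placeholder, and this overwrites the device, so only run it against a scratch disk:

    fio --name=sustained-write --filename=/dev/sdX --rw=write --bs=1M \
        --size=200G --ioengine=libaio --iodepth=16 --direct=1 --group_reporting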

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-08 Thread mailing-lists
7cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b and block.wal on /dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c from there, check whether that device is indeed an LV on the NVMe device. Can you share the full output of lsblk? Than

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-01 Thread mailing-lists
OK, attachments won't work. See this: https://filebin.net/t0p7f1agx5h6bdje Best Ken On 01.02.23 17:22, mailing-lists wrote: I've pulled a few lines from the log and attached them to this mail. (I hope this works for this mailing list?) I found the line 135 [2023-01-26 16

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-01 Thread mailing-lists
recreation steps. Thanks, On Wed, 1 Feb 2023 at 10:13, mailing-lists wrote: Ah, nice. service_type: osd service_id: dashboard-admin-1661788934732 service_name: osd.dashboard-admin-1661788934732 placement:   host_pattern: '*' spec:   data_devices:   
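
The spec in this preview is cut off by the archive. As a hedged sketch only, combining the fields visible here with the wal_devices model quoted elsewhere in this thread, a drive group that pins the WAL to the NVMe cards might look roughly like this (the data_devices filter is an assumption):

    # osd-spec.yaml
    service_type: osd
    service_id: dashboard-admin-1661788934732
    placement:
      host_pattern: '*'
    spec:
      objectstore: bluestore
      data_devices:
        rotational: 1
      wal_devices:
        model: Dell Ent NVMe AGN MU AIC 6.4TB

    # preview what cephadm would do with it, then apply
    ceph orch apply -i osd-spec.yaml --dry-run
    ceph orch apply -i osd-spec.yaml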

[ceph-users] Re: Replacing OSD with containerized deployment

2023-02-01 Thread mailing-lists
bluestore   wal_devices:     model: Dell Ent NVMe AGN MU AIC 6.4TB status:   created: '2022-08-29T16:02:22.822027Z'   last_refresh: '2023-02-01T09:03:22.853860Z'   running: 306   size: 306 Best Ken On 31.01.23 23:51, Guillaume Abrioux wrote: On Tue, 31 Jan 2023 at 22:31, mai

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-31 Thread mailing-lists
? Did your db/wal device show as having free space prior to the OSD creation? On Tue, Jan 31, 2023, at 04:01, mailing-lists wrote: OK, the OSD is filled again. In and up, but it is not using the NVMe WAL/DB anymore. And it looks like the LVM group of the old OSD is still on the NVMe drive. I co

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-31 Thread mailing-lists
dashboard). Do you have a hint on how to fix this? Best Ken On 30.01.23 16:50, mailing-lists wrote: Oh wait, I might have been too impatient: 1/30/23 4:43:07 PM [INF] Deploying daemon osd.232 on ceph-a1-06 1/30/23 4:42:26 PM [INF] Found osd claims for drivegroup dashboard-admin

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-30 Thread mailing-lists
dashboard-admin-1661788934732 -> {'ceph-a1-06': ['232']} 1/30/23 4:39:34 PM [INF] Found osd claims -> {'ceph-a1-06': ['232']} 1/30/23 4:39:34 PM [INF] Found osd claims -> {'ceph-a1-06': ['232']} Although, it doesn't show the NVM

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-30 Thread mailing-lists
nderstand ramifications before running any commands. :) David On Mon, Jan 30, 2023, at 04:24, mailing-lists wrote: # ceph orch osd rm status No OSD remove/replace operations reported # ceph orch osd rm 232 --replace Unable to find OSDs: ['232'] It is not finding 232 anymore. It is
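
A few hedged checks that could show where OSD 232 went from the cluster's and the orchestrator's point of view; the OSD id and host name are taken from the thread, the commands are generic:

    # is the id still in the CRUSH map / OSD map at all?
    ceph osd tree | grep -w 232
    ceph osd metadata 232

    # what cephadm currently manages on the host that held it
    ceph orch ps ceph-a1-06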

[ceph-users] Re: Replacing OSD with containerized deployment

2023-01-30 Thread mailing-lists
filling to the other OSDs for the PGs that were on the failed disk? David On Fri, Jan 27, 2023, at 03:25, mailing-lists wrote: Dear Ceph-Users, I am struggling to replace a disk. My ceph cluster is not replacing the old OSD even though I did: ceph orch osd rm 232 --replace The OSD 232 is st

[ceph-users] Replacing OSD with containerized deployment

2023-01-29 Thread mailing-lists
Dear Ceph-Users, I am struggling to replace a disk. My ceph cluster is not replacing the old OSD even though I did: ceph orch osd rm 232 --replace The OSD 232 is still shown in the OSD list, but the new HDD will be placed as a new OSD. I wouldn't mind this much if the OSD was also placed o
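
Not necessarily where it went wrong here, but for comparison, a sketch of the replacement flow cephadm is normally expected to follow; host and device path are placeholders:

    # mark the OSD for replacement; it should show as "destroyed" but keep its id
    ceph orch osd rm 232 --replace
    ceph orch osd rm status

    # after swapping the disk, wipe the new one so the OSD spec can reuse the id
    ceph orch device zap ceph-a1-06 /dev/sdX --force
    ceph orch device ls ceph-a1-06 --refresh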

[ceph-users] Re: PG Ratio for EC overwrites Pool

2022-11-04 Thread mailing-lists
VERY least the number of OSDs it lives on, rounded up to the next power of 2. I'd probably go for at least (2x#OSD) rounded up. If you have too few, your metadata operations will contend with each other. On Nov 3, 2022, at 10:24, mailing-lists wrote: Dear Ceph'ers, I am wondering o

[ceph-users] PG Ratio for EC overwrites Pool

2022-11-03 Thread mailing-lists
Dear Ceph'ers, I am wondering how to choose the number of PGs for an RBD EC pool. To be able to use RBD images on an EC pool, you need a regular replicated RBD pool as well as an EC pool with EC overwrites enabled, but how many PGs would you need for the replicated RBD pool? It does
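
For context, a sketch of the usual two-pool setup the question refers to; pool names, the EC profile and the PG counts are placeholders, with the replicated pool kept small because it only holds RBD metadata:

    # small replicated pool for RBD metadata
    ceph osd pool create rbd-meta 128 128 replicated
    rbd pool init rbd-meta

    # EC data pool with overwrites enabled so RBD can write to it
    ceph osd erasure-code-profile set rbd-ec k=4 m=2
    ceph osd pool create rbd-data 1024 1024 erasure rbd-ec
    ceph osd pool set rbd-data allow_ec_overwrites true
    ceph osd pool application enable rbd-data rbd

    # images live in the replicated pool, their data goes to the EC pool
    rbd create --size 1T --data-pool rbd-data rbd-meta/test-image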

[ceph-users] PGImbalance

2022-09-26 Thread mailing-lists
Dear Ceph-Users, I've recently set up a 4.3 PB Ceph cluster with cephadm. I am seeing that the health is OK, as seen here: ceph -s   cluster:     id: 8038f0xxx     health: HEALTH_OK   services:     mon: 5 daemons, quorum ceph-a2-07,ceph-a1-01,ceph-a1-10,ceph-a2-01,ceph-a1-05 (age 3w)     mg
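
A few hedged commands for looking at the spread itself, independent of the health flag; nothing here is taken from the post beyond the cluster being cephadm-deployed:

    # per-OSD PG count and usage; VAR/STDDEV at the bottom summarise the imbalance
    ceph osd df tree

    # balancer state, and (optionally) switching it to upmap mode and enabling it
    ceph balancer status
    ceph balancer mode upmap
    ceph balancer on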

[ceph-users] Doing minor version update of Ceph cluster with ceph-ansible and rolling-update playbook

2020-09-28 Thread andreas . elvers+lists . ceph . io
I want to update my mimic cluster to the latest minor version using the rolling-update script of ceph-ansible. The cluster was rolled out with that setup. So as long as ceph_stable_release stays on the currently installed version (mimic), the rolling-update script will only do a minor update. I
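
For reference, a sketch of the usual invocation; the inventory file name is a placeholder and the confirmation variable is the one the stock playbook prompts for, so check it against the checked-out branch:

    # run from the ceph-ansible checkout matching the installed release branch
    ansible-playbook -vv -i hosts infrastructure-playbooks/rolling_update.yml -e ireallymeanit=yes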

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-11 Thread lists
Hi Joe and Mehmet! Thanks for your responses! The requested outputs are at the end of the message. But to make my question clearer: what we are actually after is not the CURRENT usage of our OSDs, but stats on the total GBs written in the cluster, per OSD, and the read/write ratio. With those num
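
Since the drives in question are SAS Seagates, a hedged starting point for lifetime totals is the drive's own counters rather than Ceph; the device name is a placeholder and the exact output depends on the firmware:

    # SAS drives report lifetime "Gigabytes processed" per direction in the error counter log
    smartctl -x /dev/sdX | grep -A6 'Error counter log'

    # the current read/write ratio can be sampled from the kernel block statistics
    iostat -d -k sdX 60 2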

[ceph-users] extract disk usage stats from running ceph cluster

2020-02-10 Thread lists
Hi, We would like to replace the current Seagate ST4000NM0034 HDDs in our ceph cluster with SSDs, and before doing that, we would like to check out the typical usage of our current drives over the last years, so we can select the best (price/performance/endurance) SSD to replace them with. I
