[ceph-users] MDS Performance and PG/PGP value

2022-10-05 Thread Yoann Moulin
Hello As previously described here, we have an all-flash NVMe Ceph cluster (16.2.6) with currently only the CephFS service configured. The current setup is 54 nodes with 1 NVMe each, 2 partitions per NVMe, 8 MDSs (7 active, 1 standby), MDS cache memory limit set to 128GB. It's a hyperconverged K8S
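A minimal sketch of how a cache limit of that size is typically set, assuming the stock mds_cache_memory_limit option (value in bytes):
  ceph config set mds mds_cache_memory_limit 137438953472   # 128 GiB expressed in bytes
  ceph config get mds mds_cache_memory_limit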

[ceph-users] rbd mirroring questions

2022-10-05 Thread John Ratliff
We're testing rbd mirroring so that we can replicate OpenStack volumes to a new Ceph cluster for use in a new OpenStack deployment. We currently have one-way mirroring enabled on two of our test clusters in pool mode. How can I disable replication on the new cluster for a particular image once I
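The thread does not confirm a fix, but the usual starting point for a single image looks like this (a sketch; pool and image names are placeholders, and in pool mode the journaling feature may also need to be disabled):
  rbd mirror image status <pool>/<image>         # check the current mirroring state
  rbd mirror image disable <pool>/<image>        # stop mirroring just this image
  rbd feature disable <pool>/<image> journaling  # pool-mode images mirror while journaling is enabled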

[ceph-users] Ceph Leadership Team Meeting Minutes - October 5, 2022

2022-10-05 Thread Neha Ojha
Hi everyone, Here are the topics discussed in today's meeting. - What changes with the announcement about IBM [1]? Nothing changes for the upstream Ceph community. There will be more focus on performance and scale testing. - 17.2.4 was released last week, no major issues reported yet. This releas

[ceph-users] Re: Trying to add NVMe CT1000P2SSD8

2022-10-05 Thread Murilo Morais
I've already tested the performance. Great performance, by the way, but this anomaly is occurring: the OSDs start in an Error state. I don't know how to debug this problem.
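A few generic first steps for such a case (a sketch, assuming a cephadm/systemd deployment; the OSD id is a placeholder):
  ceph orch ps --daemon-type osd       # cephadm's view of the daemon states
  cephadm logs --name osd.<id>         # unit logs of one OSD on its host
  journalctl -u ceph-osd@<id> -e       # for non-cephadm, plain systemd deployments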

[ceph-users] Re: Trying to add NVMe CT1000P2SSD8

2022-10-05 Thread Eneko Lacunza
Hi, This is a consumer SSD. Did you test its performance first? Better get a datacenter disk... Cheers On 5/10/22 at 17:53, Murilo Morais wrote: Nobody?
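A quick way to check suitability is a small single-threaded sync-write fio run, the pattern Ceph journals/DBs stress most (a sketch; the device path is a placeholder and writing to a raw device destroys its data):
  fio --name=synctest --filename=/dev/nvme0n1 --rw=write --bs=4k \
      --iodepth=1 --numjobs=1 --fsync=1 --direct=1 --runtime=60 --time_based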

[ceph-users] Re: cephfs-top doesn't work

2022-10-05 Thread Jos Collin
Yes, you need perf stats version 2 for the latest cephfs-top UI to work. On Wed, 5 Oct 2022 at 20:03, Vladimir Brik wrote: > It looks like my cluster is too old. I am getting "perf stats version mismatch!" > Vlad > On 10/5/22 08:37, Jos Collin wrote: > > This issue is fixed in https://
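One way to check what the cluster side provides (a sketch, assuming the mgr stats module is available on your release):
  ceph mgr module enable stats
  ceph fs perf stats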

[ceph-users] Re: Trying to add NVMe CT1000P2SSD8

2022-10-05 Thread Murilo Morais
Nobody?

[ceph-users] Re: 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread Redouane Kachach Elhichou
Glad it helped you to fix the issue. I'll open a tracker to fix the docs. On Wed, Oct 5, 2022 at 3:52 PM E Taka <0eta...@gmail.com> wrote: > Thanks, Redouane, that helped! The documentation should of course also be updated in this context. > On Wed., 5 Oct 2022 at 15:33, Redouane

[ceph-users] Re: cephfs-top doesn't work

2022-10-05 Thread Vladimir Brik
It looks like my cluster is too old. I am getting "perf stats version mismatch!" Vlad On 10/5/22 08:37, Jos Collin wrote: This issue is fixed in https://github.com/ceph/ceph/pull/48090. Could you please check it out and let me know? Thanks. On Tue

[ceph-users] Re: 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread E Taka
Thanks, Redouane, that helped! The documentation should of course also be updated in this context. On Wed., 5 Oct 2022 at 15:33, Redouane Kachach Elhichou <rkach...@redhat.com> wrote: > Hello, > As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are now stored per-n

[ceph-users] Re: cephfs-top doesn't work

2022-10-05 Thread Jos Collin
This issue is fixed in https://github.com/ceph/ceph/pull/48090. Could you please check it out and let me know? Thanks. On Tue, 19 Apr 2022 at 01:14, Vladimir Brik wrote: > Does anybody know why cephfs-top may only display header lines (date, client types, metric names) but no actual data?

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Nicola Mori
That's indeed the case: # ceph config get osd osd_op_queue mclock_scheduler Thank you very much for this tip, I'll play with mclock parameters. On 05/10/22 13:11, Janne Johansson wrote: # ceph tell osd.2 config get osd_max_backfills { "osd_max_backfills": "1000" } makes little sense to
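A sketch of the kind of mClock tuning meant here, using the built-in profiles rather than raw backfill counts (the profile name is one of the documented Quincy values; treat this as a starting point, not the thread's confirmed fix):
  ceph config set osd osd_mclock_profile high_recovery_ops
  ceph config get osd osd_mclock_profile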

[ceph-users] Re: 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread Redouane Kachach Elhichou
Hello, As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are now stored per-node. So instead of *mgr/cephadm/grafana_crt* they are now stored as: *mgr/cephadm/{hostname}/grafana_crt* *mgr/cephadm/{hostname}/grafana_key* In order to see the config entries that have been
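To inspect and (re)set the per-host entries, something along these lines should work (a sketch; the hostname and file paths are placeholders, and the grafana service may need a reconfig afterwards):
  ceph config-key ls | grep grafana
  ceph config-key set mgr/cephadm/<hostname>/grafana_crt -i /path/to/grafana.crt
  ceph config-key set mgr/cephadm/<hostname>/grafana_key -i /path/to/grafana.key
  ceph orch reconfig grafana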

[ceph-users] 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread E Taka
Hi, since the last update from 17.2.3 to version 17.2.4, the mgr/cephadm/grafana_crt setting is ignored. The output of ceph config-key get mgr/cephadm/grafana_crt ceph config-key get mgr/cephadm/grafana_key ceph dashboard get-grafana-frontend-api-url is correct. Grafana and the Dashboard are r

[ceph-users] Re: ceph on kubernetes

2022-10-05 Thread Nico Schottelius
Hey Oğuz, the typical recommendations for native Ceph still hold in k8s; additionally, a few things you need to consider: - Hyperconverged setup or dedicated nodes - What is your workload and budget - Similar to native Ceph, think about where you want to place data; this influences the selector

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Stefan Kooman
On 10/5/22 12:09, Nicola Mori wrote: Dear Ceph users, I am trying to tune my cluster's recovery and backfill. On the web I found that I can set related tunables by e.g.: ceph tell osd.* injectargs --osd-recovery-sleep-hdd=0.0 --osd-max-backfills=8 --osd-recovery-max-active=8 --osd-recovery-

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-10-05 Thread Anh Phan Tuan
It seems the 17.2.4 release has fixed this. ceph-volume: fix fast device alloc size on multiple device (pr#47293, Arthur Outhenin-Chalandre) Bug #56031: batch compute a lower size than what it should be for blockdb with multiple fast device - ceph-volume - Ceph
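To see how batch will carve up the fast devices before committing, the dry-run report is handy (a sketch; device paths are placeholders):
  ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1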

[ceph-users] ceph on kubernetes

2022-10-05 Thread Oğuz Yarımtepe
Hi, I am using Ceph on RKE2. The Rook operator is installed on an RKE2 cluster running on Azure VMs. I would like to learn whether there are best practices for Ceph on Kubernetes, like separating Ceph nodes or pools, or using some custom settings for the Kubernetes environment. Will be great if anyone share

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Wout van Heeswijk
Hi Nicola, Maybe 'config diff' can be of use to you: ceph tell osd.2 config diff It should tell you every value that is not 'default' and where the value(s) came from (File, mon, override). Wout -Original message- From: Nicola Mori Sent: Wednesday, 5 October 2022 12:33 To:

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Janne Johansson
> # ceph tell osd.2 config get osd_max_backfills { "osd_max_backfills": "1000" } makes little sense to me. This means you have the mClock IO scheduler, and it gives back this value since you are meant to change the mClock priorities and not the number of backfills. Some more info

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Nicola Mori
But how can I check if the applied temporary value has been correctly set? Maybe I'm doing something wrong, but this: # ceph tell osd.2 config set osd_max_backfills 8 { "success": "osd_max_backfills = '8' " } # ceph tell osd.2 config get osd_max_backfills { "osd_max_backfills": "1000" }

[ceph-users] Re: ceph tell setting ignored?

2022-10-05 Thread Anthony D'Atri
Injection modifies the running state of the specified daemons. It does not modify the central config database (saved / persistent state). Injected values will go away when the daemon restarts. > On Oct 5, 2022, at 6:10 AM, Nicola Mori wrote: > Dear Ceph users, > I am trying to tune my
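A sketch of the distinction in commands (the first variant only affects the running daemon and is lost on restart, the second is persisted in the mon's central config database):
  ceph tell osd.2 config set osd_max_backfills 8   # runtime only
  ceph config set osd.2 osd_max_backfills 8        # stored centrally, survives restarts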

[ceph-users] ceph tell setting ignored?

2022-10-05 Thread Nicola Mori
Dear Ceph users, I am trying to tune my cluster's recovery and backfill. On the web I found that I can set related tunables by e.g.: ceph tell osd.* injectargs --osd-recovery-sleep-hdd=0.0 --osd-max-backfills=8 --osd-recovery-max-active=8 --osd-recovery-max-single-start=4 but I cannot find