[ceph-users] Re: MDS stuck in "up:replay"

2023-01-23 Thread Venky Shankar
On Tue, Jan 24, 2023 at 1:34 AM wrote: > > Hello Thomas, > > I have the same issue with the MDS that you describe, and the Ceph version is the same. > Did the up:replay state ever finish in your case? There is probably a lot going on with Thomas's cluster that is blocking the MDS from making progress. Could you uploa
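A minimal sketch (not from the thread) of the checks commonly used while an MDS sits in up:replay; the exact debug levels and data Venky asked to upload are truncated above, so the commands below are only illustrative and assume admin access to the cluster:

ceph fs status                            # confirm the MDS rank is still in up:replay
ceph config set mds debug_mds 10          # raise MDS debug logging (illustrative level)
ceph config set mds debug_journaler 10    # journal replay details land in the MDS log
ceph daemon mds.<id> perf dump            # run on the MDS host; <id> is a placeholder, mds_log counters show replay progress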

[ceph-users] Re: ceph cluster iops low

2023-01-23 Thread Mark Nelson
Hi Peter, I'm not quite sure if your cluster is fully backed by NVMe drives based on your description, but you might be interested in the CPU scaling article we posted last fall. It's available here: https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/ That gives a good overview of wh

[ceph-users] rbd_mirroring_delete_delay not removing images with snaps

2023-01-23 Thread Tyler Brekke
We use rbd-mirror as a way to migrate volumes between clusters. The process is: enable mirroring on the image to migrate, demote it on the primary cluster, promote it on the secondary cluster, and then disable mirroring on the image. When we started using `rbd_mirroring_delete_delay` so we could retai
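For context, a minimal sketch of the migration workflow described above, assuming journal-based mirroring and hypothetical pool/image names (volumes/vol1); the delete-delay value is just an example:

rbd mirror image enable volumes/vol1            # on the source cluster
rbd mirror image demote volumes/vol1            # on the source (primary) cluster
rbd mirror image promote volumes/vol1           # on the destination cluster, once fully synced
rbd mirror image disable volumes/vol1           # on the destination cluster after promotion
rbd config global set global rbd_mirroring_delete_delay 86400   # keep a deferred copy around after mirroring is disabled (seconds, example value)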

[ceph-users] ceph cluster iops low

2023-01-23 Thread petersun
My Ceph IOPS are very low with over 48 SSD OSDs backed by NVMe for DB/WAL across four physical servers. The whole cluster does only about 20K IOPS total. It looks like the IOs are being throttled by a bottleneck somewhere. Dstat shows a lot of context switches and over 150K interrupts while I am running an FIO 4K, 128-queue-depth benchmark. I c
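For reference, a hedged example of the kind of 4K / QD128 fio run described (the exact job file is not shown in the thread; pool and image names are placeholders):

# requires fio built with the rbd ioengine; testpool/testimg are hypothetical
fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin --pool=testpool --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 \
    --direct=1 --time_based --runtime=60 --group_reporting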

[ceph-users] Re: rbd-mirror ceph quincy Not able to find rbd_mirror_journal_max_fetch_bytes config in rbd mirror

2023-01-23 Thread ankit raikwar
Hello, I tried all of the options and it's not working; my replication network speed is still the same. Can you help me with any other way to increase replication performance?
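As a hedged starting point (not from the thread): the option names a given release actually recognizes can be listed with the generic config commands, which helps confirm whether rbd_mirror_journal_max_fetch_bytes still exists in Quincy:

ceph config ls | grep rbd_mirror                        # list recognized rbd_mirror* option names in this release
ceph config help rbd_mirror_journal_max_fetch_bytes     # prints the description, or errors out if the option was removed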

[ceph-users] Re: MDS stuck in "up:replay"

2023-01-23 Thread adjurdjevic
Hello Thomas, I have the same issue with the MDS that you describe, and the Ceph version is the same. Did the up:replay state ever finish in your case? Thx Aleksandar

[ceph-users] Re: Ceph Disk Prediction module issues

2023-01-23 Thread Nikhil Shah
Hey, did you ever find a resolution for this?

[ceph-users] Set async+rdma in Ceph cluster

2023-01-23 Thread Aristide Bekroundjo
Hi, I am trying to configure RDMA in a cluster of 6 nodes (3 MONs, 3 OSD nodes, and 10 OSDs on each OSD node). OS: CentOS Stream release 8. I've followed the steps below, but I got an error. [root@mon1 ~]# cephadm shell Inferring fsid 9414e1bc-9061-11ed-90fc-00163e4f92ad Using recent ceph image qu
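For reference, a minimal sketch of the messenger options usually involved in an async+rdma setup; the device name is hypothetical and this is not the exact configuration from the thread, whose error output is truncated above:

ceph config set global ms_type async+rdma                  # switch the messenger to RDMA
ceph config set global ms_cluster_type async+rdma          # alternative: use RDMA on the cluster network only
ceph config set global ms_async_rdma_device_name mlx5_0    # hypothetical RDMA device name

Note that with cephadm/containerized daemons the containers also need access to the RDMA devices and sufficient memlock limits, which is often where such setups fail.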

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Yuri Weinstein
Ilya, Venky: rbd, krbd, fs reruns are almost ready, pls review/approve. On Mon, Jan 23, 2023 at 2:30 AM Ilya Dryomov wrote: > > On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote: > > > > The overall progress on this release is looking much better and if we > > can approve it we can plan to pub

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Ilya Dryomov
On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dashboard - Erne

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Venky Shankar
Hey Yuri, On Fri, Jan 20, 2023 at 10:08 PM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dash

[ceph-users] Re: Pools and classes

2023-01-23 Thread Massimo Sgaravatto
Thanks a lot. Cheers, Massimo On Mon, Jan 23, 2023 at 9:55 AM Robert Sander wrote: > On 23.01.23 at 09:44, Massimo Sgaravatto wrote: > >> This triggered the remapping of some pgs and therefore some data > movement. > >> Is this normal/expected, since for the time being I have only hdd osds? >

[ceph-users] Re: trouble deploying custom config OSDs

2023-01-23 Thread Guillaume Abrioux
On Fri, 20 Jan 2023 at 13:12, seccentral wrote: > Hello, > Thank you for the valuable info, and especially for the slack link (it's > not listed on the community page) > The ceph-volume command was issued in the following manner: > login to my 1st vps from which I performed the bootstrap with cep

[ceph-users] Re: Pools and classes

2023-01-23 Thread Massimo Sgaravatto
Any feedback? I would just like to be sure that I am using the right procedure ... Thanks, Massimo On Fri, Jan 20, 2023 at 11:28 AM Massimo Sgaravatto <massimo.sgarava...@gmail.com> wrote: > Dear all > > I have a ceph cluster where so far all OSDs have been rotational hdd disks > (actually the

[ceph-users] Re: Pools and classes

2023-01-23 Thread Robert Sander
On 23.01.23 at 09:44, Massimo Sgaravatto wrote: > This triggered the remapping of some pgs and therefore some data movement. > Is this normal/expected, since for the time being I have only hdd osds? This is expected behaviour, as the cluster map has changed. Internally the device classes are rep
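A minimal sketch of the device-class procedure under discussion, with a hypothetical pool name (mypool); restricting a rule to the hdd class uses the per-class shadow hierarchy mentioned above and can therefore reshuffle some PGs even on an all-HDD cluster:

ceph osd crush rule create-replicated replicated_hdd default host hdd   # replicated rule restricted to class hdd
ceph osd pool set mypool crush_rule replicated_hdd                      # switch the pool to the new rule
ceph osd crush tree --show-shadow                                       # inspect the per-class shadow trees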