[ceph-users] About ceph osd slow ops

2023-11-30 Thread VÔ VI
Hi community, my cluster is running with 10 nodes and 2 nodes went down; sometimes the log shows slow ops, what is the root cause? My OSDs are HDD and the block.db and WAL are on a 500GB SSD per OSD. Health check update: 13 slow ops, oldest one blocked for 167 sec, osd.10 has slow ops (SLOW_OPS) Thanks to
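A minimal sketch of how the flagged OSD could be inspected, assuming standard Ceph tooling and access to the admin socket on the host running osd.10 (the OSD id is taken from the health message above):

    # show the full health message, including which OSDs report slow ops
    ceph health detail

    # on the host running osd.10, dump the operations currently in flight
    # and the slow operations the daemon has recorded recently
    ceph daemon osd.10 dump_ops_in_flight
    ceph daemon osd.10 dump_historic_slow_ops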

[ceph-users] Re: Space reclaim doesn't happening in nautilus RBD pool

2023-11-30 Thread Szabo, Istvan (Agoda)
Trash empty. Istvan Szabo Staff Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com --- From: Ilya Dryomov
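For context, a hedged sketch of how the RBD trash mentioned above can be checked; the pool name is a placeholder:

    # list images that were moved to the trash instead of being deleted outright
    rbd trash ls --pool <pool>

    # permanently remove trashed images so their space can be reclaimed
    rbd trash purge <pool>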

[ceph-users] Ceph/daemon container lvm tools don’t work

2023-11-30 Thread Gaël THEROND
Is there anyone already using containerized Ceph on CentOS Stream 9 hosts? I think there is a pretty big issue here if Ceph images are built on CentOS but never tested against it.

[ceph-users] Re: Recommended architecture

2023-11-30 Thread Anthony D'Atri
I try to address these ideas in https://www.amazon.com/Learning-Ceph-scalable-reliable-solution-ebook/dp/B01NBP2D9I though as with any tech topic the details change over time. It's difficult to interpret the table the OP included, but I think it shows a 3 node cluster. When you only have 3 nod

[ceph-users] Re: Recommended architecture

2023-11-30 Thread Janne Johansson
On Thu, 30 Nov 2023 at 17:35, Francisco Arencibia Quesada < arencibia.franci...@gmail.com> wrote: > Hello again guys, > > Can you recommend a book that explains best practices with Ceph, > for example is it okay to have mon, mgr, osd in the same virtual machine, > OSDs can need very much RAM d

[ceph-users] Re: Public/private network

2023-11-30 Thread John Jasen
cluster_network is an optional add-on to handle some of the internal ceph traffic. Your mon address needs to be accessible/routable for anything outside your ceph cluster that wants to consume it. That should also be in your public_network range. I stumbled over this a few times in figuring out ho
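A minimal sketch of the two-network layout being described, assuming a cephadm-managed cluster; the subnets here are illustrative placeholders:

    # the network on which clients and MONs are reached (must be routable to consumers)
    ceph config set mon public_network 10.1.0.0/24

    # optional second network carrying only OSD replication/recovery traffic
    ceph config set global cluster_network 192.168.100.0/24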

[ceph-users] Public/private network

2023-11-30 Thread Albert Shih
Hi everyone. Status: installing a Ceph cluster. Version: 17.2.7 Quincy. OS: Debian 11. Each of my servers has two IP addresses, one public and one private. When I try to deploy my cluster on a server, server1 (the hostname), with cephadm bootstrap --mon-id hostname --mon-ip IP_P
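A hedged sketch of a bootstrap along these lines, assuming the MON should listen on the server's public address; the IP and subnets are placeholders, not taken from the thread:

    # bootstrap the first host, binding the initial MON to its public IP
    cephadm bootstrap --mon-ip 10.1.0.11

    # afterwards, tell Ceph which subnet is public and which carries replication traffic
    ceph config set mon public_network 10.1.0.0/24
    ceph config set global cluster_network 192.168.100.0/24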

[ceph-users] Recommended architecture

2023-11-30 Thread Francisco Arencibia Quesada
Hello again guys, Can you recommend a book that explains best practices with Ceph? For example, is it okay to have mon, mgr, osd in the same virtual machine? What is the recommended architecture according to your experience? Because by default it is doing this: Cluster Ceph | +

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-30 Thread Yuri Weinstein
The fs PRs: https://github.com/ceph/ceph/pull/54407 https://github.com/ceph/ceph/pull/54677 were approved/tested and ready for merge. What is the status/plan for https://tracker.ceph.com/issues/63618? On Wed, Nov 29, 2023 at 10:51 AM Igor Fedotov wrote: > > https://tracker.ceph.com/issues/63618

[ceph-users] Re: MDS_DAMAGE in 17.2.7 / Cannot delete affected files

2023-11-30 Thread Sebastian Knust
Hi Patrick, On 30.11.23 03:58, Patrick Donnelly wrote: I've not yet fully reviewed the logs but it seems there is a bug in the detection logic which causes a spurious abort. This does not appear to be actually new damage. We are accessing the metadata (read-only) daily. The issue only popped
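For reference, a hedged sketch of how MDS damage entries can be listed and, once understood to be spurious or repaired, cleared; the filesystem name, rank and damage id are placeholders:

    # list the entries in the MDS damage table
    ceph tell mds.<fsname>:0 damage ls

    # remove a single entry by its id after the underlying cause has been addressed
    ceph tell mds.<fsname>:0 damage rm <damage_id>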

[ceph-users] rook-ceph RAW USE / DATA size difference reported after osd resize operation

2023-11-30 Thread merp
Hi, I am set to resize OSDs in a Ceph cluster to extend overall cluster capacity by adding 40GB to each disk, and I noticed that after the disk resize and OSD restart the RAW USE size grows proportionally to the new size, e.g. by 20GB, while DATA remains the same, which makes the new space not readily available.
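A hedged sketch of how per-OSD sizes can be checked and a BlueStore OSD told to grow into an enlarged device, assuming direct access to the OSD's data directory; the path and id are placeholders, and in a Rook deployment this would run inside the OSD pod:

    # compare SIZE, RAW USE and DATA per OSD
    ceph osd df

    # with the OSD stopped, expand BlueFS into the newly added device space
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<id>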

[ceph-users] Re: error deploying ceph

2023-11-30 Thread Adam King
That message in the `ceph orch device ls` output is just why the device is unavailable for an OSD. The reason it now has insufficient space in this case is because you've already put an OSD on it, so it's really just telling you that you can't place another one. So you can expect to see something like tha
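A hedged illustration of the checks being described, assuming a cephadm/orchestrator deployment:

    # list devices and whether they are available for new OSDs,
    # including the reject reason quoted in the thread
    ceph orch device ls

    # confirm the OSD already created on that device is up and placed as expected
    ceph osd tree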

[ceph-users] Re: error deploying ceph

2023-11-30 Thread Francisco Arencibia Quesada
Thanks again guys, The cluster is healthy now, is this normal? All looks good except for this output: *Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected * root@node1-ceph:~# cephadm shell -- ceph status Inferring fsid 209a7bf0-8f6d-11ee-8828-23977d76b74f Inferring config /v

[ceph-users] Re: Space reclaim doesn't happening in nautilus RBD pool

2023-11-30 Thread Ilya Dryomov
On Thu, Nov 30, 2023 at 8:25 AM Szabo, Istvan (Agoda) wrote: > > Hi, > > Is there any config on Ceph that blocks/does not perform space reclaim? > I tested on one pool which has only one image, 1.8 TiB in use. > > > rbd $p du im/root > warning: fast-diff map is not enabled for root. operation may be slow.
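A hedged sketch of the usual space-reclaim checks for an RBD image, assuming the client supports discard; the image spec and mount point are placeholders rather than values confirmed by the thread:

    # enable object-map/fast-diff so 'rbd du' is fast and accurate, then rebuild the map
    rbd feature enable <pool>/<image> object-map fast-diff
    rbd object-map rebuild <pool>/<image>

    # from the client that has the image mounted, release blocks the filesystem no longer uses
    fstrim /mount/point

    # alternatively, deallocate fully zeroed extents from the Ceph side
    rbd sparsify <pool>/<image>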