[ceph-users] Re: Repo name bug?

2025-04-11 Thread Alex
Thanks for the response, John. We "spoke" on my PR about the log level being set to DEBUG. I also have a PR open, https://github.com/ceph/cephadm-ansible/pull/339, which I tested on my Ceph cluster. The issue that prompted it was that when I ran the preflight playbook it populated my /etc/yum.repos.d
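(For anyone hitting the same thing, a quick way to see what the preflight run actually wrote is just to look at the generated repo files; a minimal sketch, the file names are whatever the playbook produced on your host:)
  ls /etc/yum.repos.d/
  grep -H '^\[' /etc/yum.repos.d/*.repo   # print each file's repo section names to spot a malformed one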

[ceph-users] Re: Repo name bug?

2025-04-11 Thread John Mulligan
On Thursday, April 10, 2025 1:08:00 AM Eastern Daylight Time Alex wrote: > Good morning everyone. > > > Does the preflight playbook have a bug? > > https://github.com/ceph/cephadm-ansible/blob/devel/cephadm-preflight.yml > > Line 82: > paths: "{{ ['noarch', '$basearch'] if ceph_origin == 'commu

[ceph-users] Re: Cephadm flooding /var/log/ceph/cephadm.log

2025-04-11 Thread Alex
Sounds good to me. I responded to your comment in the PR. Thanks.

[ceph-users] Re: Cephadm flooding /var/log/ceph/cephadm.log

2025-04-11 Thread John Mulligan
On Thursday, April 10, 2025 10:42:50 PM Eastern Daylight Time Alex wrote: > I made a Pull Request for cephadm.log set DEBUG. > Not sure if I should merge it. Please, no. Even if github allows you to (I think it won't) you should not merge your own PRs unless you are a component lead and it is an

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread Anthony D'Atri
> > Hi Anthony, > We will be using Samsung SSD 870 QVO 8TB disks on > all OSD servers. Your choices are yours to make, but for what it’s worth, I would not use these. * They are client-class, not designed for enterprise workloads or duty cycle * Best I can tell this lac

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-04-11 Thread Anthony D'Atri
Please do let me know if that strategy works out. When you change an osd_spec, out of an abundance of caution it won’t be retroactively applied to existing OSDs, which can be exploited for migrations. > On Apr 11, 2025, at 3:29 PM, Giovanna Ratini > wrote: > > Hello Eneko, > > I switched to
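(A minimal sketch of that migration trick, assuming a cephadm-managed cluster; the file name is illustrative:)
  ceph orch ls --service-type osd --export > osd_spec.yaml   # dump the current OSD service spec
  # edit osd_spec.yaml, then preview the effect before applying:
  ceph orch apply -i osd_spec.yaml --dry-run
  ceph orch apply -i osd_spec.yaml   # existing OSDs are left alone; only new/replaced disks follow the new spec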

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Anthony D'Atri
Filestore, pre-ceph-volume may have been entirely different. IIRC LVM is used these days to exploit persistent metadata tags. > On Apr 11, 2025, at 4:03 PM, Tim Holloway wrote: > > I just checked an OSD and the "block" entry is indeed linked to storage using > a /dev/mapper uuid LV, not a /de

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Tim Holloway
I just checked an OSD and the "block" entry is indeed linked to storage using a /dev/mapper uuid LV, not a /dev/device. When ceph builds an LV-based OSD, it creates a VG whose name is "ceph-<uuid>", where "<uuid>" is a UUID, and an LV named "osd-block-<uuid>", where "<uuid>" is also a UUID. So althoug
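(For reference, a minimal way to see that naming on a node, using standard LVM tooling plus ceph-volume; the output names are of course cluster-specific:)
  sudo vgs -o vg_name | grep 'ceph-'      # VGs named ceph-<uuid>
  sudo lvs -o lv_name,vg_name,lv_tags     # LVs named osd-block-<uuid>, carrying ceph.* metadata tags
  sudo ceph-volume lvm list               # maps each OSD id back to its VG/LV and tags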

[ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-04-11 Thread Giovanna Ratini
Hello Eneko, I switched to KRBD, and I’m seeing slightly better performance now. For switching: https://forum.proxmox.com/threads/how-to-safely-enable-krbd-in-a-5-node-production-environment-running-7-4-19.159186/ NVMe performance remains disappointing, though... They went from 35MB/s to 45MB
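(For the archives: the switch itself is just the krbd flag on the RBD storage definition; a minimal sketch, assuming a Proxmox storage entry named "ceph-rbd", and note that running guests need their disks reattached, e.g. a stop/start, before the new client path is used:)
  pvesm set ceph-rbd --krbd 1   # equivalent to adding "krbd 1" to that storage's entry in /etc/pve/storage.cfg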

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread gagan tiwari
Hi Anthony, We will be using Samsung SSD 870 QVO 8TB disks on all OSD servers. One more thing I want to know: does CephFS support mounting with FS-Cache on clients? 500T of data stored in the cluster will be accessed by the jobs running on the client nodes and we need

[ceph-users] Re: [Ceph-announce] v18.2.5 Reef released

2025-04-11 Thread Stephan Hohn
Ok, the two issues I see with the reef release v18.2.5: - Subnet check seems to be IPv4-only, which leads to e.g. "public address is not in 'fd01:1:f00f:443::/64' subnet" warnings on IPv6-only clusters. - common/pick_address: check if address in subnet all public address ( pr#57590
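(A minimal sketch of what to compare when chasing that warning; these are stock commands, nothing version-specific:)
  ceph config get mon public_network                            # the subnet the check evaluates against
  ceph config dump | grep -E 'public_network|cluster_network'
  ceph mon dump | grep addr                                     # the v1/v2 addresses the mons actually advertise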

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread Anthony D'Atri
There are a lot of variables there, including whether one uses KRBD or librbd for clients. I suspect that one can’t make a blanket statement either way. > > > Hi Anthony, > > Your statement about MDS is interesting... So it's possible depending on the > CPU-type that read/write operations on

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread Dominique Ramaekers
Hi Anthony, Your statement about MDS is interesting... So it's possible, depending on the CPU type, that read/write operations on RBD will show better performance than similar read/write operations on CephFS? > > MDS is single-threaded, so unlike most Ceph daemons it benefits more from > a h

[ceph-users] Re: v19.2.2 Squid released

2025-04-11 Thread Stephan Hohn
Looks like this "common/pick_address: check if address in subnet all public address (pr#57590, Nitzan Mordechai)" is IPv4-only. On Fri, 11 Apr 2025 at 13:36, Vladimir Sigunov < vladimir.sigu...@gmail.com> wrote: > Hi All, > > My upgrade 19.2.1 -> 1

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread Anthony D'Atri
> On Apr 11, 2025, at 4:04 AM, gagan tiwari > wrote: > > Hi Anthony, > Thanks for the reply! > > We will be using CephFS to access Ceph Storage from clients. So, this > will need MDS daemon also. MDS is single-threaded, so unlike most Ceph daemons it benefits more f

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Anthony D'Atri
> I think one of the scariest things about your setup is that there are only 4 > nodes (I'm assuming that means Ceph hosts carrying OSDs). I've been bouncing > around different configurations lately between some of my deployment issues > and cranky old hardware and I presently am down to 4 hos

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Anthony D'Atri
I thought those links were to the by-uuid paths for that reason? > On Apr 11, 2025, at 6:39 AM, Janne Johansson wrote: > > Den fre 11 apr. 2025 kl 09:59 skrev Anthony D'Atri : >> >> Filestore IIRC used partitions, with cute hex GPT types for various states >> and roles. Udev activation was so

[ceph-users] Re: Cannot reinstate ceph fs mirror because i destroyed the ceph fs mirror peer/ target server

2025-04-11 Thread Eugen Block
Hi, I would expect that you have a similar config-key entry: ceph config-key ls | grep "peer/cephfs" "cephfs/mirror/peer/cephfs/18c02021-8902-4e3f-bc17-eaf48331cc56", Maybe removing that peer would already suffice? Quoting Jan Zeinstra: Hi, This is my first post to the forum and I don
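(A minimal sketch of that removal, using the key from the listing above; the peer_remove form is the cleaner route if the mirror module still knows about the peer, and the fs name / peer UUID are of course the ones from your own cluster:)
  ceph fs snapshot mirror peer_list cephfs
  ceph fs snapshot mirror peer_remove cephfs 18c02021-8902-4e3f-bc17-eaf48331cc56
  # or, as a last resort, drop the raw config-key entry directly:
  ceph config-key rm cephfs/mirror/peer/cephfs/18c02021-8902-4e3f-bc17-eaf48331cc56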

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-11 Thread Michel Jouvin
Hi, After 2 weeks, the increase of the number of PGs in an EC pool (9+6) from 256 PGs to 1024 completed successfully! I was still wondering if such a duration was expected or may be the sign of a problem... After the previous exchanges, I restarted the increase by setting both pg_num and pgp
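(For reference, the operation was essentially along these lines; a minimal sketch with the pool name as a placeholder:)
  ceph osd pool set <pool> pg_num 1024
  ceph osd pool set <pool> pgp_num 1024
  ceph osd pool get <pool> pg_num     # confirm the target took
  ceph -s                             # watch misplaced/remapped objects drain over time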

[ceph-users] Re: v19.2.2 Squid released

2025-04-11 Thread Vladimir Sigunov
Hi All, My upgrade 19.2.1 -> 19.2.2 was successful (8 nodes, 320 OSDs, HDD for data, SSD for WAL/DB). Could the issue be related to IPv6? I'm using IPv4, public network only. Today I will test the upgrade 18.2.4 to 18.2.5 (same cluster configuration). Will provide feedback, if needed. Sincerel

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Konstantin Shalygin
Hi, > On 11 Apr 2025, at 10:53, Alex from North wrote: > > Hello Tim! First of all, thanks for the detailed answer! > Yes, probably in set up of 4 nodes by 116 OSD it looks a bit overloaded, but > what if I have 10 nodes? Yes, nodes itself are still heavy but in a row it > seems to be not that

[ceph-users] Re: FS not mount after update to quincy

2025-04-11 Thread Konstantin Shalygin
Hi, > On 11 Apr 2025, at 09:59, Iban Cabrillo wrote: > > 10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph > name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,noatime,mds_namespace=cephvmsfs,_netdev > 0 0 Try adding the ms_mode option, because you are using the msgr2 protocol. For example,
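(i.e. something along these lines in /etc/fstab; ms_mode=prefer-crc is just one common value, prefer-secure/secure are the encrypted variants:)
  10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,noatime,mds_namespace=cephvmsfs,ms_mode=prefer-crc,_netdev 0 0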

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Tim Holloway
Hi Alex, I think one of the scariest things about your setup is that there are only 4 nodes (I'm assuming that means Ceph hosts carrying OSDs). I've been bouncing around different configurations lately between some of my deployment issues and cranky old hardware and I presently am down to 4 h

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Janne Johansson
Den fre 11 apr. 2025 kl 09:59 skrev Anthony D'Atri : > > Filestore IIRC used partitions, with cute hex GPT types for various states > and roles. Udev activation was sometimes problematic, and LVM tags are more > flexible and reliable than the prior approach. There no doubt is more to it > but

[ceph-users] Re: FS not mount after update to quincy

2025-04-11 Thread Iban Cabrillo
Hi Janne, yes both MDS are reachable: zeus01:~ # telnet cephmds01 6800 Trying 10.10.3.8... Connected to cephmds01. Escape character is '^]'. ceph v2 zeus01:~ # telnet cephmds02 6800 Trying 10.10.3.9... Connected to cephmds02. Escape character is '^]'. ceph v2 Regards, I --

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread gagan tiwari
Hi Anthony, Thanks for the reply! We will be using CephFS to access Ceph storage from clients. So, this will also need an MDS daemon. Based on your advice, I am thinking of having 4 Dell PowerEdge servers. 3 of them will run the 3 monitor daemons and one of them will run

[ceph-users] Re: FS not mount after update to quincy

2025-04-11 Thread Janne Johansson
Can the client talk to the MDS on the port it listens on? Den fre 11 apr. 2025 kl 08:59 skrev Iban Cabrillo : > > > > Hi guys Good morning, > > > Since I performed the update to Quincy, I've noticed a problem that wasn't > present with Octopus. Currently, our Ceph cluster exports a filesystem to

[ceph-users] Re: nodes with high density of OSDs

2025-04-11 Thread Alex from North
Hello Tim! First of all, thanks for the detailed answer! Yes, a setup of 4 nodes with 116 OSDs each probably looks a bit overloaded, but what if I have 10 nodes? The nodes themselves are still heavy, but taken as a whole it seems not that dramatic, no? However, in the docs I see that it is quite common for

[ceph-users] Re: [Ceph-announce] v18.2.5 Reef released

2025-04-11 Thread Stephan Hohn
Hi all, started an update on our staging cluster from v18.2.4 --> v18.2.5: ~# ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.5 Mons and Mgrs went fine, but OSDs are not coming up with v18.2.5: Apr 11 06:59:56 0cc47a6df14e podman[263290]: 2025-04-11 06:59:56.697993041 + UTC m=+0.057869056 ima
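(The usual first steps when an orchestrated upgrade stalls like this; all stock cephadm commands, and the unit name pattern is the standard ceph-<fsid>@osd.<id> form:)
  ceph orch upgrade status
  ceph orch upgrade pause        # or: ceph orch upgrade stop, to abandon it entirely
  ceph health detail
  journalctl -u ceph-<fsid>@osd.<id>.service -n 100   # logs of an OSD that failed to come up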

[ceph-users] FS not mount after update to quincy

2025-04-11 Thread Iban Cabrillo
Hi guys, good morning. Since I performed the update to Quincy, I've noticed a problem that wasn't present with Octopus. Currently, our Ceph cluster exports a filesystem to certain nodes, which we use as a backup repository. The machines that mount this FS are currently running Ubuntu 24 with