[ceph-users] new crush map requires client version hammer

2022-07-19 Thread Iban Cabrillo
Dear cephers, The upgrade has been successful and all cluster elements are running version 14.2.22 (including clients), and right now the cluster is HEALTH_OK, msgr2 is enabled and working properly. Following the upgrade guide from mimic to nautilus https://docs.ceph.com/en/latest/releases/na
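For reference, the finalization commands at the end of the Nautilus upgrade guide are roughly these (exact ordering per the official docs):

  # forbid pre-Nautilus OSDs and enable Nautilus-only functionality
  ceph osd require-osd-release nautilus
  # switch the monitors to the msgr2 protocol
  ceph mon enable-msgr2
  # confirm every daemon reports 14.2.22
  ceph versions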

[ceph-users] Re: new crush map requires client version hammer

2022-07-19 Thread Iban Cabrillo
Hi, Looking deeper at my configuration I see: [root@cephmon03 ~]# ceph osd dump | grep min_compat_client require_min_compat_client firefly min_compat_client hammer Is it safe to run: ceph osd set-require-min-compat-client hammer in order to enable straw2? Regards, I.
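Before raising the floor it is worth confirming that no pre-Hammer clients are still connected; a possible check-and-apply sequence, assuming the goal is straw2 on all buckets:

  # feature/release level of every connected client and daemon
  ceph features
  # raise the minimum client release (refused if older clients are connected)
  ceph osd set-require-min-compat-client hammer
  # convert every straw bucket to straw2 in one step
  ceph osd crush set-all-straw-buckets-to-straw2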

[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Iban Cabrillo
Hi Jean, If you do not want to use the admin user, which is the most sensible thing to do, you must create a client with rbd access to the pool on which you are going to perform the I/O actions. For example, in our case it is the user cinder: client.cinder key: X
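A minimal cap set for such a client could look like this; the pool name 'volumes' is only an example, adjust it to the pool actually used:

  # create a keyring restricted to RBD on a single pool
  ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes'
  # print the generated key later if needed
  ceph auth get-key client.cinder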

[ceph-users] Continuous remapping over 5% misplaced

2022-07-27 Thread Iban Cabrillo
Hello everyone, After upgrading the monitors and mgrs to Octopus (15.2.16), the system told me that some pools did not have the correct pg_num: some are above the optimum, and one of them, the busiest, is below it, at 256 of the 1024 required. [root@cephmon01 ~]# ceph versions { "mon": { "ceph version
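Those warnings come from the pg_autoscaler; a sketch of how to inspect and act on them, with an illustrative pool name:

  # compare current PG_NUM with the autoscaler's suggested NEW PG_NUM
  ceph osd pool autoscale-status
  # per pool, either only warn or let the autoscaler act
  ceph osd pool set volumes pg_autoscale_mode warn
  # or resize manually to the suggested value
  ceph osd pool set volumes pg_num 1024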

[ceph-users] Manual Upgrade from Octopus Ubuntu 18.04 to Quincy 20.04

2023-05-03 Thread Iban Cabrillo
Dear Cephers, We are planning the dist upgrade from Octopus to Quincy in the next few weeks. The first step is the Linux version upgrade from Ubuntu 18.04 to Ubuntu 20.04 on some big OSD servers running this OS version. We just had a look at ( Upgrading non-cephadm clusters [ https://ceph.io/en/
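For the OS upgrade itself, the usual per-host pattern is something like the following; it assumes Ceph packages for the target release are available on 20.04 before the reboot:

  # avoid rebalancing while the host is down
  ceph osd set noout
  # upgrade the distribution on the OSD host
  do-release-upgrade
  # after the reboot, check the OSDs rejoined, then clear the flag
  ceph -s
  ceph osd unset noout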

[ceph-users] Advice on Ceph upgrade from mimic to ***

2022-01-31 Thread Iban Cabrillo
Dear Cephers, We are planning the upgrade of our Ceph cluster, version Mimic 13.2.10 (3 MONs, 3 MGRs, 181 OSDs, 2 MDSs, 2 RGWs). The cluster is healthy and all the pools are running size 3, min_size 2. This is an old cluster implementation that has been upgraded from Firefly (there are still a couple OS
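A few read-only checks that help when planning the jump (all harmless on a live cluster):

  # per-daemon release summary
  ceph versions
  # current compatibility floor recorded in the OSD map
  ceph osd dump | grep -E 'require_osd_release|min_compat_client'
  # feature level of the connected clients
  ceph features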

[ceph-users] Re: Advice on Ceph upgrade from mimic to ***

2022-01-31 Thread Iban Cabrillo
Thanks a lot, guys, for your answers. One question about OMAP: I see that "after the upgrade, the first time each OSD starts, it will do a format conversion to improve the accounting for "omap" data. It may take a few minutes or up to a few hours". Is there any way to check/control this proc
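For reference, there does not seem to be a dedicated progress command for that conversion; one way to follow it is to watch the OSD log during the first start and the OMAP column afterwards (the OSD id below is only an example):

  # follow the OSD log while the format conversion runs on first start
  tail -f /var/log/ceph/ceph-osd.12.log
  # once the OSD is back up, per-OSD omap usage is visible here
  ceph osd df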

[ceph-users] Low performance on format volume

2022-04-07 Thread Iban Cabrillo
Dear all, Some users are noticing low performance, especially when formatting large volumes (around 100GB). Apparently the system is healthy and no errors are detected in the logs: [root@cephmon01 ~]# ceph health detail HEALTH_OK except this one, which I see repeatedly on one of the OSD servers
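One thing worth ruling out with RBD-backed volumes is the discard pass that mkfs issues over the whole image; formatting without discard is a quick comparison (the device path is a placeholder, the flags are standard mkfs options):

  # ext4: skip the discard pass during format
  mkfs.ext4 -E nodiscard /dev/vdb
  # xfs: same idea
  mkfs.xfs -K /dev/vdb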

[ceph-users] Re: Low performance on format volume

2022-04-12 Thread Iban Cabrillo
Hi, Following up on the performance issue (Mimic, 144 SATA disks, 10Gbps network), the [OSD] entry has the default conf; there is no tuning yet. I see a lot of parameters that can be set: [OSD] osd journal size = osd max write size = osd client message size cap = osd deep scrub st
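For reference, these options already have defaults even with an empty [OSD] section; the values below are only ballpark defaults of that era, shown to illustrate the syntax, and should be verified with ceph daemon osd.N config show before changing anything:

  [osd]
  # journal size in MB (FileStore OSDs only)
  osd journal size = 5120
  # largest single client write accepted, in MB
  osd max write size = 90
  # memory cap for in-flight client messages, in bytes
  osd client message size cap = 524288000
  # read size used during deep scrub, in bytes
  osd deep scrub stride = 524288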

[ceph-users] Ceph mon cannot join to cluster during upgrade

2022-06-29 Thread Iban Cabrillo
Hi Guys, I am in the upgrade process from Mimic to Nautilus. The first step was to upgrade one cephmon, but after that this cephmon cannot rejoin the cluster. I see this in the logs: 2022-06-29 15:54:48.200 7fd3d015f1c0 0 ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (
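A few read-only checks that usually narrow down why a single mon will not rejoin (the daemon command runs on the affected host):

  # quorum as seen by the surviving monitors
  ceph quorum_status
  # state of the upgraded mon itself (probing, synchronizing, electing, ...)
  ceph daemon mon.cephmon03 mon_status
  # monmap addresses the cluster expects
  ceph mon dump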

[ceph-users] Re: Ceph mon cannot join to cluster during upgrade

2022-06-29 Thread Iban Cabrillo
Hi Eugen, There are only ceph-mgr and ceph-mon on this node (working fine for years with versions <14). Jun 29 16:08:42 cephmon03 systemd: ceph-mon@cephmon03.service failed. Jun 29 16:16:36 cephmo
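The systemd journal of the unit usually contains the actual reason for the failed start, for example:

  # recent log of the failed unit
  journalctl -u ceph-mon@cephmon03 --no-pager -n 100
  # current state and last exit code
  systemctl status ceph-mon@cephmon03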

[ceph-users] Re: Ceph mon cannot join to cluster during upgrade

2022-06-30 Thread Iban Cabrillo
Hi Eugen et al., It seems to be a problem only on that node; I had to increase the memory to 7GB, and after that the daemon could start. The other two mons are working as usual with 3.5GB and without any trouble. Thanks a lot for your advice; the cluster is now up and running under Nautilus. R

[ceph-users] ceph iscsi gateway

2025-02-10 Thread Iban Cabrillo
Good morning, I wanted to inquire about the status of the Ceph iSCSI gateway service. We currently have several machines set up with this technology that are working correctly, although I have seen that it appears to have been discontinued since 2022. My question is whether to continue down th

[ceph-users] Re: ceph iscsi gateway

2025-02-11 Thread Iban Cabrillo
Okay Gregory, Bad news for me, I will have to find another way. The truth is that from the operational and long-term maintenance point of view it is practically transparent. This option fitted very well in our system, since the Ceph cluster is easy to maintain, while for example the iSCSI c

[ceph-users] Re: Updating Ceph to Pacific and Quincy

2025-04-05 Thread Iban Cabrillo
Thanks Eugen, One more question: Should I uninstall the monitor and create it again once the Quincy packages are already installed, or can I do it while I'm still on the Pacific version and the new monitor will be created with rocksdb? Regards, I --
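For context, re-creating one mon by hand looks roughly like the sketch below (mon name and default paths as elsewhere in the thread; newly created mon stores default to rocksdb). This is only an outline of the documented remove/re-add cycle, done one mon at a time while the others keep quorum:

  systemctl stop ceph-mon@cephmon01
  # drop the mon from the monmap
  ceph mon remove cephmon01
  # wipe the old (leveldb) store and rebuild it
  rm -rf /var/lib/ceph/mon/ceph-cephmon01
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon --mkfs -i cephmon01 --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-cephmon01
  systemctl start ceph-mon@cephmon01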

[ceph-users] Updating Ceph to Pacific and Quincy

2025-04-01 Thread Iban Cabrillo
Dear cephers, We intend to begin the migration of our Ceph cluster from Octopus to Pacific and subsequently to Quincy. I have seen that from Pacific onwards, it is possible to automate installations with cephadm. One of the questions that arises is whether the clients (depending on the Op
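Regarding the clients, the release level they advertise can be checked up front, for example:

  # releases/features of every connected client
  ceph features
  # minimum client release the cluster currently enforces
  ceph osd dump | grep require_min_compat_client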

[ceph-users] Re: Updating Ceph to Pacific and Quincy

2025-04-01 Thread Iban Cabrillo
Hi, Thanks so much, guys, for all your input and perspectives, it's been really enriching Regards, I -- Ibán Cabrillo Bartolomé Instituto de Física de Cantabria (IFCA-CSIC) Santander, Spain Tel: +34942200969/+3466993042

[ceph-users] Re: FS not mount after update to quincy

2025-04-11 Thread Iban Cabrillo
Hi Janne, yes, both MDS daemons are reachable: zeus01:~ # telnet cephmds01 6800 Trying 10.10.3.8... Connected to cephmds01. Escape character is '^]'. ceph v2 zeus01:~ # telnet cephmds02 6800 Trying 10.10.3.9... Connected to cephmds02. Escape character is '^]'. ceph v2 Regards, I --

[ceph-users] FS not mount after update to quincy

2025-04-11 Thread Iban Cabrillo
Hi guys, good morning. Since I performed the update to Quincy, I've noticed a problem that wasn't present with Octopus. Currently, our Ceph cluster exports a filesystem to certain nodes, which we use as a backup repository. The machines that mount this FS are currently running Ubuntu 24 with
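To see where the kernel mount actually fails, the classic mount syntax plus dmesg is usually enough; the monitor address, client name and mount point below are placeholders:

  # mount with an explicit monitor address and client name
  mount -t ceph MON_IP:6789:/ /backup -o name=backup,secretfile=/etc/ceph/backup.secret
  # the kernel client logs the real error here
  dmesg | tail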

[ceph-users] Re: FS not mount after update to quincy

2025-04-12 Thread Iban Cabrillo
Hi Konstantine, Perfect!!! it works Regards, I -- Ibán Cabrillo Bartolomé Instituto de Física de Cantabria (IFCA-CSIC) Santander, Spain Tel: +34942200969/+34669930421 Responsible for advanced computing service (RSC)

[ceph-users] external multipath disk not mounted after power off/on the server

2025-02-27 Thread Iban Cabrillo
Dear cephers, We have a series of servers that mount several SATA disks via an external enclosure. These disks have 4 paths managed by multipath (the roughly 20 local disks work perfectly). cephosd23:~ # multipath -ll mpathe (35000c500d88657e3) dm-30 LENOVO-X,ST14000NM004J size=13T features='0'
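When only the multipath-backed OSDs stay down after a reboot, a common cause is ordering: the LVM volumes on top of the multipath maps are not active yet when the OSDs are started. A manual recovery sketch:

  # confirm all paths are back
  multipath -ll
  # activate the LVM volume groups sitting on the multipath devices
  vgchange -ay
  # then let ceph-volume bring up the corresponding OSDs
  ceph-volume lvm activate --all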

[ceph-users] Re: external multipath disk not mounted after power off/on the server

2025-02-27 Thread Iban Cabrillo
Hi, some more info: ceph-volume lvm list shows the wrong OSDs, for example: == osd.82 == [block] /dev/ceph-3b5662ac-854a-4954-aa44-8951feaa1840/osd-block-f96c826d-3570-4c78-9ef6-bea191589102 block device /dev/ceph-3b5662ac-854a-4954-aa44-8951feaa1840/osd-
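To cross-check which device the cluster itself has recorded for an OSD, the OSD metadata is handy (osd.82 taken from the listing above):

  # device and objectstore information recorded by the OSD
  ceph osd metadata 82 | grep -E 'devices|bluestore_bdev|osd_objectstore'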

[ceph-users] Re: external multipath disk not mounted after power off/on the server

2025-02-27 Thread Iban Cabrillo
Hi, I am still debugging; ceph-volume lvm activate --all works on 2 servers, but on the other one it always fails with the same error: failed to read label: cephosd23:~ # ceph-volume lvm activate --all --> Activating OSD ID 112 FSID d8fc1a6f-3a29-41f3-aebb-3c6be84047e5 Running command: /usr/bin/chown -R ceph
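The 'failed to read label' message comes from BlueStore trying to read the OSD label on the block device; it can be checked directly against the LV (the path below is a placeholder for the osd-block logical volume shown by ceph-volume lvm list):

  # read the BlueStore label straight from the logical volume
  ceph-bluestore-tool show-label --dev /dev/ceph-VGNAME/osd-block-UUID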

[ceph-users] Re: Updating Ceph to Pacific and Quincy

2025-04-02 Thread Iban Cabrillo
Hello again guys, In the end, I've decided to do it manually. I know it's more tedious, but I control every step I take. The migration between Octopus and Pacific went without any problems. However, I now see that the monitors, at least, are using leveldb: cephmon01:~ # cat /var/lib/ceph/mon/c