[ceph-users] Re: reef 18.2.5 QE validation status

2025-04-01 Thread Venky Shankar
Hi Yuri, On Tue, Apr 1, 2025 at 2:36 AM Yuri Weinstein wrote: > > Release Notes - > > https://github.com/ceph/ceph/pull/62589 > https://github.com/ceph/ceph.io/pull/856 > > Still seeking approvals/reviews for: > > rados - Travis? Nizamudeen? > > rgw - Adam E approved? > > fs - Venky approved? f

[ceph-users] Re: Prometheus anomaly in Reef

2025-04-01 Thread Tim Holloway
Hi Eugen, I never used a spec file before now; originally it was all done directly. One thing that came up, however, is that my 14-year-old motherboards seem to have been rejecting the extra disks I've been trying to add. That includes a spinning disk, a SATA SSD and even an M.2 PCI adapter.

[ceph-users] Re: reef 18.2.5 QE validation status

2025-04-01 Thread Nizamudeen A
dashboard approved on behalf of @Afreen Misbah (since she is on vacation) Regards, Nizam On Tue, Apr 1, 2025 at 2:36 AM Yuri Weinstein wrote: > Release Notes - > > https://github.com/ceph/ceph/pull/62589 > https://github.com/ceph/ceph.io/pull/856 > > Still seeking approvals/reviews for: > > ra

[ceph-users] endless remapping after increasing number of PG in a pool

2025-04-01 Thread Michel Jouvin
Hi, We are observing a new strange behaviour on our production cluster: we increased the number of PGs (from 256 to 2048) in an (EC) pool after a warning that there was a very high number of objects per pool (the pool has 52M objects). Background: this happens in the cluster that had a strang

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Burkhard Linke
Hi, On 4/1/25 10:03, Michel Jouvin wrote: Hi Burkhard, Thanks for your answer. Your explanation seems to match our observations well, in particular the fact that new misplaced objects are added when we fall under something like 0.5% of misplaced objects. What is not clear to me, though, is tha

[ceph-users] [grafana] ceph-cluster-advanced: wrong title for object count

2025-04-01 Thread Eugen Block
Hi Ankush, I found another (minor) Grafana issue. The "Ceph Cluster - Advanced" dashboard contains a panel graphing the object count, but its title is "OSD Type Count". I see that in Squid 19.2.0 (Grafana version 9.4.12) and 19.2.1 (Grafana 10.4.0), and the latest ceph-cluster-advanced.json [0] also
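
A quick way to confirm the mismatch in the shipped JSON itself (a sketch only, assuming jq is available and the panel sits in the top-level "panels" array of ceph-cluster-advanced.json; nested row panels would need a recursive query):

    # show any panel whose title is "OSD Type Count"
    jq '.panels[] | select(.title == "OSD Type Count") | {id, type, title}' \
        ceph-cluster-advanced.json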

[ceph-users] Re: reef 18.2.5 QE validation status

2025-04-01 Thread Adam Emerson
On 31/03/2025, Yuri Weinstein wrote: > Release Notes - > > https://github.com/ceph/ceph/pull/62589 > https://github.com/ceph/ceph.io/pull/856 [snip] > rgw - Adam E approved? RGW approved!

[ceph-users] Re: Prometheus anomaly in Reef

2025-04-01 Thread Eugen Block
Hi Tim, I'm glad you sorted it out. But I'm wondering: did the Prometheus spec file ever work? I had just assumed that it had, since you wrote you had Prometheus up and running before, so I didn't even question the "networks" parameter in there. Now that you say you only used the "--placem
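
For reference, deploying Prometheus through the orchestrator works either with --placement or with a small spec file passed via -i; a sketch only, the hostname is a placeholder and extra keys such as "networks" are optional:

    # one-liner, no spec file (replace host1 with a real host)
    ceph orch apply prometheus --placement="host1"
    # or apply a previously written service spec file
    ceph orch apply -i prometheus.yaml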

[ceph-users] Re: Updating ceph to pacific and quince

2025-04-01 Thread Eugen Block
Hi, first of all, the Ceph docs are "official"; here's the relevant section for upgrading Ceph: https://docs.ceph.com/en/latest/cephadm/upgrade/ Octopus was the first version using the orchestrator, not Pacific, so you could already adopt your cluster to cephadm on your current version:
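
As a rough sketch of what those docs describe (daemon names, OSD ids and the target version below are placeholders, not taken from this thread):

    # on each host: convert legacy daemons to cephadm-managed containers
    cephadm adopt --style legacy --name mon.$(hostname -s)
    cephadm adopt --style legacy --name mgr.$(hostname -s)
    cephadm adopt --style legacy --name osd.0      # repeat per OSD id
    # then drive the upgrade through the orchestrator
    ceph orch upgrade start --ceph-version 16.2.15
    ceph orch upgrade status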

[ceph-users] Re: ceph.log using seconds since epoch instead of date/time stamp

2025-04-01 Thread Dan van der Ster
Hi Eugen & Bryan, I've been trying to understand this issue -- I can't find anything in 17.2.8 that fixed it. Should we just assume that switching the base image from el8 to el9 fixed it? Do we need to create a ticket to look into this further? Cheers, Dan On Fri, Feb 21, 2025 at 9:44 AM

[ceph-users] Updating ceph to pacific and quince

2025-04-01 Thread Iban Cabrillo
Dear cephers, We intend to begin the migration of our Ceph cluster from Octopus to Pacific and subsequently to Quincy. I have seen that from Pacific onwards it is possible to automate installations with cephadm. One of the questions that arises is whether the clients (depending on the Op
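
One quick sanity check before upgrading is to look at what the cluster already knows about its clients and daemons; a generic sketch, nothing specific to this cluster:

    # feature bits / releases negotiated by currently connected clients
    ceph features
    # versions of the daemons currently running
    ceph versions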

[ceph-users] Re: reef 18.2.5 QE validation status

2025-04-01 Thread Yuri Weinstein
Thank you all for the review and approval. The only remaining issues are: upgrade-clients:client-upgrade-octopus-reef-reef - Ilya and Josh have looked, with no conclusion yet; upgrade/pacific-x (reef) - Laura is looking. Neha, can you take a look at those and reply with recommendations or approval as is? On

[ceph-users] Re: Updating ceph to pacific and quince

2025-04-01 Thread Anthony D'Atri
This, gentle readers, is why the Ceph community is without equal. > On Apr 1, 2025, at 6:44 PM, Tim Holloway wrote: > > but I think we've reached the point where if you ask on the list, we can > clear that.

[ceph-users] Re: reshard stale-instances

2025-04-01 Thread Richard Bade
Thanks for that, Casey. The docs are a bit sparse on these commands, and it just told me "ERROR: bucket name not specified" when I ran it without anything. After a bit of googling I was able to find a mailing list response from J. Eric Ivancich from a couple of years ago with this information: > Whe
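
For anyone finding this later, the subcommands in question look roughly like this; a sketch, assuming a reasonably recent radosgw-admin (exact spelling and whether --bucket is required has varied between releases):

    # list stale bucket instances left behind by dynamic resharding
    radosgw-admin reshard stale-instances list
    # remove them (some releases want a specific bucket via --bucket=NAME)
    radosgw-admin reshard stale-instances rm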

[ceph-users] Re: Updating ceph to pacific and quince

2025-04-01 Thread Tim Holloway
As Eugen has noted, cephadm/containers were already available in Octopus. In fact, thanks to the somewhat scrambled nature of the documentation, I had a mix of containerized and legacy OSDs under Octopus and, for the most part, had no issues attributable to that. My bigger problem with Octopus
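
Anyone wondering whether their own cluster is in the same mixed state can ask cephadm per host; a sketch, assuming cephadm (and optionally jq) is installed:

    # list every daemon cephadm can see on this host; "style" distinguishes
    # "legacy" from "cephadm:v1" (containerized)
    cephadm ls
    # compact name/style view via jq
    cephadm ls | jq -r '.[] | "\(.name)\t\(.style)"'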

[ceph-users] Re: Major version upgrades with CephADM

2025-04-01 Thread Dominique Ramaekers
Hi Alex, My own sysadmin experience: I've done a major version upgrade twice now, always to a release with a higher minor version than the '.0' release, without any issues from skipping the '.0' release. But I also upgrade to, for instance, the v18 image; I never specify the minor versioning
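
To make that concrete, both forms are accepted by the orchestrator; a sketch with example tags, and which tags actually exist on quay.io should be checked first:

    # pin an exact release
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4
    # or follow the major-version tag and take whatever minor it points to
    ceph orch upgrade start --image quay.io/ceph/ceph:v18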

[ceph-users] Re: Major version upgrades with CephADM

2025-04-01 Thread Tim Holloway
Well, I just upgraded Pacific to Reef 18.2.4, and the only problems I ran into had been seen previously in Pacific. Your mileage may vary, as the only parts of Ceph I put a strain on are deploying stuff and adding/removing OSDs; for the rest, I'd look to recent Reef complaints on the lis

[ceph-users] Re: ceph.log using seconds since epoch instead of date/time stamp

2025-04-01 Thread Eugen Block
Hi Dan, I just found my notes from last year, unfortunately without any "evidence". So I checked one of my lab clusters running with 17.2.8 (so it's el9), but the ceph.log is still using the epoch timestamp: quincy-1:~ # tail /var/log/ceph/1e6e5cb6-73e8-11ee-b195-fa163ee43e22/ceph.log 17435
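
Until the formatting itself is sorted out, the raw epoch values can at least be converted by hand; a sketch with an arbitrary example value, not one taken from the log above:

    # GNU date: convert seconds-since-epoch to a readable timestamp
    date -d @1743500000
    # BSD/macOS equivalent
    date -r 1743500000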

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Burkhard Linke
Hi, On 4/1/25 09:06, Michel Jouvin wrote: Hi, We are observing a new strange behaviour on our production cluster: we increased the number of PGs (from 256 to 2048) in an (EC) pool after a warning that there was a very high number of objects per pool (the pool has 52M objects). Background: t

[ceph-users] Re: Major version upgrades with CephADM

2025-04-01 Thread Joel Davidow
In addition to the blog posts, as part of my planning for an upgrade, I check several places for potential issues and changes: 1) https://docs.ceph.com/en//cephadm/upgrade/ 2) https://docs.ceph.com/en/latest/releases// 3) https://tracker.ceph.com/ 4) this mailing list. I also test the up

[ceph-users] Re: endless remapping after increasing number of PG in a pool

2025-04-01 Thread Michel Jouvin
Hi Burkhard, Thanks for your answer. Your explanation seems to match our observations well, in particular the fact that new misplaced objects are added when we fall under something like 0.5% of misplaced objects. What is not clear to me, though, is that 'ceph osd pool ls detail' for the pool mo
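
What may be worth checking here is how far pgp_num has actually been ramped and what threshold the mgr is using to schedule the next step; a sketch, with "mypool" as a placeholder for the pool name:

    # current pg_num / pgp_num and their targets for the pool
    ceph osd pool ls detail | grep mypool
    # threshold below which the mgr takes the next pgp_num step (default 0.05)
    ceph config get mgr target_max_misplaced_ratio
    # current misplaced percentage
    ceph status | grep misplaced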

[ceph-users] Re: Updating ceph to pacific and quince

2025-04-01 Thread Iban Cabrillo
Hi, Thanks so much, guys, for all your input and perspectives; it's been really enriching. Regards, I -- Ibán Cabrillo Bartolomé Instituto de Física de Cantabria (IFCA-CSIC) Santander, Spain Tel: +34942200969/+3466993042

[ceph-users] Purpose of the cephadm account

2025-04-01 Thread dhivagar selvam
Hi, We have set up a Ceph cluster via ceph-ansible. When we add a new OSD or MDS to the cluster, a "cephadm" account is automatically created with the shell "/bin/bash". 1. Is this account necessary? 2. Can I delete this account, or do I need to change something in the ceph-ansible configuration?

[ceph-users] Re: constant increase in osdmap epoch

2025-04-01 Thread Joel Davidow
I'm seeing a similar increase in osdmap epochs, with the only diff from `ceph osd dump` being the epoch and modified date. The duration of osdmap epochs varies a lot, down to the scale of seconds, but is generally less than 10 minutes, with a max of about half an hour. This is in a cephadm cluster running 18.2.4 (u
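
A simple way to quantify how fast the map is churning is to sample the epoch over time; a minimal sketch:

    # print the osdmap epoch and last-modified time once a minute
    while true; do
        ceph osd dump | grep -E '^epoch|^modified'
        sleep 60
    done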