[ceph-users] Re: squid 19.2.1 RC QE validation status

2024-12-18 Thread Ilya Dryomov
On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
> https://tracker.ceph.com/issues/69234#note-1
> Release Notes - TBD
> LRC upgrade - TBD
> Gibba upgrade - TBD
>
> Please provide trackers for failures so we avoid duplicates.
> Seeking approval...

[ceph-users] Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

2024-12-18 Thread Robert Sander
Hi Florian,

On 17.12.24 20:10, Florian Haas wrote:
> 1. Disable orchestrator scheduling for the affected node: "ceph orch host label add _no_schedule".
> 14. Re-enable orchestrator scheduling with "ceph orch host label rm _no_schedule".

Wouldn't it be easier to run "ceph orch host maintenance...
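
A minimal sketch of the two approaches under discussion, assuming a hypothetical host name "node1" (the label-based steps are from Florian's walk-through; the maintenance-mode commands are the standard cephadm alternative Robert suggests):

    # Approach 1: keep daemons where they are via the special _no_schedule label
    ceph orch host label add node1 _no_schedule   # before the OS upgrade
    ceph orch host label rm node1 _no_schedule    # after the OS upgrade

    # Approach 2: cephadm maintenance mode, which also stops the host's daemons
    ceph orch host maintenance enter node1
    ceph orch host maintenance exit node1

One practical difference: maintenance mode stops and masks the Ceph daemons on the host, while the _no_schedule label only keeps the orchestrator from (re)placing daemons there.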

[ceph-users] pgs not deep-scrubbed in time

2024-12-18 Thread Jan Kasprzak
Hello, Ceph users,

a question/problem related to deep scrubbing: I have an HDD-based Ceph 18 cluster, currently with 34 osds and 600-ish pgs. In order to avoid latency peaks, which apparently correlate with an HDD being 100 % busy for several hours during a deep scrub, I wanted to relax the scr...
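
For context, a quick way to see which PGs the resulting health warning refers to (standard commands; the PG id below is hypothetical):

    # health detail names the PGs that are overdue for a deep scrub
    ceph health detail | grep 'not deep-scrubbed'

    # inspect one of the reported PGs, e.g. 2.1f
    ceph pg 2.1f query | grep -i deep_scrub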

[ceph-users] Re: pgs not deep-scrubbed in time

2024-12-18 Thread Jan Kasprzak
Hi Eugen,

Eugen Block wrote:
> check out the docs [0] or my blog post [1]. Either set the new interval
> globally, or at least for the mgr as well, otherwise it will still check for
> the default interval.

Thanks for the pointers. I did ceph config set global osd_deep_scrub_interval 2592...
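
A sketch of the setting being applied; the full value is cut off above, and 2592000 seconds (30 days) is only my assumption of what was meant:

    # relax the deep-scrub interval cluster-wide (2592000 s = 30 days, assumed value)
    ceph config set global osd_deep_scrub_interval 2592000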

[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-12-18 Thread Konstantin Shalygin

[ceph-users] Re: pgs not deep-scrubbed in time

2024-12-18 Thread Eugen Block
Hi,

check out the docs [0] or my blog post [1]. Either set the new interval globally, or at least for the mgr as well; otherwise it will still check for the default interval.

Regards,
Eugen

[0] https://docs.ceph.com/en/latest/rados/operations/health-checks/#first-method
[1] http://heit...
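
To verify which value each daemon type actually sees (the point being that the mgr raises the warning, so it must observe the new interval too):

    ceph config get osd osd_deep_scrub_interval
    ceph config get mgr osd_deep_scrub_interval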

[ceph-users] Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

2024-12-18 Thread Florian Haas
On 18/12/2024 15:37, Robert Sander wrote:
> Hi Florian,
>
> On 17.12.24 20:10, Florian Haas wrote:
>> 1. Disable orchestrator scheduling for the affected node: "ceph orch host label add _no_schedule".
>> 14. Re-enable orchestrator scheduling with "ceph orch host label rm _no_schedule".
>
> Wouldn't it b...

[ceph-users] Announcing go-ceph v0.31.0

2024-12-18 Thread Anoop C S
Hi,

We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence.

https://github.com/ceph/go-ceph/releases/tag/v0.31.0

More details are available at the link above. The library includes bindings that aim to play a...
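
For anyone pulling the new version into an existing project, the usual Go module update applies (nothing release-specific here):

    go get github.com/ceph/go-ceph@v0.31.0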

[ceph-users] Re: Update host operating system - Ceph version 18.2.4 reef

2024-12-18 Thread Stefan Kooman
On 02-12-2024 21:53, alessan...@universonet.com.br wrote:
> Ceph version 18.2.4 reef (cephadm)
>
> Hello, we have a cluster running with 6 Ubuntu 20.04 servers and we would like to add another host, but with Ubuntu 22.04. Will we have any problems? We would like to add the new HOST with Ubuntu 22.04 and...
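
For reference, adding a host under cephadm is the same regardless of the Ubuntu release, since the daemons run in containers; a sketch with a hypothetical host name and IP:

    # distribute the cluster's SSH key to the new host first
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@node7
    ceph orch host add node7 192.168.0.17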

[ceph-users] RGW sizing in multisite and rgw_run_sync_thread

2024-12-18 Thread Adam Prycki
Hello,

I was recently reading the Rook Ceph multisite documentation and found this RGW recommendation:

"Scaling the number of gateways that run the synchronization thread to 2 or more can increase the latency of the replication of each S3 object. The recommended way to scale a multisite con..."
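
The usual way to follow that recommendation is to disable the sync thread on the client-facing gateways and leave it enabled on a dedicated instance or pair; a sketch, assuming a hypothetical RGW instance name:

    # hypothetical client-facing RGW instance; replication runs on other gateways
    ceph config set client.rgw.myrgw.a rgw_run_sync_thread false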

[ceph-users] Re: squid 19.2.1 RC QE validation status

2024-12-18 Thread Harry G Coin
Any chance for this one, or one that fixes "all OSDs unreachable" when IPv6 is in use? https://github.com/ceph/ceph/pull/60881

On 12/18/24 11:35, Ilya Dryomov wrote:
> On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote:
>> Details of this release are summarized here: https://tracker.ceph.com/issu...
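
For context, the report concerns clusters configured for IPv6-only operation, which is typically set up with the standard options below (shown only to frame the bug, not as a fix):

    ceph config set global ms_bind_ipv6 true
    ceph config set global ms_bind_ipv4 false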

[ceph-users] Re: squid 19.2.1 RC QE validation status

2024-12-18 Thread Eugen Block
I asked the same question last week. ;-)

Quoting Yuri:
> We merged two PRs and hope that the issues were addressed. We are resuming
> testing and will send the QE status email as soon as the results are ready
> for review.

Quoting Harry G Coin:
> Any chance for this one or one that fixes "all OSDs unreachable"...

[ceph-users] Issue With Dasboard TLS Certificate (Renewal)

2024-12-18 Thread duluxoz
Hi All,

So we've been using the Ceph (v18.2.4) Dashboard with internally generated TLS certificates (via our Step-CA CA), one for each of our three Ceph Manager nodes. Everything was working AOK. The TLS certificates came up for renewal and were successfully renewed. Accordingly, th...
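
For anyone hitting the same issue, the dashboard certificate is normally replaced with the standard commands below (per-host variants exist as well); a sketch assuming hypothetical file names:

    ceph dashboard set-ssl-certificate -i dashboard.crt
    ceph dashboard set-ssl-certificate-key -i dashboard.key
    # restart the dashboard so the new certificate is picked up
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard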

[ceph-users] Re: Squid: deep scrub issues

2024-12-18 Thread Frédéric Nass
Hi everyone,

Just to make sure everyone reading this thread gets the info: setting osd_scrub_disable_reservation_queuing to 'true' is a temporary workaround, as confirmed by Laimis on the tracker [1].

Cheers,
Frédéric.

[1] https://tracker.ceph.com/issues/69078

On 5 Dec 24, at 23:09, Laim...
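
For completeness, the workaround as a command (the option name is taken straight from this thread):

    ceph config set osd osd_scrub_disable_reservation_queuing true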

[ceph-users] Re: CRC Bad Signature when using KRBD

2024-12-18 Thread Friedrich Weber
Hi Ilya,

On 13/12/2024 16:25, Ilya Dryomov wrote:
> [...]
>> We're currently checking how our stack could handle this more
>> gracefully. From my understanding of the rxbounce option, it seems like
>> always passing it when mapping a volume (i.e., even if the VM disks
>> belong to Linux VMs that a...
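
For reference, the option under discussion is passed at map time; a sketch with hypothetical pool/image names:

    # krbd map with the rxbounce option (pool/image are hypothetical)
    rbd device map -o rxbounce rbd/vm-disk-1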