On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/69234#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
> Gibba upgrade - TBD
>
> Please provide trackers for failures so we avoid duplicates.
> Seeking approval
Hi Florian,
On 17.12.24 20:10, Florian Haas wrote:
1. Disable orchestrator scheduling for the affected node: "ceph orch
host label add _no_schedule".
14. Re-enable orchestrator scheduling with "ceph orch host label rm
_no_schedule".
Wouldn't it be easier to run "ceph orch host maintenance
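For comparison, the two approaches would look roughly like this (a sketch; <host> stands for the affected node):

   # label-based approach: stop the orchestrator from scheduling daemons on the node
   ceph orch host label add <host> _no_schedule
   ceph orch host label rm <host> _no_schedule

   # maintenance-mode approach: one command each way
   ceph orch host maintenance enter <host>
   ceph orch host maintenance exit <host>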
Hello, Ceph users,
a question/problem related to deep scrubbing:
I have an HDD-based Ceph 18 cluster, currently with 34 OSDs and 600-ish PGs.
In order to avoid latency peaks, which apparently correlate with an HDD being
100 % busy for several hours during a deep scrub, I wanted to relax the
scr
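For illustration, the knobs usually involved look like this (values are examples only, not recommendations):

   ceph config set osd osd_max_scrubs 1           # limit concurrent scrubs per OSD
   ceph config set osd osd_scrub_begin_hour 22    # restrict scrubbing to off-peak hours
   ceph config set osd osd_scrub_end_hour 6
   ceph config set osd osd_scrub_sleep 0.1        # throttle scrub I/O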
Hi Eugen,
Eugen Block wrote:
> check out the docs [0] or my blog post [1]. Either set the new interval
> globally, or at least for the mgr as well, otherwise it will still check for
> the default interval.
Thanks for the pointers. I did
ceph config set global osd_deep_scrub_interval 2592
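For the record, the variant Eugen suggests would look roughly like this (2592000 s, i.e. 30 days, is only an example value):

   ceph config set global osd_deep_scrub_interval 2592000
   # or, when setting it per daemon type, include the mgr so the
   # "pgs not deep-scrubbed in time" check uses the same interval:
   ceph config set osd osd_deep_scrub_interval 2592000
   ceph config set mgr osd_deep_scrub_interval 2592000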
Hi,
check out the docs [0] or my blog post [1]. Either set the new
interval globally, or at least for the mgr as well, otherwise it will
still check for the default interval.
Regards,
Eugen
[0]
https://docs.ceph.com/en/latest/rados/operations/health-checks/#first-method
[1]
http://heit
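A quick way to double-check which value each daemon actually sees (osd.0 is just an example daemon id):

   ceph config get osd osd_deep_scrub_interval
   ceph config get mgr osd_deep_scrub_interval
   ceph config show osd.0 osd_deep_scrub_interval   # value the running daemon actually uses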
On 18/12/2024 15:37, Robert Sander wrote:
Hi Florian,
On 17.12.24 20:10, Florian Haas wrote:
1. Disable orchestrator scheduling for the affected node: "ceph orch
host label add _no_schedule".
14. Re-enable orchestrator scheduling with "ceph orch host label rm
_no_schedule".
Wouldn't it be easier to run "ceph orch host maintenance
Hi,
We are happy to announce another release of the go-ceph API library.
This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.31.0
More details are available at the link above.
The library includes bindings that aim to play a
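Pulling the release into an existing Go module is the usual go get (assuming the matching librados/librbd development packages are installed, since the bindings build with cgo):

   go get github.com/ceph/go-ceph@v0.31.0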
On 02-12-2024 21:53, alessan...@universonet.com.br wrote:
Ceph version 18.2.4 reef (cephadm)
Hello,
We have a cluster running on 6 Ubuntu 20.04 servers and we would like to add
another host, but with Ubuntu 22.04. Will we have any problems?
We would like to add a new host with Ubuntu 22.04 and
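For context, we would follow the usual cephadm steps to add it (a sketch; hostname and IP below are placeholders):

   # on the new Ubuntu 22.04 host: install a container runtime (podman/docker),
   # then from an admin node:
   ssh-copy-id -f -i /etc/ceph/ceph.pub root@<newhost>
   ceph orch host add <newhost> <ip-address>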
Hello,
I was recently reading rook ceph multisite documentation and I've found
this rgw recommendation:
"Scaling the number of gateways that run the synchronization thread to 2
or more can increase the latency of the replication of each S3 object.
The recommended way to scale a multisite con
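If I understand it correctly, outside of Rook the equivalent would be to disable the sync thread on the client-facing gateways and leave it running on a dedicated pair, roughly (a sketch; the instance name is a placeholder, and in Rook this would be driven through the CephObjectStore spec instead):

   ceph config set client.rgw.<public-instance> rgw_run_sync_thread false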
Any chance for this one, or one that fixes 'all osd's unreachable' when
ipv6 is in use?
https://github.com/ceph/ceph/pull/60881
On 12/18/24 11:35, Ilya Dryomov wrote:
On Mon, Dec 16, 2024 at 6:27 PM Yuri Weinstein wrote:
Details of this release are summarized here:
https://tracker.ceph.com/issues/69234#note-1
I asked the same question last week. ;-) Quoting Yuri:
We merged two PRs and hope that the issues were addressed. We are
resuming testing and will send the QE status email as soon as the
results are ready for review.
Quoting Harry G Coin:
Any chance for this one, or one that fixes 'all osd's unreachable' when ipv6 is in use?
Hi All,
So we've been using the Ceph (v18.2.4) Dashboard with internally
generated TLS Certificates (via our Step-CA CA), one for each of our
three Ceph Manager Nodes.
Everything was working AOK.
The TLS certificates came up for renewal and were successfully renewed.
Accordingly, th
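For reference, the documented way to load renewed certificates into the dashboard is roughly this (file names and the per-host variant below are placeholders):

   ceph dashboard set-ssl-certificate -i dashboard.crt
   ceph dashboard set-ssl-certificate-key -i dashboard.key
   # or per manager node:
   ceph dashboard set-ssl-certificate <mgr-hostname> -i dashboard.crt
   ceph dashboard set-ssl-certificate-key <mgr-hostname> -i dashboard.key
   ceph mgr fail   # restart the active mgr so the dashboard picks up the new certificate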
Hi everyone,
Just to make sure everyone reading this thread gets the info, setting
osd_scrub_disable_reservation_queuing to 'true' is a temporary workaround, as
confirmed by Laimis on the tracker [1].
Cheers,
Frédéric.
[1] https://tracker.ceph.com/issues/69078
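Applied through the config interface, the workaround looks like this (option name as given above; treat it as temporary and revert it once a fixed release is running):

   ceph config set osd osd_scrub_disable_reservation_queuing true
   # revert after upgrading to a release with the fix:
   ceph config rm osd osd_scrub_disable_reservation_queuing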
- On 5 Dec 24, at 23:09, Laim
Hi Ilya,
On 13/12/2024 16:25, Ilya Dryomov wrote:
> [...]
>> We're currently checking how our stack could handle this more
>> gracefully. From my understanding of the rxbounce option, it seems like
>> always passing it when mapping a volume (i.e., even if the VM disks
>> belong to Linux VMs that a
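For reference, always passing the option would boil down to something like this (pool/image names are placeholders; the rxbounce map option requires a kernel that supports it):

   rbd device map -o rxbounce <pool>/<image>
   # or make it the default for all krbd mappings done by this client:
   ceph config set client rbd_default_map_options rxbounce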