Hi,
> On 26 Jul 2024, at 20:22, Josh Durgin wrote:
>
> We didn't want to stop building on CentOS 8, but the way it went end of
> life and stopped receiving any security updates forced our hand. See this
> thread for details [0].
>
> Essentially this made even building and testing with CentOS 8 infeasible,
We didn't want to stop building on CentOS 8, but the way it went end of
life and stopped receiving any security updates forced our hand. See this
thread for details [0].
Essentially this made even building and testing with CentOS 8 infeasible,
so we suggest users migrate to CentOS 9 (so they continue
Hello,
On 2024-07-25 16:39, Harry G Coin wrote:
Upgraded to 18.2.4 yesterday. Healthy cluster reported a few minutes
after the upgrade completed. Next morning, this:
# ceph health detail
HEALTH_ERR Module 'diskprediction_local' has failed: No module named
'sklearn'
[ERR] MGR_MODULE_ERROR: M
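If the diskprediction_local module is not actually needed, one way to clear this
kind of failure (assuming nothing in your setup depends on the module) is to
disable it in the manager and re-check health:
ceph mgr module ls | grep diskprediction
ceph mgr module disable diskprediction_local
ceph health detail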
Rook users are seeing OSDs fail on arm64 with v18.2.4. I would think it
also affects non-Rook users.
Tracker opened: https://tracker.ceph.com/issues/67213
Thanks,
Travis
On Wed, Jul 24, 2024 at 3:13 PM Yuri Weinstein wrote:
> We're happy to announce the 4th release in the Reef series.
>
> An ea
I'll echo Adam's comments here.
Moreover, if we take the step of performing an in-place upgrade from el8 to el9, will
18.2.2 be compatible with el9 before upgrading to 18.2.4?
This seems like too much effort for a minor release.
Amar
From: Adam Tygart
Sent: 26 July 2024 1
On 26/07/24 12:35, Kai Stian Olstad wrote:
On Tue, Jul 23, 2024 at 08:24:21AM +0200, Iztok Gregori wrote:
Am I missing something obvious, or is there no way with the Ceph orchestrator
to specify an id during OSD creation?
You can use osd_id_claims.
I tried the osd_id_claims in a yaml file l
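For reference, a minimal sketch of what such a spec could look like; the service
id, host name, and device path below are placeholders, and the exact field
placement should be checked against the drive group spec documentation:
service_type: osd
service_id: osd_replace_example   # placeholder service id
placement:
  hosts:
    - ceph-node01                 # placeholder host name
spec:
  data_devices:
    paths:
      - /dev/sdb                  # placeholder replacement device
  osd_id_claims:
    ceph-node01: ['344']          # ask the orchestrator to reuse this OSD id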
Are you saying that the el9 rpms work on el8 hosts?
The complaint so far is that a minor Ceph release appears to have silently and
unexpectedly removed support for an OS version that is still
widely deployed. If this was intentional, there is a question as to why
it didn't even warrant a note
Use https://download.ceph.com/rpm-reef/el9/x86_64/
On Fri, Jul 26, 2024 at 1:37 AM Amardeep Singh
wrote:
>
> Hi,
>
> I have tried running a simple update on Rocky 8.9 running Reef 18.2.2, and it's
> failing straight away.
>
> dnf update
> Ceph x86_64
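For an el9 host, the repo definition would look roughly like the stanza from the
Ceph install docs; the section name and paths here are assumptions to adapt locally:
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm-reef/el9/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc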
On Tue, Jul 23, 2024 at 08:24:21AM +0200, Iztok Gregori wrote:
Am I missing something obvious, or is there no way with the Ceph orchestrator
to specify an id during OSD creation?
You can use osd_id_claims.
This command is for replacing a HDD in the hybrid osd.344 and reusing the block.db
device on
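As a rough outline of that replacement flow (osd.344 is taken from the message
above; the spec file name is a placeholder, and the flags should be verified
against your cephadm version):
ceph orch osd rm 344 --replace
ceph orch apply -i osd_spec.yml   # spec containing the matching osd_id_claims entry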
On Fri, Jul 26, 2024 at 12:17 PM Dan O'Brien wrote:
>
> I'll try that today.
>
> Looking at the tracker issue you flagged, it seems like it should be fixed in
> v18.2.4, which is what I'm running.
Hi Dan,
The reef backport [1] has "Target version: Ceph - v18.2.5". It was
originally targeted fo
I'll try that today.
Looking at the tracker issue you flagged, it seems like it should be fixed in
v18.2.4, which is what I'm running. Did that commit make it into the 18.2.4
build that was released?
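One way to answer that for any fix, assuming you have a clone of the ceph
repository, is to check which release tags contain the backport commit (the
commit hash is left as a placeholder):
git tag --contains <backport-commit-sha> | grep '^v18'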
Hi,
I have tried running a simple update on Rocky 8.9 running Reef 18.2.2, and it's
failing straight away.
dnf update
Ceph x86_64    84 B/s | 146 B    00:01
Errors during do
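A quick way to see what the failing repository is actually pointing at (the
package name ceph-common is used only as an example):
grep -A5 '^\[ceph' /etc/yum.repos.d/*.repo
dnf --showduplicates list available ceph-common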
Hi,
I have two Ceph clusters on 16.2.13 and one CephFS in each.
I can see only one difference between the FSs: the second FS has two data pools
and one of the root directories is pinned to the second pool. The first
FS has only one default data pool.
And, when I do:
ceph fs authorize
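For context, the general form of that command as documented is (the fs name,
client id, path, and capability are placeholders):
ceph fs authorize <fs_name> client.<id> <path> rw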
Yes, that is the strange part. It says "data is caught up with source",
but the secondary data pool (hn2.rgw.buckets.data) has nothing in it.
The data that was uploaded before the multisite setup is not being replicated.
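Objects written before the second zone was set up usually need a manual sync
pass; a sketch of the commands commonly used to check and trigger it (the bucket
name is a placeholder; verify the options against your RGW version):
radosgw-admin sync status
radosgw-admin bucket sync status --bucket=<bucket>
radosgw-admin bucket sync run --bucket=<bucket>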