Cool. Glad you made it through. ;-)
Regards,
Frédéric.
- On 9 Oct 24, at 16:46, Alex Rydzewski rydzewski...@gmail.com wrote:
> Great thanks, Frédéric!
>
> It seems that --yes-i-really-mean-it helped. The cluster rebuilding now
> and I can access my data on it!
>
> On 09.10.24 15:48, Frédéric Nass wrote:
Great thanks, Frédéric!
It seems that --yes-i-really-mean-it helped. The cluster rebuilding now
and I can access my data on it!
On 09.10.24 15:48, Frédéric Nass wrote:
There's this --yes-i-really-mean-it option you could try but only after making
sure that all OSDs are actually running Pacific.
What does 'ceph versions' say? Did you restart all the OSDs after the upgrade?
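(Aside for readers of the archive: the check Frédéric is asking for can be scripted against the JSON form of the command. A minimal sketch, assuming `ceph versions --format json` output of the usual shape; the sample below is made up, not from this cluster:)

```python
import json

# Hypothetical sample of `ceph versions --format json` output; a real
# cluster reports full version strings as keys and daemon counts as values.
sample = json.dumps({
    "mon": {"ceph version 16.2.15 (...) pacific (stable)": 1},
    "osd": {
        "ceph version 16.2.15 (...) pacific (stable)": 4,
        "ceph version 12.2.13 (...) luminous (stable)": 1,
    },
})

def daemons_not_on(release: str, versions_json: str) -> dict:
    """Return {daemon_type: count} of daemons NOT reporting the given release."""
    data = json.loads(versions_json)
    stale = {}
    for daemon_type, versions in data.items():
        count = sum(n for v, n in versions.items() if release not in v)
        if count:
            stale[daemon_type] = count
    return stale

print(daemons_not_on("pacific", sample))  # → {'osd': 1}
```

Any non-empty result for "osd" means at least one OSD is still running old code and would block the release bump.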
Regards,
Frédéric.
- On 9 Oct 24, at 14:39, Alex Rydzewski rydzewski...@gmail.com wrote:
I thought so too, Frédéric.
But when I try to change it, I get this error:
root@helper:~# ceph osd require-osd-release pacific
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_PACIFIC feature
root@helper:~# ceph osd require-osd-release help
Invalid command: help not in luminous|mimic|nautilus|
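(Editorial note: the EPERM above means at least one up OSD is not advertising the Pacific feature bit, typically because it was never restarted onto the new binaries. A toy illustration of the kind of check involved; the per-OSD version map below is invented, not taken from this cluster:)

```python
# Illustrative only: given per-OSD version strings (as `ceph tell osd.* version`
# would report, here faked), list OSD ids that would block
# `ceph osd require-osd-release pacific` (Pacific is the 16.x series).
osd_versions = {
    0: "16.2.15",
    1: "16.2.15",
    2: "12.2.13",  # an OSD still running Luminous code
    3: "16.2.15",
    4: "16.2.15",
}

blockers = [osd for osd, v in osd_versions.items() if not v.startswith("16.")]
print(blockers)  # → [2]
```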
Here's an example of what a Pacific cluster upgraded from Hammer shows:
$ ceph osd dump | head -13
epoch 186733
fsid e029-4xx0-4xx9-axx9-5735
created 2016-02-29T16:23:21.035599+
modified 2024-10-09T11:30:10.916414+
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
Of course, Frédéric,
root@helper:~# ceph osd dump | head -13
epoch 45887
fsid 96b6ff1d-25bf-403f-be3d-78c2fb0ff747
created 2018-06-02T13:12:54.207727+0300
modified 2024-10-09T11:08:53.638661+0300
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 82
full_ratio 0.95
backfillfull_rati
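(Editorial note: the difference between the two `ceph osd dump` flag lines quoted in this thread is easy to see when scripted; this assumes the flag truncated in Frédéric's dump above is pglog_hardlimit:)

```python
# Flag lines from the two dumps in this thread: a Pacific cluster upgraded
# cleanly from Hammer, versus Alex's cluster (assuming the truncated flag
# in the first dump is pglog_hardlimit).
upgraded = set("sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit".split(","))
alexs = set("sortbitwise,recovery_deletes,purged_snapdirs".split(","))

missing = upgraded - alexs
print(missing)  # → {'pglog_hardlimit'}
```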
Alex,
First thing that comes to mind when seeing these logs suggesting a version
incompatibility is that you may have forgotten to run some commands (usually
mentioned in the release notes) after each major version upgrade, such as
setting flags (like sortbitwise, recovery_deletes, purged_snapdirs).
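(Editorial note: the point about per-release commands can be sketched as a checklist. Treat this as illustrative only; the release names are Ceph's actual upgrade sequence, but always consult each release's notes for the exact required steps:)

```python
# Hypothetical checklist generator for the post-upgrade step mentioned
# above: after each major version upgrade, the release notes tell you
# to bump require-osd-release before moving on.
upgrade_path = ["luminous", "mimic", "nautilus", "octopus", "pacific"]

commands = [f"ceph osd require-osd-release {r}" for r in upgrade_path]
for c in commands:
    print(c)
```

Skipping one of these steps mid-path is exactly what leaves a cluster with missing flags and a stale require-osd-release, as seen in this thread.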
From: Alex Rydzewski
Sent: Wednesday, October 9, 2024 12:26 PM
To: Frédéric Nass
Cc: ceph-users
Subject: [ceph-users] Re: Forced upgrade OSD from Luminous to Pacific
Hello, Frédéric!
1.
First I repaired the mon when ceph was Luminous, but it wouldn't start, with
some error I don't remember. Then I upgraded ceph and repeated the restore
procedure, and the mon started. Now I can query it.
root@helper:~# ceph --version
- On 8 Oct 24, at 15:24, Alex Rydzewski rydzewski...@gmail.com wrote:
> Hello, dear community!
>
> I kindly ask for your help in resolving my issue.
>
> I have a server with a single-node CEPH setup with 5 OSDs. This server
> has been powered off for about two years, and when I needed the